What to Expect Here
Talking about tech these past few years, specifically AI, has felt a bit like being pulled between extremes of forced "awe" and "horror". A techno "utopia" filled with superintelligent AI and robot butlers. Or doomsday scenarios where all jobs are gone and Skynet is your landlord. Whether the sentiments are sincere or not, they're exhausting.
Luckily, there are already some great voices out there adding more nuanced, critical, and grounded analysis, giving everyday people and domain experts practical tools to reason about the AI industry and the technology we're creating and propagating.
My hope is that Inside Voice can be a small contribution to those efforts as well. A place for regular people, tech workers, small businesses, startups, even big enterprises: anyone who's tired of unsolicited "answers to all their problems" and instead wants more and better questions asked about the changing role of technology in our work and lives.
Why am I doing this?
I've been in tech for a decade and I'm now an AI consultant. I've worked extensively with clients on Voice AI projects in particular (think real-time transcription, voice agents, voice-powered interfaces). And whether it's at the beginning of a project or somewhere in the middle, the question "so why are we doing X this way?" keeps coming up. I love those questions and want to make sure they keep getting asked. But too often, I see them getting drowned out by hype, fear, or the pressure to just ship something. I don't think that's good for business, and I definitely don't think it's conducive to building robust communities around the development of novel tech.
I want to surface and wrestle with more of those “why” questions. My goal is to provide technical literacy that feels like talking with your "tech-y friend" — someone who always has time for your questions, doesn't talk down to you, never shrouds simple ideas in unnecessary mystique, and tries to learn more than they teach. An informed public encourages more and better discourse.
More people need to feel they have a say in where this technology goes. "Development" isn't just what engineers do, it's also the public sentiments we encourage, the habits and norms we cultivate, the everyday choices we make about what to use or refuse, and the conversations we do or don't have. The more clearly we understand the tools, the more agency we have in shaping their impact.
So how will this blog be different? You'll find fewer hot takes here and more sincere questions:
Why does so much "innovation" default to excess, to scaling at all costs?
What would it look like if users and businesses had more nimble and contextually appropriate tools and were able to prioritize data sovereignty?
How do we decide when new tech actually solves a problem versus just sounding impressive?
What should every person know before regularly interacting with generative AI tools in their daily lives?
So…what exactly am I going to talk about?
I have a lot of thoughts bouncing around in my head, but my posts will typically be contained to the following topics/sections:
Industry Analysis
[How to spot AI hype and ask better questions]
I'll poke at the hype cycles, the narratives, and the claims from the big players and AI labs, and then ask how this works, who it affects, and why it matters for the rest of us. The goal will always be to help you develop your own critical lens for this technology so you can draw your own conclusions about industry claims past, present, and future.
Workbench
[Technical explorations and deep dives]
At my core, I love getting into the fine details of things, the nitty-gritty. I love taking things apart, diagnosing bugs, fixing broken machines, and building custom solutions. So I might do a deep dive into things like the anatomy of a low-latency AI voice agent, or the reason voice bots keep interrupting you, or the importance of reciprocity and conversational repair for enjoyable conversations and how to translate that into code, or the nuances of customizing your own task-specific language models. Sometimes I'll have code. Sometimes I'll draw some diagrams. Sometimes it'll just be me banging my head against a problem and sharing the joys of that.
AI Literacy
[Plainspoken explainers to demystify AI technologies]
No matter what, I think generative AI will be here to stay in one way or another. So it is crucial that we continue to develop our own literacy around this technology. I'll provide accessible explainers and mental models that aim to demystify how these systems work so you can intentionally use them (or refuse them). I want people to fully understand what all these AI products and services mean when they say "use with caution".
Sandbox
[Cross-disciplinary connections and fun “what ifs”]
The grab bag, the playground. The place where I'll post the occasional "shower thought", wild connection, research deep dive, or speculative tangent that doesn't fit anywhere else.
What you shouldn't expect:
Thinly-veiled promotion for [insert latest cutting-edge AI product].
"Five prompt tricks to 10x your workflow" listicles.
Dire warnings about AI apocalypse with a vibe that can only be described as "menacing glee".
Anything I wouldn't bother reading myself.
I want this blog to be a place where curiosity, play, healthy skepticism, and critical analysis are encouraged and appreciated: fertilizer for the soil that grows innovation, and fuel for our collective engines of creativity.
If you want hype, if you want doom, those places are easy enough to find. If you want something else that hopefully feels a bit more familiar, maybe more like the type of conversations you’d have with your friends or family or colleagues, then stick around and see if you like it here.
I don't have a content schedule. I'm not promising weekly posts. I'll write when I have something worth sharing.
If all of that works for you, welcome. If not, no hard feelings.