Originally published on Medium.
Vociferous v3: A Personal Milestone in Learning How to Build Carefully

Transcribe: The main view for Vociferous.
Vociferous v3 is the first release that feels like I’m putting a real piece of software into people’s hands rather than sharing a clever contraption I built for myself. I’m excited about it in a way that’s hard to compress into a tidy “release notes” kind of excitement, because the program means more to me than the sum of its features. Vociferous started as a small pile of scripts with one job: listen to my voice, convert it into text, dump the result into my clipboard, and let me paste it wherever I was working. It was practical, blunt, and selfish in the best way. It existed to reduce friction in my own life. It didn’t need polish because it wasn’t trying to be anything other than a personal lever that moved my day forward.
What changed, slowly and then all at once, is that I kept coming back to it. The moment you experience the cognitive relief of speaking your thoughts and watching them become usable text, the tool stops being a novelty. It becomes infrastructure for your attention. I didn’t want something that “helped me write” in the generic, content-factory sense. I wanted something that helped me think out loud in private, capture exactly what I meant, keep the rawness that matters, and then optionally refine it in a controlled way without turning it into something that sounds like a marketing team. Vociferous grew because that need is real, and because I’m stubborn enough to keep pressing on a tool until it becomes what I actually intended.
I also need to say, plainly, that I did not start this project from a position of deep architectural competence. I started it like a lot of people start things that matter to them: with a vision, a lot of impatience, and an incomplete understanding of what makes software stable over time. In the beginning, every change felt like brute force. When I brought AI agents into the workflow, that brute force got amplified. I’d describe what I wanted, get back a huge blast of code, and then spend an exhausting amount of time trying to undo the parts that missed the point. It felt like I was trying to corral greased pigs, except the pigs were abstractions, and the grease was my own inability to specify constraints at a systems level. I could feel that something was wrong in my approach, because I was moving quickly while also becoming more fragile. To aptly quote the most arrogant person I’ve ever heard speak:
“[That’s like] the worst trade deal in the history of trade deals, maybe ever.”
Somewhere along the way I learned the difference between “making changes” and “making controlled changes.” That sounds obvious, but it’s not a small shift. Controlled change requires that you understand what a system is allowed to do, not just what you want it to do. It requires that you can see boundaries, identify lifecycles, respect concurrency, and recognize when you’re about to patch a symptom instead of healing a design. The biggest improvement in my workflow wasn’t discovering some magical prompt. It was learning enough about systems design and architecture that I could start using AI like a scalpel instead of a demolition charge. I still make mistakes, and I still have plenty of gaps, but I’m no longer building purely by thrashing. I started this as a bumbling idiot. I’m still a bumbling idiot, just slightly less so, with a better grasp on why things break.
This developer diary series exists for two reasons. The first is selfish: I want a record of how I got from those early scripts to something I’m willing to call a real application. The second is communal: I’m the president of the programming club at Franklin University, and I want to use this project as teaching material, not in the sense of “copy what I did,” but in the sense of “here’s what it looks like when someone builds a tool that matters to them and has to learn the hard parts in public.” A lot of software education is either too theoretical to feel real or too tutorial-driven to teach judgment. Vociferous forces judgment. It forces tradeoffs. It forces discipline. And those are exactly the muscles I want people to build if they want to create tools that genuinely improve their lives instead of just shipping another disposable app.
Vociferous is important to me because it’s an expression of a belief that I’ve grown into, and I’m comfortable saying it directly: human-first AI tooling should be the priority for developers right now. I’m not naive about the industrial push behind AI, or the way capitalist incentives shape what gets built, scaled, and sold. Some of that is inevitable, and some of it is even useful, but it also creates a culture where research and production are often destructive processes for the people around them. Tools get optimized for throughput rather than care. Systems get built that extract attention rather than protect it. AI is powerful enough to either accelerate that harm or resist it, and I think it deserves a higher level of respect than it typically gets in product culture. I’m not in a position to influence the entire direction of the industry, but I can influence small areas where my hands are actually on the wheel.
One of those areas is what I think of as cognitive offload that does not hollow you out. I don’t want a program that replaces someone’s ability to think. I want a program that amplifies it. There’s a meaningful difference between generating a synthetic voice and refining the words you actually spoke. Vociferous is designed around that difference. You speak in your office, your bedroom, behind closed doors, and the tool captures your words locally. It transcribes them as faithfully as it can. Then, if you choose, it refines them to a degree you control, not to make you sound like “AI wrote this,” but to make your own speech clearer while still being recognizably you. That’s the promise. It’s also why bugs in this program don’t feel like abstract inconveniences to me. They feel like a violation of the contract I’m trying to keep with the person using it.
That’s the context for why a set of fifteen issues has become, for me, a kind of narrative spine for what comes next. These aren’t random tickets. They’re pressure points where the software is currently failing to uphold the experience it’s supposed to provide. Some of them are severe in a way that makes my stomach drop. If a database migration path can “nuke” user history because it detects a legacy schema in the wrong way, that’s not a minor technical footnote. That’s me risking someone’s record of what they said and did. If the system can silently fail to write to the database but still return a dummy entry so the UI behaves as though the data is safe, that is a betrayal disguised as normal operation. If configuration writes are non-atomic and a crash can corrupt the config file, that’s the difference between a tool you can trust and a tool that turns mysterious when you need it most. Those aren’t edge cases in a human-first application. They are direct attacks on reliability, and reliability is the foundation of cognitive safety.
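The config corruption problem, at least, has a well-worn remedy: write to a temporary file, then atomically swap it into place. Here's a minimal Python sketch of the idea. This is illustrative, not Vociferous's actual code; the JSON format, paths, and function name are placeholders:

```python
import json
import os
import tempfile

def save_config_atomically(config: dict, path: str) -> None:
    """Write config to a temp file, then atomically swap it into place.

    A crash mid-write leaves the old file intact instead of a
    half-written, corrupt one.
    """
    directory = os.path.dirname(os.path.abspath(path))
    # Stage in the same directory so the final rename stays on one
    # filesystem (os.replace is only atomic within a filesystem).
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(config, f, indent=2)
            f.flush()
            os.fsync(f.fileno())  # make sure the bytes hit the disk
        os.replace(tmp_path, path)  # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)  # discard the partial temp file
        raise
```

The key constraint is that the staging file and the final file live on the same filesystem; `os.replace` then guarantees a reader sees either the old config or the new one, never a torn write.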
Other issues are less dramatic but still deeply connected to what the software is supposed to be. The refinement service has concurrency hazards that can freeze the experience at the exact moment you need the tool to be responsive. The GPU confirmation flow can block indefinitely and strand the system waiting, which is a technical description of a very human experience: the program is “stuck,” your thought has moved on, and now you’re managing the tool instead of using it. The provisioning pipeline installing dependencies at runtime is another one that hits a nerve. It might feel convenient from the perspective of “make it work,” but it’s the opposite of respect for the user’s environment: it introduces nondeterminism, offline failures, security risk, and support nightmares, all wrapped in the appearance of automation. Even the shallow, non-atomic model artifact validation has the same shape: the system can claim it is ready when it’s actually in a half-valid state. Problems like that don’t just crash; they erode trust slowly, because they are intermittent and hard to explain.
There are also issues that, at first glance, might look like “mere polish,” but they still tell you whether the program is holding its end of the bargain. The fact that certain UI state changes don’t render until a restart is not just a cosmetic problem; it implies that the application doesn’t yet have a properly centralized, dependable pathway for invalidation and refresh. It means the interface is not reliably reflecting reality, which is exactly what a cognitive tool must never do. The export history dialog clipping a button seems small until you remember what it represents: a user at the end of a workflow, trying to retrieve what they created, encountering friction at the moment the program should be quietly getting out of the way. Even the lack of a first-class database backup export option matters more than it seems. If I’m going to ship something that stores personal thought, I should respect the fact that the user may want to back it up, move it, protect it, and control it without hunting through directories or guessing where state lives.
This is the part where I’ll say something that’s probably obvious but still worth stating: when I encounter misbehavior in Vociferous, I take it personally. I don’t mean that in a melodramatic way. I mean it in the sense that the program is meant to protect a person’s flow, and when it fails, the failure is experienced in someone’s mind, not just in a log file. And because I’m the one building it, it triggers a very specific impulse in me: to attack the issue with extreme prejudice, to rip it out, to resolve it immediately, to “win.” That impulse is not always helpful. It’s how you end up dropping hydrogen bombs on a codebase, and it’s how you regress into the exact development pattern that creates deeper instability.
So the real project at this stage isn’t just “fix these fifteen issues.” The real project is building a system where those classes of issues become harder to introduce. It’s moving from reactive patching to proactive design. It’s using architecture as a lever for user experience rather than treating architecture as a separate concern. It’s also learning to pace myself so that urgency doesn’t become carelessness, because the difference between a tool that helps people and a tool that frustrates them is often the difference between a quick fix and a correct fix.
Vociferous 3.0.0 is me planting a flag in the present. It’s me saying: this is where the software is now, this is what it’s trying to be, and this is what I’m willing to do to make it worthy of the role it’s aiming to play in someone’s life. The developer diary series will move forward from here, because I’m still building, and because I want the story to be honest while it’s happening, not polished into false clarity after the fact. But as I write the future chapters, I’ll also circle back, because the early chaos matters. It shaped how I think. It shaped how I prompt. It shaped how I design. And if I’m going to use this journey as instructional material, I want people to see the real arc: not from incompetence to mastery, but from brute force to control, from impulse to discipline, from “make it work” to “make it trustworthy.”