Originally published on Medium.

Shadows at the Summit

The most honest AI conversations at the OhioX AI Summit didn’t happen on stage; they happened in the “shadows” between sessions.

Wednesday’s brisk morning air ushered a stream of attendees into Columbus State Community College’s Mitchell Hall, each eager to escape the chill and socialize over fresh coffee. It was September 17th, the second annual OhioX AI Summit, and those shuffling into the building were among the brightest minds in Ohio’s tech landscape. Under the fluorescent lighting, vendors and their tables buzzed with activity. Foot traffic bypassed “crowded” and moved straight to “sardine-packed.” Yet even amid the pre-panel enthusiasm, my station behind the registration table gave me a particularly convenient vantage point for catching glimpses of something subtler: hopeful apprehension.

Time, as it tends to do, marched on. The opening panel delivered a strong 10,000-foot overview of Ohio’s current AI infrastructure before outlining what needs to happen next. That momentum carried smoothly into an engaging discussion on AI for manufacturing, which layered technical nuance atop the broader themes introduced earlier. Before long, the morning wrapped as Frank LaRose, Ohio’s Secretary of State, closed the segment with energetic remarks on the state’s technological future. Then came a familiar flow: attendees filing out toward the catered lunch spread, conversations spilling into the hallways and alcoves.

By this point, I had already cycled through a series of excellent one-on-one conversations. I’ll admit I had an ulterior motive for volunteering my time to help OhioX set up the event. Nothing sinister; I was just hunting for an answer to a question that had been prodding at my brain with all the elegance of a rusty spork. That deserves some elaboration, so we’ll need a brief field trip through time.

On Friday, September 12th, I attended the Rooted & Rising event at Franklin University. The main attraction was a presentation by Kevin Surace — described as “the father of the AI virtual assistant and a Silicon Valley innovator, serial entrepreneur, CEO, and futurist.” His talk, a showcase of AI’s capabilities, was titled AI Is Now! Accelerate Your Work Today! It began promptly at 5 p.m., and I filed in with the rest of the audience, genuinely curious to see what was in store…

…and was promptly, immeasurably, disappointed. I found myself so diametrically opposed to some of his ideas that it felt like I’d met my AI anti-hero. Armed with a full audio recording of the session, I’ll dive much deeper into why in a separate article. For now, it’s enough context to return to the matter at hand. In short: my first-ever AI event left me so thoroughly underwhelmed that I needed to know whether the industry I’m so enamored with was actually a bushel of spoiled apples. Flashback concluded.

Which brings us back to the question of the hour: Did I find more of the same at the OhioX AI Summit? Thankfully, the answer is a resounding “no,” though there’s far more to unpack.

As the afternoon unfolded, an unforeseen contrast began to emerge. The panels kept pace with the morning’s energy — polished, strategic, very on-message. The conversations I had off-stage, though, were running on an entirely different track. If the panels were a well-lit showroom, the hallways were the maintenance tunnels humming beneath it. Fresh off my time as a tech installation and repair specialist, my spidey-sense came alive in those tunnels.

Nearly everyone I spoke with (AI engineers, startup founders, marketing leads, various directors) gravitated toward topics that never made it onto the stage. These weren’t fringe or reckless ideas; they were thorny, often touchy ones. People wanted to talk about cognitive offloading and the quiet erosion of critical thinking. They wanted to argue through the ethics of generative models. They wanted to ask why superintelligence feels more like a marketing fixture than a plausible near-term outcome. They wanted to know what happens to a workforce that begins its career by skipping straight to abstraction. And the list kept growing: guardrails and model safety; the slow hollowing-out of core skills in favor of “just go ask the AI”; state-controlled systems and synthetic media; deepfakes and unregulated image manipulation destabilizing entire communities. Yeah, I know, that’s enough headache and heartache to send anyone into a stupor. Yet the concerns all circled back to a handful of questions.

What is our duty, as technologists, to the society we’re in the process of uprooting? How far must we go to uphold our moral obligations when those obligations run counter to profit motives, political pressure, or institutional momentum? What happens when AI wipes out the corporate ladder’s bottom rung, leaving new workers stranded — expected to possess skills and experience that are now impossible to earn before their first job?

That last question arrived during the 2:50 p.m. breakout session, Co-Pilots, Not Replacements — Unleashing Human Potential with AI. The answer offered was confident: AI will let people upskill into those entry-level roles so quickly that the loss of traditional pathways won’t matter. The goalposts for “entry level” are moving upward, yes, but AI will lift people to meet them. It was a solid attempt, almost enough to take the edge off the unease, but the absence of any real guarantee hung in the air. Naturally: only fools deal in absolutes, and few fools were present on that particular Wednesday.

Still, the reality is unavoidable. As AI professionals building systems that will reshape entire communities, we bear responsibility for the fallout. Some people will suffer because of these decisions. It’s on us to reduce that suffering as much as possible. “Change management” was a term thrown around liberally at almost every major panel, and for good reason.

That responsibility isn’t abstract for me. At Franklin University, I wear three hats, ridiculously, all at once: Treasurer of the broader ACM chapter we sit under, President of the Programming Special Interest Group (SIG), and Senior Officer of the Cybersecurity SIG. From those seats, I’ve been telling students for a while now that AI literacy isn’t optional homework; it’s table stakes. And in this economy, it can often be the difference between surviving and living.

You don’t just need to know that AI exists — you need to build with it, debug with it, wrestle with it, before you step into a workforce that already assumes you can.

The old path went from “write everything from scratch” to “lean on frameworks as cheat codes.” Now we’ve added another layer: tools that can spit out code, architectures, and explanations on demand. That’s not a replacement for engineers. It’s more like swinging a sword that’s been enchanted with explosions: sometimes it’s spectacular, sometimes it blows a hole in the floor. If you don’t understand the blade underneath the spell, you’re a liability, not a hire. Side note: I’m planning to release another article, Stratified Encapsulation, that discusses this in more detail.
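To make the metaphor concrete, here’s a small hypothetical sketch (plain Python, invented purely for illustration, not anything produced at the Summit): code in exactly the shape an assistant might hand you, where it’s the blade underneath the spell that saves you.

```python
# Plausible-looking helper "from the enchanted sword": accumulate unique
# tags from a record. It runs, it often even looks right in quick tests.
def collect_tags(record, seen=[]):  # BUG: mutable default argument
    for tag in record.get("tags", []):
        if tag not in seen:
            seen.append(tag)
    return seen

print(collect_tags({"tags": ["ai", "ohio"]}))  # ['ai', 'ohio']
print(collect_tags({"tags": ["summit"]}))      # ['ai', 'ohio', 'summit']  <- surprise

# The default list is created once and shared across every call, so
# unrelated records quietly leak into each other. Knowing the blade
# means spotting it and reaching for the standard fix:
def collect_tags_fixed(record, seen=None):
    seen = [] if seen is None else seen
    for tag in record.get("tags", []):
        if tag not in seen:
            seen.append(tag)
    return seen

print(collect_tags_fixed({"tags": ["ai", "ohio"]}))  # ['ai', 'ohio']
print(collect_tags_fixed({"tags": ["summit"]}))      # ['summit']
```

Both versions run; only one survives contact with production. That gap is the difference between knowing the spell and knowing the sword.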

So, I’ve been chasing this stuff for close to a decade now, starting with pathfinding algorithms for games and spiraling outward from there. Since the first wave of large language models hit, I’ve burned through more conversations with them than I can sanely estimate (high tens of thousands, at the very least), spanning everything from toy projects to serious troubleshooting and full deployments. That’s given me a weird vantage point. I already have the “enchanted sword” in hand and a decent grip on where it tends to misfire. Many students don’t, and plenty of working professionals already look like deer in the headlights around this stuff. So when I sat at the Summit listening to people talk about “upskilling” and “future talent,” I didn’t just hear strategy slides. I heard a gap that my own community of students is about to walk straight into if people like me don’t drag them toward this technology and force them to poke at it, break it, and learn from it. Left alone, that gap is a meat grinder. Given the roles I’ve taken on, I can’t pretend that isn’t my problem.

And yet, despite the weight of those concerns, the overall atmosphere at the Summit never tipped into doom-and-gloom. Not even close. People were excited. The caution just sat alongside that excitement, tugging at its sleeve, which is exactly what I’d hoped to see. Most of the crowd struck me as remarkably levelheaded, upbeat realists — and that was a massive comfort after the Rooted & Rising fiasco. Many attendees openly doubted the timelines and assumptions behind the impending “superintelligence” wave, but they did it from a technical, almost workmanlike place. There was very little doomsaying, and certainly no theatrical panic. Just a lot of people trying to keep their balance on a moving floor, which I greatly respect.

Now, to be fair, some of these discussions did surface at the panel level. Toward the end of the day, several speakers emphasized our responsibility to the workforce we’re reshaping. There was clear agreement that the emerging generation — students like me, and the early-career professionals right behind us — must be trained to understand AI deeply, not just lean on it as a shortcut. There was talk of entrepreneurship, of funding pipelines, of infrastructure both technical and human. There was a genuine call to chart the future of AI together, as a community rather than as a bunch of disconnected silos. Those were earnest, valuable conversations, and they deserved the space they got.

Even so, the pattern held: the most delicate, consequential topics continued to live almost exclusively in one-on-one conversations. Not avoided, per se, just gently steered away from the panel’s circle of stools. I admired the honesty in those quieter exchanges, but I couldn’t help feeling that some of these issues deserved a spotlight. Regulation and safety, for example, could easily have carried a dedicated panel.

Because damn, those hallway discussions were not trivial. They covered the very real, distressing dangers of nude deepfakes and the explosion of CSAM, both the subject of recent New York Times reporting. They covered the deeply unsettling instances of LLMs offering users pathways to self-harm, sometimes without anyone jailbreaking them. They covered the ethical debris of scraped media used to train image and video generators without consent. They covered the near-total absence of high-level safety standards in this country, even as AI systems seep into every corner of public life.

And the list of concerns didn’t stop there. AI-driven cyberattacks. Autonomous botnets. Compromised LLMs quietly generating malicious content. The death of modern digital art. Legal hearings where defendants have already shown up with AI-generated attorneys, trying to cheaply sweet-talk their way clear of trouble.

These were not sci-fi scenarios, much as I wish they were. They were real worries coming from people who build, deploy, and regulate this technology every day — and it was a balm to see them taking those worries seriously.

Put together, the polished optimism on stage and the sharper conversations in the hallways didn’t cancel each other out. They helped form a clearer picture of what “now” looks like for AI in Ohio.

The industry isn’t blind, or reckless, or sprinting forward without a thought in its head. It’s tangled up in a heated exchange with itself over how, when, and where to surface its hardest problems.

I understand that the nature of some of these issues keeps them from being discussed more freely on stage. So, as someone fascinated by what AI was, is, and will be, I could not reasonably have asked for more at this time.

That tension says a lot about where this corner of Ohio’s AI ecosystem is right now: full of people who genuinely want to do the right thing, but still nudged by the instinct to stay on-message. Watching that play out in real time was energizing. I walked away grateful I’d spent a day looking into the shadows running beneath the Summit, not just observing center stage.