Meetup Summary: Solving Real-World Business Problems with Generative AI

December 14th, 2025

Introduction

The recent Northern Virginia Software Architecture Roundtable meetup brought together four seasoned technology and product leaders, Scott Day, Emile Karam, Lokesh Kumar, and Mohan Rao, for an evening of practical insights and real-world demonstrations of how AI is reshaping modern organizations. Across topics including problem-driven AI adoption, building AI-powered digital workforces, breakthroughs in natural voice-based agents, and transforming unstructured conversations into commercial intelligence, each speaker shared lessons learned from building and deploying AI at scale. The sessions highlighted not only where AI is delivering immediate value today, but also how rapidly the ecosystem is evolving and what teams must do to prepare.

Solving Real-World Business Problems with Gen AI

Scott Day – Principal, Day Strategies

Scott focused on how organizations can use generative AI to solve real business problems – and how easy it is to get this wrong by leading with technology instead of understanding the underlying need. Drawing on his experience as a CTO through multiple technology waves (web, blockchain, and now AI), he emphasized that AI initiatives fail when teams rush toward solutions without clearly defining the business problem. He urged technologists to observe real users, uncover pain points, study bottlenecks, identify sources of friction, and repeatedly ask “why?” until the root need is revealed. The biggest obstacles, he argued, are usually not technical, but around people, process, and change management.

Scott then outlined several broad patterns where generative AI is a natural fit: analyzing and synthesizing large amounts of text, customer-service automation, rapid content creation and testing, and extracting insights from unstructured data. To illustrate these categories, he shared three concrete examples from his work. His legal team struggled with a backlog of NDAs, so he built a lightweight system that compared vendor contracts against a master template using GPT-4, cutting turnaround time from days to minutes. Marketing faced a daily flood of media coverage, so he and his team built an automated sentiment and summarization tool to help them understand which stories mattered. A third use case involved customer-service automation, but here the prototype revealed hallucination issues that required more careful design, and possibly RAG or specialized tools, before deployment.
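The NDA-review pattern Scott described can be sketched as a two-step pipeline: diff the vendor contract against the master template locally, then hand only the deviating clauses to an LLM for analysis. The function names and prompt wording below are illustrative (the actual LLM call is omitted), not Scott's implementation:

```python
import difflib

def clause_deviations(master: str, vendor: str) -> list[str]:
    """Return the vendor-contract lines that differ from the master template."""
    diff = difflib.unified_diff(
        master.splitlines(), vendor.splitlines(), lineterm=""
    )
    # Keep only added/changed vendor lines; skip the "+++" file header.
    return [
        line[1:] for line in diff
        if line.startswith("+") and not line.startswith("+++")
    ]

def build_review_prompt(master: str, vendor: str) -> str:
    """Assemble an LLM prompt that focuses review on the deviations only."""
    deviations = clause_deviations(master, vendor)
    return (
        "You are a contracts analyst. For each clause below, explain how it "
        "deviates from our master NDA and flag any unacceptable risk.\n\n"
        "Deviating clauses:\n" + "\n".join(f"- {c}" for c in deviations)
    )
```

Pre-filtering with a plain diff keeps the prompt short and grounds the model in the exact clauses that changed, which is one simple way to reduce hallucination risk in this kind of tool.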

He closed with guidance for any AI initiative: start small, work with early adopters, minimize risk, build iteratively, and share successes and lessons learned openly across the organization. The point is not to chase hype but to build confidence and momentum with small wins before tackling bigger, more complex problems. Ultimately, he reminded the audience that technology won’t be the hardest part; change management will.

QuadSail Captain (AI for Consultancy Operations)

Emile Karam – Founder and Principal, QuadSail

Emile presented a real-world AI use case he is actively building: QuadSail Captain, an AI-powered digital workforce designed to augment and scale a consultancy’s capabilities. As a solo founder, he wanted to use AI not just as a tool, but as a teammate – one that could extend his expertise, accelerate processes, and enable him to operate like a larger firm without hiring dozens of staff. He framed this with the idea of having virtual advisory “board members” – icons like Peter Drucker, Alex Hormozi, Andrew Ng, and Clay Christensen – who could be consulted instantly for strategic decisions, proposal refinement, workshop design, and client delivery. His goal is to move beyond generic prompting into a structured, single interface where curated personas provide consistently high-quality, context-aware advice.

Emile outlined how Captain supports the entire consultancy lifecycle: attract (market positioning, ICP definition, thought leadership), close (proposal enhancement and engagement scoping), deliver (workshop design and client outputs), and scale (operational efficiency). Technically, the MVP runs locally using Llama as the core LLM, n8n for workflow orchestration, and Qdrant as the vector database, all wrapped in UI layers and integrations with tools like Google Workspace, HubSpot, Miro, and ClickUp. This design keeps costs low, maintains data control, and enables continual learning as Emile uses it.
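The retrieval step at the heart of this stack (Qdrant, in Emile's MVP) comes down to ranking stored vectors by similarity to a query vector. A toy, dependency-free stand-in, with made-up two-dimensional "embeddings" in place of real model outputs and hypothetical document IDs:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list[float], store: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    """Return the ids of the k stored vectors most similar to the query."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

A real vector database does the same ranking with approximate-nearest-neighbor indexes so it stays fast at millions of vectors; the retrieved chunks are then fed to the LLM as grounding context (the RAG pattern several speakers referenced).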

He closed by emphasizing lessons learned: think big but start small, use AI to build AI (e.g., metaprompting), iterate rapidly, and structure problems into solvable components. His long-term roadmap includes a “digital twin” of himself (the “First Mate”), followed by digital CXOs and a full digital workforce operating alongside humans. Ultimately, Emile believes every knowledge worker will eventually have autonomous AI teammates that amplify human creativity and execution.

Voice AI That Finally Feels Human

Lokesh Kumar – CTO, Sheeva.AI

Lokesh’s talk focused on how voice AI has evolved from clunky IVR systems to real-time conversational agents that feel genuinely human. He began by emphasizing that voice is the most natural, universal, high-bandwidth communication channel humans have – rich with tone, emotion, pacing, and nuance. Historically, voice bots failed because of brittle NLU systems, slow latency, weak ASR accuracy, and rigid menu trees. But major breakthroughs – Whisper, trained on roughly 680,000 hours of audio, improved text-to-speech models, and WebRTC for low-latency bidirectional streaming – have changed the game.

He walked through the architecture of modern voice agents: speech-to-text, an LLM “thinker,” a text-to-speech “speaker,” and real-time orchestration. The challenge now is achieving sub-1.5-second latency, natural turn-taking, noise robustness, and preventing the agent from interrupting or hallucinating. Lokesh showed a live demo of an in-car voice assistant built at Sheeva.AI that handled car model identification, troubleshooting dashboard lights, scheduling a dealership appointment, and locating nearby gas stations, all through natural conversation. He contrasted this with the cost and complexity of similar systems only a few years ago, demonstrating how dramatically faster, cheaper, and more capable the new stack is.
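The speech-to-text → LLM → text-to-speech loop and its latency budget can be sketched with stubbed stages. In a real agent each stub would be a network-bound call (Whisper for STT, an LLM, a TTS engine) and the 1.5-second budget would be split across them; everything below is illustrative:

```python
import time

# Stubbed stages standing in for the STT, LLM "thinker", and TTS "speaker".
def transcribe(audio: bytes) -> str:
    return audio.decode()            # pretend the audio is already text

def think(utterance: str) -> str:
    return f"Reply to: {utterance}"  # placeholder for the LLM response

def speak(text: str) -> bytes:
    return text.encode()             # placeholder for synthesized audio

def handle_turn(audio: bytes, budget_s: float = 1.5) -> tuple[bytes, bool]:
    """Run one conversational turn; report whether it met the latency budget."""
    start = time.perf_counter()
    reply_audio = speak(think(transcribe(audio)))
    elapsed = time.perf_counter() - start
    return reply_audio, elapsed <= budget_s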

He also described their engineering approach: using ElevenLabs for voice, OEM manuals processed into structured knowledge, YAML prompts generated by LLMs, vector retrieval, and careful monitoring of where latency accumulates (LLM, RAG, API calls, WebRTC). He highlighted that new “direct voice” models trained on raw audio rather than intermediated text are poised to solve remaining challenges like noise, multilingual switching, and emotional nuance. In his view, apps will increasingly give way to voice-first interfaces as conversational AI becomes the primary interaction layer for many tasks.

Commercial Intelligence for Scaling Services Businesses

Mohan Rao – Chief Product and Technology Officer, Knownwell

Mohan closed the event by presenting Knownwell, a commercial intelligence platform designed to help B2B services companies scale sustainably, a notoriously difficult challenge. He explained that services firms often struggle to move beyond $50-100M in revenue because quality is tied to people, client perception is subjective, and deal health drifts without clear metrics. Despite the abundance of meetings, calls, and emails, the critical signals about client satisfaction, relationship strength, and commercial alignment remain buried in unstructured data. Traditional CRMs don’t solve this because they rely on manual inputs and are focused on sales, not ongoing client delivery.

Knownwell’s solution is to ingest the full stream of unstructured and structured data – operational conversations, internal discussions, CRM events, financial metrics – and use LLM-based reasoning to produce a “commercial health score” and related insights. Mohan described three core pillars: service quality perception, relationship strength, and commercial alignment, each supported by domain-specific rubrics that transform free-form signals into structured, auditable intelligence. The platform provides account teams with risk detection, trending signals, 360° client views, and generative deliverables such as QBR decks and status reports. A key feature is a “digital chief of staff” that can answer questions like “What’s the latest with this client?” or “What actions should I take next?”

He shared the architectural principles behind building an AI-native product: multimodal ingestion, entity graph modeling, layered intelligence, and multipass reasoning. The orchestration layer manages pipelines that progressively structure and refine data from raw transcripts into actionable insights. The system uses graph-structured entities, memory, and domain heuristics to stabilize the inherent nondeterminism of LLMs. Mohan emphasized that existing tools couldn’t solve this problem because the required intelligence must be extracted from conversations, not manually curated CRM fields. Knownwell aims to give services firms the operational rigor and predictive engine they’ve never had, turning unstructured conversational “exhaust” into a strategic advantage.

Top 10 Takeaways from the event

1. AI initiatives succeed when they start with real business problems, not technology excitement.

From Scott’s opening message to Mohan’s closing architecture, every speaker reinforced the need to understand the problem first. AI should be a response to pain points, not a tech-driven science experiment.

2. Small, low-risk AI projects build confidence and accelerate organizational adoption.

Both Scott and Emile highlighted that the fastest wins came from small, targeted use cases: NDA review, media summarization, proposal enhancement, or workshop generation. These early successes create momentum and reduce fear in teams unfamiliar with AI.

3. Unstructured conversational data is a goldmine, and most companies ignore it.

Mohan drove this home: services firms generate enormous volumes of meeting transcripts, emails, chats, and calls. Historically this data went unused. Today, LLMs can turn that “operational exhaust” into structured insight, risk detection, and client health scoring.

4. AI is evolving from “tool” to “teammate” to “digital workforce.”

Emile’s roadmap (virtual advisors to digital twin to digital CXOs) captures a broader trend: AI systems are no longer just utilities. They’re becoming collaborators with agency, decision support, and domain-specific expertise. Every employee may soon have a personal “chief of staff.”

5. Voice AI has reached a turning point: natural, fast, and multilingual.

Lokesh demonstrated how real-time, humanlike voice assistants are now practical using Whisper, WebRTC, and stack components like ElevenLabs. Latency is low enough, accuracy high enough, and language support broad enough to rethink many existing apps as voice-first experiences.

6. RAG, vector stores, and agent orchestration are the new backbone of AI systems.

All presenters referenced:

  • Retrieval-augmented generation
  • Vector databases like Qdrant
  • Automated orchestration layers (n8n, custom orchestrators, pipelines)
  • Domain-specific knowledge bases

These components are becoming standard for reliable, hallucination-resistant AI.

7. AI makes the biggest impact when combined with domain expertise.

The MVPs shown (legal NDA reviewer, in-car concierge, consultancy advisor, commercial intelligence engine) all required:

  • Clear domain rules
  • High-quality internal data
  • Well-defined rubrics/guardrails

AI alone is not the solution; AI + domain context is.

8. Latency, hallucination control, and accuracy are still the technical battlegrounds.

Lokesh and Scott both shared real-world lessons: hallucinations can kill early trust, and latency >1.5 seconds breaks the conversational experience. Architectural choices matter.

9. Change management, not the tech, will be the hardest part.

Scott emphasized this strongly, and Emile echoed it: people, process, and trust must evolve alongside the technology. Teams must learn new workflows, iterate faster, and get comfortable working with AI-based collaborators.

10. AI is dramatically reducing the cost and time to prototype.

Examples across the talks showed projects that previously cost tens of thousands of dollars and months of work can now be built in:

  • Hours (media sentiment analyzer)
  • Days (in-car voice assistant prototype)
  • Weeks (consultancy digital advisory board MVP)

This changes how companies should think about experimentation, risk, and innovation.

Solution Street is a Software Engineering and Consulting company. Reach out today to learn how AI can streamline your operations and deliver measurable efficiency gains!