Three Worst Practices for Real-World Conversational AI Applications
Posted January, 2026
Conversational AI is evolving faster now than at any other point in history. Only a few years ago, most conversational systems relied on tightly scripted flows, intent-classification models, and a collection of handcrafted rules. In the last few years, however, conversational AI has been profoundly reshaped by large language models (LLMs) and related technologies such as Retrieval Augmented Generation (RAG) and graph-based retrieval systems. But with this flexibility comes new complexity. Many organizations are discovering that building conversational systems that solve a real problem is not simply a matter of "adding an LLM." In fact, a report from MIT on "The State of AI in Business 2025" found that 95% of organizations are getting zero return from their AI investments. Why does this happen? We're going to look at three major AI project mistakes: having no clear objective in mind, failing to understand the data requirements, and treating the project as once-and-done rather than continuously evaluating and adapting it.
This post outlines three pitfalls that any team—whether upgrading an existing assistant or designing a new one—should avoid in order to deliver successful conversational AI applications.
Failure to be clear about the problem that needs to be solved
The capabilities of conversational AI technology are evolving rapidly. This evolution is only outpaced by the hype around it. The hype and capabilities of GenAI, not to mention its apparent lower cost, lead to the temptation to plunge into AI projects without careful planning. Earlier conversational systems required expensive cycles of data collection, annotation, model design, and testing. GenAI-based conversational systems seem much easier to implement. But if your organization is thinking about how it can benefit from AI, start by deciding what you want to accomplish.
As with any software project, don't let the technology dictate the direction. Start with the problem that needs to be solved; starting without a clear goal will only waste your team's time. Once the goal is clear, you can select the technical components that will make the project successful.
Conversational systems often fail not because the models are weak, but because the goals are not well-defined. Earlier generations of NLU systems, which emphasized intent classification, entity extraction, and structured representations, were usually preceded by careful planning and design because they were expensive to implement.
Putting this into practice
Teams should begin by considering three questions:
- What is the user trying to accomplish?
- What does the organization need to accomplish?
- What business, technical, or compliance constraints need to be considered?
When these answers drive the architecture, the technical direction will be much more straightforward.
Failure to understand data requirements
All AI projects fundamentally rest on data. From the training of foundation models to application-specific fine-tuning, projects can only succeed if they're based on the right data.
Simple RAG is no longer enough
Traditional RAG retrieves content based on semantic similarity, which works well for factual question answering but struggles with:
- Multi-step reasoning
- Complex entities
- Relationship-heavy information
- Operational or structured data
This is where GraphRAG, using a graph database for retrieval, has gained traction. Graph databases naturally represent relationships, hierarchies, connections, and constraints between data points. For conversational applications, this means the model can reason across data more effectively and reduce errors on complex queries.
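To make the idea concrete, here is a minimal sketch of multi-hop retrieval over a toy knowledge graph. Plain Python dictionaries stand in for a real graph database, and every entity and relation name (`placed`, `contains`, `covered_by`, and so on) is invented for illustration, not taken from any real product or schema.

```python
# Hedged sketch: multi-hop retrieval over a toy knowledge graph.
# A real GraphRAG system would use a graph database; this illustrates
# only the relationship-following idea that vector similarity misses.
from collections import defaultdict

class ToyKnowledgeGraph:
    def __init__(self):
        # subject -> list of (relation, object) edges
        self.edges = defaultdict(list)

    def add(self, subj, rel, obj):
        self.edges[subj].append((rel, obj))

    def neighbors(self, subj, rel):
        return [o for r, o in self.edges[subj] if r == rel]

    def multi_hop(self, start, relations):
        """Follow a chain of relations, e.g. customer -> order -> product."""
        frontier = {start}
        for rel in relations:
            frontier = {o for s in frontier for o in self.neighbors(s, rel)}
        return frontier

kg = ToyKnowledgeGraph()
kg.add("alice", "placed", "order-17")
kg.add("order-17", "contains", "widget-a")
kg.add("widget-a", "covered_by", "policy-returns-30d")

# Multi-hop query: which return policies apply to products Alice ordered?
policies = kg.multi_hop("alice", ["placed", "contains", "covered_by"])
```

The answer requires chaining three relationships, which a pure similarity search over document chunks has no reliable way to reconstruct.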
Putting this into practice
Choose the right retrieval method(s) for the job:
- Vector search works well for unstructured knowledge: articles, documentation, FAQs.
- Graph retrieval can be used for relationship-heavy information: policies, workflows, customer or product relationships, and data requiring precise contextual understanding.
- Hybrid retrieval combines both when neither approach alone captures the full picture.
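As a rough illustration of the hybrid option, the sketch below merges scores from two retrievers with a tunable weight. The scoring functions are toy stand-ins (word overlap instead of real embeddings, exact-term boosts instead of real graph traversal), chosen only to show how the combination works.

```python
# Hedged sketch: hybrid retrieval that blends a "semantic" score with a
# structured/keyword score. Both scorers are toy stand-ins for real systems.
def vector_scores(query, docs):
    # Toy semantic score: fraction of query words found in the document.
    q = set(query.lower().split())
    return {d: len(q & set(d.lower().split())) / len(q) for d in docs}

def keyword_scores(query, docs, boost_terms):
    # Toy structured score: boost documents containing exact domain terms.
    return {d: sum(1.0 for t in boost_terms if t in d.lower()) for d in docs}

def hybrid_rank(query, docs, boost_terms, alpha=0.6):
    # alpha weights the semantic score against the structured score.
    v = vector_scores(query, docs)
    k = keyword_scores(query, docs, boost_terms)
    combined = {d: alpha * v[d] + (1 - alpha) * k[d] for d in docs}
    return sorted(docs, key=lambda d: combined[d], reverse=True)

docs = [
    "refund policy for widgets",
    "shipping times overview",
    "widget colors",
]
ranked = hybrid_rank("widget refund", docs, boost_terms=["policy"])
```

Here the keyword boost breaks the tie that word overlap alone would leave between the policy document and the unrelated "widget colors" page.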
Treating conversational AI projects as once-and-done
One of the most misunderstood parts of building conversational AI is the failure to recognize that conversational AI systems are dynamic. A good way to understand this is to remember that AI systems are often called "models". A model by definition is a representation of something else, and when that something else changes, the model is no longer a good representation. Any results based on an inaccurate model will by definition become inaccurate. The world changes -- new products are introduced and old ones disappear, data changes, user behavior shifts, and business constraints evolve. The only way to keep the model a good reflection of the world is to measure it continuously and adapt it as the world changes.
Why this matters
Earlier generations of NLU relied on focused understanding metrics like intent accuracy, entity resolution, and classification correctness, as well as usability metrics like latency and user satisfaction. These considerations haven't gone away just because newer LLMs offer more capable NLU.
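A recurring evaluation pass over logged conversations might look like the sketch below. The field names (`expected_intent`, `predicted_intent`, `latency_ms`) and the latency budget are assumptions for illustration, not a real logging schema.

```python
# Hedged sketch: a periodic evaluation pass over logged conversation turns,
# computing an understanding metric and a usability metric together.
def evaluate(turns, latency_budget_ms=2000):
    correct = sum(
        1 for t in turns if t["predicted_intent"] == t["expected_intent"]
    )
    on_time = sum(1 for t in turns if t["latency_ms"] <= latency_budget_ms)
    return {
        "intent_accuracy": correct / len(turns),
        "within_latency_budget": on_time / len(turns),
    }

# Illustrative logged turns with hand-labeled expected intents.
turns = [
    {"predicted_intent": "refund", "expected_intent": "refund", "latency_ms": 900},
    {"predicted_intent": "shipping", "expected_intent": "refund", "latency_ms": 2500},
    {"predicted_intent": "hours", "expected_intent": "hours", "latency_ms": 1200},
    {"predicted_intent": "hours", "expected_intent": "hours", "latency_ms": 400},
]
report = evaluate(turns)
```

Running a pass like this on a schedule, rather than once at launch, is what turns evaluation into the continuous feedback loop the model needs.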
And don't forget the obvious: when a new product is introduced, especially with significant advertising behind it, customers are going to ask about it. Be prepared for those questions.
Rooted in Best Practices, Powered by GenAI
Conversational AI today is far more capable than earlier systems, but it's still doing many of the same jobs. Deploying practical systems successfully requires attention to many of the same considerations that made earlier systems work.
- Build with intent
- Ground systems in structured, trustworthy data
- Evaluate continuously
How can I keep from getting left behind?
Of course, the value of applying agentic AI to current business needs, as discussed in this post, is undeniable. But agentic AI represents a fundamental change in technology, with the potential to create whole new applications and even industries on a scale that is now difficult to imagine. Think of some of the transformative technologies of the 20th century, like aviation, television, and the internet: they ended up changing our lives in ways that were unimagined when they were invented. Agentic AI will be like that. How can we take advantage of this potential? Our next blog will present some ideas for using agentic AI in totally new ways.