In nearly every conversation about AI-powered customer service, one question comes up: "Does your AI learn from our historical conversations?" This question reveals a common misconception, fueled by ambiguous marketing from various companies, about how modern Large Language Models (LLMs)—like those powering Kustomer's AI Agent—actually work.

What Does "Training" Actually Mean?

Large Language Models (LLMs), as the name suggests, are still machine learning models: they are trained on data using neural networks, just like traditional machine learning systems. The key difference is scale, both of the training data and of the resulting model. These models are trained on vast amounts of text from the Internet, books, and synthetic data created specifically for the purpose. The training process is extremely expensive in terms of manual labeling, time, and compute. Today, only large companies that own the necessary infrastructure (vast GPU clusters and access to cheap energy) train these models from scratch.

So what do Kustomer and other companies do to build AI Agents and other modern AI products? We use models that have already been pre-trained by providers on public data. This saves us the cost, resources, and time required to “train” or fine-tune an LLM with our own custom data. Instead, Kustomer combines the capabilities of the pre-trained LLM with custom instructions, search engines, and other tools that supply context for real-world customer service use cases. This is known as “grounding.” Rather than rewriting the model's knowledge through expensive and labor-intensive training cycles, modern AI systems simply reference relevant information when needed.
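To make grounding concrete, here is a minimal sketch in Python, assuming the OpenAI Python SDK; the knowledge base, helper functions, and model name are illustrative placeholders rather than Kustomer's actual implementation. Notice that the model's weights are never touched: relevant reference material is retrieved at request time and placed into the prompt.

```python
# A minimal sketch of "grounding": the model's weights never change;
# relevant reference material is retrieved at request time and included
# in the prompt alongside custom instructions.
# (The knowledge base and helper names are illustrative, not Kustomer's API.)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

KNOWLEDGE_BASE = {
    "refund policy": "Refunds are available within 30 days of purchase with a receipt.",
    "shipping times": "Standard shipping takes 3-5 business days.",
}

def retrieve_context(question: str) -> str:
    """Naive keyword lookup standing in for a real search engine."""
    matches = [text for topic, text in KNOWLEDGE_BASE.items() if topic in question.lower()]
    return "\n".join(matches) or "No matching articles found."

def answer(question: str) -> str:
    context = retrieve_context(question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any pre-trained model; nothing here retrains it
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("What is your refund policy?"))
```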

The Misleading Marketing of "Coaching"

Some vendors reinforce misunderstandings by using terms like "training" or "coaching" on their websites to describe how their AI works. On the surface, this implies that a model is being continuously refined or retrained. In reality, these systems often employ a technique called Retrieval-Augmented Generation (RAG), in which historical conversations or FAQs are grouped (or "clustered") and stored for retrieval when answering queries. Crucially, this does not equate to traditional training. Instead, it treats past conversations as a context repository, allowing the AI to respond appropriately without changing the model itself.
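For illustration, here is a small, hypothetical sketch of the retrieval step in such a RAG pipeline, written in Python with scikit-learn's TF-IDF similarity standing in for the vector embeddings a production system would more likely use; the sample conversations and function names are invented for this example. The important point is that nothing about the language model changes: past conversations are only indexed, searched, and quoted back into the prompt.

```python
# Hypothetical sketch of the retrieval step in a RAG pipeline.
# Past conversations are indexed and searched; no model weights are updated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

historical_conversations = [
    "Customer asked how to reset their password; agent sent the reset link.",
    "Customer reported a damaged item; agent issued a replacement order.",
    "Customer wanted to change their shipping address before dispatch.",
]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(historical_conversations)  # the "context repository"

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most similar past conversations to use as prompt context."""
    scores = cosine_similarity(vectorizer.transform([query]), index)[0]
    ranked = scores.argsort()[::-1][:k]
    return [historical_conversations[i] for i in ranked]

# These snippets would be inserted into the LLM prompt as context,
# exactly as in the grounding sketch above.
print(retrieve("My package arrived broken, what can I do?"))
```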

Privacy and Practicality

True training of an LLM, as some imagine it, raises substantial privacy concerns—especially when dealing with sensitive customer data. Moreover, practically speaking, continually training an LLM would be costly, inefficient, and slow to reflect changes. 

Kustomer’s approach prioritizes user privacy and operational efficiency, ensuring robust responses without the complications of training. We believe that using high-quality LLMs from OpenAI, AWS Bedrock, and other providers serves our customers better and more cost-efficiently.

Moving Beyond "Training"

Instead, Kustomer uses sophisticated techniques such as summarized historical conversations, structured guidance, and secure "long-term memory" features. These are not traditional training, but intelligent ways of contextually informing AI agents.

Why Does This Matter?

Clearing up this misunderstanding isn't merely academic—it has real-world implications for how businesses adopt AI solutions. Companies need an accurate understanding in order to evaluate vendors fairly, safeguard customer data, and avoid overspending on unnecessary "training" capabilities.

At Kustomer, we are committed to transparency and innovation. We strive to educate our customers, dispel myths, and focus on genuinely impactful advancements in customer experience.

When you hear the words "train" or "coach" in the context of AI, remember: clarity matters. True AI leadership isn't about perpetuating myths—it's about clearly communicating capabilities and building trust through transparency.

Have questions or thoughts about how AI can realistically and responsibly enhance your customer service? Let’s continue this conversation together. Schedule a meeting.