How Prosodica Engineers Built an AI System to Evaluate Customer Calls Fairly and at Scale

Senior Data Solutions Engineer Ankush Rastogi explains how Auto-Evaluation was created using AI reasoning and cross-functional collaboration to preserve human context in quality reviews.

Written by Taylor Rose
Published on Feb. 09, 2026
Three Prosodica employees look at a computer screen in a candid photo.
Credit: Prosodica
REVIEWED BY
Justine Sullivan | Feb 09, 2026
Summary: In this story, Prosodica Senior Data Solutions Engineer Ankush Rastogi explains how his team built Auto-Evaluation, an AI system designed to review customer service calls fairly and at scale without losing human context. The article explores how explainable AI, careful system design and deep cross-functional collaboration helped Prosodica balance speed with fairness and trust.

“Move fast and break things” is the opposite of how Ankush Rastogi’s team engineers success. 

Rastogi is a senior data solutions engineer at Prosodica, a business intelligence company whose platform automatically tags and scores customer calls to pinpoint areas of concern. Rastogi’s team built the newly launched Auto-Evaluation, an addition to the company’s platform that uses AI to review customer calls at scale.

According to Rastogi, the product launch was emblematic of the engineering culture at Prosodica: “Complexity wasn’t brushed aside for optics, and success was treated as collective rather than individual,” he said.

That mindset shows up directly in the product.

“Auto-Evaluation is designed not just to function, but to be defensible, humane and genuinely useful over time,” he said.

Many companies prioritize speed or “AI spectacle” when launching products, Rastogi said, especially in emerging AI categories. But not his employer.

“Prosodica is willing to move slower if it means respecting the people affected by the system — especially frontline employees,” he added.

Slowing down might sound counterintuitive when working with rapidly evolving technology like AI, but according to Rastogi, that’s exactly what sets his employer apart.

“There’s a strong culture of craft and responsibility, with real attention paid to second-order effects like morale, trust, and long-term behavior change,” Rastogi said. 

Built In spoke with Rastogi about how the engineering team at Prosodica blends cutting-edge innovation with thoughtfulness and precision — and what that means for customers and employees alike.


 

Ankush Rastogi
Senior Data Solutions Engineer • Prosodica

What It’s Like to Work as an Engineer at Prosodica

What role did you play in developing and launching Auto-Evaluation? What tools or technologies did your team use to build it?

I led the design of the data and evaluation architecture that makes Auto-Evaluation scalable, explainable, and reliable in production. My role sat at the intersection of data engineering, analytics, and generative AI — ensuring those disciplines worked together cohesively rather than as isolated layers, which is easy to describe but difficult to execute in production systems. 

We built robust data pipelines to model conversations as structured case files, incorporating behavioral signals, customer effort indicators, and outcome evidence. Large language models are used as reasoning engines — not as black boxes — to synthesize this measured context into evaluation narratives similar to how experienced quality assurance analysts reason. The separation of measurement and reasoning was intentional: It improves consistency, auditability, and long-term trust in the system.
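
To make that separation concrete, here is a minimal, hypothetical sketch in Python of how a measurement layer might build a structured case file and a reasoning layer might turn it into a narrative. The names (CaseFile, measure, evaluate) and the example signals are illustrative assumptions for this article, not Prosodica’s actual schema or API.

```python
# Hypothetical sketch only: names and signals are illustrative, not Prosodica's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CaseFile:
    """Structured case file produced by the measurement layer."""
    call_id: str
    behavioral_signals: dict[str, float] = field(default_factory=dict)  # e.g. apologies per call
    effort_indicators: dict[str, float] = field(default_factory=dict)   # e.g. repeated requests
    outcome_evidence: list[str] = field(default_factory=list)           # e.g. "resolution stated"

def measure(transcript: str, call_id: str) -> CaseFile:
    """Measurement layer: deterministic signal extraction, no LLM involved."""
    words = transcript.lower().split()
    return CaseFile(
        call_id=call_id,
        behavioral_signals={"apology_count": float(words.count("sorry"))},
        effort_indicators={"repeat_requests": float(words.count("again"))},
        outcome_evidence=["resolution stated"] if "resolved" in words else [],
    )

def evaluate(case: CaseFile, llm: Callable[[str], str]) -> str:
    """Reasoning layer: an LLM synthesizes the measured context into a narrative.

    It sees only the structured case file, which keeps the output auditable."""
    prompt = (
        "You are a quality assurance analyst. Using ONLY the evidence below, "
        "write a brief evaluation narrative and cite each signal you rely on.\n"
        f"Behavioral signals: {case.behavioral_signals}\n"
        f"Effort indicators: {case.effort_indicators}\n"
        f"Outcome evidence: {case.outcome_evidence}\n"
    )
    return llm(prompt)  # inject any chat-completion client here
```

The point of the split is that a function like measure() can be tested and versioned like ordinary pipeline code, while evaluate() reasons only over evidence it can cite.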

 

A photo of a Prosodica employee in a meeting
Credit: Prosodica

 

Why Prosodica Built Auto-Evaluation to Preserve Human Context in Customer Calls

Why did the company need to build this product? 

Prosodica built Auto-Evaluation to address a core problem in contact centers: Human quality assurance teams cannot realistically review enough conversations to provide fair, consistent feedback at scale. 

But beyond volume, the deeper issue was trust. Traditional evaluations often feel arbitrary, overly rigid, or disconnected from the reality of emotionally complex customer interactions — undermining morale, performance, and growth. Poorly designed automation risks scaling that frustration. Auto-Evaluation was built to scale clarity instead. 

By grounding evaluations in observable behaviors and full conversational context through a multi-signal evaluation approach, the product delivers consistent, explainable feedback that representatives can learn from. For customers, this means better coaching, reduced friction between representatives and quality assurance teams, and more aligned leadership decisions. For Prosodica, it enables enterprise-scale support without sacrificing fairness or human impact.

“It enables enterprise-scale support without sacrificing fairness or human impact.”

 

What obstacles did you encounter along the way? How did you successfully overcome them?

The biggest obstacle was trust. Representatives and quality assurance teams are understandably skeptical of automated evaluation systems, especially those that feel rigid or opaque. A technically accurate system isn’t enough if it doesn’t feel credible to the people affected by it. We addressed this by prioritizing explainability from the start. 

Evaluations are grounded in observable behaviors and clearly traceable inputs, not vague model judgments. We repeatedly tested the system against real, messy calls to ensure outputs felt recognizable rather than magical. Alignment came from shared review sessions across teams, where we examined actual conversations together and agreed on what “good” truly looks like in practice.

“We repeatedly tested the system against real, messy calls to ensure outputs felt recognizable rather than magical.” 
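
As one illustration of what “clearly traceable inputs” could look like in practice, the sketch below checks that every sentence of a generated evaluation cites a known signal. The [sig:...] citation convention and the function name are assumptions made for this example, not Prosodica’s actual format.

```python
# Illustrative only: one possible way to enforce traceable evaluations.
# The [sig:...] citation tag is an assumed convention, not Prosodica's schema.
import re

def untraceable_claims(narrative: str, known_signals: set[str]) -> list[str]:
    """Return narrative sentences that cite no known signal ID.

    Assumes the reasoning layer is instructed to tag each claim with the
    signal it relies on, e.g. "Agent apologized twice [sig:apology_count]".
    """
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", narrative.strip()):
        cited = set(re.findall(r"\[sig:([a-z_]+)\]", sentence))
        if not cited or not cited <= known_signals:
            flagged.append(sentence)
    return flagged

# Usage: reject or regenerate any evaluation containing unsupported claims.
narrative = (
    "Agent apologized twice [sig:apology_count]. "
    "Customer repeated the request [sig:repeat_requests]. "
    "Tone was dismissive."  # no citation, so this claim gets flagged
)
print(untraceable_claims(narrative, {"apology_count", "repeat_requests"}))
```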

 

How Prosodica’s Engineering, Product and Domain Teams Built Auto-Evaluation Together

What teams did you collaborate with in order to get this across the finish line? What strategies did you employ to ensure that cross-functional collaboration went smoothly?

Auto-Evaluation was a deeply cross-functional effort. I collaborated closely with product, engineering, and domain experts with firsthand contact-center experience. Product ensured the system aligned with how evaluations are used day to day. Engineering focused on durability, scale, and operational stability. Domain experts grounded decisions in real-world representative workflows and constraints. 

Our key strategy was shared ownership through direct exposure to the work itself. We didn’t rely solely on dashboards or specs — we reviewed real calls together. That created a common language and prevented misalignment between technical decisions and operational reality. This approach ensured the final system worked in practice, not just in theory, and it made the work more rewarding, because everyone could see its real impact on the people doing the job every day.

“Our key strategy was shared ownership through direct exposure to the work itself. We didn’t rely solely on dashboards or specs — we reviewed real calls together.” 

 

How Auto-Evaluation Reflects Prosodica’s Mission to Create Better Conversations

Conversations are where work becomes human — requiring empathy, judgment, and emotional labor — yet many systems reduce that complexity to a score that doesn’t feel earned. Building fairer evaluation systems is a way of recognizing the value of that work. When feedback reflects reality and supports growth, employee experience and customer outcomes improve together. That throughline — better conversations, better work, and better outcomes — is what makes this product meaningful beyond its technical achievement and why it’s the kind of system engineers want to spend time building.

 

Frequently Asked Questions

What does Prosodica do?

Prosodica is a business intelligence company whose platform automatically tags and scores customer service calls to identify areas of concern, improve coaching and support better decision-making across organizations.

How does Prosodica’s Auto-Evaluation system work?

Prosodica’s Auto-Evaluation system uses structured data pipelines and large language models as reasoning engines to analyze full customer conversations. The system evaluates calls based on observable behaviors, effort indicators and outcomes to generate explainable, narrative-style evaluations similar to how experienced quality assurance analysts reason.

How does Prosodica use AI to evaluate customer calls fairly?

At Prosodica, AI evaluates conversations using full conversational context rather than isolated metrics, allowing it to account for emotionally complex interactions. By grounding evaluations in real call data and testing against messy, real-world conversations, the system produces feedback that feels recognizable and credible to human reviewers.

Does Auto-Evaluation replace human quality assurance teams?

No. Auto-Evaluation is designed to support and scale quality assurance, not replace it. Human expertise remains central, with AI used to improve consistency, fairness and auditability while freeing teams to focus on coaching, trust-building and higher-impact work.

What is it like to work as an engineer at Prosodica?

Engineers at Prosodica work at the intersection of data engineering, analytics and generative AI in a culture that values craft, responsibility and thoughtful innovation. Teams collaborate closely across engineering, product and domain expertise, prioritizing explainability, shared ownership and long-term trust over speed or “AI spectacle.”

Responses have been edited for length and clarity. Images provided by Shutterstock or listed companies.