Your AI Advocate for All Things Customer Service
Overview:
Over the past seven months, my team collaborated closely with Consumer Reports to develop an AI agent that empowers consumers to advocate for their rights when dealing with customer service. The goal of the project was to create an AI-powered platform that mediates interactions between consumers and businesses. Our solution, FairPlay, addresses consumer pain points by informing, guiding, and acting on their behalf during the post-purchase phase, ensuring their needs are met in real time.
My Role:
As the design lead, I was responsible for translating research insights into innovative design solutions, rapidly prototyping and iterating on new interaction paradigms across varying levels of fidelity. This included exploring high-risk assumptions such as voice agents acting on behalf of users, human intervention in case of AI errors, and three-way conversations. Additionally, I led the creation of the final high-fidelity prototypes, wireframes, and user flows.
Duration
7 Months
Client
Consumer Reports
Team
1 UX Lead (me) + 1 PM + 2 Researchers + 1 Strategist
TL;DR
The Problem
The majority of consumers experience frustration and need support during the post-purchase phase.
Consumer Reports has long been a trusted guide for consumers during their pre-purchase journey, providing unbiased product ratings and reviews. However, 74% of customers reported encountering a product or service issue in the past year (National Customer Rage Survey, 2023). This presents a unique opportunity for CR to leverage AI technology to help consumers advocate for their rights during the post-purchase phase.
The Solution
FairPlay AI empowers consumers with policy-backed arguments to get what they deserve from product companies.
FairPlay is your AI advocate when you encounter any product or service issues, offering a wide range of support, from calling companies' customer service lines on your behalf to providing policy-based arguments and the best contact channels. FairPlay guides you through every step of your post-purchase journey, ensuring your consumer rights are protected.
The Impact
FairPlay AI was presented to key stakeholders at Consumer Reports and received overwhelmingly positive feedback.
In late July, my team and I presented our FairPlay Agentic AI concept at Consumer Reports' NYC office to over 20 attendees, including the Head of Innovation, the Chief Venture Officer, and other members of the Innovation Lab. The presentation sparked a lively discussion about the future of AI interactions and the potential incorporation of some features into their future pipeline. We're excited to see the next chapter for FairPlay and to continue advocating for consumer rights in the digital age.
The Three Tiers of AI Intervention
Drawing from the insights of our foundational research, we discovered that people's comfort levels with AI vary, particularly regarding trust in its accuracy and alignment with their best interests. To address this, we developed three tiers of AI assistance—informing, guiding, and acting—designed to offer users the flexibility to choose the level of AI involvement they’re comfortable with. This approach ensures that AI enhances each user's experience without compromising their sense of control.
Inform
The agent provides relevant information to users in a timely manner.
Guide
The agent provides step-by-step guidance and resources to empower consumers.
Act
The agent acts on behalf of consumers to advocate for their rights.
Act
Hate Calling Customer Service Agents? FairPlay AI Has Your Back
Imagine a world where the frustration of customer service calls is a thing of the past. With FairPlay's agentic AI, that world becomes a reality. FairPlay AI autonomously handles customer service issues with minimal effort on your part. Simply select the company and describe your problem—FairPlay takes care of the rest, seamlessly managing calls behind the scenes, whether it's negotiating an overcharged bill or resolving poor service issues.
Inform
Policy-based Arguments at Your Fingertips
We understand that while some individuals prefer to handle customer service calls themselves, they still seek guidance to effectively assert their rights. In response, we’ve introduced company policy cards, along with tips and tailored talking points specific to the customer service issue at hand. During the call, users will have access to all necessary information, ensuring they are well-prepared to advocate for themselves effectively.
Inform
Guide
Seamlessly Integrated with iOS Business Chat
Once FairPlay redirects the user to the iMessage interface, utilizing the Apple Business Chat Integration, it monitors user input and makes inferences about which policies are most likely to help users advocate for their rights, using Large Language Models trained specifically on CR data. Once FairPlay detects a relevant policy, it will alert the user. The user can then review the policy and select “Draft Argument” to have the text box pre-populated.
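As a rough illustration of this flow, here is a minimal Python sketch of in-chat policy detection. A keyword index stands in for the CR-trained language model, and the policies and trigger phrases are invented for the example:

```python
# Invented policy index; a real system would use an LLM trained on CR data.
POLICIES = {
    "late delivery": "Orders arriving more than 7 days late qualify for a shipping refund.",
    "damaged item": "Damaged items may be returned for a full refund within 30 days.",
}

def detect_policy(user_draft: str):
    """Monitor the user's draft message; return a relevant policy, if any."""
    text = user_draft.lower()
    for trigger, policy in POLICIES.items():
        if trigger in text:
            return policy  # surface this policy as an in-chat alert
    return None

def draft_argument(policy: str) -> str:
    """Pre-populate the message box when the user taps 'Draft Argument'."""
    return f"According to your policy ('{policy}'), I am entitled to a resolution."
```

In the concept, `detect_policy` would run on each keystroke or message, and `draft_argument` fires only when the user opts in, preserving their control over what gets sent.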
Inform
Guide
Act
Accessing the Optimal Contact Channel for Any Company
Each company has its preferred methods of contact, whether through a phone call, live text chat, or email. FairPlay AI would leverage aggregated company data and knowledge of behind-the-scenes shortcuts to recommend the fastest and easiest way to contact a specific company. This would enable users to bypass lengthy queues and resolve their issues quickly and effortlessly, ensuring a hassle-free experience.
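As a rough illustration, the recommendation could be sketched as a simple ranking over aggregated channel stats. The company name, the stats, and the scoring weights below are invented for the example; a real system would learn these from CR's aggregated data.

```python
# Hypothetical aggregated contact-channel stats for one company.
# avg_wait_min is the average wait in minutes; resolution_rate is the
# fraction of issues resolved through that channel.
CHANNEL_STATS = {
    "Acme Airlines": [
        {"channel": "phone", "avg_wait_min": 45, "resolution_rate": 0.70},
        {"channel": "live_chat", "avg_wait_min": 4, "resolution_rate": 0.65},
        {"channel": "email", "avg_wait_min": 2880, "resolution_rate": 0.50},
    ],
}

def best_channel(company: str) -> str:
    """Recommend the channel with the best trade-off between
    resolution rate and wait time (weights chosen for illustration)."""
    return max(
        CHANNEL_STATS[company],
        key=lambda s: s["resolution_rate"] - 0.002 * s["avg_wait_min"],
    )["channel"]
```

With these invented numbers, live chat wins: its slightly lower resolution rate is outweighed by a 4-minute wait versus a 45-minute phone queue.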
Inform
Guide
Act
Keeping You Informed Every Step Along the Way
We recognize the importance of user control when it comes to innovative technologies like agentic Voice AI. That’s why we integrate robust, fail-safe features into FairPlay AI to keep users well-informed and in command at every step. Users would receive timely push notifications before, during, and after each call. These updates allow users to verify account details, make crucial time-sensitive decisions—such as accepting promotional offers—and review a comprehensive summary of the call. Through these push notifications, we aim to build trust and transparency with our end users.
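The before/during/after notification lifecycle can be sketched as a simple event-to-message mapping. The event names and notification copy below are hypothetical placeholders, not a real notification API:

```python
def notifications_for(call_events):
    """Map call-lifecycle events to the push notifications a user sees.
    Unknown events are silently skipped."""
    mapping = {
        "call_starting": "Verify your account details before we dial.",   # before
        "offer_received": "Time-sensitive: a promotional offer needs your decision.",  # during
        "call_ended": "Call complete. Review the summary and transcript.",  # after
    }
    return [mapping[e] for e in call_events if e in mapping]
```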
Decoding FairPlay’s Agentic AI Technology
At the heart of FairPlay’s agentic AI would lie a sophisticated Multi-agent Large Language Model (LLM) system. Here’s how we imagine it would work:
When interacting with a customer service representative, anything said is converted into text—what’s termed a “literal utterance.” The system’s first agent, the Intent Mapper, identifies and maps this utterance to a specific intent. This intent is then relayed to the second agent, the Policy Matcher, which retrieves relevant company policies to support the user’s argument.
The process doesn’t stop there. The third agent, the Response Generator, crafts a reply based on these policies, while the fourth agent, the Placeholder Filler, integrates the user’s personally identifiable information (PII) like name, address, and purchase details into the response. Importantly, our system is designed to ensure that the LLMs cannot access sensitive user information, maintaining strict privacy standards.
To further refine the accuracy of the system’s responses, an additional Discriminator Agent is employed. This agent evaluates the effectiveness of our responses based on the customer service representative’s subsequent utterances. Feedback from this evaluation is then used to update the other agents, continuously improving our system’s accuracy and reliability.
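Under the assumptions above, the five-agent pipeline could be sketched as follows. Each "agent" is a plain function here, where a real system would back each with its own LLM prompt; the intent labels, policy database, and discriminator keywords are all invented for illustration. Note that the placeholder fill runs outside the hypothetical LLM boundary, matching the privacy constraint described above.

```python
from dataclasses import dataclass

@dataclass
class UserPII:
    """Stored client-side; never passed to the LLM-backed agents."""
    name: str
    order_id: str

# Invented policy database standing in for CR-trained retrieval.
POLICY_DB = {
    "refund_request": "Returns accepted within 30 days of delivery.",
    "billing_dispute": "Billing errors are corrected within one cycle.",
}

def intent_mapper(literal_utterance: str) -> str:
    """Agent 1: map the transcribed utterance to an intent label."""
    text = literal_utterance.lower()
    if "refund" in text or "return" in text:
        return "refund_request"
    return "billing_dispute"

def policy_matcher(intent: str) -> str:
    """Agent 2: retrieve the company policy supporting this intent."""
    return POLICY_DB[intent]

def response_generator(policy: str) -> str:
    """Agent 3: craft a reply template citing the policy.
    Placeholders ({name}, {order_id}) keep PII out of the LLM's view."""
    return f"Per your policy ('{policy}'), {{name}} requests a resolution for order {{order_id}}."

def placeholder_filler(template: str, pii: UserPII) -> str:
    """Agent 4: fill PII into the template outside the LLM boundary."""
    return template.format(name=pii.name, order_id=pii.order_id)

def discriminator(rep_reply: str) -> bool:
    """Agent 5: judge whether the rep's reply signals success; this
    signal would feed back to refine the other agents over time."""
    return any(w in rep_reply.lower() for w in ("approved", "refund issued"))

def pipeline(utterance: str, pii: UserPII) -> str:
    """Chain agents 1-4 for a single conversational turn."""
    return placeholder_filler(
        response_generator(policy_matcher(intent_mapper(utterance))), pii
    )
```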
UX Research - Discovery
Customer Service: Seizing Opportunities in the Post-Purchase Phase
Through our primary and secondary research, we discovered that consumers' greatest frustrations arise during the post-purchase stage. Customer service interactions, in particular, were identified as the most frustrating aspect of their journey. This frustration stems from consumers often lacking knowledge of their rights and the power to hold businesses accountable. Additionally, businesses often lack incentives to genuinely uphold their end of the bargain and assist consumers after a purchase. This presents a significant opportunity for CR to step in and empower consumers during this critical phase.
Semi-structured Interviews
Who did we speak to?
To further explore the post-purchase problem space, we conducted semi-structured interviews with 23 participants, selected to represent diverse gender identities, age groups, and familiarity with Consumer Reports. Our interviews incorporated directed storytelling sessions and storyboards to gather rich, qualitative data.
23
Participants
~20%
CR Subscribers
47% vs. 53%
Male vs. Female
21 - 64
Age Range
Key Insight 1
Companies’ poor CX necessitates an agent capable of identifying contact channels, curating case information, and tracking issues on behalf of consumers.
Many companies' customer service processes are inefficient, creating obstacles for consumers trying to resolve issues. Our interviews revealed key pain points: difficulty finding contact information, frustration gathering case details from various sources, and a lack of transparency in service request progress. To address these issues, we propose an AI agent that identifies contact channels, curates case information, and tracks progress, streamlining the customer service experience.
You are prompted for ticket number, confirmation number, reservation number, flight number, all of those things live in different places.
- Male, 40s, Non-CR Reader (P13)
Key Insight 2
Consumers seek a strong advocate to assertively defend their rights.
Because there is an information asymmetry between consumers and companies, consumers seek an advocate—someone assertive and logical—to help them stand up for their rights. This underscores the need for a service that not only provides relevant information but also assists in building strong arguments. When company policies are presented clearly and in an actionable way, consumers are more confident and prepared to advocate for themselves.
I want 'a little Karen' in that situation that comes with a certain level of comfort and confrontation. I think that empowerment as a consumer sometimes gets lost.
- Female, 30s, CR Reader (P3)
Key Insight 3
A consumer's willingness to adopt an agent depends on their confidence that the agent can advocate as effectively as, or better than, they would themselves.
Our research found that consumers hesitate to use AI agents unless they trust them to be as persistent and effective as they would be themselves, especially in situations requiring emotions like anger or forceful persuasion. This highlights their desire for the agent to strongly advocate for them and reveals a lack of trust in AI to do so. It underscores the need to design an AI agent that competes with human capabilities and reassures users of its competence and effectiveness.
I wish I could argue for my money back. I don’t think an agent can persuade a person as effectively as I can.
- Female, 50s, CR Reader (P12)
Key Insight 4
It is essential to cultivate consumer trust in AI agents.
Building trust in AI agents requires overcoming several key hurdles. Our research found that consumers often rely on recommendations from trusted sources before using AI agents. Once engaged, user control is crucial—some prefer proactive agent outreach for low-risk tasks, while others want to initiate contact themselves. Clear confirmation that the AI understands their request is essential, as is maintaining a personable but not overly human-like tone. Transparency is key: consumers need to know the source of information and receive consistent updates, fostering confidence in the agent's effectiveness and building trust.
The trust builds each time the agent conveys to me that it understands my situations. So the more granularly you can do that, the more I will trust and engage.
- Female, 60s, CR Reader (P10)
Problem Reframing
The key insights from our guerrilla research and semi-structured interviews helped us reframe the problem space into the following statement:
How can we streamline consumers' post-purchase interactions with businesses, reducing cognitive load, emotional strain, and time investment while ensuring transparency and fairness in the marketplace?
Parallel Prototyping
Exploring Design Concepts Using the 'Inform, Guide, and Act' Framework
Combining our reframed problem statement, key findings from talking to consumers, as well as the IGA framework, we came up with three distinct design concepts for an agent that would inform, guide, and/or act on behalf of consumers.
The Negotiation Helper is a smart agent that leverages advanced technologies like Large Language Models (LLM). It enables real-time alerts based on feedback from customer service representatives and provides post-call tips to strengthen arguments and advocacy. It also centralizes businesses’ contact channels and call history for easy access.
Policy Assistant, a smartphone widget in the control center, would alert users to dark patterns or beneficial company policies, sift through emails and browser content on demand, and present relevant policy segments clearly. It would also draft emails and formulate arguments based on these policies.
Finally, CR Wallet would be an app-based smart agent assisting consumers throughout their purchase journey. It would use Augmented Reality (AR) technology to identify product issues, guide users through resolutions, alert them to relevant company policies for repairs or replacements, and submit and track requests on their behalf.
The Negotiation Helper
Policy Assistant
CR Wallet
Concept Validation
Evaluating the Riskiest Assumptions through Rapid Prototyping
Based on stakeholder feedback, we decided to move forward with the Negotiation Helper concept because it excited both stakeholders and consumers, and it served as the foundation for validating our idea. We then conducted three rounds of testing and improved the core features of the Negotiation Helper based on the feedback we received.
Validation Testing: Round 1
Evaluating Simultaneous Multi-Modal Interaction
In our first round of concept validation, we evaluated key features such as customer service ratings, community reviews, multi-modal interaction, and AI-generated tips. Using a Wizard of Oz testing protocol with five participants in a simulated customer service scenario, we found:
Information Overload: Simultaneous text messages during calls overwhelmed users, making it hard to process tips while on the call.
Pre-call Information: Users preferred receiving information and tips before starting the call.
Concise Presentation: Clear and concise information delivery was favored before engaging with customer service.
Validation Testing: Round 2
Assessing Users' Comfort with AI Acting on Their Behalf
Based on what we learned in the first round, we reconfigured the interactions and aimed to evaluate several concepts in this round: asynchronous multi-modal interaction (tips before a call), synchronous single-modal interaction (in-chat tips during live chat), building arguments with AI-generated tips, summaries from service reviews, and an AI agent acting on behalf of the user. Here’s what we learned:
Tips before Calls: 60% of users liked receiving tips before calls, while 40% were neutral. Some, like P2, were uncomfortable using policy info due to trust issues and fear of confrontation.
Tips during Live Chat: In-chat tips were highly favored, as users found the information easy to understand and act upon.
AI Acting on Behalf of Users: The “CR Wizard” concept was well-received, with 60% of users favoring it and an additional 20% feeling neutral. Some users expressed the need to customize their desired outcomes, such as choosing a refund over a replacement.
Validation Testing: Round 3
Perceptions of AI Errors
In our third round of concept testing, we focused on 1) how users tolerate AI mistakes and 2) the ideal interventions when errors occur. Our goal was to understand user reactions to AI mistakes and the best responses to maintain trust. Here’s what we learned:
No Tolerance for AI Errors: 70% of participants wanted the AI to handle calls independently, frustrated by the need to monitor calls. They felt it defeated the purpose, as they still had to pay attention and might as well do it themselves.
Discomfort with Listening in: Some participants felt uneasy listening to an AI interact with a human agent, with one describing it as “playing God” and preferring to only receive a report afterward.
Cognitive Load of a Three-way Conversation: Monitoring the AI, calling out errors, and interjecting caused high cognitive load, discomfort, and anxiety, emphasizing the need for the AI to operate independently to improve user experience.
Key Takeaways
Testing assumptions quickly through rapid iterations is the key to unlocking true innovation.
To address the feedback and pain points raised by users during three rounds of testing, we redesigned the concept to simplify the process. Users no longer need to monitor calls. Instead, they receive verification prompts before the call and can choose to receive updates during the call. Afterward, they are provided with a summary and transcript. Additionally, users will receive policy-related tips before and during their interactions with customer service agents and be directed to the best contact channel.
These prototypes and testing sessions have shown our team the value of rapid iteration. We often had assumptions about which features users would prefer, but testing revealed surprising results. By observing users interact with our prototypes, we uncovered unexpected challenges, which inspired new ideas and ultimately led to our final deliverables.