Friendli Agent

Improving AI Agent User Experience with Usability Testing

Friendli Agent simplifies the Gen AI model deployment process and reduces user friction by leveraging an interactive conversational UI that guides users through complex steps and enhances engagement.

TIMELINE

Dec 2024 - Present

ROLE

Sole Designer

(User Flows, Interaction Design, UI Design, Prototyping, Usability Testing)

WITH

4 Frontend engineers,

1 User researcher,

Product manager

ABOUT COMPANY

FriendliAI is a B2B company specializing in generative AI infrastructure solutions. Its mission is to empower organizations to harness the power of generative AI with ease and cost-efficiency.

PROJECT OVERVIEW

Designing the Flow: From Blank Canvas to an AI Agent POC

As the sole designer on this project, I was responsible for creating a seamless Gen AI model deployment flow that would guide first-time users to successfully deploy models on Friendli Suite. The project involved designing an agent-driven experience that simplifies a traditionally complex process while also showcasing the platform’s unique capabilities.

Taking on both UX and UI responsibilities, I approached the project by balancing user needs (clear guidance and low friction) with business goals (highlighting the platform’s capabilities). I worked closely with PMs and engineers, moving quickly through wireframing, prototyping, and building an AI agent-based Proof of Concept (POC).

01
Background

What is the purpose of this agent? Who needs it?

TARGET USER

New User Type: Users Seeking Model Deployment

With the rise of new user types entering our platform from external Gen AI model index pages, we identified a need to support these users by providing a seamless model deployment experience.

WHAT WAS NEEDED?

Seamless Model Deployment Page

Our new users want to select a model from external index pages and deploy it directly on our platform, Friendli Suite. The current flow did not address this need, so we needed to build a 'Gen AI Model Deployment Page' to support them.

BUSINESS GOAL

Showcase Friendli Suite’s Strengths & Increase Engagement

Our business goal was clear: to make the platform appealing to first-time users by demonstrating the simplicity of our deployment workflows and highlighting key features that set Friendli Suite apart.

  1. Demonstrate seamless Gen AI model deployment to highlight Friendli Suite’s ease of use.

  2. Increase first-time user engagement by showcasing key features early in the journey.

02
Research

What are traditional model deployment pages like?

COMPETITOR ANALYSIS

Traditional Gen AI Model Deployment Pages

We explored competitor services and found that most model deployment pages use a standard UI pattern:

  1. Left Panel:
    Detailed descriptions of the model.

  2. Right Corner:
    A “Deploy” button that triggers deployment.

  3. Post-click:
    Users are redirected to sign-up, account settings, and subscription pages.

UX ISSUES

Issues with the Traditional UI

While these standard UI patterns are straightforward, they don’t encourage engagement and often result in users abandoning the process during onboarding steps.

Complexity and cognitive overload

Users were overwhelmed by too many options and technical details displayed upfront.

High drop-off rates during onboarding

Competitor platforms often redirected users through multiple pages (sign-up, account settings, etc.), which discouraged them from completing the deployment process.

Lack of proactive support

Users weren’t prompted or guided through the deployment flow, leading to confusion and hesitation.

We realized that a traditional UI would not solve the problem. Instead, we aimed to create a more interactive and engaging experience.

03
Hypothesis: Why an Agent?

Benefits of the conversational UI

HYPOTHESIS

Why an Agent?

We moved fast to validate our hypothesis that an AI agent with a conversational UI would better meet both user and business needs compared to a traditional model deployment page. This hypothesis was rooted in solving specific user pain points we identified through competitor analysis and user research.

Benefits of conversational UI

User Engagement

The AI agent could guide users step-by-step through deployment, reducing cognitive load.

Content Delivery

It allowed us to deliver relevant content (like FAQs and feature showcases) dynamically within the deployment flow.

User-Friendly Experience

Conversational design made the deployment process feel simpler and more approachable to users.

Our hypothesis was that a guided, conversational UI would:

01

Reduce friction

for first-time users by simplifying the complex model deployment process.

02

Improve user engagement

by proactively delivering information through conversation.

03

Encourage action

by making the deployment flow feel more interactive.

04

Showcase Friendli Suite’s key features

early in the flow without overwhelming the user.

04
From Wireframe to POC

Validating a conversational UI in one week

DESIGN PROCESS

Agile and Iterative Process

We approached the project with an agile, iterative process, focusing on quickly testing the agent concept to see if it would indeed reduce friction and improve engagement. Within one week, we designed and developed a Proof of Concept (POC) AI agent deployment flow.

KEY FLOW HIGHLIGHTS

Streamlined deployment flow

The agent streamlined deployment by reducing unnecessary steps.

KEY FLOW HIGHLIGHTS

Real-Time Status Updates

The agent provided instant feedback on deployment progress.

KEY FLOW HIGHLIGHTS

Surprise Me Button

We added a “Surprise Me” button to spark user curiosity and showcase unique platform features: structured output and tool calling.

* Structured Output: Organized, consistent data formatting for clearer results.

* Tool Calling: Automated activation of specific tools to perform tasks efficiently.
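The two features above can be illustrated with a minimal sketch. Here, a hypothetical `handle_reply` helper stands in for the agent’s response handling: structured output means the model’s reply must parse as JSON in a fixed shape, and tool calling means a model-chosen tool name is dispatched to a local function. All names are illustrative, not the actual Friendli Suite API.

```python
import json

# Hypothetical tool registry (illustrative): the model picks a tool
# by name, and we run the matching local function (tool calling).
TOOLS = {
    "get_gpu_count": lambda: 4,
}

def handle_reply(raw_reply: str) -> dict:
    """Parse a model reply that follows a fixed JSON shape (structured output)."""
    reply = json.loads(raw_reply)          # fails loudly if the model broke the format
    if reply.get("type") == "tool_call":   # the model asked us to invoke a tool
        tool = TOOLS[reply["name"]]
        return {"type": "tool_result", "name": reply["name"], "result": tool()}
    return reply                           # plain structured answer

# A reply the model might produce when asked about available hardware:
print(handle_reply('{"type": "tool_call", "name": "get_gpu_count"}'))
```

Because the reply is structured rather than free-form text, the UI can render tool results and answers consistently instead of showing raw model output.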

05
Usability Testing

Did our hypothesis truly meet the user’s needs?

USABILITY TESTING

How we gathered user insights to refine the AI Agent UI

To ensure that our agent-driven deployment flow was meeting user needs, we conducted usability tests with 11 participants, focusing on both experienced AI engineers and first-time users to cover a range of expertise levels.

TESTING GOALS

  1. Identify pain points in the deployment flow.

  2. Understand how users interacted with the AI agent UI compared to traditional UIs.

  3. Gather qualitative feedback on overall user satisfaction, clarity, and ease of use.

PARTICIPANT PROFILE

70%: Highly experienced in AI/ML

Users with strong familiarity with generative AI and model deployment processes. 

30%: Less familiar with deployment

Users who were less experienced in deploying models but had general AI knowledge. 

TESTING PROCESS

  1. Experience Trial (~15 minutes)

  2. Group Interview (~15 minutes)

  3. Retro & Analysis (~30 minutes)

*We also recorded participants’ screens during the test sessions to observe their behavior in detail. 👉


RESULTS

Confusion Post-Deployment (50%)

  • Users felt uncertain after clicking the “Deploy Now” button.

  • The process felt too simple, leading to doubts about whether deployment had started.

RESULTS

Difficulty Monitoring Deployment Status (30%)

  • Users struggled to track the progress of the model deployment.

  • Some participants refreshed the page repeatedly to see if the deployment was complete.

RESULTS

Mixed Feedback on “Surprise Me” Button (90% clicked)

  • While most users clicked the “Surprise Me” button, they found the demo content underwhelming.

  • Users wanted more interactive examples that better demonstrated the platform’s unique capabilities.

06
Improvements

Iterating the AI agent UI based on user feedback

Usability Improvements and Design Enhancements

Following our usability testing, we identified several areas for improvement. We implemented targeted design changes to address these issues, making the agent experience clearer, more engaging, and more proactive.

IMPROVEMENT

Providing Pre-Deployment Information

We added pre-deployment context screens to clarify what happens when users deploy a model and what they can expect next.

BEFORE

AFTER

IMPROVEMENT

Real-Time Deployment Progress Bar

We introduced a fixed deployment status bar at the top of the page, paired with proactive AI agent updates to provide real-time progress feedback.
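The status-bar behavior can be sketched as a simple polling loop. `fetch_status` below is a stand-in for whatever endpoint reports deployment state (the real Friendli Suite API is not shown); each update would drive both the fixed progress bar and the agent’s proactive messages.

```python
import time

# Stand-in for the deployment-status endpoint (illustrative only):
# returns percent complete, advancing on each call.
_fake_progress = iter([10, 45, 80, 100])

def fetch_status() -> int:
    return next(_fake_progress)

def poll_deployment(interval: float = 0.0) -> list:
    """Poll until the deployment reports 100%, collecting each update
    so the UI (progress bar + agent message) can react to it."""
    updates = []
    while True:
        pct = fetch_status()
        updates.append(pct)       # drive the fixed progress bar here
        if pct >= 100:
            return updates
        time.sleep(interval)      # a real client would wait a few seconds

result = poll_deployment()
print(result)  # [10, 45, 80, 100]
```

Pushing updates to the user this way removes the need to refresh the page manually, which was one of the observed pain points.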

BEFORE

AFTER

IMPROVEMENT

Redesigning 'Surprise Me' Interactions

During the “Surprise Me” demo interaction, users were shown a JSON-formatted output after inputting their own data. However, many users misunderstood the source of the structured output, thinking it came directly from their input rather than being generated by the model.

We reorganized the flow to split the demo interaction into three distinct parts.
By splitting these phases, we made it clear that structured output is a result of the model’s processing, not simply a formatted version of the user’s input.

BEFORE

AFTER

IMPROVEMENT

UI and UX Writing Enhancements

Some users found the agent's language too formal or robotic and commented that the UI design felt outdated.

We refined the agent's tone of voice to create a more approachable and supportive experience. We also updated the visual design of the AI agent UI to make it feel more polished and aligned with the Friendli Suite brand.

BEFORE

AFTER

IMPROVEMENT

Micro-Interactions for Delight

Confetti effect 🎉🎊

07

Impact of the Improvements

IMPACT

Driving Clarity, Engagement, and User Confidence

After implementing these improvements, we ran another round of usability tests and saw significant improvements in user satisfaction and engagement.

08

Next Steps

Launching and Iterating with Real Users

Within the next month, the Model Deployment Page will be publicly released via a platform that receives 18.9 million visits per month.

We’re excited to see how real users interact with the deployment flow and look forward to making continuous improvements to deliver a frictionless, engaging experience.

More flows & pages coming soon… 👉

09

Final Takeaways

Quick Iterations with Real Feedback Are a Game-Changer

We kept things scrappy and fast with our POC approach — and it worked. Testing early with users helped us catch pain points we wouldn’t have seen on our own. By making small, targeted tweaks along the way, we turned a rough concept into something that felt polished and user-friendly. It’s a reminder that real feedback beats assumptions every time.

UI and UX Writing Can Make or Break Trust

People notice how something looks and sounds — it’s not just about function. Our AI agent felt too robotic at first, and the UI looked a little outdated, which hurt the overall experience. Once we cleaned up the visuals and made the language feel more human, users felt more comfortable and confident. It’s proof that good design and writing aren’t just nice-to-haves — they’re essential to building trust, especially for first-time users.