AwaDoc
Instant Healthcare via WhatsApp, Powered by AI.



Location
Africa
Industry
Healthcare
Services I provided
User Experience
Admin Dashboard
Website Design
Target Audience
General population, especially those who don’t want to download new apps and are more comfortable with WhatsApp.
Overview
About AwaDoc
AwaDoc is an AI-powered healthcare assistant delivered directly via WhatsApp, offering instant, personalized health guidance through symptom analysis and trusted next steps—no app download needed.
Background
Co-founded by Dr. Chinonso Fidelis Egemba (widely known as Aproko Doctor) and Jesse Benedict, AwaDoc was born from deep firsthand experience with Nigeria’s healthcare challenges. Dr. Egemba witnessed patients delaying care and suffering from misinformation, while Benedict brings both medical and business expertise to the table.
The Challenge
Nigeria faces a severe doctor shortage (roughly one doctor per 9,000 people), leading to long waits, high costs, and rushed consultations. As a result, many resort to self-diagnosis via Google, often encountering dangerous misinformation.
The Solution
To build an AI-powered healthcare assistant delivered directly via WhatsApp, offering instant, personalized health guidance through symptom analysis and trusted next steps—no app download needed.

Research & Insight
For AwaDoc, I focused my research on four practical methods that balanced speed with cultural context.
I began with lean surveys targeting everyday Nigerians and caregivers to capture how they naturally describe symptoms—often in colloquial English or Pidgin—and to understand their biggest anxieties around trust, privacy, and accuracy. These insights confirmed that clarity and reassurance were more valuable than speed alone.
Alongside this, I conducted a competitive analysis of health-advice platforms such as Ada Health, Wellvis, and 7cups. This helped me benchmark onboarding flows, escalation models, and tone of communication. It quickly became clear that many global solutions felt too text-heavy or overly clinical for Nigerian users, which shaped my decision to design a more lightweight, trust-first experience for both the website and WhatsApp chatbot.
Before automating the bot, I ran Wizard-of-Oz tests, manually simulating chatbot conversations with users. This allowed me to validate the effectiveness of clarifier questions, safety disclaimers, and escalation patterns in real time, while also observing where users got stuck or dropped off. The learning from these sessions was instrumental in refining conversational patterns before any code was written.
Finally, I ran usability testing on both Figma prototypes and early live flows. These sessions revealed how easily users could describe their symptoms, whether the trust messaging on the website was convincing enough to trigger action, and how efficiently clinicians could review conversations through the dashboard. As a result of these iterations, we saw measurable improvements in key outcomes: conversation completion rates rose from 54% to 72%, while website-to-WhatsApp conversions improved from 18% to 37%.



Results from the survey
Users didn’t just want an explanation of their symptoms—they wanted to know what to do immediately, what to monitor, and when to escalate to a professional. This insight became the foundation for the chatbot’s summary guidance cards: “Do now / Watch for / When to seek help.”
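To make the pattern concrete, here is a minimal sketch of how such a guidance card could be modeled and rendered as a WhatsApp-friendly message. The names (GuidanceCard, renderGuidanceCard) and the sample content are illustrative assumptions, not AwaDoc’s actual implementation or medical advice.

```ts
// Hypothetical shape of the three-bucket guidance card.
interface GuidanceCard {
  doNow: string[];        // immediate self-care steps
  watchFor: string[];     // symptoms to monitor
  seekHelpWhen: string[]; // triggers for escalating to a professional
}

// Render the card as plain text for WhatsApp, which supports *bold* markers.
function renderGuidanceCard(card: GuidanceCard): string {
  const section = (title: string, items: string[]) =>
    `*${title}*\n${items.map((i) => `• ${i}`).join("\n")}`;
  return [
    section("Do now", card.doNow),
    section("Watch for", card.watchFor),
    section("When to seek help", card.seekHelpWhen),
  ].join("\n\n");
}

// Example with placeholder content:
console.log(
  renderGuidanceCard({
    doNow: ["Rest and drink fluids", "Take your temperature"],
    watchFor: ["Fever above 38.5°C", "Symptoms lasting more than 3 days"],
    seekHelpWhen: ["Difficulty breathing", "Severe or worsening pain"],
  })
);
```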


UX Strategy & User Flows
Designing the user flow for AwaDoc meant balancing simplicity for patients with the necessary guardrails for safety. The journey begins on the website, which establishes trust through clear explanations of how the service works, strong privacy messaging, and one clear call-to-action: “Start on WhatsApp.”
Once inside WhatsApp, users are greeted with a short consent message before moving into the symptom intake stage, where they describe their concerns in their own words. Because our research showed that users often used colloquial or imprecise language, I designed clarifier questions and quick reply chips to guide them toward more accurate information without adding friction.
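For illustration, quick reply chips on WhatsApp map to interactive button messages. Below is a minimal sketch assuming the WhatsApp Cloud API; the phone number ID, access token, clarifier copy, and button IDs are all placeholders rather than AwaDoc’s production values.

```ts
// Sketch: sending a clarifier question with quick reply chips via the
// WhatsApp Cloud API (Node 18+, global fetch). Credentials are placeholders.
const PHONE_NUMBER_ID = "YOUR_PHONE_NUMBER_ID";
const ACCESS_TOKEN = "YOUR_ACCESS_TOKEN";

async function sendClarifier(to: string): Promise<void> {
  const res = await fetch(
    `https://graph.facebook.com/v19.0/${PHONE_NUMBER_ID}/messages`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${ACCESS_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        messaging_product: "whatsapp",
        to, // recipient WhatsApp ID, e.g. "2348000000000"
        type: "interactive",
        interactive: {
          type: "button",
          body: { text: "How long have you had this symptom?" },
          action: {
            // The Cloud API allows up to 3 reply buttons, titles of 20 chars max.
            buttons: [
              { type: "reply", reply: { id: "dur_today", title: "It started today" } },
              { type: "reply", reply: { id: "dur_days", title: "A few days" } },
              { type: "reply", reply: { id: "dur_week", title: "Over a week" } },
            ],
          },
        },
      }),
    }
  );
  if (!res.ok) throw new Error(`WhatsApp API error: ${res.status}`);
}
```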
From there, the flow diverges based on safety. If no urgent red-flags are detected, users receive a guidance card with three simple buckets: what to do now, what to watch out for, and when to seek further help. If a red-flag is detected, the flow immediately escalates by showing emergency numbers and routing the case into the admin dashboard for clinician oversight.
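The sketch below shows one way this rule-based branching could work. The keyword patterns are purely illustrative and far smaller than any real clinical rule set; the escalation and intake handlers are stubs standing in for the flows described above.

```ts
// Illustrative red-flag triage: deterministic rules run before any AI reply.
const RED_FLAG_PATTERNS: RegExp[] = [
  /chest pain/i,
  /can'?t breathe|difficulty breathing/i,
  /unconscious|fainted/i,
  /heavy bleeding/i,
];

function hasRedFlag(message: string): boolean {
  return RED_FLAG_PATTERNS.some((p) => p.test(message));
}

// Stub: would show emergency numbers and push the case to the clinician queue.
function escalate(): string {
  return "Please call your local emergency number now. A clinician has been notified.";
}

// Stub: would ask clarifier questions and eventually produce a guidance card.
function continueIntake(): string {
  return "Thanks. A few quick questions to understand this better…";
}

function routeMessage(message: string): string {
  return hasRedFlag(message) ? escalate() : continueIntake();
}
```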
For ongoing users, I introduced a return flow that recognizes them and offers to either continue from where they left off or start a new concern, ensuring continuity without confusion. On the back-end, the admin dashboard supports this experience with queues, session reviews, and content management tools to help clinicians oversee conversations efficiently.
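Here is a minimal sketch of that return flow, assuming sessions are keyed by the user’s WhatsApp ID; the in-memory store and 24-hour continuity window are illustrative assumptions, not AwaDoc’s actual infrastructure.

```ts
// Hypothetical session continuity check for returning users.
interface Session {
  waId: string;        // user's WhatsApp ID
  lastConcern: string; // short summary of the open concern
  updatedAt: number;   // epoch milliseconds
}

const SESSION_TTL_MS = 24 * 60 * 60 * 1000; // assumed 24h continuity window
const sessions = new Map<string, Session>(); // in-memory store for the sketch

function greet(waId: string): string {
  const s = sessions.get(waId);
  if (s && Date.now() - s.updatedAt < SESSION_TTL_MS) {
    return `Welcome back! Continue with "${s.lastConcern}", or start a new concern?`;
  }
  return "Hi! Tell me what’s bothering you today.";
}
```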
This structure allowed AwaDoc to feel intuitive and reassuring for patients while providing the operations team with the necessary visibility and control to ensure safety.

What we discovered after first launch
11,000 users on the waitlist, but we launched to the first 1,000
After launching AwaDoc to our first 1,000 users, we uncovered several critical insights that shaped the next phase of design and development. The most pressing challenge was that the AI occasionally hallucinated responses—generating advice that sounded plausible but wasn’t always safe or aligned with medical guidelines. This posed a risk in a healthcare context where trust and accuracy are paramount.
We also noticed that users often described symptoms in long, story-like messages. The AI struggled to parse these consistently, leading to gaps in intent recognition. In addition, some users dropped off mid-conversation when the chatbot responses felt too wordy or overly technical.
On the operational side, the admin team flagged that red-flag cases weren’t always surfaced quickly enough, making it harder for clinicians to prioritize urgent reviews. This highlighted the need for a clearer escalation flow and better dashboard filtering.
These discoveries led us to introduce three key improvements: a rule-based safety net to override the AI when hallucinations occurred, a clarifier chip pattern to simplify long inputs, and a redesigned red-flag queue in the dashboard for faster clinical oversight. Together, these changes significantly reduced risk, improved clarity, and kept user trust intact.
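Here is a minimal sketch of how such a rule-based safety net could sit around the AI. Every name in it (generateAiReply, the pattern lists, the templates) is an illustrative placeholder, not AwaDoc’s actual pipeline.

```ts
// Sketch: rule-based safety net wrapping the AI reply.
const EMERGENCY_TEMPLATE =
  "This may be urgent. Please call your local emergency number now.";
const FALLBACK_TEMPLATE =
  "I'm not confident enough to advise on this. A clinician will review your case.";

// Deterministic red-flag rules always run before the model.
const RED_FLAGS = [/chest pain/i, /difficulty breathing/i, /heavy bleeding/i];

// Post-check on the model output: block replies matching unsafe patterns
// (illustrative examples only).
const UNSAFE_REPLY_PATTERNS = [/no need to see a doctor/i, /double the dose/i];

// Stub standing in for the real LLM call.
async function generateAiReply(message: string): Promise<string> {
  return `Here is some general guidance about "${message}"…`;
}

async function safeReply(userMessage: string): Promise<string> {
  if (RED_FLAGS.some((p) => p.test(userMessage))) {
    return EMERGENCY_TEMPLATE; // rules override the AI entirely
  }
  const draft = await generateAiReply(userMessage);
  if (UNSAFE_REPLY_PATTERNS.some((p) => p.test(draft))) {
    return FALLBACK_TEMPLATE; // unsafe or hallucinated output caught post-hoc
  }
  return draft;
}
```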









Beta launch
During its beta launch, AwaDoc onboarded approximately 11,000 active users and became available in 20+ countries. Users across multiple African countries sent a high volume of messages, reflecting strong early engagement, and interactions with Noura indicated a substantial number of conversational exchanges happening through the WhatsApp interface.


Learnings
Working on AwaDoc revealed the importance of designing AI-driven healthcare products with a balance of trust, accuracy, and usability. One of the biggest lessons was that AI alone is not enough—safeguards like rule-based fallbacks and human-in-the-loop systems are critical in high-stakes environments like healthcare.
I also learned how valuable it is to start small and test early. Launching to the first 1,000 users uncovered issues we couldn’t have predicted in theory, such as hallucinations in responses, user confusion with medical jargon, and drop-offs caused by overly long chatbot interactions. These insights only became clear once real people interacted with the system.
Another key learning was around the admin experience. While we initially focused heavily on the chatbot, the clinicians and administrators highlighted gaps in triaging and surfacing urgent cases. This reminded me that a product’s success depends not just on end-users, but also on the support systems that keep it safe and scalable.
Finally, this project reinforced the value of iterative design and cross-functional collaboration. From surveys and usability tests to Wizard of Oz experiments, each cycle of feedback helped us refine the experience and build more trust with users.