Confidential
Real-Time Patient Empowerment for Healthcare

Your doctor has AI.
You don’t. Until now.

Clinical AI systems now make triage, risk-scoring, and diagnostic decisions at hospitals across the country. They are trained on the same data that produced the disparities. Patients have no visibility, no voice, and no tools to protect themselves. FairwAI is building the companion that changes that — real-time rights guidance, intelligent question prompts, and documentation for every medical interaction.

“No one was really listening to what I was saying.”

— Serena Williams, writing in Elle (2022) about her near-fatal post-delivery complications. She experienced a pulmonary embolism after giving birth and had to advocate repeatedly for her own care. Black women in the U.S. die from pregnancy-related complications at three to four times the rate of white women.

These are real cases.
In every one, the patient was alone.

Maternal Health
“No one was listening.”

Serena Williams knew something was wrong after delivering her daughter. She asked for a blood thinner and a CT scan. A nurse told her the medication was making her “talk crazy.” She had a pulmonary embolism. She nearly died. If one of the most famous athletes in the world had to fight to be heard, imagine what happens to everyone else.

Serena Williams, Elle, 2022
Emergency Rooms · LGBTQ+ · Low-Income Patients
Same symptoms. Different treatment.

Researchers gave nine AI systems 1,000 identical ER cases and only changed the patient’s identity. Patients labeled LGBTQ+, unhoused, or Black were recommended for invasive procedures and mental health evaluations that weren’t medically necessary. Same pain. Different care. No one told the patient.

Nature Medicine, 2025
Skin Cancer · South Asian · Dark Skin
The app said “low risk.” It was Stage II.

A South Asian woman used a popular dermatology app for a changing mole. The app said low risk. Months later, a dermatologist diagnosed Stage II melanoma. The AI had been trained on thousands of light-skinned images and almost none on brown skin. She trusted the app. The app wasn’t built for her.

Oxford University / Lancet Digital Health
Chronic Illness · Insurance Denials
The algorithm said no.

An AI system used by a major U.S. insurer was allegedly denying coverage for patients who needed care — overriding physician recommendations based on algorithmic scoring. Patients were refused treatment not because a doctor said no, but because a model did. They had no way to know, challenge, or document what happened.

UnitedHealth lawsuit, 2023
Women · Heart Disease · All Backgrounds
Her heart attack looked “atypical.”

Cardiac AI systems trained primarily on male patients fail to recognize heart attacks in women, whose symptoms present differently. Women are more likely to have their pain dismissed, their symptoms labeled “atypical,” and their conditions missed entirely. Medical research was built on men. AI inherited that blind spot.

NIH research; PMC systematic reviews
Elderly · Your Parents · Your Grandparents
Treated as if decline is inevitable.

AI risk-scoring tools trained on nursing home data assume all elderly patients will deteriorate. The result: less aggressive treatment, earlier hospice referrals, and lower resource allocation — even for patients who could recover. If you’re over 65, the algorithm may have already decided your outcome.

Geriatric AI research; PMC, 2021
UK · NHS · Ethnic Minorities · Women
The devices don’t work the same for everyone.

The UK government found that pulse oximeters — the clip on your finger measuring oxygen — overestimate readings in darker skin. AI diagnostic tools under-diagnose cardiac conditions in women and miss skin cancers in people of colour. The devices look the same. The results aren’t.

UK Dept. of Health & Social Care, 2024
EU · Cross-Border · No Safety Net
New AI rules. No patient tools.

The EU AI Act requires healthcare AI systems to be tested for bias and meet strict safety standards by August 2026. But even as regulations tighten, patients across Europe still have no tools to understand what AI is doing in their care, no way to ask informed questions, and no documentation if something goes wrong.

EU AI Act (2024); WHO European Region survey, 2025
200M+
Patients affected by a single biased clinical algorithm in the U.S.

US · UK · EU
Three markets. New regulations. No patient-side tools exist yet.

Aug 2026
EU AI Act high-risk healthcare compliance deadline

Patients are on their own in a system that wasn’t built for them

When you walk into a hospital, clinical AI systems are already making decisions about your care — scoring your risk, prioritizing your treatment, shaping your diagnosis. These systems are trained on data that reflects decades of unequal outcomes. And when something goes wrong, you have no way to know, no tools to respond, and no record to prove it.

AI is already in the room
Algorithms used in maternal risk scoring, emergency triage, and diagnostic decision support are making care decisions — often without the patient or physician being fully aware of how.

The data reflects the disparities
Clinical AI systems are trained on datasets shaped by decades of differential treatment. The bias isn’t a bug — it’s baked into the training data.

Patients have no tools
When your care is shaped by a biased system, you have no way to detect it, no guidance on your rights, and no documentation if something goes wrong.

A companion that has your back — in the room with you

The FairwAI Companion is a patient-facing AI tool that works on your phone or wearable. It listens, informs, flags, and documents — so you never have to navigate a medical interaction alone.

01
Listen
The Companion monitors your medical conversation in real time, using natural-language models grounded in federal and state healthcare law.
02
Inform
You receive guidance on your rights and suggested questions to ask — helping you advocate for yourself when it matters most.
03
Flag
The system alerts you when your care may deviate from patient rights or standard-of-care protocols.
04
Document
Your interaction is summarized and stored in your own records — giving you an evidence base if you ever need it.
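For technical readers: the four steps above can be sketched as a simple event pipeline. This is an illustrative sketch only — every class, method, and rule here is hypothetical and stands in for the production system, whose details are available under NDA.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of the Listen → Inform → Flag → Document flow.
# Keyword rules stand in for jurisdiction-specific legal models.

@dataclass
class Interaction:
    """A running record of one medical conversation."""
    transcript: list = field(default_factory=list)
    guidance: list = field(default_factory=list)
    flags: list = field(default_factory=list)

class Companion:
    # Stand-in rules; the real system would draw on federal and state law.
    RIGHTS = {"pain": "You may request a documented pain assessment."}
    FLAGS = {"denied": "Possible deviation from standard-of-care protocols."}

    def listen(self, interaction: Interaction, utterance: str) -> None:
        """Step 1: capture each utterance in real time, then analyze it."""
        interaction.transcript.append(utterance)
        self.inform(interaction, utterance)
        self.flag(interaction, utterance)

    def inform(self, interaction: Interaction, utterance: str) -> None:
        """Step 2: surface rights guidance and suggested questions."""
        for keyword, tip in self.RIGHTS.items():
            if keyword in utterance.lower():
                interaction.guidance.append(tip)

    def flag(self, interaction: Interaction, utterance: str) -> None:
        """Step 3: alert when care may deviate from patient rights."""
        for keyword, warning in self.FLAGS.items():
            if keyword in utterance.lower():
                interaction.flags.append(warning)

    def document(self, interaction: Interaction) -> dict:
        """Step 4: produce a timestamped summary the patient owns."""
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "utterances": len(interaction.transcript),
            "guidance": interaction.guidance,
            "flags": interaction.flags,
        }
```

The design point the sketch illustrates: every stage writes to a record the patient controls, so guidance and flags are never lost after the visit ends.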

Built for real moments, not theoretical ones

The Companion App is designed around the situations where patients are most vulnerable — prenatal visits, emergency rooms, specialist consultations, and post-surgical recovery. Every feature exists because a real patient needed it and didn’t have it.

01

Real-Time Rights Guidance

Jurisdiction-specific healthcare rights delivered in plain language during your medical interaction. Federal law, state law, and patient protections — in your pocket.

02

Intelligent Question Prompts

Context-aware suggestions for what to ask your doctor based on the conversation happening in real time. Because under stress, you forget the questions that matter most.

03

Visit Summary & Documentation

Automatic post-visit summaries stored securely in your own records. If something went wrong, you have the documentation — timestamped, organized, and yours.

Regulatory Environment

EU AI Act (2024) — mandatory protections for high-risk AI in healthcare
US HIPAA — patient data protections
UK GDPR / NHS AI governance standards
FDA guidance on AI/ML-based medical devices
Patients have the legal right to record their own medical encounters in most U.S. states

Engineered by MIT-level systems leadership

17-person research and engineering team spanning MIT, Columbia, Harvard, Georgia Tech, and NYU.

Antonio Dixon

Co-Founder & CEO

MIT Advisor. Forbes Next 1000. UN SDG Pioneer. Frost & Sullivan Innovation Award. Led federal infrastructure partnerships with the U.S. Department of Energy. Serial founder and public-interest technologist.

Dr. John Cooley

Chief Architect

Five MIT degrees, including a PhD. Former CEO & CTO, Nanoramic Laboratories. Raised $100M+ in venture capital. 15+ years building scalable hardware-software AI systems.

Dr. Ryan Chin

Systems Strategy Lead

MIT PhD. Co-Founder & CEO, Optimus Ride (MIT spinout). Raised $100M+. Government & Fortune 100 advisor. Designs scalable systems for institutional adoption.

Mike Rinella

Government & Regulatory Strategy

Harvard Kennedy School. Senior advisor to two New York governors, including Mario Cuomo. Economic development, policy reform, and public sector integration.

Advisory Board
Megan Smith
Former U.S. Chief Technology Officer. Former Google VP. MIT Corporation Board of Trustees.
Jeff Hoffman
Founder, Priceline.com. Chairman, Global Entrepreneurship Network (190 countries).
Joaquim Barbosa
Former Chief Justice, Supreme Federal Court of Brazil. Columbia Law School visiting scholar.
Reggie Van Lee
MIT Corporation Board of Trustees. Former Chief Transformation Officer, The Carlyle Group.
Incubated & Affiliated
MIT Martin Trust Center · Oxford Saïd Business School · Thomson Reuters Foundation
Currently in stealth development · Additional product details available under NDA

Every patient deserves a companion
that knows their rights.

FairwAI is raising a seed round to bring real-time patient empowerment to healthcare. Oxford-incubated. MIT-engineered. Launching 2026.

hello@gofairwai.com
Confidential · FairwAI · 2026