23 Comments
Bill Bledsoe

Great start and all very valid reasons for ChatGPT to focus on health. I have two big questions:

1. Will all of my conversations with ChatGPT Health be HIPAA-compliant, meaning my personal information will not be used in any way other than patient/provider communication?

2. When will this connect to my medical records housed within Epic or Oracle (Cerner), so that I can start to have that "holistic" picture that you describe?

Bryan Vartabedian

HIPAA. This is the big question. You should not consider anything that you put into ChatGPT as secure in that regard. Hopefully this is something that OpenAI fixes.

Paul Hebert

This was also a major concern for me. They are not a trustworthy company: I filed a DSAR back in May, and they have yet to provide me with any of my data.

Paul Hebert

Curious if you can go into more detail on the differences in training/tuning between the Health model and the public release model. We all know the public model is flawed and hallucinates more often than not, so I'm very curious how someone could trust their health to the tool. Also, didn't you just say, a month or so back, that ChatGPT should not be used for medical advice?

Michael Spencer

But can I trust OpenAI with any of my sensitive and private personal data including healthcare data? Judging from history, I cannot.

Nanjuan Shi

One thing I learned while handling my father's cancer treatment was that we all have to be our own healthcare advocates. Despite their best intentions and efforts, doctors are extremely busy; they can only give you suggestions. The ultimate decision, like whether to wait or have the surgery now, eventually came down to me personally. That was back in 2021, and there was no ChatGPT. I spent night after night reading through online documents to figure out the right decision. I am glad I made the right call to have the surgery immediately, but it was surely an exhausting and stressful time.

Being more on the "controlling" side, I always want to do my own research to make sure a medical treatment is correct for my parents and my family. But I am not medically trained, which means reading tons of medical sites and "training" myself to be knowledgeable enough to make a decision. Years ago I wouldn't have trusted this to any AI bot. But over the last two years, I have found myself turning to ChatGPT for health answers 100% of the time. Sometimes I use it to gauge how urgent a situation is, and whether I need to take a parent to the ER and wait four hours there. Sometimes I use it to go through blood test results to see whether there is a need to follow up with a specialist when the PCP does not give a definitive recommendation. It is as if I "outsource" all my R&D to ChatGPT now, and every time its answer is right on the mark.

I get people's concerns about privacy, accuracy, etc. But we are already doing all this self-taught research to make critical medical decisions anyway. ChatGPT is actually much more knowledgeable than I am: it has read more medical papers and had more training than I possibly could. Especially with all the context data, my confidence in ChatGPT beats my confidence in myself.

This is where the future is going. It will be even more beneficial to users who don't have the luxury of spending hours doing research or finding second opinions.

Fernando Tenorio

Hi Fidji,

Someone I loved deeply died from a cancer that was detected far too late. It wasn’t identified until the 13th doctor visit, after months of misdiagnoses. The signs were there, but fragmented across visits, tests, and opinions. In the end, we trusted that the system—and the specialists—would connect the dots. There was no AI to help us make sense of it all.

Recently, a close family member went through a disturbingly similar path: more than ten doctors, multiple misdiagnoses, and months of uncertainty. This time, we caught it. She’s now in remission and under the care of excellent physicians. But I can’t overstate how critical ChatGPT was for us along the way—helping interpret results, understand patterns, ask better questions, and raise concerns we might otherwise have missed.

I genuinely believe AI can be part of the answer to broken healthcare systems. I’m in Mexico, and we face many of the same challenges—especially around early detection. I understand the risks, and I agree they must be taken seriously. AI is not perfect, and it should never replace medical specialists. But it can—and should—augment them.

If the outcome is better understanding, earlier detection, and in some cases saving lives, I believe the risk is worth engaging with responsibly. It may not be a perfect tool, but it might be the best one we have right now.

If I could go back in time and use a tool like this to make sense of my loved one’s clinical history, maybe—just maybe—she’d still be here. I would take that chance every single time.

This is very important work you’re doing. I wish you and your team all the best!

Timothy Sullivan

This is a great example of AI improving patient safety.

I'm an infectious diseases physician and I think there is a lot of room for AI to help improve antibiotic prescribing. Of course this should be on the prescriber side and not the patient's responsibility, but it's great that you were able to use ChatGPT and your own health data to advocate for yourself.

Alexandre

Excellent points. One practical extension: turn ChatGPT Health data into a scannable QR code that any doctor can access with patient permission. Walk into urgent care or see a new specialist, they scan it, and instantly have your complete medical history, recent bloodwork, medication interactions, and AI-flagged risk factors. Would solve the patient-doctor handoff problem and make the data immediately actionable in clinical settings, especially for people like you managing complex chronic conditions across multiple specialists.
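
A minimal sketch of what this could look like, assuming a hypothetical share-link service: rather than encoding the record itself, the QR code carries a short-lived, patient-minted access URL. The `qrcode` package is real; the endpoint and token scheme below are illustrative, not an existing OpenAI or EHR API.

```python
# Sketch: encode a short-lived, patient-authorized share link in a QR code.
# Requires `pip install qrcode[pil]`. The endpoint and token scheme are
# hypothetical; a real system would store the token -> (patient, scopes,
# expiry) mapping server-side and support revocation on demand.
import secrets
from datetime import datetime, timedelta, timezone

import qrcode

SHARE_ENDPOINT = "https://health.example.com/share"  # hypothetical service


def make_share_qr(patient_id: str, ttl_minutes: int = 15) -> str:
    """Mint a one-time token and render a QR code a clinician can scan."""
    token = secrets.token_urlsafe(16)  # unguessable, single-use token
    expires = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
    url = f"{SHARE_ENDPOINT}/{token}"
    # The QR image would be shown on the patient's phone at check-in.
    qrcode.make(url).save(f"share-{patient_id}.png")
    print(f"Link valid until {expires:%H:%M} UTC")
    return url


make_share_qr("patient-123")
```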

Bernardo Campos

Hey, the waitlist link is a 404. Not sure if it changed?

Paul LaPosta

Read this with a knot in my stomach.

Full disclosure: my day job is healthcare cloud operations. This is a highly regulated space for a reason, and most consumer GenAI stacks are not built for protected health information as a default posture. Operationally, this would keep me awake at night.

The risk is not just “wrong answer.” It is data lifecycle and incentives. What gets retained, who can access it, how it is used for tuning, what is logged, what is discoverable, what is breach-reportable, and whether patients ever meaningfully consented to any of it. In healthcare, “trust us” is not a control.
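
To make that concrete, here is a toy sketch of those lifecycle questions expressed as an explicit, checkable policy object rather than an implicit promise. Every field name here is hypothetical; the point is only that each question should have a machine-auditable answer.

```python
# Toy policy-as-code sketch: each data-lifecycle question from above
# becomes an explicit, auditable field. All names are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class PHIPolicy:
    retention_days: int            # what gets retained, and for how long
    access_roles: tuple[str, ...]  # who can access it
    used_for_tuning: bool          # is it fed back into model training?
    access_logged: bool            # is every read/write logged?
    breach_reportable: bool        # does exposure trigger notification?
    consent_recorded: bool         # did the patient meaningfully consent?


CONSERVATIVE_DEFAULT = PHIPolicy(
    retention_days=30,
    access_roles=("patient", "treating-clinician"),
    used_for_tuning=False,
    access_logged=True,
    breach_reportable=True,
    consent_recorded=True,
)
```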

My org is also experimenting with agentic workflows, and the moment a system can invoke tools and make material changes, the threshold is delegation. That is why I wrote DAS-1 as an open spec for a bare minimum control set when an agent can act: https://github.com/forgedculture/das-1
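
The spec itself is at the link above; purely as an illustration of the general idea (an explicit, scoped, expiring grant that is checked and logged before any tool call), a gate might look like the sketch below. None of these names are taken from DAS-1.

```python
# Illustrative delegation gate: an agent may invoke a tool only under a
# matching, unexpired grant, and every decision is logged. The field
# names are made up for this sketch, not drawn from the DAS-1 spec.
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("delegation-gate")


@dataclass
class Grant:
    principal: str      # who delegated (e.g., the patient)
    agent: str          # which agent received the authority
    scope: str          # which tool/action the grant covers
    expires: datetime   # grants must be time-bounded


def authorize(grants: list[Grant], agent: str, scope: str) -> bool:
    """Allow the tool call only if a matching, unexpired grant exists."""
    now = datetime.now(timezone.utc)
    for g in grants:
        if g.agent == agent and g.scope == scope and g.expires > now:
            log.info("ALLOW %s -> %s (granted by %s)", agent, scope, g.principal)
            return True
    log.warning("DENY %s -> %s (no matching grant)", agent, scope)
    return False
```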

I do not think this is a “ban it” conversation. I think it is a “design it like a regulated system” conversation. Because when you watch companies race ahead of safeguards, you are watching in real time why HIPAA exists in the first place: limit exposure, force accountability, and make misuse expensive.

Artifacts are cheap. Judgment is scarce.

Alessio

"ChatGPT Health is not available yet in your location."

:(

Rahul Bahri

Thoughtful piece on AI in healthcare. You're right about the system's flaws: fragmentation, burnout, and reactive care are real problems, and AI as an assistant holds genuine promise. Your personal story powerfully shows its value as a "second set of eyes" for data checks.

However, the kidney stone example also hints at why AI must remain an assistant, not the primary. Your care possibly involved a physical exam, imaging interpretation, and surgical planning: stages requiring human touch, judgment, and accountability. AI has no hands for a procedure (not delving into robotic stereotactic procedures, which may be an alternative for a fortunate few but are not a mass solution yet), no experience for a risk-benefit call, and cannot be held liable for a "hallucination."

The goal shouldn't be to remove the human from the loop, but to use AI to augment the human in the loop. The ideal model is Doctor + AI, where AI handles administrative burdens and data synthesis, freeing clinicians to do what only they can: exercise nuanced judgment, provide empathetic care, and bear ultimate responsibility. Let's make the best of what has been built, in a judicious fashion, and not rush to rip it all out. I still think there is not, and will never be, a replacement for the human brain.

The vision of AI empowering patients and doctors is correct. The path to get there is by building specialized, validated tools that support, not replace, the irreplaceable human core of medicine.

Emil R

We need a few health sections: one for ourselves, and others for parents or kids.

Celia Quillian

I once had an experience like yours: in a doctor's office I was nearly administered a medication for an ear infection that I would have had a severe allergic reaction to. Thankfully I thought to ask the doctor if it contained the ingredient that my record clearly said I was allergic to. To be clear, I blame the system, not the doctor pressed for too little time.

In my book ("AI for Life") I devote a whole section to use cases for ChatGPT in improving health and wellness. This new feature takes it all to the next level (and the privacy element is a massive added bonus!)

Is something HIPAA-compliant coming for physicians soon as well? To help them take notes, review patient records, act as a thought partner, etc.?

Peter Pragnakar Atreides

I agree with you, and I believe there is significant potential to further advance the role of AI in healthcare. I have been trying to connect with the appropriate person within your organization to discuss this in more detail and explore possible next steps. Any guidance or redirection would be greatly appreciated.

🎈Noemi from ME TIME 🎈

Maybe it is redundant to say that this application should be used with caution: it is probably better for physical ailments than for psychological advice, and should always be used in combination with professional advice and lots and lots of common sense! OpenAI has an open case for its involvement in the suicide of a 16-year-old. Also, ChatGPT once convinced me I had parathyroid cancer when I had a calcium deficiency… While you can draw benefits, the opposite is also possible!
