45 Comments
Bill Bledsoe's avatar

Great start and all very valid reasons for ChatGPT to focus on health. I have two big questions:

1. Will all of my conversations with ChatGPT Health be HIPAA-compliant (meaning my personal information will not be used in any way other than patient/provider communication)?

2. When will this connect to my medical records housed within Epic or Oracle (Cerner), so that I can start to have that "holistic" picture you describe?

Bryan Vartabedian's avatar

HIPAA. This is the big question. You should not consider anything that you put into ChatGPT as secure in that regard. Hopefully this is something that OpenAI fixes.

Paul Hebert's avatar

This was also a major concern for me. They are not a trustworthy company. I filed a DSAR (data subject access request) back in May, and they have yet to provide me with any of my data.

Fernando Tenorio's avatar

Hi Fidji,

Someone I loved deeply died from a cancer that was detected far too late. It wasn’t identified until the 13th doctor visit, after months of misdiagnoses. The signs were there, but fragmented across visits, tests, and opinions. In the end, we trusted that the system—and the specialists—would connect the dots. There was no AI to help us make sense of it all.

Recently, a close family member went through a disturbingly similar path: more than ten doctors, multiple misdiagnoses, and months of uncertainty. This time, we caught it. She’s now in remission and under the care of excellent physicians. But I can’t overstate how critical ChatGPT was for us along the way—helping interpret results, understand patterns, ask better questions, and raise concerns we might otherwise have missed.

I genuinely believe AI can be part of the answer to broken healthcare systems. I’m in Mexico, and we face many of the same challenges—especially around early detection. I understand the risks, and I agree they must be taken seriously. AI is not perfect, and it should never replace medical specialists. But it can—and should—augment them.

If the outcome is better understanding, earlier detection, and in some cases saving lives, I believe the risk is worth engaging with responsibly. It may not be a perfect tool, but it might be the best one we have right now.

If I could go back in time and use a tool like this to make sense of my loved one’s clinical history, maybe—just maybe—she’d still be here. I would take that chance every single time.

This is very important work you’re doing. I wish you and your team all the best!

Hilary Rowland's avatar

I was misdiagnosed for a year and a half when I had cancer. I wish I'd had AI back then. Now I use it instead of doctors because human doctors don't listen to women.

John P's avatar

In some ways, these tools nudge us to be our own health data custodians - like a personal health record, with actionable insights.

Paul Hebert's avatar

Curious if you can go into more detail on the differences in training/tuning between the Health model and the public release model. We all know the public model is flawed and hallucinates more often than not, so I'm very curious how someone could trust their health to the tool. Also, didn't you come out a month or so back and say that ChatGPT should not be used for medical advice?

John P's avatar

I imagine this may have had more input from health professionals (e.g., for labeling and more)?

I just wish there were an easier way to flag critical errors when directly testing ChatGPT in these contexts.

Paul Hebert's avatar

10000%. We know the enterprise models are trained/tuned for specific use cases. My concern is that they don't disclose this, so the 90% of their user base who are non-technical won't know that and will think the consumer version of ChatGPT can do the same.

Transparency and accountability need to be paramount.

John P's avatar

Good point - I am not sure how much is rolled into Plus, and/or the differences in the offerings. I still need an opportunity to tinker with ChatGPT Health.

Paul LaPosta's avatar

One major problem with our medical data is the inherent bias it already has. Even if we can get past the privacy issues, this won't be equitable, bias-free medical care for all. I'm not sure how we account for that in any medical model.

Nanjuan Shi's avatar

One thing I learned while handling my father's cancer treatment was that we all have to be our own healthcare advocates. Despite their best intentions and efforts, doctors are extremely busy. They can only give you suggestions. The ultimate decision, like whether to wait or have the surgery now, eventually comes down to me personally. That was back in 2021. There was no ChatGPT. I spent night after night reading through online docs to figure out what the right decision was. I am glad I made the right call to have the surgery immediately, but it was surely an exhausting and stressful time.

Being more on the "controlling" side, I always want to do my research to make sure the medical treatment is correct for my parents and my family. But I am not medically trained. This means reading tons of medical sites and "training" myself to be knowledgeable enough to make a decision. Years ago I wouldn't have trusted this to any AI bot. Over the last two years, however, I've realized I'm turning to ChatGPT for health answers 100% of the time. Sometimes I used it to gauge how urgent a situation was and whether I needed to take a parent to the ER and wait four hours there. Sometimes to go through blood test results and see if there was a need to follow up with a specialist when the PCP did not give a definitive suggestion. It is like I "outsource" all my R&D to ChatGPT now. And every time, ChatGPT's answer is right on the mark.

I get people's concerns about privacy, accuracy, etc. However, we are already doing all this self-taught research to make critical medical decisions anyway. ChatGPT is actually much more knowledgeable than I am. It has read more medical papers and has had more post-training than I possibly could. Especially with all the context data, my confidence in ChatGPT beats my confidence in myself.

This is where the future is going. It will be even more beneficial to users who don't have the luxury of spending hours doing research or finding second opinions.

John P's avatar

Or pay for traditional concierge medicine.

Would be nice if these technologies improved access, on both sides.

Alexandre's avatar

Excellent points. One practical extension: turn ChatGPT Health data into a scannable QR code that any doctor can access with patient permission. Walk into urgent care or see a new specialist, they scan it, and instantly have your complete medical history, recent bloodwork, medication interactions, and AI-flagged risk factors. Would solve the patient-doctor handoff problem and make the data immediately actionable in clinical settings, especially for people like you managing complex chronic conditions across multiple specialists.
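To make the handoff concrete, here is a minimal sketch of the idea, assuming a hypothetical, patient-approved share URL; the endpoint and token below are illustrative only, not a real OpenAI or EHR API:

```python
# Sketch of Alexandre's QR-code handoff idea. Everything here is
# hypothetical: in practice, the record holder would mint a short-lived,
# patient-approved share link after explicit consent.
import qrcode

# Illustrative one-time share link (not a real service or token).
share_url = "https://example-health.invalid/share/abc123?token=one-time-token"

img = qrcode.make(share_url)       # render the link as a QR code image
img.save("health_record_qr.png")   # a clinician scans this to open the record
print("QR code written to health_record_qr.png")
```

Encoding a revocable link rather than the record itself would keep the QR payload small and let the patient withdraw access after the visit.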

John P's avatar

Interesting idea - would this be better than using the share-chat feature in ChatGPT?

Michael Spencer's avatar

But can I trust OpenAI with any of my sensitive and private personal data including healthcare data? Judging from history, I cannot.

CD's avatar

Valid concern. If you get sick enough that your life falls apart and you have nothing, you’re desperate enough not to care.

Paul LaPosta's avatar

Read this with a knot in my stomach.

Full disclosure: my day job is healthcare cloud operations. This is a highly regulated space for a reason, and most consumer GenAI stacks are not built for protected health information as a default posture. Operationally, this would keep me awake at night.

The risk is not just “wrong answer.” It is data lifecycle and incentives. What gets retained, who can access it, how it is used for tuning, what is logged, what is discoverable, what is breach-reportable, and whether patients ever meaningfully consented to any of it. In healthcare, “trust us” is not a control.

My org is also experimenting with agentic workflows, and the moment a system can invoke tools and make material changes, the threshold is delegation. That is why I wrote DAS-1 as an open spec for a bare minimum control set when an agent can act: https://github.com/forgedculture/das-1
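For a flavor of what a bare-minimum delegation gate looks like, here is a generic sketch: tools run only under explicitly granted scopes, everything else is denied by default, and every decision is audit-logged. This illustrates the general idea, not the DAS-1 spec itself (see the repo for that), and every name in it is hypothetical:

```python
# Generic "deny by default" gate for agent tool calls. Hypothetical names;
# not the DAS-1 spec, just the shape of a minimum control set.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Only scopes the user has explicitly delegated.
GRANTED_SCOPES = {"records:read"}

def invoke_tool(tool_name: str, required_scope: str, action, *args):
    """Run an agent tool only if its scope was explicitly delegated."""
    stamp = datetime.now(timezone.utc).isoformat()
    if required_scope not in GRANTED_SCOPES:
        # Deny and leave an audit trail; misuse should be visible.
        log.warning("%s DENIED %s (missing scope %s)", stamp, tool_name, required_scope)
        raise PermissionError(f"{tool_name} requires undelegated scope {required_scope}")
    log.info("%s ALLOWED %s under scope %s", stamp, tool_name, required_scope)
    return action(*args)

# A read passes; a material change is refused until the scope is granted.
invoke_tool("fetch_labs", "records:read", lambda: "ok")
# invoke_tool("cancel_appointment", "records:write", lambda: "ok")  # raises PermissionError
```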

I do not think this is a “ban it” conversation. I think it is a “design it like a regulated system” conversation. Because when you watch companies race ahead of safeguards, you are watching in real time why HIPAA exists in the first place: limit exposure, force accountability, and make misuse expensive.

Artifacts are cheap. Judgment is scarce.

Timothy Sullivan's avatar

This is a great example of AI improving patient safety.

I'm an infectious diseases physician and I think there is a lot of room for AI to help improve antibiotic prescribing. Of course this should be on the prescriber side and not the patient's responsibility, but it's great that you were able to use ChatGPT and your own health data to advocate for yourself.

Brett Strouss's avatar

Fidji - good article and nice job covering the needs. I have used several LLMs for medical investigation, interpretation, and diagnosis. Personally, I've found GPT-5.2 a little overreactive to possible problems, without offering less serious alternative diagnoses. From her comment here, it sounds like @lawyerpreneur had a similar issue. I do have several success stories, and I'll share one.

An old friend came to visit in July 2025. A triathlete, he had been competing at a high level for around twenty years, but I had not heard of him in any competitions in recent years. When I asked, he shared his story of traveling to a (name withheld) country in South America for an Ironman-distance triathlon in April 2019 and finishing in the top three in his age group. Then, within several months, he was barely able to run two miles without stopping. One of his friends who competed there was also struck with this mysterious malady, and neither could find a cause or a solution. Both had the financial resources to pursue specialists that insurance might discourage. Everyone was blaming it on COVID, but neither had gotten sick nor tested positive for COVID.

They went to great lengths to try to fix the problem. My friend had heart surgery (PFO closure in July 2019) to repair a possible heart issue, which did not fix the problem. His friend bought a hyperbaric oxygen chamber to attempt to fix it. Didn't work.

But I started typing into Claude when he shared the arrogance of the doctor at a top clinic who told him, "I see a hundred guys like you, and after two days of running all of these tests, we find nothing wrong. You'll just have to face it that you're just getting old and need to train harder." Claude blew me away with a number of possibilities, and when it prompted me to ask whether there was anything else he and his friend might have done differently than the thousands of others, he said they went to a natural hot spring and that they had both put their heads under water (his doctors also knew about this). When I added that, I swear Claude got excited, and it reported a strong possibility of a non-tuberculous mycobacterium (NTM) infection, particularly MAC (Mycobacterium avium complex), which is common in South America, rare in the US, slow-acting, and hard to diagnose.

Because I doubted that his doctor(s) would believe it, I had Claude provide a letter with supporting information, research and journal references, diagnostic tests to verify it, and a treatment protocol (strong antibiotics and other drugs for 18 to 24 months). It literally took months for him to work his way through the medical establishment to finally get referred to a top university hospital's infectious disease specialist, who has TWENTY patients being treated for this very thing right now. He is still awaiting the results of the culture from mucus from his lungs, which takes six weeks to grow. The doctor says it's very likely he has this, since the source, the onset, the symptoms, and the preliminary tests all point to it. Given that it has taken over five years to find this, it's unlikely he will regain more than 80% of his original lung capacity.

Biological Honesty's avatar

Fidji, this is a vital perspective. As we integrate AI into healthcare, we have to ensure we aren't just automating a 'broken' system.

I've been researching a framework called Biological Honesty, which treats health as a systems audit. The real power of AI isn't just in making doctors faster; it's in its ability to help us listen to the underlying 'logical signals' of the body that we've traditionally ignored. We need to audit the biological architecture itself, not just use technology to patch the symptoms. Thank you for pushing this conversation forward.

Sophie Lemieux's avatar

I'm very conflicted about this one. I like the potential uses; at the same time, we're dealing with a US (for-profit) entity, a huge amount of sensitive information, and a sycophantic model. It feels like a potential recipe for complete disaster for some people.

Hilary Rowland's avatar

Yes, like how do we make sure insurance companies can't get the data?

Brett Strouss's avatar

If you have heard podcaster and "human biologist" Gary Brecka (his podcast is The Ultimate Human), insurance companies already have more data than we know. His job was to predict a life insurance applicant's lifespan down to the month. He left the industry and started a company that analyzes one's genetic makeup and recommends treatments and supplements to regain one's health.

🎈Noemi from ME TIME 🎈's avatar

Maybe it is redundant to say that this application should be used with caution: it is probably better for physical ailments than for psychological advice, and it should always be used in combination with professional advice and lots and lots of common sense! OpenAI has an open case over its alleged involvement in the suicide of a 16-year-old. Also, ChatGPT convinced me I had parathyroid cancer when I had a calcium deficiency... While you can draw benefits, the other way around is also possible!

Bernardo Campos's avatar

Hey, the waitlist link is returning a 404; not sure if it changed?

CD's avatar

This was a fantastic podcast! I've been using AI to help me understand and manage POTS, ME/CFS, chronic migraine, Hashimoto's, EDS, and MCAS. It's amazing, and definitely a relief after years of spending over six figures chasing answers and coming to the conclusion that providers are unable to help. So many people do not realize they could be one infection away from having their life crumble before their eyes. Getting sick made me feel like I'd lost all the credibility I'd built in my life, because people don't understand how debilitating it can be and think it's all in your head. I miss working. I'd be interested in understanding how you were able to get well enough to get back to work.

Heather Hausenblas, PhD's avatar

Fidji - I've been impressed by what ChatGPT Health can do, and I want my university students to learn how this platform can be used to understand and communicate health. For an upcoming assignment in my Evidence-Based Health course, I'd like to have my students read your ChatGPT Health posts. As part of the assignment, they would write a comment and respond to another comment. This would give them experience engaging critically and respectfully with evidence-based content, while also contributing meaningful discussion to your post.

Would you be okay with me moving forward with this? Also, do you have any restrictions on commenting or replying that might prevent my students from reading and participating as free users?

Best, Heather