Joining OpenAI at 10
Today OpenAI turns 10. When I joined four months ago, I knew I was stepping into a special company with a lot of history, culture, and impact. One of the things I’ve always tried to do is find the magic in people and in teams, and shine a light on it, so I’ve spent a lot of time trying to understand what makes this place what it is. I thought I’d give you a peek into what I discovered here.
When OpenAI got started, there was never a plan to build a consumer or enterprise business, yet three years after the launch of ChatGPT, we’re the fastest-scaling company in history, with 800 million people using our products every week and 1 million business customers building on top of them. What’s struck me most, though, is how deeply the company has stayed anchored to its focus on research, safety, and a long-term mission of ensuring that AGI benefits everyone. A few reflections on why this place is so different from anywhere I’ve seen before:
Everything starts with research
OpenAI began as a research lab, and a decade later it’s more core to who we are than ever. The culture is deeply thoughtful, curious, and principled. People have the autonomy to explore big questions, and ideas are refined through open debate. That’s a big part of why the magic here still starts in the lab.
For example, Sora started as a way to understand physics so AI could be more effective in the physical world. When we first saw how Sora 2 could create beautiful videos with synced audio, the idea for the Sora app and character features clicked into place. Deep research is another great example. Advances in reasoning and the ability to plan and execute multi-step tasks led to an agent that can turn hours of work into minutes. And when we applied this research approach specifically to shopping, it allowed us to launch our personal shopping agent.
In most tech companies, product and engineering drive most of the innovation. Here, we have incredible product and engineering talent setting a high bar, but they’re doing it on top of a research foundation that’s constantly propelling the technology forward and creating new possibilities for the product. As a product girl, I feel like a kid in a candy shop playing with the new models! One product manager on the team put it well: PMing at OpenAI is like getting a new superpower unlocked every few months, and our job is to equip all of humanity with this superpower.
Direction comes from the bottom up
Given the bottom-up nature of research, and the spikiness of breakthroughs, OpenAI doesn’t follow a typical product roadmap cycle. Other tech companies often lead with well-laid-out, top-down plans, but here we’ve excelled at creating an environment where researchers can pursue new lines of thinking that might completely reshape our direction. That ambiguity can be a challenge (I used to love planning!), but it’s also what keeps us at the leading edge.
Across the field, we’ve seen the biggest breakthroughs in AI begin as passion projects of a single researcher or two — from the GPT series itself, to neural scaling laws, to major image generation techniques like variational autoencoders and diffusion models. For that reason, we continue to give researchers room to follow their interests. When results show promise, small groups start to refine the technical details that make a research project truly successful. Collaboration scales from there.
Because of this, everyone at OpenAI internalizes that what’s true today may not be true tomorrow, so we can’t afford to get stuck on a certain idea or approach. I’ve worked in very fast-paced environments before, but OpenAI is another level entirely.
Safety has always been part of our DNA
Exploration can quickly lead us in new directions, but safety is a constant. From the very beginning, OpenAI understood that as AI becomes more powerful, there will be significant risks to mitigate, just as there will be immense benefits to amplify. This is true of any technology, but that recognition led us to embed safety in our work much earlier in our lifespan than at other companies.
Some of our earliest research related to AI alignment and how to get models to behave in ways that reflect people’s intent. That work is now an entire research discipline with multiple teams dedicated to identifying risks before they materialize and keeping AI aligned with humanity’s interests. For example, as we train AI models that can help make scientific breakthroughs and cure diseases, we also need to protect against biological risks. We’re working proactively to build in safeguards and partner with global experts to inform our work and strengthen society’s biological defenses more broadly.
Of course, no one can anticipate every edge case up front, which is why we’ve adopted a principle of iterative deployment: introducing capabilities in stages, learning quickly from real use, and strengthening safeguards as the technology evolves so we can protect people and give society time to adapt.
At other companies, safety is the last thing you check before launch, a guardrail on your way out the door, often managed by a separate team acting as a gatekeeper. Here, safety is everyone’s job: instead of tension between two sides, every team feels ownership over the safety of their technology. Safety is also respected as an area of innovation in its own right, as with our Safety Reasoner tool and the confessions technique we just published. We invest deeply in safety across the company, through the dedicated work of our Safety Systems team and within every other team.
We follow our conviction, even when it’s counter-consensus (maybe especially when it is!)
Ten years ago, AGI was seen as science fiction, not worth serious research, but we built a lab to work on it anyway. We pushed on scale when many believed models would quickly hit hard limits. We focused on making systems broadly useful long before there was a consensus that general-purpose AI was practical. Again and again, we leaned into ideas that others rejected.
That history shapes our mindset today. We expect progress to be nonlinear and for ideas to look wrong before they’re proven right. We value rigorous debate, but we don’t wait for consensus to emerge before we make progress. As long as we’re anchored in our mission, we’re comfortable making long-term bets and building toward capabilities before use cases.
Inside the company, people often talk about “feeling the AGI.” We have a deep belief that AGI is not only possible, but worth building, because intelligence has been the primary driver of every major leap in human progress, from science and medicine to creativity, education, and economic growth. By making intelligence more abundant than ever, we can lower the barriers to the discovery and creation that move the world forward. This can sound lofty from the outside, but understanding this drive is essential to understanding our culture.
Products aren’t the goal in themselves
What our mission means in practice is that successful products are not the endpoints at OpenAI. They matter deeply, but they are understood as steps along the journey to achieving AGI. (In fact, we often debate whether a product, even if widely successful, would be a distraction from our mission.)
This is a stark contrast with the rest of tech, and it’s been one of the biggest surprises for me. Here, the point is never just to ship something people love, as important as that is. The deeper question is always whether a product helps us learn and build toward safe and beneficial AGI for everyone. It’s a high bar, but it puts things into sharp focus. It’s why we optimize our products toward helping people achieve their goals, rather than time spent or other engagement metrics. It’s why we’re being extremely thoughtful about creating a business model that serves the mission and maintains people’s trust.
At the same time, there is a deep recognition that reaching AGI and superintelligence only matters if it can be deployed into the world in useful and safe ways. That’s why the product side of the house has grown so quickly and accomplished so much in so little time. Now my role is to create the best product company in the world, enabled by the best research lab in the world, without ever risking what makes that lab special. That spark needs protecting at all costs.
—
What a privilege. I feel incredibly lucky to be part of OpenAI at this moment and to work alongside so many brilliant, caring people to help carry that magic into the next decade.
