Gladstone AI’s mission is to protect and empower humanity through the era of advanced AI, by ensuring that future AI systems are developed and used safely. We enable organizations to make the best and fastest possible decisions in AI policy, strategy, procurement, and risk management.
Like many of the world’s top AI labs, we believe that very powerful AI systems may be developed in the coming years. And, like those labs, we believe that if sufficient safeguards aren’t developed in time, the results could be catastrophic. Unfortunately, today AI labs are locked in a race to develop increasingly powerful systems, and there is immense pressure on each lab to move as fast as possible at the expense of investing in safety.
We’re proud of what we’ve pulled off so far. In the last year alone: we’ve trained hundreds of DoD staff (hundreds of them senior executives, generals, and admirals) to understand AI and AI risk; built tools that support the U.S. government’s ability to acquire safe AI technology; personally briefed multiple Cabinet officials in the U.S. and allied nations; collaborated closely with the world’s top contingency planners and operators on risk mitigation strategies; and made other contributions that can’t be disclosed in a public setting.
We advance our mission through three lines of effort:
Even though Gladstone is a for-profit company, we’ve raised virtually no outside capital. This ensures that our interests stay aligned with those of our government customers, allowing us to deliver on our mission free of any outside influence from VCs, large nonprofit donors, or other special interests.
Building a business while executing an urgent policy mission is challenging. Among other things, it imposes a cost on organizational focus. But we’ve learned from experience that the benefits of maintaining our independence vastly outweigh such costs. Now that our business has achieved escape velocity, we not only have complete freedom to chart our own course, but are also in a position to have a historic impact on government AI policy.
If you’re drawn by our mission and independent mandate — and if you’d like to join a team that eats impossible problems for breakfast — Gladstone might just be where you belong.
If that interests you, check out our open positions.
Who are you?
Gladstone has three cofounders. Mark Beall is the former head of AI policy at the U.S. Department of Defense. Edouard Harris is an AI safety researcher, full stack software engineer, and YC founder. And Jeremie Harris is a bestselling author, podcaster, angel investor, and YC founder. Edouard and Jeremie are brothers and have cofounded several companies together. All three of us have been family friends for more than a decade.
How much have you raised?
We’re funded with a single $50k angel check. We’ve raised no institutional capital, either from VCs or nonprofit donors. We don’t intend to raise any in the future.
While our team has experience raising money from some of the world’s top investors, we’ve found that outside funding can too often distort a company’s incentives in the long run. While VCs can accelerate growth, their support comes at the cost of a revenue focus that risks detracting from our mission impact. And while nonprofit donors can be mission-aligned, their funding is often tied to explicit or implicit conditions, and can lead to unhealthy dependencies and conflicts of interest.
On the other hand, growing through sales preserves our independence and creates a healthy pressure for us to deliver value that's aligned with our mission. Our first year’s revenue is already in the multiple millions of dollars, not counting even bigger contracts we’ve signed for the next two years. Our core team is 4 people, our war chest is large and growing, and we’re default alive. By revenue, Gladstone is a Series A company.
Why did you start this company?
All three founders worked on this problem for two years before finally starting Gladstone in late 2022. When GPT-3 came out in 2020, we realized what AI scaling meant, and made a few bets. First, we bet that AI scaling would continue. Second, because scaling would continue, we bet that frontier AI labs would eventually become locked in a scaling race that none of them could individually escape. And finally, because the scaling race would destroy each frontier lab’s individual agency, we bet that governments would soon get involved, and we positioned ourselves accordingly. Three years later, these bets are finally paying off.
Today, we’re placing our next round of bets. If you’d like to be part of that process, reach out or check out our open positions.