Welcome to Internal Tech Emails: internal tech industry emails that surface in public records. 🔍
Sam Altman emails Elon Musk
On May 25, 2015, at 9:10 PM, Sam Altman wrote:
Been thinking a lot about whether it's possible to stop humanity from developing AI.
I think the answer is almost definitely not.
If it's going to happen anyway, it seems like it would be good for someone other than Google to do it first.
Any thoughts on whether it would be good for YC to start a Manhattan Project for AI? My sense is we could get many of the top ~50 to work on it, and we could structure it so that the tech belongs to the world via some sort of nonprofit but the people working on it get startup-like compensation if it works. Obviously we'd comply with/aggressively support all regulation.
Sam
From: Elon Musk
To: Sam Altman
Subject: Re: question
Date: Monday, May 25, 2015 11:09:22 PM
Probably worth a conversation
[This document is from Elon Musk, et al. v. Samuel Altman, et al. (2024).]
Previously: Sam Altman emails Elon Musk: “The mission would be to create the first general AI and use it for individual empowerment—ie, the distributed version of the future that seems the safest” (June 24, 2015)
Previously: Microsoft CTO: "Thoughts on OpenAI" (June 12, 2019)
A note from @TechEmails
Every year, I track hundreds of court cases and review more than 10,000 filings to bring you Internal Tech Emails. If you like @TechEmails and would like to help make this work more sustainable, consider upgrading to a paid subscription.
You’ll be supporting the research that drives Internal Tech Emails and helping ensure that it can continue publishing. You’ll also receive access to the full archive of internal tech emails, with 200+ documents from Apple, Google, Meta, Tesla, and more.
Thank you all so much for reading over the past three years!
-Internal Tech Emails
Sent from my iPad
Ilya Sutskever emails Elon Musk and Sam Altman
On Sep 20, 2017, at 2:08 PM, Ilya Sutskever wrote:
Elon, Sam,
This process has been the highest stakes conversation that Greg and I have ever participated in, and if the project succeeds, it'll turn out to have been the highest stakes conversation the world has seen. It's also been a deeply personal conversation for all of us.
Yesterday while we were considering making our final commitment given the non-solicit agreement, we realized we'd made a mistake. We have several important concerns that we haven't raised with either of you. We didn't raise them because we were afraid to: we were afraid of harming the relationship, having you think less of us, or losing you as partners.
There is some chance that our concerns will prove to be unresolvable. We really hope it's not the case, but we know we will fail for sure if we don't all discuss them now. And we have hope that we can work through them and all continue working together.
Elon:
We really want to work with you. We believe that if we join forces, our chance of success in the mission is the greatest. Our upside is the highest. There is no doubt about that. Our desire to work with you is so great that we are happy to give up on the equity, personal control, make ourselves easily firable — whatever it takes to work with you.
But we realized that we were careless in our thinking about the implications of control for the world. Because it seemed so hubristic, we have not been seriously considering the implications of success.
The current structure provides you with a path where you end up with unilateral absolute control over the AGI. You stated that you don't want to control the final AGI, but during this negotiation, you've shown to us that absolute control is extremely important to you.
As an example, you said that you needed to be CEO of the new company so that everyone will know that you are the one who is in charge, even though you also stated that you hate being CEO and would much rather not be CEO.
Thus, we are concerned that as the company makes genuine progress towards AGI, you will choose to retain your absolute control of the company despite current intent to the contrary. We disagree with your statement that our ability to leave is our greatest power, because once the company is actually on track to AGI, the company will be much more important than any individual.
The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So are we. So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.
We have a few smaller concerns, but we think it's useful to mention them here:
In the event we decide to buy Cerebras, my strong sense is that it'll be done through Tesla. But why do it this way if we could also do it from within OpenAI? Specifically, the concern is that Tesla has a duty to shareholders to maximize shareholder return, which is not aligned with OpenAI's mission. So the overall result may not end up being optimal for OpenAI.
We believe that OpenAI the non-profit was successful because both you and Sam were in it. Sam acted as a genuine counterbalance to you, which has been extremely fruitful. Greg and I, at least so far, are much worse at being a counterbalance to you. We feel this is evidenced even by this negotiation, where we were ready to sweep the long-term AGI control questions under the rug while Sam stood his ground.
Sam:
When Greg and I are stuck, you've always had an answer that turned out to be deep and correct. You've been thinking about the ways forward on this problem extremely deeply and thoroughly. Greg and I understand technical execution, but we don't know how structure decisions will play out over the next month, year, or five years.
But we haven't been able to fully trust your judgements throughout this process, because we don't understand your cost function.
We don't understand why the CEO title is so important to you. Your stated reasons have changed, and it's hard to really understand what's driving it.
Is AGI truly your primary motivation? How does it connect to your political goals? How has your thought process changed over time?
Greg and Ilya:
We had our fair share of failings during this negotiation, and we'll list some of them here (Elon and Sam, I'm sure you'll have plenty to add...):
During this negotiation, we realized that we have allowed the idea of financial return 2-3 years down the line to drive our decisions. This is why we didn't push on the control — we thought that our equity was good enough, so why worry? But this attitude is wrong, just like the attitude of AI experts who don't think that AI safety is an issue because they don't really believe that they'll build AGI.
We did not speak our full truth during the negotiation. We have our excuses, but it was damaging to the process, and we may lose both Sam and Elon as a result.
There's enough baggage here that we think it's very important for us to meet and talk it out. Our collaboration will not succeed if we don't. Can all four of us meet today? If all of us say the truth, and resolve the issues, the company that we'll create will be much more likely to withstand the very strong forces it'll experience.
- Greg & Ilya
From: Elon Musk
Subject: Re: Honest Thoughts
Date: Wednesday, September 20, 2017 2:17:03 PM
Guys, I've had enough. This is the final straw.
Either go do something on your own or continue with OpenAI as a nonprofit. I will no longer fund OpenAI until you have made a firm commitment to stay, or I'm just being a fool who is essentially providing free funding for you to create a startup.
Discussions are over.
On Wed, Sep 20, 2017 at 3:08 PM Elon Musk wrote:
To be clear, this is not an ultimatum to accept what was discussed before. That is no longer on the table.
From: Sam Altman
Subject: Re: Honest Thoughts
Date: Thursday, September 21, 2017 9:17:57 AM
i remain enthusiastic about the non-profit structure!
[This document is from Elon Musk, et al. v. Samuel Altman, et al. (2024).]
Elon Musk on Twitter’s finances
On Mon, May 15, 2023 at 8:34 AM Elon Musk wrote:
Please inform the agencies and resellers that I will be calling their CEOs immediately if they do not pay their bills. When something like this happens, I expect you personally to be the first to call those who owe us money, not your “team”.
Let’s make sure that we are paying what we owe them, if anything. They may be refusing to pay us, because we are late paying them. That’s happened several times.
The GPU purchases are necessary, but should be funded via [REDACTED] in xAI and then xAI will be responsible for paying for them. Note, Twitter will benefit greatly from the work that xAI will be doing as we need to apply ML to the Twitter recommendation algorithm and search.
The headcount reduction you describe below is absurdly small. We are overstaffed by at least 500 people. At the same time, we do need to hire people who are exceptionally capable and hard-working.
[This document is from Schobinger v. Twitter (2024).]