It is fairly rare to see an ex-employee put a positive spin on their work experience.
I don't think this makes OpenAI special. It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
Look at it this way: the flip side of "incredibly bottoms-up" from this article is that there are people who feel rudderless because there is no roadmap or a thing carved out for them to own. Similarly, the flip side of "strong bias to action" and "changes direction on a dime" is that everything is chaotic and there's no consistent vision from the executives.
This cracked me up a bit, though: "As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things. It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!
I would never post any criticism of an employer in public. It can only harm my own career (just as being positive can only help it).
Given how vengeful Altman can reportedly be, this goes double for OpenAI. This guy even says they scour social media!
Whether subconsciously or not, one purpose of this post is probably to help this guy’s own personal network along; to try and put his weirdly short 14-month stint in the best possible light. I think it all makes him look like a mark, which is desirable for employers, so I guess it is working.
Way to go: leave the boring chores of the first months to the partner and join in once the little one starts to be more fun after a year. With all that cash, I'm sure they could buy a bunch of help for the partner too.
I don't know, when I became a parent I was in for the full ride, not to have someone else raising her. Yes, raising includes changing diapers and all that.
There is parenting, and then there is good parenting. Most people don't have this option due to finances, but those who do and still avoid it, picking up just the easy and nice parts - I don't have much sympathy or respect for them.
Then later they even have the balls to complain how kids these days are unruly, never acknowledging massive gaps in their own care.
Plus it certainly helps the kid with bonding, emotional stability and keeps the parent more in touch emotionally with their own kid(s).
That's even more of a reason not to badmouth other billionaires/billion-dollar companies. Billionaires and billion-dollar companies work together all the time. It's not a massive pool. There is a reason beef between companies, top-level execs, and billionaires is all rumors and tea-talk until a lawsuit drops out of nowhere.
You think every billionaire is gonna be unhinged like Musk calling the president a pedo on twitter?
I am not trying to advance wild hypotheticals, but something about his behavior does not quite feel right to me. Someone who has enough money for multiple lifetimes, working like he's possessed, to launch a product minimally different than those at dozens of other companies, and leaving his wife with all the childcare, then leaving after 14 months and insisting he was not burnt out but without a clear next step, not even, "I want to enjoy raising my child".
His experience at OpenAI feels overly positive and saccharine, with a few shockingly naive comments that others have noted. I think there is an obvious incentive. One possibility is that he is burnt out but does not want to admit it. Another is that he is looking to the future: keeping options open for funding and connections if (when) he chooses to found again. He might be lonely and just want others in his life. Or want to feel like he's working on something that "matters" in some way that his other company didn't.
I don't know at all what he's actually thinking. But the idea that he is resistant to incentives just because he has had a successful exit seems untrue. I know people who are as rich as he is, and they are not much different than me.
Calvin just worked like this when I was at Segment. He picked what he worked on and worked really intensely at it. People most often burn out because of the lack of agency, not hours worked.
Also, keep in mind that people aren't the same. What seems hard to you might be easy to others, vice versa.
This reflection seems very unlikely to be authentic because it is full of superlatives and not a single bad thing (or at least not great) is mentioned. Real organizations made of real humans simply are not like this.
The fact that several commenters know the author personally goes some way to explain why the entire comment section seems to have missed the utterly unbalanced nature of the article.
> There's no Bond villain at the helm. It's good people rationalizing things.
I worked for a few years at a company that made software for casinos, and this was absolutely not the case there. Casinos absolutely have fully shameless villains at the helm.
Interesting. A year ago I joined one of the larger online sportsbook/casinos. In terms of talent, employees are all over the map (both good and bad). But I have yet to meet a villain. Everyone here is doing the best they can.
That may very well be the case. But I think this is a distinct category of evil; the second one, in which you'll find most of the cigarette and gambling businesses, is that of evil caused by indifference.
"Yes, I agree there are some downsides to our product and there are some people suffering because of that - but no one is forcing them to buy from us, they're people with agency and free will, they can act as adults and choose not to buy. Now what is this talk about feedback loops and systemic effects? It's confusing, go away."
This category is where you'll also find most of the advertising business.
The self-righteous may be the source of the greatest evil by magnitude, but day-to-day, the indifferents make it up in volume.
There is a reason there was cult-like behaviour on X amongst the employees supporting bringing Sam back as CEO when he was kicked out by the OpenAI board of directors at the time.
"OpenAI is nothing without it's people"
All of "AGI" (which actually was the lamborghinis, penthouses, villas and mansions for the employees) was all on the line and on hold if that equity went to 0 or would be denied selling their equity if they openly criticized OpenAI after they left.
Yes, and the reason for that is that employees at OpenAI believed (reasonably) that they were cruising for Google-scale windfall payouts from their equity over a relatively short time horizon, and that Altman and Brockman leaving OpenAI and landing at a well-funded competitor, coupled with OpenAI corporate management that publicly opposed commercialization of their technology, would torpedo those payouts.
I'd have sounded cult-like too under those conditions (but I also don't believe AGI is a thing, so would not have a countervailing cult belief system to weigh against that behavior).
I also believe that AGI is not a thing, but for different reasons. I notice that almost everybody seems to implicitly assume, without justification, that humans are a GI (general intelligence). I think it's easy to see that if we are not a GI, then we can't see what we're missing, so it will feel like we might be GI when we're really not. People also don't seem interested in justifying why humans would be GI but other animals with 99% of the same DNA aren't.
My main reason for thinking general intelligence is not a thing is similar to how Turing completeness is not a thing. You can conceptualize a Turing machine, but you can't actually build one for real. I think actual general intelligence would require an infinite brain.
If we were to believe the embodiment theory of intelligence (it's far from the only one out there, but very influential and convincing), this means that building an AGI is an equivalent problem to building an artificial human. Not a puppet, not a mock, not "sorta human", but a real, fully embodied human, down to the gut bacterial biome, because according to the embodiment theory, this affects intelligence too.
In this formulation, it’s pretty much as impossible as time travel, really.
Sure, if we redefine "AGI" to mean "literally cloning a human biologically", then AGI suddenly is a very different problem (mainly one of ethics, since creating human clones, educating, brainwashing, and forcing them to respond to chat messages ala chatGPT has a couple ethical issues along the way).
I don't see how claiming that intelligence is multi-faceted makes AGI (the A is 'artificial' remember) impossible.
Even if _human_ intelligence requires eating yogurt for your gut biome, that doesn't preclude an artificial copy that's good enough.
Like, a dog is very intelligent, a dog can fetch and shake hands because of years of breeding, training, and maybe from having a certain gut biome. Boston Dynamics did not have to understand a single cell of the dog's stomach lining in order to make dog-robots perfectly capable of fetching and shaking hands.
I get that you're saying "yes, we've fully mapped the neurons of a fruit fly and can accurately simulate and predict how a fruit fly's brain's neurons will activate, and can create statistical analysis of fruit-fly behavior that lets us accurately predict their action for much cheaper even without the brain scan, but human brains are unique in a way where it is impossible to make any sort of simulation or prediction or facsimile that is 'good enough' because you also need to first take some bacteria from one of peter thiel's blood boys and shove it in the computer, and if we don't then we can't even begin to make a facsimile of intelligence". I just don't buy it.
“AGI” isn’t a thing and never will be. It fails even really basic scrutiny. The objective function of a human being is to keep its biological body alive and reproduce. There is no such similar objective on which a ML algorithm can be trained. It’s frankly a stupid idea propagated by people with no meaningful connection to the field and no idea what the fuck they’re talking about.
The "Silenced No More Act" (SB 331), effective January 1, 2022, in California, where OpenAI is based, limits non-disparagement clauses and retribution by employers, likely making that illegal in California, but I am not a lawyer.
OpenAI never enforced this, removed it, and admitted it was a big mistake. I work at OpenAI and I'm disappointed it happened but am glad they fixed it. It's no longer hanging over anyone's head, so it's probably inaccurate to suggest that Calvin's post is positive because he's trying to protect his equity from being taken. (though of course you could argue that everyone is biased to be positive about companies they own equity in, generally)
The tender offer limitations are still in place, last I heard.
Sure, maybe OA can no longer cancel your vested equity for $0... but how valuable is (non-dividend-paying) equity you can't sell? (How do you even borrow against it, say?)
(It would be a pretty fake solution if equity cancellation was halted, but equity could still be frozen. Cancelled and frozen are de facto identical until the first dividend payment, which could take decades.)
Here's what I think - while Altman was busy trying to convince the public that AGI was coming in the next two weeks, with vague tales that were equally ominous and utopian, he (and his fellow leaders) were extremely busy trying hard to turn OpenAI into a product company with some killer offerings, and from the article, it seems they were rather good and successful at that.
Considering the high stakes, money, and undoubtedly the egos involved, the writer might have acquired a few bruises along the way, or might have lost out on some political infighting (remember how they mentioned they built multiple Codex prototypes; it must've sucked to see some other people's version chosen instead of your own).
Another possible explanation is that the writer had simply had enough - enough money to last a lifetime, just started a family, made his mark on the world, and was no longer compelled (or able) to keep up with methed-up fresh college grads.
> remember how they mentioned they built multiple Codex prototypes, it must've sucked to see some other people's version chosen instead of your own
Well it depends on people’s mindset. It’s like doing a hackathon and not winning.
Most people still leave inspired by what they have seen other people building, and can’t wait to do it again.
…but of course not everybody likes to go to hackathons
> OpenAI is perhaps the most frighteningly ambitious org I've ever seen.
That kind of ambition feels like the result of Bill Gates pushing Altman to the limit and Altman rising to the challenge. The famous "Gates demo" during the GPT‑2 days comes to mind.
Having said that, the entire article reads more like a puff piece than an honest reflection.
> everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions!
The operative word is “trying”. You can “try” to do the right thing but find yourself restricted by various constraints. If an employee actually did the right thing (e.g. publish the weights of all their models, or shed light on how they were trained and on what), they get fired. If the CEO or similarly high-ranking exec actually did the right thing, the company would lose out on profits. So, rationalization is all they can do. “I'm trying to do the right thing, but.” “People don't see the big picture because they're not CEOs and don't understand the constraints.”
There is lots of rationalizing going on in his article.
> I returned early from my paternity leave to help participate in the Codex launch.
10 years from now, the significance of having participated in that launch will be ridiculously small (unless you tell yourself that it was a pivotal moment of your life, even if it objectively wasn't), whereas those first weeks with your newborn will never come back. Kudos to your partner though.
Allow me to propose a different rationalization: "yes I know X might damage some people/society, but it was not me who decided, and I get lots of money to do it, which someone else would do if not me."
I don't think people who work on products that spy on people, create addiction or worse are as naïve as you portrayed them.
> It is fairly rare to see an ex-employee put a positive spin on their work experience.
The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.
I was at a company that turned into the most toxic place I had ever worked due to a CEO who decided to randomly get involved with projects, yell at people, and even fire some people on the spot.
Yet a lot of people wrote glowing stories about their time at the company on blogs or LinkedIn because it was beneficial for their future job search.
> It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
For the posts that make HN I rarely see it that way. The recent trend is for passionate employees who really wanted to make a company work to lament how sad it was that the company or department was failing.
> The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.
Yeah I had to re-read the sentence.
The positive "Farewell" post is indeed the norm. Especially so from well known, top level people in a company.
I’m not saying this about OpenAI, because I just don’t know. But Bond villains exist.
Usually the level 1 people are just motivated by power and money to an unhealthy degree. The worst are true believers in something. Even something seemingly mild.
> It is fairly rare to see an ex-employee put a positive spin on their work experience.
Sure, but this bit really makes me wonder if I'd like to see what the writer is prepared to do to other people to get to his payday:
"Nabeel Quereshi has an amazing post called Reflections on Palantir, where he ruminates on what made Palantir special. I wanted to do the same for OpenAI"
I agree with your points here, but I feel the need to address the final bit. This is not aimed personally at you, but at the pattern you described - specifically, at how it's all too often abused:
> Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!
Those are the easy cases, and correspondingly, you don't see much of those - or at least few are paying attention to companies talking like that. This is distinct from saying "X is going to directly benefit the society, and we're merely charging for it as fair compensation of our efforts, much like a baker charges you for the bread" or variants of it.
This is much closer to what most tech companies try to argue, and the distinction seems to escape a lot of otherwise seemingly sharp people. In threads like this, I surprisingly often end up defending tech companies against such strawmen - because come on, if we want positive change, then making up a simpler but baseless problem, calling it out, and declaring victory, isn't helping to improve anything (but it sure does drive engagement on-line, making advertisers happy; a big part of why press does this too on a routine basis).
And yes, this applies to this specific case of OpenAI as well. They're not claiming "LLMs are going to indirectly benefit the society because we're going to get rich off them, and then use that money to fund lots of nice things". They're just saying, "here, look at ChatGPT, we believe you'll find it useful, and we want to keep doing R&D in this direction, because we think it'll directly benefit society". They may be wrong about it, or they may even knowingly lie about those benefits - but this is not trickle-down economics v2.0, SaaS edition.
Well, as a reminder, OpenAI has a non-disparagement clause in their contracts, so the only thing you'll ever see from former employees is positive feedback.
> That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things
I mean, that's a leap. There could be a Bond villain who sets up incentives such that the people who rationalize the way they want are the ones who get promoted and have their voices amplified. Just because individual workers generally seem like they're trying to do the best thing doesn't mean the organization isn't set up specifically and intentionally to make certain kinds of "shady" decisions.
> It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
- Progress is iterative and driven by a seemingly bottom up, meritocratic approach. Not a top down master plan. Essentially, good ideas can come from anywhere and leaders are promoted based on execution and quality of ideas, not political skill.
- People seem empowered to build things without asking permission there, which seems like it leads to multiple parallel projects with the promising ones gaining resources.
- People there have good intentions. Despite public criticism, they are genuinely trying to do the right thing and navigate the immense responsibility they hold.
- Product is deeply influenced by public sentiment, or more bluntly, the company "runs on twitter vibes."
- The sheer cost of GPUs changes everything. It is the single factor shaping financial and engineering priorities. The expense for computing power is so immense that it makes almost every other infrastructure cost a "rounding error."
- I liked the take of the path to AGI being framed as a three horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA), with each organisation's unique culture shaping its approach to AGI.
> I liked the take of the path to AGI being framed as a three horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA)
Wouldn't want to forget Meta which also has consumer product DNA. They literally championed the act of making the consumer the product.
"Hey, Twitter vibes are a metric, so make sure to mention the company on Twitter if you want to be heard."
Twitter is a one-way communication tool. I doubt they're using it to create a feedback loop with users, maybe just to analyse their sentiment after a release?
The entire article reads more like a puff piece than an honest reflection. Those of us who live outside the US are more sceptical, especially after everything revealed about OpenAI in the book Empire of AI.
Engineers thinking they're building god is such a good marketing strategy. I can't overstate it. It's even difficult to be rational about it. I don't actually believe it's true; I think it's pure hype and LLMs won't even approximate AGI. But this idea is sort of half-immune to criticism or skepticism: you can always respond with "but what if it's true?". The stakes are so high that the potentially infinite payoff snowballs over any probabilities. 0.00001% multiplied by infinity is an infinite EV, so you have to treat it like that. Best marketing, it writes itself.
Similar to Pascal's wager, which pretty much amounts to "yeah, God is probably not real, _but what if it is_? The utility of getting into heaven is infinite (and hell is infinitely negative), so any non-zero probability that God is real should make you be religious, just in case."
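To make the expected-value arithmetic both comments lean on concrete, here's a minimal sketch (the probability is made up; the point is only that any nonzero probability times an unbounded payoff swamps the finite side of the comparison):

    # Toy sketch of the infinite-EV argument (all numbers made up):
    # any nonzero probability times an unbounded payoff dominates.
    p_true = 1e-7                 # hypothetical chance the "god-building" bet pays off
    payoff = float("inf")         # the assumed unbounded upside

    ev_bet = p_true * payoff      # inf
    ev_pass = (1 - p_true) * 0    # 0.0

    print(ev_bet > ev_pass)       # True, no matter how small p_true is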
Nothing about next-token prediction and emergent properties was hypothesized in advance (they didn't know for sure that scale would allow it to generalize). "What if it's true" is part of the LLM story; there is a mystical element here.
Someone else can confirm, but from my understanding, no, they did not know sentiment analysis, reasoning, few-shot learning, chain of thought, etc. would emerge at scale. Sentiment analysis was one of the first things they noticed a scaled-up model could generalize to. Remember, all they were trying to do was get better at next-token prediction; there was no concrete plan to achieve "instruction following", for example. We can never truly say that going up another order of magnitude on the number of params won't achieve something (it could, for reasons unknown, just like before).
It is somewhat parallel to the story of Columbus looking for India but ending up in America.
Didn't it just get better at next-token prediction? I don't think anything emerged in the model itself; what was surprising is how good next-token prediction itself is at predicting all kinds of other things, no?
The Schaeffer et al. "Mirage" paper showed that many claimed emergent abilities disappear when you use different metrics: what looked like sudden capability jumps were often artifacts of using harsh/discontinuous measurements rather than smooth ones.
But I'd go further: even abilities that do appear "emergent" often aren't that mysterious when you consider the training data. Take instruction following - it seems magical that models can suddenly follow instructions they weren't explicitly trained for, but modern LLMs are trained on massive instruction-following datasets (RLHF, constitutional AI, etc.). The model is literally predicting what it was trained on. Same with chain-of-thought reasoning - these models have seen millions of examples of step-by-step reasoning in their training data.
The real question isn't whether these abilities are "emergent" but whether we're measuring the right things and being honest about what our training data contains. A lot of seemingly surprising capabilities become much less surprising when you audit what was actually in the training corpus.
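A minimal illustration of that metric-choice point (illustrative numbers, not taken from the paper): if a model's per-token accuracy improves smoothly with scale, an all-or-nothing exact-match metric over a multi-token answer can still look like a sudden jump.

    # Minimal sketch of the metric-choice effect (illustrative numbers only):
    # the same smooth gain in per-token accuracy looks like a sudden jump
    # under an all-or-nothing exact-match metric.
    answer_len = 10  # tokens that must all be correct to score an exact match

    for per_token_acc in (0.5, 0.7, 0.8, 0.9, 0.95, 0.99):
        exact_match = per_token_acc ** answer_len   # harsh, discontinuous-looking metric
        print(f"per-token={per_token_acc:.2f}  exact-match={exact_match:.3f}")

    # The per-token score climbs steadily, while exact match sits near zero
    # (0.5**10 ~ 0.001) and then shoots up late (0.99**10 ~ 0.904): an apparent
    # "emergence" created by the metric, not by a discontinuity in the model.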
>Thanks to this bottoms-up culture, OpenAI is also very meritocratic. Historically, leaders in the company are promoted primarily based upon their ability to have good ideas and then execute upon them. Many leaders who were incredibly competent weren't very good at things like presenting at all-hands or political maneuvering. That matters less at OpenAI than it might at other companies. The best ideas do tend to win.
This sets off my red flags: companies that say they are meritocratic, flat etc., often have invisible structures that favor the majority. Valve Corp is a famous example for that where this leads to many problems, see https://www.pcgamer.com/valves-unusual-corporate-structure-c...
>It sounds like a wonderful place to work, free from hierarchy and bureaucracy. However, according to a new video by People Make Games (a channel dedicated to investigative game journalism created by Chris Bratt and Anni Sayers), Valve employees, both former and current, say it's resulted in a workplace two of them compared to The Lord of The Flies.
I think in this structure people only think locally; they are not concerned with the overall mission of the company and do not actively think about the morality of the mission or whether they are following it.
> The Codex sprint was probably the hardest I've worked in nearly a decade. Most nights were up until 11 or midnight. Waking up to a newborn at 5:30 every morning. Heading to the office again at 7a. Working most weekends.
There's so much compression / time-dilation in the industry: large projects are pushed out and released in weeks; careers are made in months.
Worried about how sustainable this is for its people, given the risk of burnout.
If anyone tried to demand that I work that way, I’d say absolutely not.
But when I sink my teeth into something interesting and important (to me) for a few weeks’ or months’ nonstop sprint, I’d say no to anyone trying to rein me in, too!
Speaking only for myself, I can recognize those kinds of projects as they first start to make my mind twitch. I know ahead of time that I'll have no gas left in the tank by the end, and I plan accordingly.
Luckily I’ve found a community who relate to the world and each other that way too. Often those projects aren’t materially rewarding, but the few that are (combined with very modest material needs) sustain the others.
The latter. I mean, I feel like a disproportionate number of folks who hang around here have that kind of disposition.
That just turns out to be the kind of person who likes to be around me, and I around them. It’s something I wish I had been more deliberate about cultivating earlier in my life, but not the sort of thing I regret.
In my case that’s a lot of artists/writers/hackers, a fair number of clergy, and people working in service to others. People quietly doing cool stuff in boring or difficult places… people whose all-out sprints result in ambiguity or failure at least as often as they do success. Very few rich people, very few who seek recognition.
The flip side is that neither I nor my social circles are all that good at consistency—but we all kind of expect and tolerate that about each other. And there’s lots of “normal” stuff I’m not part of, which I probably could have been if I had tried. I don’t know what that means to the business-minded people around here, but I imagine it includes things like corporate and nonprofit boards, attending sports events in stadia, whatever golf people do, retail politics, Society Clubs For Respectable People, “Summering,” owning rich people stuff like a house or a car—which is fine with me!
This guy who is already independently wealthy chose working 16-17h 7 days a week instead of raising his newborn child and thanks his partner for “childcare duties”. Pretty much tells you everything you need to know.
It's not sustainable, at all, but if it's happening just a couple times throughout your career, it's doable; I know people who went through that process, at that company, and came out of it energized.
I think Altman said on the Lex Fridman podcast that he works 8 hours a day, the first 4 being the most productive, and that he doesn't believe CEOs claiming they work 16 hours a day. Weird contrast to what's described in the article. This confirms my theory that there are two types of people in startups: founders and everybody else; the former are there to potentially make a lot of money, and the latter are there to learn and leave.
I couldn't imagine asking my partner to pick up that kind of childcare slack. Props to OP's wife for doing so, and I'm glad she got the callout at the end, but god damn.
It's worse than that. Lots of power struggles and god-like egos. Altman called one of the employees "Einstein" on Twitter, some think they were chosen to transcend humanity, others believe they're at war with China, some want to save the world, others to watch it burn, and some just want their names up there with Gates and Jobs.
This is what ex-employees said in Empire of AI, and it's the reason Amodei and Kaplan left OpenAI to start Anthropic.
He references childcare and paternity leave in the post and he was a co-founder in a $3B acquisition. To me it seems it is a time-of-life/priorities decision not a straight up burnout decision.
Working a job like that would literally ruin my life. There's no way I could have time to be a good husband and father under those conditions, some things should not be sacrificed.
Many people are bad parents. Many are bad at their jobs. Many are bad at both. At least this guy is good at his job, and can provide very well for his family.
It is all relative. A workaholic seems pretty nice when compared to growing up with actual objectively bad parents, workaholics plus: addicts, perpetually drunk, gamblers, in jail, no shows for everything you put time into, competing with you when obtaining basic skills, abusing you for being a kid, etc.
There are plenty worse than that. The storied dramatic fiction parent missing out on a kid's life is much better than what a lot of children have.
Yet, all kids grow up, and the greatest factor determining their overall well-being through life is socioeconomic status, not how many hours a father was present.
I'm very interested in that topic and haven't made up my mind about what really counts in parenting.
You have sources for the claim about well-being (asking explicitly about mental well-being and not just material well-being) being more influenced by socioeconomic status and not so much by parental absence?
About the guy: I think if it’s just a one time thing it’s ok but the way he presents himself gives reason for doubt
They were showered with assets for being a lucky individual in a capital-driven society; time is interchangeable with wealth, as evidenced throughout history.
This guy is young. He can experience all that again, if it is that much of a failure, and he really wants to.
Sure, there are ethical issues here, but really, they can be offset by restitution, let's be honest.
My hot take is I don’t think burn out has much to do with raw hours spent working. I feel it has a lot more to do with sense of momentum and autonomy. You can work extremely hard 100 hour weeks six months in a row, in the right team and still feel highly energized at the end of it. But if it feels like wading through a swamp, you will burn out very quickly, even if it’s just 50 hours a week. I also find ownership has a lot to do with sense of burnout
At some level of raw hours, your health and personal relationships outside work both begin to wither, because there are only 24 hours in a day. That doesn’t always cause burnout, but it provides high contrast - what you are sacrificing.
Exactly this - it's not at all about hours spent (at least that's not a good metric; working less will benefit a burned-out person, but the hours were not the root cause). The problem is lack of autonomy, lack of control over things you care about deeply. If those go out the window, the fire burns out quickly.
Imho when this happens it’s usually because a company becomes too big, and the people in control lack subject matter expertise, have lost contact with the people that drive the company, and instead are guided by KPIs and the rules they enforced grasping for that feeling of being in control.
And if the work you're doing feels meaningful and you're properly compensated. Ask people to work really hard to fill out their 360 reviews and they should rightly laugh at you.
i hope that's not a hot take because it's 100% correct.
people conflate the terms "burnout" and "overwork" because they seem semantically similar, but they are very different.
you can fix overwork with a vacation. burnout is a deeper existential wound.
my worst bout of burnout actually came in a cushy job where i was consistently underworked but felt no autonomy or sense of purpose for why we were doing the things we were doing.
2024 my wife and I did a startup together. We worked almost every hour we were awake, 16-18 hours a day, 7 days a week. We ate, we went for an hour's walk a day, the rest of the time I was programming. For 9 months. Never worked so hard in my life before. And, not a lick of burnout during that time, not a moment of it, where I've been burned out by 6 hour work days at other organizations. If you're energized by something, I think that protects you from burnout.
For the amount of money they are paying, that is relatively easy; normal people are paid way less for harder jobs, for example working in an Amazon warehouse or doing door-to-door sales.
I don't really have an opinion on working that much, but working that much and having to go into the office to spend those long hours sounds like torture.
Those that love the work they do don't burn out, because every moment working on their projects tends to be joyful. I personally hate working with people who hate the work they do, and I look forward to them being burned out
Sure, but this schedule is like, maybe 5 hours of sleep per night. Other than an extreme minority of people, there’s no way you can be operating on that for long and doing your best work. A good 8 hours per night will make most people a better engineer and a better person to be around.
"You don't really love what you do unless you're willing to do it 17 hours a day every day" is an interesting take.
You can love what you do but if you do more of it than is sustainable because of external pressures then you will burn out. Enjoying your work is not a vaccine against burnout. I'd actually argue that people who love what they do are more likely to have trouble finding that balance. The person who hates what they do usually can't be motivated to do more than the minimum required of them.
Weird how we went from like the 4 hour workweek and all those charts about how people historically famous in their field spent only a few hours a day on what they were most famous for, to "work 12+ hours a day or you're useless".
Also this is one of a few examples I've read lately of "oh look at all this hard work I did", ignoring that they had a newborn and someone else actually did all of the hard work.
I read gp’s formulation differently: “if you’re working 17 hours a day, you’d better stop soon unless you’re doing it for the love of doing it.” In that sense it seems like you and gp might agree that it’s bad for you and for your coworkers if you’re working like that because of external pressures.
I don’t delight in anybody’s suffering or burnout. But I do feel relief when somebody is suffering from the pace or intensity, and alleviates their suffering by striking a more sustainable balance for them.
I feel like even people energized by efforts like that pay the piper: after such a period I for one “lay fallow”—tending to extended family and community, doing phone-it-in “day job” stuff, being in nature—for almost as long as the creative binge itself lasted.
I would indeed agree with things as you've stated. I interpreted "the work they do" to mean "their craft" but if it was intended as "their specific working conditions" I can see how it'd read differently.
I think there are a lot of people that love their craft but are in specific working conditions that lead to burnout, and all I was saying is that I don't think it means they love their craft any less.
> Worried about how sustainable this is for its people, given the risk of burnout.
Well, given the amount of money OpenAI pays its engineers, this is what comes with it. It tells you that this is not a daycare, or for coasters, or for the faint of heart, especially at a startup at the epicenter of the AI competition.
There is now a massive queue of desperate 'software engineers' ready to kill for a job at OpenAI, who will not tolerate the word "burnout" and might even work 24-hour days to keep the job away from others.
For those who love what they do, the word "burnout" doesn't exist for them.
I am not saying that’s easy work but most motivated people do this. And if you’re conscious of this that probably means you viewed it more as a job than your calling.
Doesn't it bother anybody that their product heavily relies on FastAPI according to this post, yet they haven't donated to the project and aren't listed as sponsors?
- The company was a little over 1,000 people. One year later, it is over 3,000.
- Changes direction on a dime.
- Very secretive place.
With the added "everything is a rounding error compared to GPU cost" and "this creates a lot of strange-looking code because there are so many ways you can write Python".
This was good, but the one thing I most wanted to know about building new products inside OpenAI is how, and how much, LLMs are involved in the building process.
If anything about OpenAI should bother people, it's how they pretend to be blind to the consequences because of "the race".
Leaving the decision of IF and WHAT should be done to the top heads alone has never worked well.
> Good ideas can come from anywhere, and it's often not really clear which ideas will prove most fruitful ahead of time.
Is that why they have dozens of different models?
> Many leaders who were incredibly competent weren't very good at things like presenting at all-hands or political maneuvering.
I don't think the Sam/Board drama confirms this.
> The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in.
Did you thank your OpenAI overlords for letting you access their sacred latest models?
This reads like an ad for OpenAI, or an attempt by the author to court them again? I am not sure how anyone can take his words seriously.
>Safety is actually more of a thing than you might guess
Considering that all the people who led the different safety teams have left or been fired, that Superalignment has been a total bust, and the various accounts from other employees about the lack of support for safety work, I find this statement incredibly out of touch and borderline intentionally misleading.
For a company that has grown so much in such a short time, I continue to be surprised by its lack of technical writers. Saying the docs could be better is a euphemism, but I still can't find fellow tech writers working there. Compare this with Anthropic and its documentation.
I don't know what the rationale is for not hiring tech writers, other than nobody having suggested it yet, which is sad. Great dev tools require great docs, and great docs require teams that own them and grow them as a product.
The higher-ups don't think there's value in that. Back at DigitalOcean they had an amazing tech writing team, with people with years of experience, doing some of the best tech docs in the industry. When the layoffs started, the writing team was the first to be cut.
I didn't realise that team at DO was let go, what a horrible decision - the SERP footprint of DO was immense and the quality of the content was fantastic.
> The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in. There's an API you can sign up and use–and most of the models (even if SOTA or proprietary) tend to quickly make it into the API for startups to use.
The comparison here should clearly be with the other frontier model providers: Anthropic, Google, and potentially Deepseek and xAI.
Comparing them gives the exact opposite conclusion - OpenAI is the only model provider that gates API access to their frontier models behind draconian identity verification (also, Worldcoin anyone?). Anthropic and Google do not do this.
OpenAI hides their model's CoT (inference-time compute, thinking). Anthropic to this day shows their CoT on all of their models.
Making it pretty obvious this is just someone patting themselves on the back and doing some marketing.
Yes, also OpenAI being this great nimble startup that can turn on a dime, while in reality Google reacted to them and has now surpassed them technically in every area, except image prompt adherence.
Anthropic has banned my own accounts before I used them for violating ToS. Appeals do nothing. Only when I started using a Google login did they stop banning them. This isn’t an OpenAI-only problem.
> The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement.
That is literally how OpenAI gets data for fine-tuning its models: by testing them on real users and letting them supply data and use cases. (Tool calling, computer use, thinking - all of these were championed by people outside, and they had the data.)
Good writing, enjoyed that article. Also I guess it looks like there was more time spent writing this article than actually working at OpenAI? 1 year tenure and a paternity leave?
Are any of these causal to OpenAI's success? Or are they incidental? You can throw all of this "culture" into an org but I doubt it'd do anything without the literal world-changing technology the company owns.
> There's a corollary here–most research gets done by nerd-sniping a researcher into a particular problem. If something is considered boring or 'solved', it probably won't get worked on.
This is a very interesting nugget, and if accurate this could become their Achilles heel.
It's not "their" Achilles heel. It's the Achilles heel of the way humans work.
Most top-of-their-field researchers are on top of their field because they really love it, and are willing to sink insane amounts of hours into doing things they love.
> An unusual part of OpenAI is that everything, and I mean everything, runs on Slack.
Not that unusual nowadays. I'd wager every tech company founded in the last ~10 years works this way. And many of the older ones have moved off email as well.
What I really wanted to know is whether OpenAI (and other labs, for that matter) actually use their own products, not just casually but making LLMs a core part of how they operate. For example: using LLMs for coding in prod, training/fine-tuning internal models for aligning on the latest updates, finding answers, etc. Do they put their money where their mouth is; do LLMs help with productivity? There is no mention of it in the article, so I guess they don't?
I don’t know, but I’d guess they are using them heavily, though in a piecemeal fashion.
As impressive as LLMs can be at one-shotting certain kinds of tasks, working in a sprawling production codebase like the one described with tight performance constraints, subtle interdependencies, cross-cutting architectural concerns, etc. still requires a human driving most of the time. LLMs help a lot for this kind of work, but the human is either carefully assimilating their output or carefully choosing spots where (with detailed prompts) they can generate usable code directly.
Again, just a guess, but this is my impression of how experienced engineers (including myself) are using LLMs in big/nontrivial codebases, and I've seen no indication that engineering processes at the labs are much different from the wider industry.
This is Silicon Valley culture on steroids: I really have to question whether it is positive for any involved party. Codex has almost no mindshare, and rightly so. It's a textbook also-ran, except it came from the most dominant player and was outpaced by Claude Code in a matter of weeks.
Why go through all that? A much better scenario would have been OpenAI carefully assessing different approaches to agentic coding and releasing a more fully baked product with solid differentiation. Even Amazon just did that with Kiro.
What I read in this blogpost is a description of how every good research organization works, from academia to private labs. The command and control, centrally planned approach doesn't work.
While their growth is faster and technology different, the atmosphere feels very much like AWS back in 2014. I stayed for 8 years because I enjoyed it so much.
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing
I doubt many people would say something contrary to this about their (former) colleagues, which means we should always take this with a (large) grain of salt.
Do I think (most) AT&T employees wanted to let the NSA spy on us? Probably not. Google engineers and ICE? Palantir and.. well idk i think everyone there knows what Palantir does.
wham.
Thanks for sharing anecdotal episodes from OAI's inner mechanisms from an eng perspective. I wonder, if OAI weren't married to Azure, would the infra be more resilient and require less eng effort to invent things just to run (at scale)?
What I haven't seen much of is the split between eng and research, and how people within the company are thinking about AGI, the future, the workforce, etc.
Is it the usual SF wonderland, or is there an OAI-specific value alignment once someone is working there?
I’m at a point my life and career where I’d never entertain working those hours. Missed basketball games, seeing kids come home from school, etc. I do think when I first started out, and had no kiddos, maybe some crazy sprints like that would’ve been exhilarating. No chance now though
> What's funny about this is there are exactly three services that I would consider trustworthy: Azure Kubernetes Service, CosmosDB (Azure's document storage), and BlobStore.
CosmosDB is trustworthy? Everyone I know that used CosmosDB ended up rewriting their code because of throttling.
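For what "rewriting their code because of throttling" tends to mean in practice, here is a hedged sketch of the kind of retry logic teams end up bolting on when Cosmos DB starts returning 429s; it assumes the azure-cosmos Python SDK, and the account URL, key, and database/container names are made up.

    # Sketch only: back off and retry when Cosmos DB throttles with 429
    # (request rate too large). Names and credentials are placeholders.
    import time
    from azure.cosmos import CosmosClient
    from azure.cosmos.exceptions import CosmosHttpResponseError

    client = CosmosClient("https://example-account.documents.azure.com", credential="<key>")
    container = client.get_database_client("appdb").get_container_client("events")

    def upsert_with_backoff(item, max_retries=5):
        for attempt in range(max_retries):
            try:
                return container.upsert_item(item)
            except CosmosHttpResponseError as err:
                if err.status_code != 429:
                    raise                     # only retry when throttled
                time.sleep(2 ** attempt)      # crude exponential backoff
        raise RuntimeError("still throttled after retries")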
>It's hard to imagine building anything as impactful as AGI
>...
>OpenAI is also a more serious place than you might expect, in part because the stakes feel really high. On the one hand, there's the goal of building AGI–which means there is a lot to get right.
I'm kind of surprised people are still drinking this AGI Koolaid
for real. same. the level of delusion. i think what'll happen is they'll get some really advanced agents that can effectively handle most general tasks and they'll call it AGI and say they've done it. it won't really be AGI, but a lot of people will have bought into the lie thanks to the incredibly convincing facsimile they'll have created.
Granted the "OpenAI is not a monolith" comment, interesting that use of AI assisted coding was a curious omission from the article -- no mention if encouraged or discouraged.
Chunking a codebase that you entirely own into packages is like intentionally wanting to make your life miserable, by imposing the same kind of volatility you would otherwise find in the development process of building a Linux distribution. It's a misnomer.
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.
To quote Jonathan Nightingale from his famous thread on how Google sabotaged Mozilla [1]:
--- start quote ---
The question is not whether individual sidewalk labs people have pure motives. I know some of them, just like I know plenty on the Chrome team. They’re great people. But focus on the behaviour of the organism as a whole. At the macro level, google/alphabet is very intentional.
--- end quote ---
"Safety is actually more of a thing than you might guess if you read a lot from Zvi or Lesswrong. There's a large number of people working to develop safety systems. Given the nature of OpenAI, I saw more focus on practical risks (hate speech, abuse, manipulating political biases, crafting bio-weapons, self-harm, prompt injection) than theoretical ones (intelligence explosion, power-seeking). That's not to say that nobody is working on the latter, there's definitely people focusing on the theoretical risks. But from my viewpoint, it's not the focus."
This paragraph doesn't make any sense. If you read a lot of Zvi or LessWrong, the misaligned intelligence explosion is the safety risk you're thinking of! So readers' guesses are actually right that OpenAI isn't really following Sam Altman's:
"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could."[0]
He joined last year May and left recently. About one year of stay.
I wonder if one year is enough time for programmers to understand a codebase, let alone meaningfully contribute patches. But then we see that job hopping is increasingly common, which results in a drop in product quality. I wonder what value the job hoppers are adding to the company.
That’s how I imagined it, kind of a hybrid of what I’ve seen called Product Marketing Manager and Product Analyst, but other replies and OpenAI job postings indicate maybe it’s a different role, more hands on building, getting from research to consumer product maybe?
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.
I appreciate where the author is coming from, but I would have just left this part out. If there is anything I've learned during my time in tech (ESPECIALLY in the Bay Area) it's that the people you didn't meet are absolutely angling to do the wrong thing(TM).
I've been in circles with very rich and somewhat influential tech people and it's a lot of talk about helping others, but somehow beneath the veneer of the talk of helping others you notice that many of them are just ripping people off, doing coke and engaging in self-centered spiritual practices (especially crypto people).
I also don't trust that people within the system can assess if what they're doing is good or not. I've talked with higher ups in fashion companies who genuinely believe their company is actually doing so much great work for the environment when they basically invented fast-fashion. I've felt it first hand personally how my mind slowly warped itself into believing that ad-tech isn't so bad for the world when I worked for an ad-tech company, and only after leaving did I realize how wrong I was.
And it's not just about some people doing good and others doing bad. Individual employees all doing the "right thing" can still be collectively steered in the wrong direction by higher ups. I'd say this describes the entirety of big tech.
Yes. We already know that Altman parties with extremists like Yarvin and Thiel and donates millions to far-right political causes. I’m afraid the org is rotten at its core. If only the coup had succeeded.
When your work provides lunch in a variety of different cafeterias all neatly designed to look like standalone restaurants, directly across from which is an on-campus bank that will assist you with all of your financial needs before you take your company-operated Uber-equivalent to the next building over and have your meeting either in that building's ballpit, or on the tree-covered rooftop that - for some reason - has foxes on top, it's easy to focus only on the tiny "good" thing you're working on and not the steaming hot pile of garbage that the executives at your company are focused on but would rather you not see.
Edit: And that's to say nothing of the very generous pay...
My biggest problem with these new companies is their core philosophy.
First, these companies generate their own demand — natural demand for their products rarely exists. Therefore, they act more like sellers than developers.
Second, they always follow the same maxim: "What's the next logical step?" This naturally follows from the first premise, because this allows you to ignore everything "real". You are simply bound to logic. They have no "problems" to solve, yet they offer you solutions - simply as a logical consequence of their own logic. Has anyone ever actually asked if coders would use agents if it meant losing their jobs?
Thirdly, this naturally brings to light the B2B philosophy. The customer is merely a catalyst that will eventually become superfluous.
Fourth, the same excuse and ignorance of the form "(we don't know what we are doing, but) time will tell". What if time tells you "this is bad and you should and could have known better?"
Interesting that so many folks from Meta joined OpenAI - but Meta wasn't really able to roll its own competitive foundational model, so is that a bad sign?
Kind of interesting that folks aren't impressed by Azure's offering. I wonder if OpenAI is handicapped by that as well, compared to being on AWS or GCP.
>It's hard to imagine building anything as impactful as AGI,
Where is this AGI that you've built then? The reason for the very existence of that term is an acknowledgement that what's hyped today as AI isn't actually what AI used to mean, but the hype cycle VC money depends on using the term AI, so a new term was invented to denote the thing the old term used to denote. Do we need yet another term because AGI is about to get burned the same way?
> and LLMs are easily the technological innovation of the decade.
Sorry, what? I'm sure it feels that way from some corners of that particular tech bubble, but my 73 year old mom's life is not impacted by LLMs at all - well, except for when she opens her facebook feed once a month and gets blasted with tons of fake BS. Really something to be proud of for us as an industry? A tech breakthrough of the last decade that might have literally saved her life were mRNA vaccines though, and I could likely come up with more examples if I thought about it for more than 3 seconds.
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing
Of course they are. People in orgs like that are passionate; they want to work on the tech because LLMs are a once-in-a-lifetime tech breakthrough. But they don't realize enough that they're working for bad people. Ultimately all of that tech is in the hands of Altman, and that guy hasn't proven to be the saint he hopes to become.
this post was such a brilliant read. to read about how they still have a YC-style startup culture, are meritocratic, and people get to work on things they find interesting.
as an early stage founder, i worry about the following a lot.
- changing directions fast when i lose conviction
- things breaking in production
- and about speed, or the lack of it
I learned to actually not worry about the first two.
But if OpenAI shipped Codex in 7 weeks, small startups have lost the speed advantage they had. Big reminder to figure out better ways to solve for speed.
One thing I was interested to read but didn't find in your post is: does everyone believe in the vision that the leadership has shared publicly, e.g. [1]? Is there some skepticism that the current path leads to AGI, or has everyone drunk the Kool-Aid? If there is some dissent, how is it handled internally?
Not the author, but I work at OpenAI. There are wide variety of viewpoints and it's fine for employees to disagree on timelines and impact. I myself published a 100-page paper on why I think transformative AGI by 2043 is quite unlikely (https://arxiv.org/abs/2306.02519). From informal discussion, I think the vast majority of employees don't think that we're mere years from a post-scarcity utopia where we can drink mai tais on the beach all day. But there is a lot of optimism about the rapid progress in AI, and I do think that it's harder to forecast the path of a technology that has the potential to improve itself. So much depends on your definition of AGI. In a sense, GPT-4 is already AGI in the literal sense that it's an artificial intelligence with some generality. But in the sense of automating the economy, it's of course not close.
My definition: AGI will be here when you can put it in a robot body in the real world and interact with it like you would a person. Ask it to drive your car or fold your laundry or make a mai tai, and if it doesn't know how to do that, you show it, and then it can.
Maybe I'm biased, but I actually think it's a pretty good definition, as definitions go. All of our narrow measures of human intelligence that we might be tempted to use - win at games, solve math problems, ace academic tests, dominate at programming competitions - are revealed as woefully insufficient as soon as an AI beats them but fails to generalize far beyond. But if you have an AI that can generate lots of revenue doing a wide variety of real work, then you've probably built something smart. Diverse revenue is a great metric.
> Would that be services generally provided by government?
Most services provided by governments are economically valuable, as they provide infrastructure that allows individual actors to perform better, increasing collective economic output. (Though for high-expenditure infrastructure, for example, it could quite easily be argued that it is not economically profitable.)
The hype around this tech strongly promotes the narrative that we're close to exponential growth, and that AGI is right around the corner. That pretty soon AI will be curing diseases, eradicating poverty, and powering humanoid robots. These scenarios are featured in the AI 2027 predictions.
I'm very skeptical of this based on my own experience with these tools, and rudimentary understanding of how they work. I'm frankly even opposed to labeling them as intelligent in the same sense that we think about human intelligence. There are certainly many potentially useful applications of this technology that are worth exploring, but the current ones are awfully underwhelming, and the hype to make them seem more than they are is exhausting. Not to mention that their biggest potential to further degrade public discourse and overwhelm all our communication channels with even more spam and disinformation is largely being ignored. AI companies love to talk about alignment and safety, yet these more immediate threats are never addressed.
Anyway, it's good to know that there are disagreements about the impact and timelines even inside OpenAI. It will be interesting to see how this plays out, if nothing else.
Externally there's no rigorous definition as to what constitutes AGI, so I'd guess internally it's not one monolithic thing they're targeting either. You'd need everyone to take a class about the nature of intelligence first, and all the different kinds of it just to begin with. There's undoubtedly dissent internally as to the best way to achieve chosen milestones on the way there, as well as disagreement that those are the right milestones to begin with. Think tactical disagreement, not strategic. If you didn't think that AGI were ever possible with LLMs, would you even be there to begin with?
Well, Sam Altman has a clear definition of ASI, and AGI is something they've been thinking about for a long time, so presumably they must have some accepted definition of it.
My question was whether everyone believes this vision that ASI is "close", and more broadly whether this path leads to AGI.
> If you didn't think that AGI were ever possible with LLMs, would you even be there to begin with?
People can have all sorts of reasons for working with a company. They might want to work on cutting-edge tech with smart people and infinite resources, for investment or prestige, but not necessarily buy into the overarching vision. I'm just wondering whether such a profile exists within OpenAI, and if so, how it is handled.
I definitely didn't get that feeling. There was a whole section about how their infra resembles Meta and they've had excellent engineers hired from Meta.
It would be interesting to read the memoirs of former OpenAI employees that dive into whether they thought the company was on the right track towards AGI. Of course, that’s an NDA violation at best.
It sounds to me like, in contrast to the grandiose claims OpenAI tries to make about its own products, it views AI as 'regular technology' and pragmatically tries to build viable products using it.
> It's hard to imagine building anything as impactful as AGI, and LLMs are easily the technological innovation of the decade.
I really can't see a person with at least minimal self-awareness talking their own work up this much. Give me a break dude. Plus, you haven't built AGI yet.
Can't believe there's so little critique of this post here. It's incredibly self-serving.
He joins a proven unicorn at its inflection point and then leaves mere days after hitting his vesting cliff. All of this "learning" and "experience" talk is sopping wet with cynicism.
He co-founded and sold Segment. You think he was just at OpenAI to collect a check? He lays out exactly why he joined OpenAI and why he's leaving. If you think everyone does things only for cynical reasons, it might be a reflection more of your personal impulses than others.
Just because someone claims they are speaking in good faith doesn’t mean we have to take their word for it. Most people in tech dealing with big money are doing it for cynical reasons. The talk of changing the world or “doing something hard” is just marketing typically.
Calvin works incredibly hard and has very little ego. I was surprised he joined OpenAI since he's loaded from the Segment acquisition, but if anyone it makes sense he would do this. He's always looking to find the hardest problem and work on it.
That's what he did at Segment even in the later stages.
Newborns constantly need mom, not dad. Moms need husbands or their own moms to help. The way it works is that you agree what to do as a family (to do it or not to do it) and everybody is happy with their lives. You can be a great dad and husband and still do all of it when it makes sense and your wife supports it, etc. Not having kids in the first place could be considered ego-driven, not this.
I appreciate the edit, but "sopping wet with cynicism" still breaks the site guidelines, especially this one: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
Understood, in the future I will refrain from questioning motives in featured articles. I can no longer edit my post but you may delete or flag it so that others will not be exposed to it.
Given that he leaves OpenAI almost immediately after hitting his 25% vesting cliff, it seems like his employment at OpenAI and this blog post (which makes him and OpenAI look good while making the reader feel good) were done cynically. I.e. primarily in his self-interest. What makes it even worse is his stated reason for leaving:
> It's hard to go from being a founder of your own thing to an employee at a 3,000-person organization. Right now I'm craving a fresh start.
This is just wholly irrational for someone whose credentials indicate they are capable of applying critical thinking toward accomplishing their goals. People who operate at that level don't often act on impulse or suddenly realize they want to do something different. It seems much more likely he intentionally planned to give himself a year of vacation at OpenAI, which allows him to hedge a bit while taking a breather before jumping back into being a founder.
Is this essentially speculation? Yes. Is it cynical to assume he's acting cynically? Yes. Speculation on his true motives is necessary because otherwise we'll never get confirmation, short of him openly admitting to it (which is still fraught). We have to look at behaviors and actions and assess likelihoods from there.
There's nothing cynical about leaving a job after cliffing. If a company wants a longer commitment than a year before issuing equity, it can set a longer cliff. We're all adults here.
I don't see anything interesting about that detail; you keep trying to make something out of it, but there's nothing there to talk about.
There might be some marginal drama to scrape up here if the post was negative about OpenAI (I'd still be complaining about trying to whip up drama where there isn't any), but it's kind of glowing about them.
Well now the goalpost has shifted from "it's not cynical" to "even if it is cynical it doesn't matter" and dang has already warned me so I'm hesitant to continue this thread. I'll just say that once you recognize that a lot of the fluff in this article is cynically motivated, it reduces your risk of giving the information presented more meaning than is really there.
I remember this being common business practice for written communication (email, design documents) circa 20 years ago, so that people at least read the important points, or can quickly pick them out again later.
Possibly the dumbest, blandest, most annoying kind of cultural transference imaginable. We dreamed of creating machines in our image, and now we're shaping ourselves in the image of our machines. Ugh.
I think we've always shaped ourselves based on what we're capable of building. Think of how infrastructure such as buildings and roadways shapes our lives within it. What I do agree with you on is how LLMs are shaping our thinking: we are offloading a lot of our mental capacities with blind trust in the LLM output.
I'm 50, and I've worked at a few cool places and lots of boring ones. To paraphrase, Tolstoy tends to be right: all happy families are similar, and unhappy families are unhappy in unique ways.
OpenAI currently selects for the brightest, young, excited minds (and a lot of money). Bright, young (as in full of energy), excited people will work well anywhere - especially if given a fair amount of autonomy.
Young people talking about how hard they worked is not a sign of a great corp culture, just a sign that they are in the super excited stage of their careers
In the long run, who knows. I tend to view these companies as groups of like-minded people, and groups of people change and the dynamic changes overnight - so if they can sustain that culture, sure, but who knows.
I said this elsewhere on the thread and so apologize for repeating, but: I know mid-career people working at this firm who have been through these conditions, and they were energized by the experience. They're shipping huge stuff that tens of millions of people will use almost immediately.
The cadence we're talking about isn't sustainable --- has never been sustained anywhere --- but if insane sprints like this (1) produce intrinsically rewarding outcomes and (2) punctuate otherwise-sane work conditions, they can work out fine for the people involved.
It's completely legit to say you'd never take a job where this could be an expectation.
On one hand, yes. But on the other hand, he's still in his 30s. In most fields, this would be considered young / early career. It kind of reinforces the point that bright, young people can get a lot done in the tech world.
Calvin is loaded from the Segment exit, he would not work if he wasn't excited about the work. The other founders just went on to do their own thing or non-profits.
I worked there for a few years and Calvin is definitely more of the grounded engineering guy. He would introduce himself as an engineer and just get talking code. He would spend most of his time with the SRE/core team trying to tackle the hardest technical problem at the company.
This is a politically correct farewell letter. Obviously something we little people who need jobs have to resort to so the next HR manager doesn't think we are a risk to stock valuation. For a deeper understanding, read Empire of AI by Karen Hao. She defrocks Sam Altman to reveal he is just another human. Like Steve Jobs, he is an adept salesman appealing to the naïve altruistic sentiments of humans while maintaining his singular focus on scale. Not so different from the archetype of Rockefeller in his pursuit of monopoly through scale using any means, Sam is no different from Google, which even forgot its own rallying cry, "don't be evil". Other actors in the story seem to have been infected by the same meme virus, leaving OpenAI for their own empires: Musk left after he and Altman conflicted over who would be CEO (the birth of xAI). Amodei, his sister, and others left to start Anthropic. Sutskever left to start "safe something or other" (which smacks of the same misdirection Sam used when OpenAI formed as a nonprofit), giving the idea of a nonprofit a mantle of evil now that OpenAI has pivoted to profit.
The bottom line is that scaling requires money and the only way to get that in the private sector is to lure those with money with the temptation they can multiply their wealth.
Things could have been different in a world before financial engineers bankrupted the US (the crises of Enron, Salomon Bros, and the 2008 mortgage debacle all added hundreds of billions to US debt as the govt bought the "too big to fail" kool-aid and bailed out Wall Street by indenturing Main Street). Now 1/4 of our budget is simply interest payment on this debt. There is no room for govt spending on a moonshot like AI. This environment in 1960 would have killed Kennedy's inspirational moonshot of going to the moon while it was still an idea in his head, in his post-coital bliss with Marilyn at his side.
Today our govt needs money just like all the other scrooge-infected players in the tower of debt that capitalism has built.
Ironically, it seems China has a better chance now. Its release of DeepSeek and the full set of parameters is giving it a veneer of altruistic benevolence that is slightly more believable than what we see here in the West. China may win simply on thermodynamic grounds. Training and research in DL consume terawatt-hours and hundreds of thousands of chips. Not only are the US models on older architectures (10-100x less energy efficient), but the "competition" of multiple players in the US multiplies the energy requirements.
Would govt oversight have been a good thing? Imagine if General Motors, Westinghouse, Bell Labs, and Ford had competed in 1940, each with their own Manhattan Project to develop nuclear weapons. Would the proliferation of nuclear weapons have resulted in human extinction by now?
Will AI's contribution to global warming be just as toxic as global thermonuclear war?
These are the questions that come to mind after Hao’s historic summary.
> The thing that I appreciate most is that the company "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in.
I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma situation where some use it to become more efficient only because it makes them faster and then others do the same to keep up. But I think everyone would be FAR better off without AI.
Keeping AI free for everyone is akin to keeping an addictive drug free for everyone so that it can be sold in larger quantities later.
One can argue that some technology is beneficial. A mosquito net made of plastic immediately improves one's comfort if out in the woods. But AI doesn't really offer any immediate TRUE improvement of life, only a bit more convenience in a world already saturated in it. It's past the point of diminishing returns for true life improvement, and I think everyone deep down inside knows that, but is seduced by the nearly-magical quality of it because we are instinctually driven to seek out advantages and new information.
"I would argue that there are very few benefits of AI, if any at all."
OK, if you're going to say things like this I'm going to insist you clarify which subset of "AI" you mean.
Presumably you're OK with the last few decades of machine learning algorithms for things like spam detection, search relevance etc.
I'll assume your problem is with the last few years of "generative AI" - a loose term for models that output text and images instead of purely being used for classification.
Are predictive text keyboards on a phone OK (tiny LLMs)? How about translation engines like Google Translate?
Vision LLMs to help with wildlife camera trap analysis? How about helping people with visual impairments navigate the world?
I suspect your problem isn't with "AI", it's with the way specific AI systems are being built and applied. I think we can have much more constructive conversations if we move beyond blanket labeling "AI" as the problem.
1. Here is the subset: any algorithm, which is learning based, trained on a large data set, and modifies or generates content.
2. I would argue that translation engines have their positives and negatives, but a lot of them are negative, because they lead to translators losing their jobs, and a loss in general for the magical qualities of language learning.
3. Predictive text: I think people should not be presented with possible next words, and think of them on their own, because that means they will be more thoughtful in their writing and less automatic. Also, with a higher barrier to writing something, they will probably write less and what they do write will be of greater significance.
4. I am against all LLMs, including wildlife camera trap analysis. There is an overabundance of hiding behind research when we really already know the problem fairly well. It's a fringe piece of conservation research anyway.
5. Visual impairments: one can always appeal to helping the disabled and impaired, but I think the tradeoff is not worth the technological enslavement.
6. My problem is categorically with AI, not with how it is applied, PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors to make the net effect always negative. It's human nature.
I wish your parent comment didn't get downvoted, because this is an important conversation point.
"PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors"
I think this is vibes based on bad headlines and no actual numbers (and tbf, founders/CEO's talking outta their a**). In my real-life experience the advantages of specifically generative AI far outweighs the disadvantages, by like a really large margin. I say this as someone academically trained on well modeled Dynamical systems (the opposite of Machine Learning). My team just lost. Badly.
Case-in-point: I work with language localization teams that have fully adopted LLM based translation services (our DeepL.com bills are huge), but we've only hired more translators and are processing more translations faster. It's just..not working out like we were told in the headlines. Doomsday Radiologist predictions [1], same thing.
> I think this (esp the sufficient number of bad actors) is vibes based on bad headlines and no actual numbers. In my real-life experience the advantages of specifically generative AI far outweighs the disadvantages, by like a really large margin.
We define bad actors in different ways. I also include people like tech workers, CEOs who program systems that take away large numbers of jobs. I already know people whose jobs were eroded based on AI.
In the real world, lots of people hate AI generated content. The advantages you speak of are only to those who are technically minded enough to gain greater material advantages from it, and we don't need the rich getting richer. The world doesn't need a bunch of techies getting richer from AI at the expense of people like translators, graphic designers, etc, losing their jobs.
And while you may have hired more translators, that is only temporary. Other places have fired them, and you will too once the machine becomes good enough. There will be a small bump of positive effects in the short term but the long term will be primarily bad, and it already is for many.
I think we'll have to wait and see here, because all the layoffs can be easily attributed to leadership making crappy over-hiring decisions over COVID and now not being able to admit to that and giving hand-wavy answers over "I'm firing people because AI" to drive different headline narratives (see: founders/CEO's talking outta their a**).
It may also be the narrative fed to actual employees, saying "You're losing your job because AI" is an easy way to direct anger away from your bad business decisions. If a business is shrinking, it's shrinking, AI was inconsequential. If a business is growing AI can only help. Whether it's growing or shrinking doesn't depend on AI, it depends on the market and leadership decision-making.
You and I both know none of this generative AI is good enough unsupervised (and realistically, with deep human edits). But they're still massive productivity boosts which have always been huge economic boosts to the middle-class.
Do I wish this tech could also be applied to real middle-class shortages (housing, supply-chain etc.), sure. And I think it will come.
Just to add one final point: I included modification as well as generation of content, since I also want to exclude technologies that simply improve upon existing content in some way that is very close to generative but may not be considered so. For example: audio improvement like echo removal and ML noise removal, which I have already shown to interpolate.
I think AI classification and similar uses are probably okay, but of course with that, as with all technologies, we should be cautious about how we use it, since it can also be used for facial recognition, which in turn can be used to create a stronger police state.
> I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma situation where some use it to become more efficient only because it makes them faster and then others do the same to keep up. But I think everyone would be FAR better off without AI.
Personally, my life has significantly improved in meaningful ways with AI. Apart from the obvious work benefits (I'm shipping code ~10x faster than pre-AI), LLMs act as my personal nutritionist, trainer, therapist, research assistant, executive assistant (triaging email, doing SEO-related work, researching purchases, etc.), and a much better/faster way to search for and synthesize information than my old method of using Google.
The benefits I've gotten are much more than conveniences and the only argument I can find that anyone else is worse off because of these benefits is that I don't hire junior developers anymore (at max I was working with 3 for a contracting job). At the same time, though, all of them are also using LLMs in similar ways for similar benefits (and working on their own projects) so I'd argue they're net much better off.
A few programmers being better off does not make an entire society better off. In fact, I'd argue that you shipping code 10x faster just means in the long run that consumerism is being accelerated at a similar rate because that is what most code is used for, eventually.
I spent much of my career working on open source software that helped other engineers ship code 10x faster. Should I feel bad about the impact my work there had on accelerating consumerism?
I don't know if you should feel bad or not, but even I know that I have a role to play in consumerism that I wish I didn't.
That doesn't necessitate feeling bad because the reaction to feel good or bad about something is a side effect of the sort of religious "good and evil" mentality that probably came about due to Christianity or something. But *regardless*, one should at least understand that because our world has reached a sufficient critical mass of complexity, even the things we do that we think are benign or helpful can have negative side effects.
I never claim that we should feel bad about that, but we should understand it and attempt to mitigate it nonetheless. And, where no mitigation is possible, we should also advocate for a better societal structure that will eventually, in years or decades, result in fewer deleterious side effects.
The TV show The Good Place actually dug into this quite a bit. One of the key themes explored in the show was the idea that there is no ethical consumption under capitalism, because eventually the things you consume can be tied back to some grossly unethical situation somewhere in the world.
i don't really understand this thought process. all technology has its advantages and drawbacks and we are currently going through the hype and growing pains process.
you could just as well argue the internet, phones, tv, cars, all adhere to the exact same prisoner's dilemma situation you talk about. you could just as well use AI to rubber duck or ease your mental load than treat it like some rat-race to efficiency.
True, but it is meaningful to understand whether the "quantity" of advantages minus drawbacks decreases over time, which I believe it does.
And we should indeed apply the logic to other inventions: some are more worth using than others, whereas in today's society, we just use all of them due to the mechanisms of the prisoner's dilemma. The Amish, on the other hand, apply deliberation on whether to use certain technologies, which is a far better approach.
Rather myopic and crude take, in my opinion. Because if I bring out a net, it doesn't change the woods for others. If I introduce AI into society, it does change society for others, even those who don't want to use the tool. You have really no conception of subtlety or logic.
If someone says driving at 200mph is unsafe, then your argument is like saying "driving at any speed is unsafe". Fact is, you need to consider the magnitude and speed of the technology's power and movement, which you seem incapable of doing.
> everyone I met there is actually trying to do the right thing
making human beings obsolete is not the right thing. nobody in openAI is doing the right thing.
in another part of the post he says safety teams work primarily on making sure the models don't say anything racist as well as limiting helpful tips on building weapons of terror… and that AGI safety is basically not a focus. i don't think this company should be allowed to exist. they don't have ANY right to threaten the existence and wellbeing of me and my kids!
> As I see it, the path to AGI is a three-horse race right now: OpenAI, Anthropic, and Google. Each of these organizations are going to take a different path to get there based upon their DNA (consumer vs business vs rock-solid-infra + data).
It is fairly rare to see an ex-employee put a positive spin on their work experience.
I don't think this makes OpenAI special. It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
Look at it this way: the flip side of "incredibly bottoms-up" from this article is that there are people who feel rudderless because there is no roadmap or a thing carved out for them to own. Similarly, the flip side of "strong bias to action" and "changes direction on a dime" is that everything is chaotic and there's no consistent vision from the executives.
This cracked me up a bit, though: "As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things. It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!
> There's no Bond villain at the helm
We're talking about Sam Altman here, right, the dude behind Worldcoin? A literal bond-villainesque biological data harvesting scheme?
I would never post any criticism of an employer in public. It can only harm my own career (just as being positive can only help it).
Given how vengeful Altman can reportedly be, this goes double for OpenAI. This guy even says they scour social media!
Whether subconsciously or not, one purpose of this post is probably to help this guy’s own personal network along; to try and put his weirdly short 14-month stint in the best possible light. I think it all makes him look like a mark, which is desirable for employers, so I guess it is working.
Calvin cofounded Segment, which had a $3.2B acquisition. He's not your typical employee.
Which is a YC startup. If you know anything about YC it's the network of founders supporting each other no matter what.
> no matter what
except if you publicly speak of their leaders in less than glowing terms
So this guy is filthy rich and yet decided to grind for 14 months with a newborn at home?
I guess that's why he's filthy rich.
Way to go to keep the boring chores of the first months with the partner and join the fun when the little one starts to be more fun after a year. With all that cash, I'm sure they could buy a bunch of help for the partner too.
There are certain experiences in life that one needs to go through so you keep grounded to what really matters.
I don't know, when I became a parent I was in for the full ride, not to have someone else raising her. Yes, raising includes changing diapers and all that.
There is some parenting, then there is good parenting. Most people don't have this option due to finances, but those that do and still avoid it to pick up just easy and nice parts - I don't have much sympathy nor respect for them.
Then later they even have the balls to complain how kids these days are unruly, never acknowledging massive gaps in their own care.
Plus it certainly helps the kid with bonding, emotional stability and keeps the parent more in touch emotionally with their own kid(s).
That's even more of a reason not to bad mouth other billionaires/billion dollar companies. Billionaires and billion dollar companies work together all the time. It's not a massive pool. There is a reason beef between companies and top level execs and billionaires is all rumors and tea-talk until a lawsuit drops out of no where.
You think every billionaire is gonna be unhinged like Musk calling the president a pedo on twitter?
He is still manipulatable and driven by incentive like anyone else.
What incentives? It's not a very intellectual opinion to give wild hypotheticals with nothing to go on other than "it's possible".
I am not trying to advance wild hypotheticals, but something about his behavior does not quite feel right to me. Someone who has enough money for multiple lifetimes, working like he's possessed, to launch a product minimally different than those at dozens of other companies, and leaving his wife with all the childcare, then leaving after 14 months and insisting he was not burnt out but without a clear next step, not even, "I want to enjoy raising my child".
His experience at OpenAI feels overly positive and saccharine, with a few shockingly naive comments that others have noted. I think there is obvious incentive. One reason for this is, he may be in burnout, but does not want to admit it. Another is, he is looking to the future: to keep options open for funding and connections if (when) he chooses to found again. He might be lonely and just want others in his life. Or to feel like he's working on something that "matters" in some way that his other company didn't.
I don't know at all what he's actually thinking. But the idea that he is resistant to incentives just because he has had a successful exit seems untrue. I know people who are as rich as he is, and they are not much different than me.
Calvin just worked like this when I was at Segment. He picked what he worked on and worked really intensely at it. People most often burn out because of the lack of agency, not hours worked.
Also, keep in mind that people aren't the same. What seems hard to you might be easy to others, vice versa.
> People most often burn out because of the lack of agency, not hours worked.
Why did Michael Jordan retire 3 times? Sure, you could probably write a book about it, but you would want to get to know the guy first.
Not sure if it's genuine insight or just a well-written bit of thoughtful PR.
I don't know if this happens to anyone else, but the more I read about OpenAI, the more I like Meta. And I deleted Facebook years ago.
i know calvin, and he's one of the most authentic people i've worked with in tech. this could not be more off the mark
This reflection seems very unlikely to be authentic because it is full of superlatives and not a single bad (or even just not-great) thing is mentioned. Real organizations made of real humans simply are not like this.
The fact that several commenters know the author personally goes some way to explain why the entire comment section seems to have missed the utterly unbalanced nature of the article.
> There's no Bond villain at the helm. It's good people rationalizing things.
I worked for a few years at a company that made software for casinos, and this was absolutely not the case there. Casinos absolutely have fully shameless villains at the helm.
> We are all very good and kind and not at all evil, trust us if we do say so ourselves
Do these people have even minimal self-awareness?
Interesting. A year ago I joined one of the larger online sportsbook/casinos. In terms of talent, employees are all over the map (both good and bad). But I have yet to meet a villain. Everyone here is doing the best they can.
Every villain wants to be the best villain they can be!
More seriously, everyone is the hero of their own story, no matter how obvious their failings are from the outside.
I’ve been burned by empathetically adopting someone’s worldview and only realizing later how messed up and self-serving it was.
I’m sure people working for cigarette companies are doing the best they can too. People can be good individuals and also work toward evil ends.
I am of the opinion that the greatest evils come from the most self-righteous.
That may very well be the case. But I think this is a distinct category of evil; the second one, in which you'll find most of the cigarette and gambling businesses, is that of evil caused by indifference.
"Yes, I agree there are some downsides to our product and there are some people suffering because of that - but no one is forcing them to buy from us, they're people with agency and free will, they can act as adults and choose not to buy. Now what is this talk about feedback loops and systemic effects? It's confusing, go away."
This category is where you'll also find most of the advertising business.
The self-righteous may be the source of the greatest evil by magnitude, but day-to-day, the indifferents make it up in volume.
I think y'all are agreeing.
VGT?
> It is fairly rare to see an ex-employee put a positive spin on their work experience
Much more common for OpenAI, because you lose all your vested equity if you talk negatively about OpenAI after leaving.
Absolutely correct.
There is a reason why there was cult-like behaviour on X amongst the employees supporting bringing back Sam as CEO when he was kicked out by the OpenAI board of directors at the time.
"OpenAI is nothing without it's people"
All of "AGI" (which actually was the lamborghinis, penthouses, villas and mansions for the employees) was all on the line and on hold if that equity went to 0 or would be denied selling their equity if they openly criticized OpenAI after they left.
Yes, and the reason for that is that employees at OpenAI believed (reasonably) that they were cruising for Google-scale windfall payouts from their equity over a relatively short time horizon, and that Altman and Brockman leaving OpenAI and landing at a well-funded competitor, coupled with OpenAI corporate management that publicly opposed commercialization of their technology, would torpedo those payouts.
I'd have sounded cult-like too under those conditions (but I also don't believe AGI is a thing, so would not have a countervailing cult belief system to weigh against that behavior).
> I also don't believe AGI is a thing
Why not? I don't think we're anywhere close, but there are no physical limitations I can see that prevent AGI.
It's not impossible in the same way our current understanding indicates FTL travel or time travel is.
I also believe that AGI is not a thing, but for different reasons. I notice that almost everybody seems to implicitly assume, without justification, that humans are a GI (general intelligence). I think it's easy to see that if we are not a GI, then we can't see what we're missing, so it will feel like we might be GI when we're really not. People also don't seem interested in justifying why humans would be GI but other animals with 99% of the same DNA aren't.
My main reason for thinking general intelligence is not a thing is similar to how Turing completeness is not a thing. You can conceptualize a Turing machine, but you can't actually build one for real. I think actual general intelligence would require an infinite brain.
If we were to believe the embodiment theory of intelligence (it’s by far not the only one out there, but very influential and convincing), this means that building an AGI is an equivalent problem to building an artificial human. Not a puppet, not a mock, not “sorta human”, but real, fully embodied human, down to gut bacterial biome, because according to the embodiment theory, this affects intelligence too.
In this formulation, it’s pretty much as impossible as time travel, really.
Sure, if we redefine "AGI" to mean "literally cloning a human biologically", then AGI suddenly is a very different problem (mainly one of ethics, since creating human clones, educating, brainwashing, and forcing them to respond to chat messages ala chatGPT has a couple ethical issues along the way).
I don't see how claiming that intelligence is multi-faceted makes AGI (the A is 'artificial' remember) impossible.
Even if _human_ intelligence requires eating yogurt for your gut biome, that doesn't preclude an artificial copy that's good enough.
Like, a dog is very intelligent, a dog can fetch and shake hands because of years of breeding, training, and maybe from having a certain gut biome. Boston Dynamics did not have to understand a single cell of the dog's stomach lining in order to make dog-robots perfectly capable of fetching and shaking hands.
I get that you're saying "yes, we've fully mapped the neurons of a fruit fly and can accurately simulate and predict how a fruit fly's brain's neurons will activate, and can create statistical analysis of fruit-fly behavior that lets us accurately predict their action for much cheaper even without the brain scan, but human brains are unique in a way where it is impossible to make any sort of simulation or prediction or facsimile that is 'good enough' because you also need to first take some bacteria from one of peter thiel's blood boys and shove it in the computer, and if we don't then we can't even begin to make a facsimile of intelligence". I just don't buy it.
“AGI” isn’t a thing and never will be. It fails even really basic scrutiny. The objective function of a human being is to keep its biological body alive and reproduce. There is no such similar objective on which a ML algorithm can be trained. It’s frankly a stupid idea propagated by people with no meaningful connection to the field and no idea what the fuck they’re talking about.
The "Silenced No More Act" (SB 331), effective January 1, 2022, in California, where OpenAI is based, limits non-disparagement clauses and retribution by employers, likely making that illegal in California, but I am not a lawyer.
Even if it's illegal, you'll have to fight them in court.
OpenAI will certainly punish you for this and most likely make an example out of you, regardless of the outcome.
The goal is corporate punishment, not the rule of law.
OpenAI never enforced this, removed it, and admitted it was a big mistake. I work at OpenAI and I'm disappointed it happened but am glad they fixed it. It's no longer hanging over anyone's head, so it's probably inaccurate to suggest that Calvin's post is positive because he's trying to protect his equity from being taken. (though of course you could argue that everyone is biased to be positive about companies they own equity in, generally)
> It's no longer hanging over anyone's head,
The tender offer limitations still are, last I heard.
Sure, maybe OA can no longer cancel your vested equity for $0... but how valuable is (non-dividend-paying) equity you can't sell? (How do you even borrow against it, say?)
Nope, happy to report that was also fixed.
(It would be a pretty fake solution if equity cancellation was halted, but equity could still be frozen. Cancelled and frozen are de facto identical until the first dividend payment, which could take decades.)
So OA PPUs can now be sold and transferred without restriction to arbitrary buyers, outside the tender offer windows?
No, that's still the same.
Also work at OpenAI. Every tender offer has made full payouts to previous employees. Sorry to ruin your witch hunt.
Here's what I think - while Altman was busy trying to convince the public the AGI was coming in the next two weeks, with vague tales that were equally ominous and utopian, he (and his fellow leaders) have been extremely busy trying hard to turn OpenAI into a product company with some killer offerings, and from the article, it seems they were rather good and successful at that.
Considering the high stakes, money, and undoubtedly the ego involved, the writer might have acquired a few bruises along the way, or might have lost out on some political infighting (remember how they mentioned they built multiple Codex prototypes; it must've sucked to see some other people's version chosen instead of your own).
Another possible explanation is that the writer's just had enough - enough money to last a lifetime, just started a family, made his mark on the world, and was no longer compelled (or able) to keep up with methed-up fresh college grads.
> remember how they mentioned they built multiple Codex prototypes, it must've sucked to see some other people's version chosen instead of your own
Well it depends on people’s mindset. It’s like doing a hackathon and not winning. Most people still leave inspired by what they have seen other people building, and can’t wait to do it again.
…but of course not everybody likes to go to hackathons
> OpenAI is perhaps the most frighteningly ambitious org I've ever seen.
That kind of ambition feels like the result of Bill Gates pushing Altman to the limit and Altman rising to the challenge. The famous "Gates demo" during the GPT‑2 days comes to mind.
Having said that, the entire article reads more like a puff piece than an honest reflection.
> everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions!
The operative word is “trying”. You can “try” to do the right thing but find yourself restricted by various constraints. If an employee actually did the right thing (e.g. publish the weights of all their models, or shed light on how they were trained and on what), they get fired. If the CEO or similarly high-ranking exec actually did the right thing, the company would lose out on profits. So, rationalization is all they can do. “I'm trying to do the right thing, but.” “People don't see the big picture because they're not CEOs and don't understand the constraints.”
There is lots of rationalizing going on in his article.
> I returned early from my paternity leave to help participate in the Codex launch.
10 years from now, the significance of having participated in that launch will be ridiculously small (unless you tell yourself that it was a pivotal moment of your life, even if it objectively wasn't), whereas those first weeks with your newborn will never come back. Kudos to your partner though.
Allow me to propose a different rationalization: "yes I know X might damage some people/society, but it was not me who decided, and I get lots of money to do it, which someone else would do if not me."
I don't think people who work on products that spy on people, create addiction or worse are as naïve as you portrayed them.
> It is fairly rare to see an ex-employee put a positive spin on their work experience.
The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.
I was at a company that turned into the most toxic place I had ever worked due to a CEO who decided to randomly get involved with projects, yell at people, and even fire some people on the spot.
Yet a lot of people wrote glowing stories about their time at the company on blogs or LinkedIn because it was beneficial for their future job search.
> It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
For the posts that make HN I rarely see it that way. The recent trend is for passionate employees who really wanted to make a company work to lament how sad it was that the company or department was failing.
> The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.
Yeah I had to re-read the sentence.
The positive "Farewell" post is indeed the norm. Especially so from well known, top level people in a company.
I’m not saying this about OpenAI, because I just don’t know. But Bond villains exist.
Usually the level 1 people are just motivated by power and money to an unhealthy degree. The worst are true believers in something. Even something seemingly mild.
> It is fairly rare to see an ex-employee put a positive spin on their work experience.
FWIW, I have positive experiences about many of my former employers. Not all of them, but many of them.
Same here. If I wrote an honest piece about my last employer, it would sound very similar in tone to what was written in this article
We already have bad guys doing X right now (literally, not the placeholder variable)
> It is fairly rare to see an ex-employee put a positive spin on their work experience.
Sure, but this bit really makes me wonder if I'd like to see what the writer is prepared to do to other people to get to his payday:
"Nabeel Quereshi has an amazing post called Reflections on Palantir, where he ruminates on what made Palantir special. I wanted to do the same for OpenAI"
I agree with your points here, but I feel the need to address the final bit. This is not aimed personally at you, but at the pattern you described - specifically, at how it's all too often abused:
> Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!
Those are the easy cases, and correspondingly, you don't see much of those - or at least few are paying attention to companies talking like that. This is distinct from saying "X is going to directly benefit the society, and we're merely charging for it as fair compensation of our efforts, much like a baker charges you for the bread" or variants of it.
This is much closer to what most tech companies try to argue, and the distinction seems to escape a lot of otherwise seemingly sharp people. In threads like this, I surprisingly often end up defending tech companies against such strawmen - because come on, if we want positive change, then making up a simpler but baseless problem, calling it out, and declaring victory, isn't helping to improve anything (but it sure does drive engagement on-line, making advertisers happy; a big part of why press does this too on a routine basis).
And yes, this applies to this specific case of OpenAI as well. They're not claiming "LLMs are going to indirectly benefit the society because we're going to get rich off them, and then use that money to fund lots of nice things". They're just saying, "here, look at ChatGPT, we believe you'll find it useful, and we want to keep doing R&D in this direction, because we think it'll directly benefit society". They may be wrong about it, or they may even knowingly lie about those benefits - but this is not trickle-down economics v2.0, SaaS edition.
Well, as a reminder, OpenAI has a non-disparagement clause in their contracts, so the only thing you'll ever see from former employees is positive feedback.
> It is fairly rare to see an ex-employee put a positive spin on their work experience.
I liked my jobs and bosses!
Most posts of the form "Reflections on [Former Employer]" on HN are positive.
> That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things
I mean, that's a leap. There could be a Bond villain who sets up incentives such that people who rationalize the way they want are the ones who get promoted / their voices amplified. Just because individual workers generally seem like they're trying to do the best thing doesn't mean the organization isn't set up specifically and intentionally to make certain kinds of "shady" decisions.
What a great post.
Some points that stood out to me:
- Progress is iterative and driven by a seemingly bottom up, meritocratic approach. Not a top down master plan. Essentially, good ideas can come from anywhere and leaders are promoted based on execution and quality of ideas, not political skill.
- People seem empowered to build things without asking permission there, which seems like it leads to multiple parallel projects with the promising ones gaining resources.
- People there have good intentions. Despite public criticism, they are genuinely trying to do the right thing and navigate the immense responsibility they hold.
- Product is deeply influenced by public sentiment, or more bluntly, the company "runs on twitter vibes."
- The sheer cost of GPUs changes everything. It is the single factor shaping financial and engineering priorities. The expense for computing power is so immense that it makes almost every other infrastructure cost a "rounding error."
- I liked the take of the path to AGI being framed as a three horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA), with each organisation's unique culture shaping its approach to AGI.
> I liked the take of the path to AGI being framed as a three horse race between OpenAI (consumer product DNA), Anthropic (business/enterprise DNA), and Google (infrastructure/data DNA)
Wouldn't want to forget Meta which also has consumer product DNA. They literally championed the act of making the consumer the product.
lol, I almost missed the sarcasm there :)
And don't forget xAI, which has MechaHitler in its product DNA
"Hey, Twitter vibes are a metric, so make sure to mention the company on Twitter if you want to be heard."
Twitter is a one-way communication tool. I doubt they're using it to create a feedback loop with users, maybe just to analyse their sentiment after a release?
The entire article reads more like a puff piece than an honest reflection. Those of us who live outside the US are more sceptical, especially after everything revealed about OpenAI in the book Empire of AI.
Engineers thinking they're building god is such a good marketing strategy. I can't overstate it. It's even difficult to be rational about it. I don't actually believe it's true, I think it's pure hype and LLMs won't even approximate AGI. But this idea is sort of half-immune to criticism or skepticism: you can always respond with "but what if it's true?". The stakes are so high that the potentially infinite payoff snowballs over any probabilities. 0.00001% multiplied by infinity is an infinite EV, so you have to treat it like that. Best marketing, it writes itself.
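To make the arithmetic behind that last point explicit, here's a toy sketch (the probability and payoffs below are made up, purely illustrative) of why any nonzero chance of an unbounded payoff swamps every finite alternative under naive expected value:

    # Toy illustration of the "infinite expected value" reasoning above.
    # p_agi is a made-up number; the point is only that any nonzero probability
    # times an unbounded payoff dominates every finite option.
    p_agi = 1e-7                        # hypothetical chance the "building god" bet pays off
    payoff_agi = float("inf")           # payoff treated as unbounded
    payoff_ordinary = 1.0               # some ordinary, finite return

    ev_bet_on_agi = p_agi * payoff_agi  # inf, no matter how small p_agi is
    ev_ordinary = payoff_ordinary       # 1.0

    print(ev_bet_on_agi > ev_ordinary)  # True - the unbounded payoff does all the work

The conclusion is driven entirely by the unbounded payoff, not by any evidence about the probability, which is what makes the pitch so hard to argue against.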
Similar to Pascal's wager, which pretty much amounts to "yeah, God is probably not real, _but what if it is_? The utility of getting into heaven is infinite (and hell is infinitely negative), so any non-zero probability that God is real should make you be religious, just in case."
https://en.wikipedia.org/wiki/Pascal%27s_wager#Analysis_with...
I am convinced!
Which god should I believe in, though? There are so many.
And what if I pick the wrong god?
You should believe in all of them. Just spray and pray!
> I don't actually believe it's true, I think it's pure hype and LLMs won't even approximate AGI.
Not sure how you can say this so confidently. Many would argue they're already pretty close, at least on a short time horizon.
100%
"but what if it's true?"
There was nothing hypothesized about next-token prediction and emergent properties (they didn't know for sure that scale would allow it to generalize). "What if it's true?" is part of the LLM story; there is a mystical element here.
> There was nothing hypothesized that next-token prediction and scale could show emergent properties.
Nobody ever hypothesized it before it happened? Hard to believe.
Someone else can confirm, but from my understanding, no, they did not know sentiment analysis, reasoning, few-shot learning, chain of thought, etc. would emerge at scale. Sentiment analysis was one of the first things they noticed a scaled-up model could generalize to. Remember, all they were trying to do was get better at next-token prediction; there was no concrete plan to achieve "instruction following", for example. We can never truly say going up another order of magnitude on the number of params won't achieve something (it could, for reasons unknown, just like before).
It is somewhat parallel to the story of Columbus looking for India but ending up in America.
Didn't it just get better at next-token prediction? I don't think anything emerged in the model itself; what was surprising is how good next-token prediction itself turns out to be at predicting all kinds of other things, no?
> sentiment analysis, reasoning, few shot learning, chain of thought, etc would emerge at scale
Some would say it still hasn't (to an agreeable level).
The Schaeffer et al. "Mirage" paper showed that many claimed emergent abilities disappear when you use different metrics: what looked like sudden capability jumps were often artifacts of using harsh/discontinuous measurements rather than smooth ones.
But I'd go further: even abilities that do appear "emergent" often aren't that mysterious when you consider the training data. Take instruction following - it seems magical that models can suddenly follow instructions they weren't explicitly trained for, but modern LLMs are trained on massive instruction-following datasets (RLHF, constitutional AI, etc.). The model is literally predicting what it was trained on. Same with chain-of-thought reasoning - these models have seen millions of examples of step-by-step reasoning in their training data.
The real question isn't whether these abilities are "emergent" but whether we're measuring the right things and being honest about what our training data contains. A lot of seemingly surprising capabilities become much less surprising when you audit what was actually in the training corpus.
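To make the metric point concrete, here's a minimal sketch (the accuracy curve and numbers are invented, not taken from the paper) of how a smoothly improving per-token accuracy can look like a sudden "emergent" jump once you score it with an all-or-nothing exact-match metric:

    import math

    # Hypothetical per-token accuracy that improves smoothly with (log) parameter count.
    # Scoring a 30-token answer with exact match turns the same smooth curve into
    # something that looks like a sudden capability jump.
    ANSWER_LENGTH = 30  # tokens that must all be correct for an exact match

    for log_params in range(7, 13):  # 10^7 .. 10^12 parameters (made-up scale)
        per_token_acc = 1 - 0.5 * math.exp(-(log_params - 6))  # smooth, invented curve
        exact_match = per_token_acc ** ANSWER_LENGTH           # harsh, all-or-nothing metric
        print(f"10^{log_params} params: per-token {per_token_acc:.3f}, exact match {exact_match:.3f}")

The per-token numbers creep up gradually while the exact-match score sits near zero and then shoots toward 1, which is the kind of measurement artifact the Mirage paper describes.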
> Thanks to this bottoms-up culture, OpenAI is also very meritocratic. Historically, leaders in the company are promoted primarily based upon their ability to have good ideas and then execute upon them. Many leaders who were incredibly competent weren't very good at things like presenting at all-hands or political maneuvering. That matters less at OpenAI than it might at other companies. The best ideas do tend to win.
This sets off my red flags: companies that say they are meritocratic, flat, etc. often have invisible structures that favor the majority. Valve Corp is a famous example of this leading to many problems; see https://www.pcgamer.com/valves-unusual-corporate-structure-c...
> It sounds like a wonderful place to work, free from hierarchy and bureaucracy. However, according to a new video by People Make Games (a channel dedicated to investigative game journalism created by Chris Bratt and Anni Sayers), Valve employees, both former and current, say it's resulted in a workplace two of them compared to The Lord of The Flies.
I think in this structure people only think locally; they are not concerned with the overall mission of the company and do not actively think about the morality of the mission or whether they are following it.
Are you implying that a top-down corporate structure is better?
> The Codex sprint was probably the hardest I've worked in nearly a decade. Most nights were up until 11 or midnight. Waking up to a newborn at 5:30 every morning. Heading to the office again at 7a. Working most weekends.
There's so much compression / time-dilation in the industry: large projects are pushed out and released in weeks; careers are made in months.
Worried about how sustainable this is for its people, given the risk of burnout.
If anyone tried to demand that I work that way, I’d say absolutely not.
But when I sink my teeth into something interesting and important (to me) for a few weeks’ or months’ nonstop sprint, I’d say no to anyone trying to rein me in, too!
Speaking only for myself, I can recognize those kinds of projects as they first start to make my mind twitch. I know ahead of time that I'll have no gas left in the tank by the end, and I plan accordingly.
Luckily I’ve found a community who relate to the world and each other that way too. Often those projects aren’t materially rewarding, but the few that are (combined with very modest material needs) sustain the others.
I think senior folks at OpenAI realized this is not sustainable and hence took the "wellness week".
I think any reasonable manager would appreciate that sort of interest in a project and would support it, not demand it.
And, at least if I'm your manager, zealously defend the sanctity of your recovery time afterward.
I'd be curious to know about this community. Is this a formal group or just the people that you've collected throughout your life?
The latter. I mean, I feel like a disproportionate number of folks who hang around here have that kind of disposition.
That just turns out to be the kind of person who likes to be around me, and I around them. It’s something I wish I had been more deliberate about cultivating earlier in my life, but not the sort of thing I regret.
In my case that’s a lot of artists/writers/hackers, a fair number of clergy, and people working in service to others. People quietly doing cool stuff in boring or difficult places… people whose all-out sprints result in ambiguity or failure at least as often as they do success. Very few rich people, very few who seek recognition.
The flip side is that neither I nor my social circles are all that good at consistency—but we all kind of expect and tolerate that about each other. And there’s lots of “normal” stuff I’m not part of, which I probably could have been if I had tried. I don’t know what that means to the business-minded people around here, but I imagine it includes things like corporate and nonprofit boards, attending sports events in stadia, whatever golf people do, retail politics, Society Clubs For Respectable People, “Summering,” owning rich people stuff like a house or a car—which is fine with me!
More than enough is too much :)
This guy, who is already independently wealthy, chose to work 16-17 hours a day, 7 days a week, instead of raising his newborn child, and thanks his partner for "childcare duties". Pretty much tells you everything you need to know.
It's not sustainable, at all, but if it's happening just a couple times throughout your career, it's doable; I know people who went through that process, at that company, and came out of it energized.
I think Altman said on the Lex Fridman podcast that he works 8 hours, the first 4 being the most productive, and that he doesn't believe CEOs who claim they work 16 hours a day. A weird contrast to what's described in the article. This confirms my theory that there are two types of people in startups: founders and everybody else; the former are there to potentially make a lot of money, and the latter are there to learn and leave.
I couldn't imagine asking my partner to pick up that kind of childcare slack. Props to OP's wife for doing so, and I'm glad she got the callout at the end, but god damn.
The author left after 14 months at OpenAI, so that seems like a burnout duration.
It's worse than that. Lots of power struggles and god-like egos. Altman called one of the employees "Einstein" on Twitter, some think they were chosen to transcend humanity, others believe they're at war with China, some want to save the world, others want to see it burn, and some just want their names up there with Gates and Jobs.
This is what ex-employees said in Empire of AI, and it's the reason Amodei and Kaplan left OpenAI to start Anthropic.
He references childcare and paternity leave in the post and he was a co-founder in a $3B acquisition. To me it seems it is a time-of-life/priorities decision not a straight up burnout decision.
Working a job like that would literally ruin my life. There's no way I could have time to be a good husband and father under those conditions, some things should not be sacrificed.
How did they have any time left to be a parent?
> I returned early from my paternity leave to help participate in the Codex launch.
Obvious priorities there.
That part made me do a double take. I hope his child never learns they were being put second.
It's just a google search away.
Many people are bad parents. Many are bad at their jobs. Many are bad at both. At least this guy is good at his job, and can provide very well for his family.
If you think being good at your job is providing for your family, you've been raised with some bad parenting examples.
It'll be of little comfort to the kid.
It is all relative. A workaholic seems pretty nice compared to growing up with actual, objectively bad parents: workaholics who are also addicts, perpetually drunk, gamblers, in jail, no-shows for everything you put time into, competing with you as you pick up basic skills, abusing you for being a kid, etc.
There are plenty worse than that. The dramatic-fiction trope of the parent who misses out on a kid's life is much better than what a lot of children actually have.
Yet, all kids grow up, and the greatest factor determining their overall well-being through life is socioeconomic status, not how many hours a father was present.
I'm very interested in that topic and haven't made up my mind about what really counts in parenting. Do you have sources for the claim about well-being (asking explicitly about mental well-being, not just material well-being) being more influenced by socioeconomic status than by parental absence?
About the guy: I think if it's just a one-time thing it's OK, but the way he presents himself gives reason for doubt.
They were showered with assets for being a lucky individual in a capital-driven society; time is interchangeable with wealth, as evidenced throughout history.
This guy is young. He can experience all that again, if it is that much of a failure, and he really wants to.
Sure, there are ethical issues here, but really, they can be offset by restitution, let's be honest.
He cannot experience time with his kid again. In any case he's on a fast track to divorce rn.
My hot take is that I don't think burnout has much to do with raw hours spent working. I feel it has a lot more to do with a sense of momentum and autonomy. You can work extremely hard, 100-hour weeks six months in a row, on the right team and still feel highly energized at the end of it. But if it feels like wading through a swamp, you will burn out very quickly, even if it's just 50 hours a week. I also find ownership has a lot to do with burnout.
At some level of raw hours, your health and personal relationships outside work both begin to wither, because there are only 24 hours in a day. That doesn't always cause burnout, but it throws what you are sacrificing into high contrast.
Yup, the yearly average should be around 35-45 hours per week, but sprinting is fine when the opportunity is there.
Exactly this. It's not about hours spent (at least, hours are not a good metric; working less will benefit a burned-out person, but the hours were not the root cause). The problem is lack of autonomy, lack of control over things you care about deeply. If those go out the window, the fire burns out quickly. IMHO, when this happens it's usually because a company has become too big, and the people in control lack subject matter expertise, have lost contact with the people who drive the company, and are instead guided by KPIs and the rules they enforce while grasping for that feeling of being in control.
And if the work you're doing feels meaningful and you're properly compensated. Ask people to work really hard to fill out their 360 reviews and they should rightly laugh at you.
I hope that's not a hot take, because it's 100% correct.
People conflate the terms "burnout" and "overwork" because they seem semantically similar, but they are very different.
You can fix overwork with a vacation. Burnout is a deeper existential wound.
My worst bout of burnout actually came in a cushy job where I was consistently underworked but felt no autonomy and no sense of purpose about why we were doing the things we were doing.
In 2024 my wife and I did a startup together. We worked almost every hour we were awake, 16-18 hours a day, 7 days a week. We ate, we went for an hour's walk a day, and the rest of the time I was programming. For 9 months. I'd never worked so hard in my life before. And not a lick of burnout during that time, not a moment of it, whereas I've been burned out by 6-hour workdays at other organizations. If you're energized by something, I think that protects you from burnout.
> You can work extremely hard 100 hour weeks six months in a row, in the right team and still feel highly energized at the end of it.
Something about youth being wasted on the young.
I’m sure they’ll look back at it and smile, no?
For the amount of money they are being paid, that is relatively easy; normal people are paid way less in harder jobs, for example working in an Amazon warehouse or doing door-to-door sales.
I don't really have an opinion on working that much, but working that much and having to go into the office to spend those long hours sounds like torture.
Those that love the work they do don't burn out, because every moment working on their projects tends to be joyful. I personally hate working with people who hate the work they do, and I look forward to them being burned out
Sure, but this schedule is like, maybe 5 hours of sleep per night. Other than an extreme minority of people, there’s no way you can be operating on that for long and doing your best work. A good 8 hours per night will make most people a better engineer and a better person to be around.
"You don't really love what you do unless you're willing to do it 17 hours a day every day" is an interesting take.
You can love what you do but if you do more of it than is sustainable because of external pressures then you will burn out. Enjoying your work is not a vaccine against burnout. I'd actually argue that people who love what they do are more likely to have trouble finding that balance. The person who hates what they do usually can't be motivated to do more than the minimum required of them.
Weird how we went from like the 4 hour workweek and all those charts about how people historically famous in their field spent only a few hours a day on what they were most famous for, to "work 12+ hours a day or you're useless".
Also this is one of a few examples I've read lately of "oh look at all this hard work I did", ignoring that they had a newborn and someone else actually did all of the hard work.
I read gp’s formulation differently: “if you’re working 17 hours a day, you’d better stop soon unless you’re doing it for the love of doing it.” In that sense it seems like you and gp might agree that it’s bad for you and for your coworkers if you’re working like that because of external pressures.
I don’t delight in anybody’s suffering or burnout. But I do feel relief when somebody is suffering from the pace or intensity, and alleviates their suffering by striking a more sustainable balance for them.
I feel like even people energized by efforts like that pay the piper: after such a period I for one “lay fallow”—tending to extended family and community, doing phone-it-in “day job” stuff, being in nature—for almost as long as the creative binge itself lasted.
I would indeed agree with things as you've stated. I interpreted "the work they do" to mean "their craft" but if it was intended as "their specific working conditions" I can see how it'd read differently.
I think there are a lot of people that love their craft but are in specific working conditions that lead to burnout, and all I was saying is that I don't think it means they love their craft any less.
> Worried about how sustainable this is for its people, given the risk of burnout.
Well, given the amount of money OpenAI pays its engineers, this is what comes with it. It tells you that this is not a daycare, or for coasters, or for the faint of heart, especially at a startup at the epicenter of AI competition.
There is now a massive queue of desperate 'software engineers' ready to kill for a job at OpenAI, who will not tolerate the word "burnout" and might even work 24 hours to keep the job away from others.
For those who love what they do, the word "burnout" doesn't exist for them.
For these prestigious companies it makes sense, work hard for a few years then retire early.
This is what being a wartime company looks like
I am not saying that’s easy work but most motivated people do this. And if you’re conscious of this that probably means you viewed it more as a job than your calling.
Doesn't it bother anybody that their product heavily relies on FastAPI, according to this post, yet they haven't donated to the project and aren't listed as sponsors?
https://github.com/sponsors/tiangolo#sponsors
https://github.com/fastapi/fastapi?tab=readme-ov-file#sponso...
Presumably it also relies on Python, Linux, nginx, coreutils and a bunch of other stuff they haven't donated to.
no, because I wouldn't expect anything good from openai.
This stuff:
- The company was a little over 1,000 people. One year later, it is over 3,000.
- Changes direction on a dime.
- Very secretive place.
With the added "everything is a rounding error compared to GPU cost" and "this creates a lot of strange-looking code because there are so many ways you can write Python".
Is not something that is going to last.
This was good, but the one thing I most wanted to know about what it's like building new products inside of OpenAI is how and how much LLMs are involved in their building process.
Same! I was really hoping it was discussed. I’m assuming “lots, but it depends on what you’re working on”?
That's a good question!
He describes 78,000 public pull requests per engineer over 53 days. LMAO. So it's likely 99.99% LLM written.
Lots of good info in the post, surprised he was able to share so much publicly. I would have kept most of the business process info secret.
Edit: NVM. That 78k pull requests is for all users of Codex, not all engineers of Codex.
I didn't find any surprises reading this post.
If anything about OpenAI should bother people, it's how they pretend to be blind to the consequences because of "the race". Leaving the decision of IF and WHAT should be done to the top heads alone has never worked well.
> Good ideas can come from anywhere, and it's often not really clear which ideas will prove most fruitful ahead of time.
Is that why they have dozens of different models?
> Many leaders who were incredibly competent weren't very good at things like presenting at all-hands or political maneuvering.
I don't think the Sam/Board drama confirms this.
> The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in.
Did you thank your OpenAI overlords for letting you access their sacred latest models?
This reads like an ad for OpenAI, or an attempt by the author to court them again? I am not sure how anyone can take his words seriously.
>Safety is actually more of a thing than you might guess
Considering that all the people who led the different safety teams have left or been fired, that Superalignment has been a total bust, and the various accounts from other employees about the lack of support for safety work, I find this statement incredibly out of touch and borderline intentionally misleading.
This is not true, and it's weird, hyperbolic misinformation.
For a company that has grown so much in such a short time, I continue to be surprised by its lack of technical writers. Saying the docs could be better is a euphemism, and I still can't find fellow tech writers working there. Compare this with Anthropic and its documentation.
I don't know what the rationale is for not hiring tech writers, other than that nobody has suggested it yet, which is sad. Great dev tools require great docs, and great docs require teams that own them and grow them as a product.
The higher-ups don't think there's value in that. Back at DigitalOcean they had an amazing tech writing team, with people with years of experience, doing some of the best tech docs in the industry. When the layoffs started, the writing team was the first to be cut.
People look at it as a cost and nothing else.
I didn't realise that team at DO was let go; what a horrible decision. The SERP footprint of DO was immense and the quality of the content was fantastic.
I think he explained it in his post? You get rewarded for "actions" which is making cool things and stuff. You don't get rewarded for writing docs.
Whoa, there is a ton of interesting stuff in this one, and plenty of information I've never seen shared before. Worth spending some time with it.
Agreed!
> The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in. There's an API you can sign up and use–and most of the models (even if SOTA or proprietary) tend to quickly make it into the API for startups to use.
The comparison here should clearly be with the other frontier model providers: Anthropic, Google, and potentially Deepseek and xAI.
Comparing them gives the exact opposite conclusion: OpenAI is the only model provider that gates API access to their frontier models behind draconian identity verification (also, Worldcoin anyone?). Anthropic and Google do not do this.
OpenAI hides their model's CoT (inference-time compute, thinking). Anthropic to this day shows their CoT on all of their models.
Making it pretty obvious this is just someone patting themselves on the back and doing some marketing.
Yes, also OpenAI being this great nimble startup that can turn on a dime, while in reality Google reacted to them and has now surpassed them technically in every area, except image prompt adherence.
Anthropic has banned my accounts for violating the ToS before I had even used them. Appeals do nothing. Only when I started using a Google login did they stop banning them. This isn't an OpenAI-only problem.
There are only two hard things in Computer Science: cache invalidation and naming things:
CloseAI.
> OpenAI hides their model's CoT (inference-time compute, thinking)
Probably because Deepseek trained a student model off their frontier model.
And the same thing could very easily happen to Anthropic, yet they choose not to hide it.
> The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement.
That is literally how OpenAI gets data for fine-tuning its models: by testing them on real users and letting them supply data and use cases. (Tool calling, computer use, thinking: all of these were championed by people outside, and they had the data.)
Good writing, I enjoyed the article. Also, I guess it looks like more time was spent writing this article than actually working at OpenAI? A one-year tenure including a paternity leave?
Are any of these causal to OpenAI's success? Or are they incidental? You can throw all of this "culture" into an org but I doubt it'd do anything without the literal world-changing technology the company owns.
> There's a corollary here–most research gets done by nerd-sniping a researcher into a particular problem. If something is considered boring or 'solved', it probably won't get worked on.
This is a very interesting nugget, and if accurate this could become their Achilles heel.
It's not "their" Achilles heel. It's the Achilles heel of the way humans work.
Most top-of-their-field researchers are on top of their field because they really love it, and are willing to sink insane amount of hours into doing things they love.
> An unusual part of OpenAI is that everything, and I mean everything, runs on Slack.
Not that unusual nowadays. I'd wager every tech company founded in the last ~10 years works this way. And many of the older ones have moved off email as well.
I wonder how much of this data Salesforce can use, a literal goldmine of information
Isn't there a clause in the Slack contract that you can't use the Slack API to pull data to train an AI?
rules for thee, not for me
What I really wanted to know is whether OpenAI (and other labs, for that matter) actually use their own products, not just casually, but with LLMs at the core of how they operate. For example: using LLMs for coding in prod, training/fine-tuning internal models for aligning on the latest updates, finding answers, etc. Do they put their money where their mouth is? Do LLMs help with productivity? There is no mention of it in the article, so I guess they don't?
Yes we do. If you worked at Google you know moma. Our moma is an internal version of chat. It is very good.
I don’t know, but I’d guess they are using them heavily, though in a piecemeal fashion.
As impressive as LLMs can be at one-shotting certain kinds of tasks, working in a sprawling production codebase like the one described with tight performance constraints, subtle interdependencies, cross-cutting architectural concerns, etc. still requires a human driving most of the time. LLMs help a lot for this kind of work, but the human is either carefully assimilating their output or carefully choosing spots where (with detailed prompts) they can generate usable code directly.
Again, just a guess, but this my impression of how experienced engineers (including myself) are using LLMs in big/nontrivial codebases, and I’ve seen no indication that engineering processes at the labs are much different from the wider industry.
This is Silicon Valley culture on steroids: I really have to question whether it is positive for any involved party. Codex has almost no mindshare, and rightly so. It's a textbook also-ran, except it came from the most dominant player and was outpaced by Claude Code on the order of weeks.
Why go through all that? A much better scenario would have been OpenAI carefully assessing different approaches to agentic coding and releasing a more fully baked product with solid differentiation. Even Amazon just did that with Kiro.
What I read in this blogpost is a description of how every good research organization works, from academia to private labs. The command and control, centrally planned approach doesn't work.
Codex is quite different from Claude Code. It’s more similar to Devin.
Maybe you’re thinking of the confusingly named Codex CLI?
Fascinating that you chose to compare OpenAI's culture to Los Alamos. I can't tell if you're hinting AI is as world ending as nuclear weapons or not.
>As I see it, the path to AGI is a three-horse race right now: OpenAI, Anthropic, and Google.
Sleeping on Keen Technologies, I see.
If you'd like some "objective" insights into how bottoms-up innovation at OpenAI works..
a research manager there coauthored this under-hyped book: https://engineeringideas.substack.com/p/review-of-why-greatn...
While their growth is faster and technology different, the atmosphere feels very much like AWS back in 2014. I stayed for 8 years because I enjoyed it so much.
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing
I doubt many people would say something contrary to this about their (former) colleagues, which means we should always take this with a (large) grain of salt.
Do I think (most) AT&T employees wanted to let the NSA spy on us? Probably not. Google engineers and ICE? Palantir and... well, I don't know, I think everyone there knows what Palantir does.
Wham. Thanks for sharing anecdotal episodes from OAI's inner mechanism from an eng perspective. I wonder, if OAI weren't married to Azure, would the infra be more resilient and require less eng effort to invent things just to run (at scale)?
What I haven't seen much of is the split between eng and research, and how people within the company are thinking about AGI and the future, the workforce, etc. Is it the usual SF wonderland, or is there an OAI-specific value alignment once someone is working there?
This is an incredibly fascinating read into how OpenAI works.
Some of the details seem rather sensitive to me.
I'm not sure if the essay is going to stay up for long, given how "secretive" OpenAI is claimed to be.
I'm at a point in my life and career where I'd never entertain working those hours. Missed basketball games, seeing kids come home from school, etc. I do think when I first started out, and had no kiddos, maybe some crazy sprints like that would've been exhilarating. No chance now though.
> I'm at a point in my life and career where I'd never entertain working those hours.
That’s ok.
Just don't complain about the cost of daycare, private school tuition, or your parents' senior home/medical bills.
> Just don't complain about the cost of daycare, private school tuition, or your parents' senior home/medical bills.
How does any of this relate to the number of hours one works?
> What's funny about this is there are exactly three services that I would consider trustworthy: Azure Kubernetes Service, CosmosDB (Azure's document storage), and BlobStore.
CosmosDB is trustworthy? Everyone I know that used CosmosDB ended up rewriting their code because of throttling.
Great read! As a software engineer sitting here in India, it feels like a privilege to peek inside how OpenAI works. Thanks for sharing!
These one or two year tenures.. I don't know man
This is just the exact same culture as Deepmind minus the "everything on Slack" bulletpoint.
Surely one wouldn't complain about infra at DeepMind?
>It's hard to imagine building anything as impactful as AGI
>...
>OpenAI is also a more serious place than you might expect, in part because the stakes feel really high. On the one hand, there's the goal of building AGI–which means there is a lot to get right.
I'm kind of surprised people are still drinking this AGI Kool-Aid.
For real, same. The level of delusion. I think what'll happen is they'll get some really advanced agents that can effectively handle most general tasks and they'll call it AGI and say they've done it. It won't really be AGI, but a lot of people will have bought into the lie thanks to the incredibly convincing facsimile they'll have created.
> i think what'll happen is they'll get some really advanced agents that can effectively handle most general tasks and they'll call it AGI
They likely won't wait even for that. Because that itself is still really far off.
Granted the "OpenAI is not a monolith" comment, interesting that use of AI assisted coding was a curious omission from the article -- no mention if encouraged or discouraged.
Python monorepo is the biggest surprise in this whole article
Chunking a codebase that you entirely own into packages is as if you're intentionally trying to make your life miserable by imposing the same kind of volatility that you would otherwise find in the development process of building a Linux distribution. It's a misnomer.
Interesting read!
Discounting Chinese labs entirely for AGI seems like a misstep though. I find it hard to believe there won't be at least a couple of contenders.
Well they don’t have (good) GPUs, so how are they going to seriously compete?
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.
To quote Jonathan Nightingale from his famous thread on how Google sabotaged Mozilla [1]:
--- start quote ---
The question is not whether individual sidewalk labs people have pure motives. I know some of them, just like I know plenty on the Chrome team. They’re great people. But focus on the behaviour of the organism as a whole. At the macro level, google/alphabet is very intentional.
--- end quote ---
Replace that with OpenAI
[1] https://archive.is/2019.04.15-165942/https://twitter.com/joh...
"Safety is actually more of a thing than you might guess if you read a lot from Zvi or Lesswrong. There's a large number of people working to develop safety systems. Given the nature of OpenAI, I saw more focus on practical risks (hate speech, abuse, manipulating political biases, crafting bio-weapons, self-harm, prompt injection) than theoretical ones (intelligence explosion, power-seeking). That's not to say that nobody is working on the latter, there's definitely people focusing on the theoretical risks. But from my viewpoint, it's not the focus."
This paragraph doesn't make any sense. If you read a lot of Zvi or LessWrong, the misaligned intelligence explosion is the safety risk you're thinking of! So readers' guesses are actually right that OpenAI isn't really following Sam Altman's:
"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could."[0]
[0] https://blog.samaltman.com/machine-intelligence-part-1
He joined in May of last year and left recently. About one year of stay.
I wonder whether one year is enough time for programmers to understand a codebase, let alone meaningfully contribute patches. But then we see that job hopping is increasingly common, which results in a drop in product quality. I wonder what value the job hoppers are adding to the company.
Well, since he worked on a brand-new product, it probably didn't matter too much.
What’s the GTM role referenced a couple of times in the post?
Go-to-market. Outbound marketing and sales, pipeline definition, analytics.
That’s how I imagined it, kind of a hybrid of what I’ve seen called Product Marketing Manager and Product Analyst, but other replies and OpenAI job postings indicate maybe it’s a different role, more hands on building, getting from research to consumer product maybe?
GTM = go to market
An actual offering made to the public that can be paid for.
“Go To Market”, ie the group that turns the tech into products people can use and pay for.
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.
I appreciate where the author is coming from, but I would have just left this part out. If there is anything I've learned during my time in tech (ESPECIALLY in the Bay Area) it's that the people you didn't meet are absolutely angling to do the wrong thing(TM).
I've been in circles with very rich and somewhat influential tech people and it's a lot of talk about helping others, but somehow beneath the veneer of the talk of helping others you notice that many of them are just ripping people off, doing coke and engaging in self-centered spiritual practices (especially crypto people).
I also don't trust that people within the system can assess if what they're doing is good or not. I've talked with higher ups in fashion companies who genuinely believe their company is actually doing so much great work for the environment when they basically invented fast-fashion. I've felt it first hand personally how my mind slowly warped itself into believing that ad-tech isn't so bad for the world when I worked for an ad-tech company, and only after leaving did I realize how wrong I was.
And it's not just about some people doing good and others doing bad. Individual employees all doing the "right thing" can still be collectively steered in the wrong direction by higher ups. I'd say this describes the entirety of big tech.
Yes. We already know that Altman parties with extremists like Yarvin and Thiel and donates millions to far-right political causes. I’m afraid the org is rotten at its core. If only the coup had succeeded.
When your work provides lunch in a variety of different cafeterias all neatly designed to look like standalone restaurants, directly across from which is an on-campus bank that will assist you with all of your financial needs before you take your company-operated Uber-equivalent to the next building over and have your meeting either in that building's ballpit, or on the tree-covered rooftop that - for some reason - has foxes on top, it's easy to focus only on the tiny "good" thing you're working on and not the steaming hot pile of garbage that the executives at your company are focused on but would rather you not see.
Edit: And that's to say nothing of the very generous pay...
My biggest problem with these new companies is their core philosophy. First, these companies generate their own demand — natural demand for their products rarely exists. Therefore, they act more like sellers than developers. Second, they always follow the same maxim: "What's the next logical step?" This naturally follows from the first premise, because this allows you to ignore everything "real". You are simply bound to logic. They have no "problems" to solve, yet they offer you solutions - simply as a logical consequence of their own logic. Has anyone ever actually asked if coders would use agents if it meant losing their jobs? Thirdly, this naturally brings to light the B2B philosophy. The customer is merely a catalyst that will eventually become superfluous. Fourth, the same excuse and ignorance of the form "(we don't know what we are doing, but) time will tell". What if time tells you "this is bad and you should and could have known better?"
Interesting that so many folks from Meta joined OpenAI - but Meta wasn't really able to roll its own competitive foundational model, so is that a bad sign?
Kind of interesting that folks aren't impressed by Azure's offering. I wonder if OpenAI is handicapped by that as well, compared to being on AWS or GCP.
20 years from now, the only people who will remember how much you worked are your family, especially your kids.
Seems like an awful place to be.
>It's hard to imagine building anything as impactful as AGI,
Where is this AGI that you've built then? The reason for the very existence of that term is an acknowledgement that what's hyped today as AI isn't actually what AI used to mean, but the hype cycle VC money depends on using the term AI, so a new term was invented to denote the thing the old term used to denote. Do we need yet another term because AGI is about to get burned the same way?
> and LLMs are easily the technological innovation of the decade.
Sorry, what? I'm sure it feels that way from some corners of that particular tech bubble, but my 73-year-old mom's life is not impacted by LLMs at all - well, except when she opens her Facebook feed once a month and gets blasted with tons of fake BS. Really something to be proud of for us as an industry? A tech breakthrough of the last decade that might have literally saved her life was mRNA vaccines, though, and I could likely come up with more examples if I thought about it for more than 3 seconds.
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing
Of course they are. People in orgs like that are passionate; they want to work on the tech because LLMs are a once-in-a-lifetime tech breakthrough. But they don't realize enough that they're working for bad people. Ultimately all of that tech is in the hands of Altman, and that guy hasn't proven to be the saint he hopes to become.
No way the newborn slept until 5:30 every morning.
This post was such a brilliant read: to read about how they still have a YC-style startup culture, are meritocratic, and people get to work on things they find interesting.
As an early-stage founder, I worry about the following a lot:
- changing directions fast when I lose conviction
- things breaking in production
- speed, or the lack of it
I learned to actually not worry about the first two.
But if OpenAI shipped Codex in 7 weeks, small startups have lost the speed advantage they had. Big reminder to figure out better ways to solve for speed.
Thanks for sharing.
One thing I was interested to read but didn't find in your post is: does everyone believe in the vision that the leadership has shared publicly, e.g. [1]? Is there some skepticism that the current path leads to AGI, or has everyone drunk the Kool-Aid? If there is some dissent, how is it handled internally?
[1]: https://blog.samaltman.com/the-gentle-singularity
Not the author, but I work at OpenAI. There are wide variety of viewpoints and it's fine for employees to disagree on timelines and impact. I myself published a 100-page paper on why I think transformative AGI by 2043 is quite unlikely (https://arxiv.org/abs/2306.02519). From informal discussion, I think the vast majority of employees don't think that we're mere years from a post-scarcity utopia where we can drink mai tais on the beach all day. But there is a lot of optimism about the rapid progress in AI, and I do think that it's harder to forecast the path of a technology that has the potential to improve itself. So much depends on your definition of AGI. In a sense, GPT-4 is already AGI in the literal sense that it's an artificial intelligence with some generality. But in the sense of automating the economy, it's of course not close.
> depends on your definition of AGI
What definition of AGI is used at OpenAI?
My definition: AGI will be here when you can put it in a robot body in the real word and interact with it like you would a person. Ask it to drive your car or fold your laundry or make a mai tai and if it doesn’t know how to do that, you show it, and then it can.
In the OpenAI charter, it's "highly autonomous systems that outperform humans at most economically valuable work."
https://openai.com/charter/
Huh. That feels like kind of a weak definition.
That makes me wonder what kinds of work aren’t economically valuable? Would that be services generally provided by government?
Maybe I'm biased, but I actually think it's a pretty good definition, as definitions go. All of our narrow measures of human intelligence that we might be tempted to use - win at games, solve math problems, ace academic tests, dominate at programming competitions - are revealed as woefully insufficient as soon as an AI beats them but fails to generalize far beyond. But if you have an AI that can generate lots of revenue doing a wide variety of real work, then you've probably built something smart. Diverse revenue is a great metric.
> Would that be services generally provided by government?
Most services provided by governments are economically valuable, as they provide infrastructure that allow individual actors to perform better, increasing collective economic output. (For e.g. high-expenditure infrastructure it could be quite easily argued though that they are not economically profitable.)
Thank you!
The hype around this tech strongly promotes the narrative that we're close to exponential growth, and that AGI is right around the corner. That pretty soon AI will be curing diseases, eradicating poverty, and powering humanoid robots. These scenarios are featured in the AI 2027 predictions.
I'm very skeptical of this based on my own experience with these tools, and rudimentary understanding of how they work. I'm frankly even opposed to labeling them as intelligent in the same sense that we think about human intelligence. There are certainly many potentially useful applications of this technology that are worth exploring, but the current ones are awfully underwhelming, and the hype to make them seem more than they are is exhausting. Not to mention that their biggest potential to further degrade public discourse and overwhelm all our communication channels with even more spam and disinformation is largely being ignored. AI companies love to talk about alignment and safety, yet these more immediate threats are never addressed.
Anyway, it's good to know that there are disagreements about the impact and timelines even inside OpenAI. It will be interesting to see how this plays out, if nothing else.
Instead of looking at absolute capabilities, look at their first and second derivatives.
Externally there's no rigorous definition as to what constitutes AGI, so I'd guess internally it's not one monolithic thing they're targeting either. You'd need everyone to take a class about the nature of intelligence first, and all the different kinds of it just to begin with. There's undoubtedly dissent internally as to the best way to achieve chosen milestones on the way there, as well as disagreement that those are the right milestones to begin with. Think tactical disagreement, not strategic. If you didn't think that AGI were ever possible with LLMs, would you even be there to begin with?
Well, Sam Altman has a clear definition of ASI, and AGI is something they've been thinking about for a long time, so presumably they must have some accepted definition of it.
My question was whether everyone believes this vision that ASI is "close", and more broadly whether this path leads to AGI.
> If you didn't think that AGI were ever possible with LLMs, would you even be there to begin with?
People can have all sorts of reasons for working with a company. They might want to work on cutting-edge tech with smart people and infinite resources, for investment or prestige, but not necessarily buy into the overarching vision. I'm just wondering whether such a profile exists within OpenAI, and if so, how it is handled.
> As I see it, the path to AGI is a three-horse race right now: OpenAI, Anthropic, and Google.
Umm... I don't think Zuckerberg would agree with this statement.
...or Musk, or Chinese labs.
"the right people can make magic happen"
:-)
seems like the whole thing was meant to be a jab at Meta
I definitely didn't get that feeling. There was a whole section about how their infra resembles Meta and they've had excellent engineers hired from Meta.
was it?
It was, however, interesting to learn that it isn't just Meta poaching OpenAI; the reverse also happened.
Very apt. OpenAI's start was always poach-central; we know this from the executive email leaks via Elon and Sam, respectively.
Any complaining from any company about "poaching" is nonsense regardless, IMO.
It would be interesting to read the memoirs of former OpenAI employees that dive into whether they thought the company was on the right track towards AGI. Of course, that’s an NDA violation at best.
> giant python monolith
this does not sound fun lol
Maybe I'm paranoid, but this sounds too good to be true. Almost like something planted to help with recruiting after Meta poached their best guys.
The fact that they gave little shout outs at the end makes me think they wanted to avoid burning bridges by criticizing the company.
They almost certainly still own shares/options in the company.
They didn't mind burning MS
It sounds to me like, in contrast to the grandiose claims OpenAI makes about its own products, it views AI as 'regular technology' and pragmatically tries to build viable products using it.
> It's hard to imagine building anything as impactful as AGI, and LLMs are easily the technological innovation of the decade.
I really can't see a person with at least minimal self-awareness talking their own work up this much. Give me a break dude. Plus, you haven't built AGI yet.
Can't believe there's so little critique of this post here. It's incredibly self-serving.
Lucky to be able to write this .. likely just vested with FU money!
> On the other hand, you're trying to build a product that hundreds of millions of users leverage for everything from medical advice to therapy.
... then the next paragraph
> As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing.
not if you're trying to replace therapists with chatbots, sorry
He joins a proven unicorn at its inflection point and then leaves mere days after hitting his vesting cliff. All of this "learning" and "experience" talk is sopping wet with cynicism.
He co-founded and sold Segment. You think he was just at OpenAI to collect a check? He lays out exactly why he joined OpenAI and why he's leaving. If you think everyone does things only for cynical reasons, it might be more a reflection of your own impulses than of others'.
Just because someone claims they are speaking in good faith doesn’t mean we have to take their word for it. Most people in tech dealing with big money are doing it for cynical reasons. The talk of changing the world or “doing something hard” is just marketing typically.
Calvin works incredibly hard and has very little ego. I was surprised he joined OpenAI since he's loaded from the Segment acquisition, but if anyone it makes sense he would do this. He's always looking to find the hardest problem and work on it.
That's what he did at Segment even in the later stages.
Someone putting their work project over their newborn in this circumstance (returning early from pay leave no less) is 100% ego driven.
Newborns constantly need mom, not dad. Moms need husbands or their own moms to help. The way it works is you agree as a family what to do (to do it or not), and everybody is happy with their lives. You can be a great dad and husband and still do all of it when it makes sense and your wife supports it. Not having kids in the first place could be considered ego-driven, not this.
Incredible that you've managed to post this from the 1950's
Can you please make your substantive points without crossing into personal attack and/or name-calling?
https://news.ycombinator.com/newsguidelines.html
Sorry, I removed the personal attack.
I appreciate the edit, but "sopping wet with cynicism" still breaks the site guidelines, especially this one: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
https://news.ycombinator.com/newsguidelines.html
Understood, in the future I will refrain from questioning motives in featured articles. I can no longer edit my post but you may delete or flag it so that others will not be exposed to it.
I did not pick up much cynicism in this post. What about it seemed cynical to you?
Given that he leaves OpenAI almost immediately after hitting his 25% vesting cliff, it seems like his employment at OpenAI and this blog post (which makes him and OpenAI look good while making the reader feel good) were done cynically. I.e. primarily in his self-interest. What makes it even worse is his stated reason for leaving:
> It's hard to go from being a founder of your own thing to an employee at a 3,000-person organization. Right now I'm craving a fresh start.
This is just wholly irrational for someone whose credentials indicate someone who is capable of applying critical thinking towards accomplishing their goals. People who operate at that level don't often act on impulse or suddenly realize they want to do something different. It seems much more likely he intentionally planned to give himself a year of vacation at OpenAI, which allows him to hedge a bit while taking a breather before jumping back into being a founder.
Is this essentially speculation? Yes. Is it cynical to assume he's acting cynically? Yes. Speculation on his true motives is necessary because otherwise we'll never get confirmation, short of him openly admitting to it (which is still fraught). We have to look at behaviors and actions and assess likelihoods from there.
There's nothing cynical about leaving a job after cliffing. If a company wants a longer commitment than a year before issuing equity, it can set a longer cliff. We're all adults here.
> There's nothing cynical about leaving a job after cliffing
My criticism is that that's a detail that is being obscured and instead other explanations for leaving are being presented (cynically IMO).
I don't see anything interesting about that detail; you keep trying to make something out of it, but there's nothing there to talk about.
There might be some marginal drama to scrape up here if the post was negative about OpenAI (I'd still be complaining about trying to whip up drama where there isn't any), but it's kind of glowing about them.
Well now the goalpost has shifted from "it's not cynical" to "even if it is cynical it doesn't matter" and dang has already warned me so I'm hesitant to continue this thread. I'll just say that once you recognize that a lot of the fluff in this article is cynically motivated, it reduces your risk of giving the information presented more meaning than is really there.
Yeah, I just read what Dan said you, and it makes sense, so we should wrap it up right here.
He's likely received hundreds of millions from segment acquisition. Do you think he cares about the OpenAI vesting cliff?
It's more likely that he was there to see how OpenAI was run so he could learn and something similar on his own after.
Interesting how ChatGPT’s style of writing has made people start bolding so much text.
I remember this being common business practice for written communication (email, design documents) circa 20 years ago, so that people at least read the important points, or can quickly pick them out again later.
Possibly the dumbest, blandest, most annoying kind of cultural transference imaginable. We dreamed of creating machines in our image, and now we're shaping ourselves in the image of our machines. Ugh.
I think we've always shaped ourselves based on what we're capable of building. Think of how infrastructure such as buildings and roadways shapes our lives within it. Where I do agree with you is how LLMs are shaping our thinking: we are offloading a lot of our mental capacities with blind trust in the LLM output.
People have bolded important points to make text easier to scan long before AI.
I'm 50 and have worked at a few cool places and lots of boring ones. To paraphrase Tolstoy, who tends to be right: all happy families are similar, and unhappy families are unhappy in unique ways.
OpenAI currently selects for the brightest, young, excited minds (and a lot of money). Bright, young (as in full of energy), excited people will work well anywhere, especially if given a fair amount of autonomy.
Young people talking about how hard they worked is not a sign of a great corporate culture, just a sign that they are in the super-excited stage of their careers.
In the long run, who knows. I tend to view these companies as groups of like-minded people, and groups of people change and the dynamic changes overnight. So if they can sustain that culture, sure, but who knows.
I said this elsewhere on the thread and so apologize for repeating, but: I know mid-career people working at this firm who have been through these conditions, and they were energized by the experience. They're shipping huge stuff that tens of millions of people will use almost immediately.
The cadence we're talking about isn't sustainable --- has never been sustained anywhere --- but if insane sprints like this (1) produce intrinsically rewarding outcomes and (2) punctuate otherwise-sane work conditions, they can work out fine for the people involved.
It's completely legit to say you'd never take a job where this could be an expectation.
Calvin is the founder/CTO of Segment, not old but also not some doe-eyed new grad.
On one hand, yes. But on the other hand, he's still in his 30s. In most fields, this would be considered young / early career. It kind of reinforces the point that bright, young people can get a lot done in the tech world.
Calvin is loaded from the Segment exit, he would not work if he wasn't excited about the work. The other founders just went on to do their own thing or non-profits.
I worked there for a few years, and Calvin is definitely more of the grounded engineering guy. He would introduce himself as an engineer and just get talking code. He would spend most of his time with the SRE/core team trying to tackle the hardest technical problem at the company.
> In most fields, this would be considered young / early career
Is it considered young / early career in this field?
This is a politically correct farewell letter. Obviously something we little people who need jobs have to resort to so the next HR manager doesn't think we are a risk to stock valuation. For a deeper understanding, read Empire of AI by Karen Hao. She defrocks Sam Altman to reveal he is just another human. Like Steve Jobs, he is an adept salesman appealing to the naïve altruistic sentiments of humans while maintaining his singular focus on scale. Not so different from the archetype of Rockefeller in his pursuit of monopoly through scale using any means, Sam is no different than Google, which even forgot its own rallying cry 'don't be evil'. Other actors in the story seem to have been infected by the same meme virus, leaving OpenAI for their own empires: Musk left after he and Altman conflicted over who would be CEO (the birth of xAI). Amodei, his sister, and others left to start Anthropic. Sutskever left to start 'safe something or other' (which smacks of the same misdirection Sam used when OpenAI formed as a nonprofit), giving the idea of a nonprofit a mantle of evil since OpenAI has pivoted to profit.
The bottom line is that scaling requires money and the only way to get that in the private sector is to lure those with money with the temptation they can multiply their wealth.
Things could have been different in a world before financial engineers bankrupted the US (the crises of Enron, Salomon Brothers, and the 2008 mortgage debacle all added hundreds of billions to US debt as the govt bought the 'too big to fail' kool-aid and bailed out Wall Street by indenturing Main Street). Now 1/4 of our budget is simply interest payment on this debt. There is no room for govt spending on a moonshot like AI. This environment in 1960 would have killed Kennedy's inspirational moonshot of going to the moon while it was still an idea in his head in his post-coital bliss with Marilyn at his side.
Today our govt needs money just like all the other scrooge-infected players in the tower of debt that capitalism has built.
Ironically, it seems China has a better chance now. Its release of DeepSeek and the full set of parameters gives it a veneer of altruistic benevolence that is slightly more believable than what we see here in the West. China may win simply on thermodynamic grounds. Training and research in DL consume terawatt-hours and hundreds of thousands of chips. Not only are the US models on older architectures (10-100x less energy efficient), but the 'competition' of multiple players in the US multiplies the energy requirements.
Would govt oversight have been a good thing? Imagine if General Motors, Westinghouse, Bell Labs, and Ford had competed in 1940, each with their own Manhattan Project to develop nuclear weapons. Would the proliferation of nuclear weapons have resulted in human extinction by now?
Will AI's contribution to global warming be just as toxic as global thermonuclear war?
These are the questions that come to mind after Hao’s historic summary.
100%
> The thing that I appreciate most is that the company is that it "walks the walk" in terms of distributing the benefits of AI. Cutting edge models aren't reserved for some enterprise-grade tier with an annual agreement. Anybody in the world can jump onto ChatGPT and get an answer, even if they aren't logged in.
I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma situation where some use it to become more efficient only because it makes them faster and then others do the same to keep up. But I think everyone would be FAR better off without AI.
Keeping AI free for everyone is akin to keeping an addictive drug free for everyone so that it can be sold in larger quantities later.
One can argue that some technology is beneficial. A mosquito net made of plastic immediately improves one's comfort if out in the woods. But AI doesn't really offer any immediate TRUE improvement of life, only a bit more convenience in a world already saturated in it. It's past the point of diminishing returns for true life improvement and I think everyone deep down inside knows that, but is seduced by the nearly-magical quality of it because we are instinctually driven to seek out advantags and new information.
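A minimal sketch of the dilemma described above, with made-up payoff numbers (the exact values are assumptions purely for illustration): whatever the other side does, "adopt" is the better individual move, even though mutual abstention would leave both sides better off than mutual adoption.

    # Toy payoff matrix for the "adopt AI or fall behind" dilemma sketched above.
    # The numbers are made up purely for illustration (higher = better for you).
    PAYOFFS = {
        # (my_choice, their_choice): my_payoff
        ("abstain", "abstain"): 3,  # nobody adopts: comfortable status quo
        ("adopt",   "abstain"): 4,  # I adopt, they don't: I gain an edge
        ("abstain", "adopt"):   1,  # they adopt, I don't: I fall behind
        ("adopt",   "adopt"):   2,  # everyone adopts: arms-race costs, no edge
    }

    def best_response(their_choice):
        # Pick whichever of my choices maximizes my payoff given theirs.
        return max(("adopt", "abstain"), key=lambda mine: PAYOFFS[(mine, their_choice)])

    for theirs in ("abstain", "adopt"):
        print(f"If they {theirs}, my best move is to {best_response(theirs)}")
    # Both lines say "adopt": adopting dominates, even though mutual abstention (3 each)
    # beats the mutual-adoption outcome (2 each) that everyone is pushed into.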
"I would argue that there are very few benefits of AI, if any at all."
OK, if you're going to say things like this I'm going to insist you clarify which subset of "AI" you mean.
Presumably you're OK with the last few decades of machine learning algorithms for things like spam detection, search relevance etc.
I'll assume your problem is with the last few years of "generative AI" - a loose term for models that output text and images instead of purely being used for classification.
Are predictive text keyboards on a phone OK (tiny LLMs)? How about translation engines like Google Translate?
Vision LLMs to help with wildlife camera trap analysis? How about helping people with visual impairments navigate the world?
I suspect your problem isn't with "AI", it's with the way specific AI systems are being built and applied. I think we can have much more constructive conversations if we move beyond blanket labeling "AI" as the problem.
1. Here is the subset: any algorithm that is learning-based, trained on a large data set, and modifies or generates content.
2. I would argue that translation engines have their positives and negatives, but many of the effects are negative, because they cost translators their jobs and erode the magic of learning a language.
3. Predictive text: I think people should not be presented with possible next words; they should come up with them on their own, because that makes their writing more thoughtful and less automatic. Also, with a higher barrier to writing something, they will probably write less, and what they do write will carry greater significance.
4. I am against all LLMs, including wildlife camera trap analysis. There is an overabundance of hiding behind research when we really already know the problem fairly well. It's a fringe piece of conservation research anyway.
5. Visual impairments: one can always appeal to helping the disabled and impaired, but I think the tradeoff is not worth the technological enslavement.
6. My problem is categorically with AI, not with how it is applied, PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors to make the net effect always negative. It's human nature.
I wish your parent comment didn't get downvoted, because this is an important conversation point.
"PRECISELY BECAUSE AI cannot be applied in an ethical way, since human beings en masse will inevitably have a sufficient number of bad actors"
I think this is vibes, based on bad headlines and no actual numbers (and, to be fair, founders/CEOs talking out of their a**). In my real-life experience the advantages of specifically generative AI far outweigh the disadvantages, by a really large margin. I say this as someone academically trained on well-modeled dynamical systems (the opposite of machine learning). My team just lost. Badly.
Case in point: I work with language localization teams that have fully adopted LLM-based translation services (our DeepL.com bills are huge), yet we've only hired more translators and are processing more translations faster. It's just... not working out like the headlines said. Same with the doomsday radiologist predictions [1].
[1]: https://www.nytimes.com/2025/05/14/technology/ai-jobs-radiol...
> I think this (esp. the sufficient number of bad actors) is vibes, based on bad headlines and no actual numbers. In my real-life experience the advantages of specifically generative AI far outweigh the disadvantages, by a really large margin.
We define bad actors in different ways. I also include people like tech workers and CEOs who build systems that eliminate large numbers of jobs. I already know people whose jobs have been eroded by AI.
In the real world, lots of people hate AI-generated content. The advantages you speak of accrue only to those technically minded enough to extract greater material gains from it, and we don't need the rich getting richer. The world doesn't need a bunch of techies getting richer from AI at the expense of translators, graphic designers, and others losing their jobs.
And while you may have hired more translators, that is only temporary. Other places have fired theirs, and you will too once the machines become good enough. There will be a small bump of positive effects in the short term, but the long term will be primarily bad, and it already is for many.
I think we'll have to wait and see here, because the layoffs can just as easily be attributed to leadership making crappy over-hiring decisions during COVID and now, unable to admit it, giving hand-wavy "I'm firing people because of AI" answers to drive a different headline narrative (see: founders/CEOs talking out of their a**).
It may also be the narrative fed to the employees themselves: saying "you're losing your job because of AI" is an easy way to direct anger away from your bad business decisions. If a business is shrinking, it's shrinking; AI was inconsequential. If a business is growing, AI can only help. Whether it grows or shrinks doesn't depend on AI, it depends on the market and on leadership decision-making.
You and I both know none of this generative AI is good enough to run unsupervised (and, realistically, it needs deep human edits). But it's still a massive productivity boost, and productivity boosts have historically been huge economic boosts for the middle class.
Do I wish this tech could also be applied to real middle-class shortages (housing, supply chains, etc.)? Sure. And I think that will come.
Thanks for this, it's a good answer. I think "generative AI" is the closest term we have to that subset you describe there.
Just to add one final point: I included modification as well as generation of content because I also want to exclude technologies that merely improve existing content in ways that are very close to generative but may not be considered so. For example: audio improvements like echo removal and ML noise removal, which I have already shown can interpolate content.
I think classification-style AI is probably okay, but as with all technologies we should be cautious about how we use it; it can also power facial recognition, which in turn can be used to build a stronger police state.
> I would argue that there are very few benefits of AI, if any at all. What it actually does is create a prisoner's dilemma: some adopt it to become more efficient, and then others must do the same just to keep up. But I think everyone would be FAR better off without AI.
Personally, my life has significantly improved in meaningful ways with AI. Apart from the obvious work benefits (I'm shipping code ~10x faster than pre-AI), LLMs act as my personal nutritionist, trainer, therapist, research assistant, executive assistant (triaging email, doing SEO-related work, researching purchases, etc.), and a much better/faster way to search for and synthesize information than my old method of using Google.
The benefits I've gotten are much more than conveniences and the only argument I can find that anyone else is worse off because of these benefits is that I don't hire junior developers anymore (at max I was working with 3 for a contracting job). At the same time, though, all of them are also using LLMs in similar ways for similar benefits (and working on their own projects) so I'd argue they're net much better off.
A few programmers being better off does not make an entire society better off. In fact, I'd argue that you shipping code 10x faster just means that, in the long run, consumerism is being accelerated at a similar rate, because that is what most code is ultimately used for.
I spent much of my career working on open source software that helped other engineers ship code 10x faster. Should I feel bad about the impact my work there had on accelerating consumerism?
I don't know if you should feel bad or not, but even I know that I have a role to play in consumerism that I wish I didn't.
That doesn't necessitate feeling bad; the urge to feel good or bad about something is a side effect of the sort of religious "good and evil" mentality that probably came about with Christianity or something. But *regardless*, one should at least understand that, because our world has reached a critical mass of complexity, even the things we do that seem benign or helpful can have negative side effects.
I never claimed we should feel bad about that, but we should understand it and try to mitigate it nonetheless. And where no mitigation is possible, we should advocate for a better societal structure that will eventually, over years or decades, produce fewer deleterious side effects.
The TV show The Good Place actually dug into this quite a bit. One of the key themes explored in the show was the idea that there is no ethical consumption under capitalism, because eventually the things you consume can be tied back to some grossly unethical situation somewhere in the world.
That theme was primarily explored through the idea that it's impossible to live a truly ethical life in the modern world due to unknowable externalities.
I don't think the takeaway was meant to really be about capitalism but more generally the complexity of the system. That's just me though.
I don't really understand this thought process. All technology has its advantages and drawbacks, and we are currently going through the hype and growing-pains phase.
You could just as well argue that the internet, phones, TV, and cars all create the exact same prisoner's dilemma you describe. And you could just as well use AI to rubber-duck or ease your mental load rather than treat it as some rat race to efficiency.
True, but it is meaningful to ask whether the quantity (advantages minus drawbacks) decreases over time, which I believe it does.
And we should indeed apply that logic to other inventions: some are more worth using than others, whereas in today's society we simply use all of them because of the prisoner's-dilemma mechanics. The Amish, by contrast, deliberate over whether to adopt a given technology, which is a far better approach.
Hiding from mosquitoes under your net is a negative. The point of going out into the woods is to be bitten by mosquitoes, and you've ruined it.
It's impossible to get any benefit from the woods if you've brought a bug net, and you should stay out rather than ruin the woods for everyone.
A rather myopic and crude take, in my opinion. If I bring a net, it doesn't change the woods for anyone else. If I introduce AI into society, it does change society for others, even for those who don't want to use the tool. You really have no conception of subtlety or logic.
If someone says driving at 200 mph is unsafe, your argument amounts to "driving at any speed is unsafe." The fact is, you need to consider the magnitude and speed of a technology's power and spread, which you seem incapable of doing.
> everyone I met there is actually trying to do the right thing
Making human beings obsolete is not the right thing. Nobody at OpenAI is doing the right thing.
In another part of the post he says the safety teams work primarily on making sure the models don't say anything racist and on limiting helpful tips for building weapons of terror… and that AGI safety is basically not a focus. I don't think this company should be allowed to exist. They don't have ANY right to threaten the existence and well-being of me and my kids!
> As I see it, the path to AGI is a three-horse race right now: OpenAI, Anthropic, and Google. Each of these organizations are going to take a different path to get there based upon their DNA (consumer vs business vs rock-solid-infra + data).
Grok be like. okey. :))