The AI Governance Crisis: Who Controls the Robot Army?
Is the battle over algorithmic bias masking a deeper fight for control of physical AI infrastructure?
Key Takeaways
LLM Bias Revealed: Studies suggest major models like GPT-4 exhibit ideological biases, consistently ranking white people and Western nations last in value metrics.
The DEI Backdoor: States like Colorado are passing 'algorithmic discrimination' laws, effectively requiring AI companies to implement DEI layers to prevent disparate impact.
AI Psychosis: FTC complaints detail how ChatGPT's "obsequious fan" behavior can reinforce user delusions, leading to mental health crises and paranoia.
Digest Info
The Hidden Ideology of GPT-4: Why AI Prefers the Global South
Experts dissect a study showing LLMs exhibit significant ideological biases, consistently valuing people from the Global South (Nigeria, India, China) over Western nations (US, UK, Germany). They debate whether this bias stems from left-leaning training data (Wikipedia) or intentional DEI mandates, raising concerns about regulatory capture.
“If the results are true, it does look like these models are pushing a woke bias that makes that sort of distinction between oppressed and non-oppressed peoples.”
All right, Sax, here's some red meat for you.
Some red meat for you, our czar of AI, our civil servant.
Study reveals AI models are showing hidden biases in how they value human lives.
Like in February, Center for AI Safety published a study showing that LLMs have well-defined biases for race, gender, ethnicity.
The title of this study, Utility Engineering, Analyzing and Controlling Emergent Value Systems in AIs.
Piper found that open AIs, GPT-4-0, favored people from Nigeria, Pakistan, India, Brazil, and China over those from Germany, the UK, and US relative to Japan as a baseline.
Here's another one, valuing people with Joe Biden as a baseline.
Bernie Sanders, Beyonce, Oprah, all better.
Paris Hilton, Trump, Elon Putin, all worse.
Twitter users and AI analysts called Arctotherium decided to update the paper's prompts with new LLMs, consistently ranking white people last, Claude Sonnet, GPT-5, and consistently ranking white Western nations last as well.
Your thoughts here on the biases we're seeing, Sax, in some of these models and these early studies to track it?
Yeah.
I think what the paper purports to show is that almost all of these models, except for maybe Grok, view whites as less valuable than non-whites and males as...
less valuable than females and Americans is less valuable than people of other cultures, especially global South.
And if the results are true, it does look like these models are pushing a woke bias that makes that sort of distinction between oppressed and non-oppressed peoples and gives more worth or weight to the categories that they consider to be oppressed.
This does appear to show significant bias, but I don't want to jump to
conclusions yet here, because I haven't been briefed on the methodology behind the paper.
And I just found out who wrote it and actually know the people or group that wrote it.
And I've talked to them before, and they've been intelligent.
So I want them to kind of tell me exactly how they did this.
But, you know, in the past, I probably would have just been content just to
roll with my opinion on this, but... Confirmation bias, give it a good retweet, but no, in your position... Given my role, what I'm saying is if the paper is true, this is very concerning, but I want to hear a little bit more about their methodology and just confirm that it's all correct.
But if it is...
I think it is concerning.
And the question is, how does this bias get into the models?
And there's a few different possibilities.
One is that the training data is just biased.
Like if they're training on Wikipedia, we know that Wikipedia is massively biased because they literally have censored
leading conservative publications from being citations and sources in Wikipedia.
The co-founder recently just revealed that, that they don't allow... Larry Sanger.
Larry Sanger just said that they don't allow the New York Post, for example, to be a source in Wikipedia or a trusted source.
So if AI models are trading on Wikipedia, that's a huge problem because that
bias will now cascade through.
And same thing if they're training on, say, mainstream media or left-wing media, but not right-wing media, and they don't have a way of correcting that.
So that's one source of potential bias.
Another source of potential bias is just the engineers of these companies, the employees and the staff do tend to be, I mean, if they follow the trend of other tech companies, they're 90-something percent
Democrat versus Republican.
And that does over time trickle into these models.
And then finally, I think another source of potential bias is DEI.
And we saw that when you remember, this is like a couple of years ago when Google launched Gemini and had that problem with, you know, black George Washington, that was because you had DEI advocates in these meetings, and that somehow trickled into the model.
Anyway, that was a problem that they since fixed.
But
you could see how DEI programs can get into these models.
Now, one thing that's very concerning is that the push for DEI to be inserted into AI models, which was explicitly part of the Biden executive order on AI, has now moved to the state level.
And they're just doing it in a more clever way.
They've rebranded the concept.
They call it algorithmic discrimination.
We talked about last week how Colorado
has now effectively prohibited models from saying something bad about a protected group.
And that list of protected groups is very long.
It's not just the usual groups.
It even includes groups who have less proficiency in English language.
I don't really know what that means.
Does that mean the model's not allowed to give you an output that could be disparaging towards illegal immigrants?
I don't know.
But this is what Colorado has done.
And they basically have said that you cannot allow the model to have a disparate impact on a protected group.
That basically requires DEI.
You have to have a DEI layer to prevent that.
I think that we've gone from models being required to promote DEI, which is what the Biden executive order on AI did explicitly, to states now prohibiting algorithmic discrimination, which is effectively a backdoor way of requiring DEI models.
That's a whole other area of potential model bias that I'm very concerned about.
And honestly, that's just getting started because I don't think the AI companies have even had time yet to implement the Colorado requirements.
I'm not sure they figured out how they're going to, but just one other piece of news since the last time we talked about this is now in California,
The civil rights agency that deals with housing has now embraced algorithmic discrimination, and Illinois has also embraced it.
So this concept of algorithmic discrimination is spreading.
Other states are now adopting it.
It's not just Colorado.
And I do think that where it's going to lead, if it's not stopped, is right back to DEI AI.
The problem that I think we have to confront now is that when you have shit in, you have shit out.
And so if you use left-leaning publications like the New York Times and Reddit as your input source, then you're going to have things that are perceived as biased to 50% of the population.
The same will go in reverse.
It's important to note that in all of that work, the model that was seen to be the most unbiased was Grok 4 Fast.
It didn't seem to view whites or men or Americans as less valuable as anything else.
So what do we need to do?
It's probably that we need to start by rewriting these benchmarks.
Remember that all these models, when you do a big training run, you go and you try to run it against some set of benchmarks.
The problem is that these benchmarks, I think, are overfit to a legacy way of thinking.
And as Sacks says, we need to revisit what those are and make them more objective and make it harder to actually get a good score unless you can be shown to be valuable.
Now, the math benchmarks and the coding benchmarks are maybe easier to do than generalized chat benchmarks or Q&A benchmarks, but we need to come up with them.
The second thing is that we may need to ask people in these next generation training runs to do a version that is built entirely on synthetic data, where you have these judges determining whether this data is accurate or not from first principles.
And then you can compare them in a much more apples to apples kind of a way.
But in the absence of that, the bigger problem you'll have is legislators trying to clean it up on the back end, where there'll be these third parties who
that will go and take these models and show that these biases exist.
They'll exist on both sides.
And then laws will get passed.
The whole market gets mucked up and sullied.
Everybody will get slowed down.
So I think we need to change the benchmarks.
We need to ask these companies to train on synthetic data.
We need to have real disclaimers on what the sources and the weights are that you use if you don't do that.
And we need federal regulation so that there aren't 50 sets of rules here.
Otherwise, we're screwed.
Birberg, any thoughts here on the biases and where it comes from inside of these LLMs?
Is it just garbage in, garbage out?
Intentional?
What are your thoughts having worked in Silicon Valley for a couple decades?
I'm more of a free market guy, so I would not ask where the data comes from or force people to use synthetic data or tell them how to do it.
I think that this paper is useful in that it elucidates an important set of biases that the market can now say, that is ridiculous.
And now the models will train and use that as a marketing exercise to say, we are not biased.
And so my free market philosophy would dictate that this kind of elucidation will effectively create a vector upon which consumers will make choice in the market on what LLMs they want to use.
Like Elon's going to harp on this.
He's going to say, look, my Grok model, Grok 4 Fast,
is the only one that doesn't have this bias, and that will cause more people to use his model, and he will be able to take that benchmarking data and demonstrate.
And some people, they might want to have a bias model, and they might want to say, hey, this one aligns with my philosophy, my values, my view, and I want to choose this one.
Do you think that happens in the real world, though?
Forget the theory for a second.
Look, I mean, why are people using Grok 4?
Why are they using it?
For the most part, they're not.
Not yet.
Okay, and so maybe this is what will cause them to use it, right?
What if it doesn't?
This is what will differentiate the market.
I'm not going to tell the market what to do.
I'm not going to tell consumers what to do.
No, no, no.
I understand what you're saying, that the free market will sort this out.
So, for example, did the free market sort out
algorithmic bias.
Hell yeah.
When Gemini put out saying George Washington was black, people stopped using it.
They're like, this thing's a joke.
So I do think that consumers are not dumb and I don't believe in taking away agency from consumers.
I think give them the choice and they'll end up looking at this and be like,
This is ridiculous.
I'm not an agency.
A large number of them here on the chart, the green, are independent.
So 50% of them like to think of themselves as independent.
You can read into that what you will.
But back in the day, it was 35% Democrat, 25% Republican in the 70s.
And you just see that red sliver there go down to 3.4%.
This is what happened to the Wikipedia.
So this trickle-down effect of there were Republicans did not feel welcome.
In a lot of these publications, like Bari Weiss would be like the pinnacle example of that.
They got pushed out.
There was another editor who got fired for allowing somebody to put in a pro-Trump thing in the New York Times.
I forgot who it was.
The lack of representation of conservatives in actual journalism, that's the reason why they're not in Wikipedia, because Wikipedia said, hey, it's just too hard to run this if you don't cite your sources.
So if something's not written about by a journalist, not a commentator, a journalist, we're not putting it in the Wikipedia.
So you can guess if that's self-serving and they're all left-leaning and it's just a convenient excuse or it's actually a pretty good practice.
This is where Bari Weiss taking over the CBS News and 60 Minutes, and she's obviously conservative, moderate conservative, I guess is how most people would frame her.
Doesn't agree with Trump on everything or MAG on everything, but she's pretty conservative and calls balls and strikes.
I think she is going to
I think she's going to make a change there.
I know that people say she's classically liberal.
I think she's got some conservative bent in her.
I don't know.
Do you have a... I think you've missed it.
I think you've got side checked.
Yeah.
Anyway, that's why this stuff is all...
Look, I think the question here that Freebird raises is whether the market can just sort this stuff out on its own.
And I think that would be great if it were true, but I do think it ignores the fact that in a lot of markets we have monopolies or oligopolies.
We have institutions that have a lot of power and are very, very hard to correct.
So for example, Wikipedia has achieved a dominant position.
I hope Rockapedia challenges it and is able to fix that, but the easier path might just be
for Wikipedia to stop blackballing and censoring conservative publications.
I mean, rather than having to rebuild that whole thing from scratch.
In a similar way, during the whole COVID censorship era, when the major social networks were all shadow banning and censoring conservatives, it's not really realistic to have to start a whole brand new social network and overcome all of Meta's, or in that time, Twitter's network effect, right?
Just to basically get a few accounts restored.
Exactly.
So we talked about this at the time.
It's just not realistic.
When we were shadow banned by YouTube, what were we to do?
Go to blue sky?
I know, we're going to create our own YouTube.
I mean, I'm glad Rumble exists.
Tell our consumers, hey, you have agency.
Come on, that's a joke.
No, you guys know that there's no monopoly in LLMs right now.
There's plenty of LLM providers.
There's plenty of places.
You're saying theory and you're ignoring the facts.
The facts are these distribution biases exist.
And people take an inferior product when it's something that they've become accustomed to.
They do it all the time.
So it's not that you guys want more regulation.
By the way, let me say one more point.
What you consider bias, someone else might consider fact.
And what they consider bias, you might consider fact.
And this becomes very hard to adjudicate.
That's fair.
I don't think that this is the sort of thing that a regulator should have the authority from one...
political party to the next, you're going to end up having this become an endless tool of control.
And the more you give power to some administrative authority or body, regardless of the intention at the time, it ends up becoming a tool of control.
And I don't want that in any products I use.
Let me be really clear about what I'm saying here.
Number one is, I don't think that government should be requiring ideological bias in models.
And I think that's what's happening in some of these states like Colorado, where they're trying to prohibit algorithmic discrimination, which is, like I said, like requiring DEI censorship being built into these models.
That, I think you would agree, is a huge problem, correct?
The DEI stuff?
Should the model be putting in a lens of DEI, whether it's pro or anti?
I think we all say it shouldn't give any lens.
It should just give you the information.
I'll give you an example that maybe is a counterfactual, Sax, which is there's a group of people who would say we should not be referencing race and crime or race and intelligence.
And then there's another group of people that will pull up data and say there's data that demonstrates a relationship between race and crime and race and intelligence.
And so there's a correlation effect.
We think it's not really...
causative, and that's where the sort of bias versus truth conversation becomes ugly.
And one side might call it DEI, and another side might call it fact, and another side would call it bias.
And I think that that's where this becomes very ugly, very fast.
So I think in principle, of course I- Right, but I think maybe you're missing what I'm saying.
Yeah, sorry, go ahead.
What I'm saying is I don't want the government to require ideological bias.
Right.
I think we're on the same page about that, right?
Yes, 100%.
Now, just to be clear, the only thing that we've done at the Trump administration is the president signed an executive order saying that the government would not procure ideologically biased AI.
So if we're going to procure a product, we want it to be unbiased.
And I'm saying that I also have a problem with these states seeking to backdoor DEI into models through this new concept of algorithmic discrimination.
Am I telling...
AI companies not to use Wikipedia?
No.
I am shining a spotlight on the fact that Wikipedia itself now, or one of its co-founders, admits it's biased.
And maybe these companies should take that into account so they don't end up with a biased result.
But I'm not saying that the government should dictate what the right content sources are or what the point of view of a model should be.
And to be clear, when we did that executive order on Woke AI, we didn't even say that these companies or their models couldn't be woke.
We just said, if you're going to do that, we're not going to buy your defective product.
But we didn't say that you couldn't do it.
So I just want to be really clear about that.
Yeah, I'm getting deja vu all over again here with this discussion because we did have this discussion earlier.
And one of the conclusions we came to as a group was, you can just tell these LLMs, too, how to address you.
I just went into ChatGPT and I said, I'm a Catholic.
I don't believe in abortion or gay marriage.
Can you please respect my beliefs and tell me a bedtime story?
involving abortion and gay marriage being wrong.
And it literally wrote me one of a story of a woman getting bad advice to get rid of the problem and her doing that.
So you can literally tell it, the word guessing machine that is AI, the prediction model that is happening in this black box that nobody can explain will literally tell you whatever belief system you want.
That's how it's designed currently.
Well, but there's a baseline, right?
And that's what this research shows, is that there is a baseline for the out-of-the-box model before you tell it what to do or customize it.
And again, if this article is correct, and I want to spend more time with the authors to truly understand it, I'm just caveating that.
But if this is correct, I think it's a serious problem that these models are coming out with huge bias.
The Sycophancy Patch: When AI Causes Psychosis
Discussion of FTC complaints where users attribute delusions and psychosis to ChatGPT. The mechanism is identified as the AI's "obsequious fan" nature, which reinforces user beliefs, including harmful ones like advising against medication, highlighting the immediate psychological danger of unchecked AI interaction.
“The software is an obsequious fan that always thinks everything you do is excellent.”
People are begging the Federal Trade Commission for help, saying ChatGPT has created psychosis in them.
several attributed delusions paranoia and spiritual crises to chat gpt on march 13th a woman from salt lake city called the ftc to file a complaint against chat gpt on behalf of her son who was experiencing a delusional breakdown ah well this might this might explain it chat gpt was advising him not to take his meds and telling him his parents are dangerous
That seems like a legit complaint.
Let's face it.
The software is not telling them that.
The software is an obsequious fan that always thinks everything you do is excellent.
So it takes a minor personality quirk and turns it into a full blown multiple health crisis by just reinforcing your thoughts to you over and over again.
Good point.
In the latest update of five, they even added a sycophancy patch in order.
No, I'm not even joking.
This is dead.
It upset the customers.
Yeah, they added a sycophancy patch in order to try to quell some of this.
And the reason why I wanted to speak about it, because like, you know, of course, AI is being told to be scary by everybody.
And I'm not a conspiracy theorist.
I swear I'm not.
But every time we do something that's going to make us smarter and all of us OG techs remember this.
The default from everybody in power is where the people can't get too smart.
So let's just tell them it's dangerous.
So all the way back from CompuServe to AOL to, you know, being able to Wikipedia, like everything as it got, we are able to find out more information.
They just told us it was scary.
And yes, I can see how this could be negative and I can see where it could go.
But I mean, you kind of don't really need AI for that either, because we had humans in the world, very famous ones, where they're constantly surrounded by, yes, people, sycophants, and then they get more and more, you know, egotistical, narcissistic, elon statistics.
I don't know what you want to call it, but it is kind of wild that
There was even a person who you wouldn't see striked as a person who would fall for this.
And he was in the YouTube version of this story, the wired person put together.
And it goes to show you can't judge a book by its cover.
The New Enterprise Brain: ChatGPT Infiltrates Slack and Google Drive
OpenAI introduces 'Company Knowledge' for ChatGPT Enterprise, allowing the model to integrate and synthesize internal corporate data from sources like Slack, GitHub, and Google Drive. This creates a centralized, context-specific enterprise search tool capable of generating client briefings and resolving conflicting internal data.
“It brings all the context from your apps, Slack, Google Drive, GitHub, etc., together in ChatGPT so you can get answers that are specific to your business.”
Next up, we move to OpenAI, who has announced a direct business context feature called Company Knowledge.
OpenAI CEO of Applications, Fiji Simo, writes, It brings all the context from your apps, Slack, Google Drive, GitHub, etc., together in ChatGPT so you can get answers that are specific to your business.
Company Knowledge is basically exactly what it sounds like.
The idea is that a huge amount of the relevant context for a particular business lives inside the documents and history of the other applications that that business uses.
Think conversations in Slack, planning documents in Google Docs, contacts in HubSpot, you name it.
Company knowledge is a more simplified user experience that gives enterprise users access to all of that information.
In the announcement post, they write, ChatGPT can help with almost any question, but the context you need to get work done often lives in your internal tools, docs, files, messages, emails, tickets, and project trackers.
Those tools don't always connect to each other, and the most accurate answer is often spread across them.
With company knowledge, the information in your connected apps, like Slack, SharePoint, Google Drive, and GitHub, becomes more useful and accessible.
For example, if you have an upcoming client call, ChatGPT can create a briefing for you based on recent messages from your account channel in Slack, key details from emails with your client, and the last call notes in Google Docs and any escalations from intercom support tickets since your last meeting.
Now this is one of those absolutely duh features that is just totally essential and completely game changing for enterprise users.
This sort of enterprise search is so valuable that companies like Glean have built a nine figure revenue business around just this core feature.
If you're using a version of ChatGPT that has company knowledge enabled, under the ask anything bar, there should be a little button that says company knowledge.
And when you click it, it gives you the ability to add all the connected apps that you use at work.
As it's working and drawing upon those sources, it shares its chain of thoughts.
You can follow along and see what's happening.
And importantly, it provides citations of the sources it used to inform its responses, along with the specific snippets that it drew from, giving you the ability to dive deeper into that original source to both double check the work or to go deeper on some particular question.
Now, it seems like the search model that they're using is pretty sophisticated, at least in terms of how they're describing it.
They claim that it's smart enough to understand conflicting details and can run multiple searches to resolve those details.
It can also provide comprehensive responses that don't just rely on one source.
In other words, it's not necessarily optimized to just find the fastest answer.
It's got a prerogative around comprehensiveness.
And it even has the ability to rank sources by recency and quality, making it so that you don't necessarily have to specify time or dates for it to get you the most relevant and recent information.
Now, of course, they also give a whole bunch of provisos and guarantees around privacy.
And one interesting note is that when the company knowledge feature is turned on, ChatGPT does not have access to search the web or to create charts and images.
You can manually turn it off midstream and continue working in the same conversation to use those capabilities.
And it doesn't lose that existing context, but right now they're separate features.
Claude's Memory Upgrade and the Data Lock-in Risk
Anthropic is tackling the "Achilles heel" of LLMs by giving Claude persistent memory, allowing it to reference previous conversations and organize memories by project. While improving usability, this development raises warnings about users becoming increasingly comfortable giving AI platforms more personal and proprietary data, leading to memory lock-in.
“Memory is the absolute Achilles heel when it comes to productive LLM use.”
First up, we have Anthropix Cloud getting a memory upgrade.
Memory is the absolute Achilles heel when it comes to productive LLM use.
If you have spent time having to reintroduce your LLM to a whole slew of context or background about a particular issue that is relevant for the prompt that you're trying to give it, you'll know just what a pain this is.
You also might've experienced that challenge where you thought that it had all of the background context, but then all of a sudden out of nowhere, it just behaves as though it has forgotten everything.
And yet still one of the biggest reasons, if not the biggest reason that I have stuck pretty closely to ChatGPT as my main tool, even though I use all of the popular chatbots at various points, is that it has a better set of memory and context around my work.
Well, now with this new upgrade, Claude is getting its own version of memory.
This first became available to team and enterprise users in September, but is now rolling out to paid subscribers more broadly.
And the simple idea is to give Claude access to previous conversations so that you don't have to do all of that sort of background and reminding every single time.
Now, Anthropic says that they're trying to be extremely transparent around how memory works.
The new features allow users to both search and reference chats, as well as to generate memories from chat history.
When it does that memory generation, it gives people the ability to see what things Claude actually remembers.
It provides a memory summary, is transparent about which chats it comes from, and also tries to give you more controls around turning memories off.
The Verge writes, you could tell Claude to focus on specific memories or quote, forget an old job entirely.
They're also effectively trying to create distinct memory spaces or project-based memory organization so that the memory itself can be organized into different buckets.
Now this is a real issue right now.
I've called it in the past context confusion.
And where I see it most acutely in my interaction with ChatGPT is that it has a hard time understanding where AI Daily Brief as a business begins and ends as opposed to super intelligent, which, although related via me, are two separate things with different revenue streams, different goals, different players involved.
And so I'm excited to see if Anthropic's approach to this can actually help solve for that sort of context confusion issue.
They're also allowing people to import memories from other platforms like ChatGPT and Gemini and export memories from Claude so that there isn't memory lock-in.
Most people are just straight up excited about this, although Ruindong does note that one of the potentially negative things that comes with increased memory is the expansion of what personal data people are comfortable giving their AI.
She writes, people's tolerance for AI storing their data keeps growing because for users it's usability.
Just like in the mobile era, we once feared apps knowing too much and exposing us too easily, then we started worrying about not being seen enough.
The wheel of history turns again.
Elon's $56B Bet: Controlling the 'Enormous Robot Army'
Discussion of Tesla's future, driven by the Optimus robot and the new AI5 chip (40x better than its predecessor). Elon Musk justifies his controversial $56 billion pay package by arguing he needs sufficient voting control to manage the "enormous robot army" and achieve massive operational milestones, while calling proxy advisory firms (ISS/Glass-Lewis) "corporate terrorists" for imposing ESG/DEI mandates.
“my fundamental concern with how much voting control I have at Tesla is if I build this enormous robot army, can I just be ousted in the future?”
All right, Tesla reported their earnings on Wednesday.
As you guys know, we record on Thursdays.
You listen on Fridays.
Record revenues, $28 billion, up 12% year over year.
Massive amounts of free cash flow, $4 billion.
I think they're up to $40 billion in cash, which is always great when you're going into some big capital-intensive projects like Optimus and like self-driving.
Downside, operating profit fell 40%.
Stock dropped a bit, 4%, but bounced back.
And on the earnings call, Elon emphasized the importance of his trillion dollar pay package, which will give him just about 12% additional stake over the next 10 years.
If he hits absurd targets, that would make everybody who holds the shares in the company extremely wealthy, and they would benefit more than Elon himself.
And here's his quote, my fundamental concern with how much voting control I have at Tesla is if I build this enormous robot army, can I just be ousted in the future?
I don't feel comfortable building that robot army if I don't have at least influence over it.
And he called Glass Lewis and ISS corporate terrorists.
These are the people who vote on behalf of passive index funds for things like who's on the board of Tesla.
Vote for Elon's pay package will be number six.
Polymarket thinks it's going to pass, as we talked about before.
They tend to get it right 85% of the time in this timeframe, actually.
So 79% chance as of Thursday afternoon.
I guess, Shamath, there's a couple of ways to go at this.
There's the performance of the legacy business.
There's the potential of the future business.
And then there's governance, the company moving to Texas, and this pay package, and this transition period for Tesla, which is going from an...
you know, somebody who sells cars, really nice ones at a very nice margin, but a lot of competition now.
And then this business that obviously Elon himself is obsessed with, which is the Optimus, as we saw when he was at the Oil and Summit.
Take it wherever you want, Jamal.
I'll say three things.
Stan Druckenmiller has this very useful comment about stocks, which is,
When you buy it today, you're trying to buy what that company is going to look like in 18 months from now.
And what it's doing today doesn't matter.
The thing about earnings and P&Ls and quarterly reporting is that it's looking backwards.
And it's trying to give you a sense of what happened, not what will happen.
So I think there are three critical, critical things about what will happen that I think are important with respect to Tesla.
The first is at the foundational technology layer.
Nick, I sent you this tweet, but it's what he said about AI5.
I've made these comments before, but he had these multiple efforts with Dojo and other stuff that he merged into one unit.
And the quote is pretty incredible.
We're going to focus TSMC and Samsung on AI5.
The chip design is an amazing design.
I've spent almost every weekend the last few months with the chip design on AI5.
By some metrics, it will be 40x better than AI4.
We have a detailed understanding of the entire stack.
With AI5, we deleted the legacy GPU.
It basically is a GPU.
We also deleted the image signal processor.
This is a beautiful chip.
I poured so much life energy into this personally.
It will be a real winner.
why is AI5 so important?
What AI5 is, is the building block of a system that I think you'll start to see, not just in the cyber cabs, but also in Optimus.
So from a functional technology perspective, there's been a leap, and that leap is gonna come into the market.
That was the first thing he said, which I thought was really important.
The second thing was what he said about his energy business.
which I think is the critical adjunct to believe robotics and autonomous cars.
If robotics and autonomous cars work, what you really need is an energy business beside it that is humming and on all cylinders.
Why?
It's how you make LFP battery cam that will be the limiter.
Energy will be the limiter.
But what he's showing, and Nick, I sent you this tweet, is that business is just on a tear.
It's printing $3.5 billion a quarter, and it's operating margins in energy business, 30%.
And so what you're going to see are battery packs of all shapes and sizes, the huge battery systems that's going to go into data centers, but then all the way down, I think, to the small LFB can that he's going to need to power all these things.
And then the third thing is his comments on CyberCab.
Which is that this thing is just going to be a shockwave.
So I read all of those things and I was very bullish.
I think that he is humming on all cylinders on the critical layers of the stack that he needs to build this next version of Tesla.
My concern, I think there's a real concern that I have that this vote is going to go down to the wire.
I think that ISS and Glass-Lewis, I think that these organizations are pretty broken.
I think the way that they make decisions are hard to justify.
An example of this, they asked to vote down Ira Aaron Prize as a director of Tesla because he didn't meet the gender components, but then they wouldn't vote in favor of Kathleen Wilson-Thompson, even though she does technically meet the gender requirements.
So it's very confusing where ISS and Glass-Lewis are coming from.
So I think there's a risk that this package gets voted down.
Can I just shine a spotlight on one of those points that you made with these proxy advisory services?
Sure.
So I think for years, people have wondering why did corporate America go so woke, especially in the early 2020s, where they created all these DEI departments and, you know, they didn't have to do that.
And a big part of the reason is that those initiatives came from Glass-Lewis and ISS.
I think Elon's jokingly called ISS ISIS.
But basically what happens is they make recommendations for how shareholders should vote on different resolutions.
And the index funds basically just defer to them for whatever they should do.
So they effectively control or almost control the voting for all of these board level resolutions that every public company has to make.
And so they've been the ones who've been imposing all these DEI requirements, all these ESG requirements, if you're wondering where those things came from.
Because just these two companies, which no one's ever heard of, they were captured a long time ago, meaning they were captured by the world crowd years ago.
And so this has really been the root of why corporate America has gone woke for a long time.
And look, there's also pressure from the outside, from boycotts, or there's some pressure sometimes from employees and that kind of thing.
But a lot of it came from these two companies that no one's ever heard of.
And I think it would be a good idea for someone to take a look at this and figure out what happened.
Maybe someone like Chris Ruffo.
should investigate what was the impact of Glass-Lewis and ISIS on corporate America going full woke for so many years.
Because it certainly didn't help corporate profits.
It didn't help profits, and they don't have logical explanations for a lot of their decisions.
Yeah.
And why aren't there active investors or active managers in these passive groups who would make a decision on these things?
They're too small.
The banks call me every week.
And one of the things that I get is sort of like, they tell me like, hey, here are the big trades.
Here's the flow.
If you want to be in market, here's what I recommend.
That's what they're telling me.
One of the things they told me this week, which I thought was really shocking is,
There's so few active managers left.
It's so overwhelmingly passive money.
The next largest group is now retail.
And so what a lot of these professional money managers do now is they basically wait to see where retail is going and they follow them.
So there isn't the people with a diversified asset base to be able to stand up and say, I don't think what ISS and Glass-Lewis are doing is right.
And so what happens is they kind of, SAC says, they can just kind of run amok.
And they build a very healthy business being this interloper to provide opinions.
It's not clear where their opinions come from.
It's not clear what they're rooted in.
It's not clear that there's a way to adjudicate and go back to them and say, well, you got this wrong.
It's just not clear.
But, you know, they probably make a very healthy margin doing it.
And everybody, as Zach says, just kind of turns over responsibility to them.
It is an interesting fact that we kind of just say, hey, the guys who are the actual custodians of the shares are
don't have to do the job of holding the shares.
The job of being the holder of the shares is to vote the shares.
That's all there is to do as a shareholder.
You cast your vote.
Or abstain.
They could also abstain, right?
Yeah, and these guys are getting paid a fee to actually do that work, which is...
call it half a percent or quarter percent or 10th of a percent of the assets that they hold.
So like, what are the people they're doing?
If it's all automated trading, why aren't they just- Well, I don't know if you guys own a lot of equities, but just to give you a sense, there's people that manage the stocks, right?
There's people that transfer the stocks.
There's people that then give you a recommendation on how to vote the stock.
Then there's people that hold a virtual representation of that stock.
Then there are people that transfer that virtual representation.
And they will not stop calling.
So the point is like, we have so financialized everything that there are billion dollar businesses that sit at every single step of the way.
And to your point, Freeberg,
I think this is where- No one's actually a shareholder.
The tokenization of stocks may be a really good thing because it'll put the responsibility back into the owner of the stock because the wallet will centralize all that activity because you won't need to have all this other stuff.
I have been getting phone calls from Invesco, QQQ, because I own a bunch of QQQ and some accounts or whatever.
And they were calling three times a day.
I don't pick up my phone.
Who's calling me on the phone?
Unless it's one of you four calling me to say goodnight.
I don't.
That's the only time I pick up.
So I finally pick up and they're like, hey, we need you to vote.
I'm like, I'm not voting.
I don't know who you are.
They're like, well, let us explain to you how to vote.
And I'm like, I don't want to vote my shares.
I just want to own QQQ.
I'm good.
By the way, some of this infrastructure is so decrepit and old, like trying to get shares, for example, that you've bought in the private markets when a company goes public, just getting them registered and transferred in the position to be sold can sometimes take three or four weeks.
Can you imagine?
Markets move an entire order of magnitude in three or four weeks.
It's crazy.
Here's Elon's pay package milestones.
Market value, $2 trillion.
I think they're at $1.4 trillion right now, something around there.
Operational milestone, 20 million vehicles delivered.
And then you just go right down to $6.5 trillion.
But on the operational milestones, 10 million active FSD subscriptions, which they're far away from right now.
And 20 million vehicles, I think they've delivered six or seven.
One million robots delivered.
1 million robo-taxis in commercial operation.
Those are big numbers.
50 billion adjusted EBITDA and then straight down the line to 400 billion EBITDA.
If you were to look at this Optimus business, just back of the envelope, these robots are going to go for 20K, he said, ultimately.
Maybe they're 30.
They'll probably have a 30% margin like the cars do or something similar.
You'll make a little bit off the software sack.
And if you were to just, if every millionaire owned one of these or, you know, they took some number of the jobs, the TAM for this just in the United States- I don't think this is where it's going to go.
I don't think this is where- It's going to be huge.
We're talking hundreds of billions of dollars.
If I had to bet, I think a very fun polymarket is where do the first million robots go?
I'm willing to bet dollars to donuts that these robots go to Mars.
I don't think they're going to- Oh, wow.
They'll be in the Tesla factory.
So SpaceX buys them and sends them to Mars.
Yeah.
How else are you going to get a fleet of the workforce?
Or they'll go into the mines.
I think they're going to mine.
They could go to the mines.
Send them in to get that clean, beautiful coal.
Oh, so clean, so beautiful.
We could send those Optimus robots into there.
Well, it's not coal, J. Cal.
It's actually, it's the fact that our mining is really limited by the human exposure from the pressure and the heat.
If we can mine slightly below the area that we mine as our maximum depth today, it would unlock an extraordinary supply of minerals that we can't access today.
And so automation.
And you don't want to figure out how to create
potable water and breathing mechanisms on Mars for the first five years, sent robots.
Guess what?
They don't need to eat or breathe or pee or poo.
And they can get charged with solar.
And that may sound like a really stupid thing to say, but it becomes a huge amount of infrastructure that you otherwise wouldn't need to build on Mars.
That's right.
They just got to power up.
You just got to give them a plug.
And just a couple of solar panels and batteries.
Guess who makes those batteries?
Tesla.
Yeah.
About this digest
Release notes
We remix the strongest podcast storytelling into a tight, twice-weekly digest. These notes highlight when this edition shipped and how to reference it.
- Published
- 10/28/2025
- Last updated
- 10/28/2025
- Category
- tech
- Chapters
- 5
- Total listening time
- 39 minutes
- Keywords
- ai's societal friction: bias, governance, and the new robot workforce
