Hello Where’s Your Ed At Subscribers! I’ve started a premium version of this newsletter with a weekly Friday column where I go over the most meaningful news and give my views, which I guess is what you’d expect. Anyway, it’s $7 a month or $70 a year, and helps support the newsletter. I will continue to do my big free column too! Thanks.
What wins the war is sincerity.
What wins the war is accountability.
And we do not have to buy into the inevitability of this movement.
Nor do we have to cover it in the way it has always been covered. Why not mix emotion and honesty with business reporting? Why not pry apart the narrative as you tell the story rather than hoping the audience works it out? Forget “hanging them with their own rope” — describe what’s happening and hold these people accountable in the way you would be held accountable at your job.
Your job is not to report “the facts” and let the readers work it out. To quote my buddy Kasey, if you're not reporting the context, you're not reporting the story. Facts without context aren’t really facts. Blandly repeating what an executive or politician says and thinking that appending it with “...said [person]” is sufficient to communicate their biases or intentions isn’t just irresponsible, it’s actively rejecting your position as a journalist.
You don’t even have to say somebody is lying when they say they’re going to do something — but the word “allegedly” is powerful, reasonable and honest, and is an objective way of calling into question a narrative.
Let me give you a few examples.
A few weeks ago, multiple outlets reported that Meta would partner with Anduril, the military contractor founded by Palmer Luckey, the founder of VR company Oculus, which Meta acquired in 2014, only to oust Luckey three years later for donating $10,000 to an anti-Hillary Clinton group. In 2024, Meta CTO Andrew “Boz” Bosworth, famous for saying that Facebook’s growth is necessary and good, even if it leads to bad things like cyberbullying and terror attacks, publicly apologized to Luckey.
Now the circle is completing, with Luckey sort-of-returning to Meta to work with the company on some sort of helmet called “Eagle Eye.”
One might think at this point the media would be a little more hesitant in how they cover anything Zuckerberg-related after he completely lied to them about the metaverse, and one would be wrong.
The Washington Post reported that, and I quote:
To aid the collaboration, Meta will draw on its hefty investments in AI models known as Llama and its virtual reality division, Reality Labs. The company has built several iterations of immersive headsets aimed at blending the physical and virtual worlds — a concept known as the metaverse.
Are you fucking kidding me?
The metaverse was a joke! It never existed! Meta bought a company that made VR headsets — a technology so old, they featured in an episode of Murder, She Wrote — and an online game that could best be described as “Second Life, but sadder.” Here’s a piece from the Washington Post agreeing with me! The metaverse never really had a product of any kind, and lost tens of billions of dollars for no reason! Here’s a whole thing I wrote about it years ago! To still bring up the metaverse in the year of our lord 2025 is ridiculous!
But even putting that aside… wait, Meta’s going to put its AI inside of this headset? Palmer Luckey claims that, according to the Post, this headset will be “combining an AI assistant with communications and other functions.” Llama? That assistant?
You mean the one that it had to rig to cheat on LLM benchmarking tests? The one that will, as reported by the Wall Street Journal, participate in vivid and gratuitous sexual fantasies with children? The one using generative AI models that hallucinate, like every other LLM? That’s the one that you’re gonna put in the helmet for the military? How is the helmet going to do that exactly? What will an LLM — an inconsistent and unreliable generative AI system — do in a combat situation, and will a soldier trust it again after its first fuckup?
Just to be clear, and I quote Palmer Luckey, the helmet will feature an “ever-present companion who can operate systems, who can communicate with others, who you can off-load tasks onto … that is looking out for you with more eyes than you could ever look out for yourself right there in your helmet.” This is all going to be powered by Llama?
Really? Are we all really going to accept that? Does nobody actually think about the words they’re writing down?
Here’s the thing about military tech: the US DOD tends to be fairly conservative when it comes to the software it uses, and has high requirements for reliability and safety. I could talk about these for hours — from coding guidelines to the Ada programming language, which was designed to be highly crash-resistant and powers everything from guided missiles to F-15 fighter jets — but suffice it to say that it’s highly doubtful that the military is going to rely on an LLM that hallucinates a significant portion of the time.
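To give a sense of what “crash-resistant by design” means: Ada lets you declare numeric types with hard range constraints that get enforced on every assignment, so a bad value halts loudly and early instead of quietly corrupting the guidance math. Here’s a rough Python analogy (the names and numbers are mine, purely for illustration):

```python
# Rough Python analogy for an Ada range-constrained subtype, e.g.:
#   subtype Fin_Angle is Float range -20.0 .. 20.0;
# Ada's runtime enforces the constraint on every assignment; here it's by hand.
def fin_angle(value: float) -> float:
    if not -20.0 <= value <= 20.0:
        raise ValueError(f"fin angle {value} out of range")  # fail loud, fail early
    return value

print(fin_angle(12.5))   # fine
print(fin_angle(90.0))   # raises immediately instead of feeding garbage onward
```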
To be clear, I’m not saying we have to reject every single announcement that comes along, but can we just for one second think critically about what it is we are writing down?
We do not have to buy into every narrative, nor do we have to report it as if we do. We do not have to accept anything based on the fact that someone says it emphatically, or because they throw a number at us to make it sound respectable.
Here’s another example. A few weeks ago, Axios had a miniature shitfit after Anthropic CEO Dario Amodei said that “AI could wipe out half of all entry-level white-collar jobs and spike unemployment to 10-20% in the next one to five years.”
What data did Mr. Amodei use to make this point? Who knows! Axios simply accepted that he said something and wrote it down, because why think when you could write.
This is extremely stupid! This is so unbelievably stupid that it makes me question the intelligence of literally anybody that quotes it! Dario Amodei provided no sourcing, no data, nothing other than a vibes-based fib specifically engineered to alarm hapless journalists. Amodei hasn’t done any kind of study or research. He’s just saying stuff, and that’s all it takes to get a headline when you’re the CEO of one of the top two big AI companies.
It is, by the way, easy to cover this ethically, as proven by Allison Morrow of CNN, who, engaging her critical thinking, correctly stated that “Amodei didn’t cite any research or evidence for that 50% estimate,” that “Amodei is a salesman, and it’s in his interest to make his product appear inevitable and so powerful it’s scary,” and that “little of what Amodei told Axios was new, but it was calibrated to sound just outrageous enough to draw attention to Anthropic’s work.”
Morrow’s work is compelling because it’s sincere, and is proof that there is absolutely nothing stopping mainstream press from covering this industry honestly. Instead, Business Insider (which just laid off a ton of people and lazily recommended their workers read books that don’t exist because they can’t even write their own emails without AI), Fortune, Mashable and many other outlets blandly covered a man’s completely made up figure as if it was fact.
This isn’t a story. It is “guy said thing,” and “guy” happens to be “billionaire behind multi-billion dollar Large Language Model company,” and said company has made exactly jack shit in the way of software that can actually replace workers.
While there are absolutely some jobs being taken by AI, there is, to this point, little or no research suggesting it’s happening at scale, mostly because Large Language Models don’t really do the things you need them to do to take someone’s job at scale. Nor is it clear whether those jobs were lost because AI — specifically genAI — can actually do them as well as, or better than, a person, or because an imbecile CEO bought into the hype and decided to fire up the pink slip printer. When those LLMs inevitably shit the bed, those people will be hired back.
You know, like Klarna literally just had to.
These scare tactics exist to do one thing: increase the value of companies like Anthropic, OpenAI, Microsoft, Salesforce, and anybody else outright lying about how “agents” will do our jobs, and to make it easier for the startups making these models to raise funds, kind of like how a pump-and-dump scammer will hype up a doomed penny stock by saying it’s going to the moon, not disclosing that they themselves own a stake in the business.
Let’s look at another example. A recent report from Oxford Economics talked about how entry-level workers were facing a job crisis, and vaguely mentioned in the preview of the report that “there are signs that entry-level positions are being displaced by artificial intelligence at higher rates.”
One might think the report says much more than that, and one would be wrong. On the very first page, it says that “there are signs that entry-level positions are being displaced by artificial intelligence at higher rates.” On page 3, it claims that the “high adoption rate by information companies along with the sheer employment declines in [some roles] since 2022 suggested some displacement effect from AI…[and] digging deeper, the largest displacement seems to be entry-level jobs normally filled by recent graduates.”
In fact, fuck it, take a look.
That’s it! That’s the entire extent of its proof! The argument is that because companies are getting AI software and there are employment declines, it must be AI. There you go! Case closed.
This report has now been quoted as gospel. Axios claimed that Oxford Economics’ report provided “hard evidence” that “AI is displacing white-collar workers.” USA Today said that “positions in computer and mathematical sciences have been the first affected as companies increasingly adopt artificial intelligence systems.”
And Anthropic marketing intern/New York Times columnist Kevin Roose claimed that this was only the tip of the iceberg, because, and I shit you not, he had talked to some guys who said some stuff.
No, really.
In interview after interview, I’m hearing that firms are making rapid progress toward automating entry-level work, and that A.I. companies are racing to build “virtual workers” that can replace junior employees at a fraction of the cost. Corporate attitudes toward automation are changing, too — some firms have encouraged managers to become “A.I.-first,” testing whether a given task can be done by A.I. before hiring a human to do it.
One tech executive recently told me his company had stopped hiring anything below an L5 software engineer — a midlevel title typically given to programmers with three to seven years of experience — because lower-level tasks could now be done by A.I. coding tools. Another told me that his start-up now employed a single data scientist to do the kinds of tasks that required a team of 75 people at his previous company.
Yet Roose’s most egregious bullshit came after he admitted that these don’t prove anything:
Anecdotes like these don’t add up to mass joblessness, of course. Most economists believe there are multiple factors behind the rise in unemployment for college graduates, including a hiring slowdown by big tech companies and broader uncertainty about President Trump’s economic policies.
But among people who pay close attention to what’s happening in A.I., alarms are starting to go off.
That’s right, anecdotes don’t prove his point, but what if other anecdotes proved his point? Because Roose goes on to quote Amodei’s 50% line, and to note that Anthropic now claims its Claude Opus 4 model can “code for several hours without stopping,” a statement that Roose calls “a tantalizing possibility if you’re a company accustomed to paying six-figure engineer salaries for that kind of productivity” without asking “does that mean the code is good?” or “what does it do for those hours?”
Roose spends the rest of the article clearing his throat, adding that “even if AI doesn’t take all entry-level jobs right away” that “two trends concern [him],” namely that he worries companies are “turning to AI too early, before the tools are robust enough to handle full entry-level workloads,” and that executives believing that entry-level jobs are short-lived will “underinvest in job training, mentorship and other programs aimed at entry-level workers.”
Kevin, have you ever considered checking whether that actually happens?
Nah! Why would he? Kevin’s job is to be a greasy pawn of the AI industry and the markets at large. An interesting — and sincere! — version of this piece would’ve intelligently humoured the idea then attempted to actually prove it, and then failed because there is no proof that this is actually happening other than that which the media drums up.
It’s the same craven, insincere crap we saw with the return to office “debate,” which was far more about bosses pretending that the office was good than it was about productivity or any kind of work. I wrote about this almost every week for several years, and every single media outlet participated, on some level, in pushing a completely fictitious world where in-office work was “better” due to “serendipity,” that the boss was right, and that we all had to come back to the office.
Did they check with the boss about how often they were in the office? Nope! Did they give equal weight to those who disagreed with management — namely those doing the actual work? No. But they did get really concerned about quiet quitting for some reason, even though it wasn’t real, because the bosses that don’t seem to actually do any work had demanded that it was.
Anyway, Kevin Roose was super ahead of the curve on that one. He wrote that “working from home is overrated” and that “home-cooked lunches and no commuting…can’t compensate for what’s lost in creativity” in March 2020. My favourite quote is when he says “...research also shows that what remote workers gain in productivity, they often miss in harder-to-measure benefits like creativity and innovative thinking,” before mentioning some studies about “team cohesion,” linking to a 2017 article from The Atlantic that does not appear to include any study other than the Nicholas Bloom study that Roose himself linked (which showed remote work was productive) and another about “proximity boosting productivity” that it does not link to, adding that “the data tend to talk past each other.”
I swear to god I am not trying to personally vilify Kevin Roose — it’s just that he appears to have backed up every single boss-coddling, market-driven hype cycle with a big smile, every single time. If he starts writing about Quantum Computing, it’s tits up for AI.
This is the same thing that happened when corporations were raising prices and the media steadfastly claimed that inflation had nothing to do with corporate greed (once again, CNN’s Allison Morrow was one of the few mainstream media reporters willing to just say “yeah corporations actually are raising prices and blaming it on inflation”), desperately clinging to whatever flimsy data might prove that corporations weren’t price gouging even as corporations talked about doing so publicly.
It’s all so deeply insincere, and all so deeply ugly — a view from nowhere, one that seeks not to tell anyone anything other than that whatever the rich and powerful are worried or excited about is true, and that the evidence, no matter how flimsy, always points in the way they want it to.
It’s lazy, brainless, and suggests either a complete rot at the top of editorial across the entire business and tech media or a consistent failure by writers to do basic journalism, and as forgiving as I want to be, there are enough of these egregious issues that I have to begin asking if anybody is actually fucking trying.
It’s the same thing every time the powerful have an idea — remote work is bad for companies and we must return to the office, the metaverse is here and we’re all gonna work in it, prices are higher and it’s due to inflation rather than anything else, AI is so powerful and strong and will take all of our jobs, or whatever it is — and that idea immediately becomes the media’s talking points. Real people in the real world, experiencing a different reality, watch as the media repeatedly tells them that their own experiences are wrong. Companies can raise their prices specifically to raise their profits, Meta can literally not make a metaverse, AI can do very little to actually automate your real job, and the media will still tell you to shut the fuck up and eat their truth-slop.
You want an actual conspiracy theory? How about a real one: that the media works together with the rich and powerful to directly craft “the truth,” even if it runs contrary to reality. The Business Idiots that rule our economy — work-shy executives and investors with no real connection to any kind of actual production — are the true architects of what’s “real” in our world, and their demands are simple: “make the news read like we want it to.”
Yet when I say “works together,” I don’t even mean that they get together in a big room and agree on what’s going to be said. Editors — and writers — eagerly await the chance to write something following a trend or a concept that their bosses (or other writers’ bosses) come up with and are ready to go. I don’t want to pillory too many people here, but go and look at who covered the metaverse, cryptocurrency, remote work, NFTs and now generative AI in gushing terms.
Okay, but seriously, how is it every time with Casey and Kevin?
The Illuminati doesn’t need to exist. We don’t need to talk about the Bilderberg Group, or Skull and Bones, or reptilians, or wheel out David Icke and his turquoise shellsuit. The media has become more than willing to follow whatever it needs to once everybody agrees on the latest fad or campaign, to the point that they’ll repeat nonsensical claim after nonsensical claim.
The cycle repeats because our society — and yes, our editorial class too — is controlled by people who don’t actually interact with it. They have beliefs that they want affirmed, ideas that they want spread, and they don’t even need to work that hard to do so, because the editorial rails are already in place to accept whatever the next big idea is. They’ve created editorial class structures to make sure writers will only write what’s assigned, pushing back on anything that steps too far out of everybody’s agreed-upon comfort zone.
The “AI is going to eliminate half of white collar jobs” story is one that’s taken hold because it gets clicks and appeals to a fear that everyone, particularly those in the knowledge economy who have long enjoyed protection from automation, has. Nobody wants to be destitute. Nobody with six figures of college debt wants to be stood in a dole queue.
It’s a sexy headline, one that scares the reader into clicking, and when you’re doing a half-assed job at covering a study, you can very easily just say “there’s evidence this is happening.” It’s scary. People are scared, and want to know more about the scary subject, so reporters keep covering it again and again, repeating a blatant lie sourced using flimsy data, pandering to those fears rather than addressing them with reality.
The easiest way to push back on these stories is fairly simple: ask reporters to show you the companies that have actually done this.
No, I don’t mean “show me a company that did layoffs and claims they’re bringing in new efficiencies with AI.” I mean actually show me a company that has laid off, say, 10 people, and how those people have been replaced by AI. What does the AI do? How does it work? How do you quantify the work it’s replaced? How does it compare in quality? Surely with all these headlines there’s got to be one company that can show you, right?
No, no, I really don’t mean “we’re saying this is the reason,” I mean show me the actual job replacement happening and how it works. We’re three years in and we’ve got headlines talking about AI replacing jobs. Where? Christopher Mims of the Wall Street Journal had a story from June 2024 that talked about freelance copy editors and concept artists being replaced by generative AI, but I can find no stories about companies replacing employees.
To be clear, I am not advocating for this to happen. I am simply asking that the media, which seems obsessed with — even excited by — the prospect of imminent large-scale job loss, goes out and finds a business (not a freelancer who has lost work, not a company that has laid people off with a statement about AI) that has replaced workers with generative AI.
They can’t, because it isn’t happening at scale, because generative AI does not have the capabilities that people like Dario Amodei and Sam Altman repeatedly act like it does, yet the media continues to prop up the story because they don’t have the basic fucking curiosity to learn about what they’re talking about.
Hell, I’ll make it easier for you. Why don’t you find me the product, the actual thing, that can do someone’s job? Can you replace an accountant? No. A doctor? No. A writer? Not if you want good writing. An artist? Not if you want to actually copyright the artwork, and that’s before you get to how weird and soulless the art itself feels. Walk into your place of work tomorrow and look around you and start telling me how you would replace each and every person in there with the technology that exists today, not the imaginary stuff that Dario Amodei and Sam Altman want you to think about.
Outside of coding — which, by the way, is not the majority of a software engineer’s fucking job, if you’d take the god damn time to actually talk to one! — what are the actual capabilities of a Large Language Model today? What can it actually do?
You’re gonna say “it can do deep research,” by which you mean a product that doesn’t really work. What else? Generate videos that sometimes look okay? “Vibe code”? Bet you’re gonna say something about AI being used in the sciences to “discover new materials,” which supposedly proved AI’s productivity benefits. Well, MIT announced that it has “no confidence in the provenance, reliability or validity of the data, and [has] no confidence in the validity of the research contained in the paper.”
I’m not even being facetious: show me something! Show me something that actually matters. Show me the thing that will replace white collar workers — or even, honestly, “reduce the need for them.” Find me someone who said “with a tool like this I won’t need this many people” who actually fired them and then replaced them with the tool and the business keeps functioning. Then find me two or three more. Actually, make it ten, because this is apparently replacing half the white collar workforce.
There are some answers, by the way. Generative AI has sped up transcription and translation, which are useful for quick references but can cause genuine legal risk. Generative AI-based video editing tools are gaining in popularity, though it’s unclear by how much. Seemingly every app that connects to generative AI can summarise a message. Software engineers using LLM tools — as I talked about on a recent episode of Better Offline — are finding some advantages, but LLMs are far from a panacea. Generative AI chatbots are driving people insane by providing them an endlessly-configurable pseudo-conversation too, though that’s less of a “use case” and more of a “text-based video game launched at scale without anybody thinking about what might happen.”
Let’s be real: none of this is transformative. None of this is futuristic. It’s stuff we already do, done faster, though “faster” doesn’t mean better, or even that the task is done properly, and obviously, it doesn’t mean removing the human from the picture. Generative AI is best at, it seems, doing very specific things in a very generic way, none of which are truly life-changing. Yet that’s how the media discusses it.
An aside about software engineering: I actually believe LLMs have some value here. LLMs can generate and evaluate code, as well as handle distinct functions within a software engineering environment. It’s pretty exciting for some software engineers (they’re able to get a lot of things done much faster!), though they’d never trust it with things launched in production. These LLMs also have “agents,” but for the sake of argument, I’d like to call them “bots.” Bots, because the term “agent” is bullshit and used to make things sound like they can do more than they can. Anyway, bots can, to quote Thomas Ptacek, “poke around your codebase on their own…author files directly…run tools…compile code…run tests…and iterate on the results,” to name a few things. These are all things that can, under the watchful eye of an actual person, speed up some software engineers’ work.
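To make the “bot” loop concrete, here’s a minimal sketch in Python of what that iterate-on-the-results cycle looks like. To be clear: the function name, prompt format and command allowlist are all mine, invented for illustration, not any vendor’s actual interface:

```python
import subprocess

# Hypothetical stand-in for an LLM API call; wire up a real client here.
def ask_model(transcript: str) -> str:
    raise NotImplementedError("swap in your LLM provider of choice")

ALLOWED = {"ls", "cat", "pytest", "python"}  # keep the bot on a short leash

def bot_loop(goal: str, max_steps: int = 10) -> None:
    transcript = f"Goal: {goal}\nReply with one shell command per turn, or DONE."
    for _ in range(max_steps):
        command = ask_model(transcript).strip()
        if not command or command == "DONE":
            break
        if command.split()[0] not in ALLOWED:
            transcript += f"\n[refused: {command}]"  # no arbitrary commands
            continue
        # The "watchful eye of an actual person": a human approves every step.
        if input(f"run `{command}`? [y/N] ").strip().lower() != "y":
            transcript += f"\n[human vetoed: {command}]"
            continue
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        # Feed stdout/stderr back so the model can iterate on failing tests, etc.
        transcript += f"\n$ {command}\n{result.stdout}{result.stderr}"
```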
(A note from my editor, Matt Hughes, who has been a software engineer for a long time: I’m not sure how persuasive this stuff is. Coders have been automating things like tests, code compilation, and the general mechanics of software engineering since long before AI and LLMs were the hot thing du jour. You can do so many of the things that Ptacek mentioned with cronjobs and shell scripts — and, undoubtedly, with greater consistency and reliability.)

Ptacek also adds that “if truly mediocre code is all we ever get from LLM, that’s still huge, [as] it’s that much less mediocre code humans have to write.”
Back to Ed: In a conversation with veteran software engineer Carl Brown of The Internet of Bugs as I was writing this newsletter, he recommended I exercise caution with how I discussed LLMs and software engineering, saying that “...there are situations at the moment (unusual problems, or little-used programming languages or frameworks) where the stuff is absolutely useless, and is likely to be for a long time.” In a previous draft, I’d written that mediocre code was “fine if you knew what to look for,” but even then, Brown added that “...the idea that a human can ‘know what code is supposed to look like’ is truly problematic. A lot of programmers believe that they can spot bugs by visual inspection, but I know I can't, and I'd bet large sums of money they can't either — and I have a ton of evidence I would win that bet.”
Brown continued: “In an offline environment, mediocre code may be fine when you know what good code looks like, but if the code might be exposed to hackers, or you don't know what to look for, you're gonna cause bugs, and there are more bugs than ever in today's software, and that is making everyone on the Internet less secure.”
He also told me the story of the famed Heartbleed bug: a massive vulnerability in OpenSSL, a common encryption library, that smart, professional security experts and developers looked at for over two years before someone spotted the single unchecked statement behind it, an error that led to a massive, internet-wide panic and left hundreds of millions of websites vulnerable.
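For the curious, the class of bug is easy to show. Here’s a toy Python simulation (not the actual OpenSSL code, which is C): the “server” echoes back however many bytes the request claims to contain, and never checks that claim against what was actually sent:

```python
# Toy simulation of the Heartbleed bug class, NOT the real OpenSSL code.
# Secrets and requests share one buffer, standing in for process memory.
memory = bytearray(b"hello" + b"\x00" * 11 + b"SECRET_PRIVATE_KEY__" + b"\x00" * 28)

def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    memory[:len(payload)] = payload      # the request lands in shared memory
    return bytes(memory[:claimed_len])   # bug: trusts claimed_len, never checks
                                         # it against len(payload)

print(heartbeat(b"ping", 4))    # honest client gets back b'ping'
print(heartbeat(b"ping", 40))   # attacker over-claims, reads adjacent "memory"
                                # and leaks SECRET_PRIVATE_KEY__
# The famous fix is one check: reject any request where claimed_len > len(payload).
```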
So, yeah, I dunno man. On one hand, there are clearly software developers that benefit from using LLMs, but it’s complicated, much like software engineering itself. You cannot just “replace a coder,” because “coder” isn’t really the job, and while this might affect entry-level software engineers at some point, there’s yet to be proof it’s actually happening, or that AI’s taking these jobs and not, say, outsourcing.
Perhaps there’s a simpler way to put it: software engineering is not just writing code, and if you think that’s the case, you do not write software or talk to software engineers about what it is they do.
Seriously, put aside the money, the hype, the pressure, the media campaigns, the emotions you have, everything, and just focus on the product as it is today. What is it that generative AI does, today, for you? Don’t say “AI could” or “AI will,” tell me what “AI does.” Tell me what has changed about your life, your job, your friends’ jobs, or the world around you, other than that you heard a bunch of people got rich.
Yet the media continually calls it “powerful AI.” Powerful how? Explain the power! What is the power? The word “powerful” is a marketing term that the media has adopted to describe something it doesn’t understand, along with the word “agent,” which means “autonomous AI that can do things for you” but is used, at this point, to describe any Large Language Model doing anything.
But the intention is to frame these models as “powerful” and to use the term “agents” to make this technology seem bigger than it is, and the people that control those terms are the AI companies themselves.
It’s at best lazy and at worst actively deceitful, a failure of modern journalism to successfully describe the moment outside of what it’s told, or the “industry standards” it accepts, such as “a Large Language Model is powerful and whatever Anthropic or OpenAI tells me is true.”
It’s a disgrace, and I believe it either creates distrust in the media or drives people insane as they look at reality (where generative AI doesn’t really seem to be doing much) and get told something entirely different.
When I read a lot of modern journalism, I genuinely wonder what it is the reporter wants to convey. A thought? A narrative? A story? Some sort of regurgitated version of “the truth” as justified by what everybody else is writing and how your editor feels, or what the markets are currently interested in? What is it that writers want readers to come away with, exactly?
It reminds me a lot of a term that Defector’s David Roth once used to describe CNN’s Chris Cillizza — “politics, noticed”:
This feels, from one frothy burble to the next, like a very specific type of fashion writing, not of the kind that an astute critic or academic or even competent industry-facing journalist might write, but of the kind that you find on social media in the threaded comments attached to photos of Rihanna. Cillizza does not really appear to follow any policy issue at all, and evinces no real insight into electoral trends or political tactics. He just sort of notices whatever is happening and cheerfully announces that it is very exciting and that he is here for it. The slugline for his blog at CNN—it is, in a typical moment of uncanny poker-faced maybe-trolling, called The Point—is “Politics, Explained.” That is definitely not accurate, but it does look better than the more accurate “Politics, Noticed.”
Whether Roth would agree or not, I believe that this paragraph applies to a great deal of modern journalism. Oh! Anthropic launched a new model! Delightful. What does it do? Oh they told me, great, I can write it down. It’s even better at coding now! Wow! Also, Anthropic’s CEO said something, which I will also write down. The end!
I’ll be blunt: making no attempt to give actual context or scale or consideration to the larger meaning of the things said makes the purpose of journalism moot. Business and tech journalism has become “technology, noticed.” While there are forays out of this cul-de-sac of credulity — and exceptions at many mainstream outlets — there are so many more people who will simply hear that there’s a guy who said a thing, and that guy is rich and runs a company people respect, and thus that statement is now news to be reported without commentary or consideration.
Much of this can be blamed on the editorial upper crust that continually refuses to let writers critique their subject matter, and wants to “play it safe” by basically doing what everybody else does. What’s crazy to me is that many of the problems with the AI bubble — as with the metaverse, as with the return to office, as with inflation and price gouging — are obvious if you actually use the things or participate in reality, but such things do not always fit with the editorial message.
But honestly, there are plenty of writers who just don’t give a shit. They don’t really care to find out what AI can (or can’t) do. They’ve come to their conclusion (it’s powerful, inevitable, and already doing amazing things) and thus will write from that perspective. It’s actually pretty nefarious to continually refer to this stuff as “powerful,” because you know their public justification is how this stuff uses a bunch of GPUs, and you know their private justification is that they have never checked and don’t really care to. It’s much easier to follow the pack, because everybody “needs to cover AI” and AI stories, I assume, get clicks.
That, and their bosses, who don’t really know anything other than that “AI will be big,” don’t want to see anything else. Why argue with the powerful? They have all the money.
But even then…can you try using it? Or talking to people that use it? Not “AI experts” or “AI scientists,” but real people in the real world? Talk to some of those software engineers! Or I dunno, learn about LLMs yourself and try them out?
Ultimately, a business or tech reporter should ask themselves: what is your job? Who do you serve? It’s perfectly fine to write relatively straightforward and positive stuff, but you have to be clear that that’s what you’re doing and why you’re doing it.
And you know what, if all you want to do is report what a company does, fine! I have no problem with that, but at least report it truthfully. If you’re going to do an opinion piece suggesting that AI will take our jobs, at least live in reality, and put even the smallest amount of thought into what you’re saying and what it actually means.
This isn’t even about opinion or ideology, this is basic fucking work.
And it is fundamentally insincere. Is any of this what you truly believe? Do you know what you believe? I don’t mean this as a judgment or an attack — many people go through their whole lives with relatively flimsy reasons for the things they believe, especially in the case of commonly-held beliefs like “AI is going to be big” or “Meta is a successful company.”
If I’m honest, I really don’t mind if you don’t agree with something I say, as long as you have a fundamentally-sound reason for doing so. My CoreWeave analysis may seem silly to some because the company’s value has quadrupled — and that’s why I didn’t write that I believed the stock would crater, or really anything about the stock. Its success does not say much about the AI bubble other than that it continues, and even if I am wrong, somehow, long term, at least I was wrong for reasons I could argue, versus the general sense that “AI is the biggest thing ever.”
I understand formats can be constraining — many outlets demand an objective tone — but this is where words like “allegedly” come in. For example, The Wall Street Journal recently said that Sam Altman had claimed, in a leaked recording, that buying Jony Ive’s pre-product hardware startup would add “$1 trillion in market value” to OpenAI. As it stands, a reader — especially a Business Idiot — could be forgiven for thinking that OpenAI was now worth, or could be worth, over a trillion dollars, which is an egregious editorial failure.
One could easily add that “...to this date, there have been no consumer hardware launches at this scale outside of major manufacturers like Apple and Google, and these companies had significantly larger research and development budgets and already-existent infrastructure relationships that OpenAI lacks.”
Nothing about what I just said is opinion. Nothing about what I just said is an attack, or a slight, and if you think it’s “undermining” the story, you yourself are not thinking objectively. These are all true statements, and are necessary to give the full context of the story.
That, to me, is sincerity. Constrained by an entirely objective format, a reporter makes the effort to get across the context in which a story is happening, rather than just reporting exactly the story and what the company has said about it. By not including the context, you are, on some level, not being objective: you are saying that everything that’s happening here isn’t just possible, but rational, despite the ridiculous nature of Altman’s comment.
Note that conclusions like “this is unlikely” are subjective. They are also the natural implication of simply stating that Sam Altman believes acquiring Jony Ive’s company will add $1 trillion in value to OpenAI. By not saying the word “unlikely” at all, and instead allowing the audience to come to that conclusion by having the whole story, you give the audience the truth.
It really is that simple.
The problem, ultimately, is that everybody is aware that they’re being constantly conned, but they can’t always see where and why. Their news oscillates from aggressively dogmatic to a kind of sludge-like objectivity, and oftentimes feels entirely disconnected from their own experiences other than in the most tangential sense, giving them the feeling that their actual lives don’t really matter to the world at large.
On top of that, the basic experience of interacting with technology, if not the world at large, kind of fucking sucks now. We go on Instagram or Facebook to see our friends and battle through a few ads and recommended content, we see things from days ago until we click stories, and we hammer past a few more ads to get a few glimpses of our friends. We log onto Microsoft Teams, it takes a few seconds to go through after each click, and then it asks why we’re not logged in, a thing we shouldn’t need to do to make a video call.
Our email accounts are clogged with legal spam — marketing missives, newsletters, summaries from news outlets, notifications from UPS that require us to log in, notifications that our data has been leaked, payment reminders, receipts, and even occasionally emails from real people. Google Search is broken, but then again, so is searching on basically any platform, be it our emails, workspaces or social networks.
At scale, we as human beings are continually reminded that we do not matter; that any experience of ours outside of what the news says makes us “different” or a “cynic”; that our pain points are only as relevant as those that match recent studies or reports; and that the people who actually matter are either the powerful or those considered worthy of attention. News rarely feels like it appeals to the listener, reader or viewer, just an amorphous, generalized “thing” of a person imagined in the mind of a Business Idiot. The news doesn’t feel the need to explain why AI is powerful, just that it is, in the same way that “we all knew” that being back in the office was better, even if far more people disagreed than agreed.
As a result of all of these things, people are desperate for sincerity. They’re desperate to be talked to as human beings, their struggles validated, their pain points confronted and taken seriously. They’re desperate to have things explained to them with clarity, and to have it done by somebody who doesn’t feel chained by an outlet.
This is something that right wing media caught onto and exploited, leading to the rise of Donald Trump and to the obsession with creating the “Joe Rogan of the Left,” an inherently ridiculous idea based on Rogan’s popularity with young men (which is questionable based on recent reports) and a total misunderstanding of what actually makes his kind of media popular.
However you may feel about Rogan, what his show sells on is that he’s a kind of sincere, pliant and amiable oaf. He does not seem condescending or judgmental to his guests, because he himself sits, slack-jawed, saying “yeah I knew a guy who did that,” and he genuinely seems to like them. While you (as I do) may deeply dislike everything on that show, you can’t deny that the people on it seem to at least enjoy themselves, or feel engaged and accepted.
The same goes for Theo Von (real name: Theodor Capitani von Kurnatowski III, and no, really!), whose whole affable doofus motif disarms guests and listeners.
It works! And he’s got a whole machine that supports him, just like Rogan: money, real promotion, and real production value. They are given the bankroll and the resources to make a high-end production and a studio space and infrastructural support, and then they get a bunch of marketing and social push too. There are entire operations behind them, beyond the literal stuff they do on the set, because, shocker, the audience actually wants to see them, not a boxed lunch with “THE THINGS TO BELIEVE” written on it by a management consultant.
This is in no way a political statement, because my answer to this entire vacuous debate is to “give a diverse group of people whose beliefs you agree with the actual promotional and financial backing, and then let them create something with their honest-to-god friendships.” Bearing witness to actual love and solidarity is what will change the hearts of young people, not endless McKinsey gargoyles with multi-million-dollar budgets for “data.”
I should be clear that this isn’t to say every single podcast should be in the format I suggest, but that if you want whatever “The Joe Rogan Of The Left” is, the answer is “a podcast with a big audience where the people like the person speaking and as a result are compelled by their message.”
It isn’t even about politics, it’s that when you cram a bunch of fucking money into something it tends to get big, and if that thing you create is a big boring piece of shit that’s clearly built to be — and even signposted in the news as built to be — manipulative, it is in and of itself sickening.
I’m gonna continue clearing my throat: the trick here is not to lean right, nor has it ever been. Find a group of people who are compelling, diverse and genuinely enjoy being around each other and shove a whole bunch of advertising dollars into it and give it good production values to make it big, and then watch in awe as suddenly lots of people see it and your message spreads. Put a fucking trans person in there — give Western Kabuki real money, for example — and watch as people suddenly get used to seeing a trans person because you intentionally chose to do so, but didn’t make it weird or get upset when they don’t immediately vote your way.
Because guess what — what people are hurting for right now is actual, real sincerity. Everybody feels like something is wrong. The products they use every day are increasingly-broken, pumped full of generative AI features that literally get in the way of what they’re trying to do, which already was made more difficult because companies like Meta and Google intentionally make their products harder to use as a means of making more money. And, let’s be clear, people are well aware of the billions in profits that these companies make at the customer’s expense.
They feel talked down to, tricked, conned, abused and abandoned, both parties’ representatives operating in terms almost as selfish as the markets that they also profit from. They read articles that blandly report illegal or fantastical things as permissible and rational and think, for a second, “am I wrong? Is this really the case? This doesn’t feel like the case?” while somebody tells them that despite the fact that they have less money and said money doesn’t go as far, they’re actually experiencing the highest standard of living in history.
Ultimately, regular people are repeatedly made to feel like they don’t matter. Their products are overstuffed with confusing menus and random microtransactions, the websites they read are full of advertisements disguised as stories and actual advertisements built to trick them, and their social networks intentionally separate them from the things they want to see.
And when you feel like you don’t matter, you look to other human beings, and other human beings are terrified of sincerity. They’re terrified of saying they’re scared, they’re angry, they’re sad, they’re lonely, they’re hurting, they’re constantly on a fucking tightrope. Every day feels like something weird or bad is going to happen on the news (which, for no reason other than that it helps rich people, constantly tries to scare them that AI will take their jobs), and they just want someone to talk to, but everybody else is fucking unwilling to let their guard down after a decade-plus of media that valorized snark and sarcasm, because the lesson they learned about being emotionally honest was that it’s weird, or they’re too much, or it’s feminine for guys, or it’s too feminine for women.
Of course people feel like shit, so of course they’re going to turn to media that feels like real people made it, and they’ll turn to the media they see the easiest: that which the algorithm hands them, that which advertising puts in front of them, or, of course, word of mouth. And if someone sends you something to listen to and describes it in terms that sound like hanging out with a friend, you’d probably give it a shot.
Outside of podcasting, people’s options for mainstream (and an alarming amount of industry) news are somewhere between “I’m smarter than you,” “something happened!” “sneering contempt,” “a trip to the principal’s office,” or “here’s who you should be mad at,” which I realize also describes the majority of the New York Times opinion page.
While “normies” of whatever political alignment might want exactly the slop they get on TV, that slop is only slop because the people behind it believe that regular people will only accept the exact median person’s version of the world, even if they can’t really articulate it beyond “whatever is the least-threatening opinion” (or the opposite in Fox News’ case).
Really, I don’t have a panacea for what ails media, but what I do know is that in my own life I have found great joy in sincerity and love. In the last year I have made — and will continue to make, as it’s my honour to — tremendous effort to get to know the people closest to me, to be there for them if I can, to try and understand them better and to be my authentic and honest self around them, and accept and encourage them doing the same. Doing so has improved my life significantly, made me a better, more confident and more loving person, and I can only hope I provide the same level of love and acceptance to them as they do to me.
Even writing that paragraph I felt the urge to pare it back, for fear that someone would accuse me of being insincere, of “speaking in therapy language,” of “trying to sound like a hero,” not that I am doing so, but because there are far more people concerned with moderating how emotional and sincere others are than there are willing to stop actual societal harms.
I think it’s partly because people see emotions as weakness. I don’t agree. I have never felt stronger and more emboldened than I have as I feel more love and solidarity with my friends, a group that I try to expand at any time I can. I am bolder, stronger (both physically and mentally), and far happier, as these friendships have given me the confidence to be who I am, and I offer the same aggressive advocacy to my friends in being who they are as they do to me.
None of what I am saying is a one-size-fits-all solution. There is so much room for smaller, more niche projects, and I both encourage and delight in them. There is also so much more attention that can be given to these niche projects, and things are only “niche” until they are given the time in the light to become otherwise. There is also so much more that can be done within the mainstream power structures, if only there is the boldness to do so.
Objective reporting is necessary — crucial, in fact! — to democracy, but said objectivity cannot come at the cost of context, and every time it does so, the reader is failed and the truth is suffocated. And I don’t believe objective reporting should be separated from actual commentary. In fact, if someone is a reporter on a particular beat, their opinion is likely significantly more-informed than that of someone “objective” and “outside of the coverage,” based on stuff like “domain expertise.”
The true solution, perhaps, is more solidarity and more sincerity. It’s media outlets that back up their workers, with editorial missions that aggressively fight those who would con their readers or abuse their writers, focusing on the incentives and power of those they’re discussing rather than whether or not “the markets” agree with their sentiment.
In any case, the last 15+ years of the media have led to a flattening of journalism, constantly swerving toward whatever the next big trend is — the pivot to video, contorting content to “go viral” on social media, SEO, or whatever big coverage area (AI, for example) everybody is chasing instead of focusing on making good shit people love. Years later, social networks have effectively given up on sending traffic to news, and now Google’s AI summaries are ripping away large chunks of the traffic of major media outlets that decided the smartest way to do their jobs was “make content for machines to promote,” never thinking for a second that those who owned the machines were never to be trusted.
Worse still, outlets have drained the voices from their reporters, punishing them for having opinions, ripping out anything that might resemble a personality from their writing to meet some sort of vague “editorial voice,” despite readers and viewers showing again and again that they want to read the news from a human being, not an outlet.
I maintain that things can change for the better, and it starts with a fundamental acceptance that those running the vast majority of media outlets aren’t doing so for their readers’ benefit. Once that happens, we can rebuild around distinct voices, meaningful coverage and a sense of sincerity that the mainstream media seems to consider the enemy.
Soundtrack: Queens of the Stone Age - Villains of Circumstance
Listen to my podcast Better Offline if you haven't already.
I want my fucking tech industry back.
Maybe you think I sound insane, but technology means a lot to me. It’s the way that I speak to most of my friends. It’s my lifeline when I’m hurting or when those close to me hurt, and it’s the way I am able to make a living and be a creative — something I only was able to become because of technology. Social networks have been a huge part of me being able to become a functional human being, and you can judge me for that all you want, but you are a coward and a hypocrite for doing so, and you’re going to read to the end of this blog anyway.
Really, seriously, honestly — the Ed Zitron you know was and is only possible because of my deep connection to technology. This was how I made friends. This was how I got the confidence to meet real people. This was how I started my company. This was how I met the people closest to me, people I love with all my heart. I was only able to do any of this because I was able to get on the computer.
I am bombastic and frankly a little much today, and I was the literal opposite less than 5 years ago, and even more reserved 10 years before that. Technology allowed me to find a way to be human on my terms, in ways that I don’t think are possible anymore, because most of the interconnecting fabric I used has been interfered with by bad actors, and the rest buried in slop and SEO.
I think there are far more people out there like me than will admit to it. I think more people miss the past, or at least realize now what they lost.
There was a time this didn’t suck, when it wasn’t a struggle to do basic things, when my world was not a constant war with my god damn apps, when things weren’t necessarily turn-key but my phone wasn’t randomly burning through half of its battery life in an hour and a half because one app on the App Store is poorly configured. I swear to god, back in like, 2019, Zoom just fucking connected. I remember things being better, and on top of that, I see how much better things could be.
But that’s not the tech industry we’re allowed to have, because the people that run the tech industry do not give a shit.
It’s not enough to have your data, your work, your art, your posts, your friends, the things you’ve taken photos of, and the things you’ve searched for. The industry must have that of your children, and their children, as early as possible, even if it means helping them cheat on their homework so that they too can live a life where they’ve skipped having any responsibility or learning anything about the world other than how one can extract as much as possible without having to give anything in return.
Big tech is sociopathic and directionless, swinging wildly to try and find new ways to drag any kind of interaction out of a customer they’ve grown to loathe for their unwillingness to be more profitable. Decades of powerful Big Tech Business Idiots have chased out true value-creation in Silicon Valley in favour of growth economics, sending edict after edict down to the markets and the media about what’s going to be “hot” next, inventing business trends rather than actual solutions to problems. After all, that might involve — eugh! — experiencing the real world rather than authoring a new version of it every few years.
Apple barely escapes the void because its principal value proposition has, on some level, always been “our stuff works.” The problem is that Apple needs to grow, and thus its devices are slowly but surely becoming mired in sludge. The App Store is an abomination, your iPhone settings look like a fucking Escher painting, and in its desperation to follow the pack it shoved Apple Intelligence out the door — one of the most invasive and annoying pieces of software to ever grace a computer.
Apple’s willingness to do this shows that it’s rotten just like the rest of them — it's just better at hiding it. After all, look at the way in which it flouted court orders telling it to open up third-party payments as a means of squeezing every penny out of the App Store. Loathsome. And it still ended up losing.
I adore tech. Tech made me who I am today. I use and love technology for hours a day, yet that experience is constantly mangled by the warring intentions of almost every product I use. I’m forced to log into the newspaper website and back into Google Calendar multiple times a week, my phone randomly resets — as every single iPhone has for multiple years — at least twice a week, my Apple Watch stops being willing to read my heart rate, websites I want to read sometimes simply do not load, and sometimes when I load websites on an iPad they just won’t scroll.
Everything feels like a fucking chore, but I love the actual things that technology does for me, like letting me take notes with ease, like building and maintaining my fitness through a series of connected products like Tonal and Fight Camp, like using Signal to talk to friends hundreds or thousands of miles away, like posting dumb stuff on Bluesky and interacting with my followers, like recording a podcast wherever I am in the world because USB-C mics are cheap and easy to use and sound great.
There are so many great things about technology, things I fucking love, and Large Language Models do not resemble their form or intention. There is nothing about an LLM that feels like it’s built to provide a real service, other than some sort-of fraudulent copy of something else, lacking its soul or utility. Those that actually use them in their daily work talk about them as exciting tools that help them improve workflows, not as the next big thing.
The original iPhone, even in its initial form, promised a world where two or three devices became one, where your music and a camera were always on you, and where you could do your banking and grocery shopping while sitting in the back of a taxi. It promised access to the world’s knowledge from a slab of glass in your pocket. If I’m honest, the smartphone has absolutely delivered on those promises — and more.
Where do we extrapolate from LLMs? What am I meant to be seeing in ChatGPT?
The “iPhone moment” wasn’t a result of one thing, but a collection of different bits that formed an obvious whole — one device that did a bunch of things really, really well.
LLMs have no such moment, nor do they have any one thing they do well, let alone really well. LLMs are famous not for their efficacy, but their inconsistency, with even ardent AI cultists warning people not to trust their output. What am I meant to see from here? They’re not autonomous, and have shown no proof that they can be, and in fact kind of feel antithetical to autonomy itself, which requires consistency, reliability and replicability, more things that LLMs cannot do.
And that, ultimately, was what made the smartphone amazing too. Within a few years, phones were competent web browsers. The mobile web took a minute to catch up, sure, but you could see it taking form immediately, as you could with the App Store. They immediately made sense as a way to listen to music, because they were effectively an iPod, a beloved MP3 player, and the iPhone’s camera was good enough for most people at the time, and quickly became better than most of the point-and-shoots that people used to take on vacations and to parties. Now, most people are pretty happy with their phone cameras regardless of who makes them.
All of this made total sense from the very beginning the moment you picked one up. What if the camera was better? It happened. What if the screen was bigger? It happened. There were immediate signs the iPhone would improve. It wasn’t fantastical to believe that in 10-to-20 years you’d have a bigger, faster and thinner iPhone with a camera that produced shots alarmingly close to what you’d capture with a DSLR.
It makes sense that Google freaked out the second it picked one up. It was fucking wild what it could do, even in its first form. Each iteration and improvement — as with other smartphones — offers a new twist on a formula you already know works, and sometimes “better” means something different. For example, I don’t use Android, but I think the foldable Motorola phones are cool as shit. Palm’s WebOS was a stroke of UI genius, and it’s criminal to see how HP mishandled the company after its acquisition, ultimately killing one of the earliest and most iconic mobile brands.
Sidenote: In anticipation of a “well, akchually” from the peanut gallery, different can also mean bad. 3D phones were portable migraine-causers. The BlackBerry Storm’s weird SurePress technology — where the touchscreen kind-of ‘clicked’ through haptic feedback whenever you pressed something — was an abomination that put RIM on a terminal trajectory. And Samsung’s decision to include a built-in firelighter in the Samsung Galaxy Note 7 will remain one of the most expensive errors in mobile hardware history. It really blew up, but not in the way they wanted it to.
What does the “better” version of ChatGPT look like, exactly? What’s cool about ChatGPT? Where’s that “oooh” moment? Are you going to tell me you’re that impressed by the pictures and the words? Is it in the resemblance of its outputs to human beings? Because the actual answer is “a ChatGPT that actually works.” One that you can just ask to do some shit and know it’ll do it, and it’d also be very obvious what it could actually do, which is not the case right now. A better ChatGPT would quite literally be a different product.
What’s particularly horrifying about the AI bubble is that it’s shown that when they decide to, big tech can put hundreds of billions behind whatever the fuck they want. They are able to mobilize incredible amounts of capital and the industrial might of multiple companies with multi-trillion dollar market capitalizations to build entire infrastructure dedicated to one thing, and the one thing they are choosing is generative AI.
They’re all fully capable of uniting around an ideal — it’s just that said ideal exists entirely to automate human beings out of the picture, and even more offensively, it doesn’t seem to be able to do so, and the more obvious that becomes, the more obvious the powerful’s hunger becomes for a world where they never see or talk to us, and they get all of our money and attention.
And it’s not just their greed — it’s how obviously they love the idea of automating human beings away, and creating a world where we’re increasingly disconnected and beholden to technology that they entirely control. No creators, no connections, and best of all, no customers — just people cranking a giant, energy-guzzling slot machine and maybe getting the thing they wanted at the end.
Except it doesn’t work. It obviously doesn’t work. It hasn’t ever worked, and there’s never really been a sign of it working other than people very confidently saying “this will eventually work.”
They now need this to be several echelons BIGGER than the iPhone to be worth it. Hundreds of billions of capital expenditures and endless media attention are begging for an actual payoff — something truly amazing and societally relevant other than the amount of investment and attention it’s getting. They need this to be the single biggest consumer tech phenomenon ever while also being the panacea to the dwindling growth of the Software as a Service and enterprise IT markets, and it needs to start doing that within the next 12 months, without fail, if it even has that long.
You can fight with me on semantics, on how high the valuations are and how many users ChatGPT has, but look at the products and tell me any of this is really the future.
Imagine if they’d done something else.
Imagine if they’d done anything else.
Imagine if they’d have decided to unite around something other than the idea that they needed to continue growing.
Imagine, because right now that’s the closest you’re going to fucking get.
Mid-break Soundtrack: Spinnerette - A Prescription For Mankind
We all feel like we’re at war right now. Every person I know, on some level, feels like they’re in their own battle, their own march toward something, or against something, or away from something. It’s constant, a drumbeat, a war song, a funeral dirge, and so rarely an anthem.
All of us feel like we’re individually suffering. We echo with conflict and we reverberate with our own doubts, even the most confident and successful of us. Even our devices are wars within themselves — wars within software that is built to interfere with its own purpose, our ability to connect with others, or find the things we need. This suffering is often an unfortunate byproduct of an advertising channel that makes Sundar Pichai or Mark Zuckerberg a hundred million dollars or more.
We struggle to do the things we need to do, as we do with the things we want to do, because there are so many warring incentives: our mobile browsers literally slow down because every site wants to shove a fucking cookie into our phones, or because a page has to phone home to a hundred different tracking services. And we fail to see the big picture, how this is literally robbing us of the one thing we know to be finite — time.
We tell ourselves these problems are minor, because if we accept how frustrating they are, we must accept how frustrating all of them are, and how many of them there are, and that we’re surrounded by digital ants biting us with little or no rhyme or reason other than their thirst for their queen’s growth.
While we may feel increasingly divided, these problems unite us. Everybody faces them mostly in equal measure, though the poorer you are, the more likely you’re burdened by a cheap, shitty laptop like the Acer Aspire 1 that I used last year, which took over an hour to set up and forever to do anything in its advertisement-filled litterbox of an operating system. The more likely you’re unable to afford the subscriptions that afford you a bit of dignity in the digital world, like YouTube Premium, which saves you from having to see five minutes of advertising for every 10 minutes of video you watch.
We all use social networks that actively experiment on us to see how much advertising we’ll take, what content we might engage with — not like, enjoy — and we all have the same fucking awful version of Google Search. Even expensive iPhones are plagued with the cursed Apple Intelligence software, and even if you turn it off, you still deal with Apple’s actively evil App Store and a mobile internet full of websites that are effectively impossible to browse on a mobile.
We ache not so much for the old world of the computer, but the world we know is possible if these fucking bastards wouldn’t keep ruining it. It’s magical that we can have a video chat with someone halfway across the world, play a fast-paced videogame with them, watch the same movies that we both stream, casually look something up on a search engine, or browse the photos a friend posted on a social network. Even if it’s for work, it’s kind of amazing that we can take big files and send them across the internet. The cameras in our phones are truly incredible. Connected fitness has changed my entire life. Handheld gaming PCs are cool as shit.
We live in the future, and the future is cool.
Or it would be cool, if it wasn’t for all these fucking bastards.
Even those of us too young to remember a less-algorithmic internet can see the potential. We see what technology can do. We see what the remarkable advances in smaller chips and batteries and processors have allowed us to do. We know what’s possible, but we see — whether we acknowledge it or just feel its sheer force shearing off bits of our fucking soul — what these companies are choosing to do to us.
There is nothing making Mark Zuckerberg force algorithmic Instagram and Facebook feeds upon people by default other than sheer, unadulterated greed and the growth-at-all-costs rot economics that have made him a multi-billionaire. We know what we want from his network, he knows what value we get out of it, but unlike Mark Zuckerberg, we have no voice in the conversation other than choosing to accept whatever punishment he offers. We know exactly what it is we want to do, and for some reason we rarely talk about the man responsible for getting in our way.
I don’t know, maybe you think I’m being dramatic, but I feel like shit about this, because I know it doesn’t have to be this way. I have spent the last year of my life cataloguing why companies like Google (Prabhakar Raghavan) and Facebook (Zuckerberg, Gleit, Mosseri, Backstrom, Sandberg, Bosworth) make their products worse, and I don’t know why more people don’t talk about the scale of these harms, and the unbelievable, despicable intentionality behind their decision making. Sundar Pichai and Mark Zuckerberg have personally overseen the destruction of society’s access to readily-available information. You can dance around it all you want, you can claim these things aren’t a big deal, but you’re fucking wrong.
Google and Facebook were, on some level, truly societal marvels, and they have been poisoned and twisted into a form of advertising parasite that you choose to let feed on you so that you can speak to your friends or find something out.
Let me put it in simpler terms: isn’t it fucking weird how hard it is to do anything? Don’t you remember when it was easier? It’s harder now because of Mark Zuckerberg and Sundar Pichai, and the information you look for is worse because of Sam Altman and Satya Nadella, whose deranged attachment to Large Language Models has pumped our internet full of bullshit at a time when Google had actively abandoned any duty to the web or its users.
This isn’t a situation with grey areas, especially when it comes to Mark Zuckerberg, a man who cannot be fired. He chose to make things bad, and he chooses to keep them this bad every day. Sundar Pichai is responsible for the destruction of Google Search along with the now-deposed Prabhakar Raghavan.
Sam Altman is a con artist that worked studiously for over a decade to accumulate power and connections until he found a technology and a time when the tech industry was out of ideas, and from everything I’ve read, it feels like he fell ass-backwards into ChatGPT and was surprised by how much everybody else liked it.
In any case, he is a great salesman to a legion of Business Idiots that had run out of growth ideas — the Rot-Com Bubble I discussed a year ago — and would take something, anything, even if it was horrifyingly expensive, even if it wasn’t clear if it would work, because Sam Altman could spin a fucking yarn, and he’d spent a long time investing in media relationships to make sure that he’d have their buy-in.
And honestly, the tech media was ready for a fun new story. I heard people saying in 2022 that it was “nice to get excited about something again,” and in many ways Altman gave hope to an industry that felt fucking bleak after getting hoodwinked twice, by crypto and the metaverse. He offered a far more convincing story with an actual product to look at, sold by a guy the media already liked who had convinced everybody he was very smart.
Then Satya Nadella, a management consultant cultist of the growth mindset, realizing there were no more growth markets, decided that he must have ChatGPT in Bing, and then Sundar Pichai chose to follow too. At any point these men could’ve looked ahead and seen exactly what would happen, but they chose not to, because there was nowhere else to shove their money, and both the markets and the media yearned for good news.
Notice how none of this — from the media to the executive sect — is about you or me. None of this is about products, or the future, or even the present, just whatever “the next big thing” might be that will keep the Rot Economy’s growth-at-all-costs party going.
Nowhere along the line did anyone actually see an opportunity to sell people something they wanted or needed. Large Language Models could generate a lot of text or pictures, and that barely approximated a thing that society wanted or needed, other than being something that people used to be willing to pay for — and businesses had long been interested in doing these things cheaper, usually by offshoring or underpaying contractors, so this promised to reduce costs even further.
The fact that three years later we still have trouble describing why these things exist is enough of a sign that the tech industry has no real interest in building artificial intelligence at all — because AI is, at least based on the time before ChatGPT, meant to be about doing stuff for us, which Large Language Models are pretty fucking poor at, because the idea of getting something “done for you” is that you’re outsourcing both the production and the quality control.
In any case, it’s enough to make anyone feel crazy. Over the last decade we’ve watched — and while I’m talking about the tech industry, I think we can all say it’s been everywhere else too — the things we love get distanced from us so that somebody else can get unbelievably rich, the things we used to do easily made more difficult, confusing and/or expensive, and the ways we used to connect with people become increasingly abstracted and exploitative.
I don’t know what to tell you about these people other than that they are responsible for the world around you feeling like it’s in fucking ruins. I cannot give you a plan for the future, I cannot tell you what will fix things, but however things get fixed, it starts with people knowing who these people are and what they have done.
I can give you their names. Mark Zuckerberg. Sam Altman. Sundar Pichai. Satya Nadella. Tim Cook. Sheryl Sandberg. Adam Mosseri. Prabhakar Raghavan. There are others, many others, and they are fully responsible for how broken everything feels.
And some of the guilty aren’t tech CEOs, or fabulously wealthy, but rather their collaborators in the tech media that have carried water for the sociopaths ruining our digital — and, often, physical — world.
The reason I am so hard on my peers in the media is that it has never been more urgent that we hold these people accountable. Their ability to act both unburdened by regulation and true criticism has emboldened them to cause harm to billions of people so that they may continue to make billions of dollars, in part because the media continually congratulates them for doing so.
And let’s be honest, what they’re doing is horribly, awfully wrong.
Fighting back starts with the truth, said regularly, said boldly and clearly with emotion and sincerity. I don’t have other answers. I don’t have bold plans. I don’t know what to do, other than to explain how I feel, and if you feel the same, at the very least make you feel less afraid.
If you ever need to talk, email me at [email protected]. I don’t care. I have cracked myself open and spilled myself onto my podcast and newsletter for no reason other than the fact that I feel more alive doing so, and have become a stronger and happier person as a result.
All this is possible thanks to technology, and while I have no plan, I know I feel more free and alive when I write and speak about this stuff. I write this knowing that some will say speaking in this way is “too much,” or find some other way of attacking me for experiencing emotion, and if you’re feeling that way reading this, look deep within yourself and see if you’re simply uncomfortable with somebody capable of feeling things.
We die alone, but we choose whether we live that way. Remember that billions of us are suffering in the same way, and remember who to fucking thank for doing it to us.
2025-05-28 01:02:28
Next year is going to be big. Well, I personally don't think it'll be big, but if you ask the AI industry, here are the things that will happen by the end of 2026:
How much of this actually sounds plausible to you?
I thought I couldn't be more disappointed in parts of the tech media, then OpenAI went and bought former Apple Chief Design Officer Jony Ive's "Io," a hardware startup that it initially invested in to create some sort of consumer tech device. As part of the ridiculous $6.5 billion all-stock deal to acquire Io, Jony Ive will take over all design at OpenAI, and also build a device of some sort.
At this point, no real information exists. Analyst Ming-Chi Kuo says it might have a "form factor as compact and elegant as an iPod shuffle," yet when you look at the tweet everybody is citing Kuo's quotes from, most of the "analysis" is guesswork outside of a statement about what the prototype might be like.
Let's Talk About Ming-Chi Kuo!
It feels like everybody is quoting analyst Ming-Chi Kuo as a source as to what this device might be as a means of justifying writing endless fluff about Jony Ive and Sam Altman's at-this-point-theoretical device.
Kuo is respectable, but only when it comes to the minutiae of Apple — changes in component strategy and near-term launches. He has a solid reputation when it comes to finding out what’s selling, what isn’t, and what the company plans to launch. That’s because analysts work by speaking to people — often people working at the less glamorous end of the iPhone and Mac supply chain, like those that manufacture specific components — and asking what orders they’ve received, for what, and when. If a company massively cuts down on production for, say, iPhone screens, you can infer that Apple’s struggling to shift the latest version of the iPhone. Similarly, if a company is having to work around the clock to manufacture an integrated circuit that goes into the newest MacBook, you can assume that sales are pretty brisk.
Outside of that, Kuo is fucking guessing, and treating him as anything more than that allows reporters to make ridiculous and fantastical guesses based on nothing other than vibes. If you are writing that Kuo "revealed details" about the device you have failed your readers, first by putting Kuo on a mythological pedestal (one he already occupies, to some extent), and secondly by failing to put into context what an analyst does, and what an analyst can’t do.
And yeah, Kuo is guessing. Jony Ive may have worked at Apple, but he is not Apple. Ive was not a hardware guy — at least when it came to the realm beyond industrial and interface design — nor did he handle operations at Apple. While Kuo's sources may indeed have some insight, it's highly doubtful he magically got his sources to talk after the announcement, meaning that he's guessing.
Kuo also predicted in 2021 that Apple would release 15-20 million foldable iPhones in 2023, and predicted Apple would launch some sort of AR headset almost every year, claiming it would arrive in 2020, 2022 (with glasses in 2025!), second quarter 2022, "late 2022" (where he also said that Apple would somehow also launch a second-generation version in 2024 with a lighter design), or 2023, but then in mid-2022 decided the headset would be announced in January 2023, and become available "2-4 weeks after the event," and predicted that, in fact, Apple would ship 1.5 million units of said headset in 2023. Sadly, by the end of 2022, Kuo said that the headset would be delayed until the second half of 2023, before nearly getting it right, saying that the device would be announced at WWDC 2023 (correct!), but that it would ship "the second or third quarter of 2023."
Not content with being wrong this many times, Kuo doubled down (or quadrupled down, I’ve lost count) in February 2023, saying that Apple would launch "high-end and low-end versions of second-generation headset in 2025," at a point in time when Apple had yet to announce or ship the first generation. Then, finally, literally a day before the announcement of the Vision Pro, Kuo predicted it "could launch as late as 2024," the kind of thing you could've learned from a single source at Apple telling you what would be announced in 24 hours, or, I dunno, the press embargo.
On December 25 2023, Kuo successfully predicted that the Vision Pro would launch "in late January or Early February 2024." It launched in the US on February 2 2024. Mark Gurman of Bloomberg reported that Apple planned to launch the device "by February 2024" five days earlier, on December 20 2023.
Kuo then went on to predict Apple would only produce "up to 80,000 Vision Pro headsets for launch" on January 11 2024, only to say that Apple had sold "up to 180,000" of them 11 days later. On February 28 2024, after predicting no less than twice that Apple would make multiple models, Kuo said that Apple had not started working on a second-generation or lower-priced Vision Pro.
This was a very long-winded way to say that anybody taking tweets by Ming-Chi Kuo as even clues as to what Jony Ive and Sam Altman are making is taking the piss. He has a 72.5% track record for getting things right, according to AppleTrack, which is decent, but far from perfect. Any journalist that regurgitates a Ming-Chi Kuo prediction without mentioning that is committing criminal levels of journalistic malpractice.
So, now that we've got that out the way, here's what we actually know — and that’s a very load-bearing “know” — about this device, according to the Wall Street Journal:
OpenAI Chief Executive Sam Altman gave his staff a preview Wednesday of the devices he is developing to build with the former Apple designer Jony Ive, laying out plans to ship 100 million AI “companions” that he hopes will become a part of everyday life.
...
Altman and Ive offered a few hints at the secret project they have been working on. The product will be capable of being fully aware of a user’s surroundings and life, will be unobtrusive, able to rest in one’s pocket or on one’s desk, and will be a third core device a person would put on a desk after a MacBook Pro and an iPhone.
The Journal earlier reported that the device won’t be a phone, and that Ive and Altman’s intent is to help wean users from screens. Altman said that the device isn’t a pair of glasses, and that Ive had been skeptical about building something to wear on the body.
Let's break down what this actually means:
The Journal's story also has one of the most ludicrous things I've read in the newspaper: that "...Altman suggested the $6.5 billion acquisition has the potential to add $1 trillion in value to OpenAI," which would mean that OpenAI acquiring a washed former Apple designer who has designed basically nothing since 2019 to create a consumer AI device — a category that has categorically failed to catch on — would somehow nearly quadruple its valuation. Printing that statement is journalistic malpractice without a series of sentences about how silly it is.
But something about Jony Ive gives reporters, analysts and influencers a particular kind of madness. Reporters frame this acquisition as "the big bet that Jony Ive can make AI hardware work," that this is OpenAI "crashing Apple's party," that this is "a wake up call" for Apple, that this is OpenAI "breaking away from the pack" by making "a whole range of devices from the ground up for AI."
Based on this coverage, one might think that Jony Ive has been, I dunno, building something since he left Apple in 2019, which CNBC called "the end of the hardware era at Apple" about six months before Apple launched its M1 series processors and markedly improved its hardware as a result. Hell, much of Apple’s hardware improvement has been because it walked away from Ive’s dubious design choices. Ive’s obsession with thinness led to the creation of the butterfly keyboard — a keyboard design that was deeply unpleasant to type on, with very little travel (the distance a key moves when pressed), and a propensity to break at the first glimpse of a speck of dust.
Millions of angry customers — including famed film director Taika Waititi — and a class-action lawsuit later, Apple ditched it and returned to the original design. Similarly, since Ive’s exit, Apple has added HDMI ports, SD card readers, and MagSafe charging back to its laptops. Y’know, the things that people — especially creatives — wanted and liked, but had to be eliminated because they added negligible levels of heft to a laptop.
On leaving Apple in 2019 — where he'd been part time since 2015 (though the Wall Street Journal says he returned as a day-to-day executive in 2017, just in time to promise and then never ship the AirPower charging pad) — Ive formed LoveFrom, a design studio with Apple as its first (and primary) client with a contract valued at more than $100 million, according to the New York Times, which reported the collapse of the relationship in 2022:
In recent weeks, with the contract coming up for renewal, the parties agreed not to extend it. Some Apple executives had questioned how much the company was paying Mr. Ive and had grown frustrated after several of its designers left to join Mr. Ive’s firm. And Mr. Ive wanted the freedom to take on clients without needing Apple’s clearance, these people said.
In 2020, LoveFrom signed a non-specific multi-year relationship to “design the future of Airbnb.” LoveFrom also worked on some sort of seal for King Charles to give away during the coronation to — and I quote — “recognize private sector companies that are leading the way in creating sustainable markets.” It also designed an entirely new font for the multi-million dollar event, which does not matter to me in the slightest but led to some reporters writing entire stories about it. The project involves King Charles encouraging space companies. I don’t know, man.
I cannot find a single thing that Jony Ive has done since leaving Apple other than "signing deals." He hasn't designed or released a tech product of any kind. He was a consultant at Apple until 2022, though it's not exactly obvious what it is he did there since the death of Steve Jobs. People lovingly ascribe Apple's every success to Ive, forgetting that (as mentioned) Ive oversaw the truly abominable butterfly keyboard, as well as numerous other wonky designs, including the trashcan-shaped Mac Pro, the Power Mac G4 Cube (a machine aimed at professionals, with a price to match, but limited upgradability thanks to its weird design), and the notorious “hockey puck” mouse.
In fact, since leaving Apple, all I can confirm is that Jony Ive redesigned Airbnb in a non-specific way, made a new font, made a new system for putting on clothing, made a medal for the King of England to give companies that recycle, and made some non-specific contribution to creating an electric car that has yet to be shown to the public.
Anyway, this is the guy who's going to be building a product that will ship 100 million units "faster than any company has ever shipped 100 million of something new before."
It took 3.6 years for Apple to sell 100 million iPhones, and nearly six years for it to sell 100 million Apple Watches. It took four years for Amazon to sell 100 million Echo devices. Former NFT scam Rabbit claims to have sold over 130,000 units of its "barely reviewable" "AI-powered" R1 device, but told FastCompany last year that the product had barely 5000 daily active users. The Humane Pin was so bad that returns outpaced sales, with 10,000 devices shipped but many returned due to, well, it sucking. I cannot find another comparison point, because absolutely nobody has succeeded in making the next smartphone or "third device."
To give you another data point, Gartner — another reliable analyst firm, at least when it comes to historical sales trends, although its future-looking predictions about AI and the metaverse can be more ‘miss’ than ‘hit’ — says that the number of worldwide PC shipments (which includes desktops and laptops) hit 64.4 million in Q4 2024. OpenAI thinks that it’ll sell nearly twice as many devices in one year as PCs were sold during the 2024 holiday quarter. That’s insane. And that’s without mentioning things like… uh, I don’t know, who’ll actually build them? Where will you get your parts, Sam? Where will you get your chips? Most semiconductor manufacturers book orders months — if not years — in advance. And I doubt Qualcomm has a spare 100 million chipsets lying around that it’ll let you have for cheap.
Yet people seem super ready to believe — much like they were with the Rabbit R1 — except they're asking even less of Jony Ive and Sam Altman, the Abbott and Costello of bullshit merchants. It's hard to tell exactly what it is that Ive did at Apple, but what we do know is that Ive designed the Apple Watch, a product that flopped until it refocused on fitness over fashion, and apparently wanted the watch to be a "high-end fashion accessory" rather than the "extension of the iPhone" that Apple executives wanted, according to the Wall Street Journal, heavily suggesting that Ive was the reason the Apple Watch flopped far more than he was the great mind that made Apple a success.
Anyway, this is the guy who's going to build the first true successor to the smartphone, something Jony Ive already failed to do with the full backing of the entire executive team at Apple, a company he worked at for decades, and one that has literally tens of billions of dollars in cash sitting in its bank accounts.
Jony Ive hasn't overseen the design or launch of a consumer electronics product in — at my most charitable guess — three years, though I'd be very surprised if his two-or-three-year-long consultancy deal with Apple involved him leading design on any product, otherwise Apple would have extended it.
If I was feeling especially uncharitable — and I am — I’d guess that Ive’s relationship with Apple ended up looking like that between Alicia Keys and Research in Motion, which in 2013 appointed the singer its “Global Creative Director,” a nebulous job title that gives Prabhakar Raghavan’s “Chief Technologist” a run for its money. Ive acted as a thread of continuity between the Jobs and Cook eras of Apple, while also adding a degree of celebrity to the company that Apple’s other execs — like Phil Schiller and Craig Federighi — otherwise lacked.
He's teamed up with Sam Altman, a guy who has categorically failed to build any new consumer-facing product outside of the launch of ChatGPT, a product that loses OpenAI billions of dollars a year, to do the only other thing that loses a bunch of money — building hardware.
No, really, hardware is hard. You don't just design something and then send it to a guy in China. You have to go through multiple prototypes, then find one that actually does something useful, then work out how to mass-produce it, then actually build the industrial rails to do so, then build the infrastructure to run it, then ship it. At that point, even if the device is really good (it won't be, if it ever launches), you have to sell one hundred million of them, somehow.
I repeat myself: hardware is hard, to the point where even Apple and Microsoft can cock up in disastrous (and expensive) ways. Pretty much every 2011 MacBook Pro — at least, those with their own discrete GPUs — is now e-waste, in part because the combination of shoddy cooling and lead-free solder led these machines to become expensive bricks. The same was true of the Xbox 360. Even if you think the design and manufacturing processes go swimmingly, there’s no guarantee that problems won’t creep up later down the line.
I beg, plead, scream and yell to the tech media to take one fucking second to consider how ludicrous this is. Io raised $225 million in total funding (and OpenAI already owned 23% of the company from those rounds), a far cry from the billion dollars that The Information was claiming it wanted to raise in April 2024, heavily suggesting that whatever big, secret, sexy product was sitting there wasn't compelling enough to attract anyone other than Sutter Hill Ventures (which famously burned hundreds of millions of dollars investing in Lacework, a company that sold for $200 million and once gave away $30,000 of Lululemon gift cards in one night to anyone that would meet with the company’s sales representatives), Thrive (which has participated in or led multiple OpenAI funding rounds), Emerson Collective (run by Laurene Powell Jobs, a close friend of Jony Ive and Altman according to The Information) and, of course, OpenAI itself, which bought the company in its own stock after already owning 23% of its shares.
This deal reeks of desperation, and is, at best, a way for venture capitalists that feel bad about investing in Jony Ive's lack of productivity to get stock in OpenAI, a company that also doesn't build much product.
While OpenAI has succeeded in making multiple different models, what actual products have come out of GPT, Gemini or other Large Language Models? We're three joyless years into this crap, and there isn't a single consumer product of note other than ChatGPT, a product that gained its momentum through a hype campaign driven by press and markets that barely understood what they were hyping.
Despite all that media and investor attention — despite effectively the entirety of the tech industry focusing on this one specific thing — we're still yet to get any real consumer product. Somehow Sam Altman and Jony Ive are going to succeed where Google, Amazon, Meta, Apple, Samsung, LG, Huawei, Xiaomi, and every single other consumer electronics company has failed, and they're going to do so in less than a year, and said device is going to sell 100 million units.
OpenAI didn't acquire Jony Ive's company to build anything — it did so that it could increase the valuation of OpenAI in the hopes that it can raise larger rounds of funding. It’s the equivalent of adding an extension to a decrepit, rotting house.
OpenAI, as a company, is lost. It has no moat, its models are hitting the point of diminishing returns and have been for some time, and as popular as ChatGPT may be, it isn't a business and constantly loses money.
On top of that, it requires more money than has ever been invested in a startup. SoftBank had to take out a $15 billion bridge loan from 21 different banks just to fund the first $7.5 billion of the $30 billion it’s promised OpenAI in its last funding round.
At this point, it isn't obvious how SoftBank affords the next part of that funding, and OpenAI using stock rather than cash to buy Jony Ive's company suggests that it doesn’t have much to spare. OpenAI is allegedly also buying AI coding company Windsurf for $3 billion. The deal was announced on May 6 2025 by Bloomberg, but it's not clear if it closed, or whether the deal would be in cash or stock, or really anything, and I have to ask: how much money does OpenAI really have?
And how much can it afford to burn? OpenAI’s operating costs are insane, and the company has already committed to several grand projects, while also pushing deeper and deeper into the red. And if — when? — its funding rounds convert into loans, because it failed to convert into a for-profit, OpenAI will have even less money to splash on nebulous vanity projects. Then again, asking questions like that isn't really how the media is doing business with OpenAI — or, for that matter, has done with the likes of Mark Zuckerberg, Satya Nadella, or Sundar Pichai. Everything has to be blindly accepted, written down and published, for fear of...what, exactly? Not getting the embargo to a product launch everybody else got? Missing out on the chance to blindly cover the next big thing, even if it won't be big, and might not even be a thing?
So, I kicked off this newsletter with a bunch of links tied to the year 2026, and I did so because I want — no, need — you to understand how silly all of this is.
Sam Altman's OpenAI is going to, in the next year, according to reports:
Even one of these projects would be considered a stretch. A few weeks ago, Bloomberg Businessweek put out a story called "Inside the First Stargate AI Data Center." Just to be clear, this will be "fully operational" (or "constructed" depending on who you ask!) by the middle of 2026. The real title should've been "Outside the First Stargate AI Data Center," in part because Bloomberg didn't seem to be allowed into anything, and in part because it doesn't seem like there's an inside to visit.
Again, if I’m being uncharitable — which I am — this whole thing reminds me of that model town that North Korea built alongside the demilitarized zone to convince South Koreans about the beauty of the Juche system and the wisdom of the Dear Leader — except the beautiful, ornate houses are, in fact, empty shells. A modern-day Potemkin village. Bloomberg got to visit a Potemkin data center.
Data centers do not just pop out of the ground like weeds. They require masses of permits, endless construction, physical server architecture, massive amounts of power, and even if you somehow get all of that together you still have to make everything inside it work. While analysts believe that NVIDIA has overcome the overheating issues with its Blackwell chips, Crusoe is brand fucking spanking new at this, and The Information described Stargate as "new terrain for Oracle...relying on scrappy but unproven startups...[and] more broadly, [Oracle] has less experience than its larger rivals in dealing with utilities to secure power and working with powerful and demanding customers whose plans change frequently."
In simpler terms, you have a company (Oracle) building something at a scale it’s never built at before, using a partner (Crusoe) which has never done this, for a company (OpenAI) that regularly underestimates the demands it puts on its servers. The project being built is also the largest of its kind, and is being built during the reign of an administration that births and kills a new tariff seemingly every day.
Anyway, all of this needs to happen while OpenAI also funds its consumer electronics product, as well as its main operations, which will lose it $14 billion in 2026, according to The Information.
It also needs to become a for-profit by the end of 2025 or lose $10 billion of SoftBank's funding, a plan that SoftBank accepted but Microsoft is yet to approve, in part (according to The Information) because OpenAI wants to both give it a smaller cut of profits and stop Microsoft from accessing its technology past 2030.
This is an insane negotiation strategy — leaking to the press that you want to short-change your biggest investor both literally and figuratively — and however it resolves will be a big tell as to how stupid the C-suite at Microsoft really is. Microsoft shouldn't budge a fucking inch. OpenAI is a loser of a company run by a career liar that cannot ship product, only further iterations of an increasingly commoditized series of Large Language Models.
At this point, things are so ridiculous that I feel like I'm huffing paint fumes every time I read Techmeme.
If you're a member of the media reading this, I implore you to look more critically at what's going on, to learn about the industries in question and begin asking yourselves why you continually and blandly write up whatever it is they say. If you think you're "not a financial journalist" or "not a data center journalist" and thus "can't understand this stuff," you're wrong. It isn't that complex, otherwise a part-time blogger and podcaster wouldn't be able to pry it apart.
That being said, there's no excuse for how everybody covered this Jony Ive fiasco. Even if you think this device ships, it took very little time and energy to establish how little Jony Ive has done since leaving Apple, and only a little more time to work out exactly how ridiculous everything about it is. I know you need stories about stuff — I know you have to cover an announcement like this — but god, would it fucking hurt to write something even a little critical? Is it too much to ask that you sit down and find out what Jony Ive actually does and then think about what that might mean for the future?
This story is ridiculous. The facts, the figures, the people involved, everything is stupid, and every time you write a story without acknowledging how unstable and untenable it is, you further misinform your readers. Even if I’m wrong — even if they somehow pull off all of this stuff — you still left out a valuable part of the story, refused to critique the powerful, and ultimately decided that marketing material and ephemera were more valuable than honest analysis.
There is no reason to fill in the gaps or “give the benefit of the doubt” to billionaires, and every single time you do, you fail your audience. If that hurts to read, perhaps ask yourself why.
Holding these people accountable isn’t just about asking tough questions, but questioning their narratives and actions and plans, and being willing to write that something is ridiculous, fantastical, or outlandish. Doing so — even if you end up being proven wrong — is how you actually write history, rather than simply existing as a vessel for Sam Altman or Jony Ive or Dario Amodei or any number of the world’s Sloppenheimers.
Look, I am nobody special. I am not supernaturally intelligent, nor am I connected to vast swaths of data or suppliers that allow me to write this. I am a guy with a search engine who remembers when people said stuff, and the only thing you lack is my ability to write 5000 or more words in the space of five hours. If you need help, I am here to help you. If you need encouragement, I am here to provide it. If you need critiques, well, scroll up. Either way, I want to see a better tech media, because that’s what the world deserves.
You can do better.