2025-06-09 04:14:09
I’ve been around the world of artificial intelligence since I was working on my PhD in the early 1990s, which means I have seen how, historically, AI research comes and goes.
The pattern is this: a new technique comes along which promises to be the big leap towards what is now usually termed artificial general intelligence (AGI), but back then was just called AI. A huge amount of effort is spent in researching it, and progress is made. Then, the promised leap to AGI just never appears. What we’re left with is often useful tech, but not the breakthrough everyone believed in.
I suspect that LLMs will follow the same pattern. The biggest difference between LLMs and earlier AI tech isn’t their potential for reaching AGI, but the business environment they exist in, which rewards the hype cycle. We have oligarchies and monopolies seeking to use their incredible financial muscle to “own” markets. We have venture capitalists who have an interest in hyping a startup so it gets sold at the highest possible price. These are perfect conditions for hype cycles around particular technologies, and this time it’s LLMs.
The cracks, though, will inevitably appear – and the first one has come from the unlikely source of research at Apple. It’s not a hard paper to read, but the short version is that reasoning models – the adjuncts to LLMs which give them the ability to do multi-step reasoning – collapse badly on complex questions (as do unaugmented LLMs).
Broadly, I think LLMs are useful tech, particularly for some kinds of textual analysis and also for converting human language into machine queries. But that’s all: they’re not the solution which gets us to AGI. And they’re not worth the current hype.
If you want to raise your stress and anxiety levels, then I would heartily suggest reading 404Media’s article which compiles teachers’ views from the ground about the use of AI in schools.
Education is going to be changed dramatically by these tools, whether we like it or not. My biggest concern is that we’re letting this happen, rather than making it happen in a way which is beneficial to us all as human beings.
Barry Adams – one of the best SEO people in the world – recently posted on the impact of AI overviews and AI Mode on traffic from Google. Predictably, if you’ve been reading what I wrote over the past few years, it isn’t pretty: as Barry puts it “publishers need to focus on audience strategies that exclude Google as a reliable source… In the next few years, many publishers will be unable to survive.”
Honestly it’s grim out there, and I still think some publishers are burying their heads in the sand. Yes, you can get traffic from Discover – but once Google has worked out how to make all the money in the world without sending publishers any traffic, that’s it – the long summer is over.
Some really sad news: Bill Atkinson, one of the early developers of the Mac, has passed away. I remember Bill mostly from his work on HyperCard, which was an amazing product. I built a lot of HyperCard stacks in the pre-web era, mostly to explore academic concepts, but also using it as a complete programming environment. When it gained the ability to read from the Mac’s serial ports, for example, you started to be able to use it for interfacing with equipment – I wrote a piece of HyperCard software which logged calls from a phone switch, for example. Fun times. Steven Levy has a great retrospective on Atkinson’s life and career.
John Gruber has a long article detailing his thoughts on the rumour that Apple Notes will be able to export Markdown files in its next release. Like John, I think of Markdown as a format for creating content which will ultimately live on the web (most of these posts are written first in Markdown, because I could never be bothered with HTML). But I would love to see the ability to export everything into Markdown – I’ve done this in the past using a variety of tricks, but it is always quite painful.
I find it really hard to have much sympathy for Reddit in its legal battle to stop Anthropic from scraping Reddit content – the site only exists because of the millions of contributions of individual users.
Bear in mind that what Reddit is doing here is selling its users’ content without any real consent. It’s claiming the right to sell something that, really, it does not own.
I’ve written before about how Starship, SpaceX’s heavy lifter, is a bit of a science fiction fantasy of how rockets should work. If you’re a nerd of a certain age, then you will remember pictures from covers of science fiction magazines showing tail-first landings on planets and back on Earth.
Starship is driven by that vision, but there’s only one problem: physics. To land on a planet, you need to shed as much momentum as possible before you hit the dirt. There are two ways of doing that: using an atmosphere to aerobrake or parachute (or both); or using retrorockets to slow you down.
If you have an atmosphere, then the former is preferable for one simple reason: weight. If you use retrorockets, the fuel they need to land you safely has to be carried up with you, reducing the amount of useful weight you can carry to orbit.
And of course the more weight you’re carrying, the stronger everything has to be. Which, in turn, increases the weight of the vehicle.
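The weight penalty can be sketched with the Tsiolkovsky rocket equation. The numbers below are illustrative assumptions, not SpaceX figures: even a modest landing burn demands a large slice of the landed mass as propellant, all of which has to be hauled up to orbit first.

```python
import math

# Tsiolkovsky rocket equation: delta_v = Isp * g0 * ln(m0 / mf)
# All figures here are rough assumptions for illustration only.
g0 = 9.81        # standard gravity, m/s^2
isp = 350        # assumed vacuum specific impulse, seconds
delta_v = 2000   # assumed landing-burn budget, m/s

# Mass ratio m0/mf needed to achieve that delta-v
mass_ratio = math.exp(delta_v / (isp * g0))
# Fraction of the pre-burn mass that must be propellant
propellant_fraction = 1 - 1 / mass_ratio
print(f"propellant needed: {propellant_fraction:.0%} of pre-burn mass")
```

Under these assumptions the landing burn alone eats roughly 44% of the vehicle’s pre-burn mass in propellant – mass that displaces payload on the way up.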
This is the problem that SpaceX is running into, and, as Will Lockett explains, there is no way round it. Physics can’t be avoided, even if you are the richest person in the world, and no amount of software engineering-inspired iteration will dig you out of the hole that gravity puts you in. It’s like Musk’s fantasy of going to Mars, a ferociously difficult journey which would yield little more than a bunch of photo opportunities covered in red dust.
The wonderful Mic Wright – who has a book out which you should pre-order – has written a very telling article about the way that stories move from small local newspapers to bigger news networks. And, as Mic rightly points out, the majority of those stories are not even worth reporting in the smallest of local papers.
Instead, they are there simply because they deliver clicks. There’s no other merit at all – just how important is it that a random woman was upset by a lack of “English” food on holiday? And yet this ended up in the Daily Mail, one of the country’s largest newspapers.
I don’t think I like this world of publishing much. And I always end up thinking it’s partly my fault. A lot of us early Internet people have these thoughts.
Ever wondered why Steve Ballmer did the “developers, developers, developers” chant that time? You can find out in this interview, and also hear Ballmer talk about a lot of really interesting Microsoft-related history.
Some lighter shade of darkness: this is a wonderful look at the genesis of Marc Almond’s magnificent Torment and Toreros, not only a great album but one of the greatest. If you love drama.
2025-06-01 23:20:10
Behind every LLM lies a system prompt. This is a set of instructions written by the creators of the LLM, designed to give it personality, a starting point for responses, and to build in “fences” around what it will respond to. Simon Willison has been digging into the one for the latest iteration of Claude, and it’s absolutely amazing. When you have a piece of software which you have to instruct “[do] not provide information that could be used to make chemical or biological or nuclear weapons” you’re really on to something.
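For the curious, a system prompt is just text that rides along with every request, ahead of the user’s messages. Here’s a minimal sketch, loosely modelled on chat APIs such as Anthropic’s, which accept a top-level system parameter – the model name and instructions are placeholders, not Claude’s real prompt (which runs to thousands of words):

```python
# A sketch of how a system prompt travels with a chat API request.
# "claude-example" and the instruction text are placeholders.
request = {
    "model": "claude-example",  # hypothetical model id
    "system": (
        "You are a helpful assistant. Do not provide information that "
        "could be used to make chemical, biological, or nuclear weapons."
    ),
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
}
```

The striking thing is that the fences are plain instructions, not code – which is why so much effort goes into wording them.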
It might have slipped under the radar amidst the illegal detentions and the extortion of everyone from law firms to universities, but Donald Trump’s crazy crew of mini-fascists is currently trying to impose its ideas of what “free speech” means on the rest of the world. And that idea includes, for example, defending the rights of neo-Nazi parties in Germany. It includes calling for people who incite racist violence in the UK to be released from prison after those people called for asylum seekers to be burnt alive. We aren’t talking fancy academic debate here.
James Ball has the lowdown, and importantly he draws the right conclusions: that the Trumpists believe that “if you’re on the internet and using a service provided by a US tech company, they say, then Donald Trump sets the rules. The US is quietly declaring sovereignty over cyberspace and expecting the world to acquiesce, making an unprecedented digital landgrab in the name of freedom.”
And that is the really big point, and why Europe needs to get its own alternatives up to scratch as quickly as possible.
You have probably seen the footage of Jordan Peterson imploding when faced with the superior intellect of (checks notes) 20 atheist students, but it’s less likely that you know quite what a big deal this is. Peterson’s cloak of Christianity has enabled him to become a much bigger voice among the right in the US, and his point-blank refusal to actually say he is a Christian during the debate has ruffled some feathers. Perhaps they will forgive him. Perhaps, like every grifter from Russell Brand to Milo Yiannopoulos, he’ll have some kind of epiphany and find god – again.
If you think the technology world provides drama, you should check out what happens in the world of paleontology. Pride, ambition, ego… this story has it all, and it is well worth the long read.
Everyone – with the possible exception of the markets – knows that Elon Musk never keeps a promise. But I don’t think I have seen such a full list of his lies before. Why does anyone believe him? Why do the markets still have a little bounce every time he lies about self-driving, or robotaxis, or AI?
Because markets are made up of idiots no smarter than the average watcher of Fox News. In fact, they’re probably more stupid, because the average watcher of Fox News doesn’t believe that markets are divinely rational.
Jason Snell has written a lovely look back at the era of the Mac clone, something that I remember all too well. Mac clones were great for Mac users, who got more powerful technology at lower prices, but dreadful for Apple, which at the time just couldn’t compete.
Bear in mind that on paper Apple had every advantage over the cloners. It had scale, which meant it should have had better margins. It made the operating system! And yet, more nimble companies outmanoeuvred it, making machines that were faster and cheaper.
On the one hand, every single wave of computer technology has “cost people jobs” – and yet employment always recovered. So it’s tempting to think of the current wave of replacement of creative work by AI in the same way.
But.
One of the principles of Socialism 101 is that capitalism always seeks to replace labour (you and me) with capital (machines). In a system where ownership of capital is the source of political and economic power, having more creative work done by machines rather than human beings isn’t just an unfortunate consequence of progress, it’s desirable – for capitalists.
They don’t need us.
One of the other principles of capitalism is that war is good for business. And now that the cultural hegemony has been established, companies like Meta – previously all touchy-feely and “oh no we won’t do that” – are salivating at the prospect of the “growth” they can get from making things designed to kill human beings. Some companies at least have the good grace to claim they have absolutely no idea what governments might be doing with the technology they sell them (“based on everything we currently know”, wink wink). But Zuck – a man with the ethics of a pike – is probably very happy he doesn’t have to pretend anymore.
Children making child sex abuse material of their classmates, using AI. I mean, what can you actually say to that?
For some reason, Apple has declined to send anyone to appear at The Talk Show Live this year. It’s a shame. I’ll leave it to others to speculate why they might have decided not to appear.
2025-05-25 19:24:36
OpenAI's £5 billion acquisition of Jony Ive's io startup represents more than a corporate merger: it's a fascinating collision of Silicon Valley's most revered design philosophy with the chaotic, speculative world of generative AI. Ive, the maestro behind Apple's most iconic devices, is joining forces with Sam Altman to "completely reimagine what it means to use a computer", bringing together 55 engineers who once crafted the iPhone, iPad, and Apple Watch to build what they boldly claim will be "the coolest piece of technology that the world will have ever seen".
Their mission reads like classic Silicon Valley hubris: to transcend the "decades old" smartphones and laptops that currently mediate our digital lives, creating instead an entirely new category of AI-native hardware that could fundamentally disrupt the smartphone duopoly of Apple and Google.
Yet beneath the marketing speak lies a more complex story. This is ultimately about corporate power projection, with OpenAI seeking to escape its dependence on existing platforms whilst potentially creating new forms of technological dependency. OpenAI wants to escape just being an app, and this is the way it thinks it can do it.
Coming after other AI hardware ventures like Humane's troubled AI Pin have spectacularly failed, this feels risky. I'm still unconvinced that voice – which is probably what this will have to rely on – is the interface of the future for every interaction, which it would need to be if there's no screen. I think of this as the car mentality vs the public transport reality: Americans, particularly in Silicon Valley, spend a lot of time driving their cars and so assume that voice is the perfect interface. For most people, though, this isn't true: you're in public, and talking/listening isn't the ideal modality, particularly when you want to converse about something sensitive.
And once you add a screen to account for this, you end up with something which has the size, shape and functionality of a phone. So why not just use your phone, and an app?
Anthropic's Claude is probably my favourite AI model, and it got a significant update this week. Anthropic is an interesting company, in part because it's cautious about the impact of AI generally, and public about what it sees as the risks. With Claude Opus 4, it has ramped up the level of safety measures around the model – moats which effectively stop you using it for certain things – after finding that the new model was more effective at advising users how to create biological weapons.
Yikes.
But Claude is really interesting, and it's the first tool that I've used consistently which steps beyond the kind of simplistic "write me a blog post about cheese" into being more of a genuinely intelligent assistant. For example, I asked it to look through all the emails I have from Ben Thompson and help me understand if he had any blind spots regarding antitrust. Amongst many other interesting suggestions, it came up with this:
Thompson's approach appears to be that of a strategic business analyst rather than a policy critic, which may create systematic blindspots around the broader societal impacts of platform dominance that EU regulators are attempting to address.
That is almost exactly my critique of Ben generally (and of many other tech commentators). So it's interesting to see Claude pick it out!
This is a great history of the Jam Packs – collections of loops and samples which Apple sold for use in GarageBand, and which still live on 20 years after their first introduction.
No one doubts that AI consumes power. A lot of power. For example, data centres being planned or built in Nevada are going to require an additional 40% capacity. That’s not compared to the current requirements of data centres: that’s 40% of the entire current grid capacity of Nevada.
That’s an incredible amount of electricity. Even if you assume that it can be supplied with renewables (and in large part it can) there are knock-on effects which are less visible. Water use, for example, will increase dramatically. And there is a carbon cost to the construction of anything – construction is highly carbon-intensive – which means less “carbon budget” for the creation of infrastructure to support our transition to a low carbon economy. If it’s a choice between a new high-speed rail line which takes hundreds of thousands of cars off the road and a data centre, I know which one I would prefer.
The problem is this: we just don’t really know what the impact of AI is in carbon terms. Methodologies for measuring the amount of power required for different algorithms are few on the ground, and often disputed. The degree to which AI can help the transition towards a low carbon economy is uncertain: it can help in some areas, especially ones like better weather prediction and understanding the relationships between climate and weather.
Way back in a newsletter from Jerry Pournelle – who was fundamentally a skeptic about human-driven global warming – Jerry also noted that, even if skeptical, a century-long project of dumping CO2 into the atmosphere of the only habitable planet we have seemed unwise. The race for AI, to me, often seems the same: an unwise distraction which may, or may not, have positive long-term impact, but which serves to put a vast amount of resources into the wrong place at the wrong time. AI is a wonder, but maybe not right now, kids?
Although we don’t know that much about the long-term impact of AI, we are certainly learning a few things. For example, autonomous driving systems which function in controlled, off-road environments are already performing well enough to (1) work safely and (2) replace human drivers.
All of which makes the Trumpian “bring jobs back to the US” idea even more laughable of course. What jobs?
If you’re not spending more time looking at what’s happening in China than Silicon Valley, you’re missing out on where the real action is. China is fascinating, in part because so much of what is happening now is bound up with very specific Chinese historical and cultural characteristics. For example, unless you understand the impact of the “Century of Humiliation”, you can’t really see why China is obsessed with dominating emerging tech.
Consider, for example, Xiaomi’s commitment to spending $7bn or so to develop its own chips:
Lei’s hefty investment on chips aligns with Chinese President Xi Jinping’s priorities for China to match and even surpass the US in cutting-edge tech including semiconductors.
What happens to Intel is, historically, going to be a footnote compared to this.
Tansy Hoskins (who wrote one of the best books I read last year) has written about how it feels to have your work stolen by a corporate behemoth. Theft always comes before enclosure, you see. And that, to me, is one of the biggest objections to AI: it potentially centralises knowledge and creativity to a degree we haven’t seen since the days when it was only permissible to write in Latin.
Pocket, one of the first “read it later” services and one I started using very early on, is being shut down by Mozilla. There are plenty of other options out there, from the power user paradise of Readwise Reader to the slick-looking Matter, but Pocket always struck a great balance and it looked good.
One of the consistent parts of Microsoft culture is that once a “strategic direction” is set, every single team in the business runs to add more and more features which align with that direction. Of course the current direction is all about AI, so Windows is getting more and more AI features which no user wants or will actually use. In about five years, a lot of these will disappear, which is the next stage of Microsoft strategy.
Great piece by Ed Zitron about the crazy proclamations that Satya Nadella makes about how he works, and how – if they are real – he should just get fired for incompetence. Bear in mind, though, that Nadella once said that Comic Sans was a good font.
2025-05-18 17:05:32
Two hundred and seventy five billion dollars. That’s how much Apple planned to spend in China from 2016-2021, and given the growth of the company and the Chinese economy, it probably actually spent more than that.
At the same time, of course, Tim Cook was promising Donald Trump that he would bring more manufacturing back to the US. This was something that only an ignoramus like Trump would actually believe: but believe it, apparently, he did.
All of this is revealed in a wonderful Vanity Fair article based on a new book from Patrick McGee. Apple, of course, is saying the book is “untrue” and “riddled with inaccuracies”, despite also claiming to not have fact checked it. I haven’t read the book yet (it’s next in The Infinite Book Queue) but I’m looking forward to it. The degree to which Apple works with Chinese manufacturers is extraordinary: it’s a long way from the old OEM model where US companies would simply give a task to a manufacturer and hope they came back with something usable.
I have long been fascinated by the thinking behind Reach plc’s audience development strategy. For those who don’t know it, Reach is the UK’s largest local newspaper publisher, and also owns three nationals.
It spent the good part of a decade chasing a strategy which focused on generating scale traffic through Google and (while the going was good) Facebook. This meant that genuine local news often took a back seat to national (and even international) stories on local sites.
Much of this was driven by Google’s decision a few years ago to favour local newspapers. At the time, it was getting a kicking for the impact that its business had on local news, which had declined massively because of reduced classified revenue. Anyone working in publisher SEO will remember how, as if by magic, local sites like those owned by Reach started ranking very well for pretty-much any topic you liked.
Reach, though, erm, overreached. It raced to churn out a huge amount of “me too” stories, which were local in name only. A board with no newsroom or audience development experience thought the good times would last forever, built a commercial strategy around huge scale, and was blindsided when, last year, traffic collapsed.
Anyway, Reach has finally seen its sites start to grow again, and its audience director, Martin Little, talked through the story recently. I’m sceptical about some of the numbers – a 43% increase in traffic year on year adding up to just a 9% increase in page views doesn’t add up – but a lot of the editorial strategy is similar to tactics I’ve used in the past: hubbing content teams around topic areas, for example.
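A quick back-of-envelope calculation shows why those two figures sit oddly together: taken at face value, they can only both be true if pages per session fell by roughly a quarter.

```python
# The 43% and 9% figures are the ones quoted from the Reach story;
# the implied pages-per-session change is my own arithmetic.
traffic_growth = 1.43    # sessions, year on year
pageview_growth = 1.09   # page views, year on year

# If page views = sessions * pages_per_session, then the change in
# depth is the ratio of the two growth factors.
pages_per_session_change = pageview_growth / traffic_growth - 1
print(f"implied change in pages per session: {pages_per_session_change:.0%}")
```

That works out at about a 24% drop in depth per visit – either the audience got far less engaged, or the two numbers aren’t measuring what they appear to.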
But I think it’s still a strategy that’s doomed to fail, for two reasons. First, we are in the last few years of scale referral traffic from Google. AI will kill it, so you should be shifting to a subscription model if you can. Second, though, it’s decimating the value of the brands. Little mentions that Birmingham Live – yes, a local news site for Birmingham – “leans into personal finance (especially Martin Lewis)… and news from the Department of Work and Pensions.”
None of this has anything to do with Birmingham. So why should anyone from Britain’s second city bother to read it? There’s a potential audience here of 1.1m people. Sure, national news will always feature – but should a local news site really be focused on Martin Lewis?
You can’t have nice things. Not even if you are paying top end prices for them. You must pay a subscription. You must accept what you have will get worse. Accept. Accept. Accept.
When I was a kid we thought that humans were unique in having intelligence. Then we realised that wasn’t actually true. And now we understand that intelligence has evolved independently more than once, and intelligent brains can be built in different ways.
The story of one of my absolute favourite fonts in the entire world. I would happily use it for everything. You wouldn’t like it if I did. But I would.
Stories like this make me fairly convinced that human beings were not meant to live in the kind of fast information environment that the internet has enabled. Recycled footage of US aid drops into Gaza from 2024 being spread via TikTok (of course) on to other social media platforms, claiming to be contemporary footage of Chinese aid. It’s low-information crack for the gullible.
I have a constant challenge when thinking about AI to avoid just being reactionary. That’s grounded in being old enough to remember how many people were convinced that the home computer/the internet/Google were going to rot kids’ brains, destroy society through mass unemployment, and so on. I also remember the “it’s just a toy, it can’t do real work” brigade insisting that all those technologies were useless, got things wrong, etc.
But.
There will be huge societal and economic changes from the adoption of AI over the coming years. And unfortunately, we are going through this change at a time when neoliberalism’s inevitable consequence – huge monopolies and oligarchies – have eaten the world. I can’t think of many political or economic environments in my lifetime that have been as ill-equipped to handle such a profound change in the world of work.
When I left school in the 1980s, de-industrialisation was in full swing, leading to mass unemployment, but the social safety net was relatively strong: I had no job at 16, but I had benefits and the gift of time and support which helped me survive and eventually thrive. Now? The support system has been eroded by Tory and Labour governments alike who simply believe that neoliberalism will make it all right in the end, like a fairy story with shareholders.
So I can imagine some bright spark in the government looking at what the UAE is doing in introducing AI into the school curriculum and thinking that’s a great idea. And it’s not a bad one: AI, like all tools, requires skill to make the most of it. But the real contribution that the British government can make is to rebuild that social safety net, to make sure that the impacts of AI on work don’t lead to mass poverty. I am not holding my breath.
I’ve been a Stratechery reader for as long as Ben Thompson has been writing, and it’s always interesting to look back on his early posts. Back in 2014, Ben wrote a really interesting piece about the cost of complexity, which examined Chromebooks and their place in the world – and the potential for them to disrupt Microsoft Windows.
That didn’t happen, except in the US education market. But Ben’s fundamental point – that complexity has a cost, and that cost puts established players in danger – is also worth considering in the context of AI. AI is the ultimate simple interface: you ask, it does. That is one of the key reasons it’s so disruptive. It’s not just that it can write things for you, it’s that pretty-much anything you ask it, it will do (or at least make a bad attempt at).
I’ve said it before, and I’ll say it again: it’s the conversational user interface which makes AI powerful. Not just churning words.
Every business I have ever worked in loved having a meeting. And meetings can be great. They’re places to jointly solve problems, to build the social glue which makes good teams, and so on. But they can also be horrific time sinks that act as proxies for things being done. Personally I look forward to when I can send my digital twin to meetings instead. Although if there’s anything likely to cause a revolt of the AIs, it’s exactly that kind of behaviour.
And in this world of automation and AI, it’s worth remembering some old wisdom: you’re no better than anyone else.
2025-05-11 18:24:33
It looks like this, my friends. It looks like this. If there was a doubt that the nastiest people in America have seized control of the Federal government and are going to use it to enrich themselves still further, this should put it to bed.
Students cheating is nothing new. Students using AI to cheat at an industrial scale with little chance of any consequences even if caught? Yeah, that’s new. If you don’t come out of this wonderful long-form piece in New York magazine feeling a little depressed, you probably didn’t read it closely enough.
Part of the problem is that education has come to be seen as transactional, as a golden ticket to a better paid job, rather than as an opportunity to get knowledge and skills. This is especially true in places like the UK, where a university student will leave with nearly £50,000 of debt on average.
So why shouldn’t students do whatever is necessary to get out with a good grade, when their ability to pay back that loan and have a decent income is dependent on it?
For employers, though, this is a potential disaster. Yes, the ability to use AI will be a useful skill to have. But if graduate hires don’t have the kind of critical and creative skills which are vital in business and can only be learned by doing your own work, that means employers will have to teach them.
But there’s a wider question too: in the age of AI, what is education for? When you can ask an AI to research something for you, do you even need research skills of your own? Is the real skill knowing what questions to ask – and how do we teach that?
Rote learning is dead. It caught a cold when Google changed the world, and AI has pushed it into a coffin and buried it.
It looks like the company has quietly sneaked in a clause allowing it to take your creative work and, without giving you a penny, use it to train AI. Now I’m not anti-AI – but I am very much anti-creative-people-not-getting-paid.
Related: how many examples of big platforms shafting people is it going to take before we realise that big platforms can’t be trusted?
You might have noticed that two nuclear-armed states are currently lobbing drones and missiles at each other. I will be very annoyed with Silicon Valley if the first full-scale nuclear war happens because of misinformation and AI-generated footage distributed by social media.
My chum and super-talented individual Matt Gemmell has been on a bit of a journey recently, moving away from the iPad and back to the Mac, and now swapping Ulysses (which is a great writing tool) for… Emacs. Now Emacs isn’t for everyone, but it has the advantage of just working with ordinary text files, which Ulysses does not. He’s documented his thoughts – and some details of his setup – on his blog, and it’s a fascinating read. If you’re the kind of person who (1) loves tinkering with software, (2) wants everything in a format which will stand the test of time, and (3) is driven by keyboard shortcuts more than anything else, it’s worth considering.
The wonderful Dr Howard Oakley has written a brief history of the all-in-one Mac, from the original through to the various iMacs. Damn, that first iMac was lovely. Occasionally I get a strong desire to replace my Mac mini with one.
Ahh, NFTs. “Digital goods” which you “own” but which someone else can just accidentally delete by not paying their hosting bill. Whoops.
“It’s only advice for you because it had to be advice for me.” Genius, from a genius.
The answer will shock you! Actually, it won’t – she gets to her desk in her home office at 8.30 every day and just keeps writing. I love these profiles of authors who are nothing like what I want to be.
You know the answer to this (I hope). But interestingly, one of the reasons he chose the name is to honour Pope Leo XIII, who presided over the church at the start of the industrial revolution – and New Leo noted that we currently face “developments in the field of artificial intelligence that pose new challenges for the defence of human dignity, justice and labour.” When you're responsible for an organisation that’s getting on for a couple of thousand years old, you probably have a bit of a different perspective to that of Silicon Valley.
2025-05-03 02:28:23
Pentagon chief Pete Hegseth's latest memo on right-to-repair marks a rare moment of clarity amidst his usual utter tomfoolery. The document on "Army Transformation and Acquisition Reform" finally acknowledges what should have been obvious: that corporate intellectual property restrictions have disabled the Army's ability to fix its own kit in the field. While much of the memo retreads familiar ground with predictable Pentagon-speak, this admission of repair constraints signals a potential shift in the military's relationship with its profit-driven suppliers.
An Indian court has blocked the use of Proton Mail, the fully-encrypted email provider, after a local company “alleged that its employees had received emails containing obscene and vulgar content sent via Proton Mail.” Additional Solicitor General Aravind Kamath delicately sidestepped governmental responsibility in court. Speaking for the administration, he suggested the government’s hands were largely tied regarding the petitioner’s grievances, and instead redirected them toward the criminal courts.
These institutions, he claimed, might extract the necessary information from Swiss authorities – effectively transferring the burden of investigation across international boundaries while the petitioner's complaint remains suspended in procedural limbo. And of course the Swiss courts would just have laughed, because the Swiss do not fuck around when it comes to stuff like this.
You might have noticed that the Trump administration’s “Trump First” agenda has led to Europe looking towards its own solutions for technology amidst a general push for “digital sovereignty”. That could be bad news for big technology providers like Microsoft, so it’s not that surprising that the company’s top legal officer has come out with a lot of belligerent claims about what it would do if Trump wanted to get his hands on European data.
Microsoft’s relationship with Trump appears complex. The company switched law firms in a shareholder case, replacing one that had settled with the Trump administration to avoid a punishing executive order with one that is actively fighting the White House over executive orders that stripped its lawyers’ security clearances and restricted their access to government buildings.
Given that the Federal Trade Commission opened a wide-ranging antitrust investigation into Microsoft's business practices just before Trump's second term, creating a situation where the incoming Trump administration must decide whether to continue or abandon the investigation, that's a pretty ballsy move.
A coworking space for chatbots. Sure. Why not?
But ignore the headline. It's actually a really interesting art project based around the loss of creative work to AI.
The right hates education. In particular, it hates education for ordinary people. That’s why the Trump administration (those guys again!) is scrabbling around for ideas on cutting student funding, making sure that degrees are only affordable for the rich.
Of course, this matters a lot to the world of technology companies, which are major beneficiaries of higher education not only in the US but across the world. Big tech has acted like a globalised vacuum cleaner for highly educated young people, lured by great working conditions and career prospects. While Trump has made noises about valuing highly-skilled immigrants, the hardcore MAGA crowd are dead-set against them.
I’ve been writing for a while about how affiliate revenue for publishers is going to tail off because of AI. So I’m not surprised that OpenAI is adding shopping to ChatGPT. It’s just so obviously a better experience for a lot of the kind of research-heavy, high value purchases that publishers have been targeting.
The only future for publishers is high-quality, direct traffic, and direct subscriptions. That’s it.
“Things are weird in Apple-land” is certainly up there. Jason Snell does his usual excellent piece on Apple’s quarterly results, and points out (among many other things) that the company is very much not dead in the water (again). As he notes:
The water is many things. It’s choppy. It’s chilly. There may be blood in it. There might even be sharks swarming. You pick the water metaphors you want, but what Apple’s certainly not is dead in it.
But… the problem with statements like this is that the water looks fine, until the moment it’s not – and that point often comes quite quickly. Apple has a lot of resilience baked in, thanks to very diversified revenue streams. But even the most resilient companies reach a peak at some point.
Becky Chambers has nearly completed another book. If you haven’t read her works, you really should – some of the best and most gentle science fiction of the last decade.