
The ascent of the AI therapist

2025-12-30 19:00:00

We’re in the midst of a global mental-health crisis. More than a billion people worldwide suffer from a mental-health condition, according to the World Health Organization. The prevalence of anxiety and depression is growing in many demographics, particularly young people, and suicide is claiming hundreds of thousands of lives globally each year.

Given the clear demand for accessible and affordable mental-health services, it’s no wonder that people have looked to artificial intelligence for possible relief. Millions are already actively seeking therapy from popular chatbots like OpenAI’s ChatGPT and Anthropic’s Claude, or from specialized psychology apps like Wysa and Woebot. On a broader scale, researchers are exploring AI’s potential to monitor and collect behavioral and biometric observations using wearables and smart devices, analyze vast volumes of clinical data for new insights, and assist human mental-health professionals to help prevent burnout. 

But so far this largely uncontrolled experiment has produced mixed results. Many people have found solace in chatbots based on large language models (LLMs), and some experts see promise in them as therapists, but other users have been sent into delusional spirals by AI’s hallucinatory whims and breathless sycophancy. Most tragically, multiple families have alleged that chatbots contributed to the suicides of their loved ones, sparking lawsuits against companies responsible for these tools. In October, OpenAI CEO Sam Altman revealed in a blog post that 0.15% of ChatGPT users “have conversations that include explicit indicators of potential suicidal planning or intent.” Measured against ChatGPT’s reported 800 million weekly users, that’s roughly a million people sharing suicidal ideations with just one of these software systems every week.

The real-world consequences of AI therapy came to a head in unexpected ways in 2025 as we waded through a critical mass of stories about human-chatbot relationships, the flimsiness of guardrails on many LLMs, and the risks of sharing profoundly personal information with products made by corporations that have economic incentives to harvest and monetize such sensitive data. 

Several authors anticipated this inflection point. Their timely books are a reminder that while the present feels like a blur of breakthroughs, scandals, and confusion, this disorienting time is rooted in deeper histories of care, technology, and trust. 

LLMs have often been described as “black boxes” because nobody knows exactly how they produce their results. The inner workings that guide their outputs are opaque because their algorithms are so complex and their training data is so vast. In mental-health circles, people often describe the human brain as a “black box,” for analogous reasons. Psychology, psychiatry, and related fields must grapple with the impossibility of seeing clearly inside someone else’s head, let alone pinpointing the exact causes of their distress. 

These two types of black boxes are now interacting with each other, creating unpredictable feedback loops that may further impede clarity about the origins of people’s mental-health struggles and the solutions that may be possible. Anxiety about these developments has much to do with the explosive recent advances in AI, but it also revives decades-old warnings from pioneers such as the MIT computer scientist Joseph Weizenbaum, who argued against computerized therapy as early as the 1960s.


Dr. Bot: Why Doctors Can Fail Us—and How AI Could Save Lives
Charlotte Blease
YALE UNIVERSITY PRESS, 2025

Charlotte Blease, a philosopher of medicine, makes the optimist’s case in Dr. Bot: Why Doctors Can Fail Us—and How AI Could Save Lives. Her book broadly explores the possible positive impacts of AI in a range of medical fields. While she remains clear-eyed about the risks, warning that readers who are expecting “a gushing love letter to technology” will be disappointed, she suggests that these models can help relieve patient suffering and medical burnout alike.

“Health systems are crumbling under patient pressure,” Blease writes. “Greater burdens on fewer doctors create the perfect petri dish for errors,” and “with palpable shortages of doctors and increasing waiting times for patients, many of us are profoundly frustrated.”

Blease believes that AI can not only ease medical professionals’ massive workloads but also relieve the tensions that have always existed between some patients and their caregivers. For example, people often don’t seek needed care because they are intimidated or fear judgment from medical professionals; this is especially true if they have mental-health challenges. AI could allow more people to share their concerns, she argues. 

But she’s aware that these putative upsides need to be weighed against major drawbacks. For instance, AI therapists can provide inconsistent and even dangerous responses to human users, according to a 2025 study, and they also raise privacy concerns, given that AI companies are currently not bound by the same confidentiality and HIPAA standards as licensed therapists. 

While Blease is an expert in this field, her motivation for writing the book is also personal: She has two siblings with an incurable form of muscular dystrophy, one of whom waited decades for a diagnosis. During the writing of her book, she also lost her partner to cancer and her father to dementia within a devastating six-month period. “I witnessed first-hand the sheer brilliance of doctors and the kindness of health professionals,” she writes. “But I also observed how things can go wrong with care.”


The Silicon Shrink: How Artificial Intelligence Made the World an Asylum
Daniel Oberhaus
MIT PRESS, 2025

A similar tension animates Daniel Oberhaus’s engrossing book The Silicon Shrink: How Artificial Intelligence Made the World an Asylum. Oberhaus starts from a point of tragedy: the loss of his younger sister to suicide. As Oberhaus carried out the “distinctly twenty-first-century mourning process” of sifting through her digital remains, he wondered if technology could have eased the burden of the psychiatric problems that had plagued her since childhood.

“It seemed possible that all of this personal data might have held important clues that her mental health providers could have used to provide more effective treatment,” he writes. “What if algorithms running on my sister’s smartphone or laptop had used that data to understand when she was in distress? Could it have led to a timely intervention that saved her life? Would she have wanted that even if it did?”

This concept of digital phenotyping—in which a person’s digital behavior could be mined for clues about distress or illness—seems elegant in theory. But it may also become problematic if integrated into the field of psychiatric artificial intelligence (PAI), which extends well beyond chatbot therapy.

Oberhaus emphasizes that digital clues could actually exacerbate the existing challenges of modern psychiatry, a discipline that remains fundamentally uncertain about the underlying causes of mental illnesses and disorders. The advent of PAI, he says, is “the logical equivalent of grafting physics onto astrology.” In other words, the data generated by digital phenotyping is as precise as physical measurements of planetary positions, but it is then integrated into a broader framework—in this case, psychiatry—that, like astrology, is based on unreliable assumptions.  

Oberhaus, who uses the phrase “swipe psychiatry” to describe the outsourcing of clinical decisions based on behavioral data to LLMs, thinks that this approach cannot escape the fundamental issues facing psychiatry. In fact, it could worsen the problem by causing the skills and judgment of human therapists to atrophy as they grow more dependent on AI systems. 

He also uses the asylums of the past—in which institutionalized patients lost their right to freedom, privacy, dignity, and agency over their lives—as a touchstone for a more insidious digital captivity that may spring from PAI. LLM users are already sacrificing privacy by telling chatbots sensitive personal information that companies then mine and monetize, contributing to a new surveillance economy. Freedom and dignity are at stake when complex inner lives are transformed into data streams tailored for AI analysis. 

AI therapists could flatten humanity into patterns of prediction, and so sacrifice the intimate, individualized care that is expected of traditional human therapists. “The logic of PAI leads to a future where we may all find ourselves patients in an algorithmic asylum administered by digital wardens,” Oberhaus writes. “In the algorithmic asylum there is no need for bars on the window or white padded rooms because there is no possibility of escape. The asylum is already everywhere—in your homes and offices, schools and hospitals, courtrooms and barracks. Wherever there’s an internet connection, the asylum is waiting.”


Chatbot Therapy: A Critical Analysis of AI Mental Health Treatment
Eoin Fullam
ROUTLEDGE, 2025

Eoin Fullam, a researcher who studies the intersection of technology and mental health, echoes some of the same concerns in Chatbot Therapy: A Critical Analysis of AI Mental Health Treatment. A heady academic primer, the book analyzes the assumptions underlying the automated treatments offered by AI chatbots and the way capitalist incentives could corrupt these kinds of tools.  

Fullam observes that the capitalist mentality behind new technologies “often leads to questionable, illegitimate, and illegal business practices in which the customers’ interests are secondary to strategies of market dominance.”

That doesn’t mean that therapy-bot makers “will inevitably conduct nefarious activities contrary to the users’ interests in the pursuit of market dominance,” Fullam writes. 

But he notes that the success of AI therapy depends on the inseparable impulses to make money and to heal people. In this logic, exploitation and therapy feed each other: Every digital therapy session generates data, and that data fuels the system that profits as unpaid users seek care. The more effective the therapy seems, the more the cycle entrenches itself, making it harder to distinguish between care and commodification. “The more the users benefit from the app in terms of its therapeutic or any other mental health intervention,” he writes, “the more they undergo exploitation.” 


This sense of an economic and psychological ouroboros—the snake that eats its own tail—serves as a central metaphor in Sike, the debut novel from Fred Lunzer, an author with a research background in AI. 

Described as a “story of boy meets girl meets AI psychotherapist,” Sike follows Adrian, a young Londoner who makes a living ghostwriting rap lyrics, in his romance with Maquie, a business professional with a knack for spotting lucrative technologies in the beta phase. 

Sike
Fred Lunzer
CELADON BOOKS, 2025

The title refers to a splashy commercial AI therapist called Sike, uploaded into smart glasses, that Adrian uses to interrogate his myriad anxieties. “When I signed up to Sike, we set up my dashboard, a wide black panel like an airplane’s cockpit that showed my daily ‘vitals,’” Adrian narrates. “Sike can analyze the way you walk, the way you make eye contact, the stuff you talk about, the stuff you wear, how often you piss, shit, laugh, cry, kiss, lie, whine, and cough.”

In other words, Sike is the ultimate digital phenotyper, constantly and exhaustively analyzing everything in a user’s daily experiences. In a twist, Lunzer chooses to make Sike a luxury product, available only to subscribers who can foot the price tag of £2,000 per month. 

Flush with cash from his contributions to a hit song, Adrian comes to rely on Sike as a trusted mediator between his inner and outer worlds. The novel explores the impacts of the app on the wellness of the well-off, following rich people who voluntarily commit themselves to a boutique version of the digital asylum described by Oberhaus.

The only real sense of danger in Sike involves a Japanese torture egg (don’t ask). The novel strangely sidesteps the broader dystopian ripples of its subject matter in favor of drunken conversations at fancy restaurants and elite dinner parties. 

Sike’s creator is simply “a great guy” in Adrian’s estimation, despite his techno-messianic vision of training the app to soothe the ills of entire nations. It always seems as if the other shoe is about to drop, but in the end, it never does, leaving the reader with a sense of non-resolution.

While Sike is set in the present day, something about the sudden ascent of the AI therapist—in real life as well as in fiction—seems startlingly futuristic, as if it should be unfolding in some later time when the streets scrub themselves and we travel the world through pneumatic tubes. But this convergence of mental health and artificial intelligence has been in the making for more than half a century. The beloved astronomer Carl Sagan, for example, once imagined a “network of computer psychotherapeutic terminals, something like arrays of large telephone booths” that could address the growing demand for mental-health services.

Oberhaus notes that one of the first incarnations of a trainable neural network, known as the Perceptron, was devised not by a mathematician but by a psychologist named Frank Rosenblatt, at the Cornell Aeronautical Laboratory in 1958. The potential utility of AI in mental health was widely recognized by the 1960s, inspiring early computerized psychotherapists such as the DOCTOR script that ran on the ELIZA chatbot developed by Joseph Weizenbaum, who shows up in all three of the nonfiction books in this article.
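For a sense of how mechanically simple those early systems were, here is a minimal ELIZA-style exchange sketched in Python. The keyword rules and pronoun swaps below are illustrative stand-ins, not Weizenbaum’s original DOCTOR script, but the principle is the same: pattern matching and reflection, with no understanding underneath.

```python
import re

# Swap first-person words for second-person ones so replies point back at the user.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs; the catch-all keeps the "session" going.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please go on."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance):
    # The first matching pattern wins; groups are reflected into the template.
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel anxious about my future"))
# -> Why do you feel anxious about your future?
```

That a few dozen lines of string manipulation could pass, for some users, as a sympathetic listener is precisely what alarmed Weizenbaum.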

Weizenbaum, who died in 2008, was profoundly concerned about the possibility of computerized therapy. “Computers can make psychiatric judgments,” he wrote in his 1976 book Computer Power and Human Reason. “They can flip coins in much more sophisticated ways than can the most patient human being. The point is that they ought not to be given such tasks. They may even be able to arrive at ‘correct’ decisions in some cases—but always and necessarily on bases no human being should be willing to accept.”

It’s a caution worth keeping in mind. As AI therapists arrive at scale, we’re seeing them play out a familiar dynamic: Tools designed with superficially good intentions are enmeshed with systems that can exploit, surveil, and reshape human behavior. In a frenzied attempt to unlock new opportunities for patients in dire need of mental-health support, we may be locking other doors behind them.

Becky Ferreira is a science reporter based in upstate New York and author of First Contact: The Story of Our Obsession with Aliens.

Bangladesh’s garment-making industry is getting greener

2025-12-29 19:00:00

Pollution from textile production—dyes, chemicals, and heavy metals like lead and cadmium—is common in the waters of the Buriganga River as it runs through Dhaka, Bangladesh. It’s among many harms posed by a garment sector that was once synonymous with tragedy: In 2013, the eight-story Rana Plaza factory building collapsed, killing 1,134 people and injuring some 2,500 others. 

Wastewater from Bangladesh’s garment industry flows into the Buriganga River.
ZAKIR HOSSAIN CHOWDHURY

But things are starting to change. In recent years the country has quietly become an unlikely leader in “frugal” factories that use a combination of resource-efficient technologies to cut waste, conserve water, and build resilience against climate impacts and global supply disruptions. Bangladesh now boasts 268 LEED-certified garment factories—more than any other country. Dye plants are using safer chemicals, tanneries are adopting cleaner tanning methods and treating wastewater, workshops are switching to more efficient LED lighting, and solar panels glint from rooftops. The hundreds of factories along the Buriganga’s banks and elsewhere in Bangladesh are starting to stitch together a new story, woven from greener threads.

These energy-efficient, automated template sewing machines at the Fakir Eco Knitwears factory near Bangladesh’s capital help workers reduce waste.
ZAKIR HOSSAIN CHOWDHURY

In Fakir Eco Knitwears’ LEED Gold–certified factory in Narayanganj, a city near Dhaka, skylights reduce energy consumption from electric lighting by 40%, and AI-driven cutters allow workers to recycle 95% of fabric scraps into new yarns. “We save energy by using daylight, solar power, and rainwater instead of heavy AC and boilers,” says Md. Anisuzzaman, an engineer at the company. “It shows how local resources can make production greener and more sustainable.” 

The shift to green factories in Bangladesh is financed through a combination of factory investments, loans from Bangladesh Bank’s Green Transformation Fund, and pressure from international buyers who reward compliance with ongoing orders. One prominent program is the Partnership for Cleaner Textile (PaCT), an initiative run by the World Bank Group’s International Finance Corporation. Launched in 2013, PaCT has worked with more than 450 factories on cleaner production methods. By its count, the effort now saves 35 billion liters of fresh water annually, enough to meet the needs of 1.9 million people.

Solar panels on top of the factory help reduce its energy footprint.
ZAKIR HOSSAIN CHOWDHURY

An exhaust gas absorption chiller absorbs heat and helps maintain the factory floor’s temperature at around 28 °C (82 °F).
ZAKIR HOSSAIN CHOWDHURY

Water reclaimed at the factory’s sewage treatment plant is used in the facility’s restrooms.
ZAKIR HOSSAIN CHOWDHURY

It’s a good start, but Bangladesh’s $40 billion garment industry still has a long way to go. The shift to environmentalism at the factory level hasn’t translated to improved outcomes for the sector’s 4.4 million workers. 

Wage theft and delayed payments are widespread. The minimum wage, some 12,500 taka per month (about $113), is far below the $200 proposed by unions—which has meant frequent strikes and protests over pay, overtime, and job security. “Since Rana Plaza, building safety and factory conditions have improved, but the mindset remains unchanged,” says A.K.M. Ashraf Uddin, executive director of the Bangladesh Labour Foundation, a nonprofit labor rights group. “Profit still comes first, and workers’ freedom of speech is yet to be realized.”

The smaller factories that dominate the garment sector may struggle to invest in green upgrades.
ZAKIR HOSSAIN CHOWDHURY

In the worst case, greener industry practices could actually exacerbate inequality. Smaller factories dominate the sector, and they struggle to afford upgrades. But without those upgrades, businesses could find themselves excluded from certain markets. One of those is the European Union, which plans to require companies to address human rights and environmental problems in supply chains starting in 2027. A cleaner Buriganga River mends just a small corner of a vast tapestry of need. 

Zakir Hossain Chowdhury is a visual journalist based in Bangladesh.

MIT Technology Review’s most popular stories of 2025

2025-12-26 20:00:00

It’s been a busy and productive year here at MIT Technology Review. We published magazine issues on power, creativity, innovation, bodies, relationships, and security. We hosted 14 exclusive virtual conversations with our editors and outside experts in our subscriber-only series, Roundtables, and held two events on MIT’s campus. And we published hundreds of articles online, following new developments in computing, climate tech, robotics, and more. 

As the year winds down, we wanted to give you a chance to revisit a bit of this work with us. Whether we were covering the red-hot rise of artificial intelligence or the future of biotech, these are some of the stories that resonated the most with our readers. 

We did the math on AI’s energy footprint. Here’s the story you haven’t heard.

Understanding AI’s energy use was a huge global conversation in 2025 as hundreds of millions of people began using generative AI tools on a regular basis. Senior reporters James O’Donnell and Casey Crownhart dug into the numbers and published an unprecedented look at AI’s resource demand, down to the level of a single query, to help us know how much energy and water AI may require moving forward. 

We’re learning more about what vitamin D does to our bodies

Vitamin D deficiency is widespread, particularly in the winter when there’s less sunlight to drive its production in our bodies. The “sunshine vitamin” is important for bone health, but as senior reporter Jessica Hamzelou reported, recent research is also uncovering surprising new insights into other ways it might influence our bodies, including our immune systems and heart health.

What is AI?

Senior editor Will Douglas Heaven’s expansive look at how to define AI was published in 2024, but it still managed to connect with many readers this year. He lays out why no one can agree on what AI is—and explains why that ambiguity matters, and how it can inform our own critical thinking about this technology.

Ethically sourced “spare” human bodies could revolutionize medicine

In this thought-provoking op-ed, a team of experts at Stanford University argue that creating living human bodies that can’t think, don’t have any awareness, and can’t feel pain could shake up medical research and drug development by providing essential biological materials for testing and transplantation. Recent advances in biotechnology now provide a potential pathway to such “bodyoids,” though plenty of technical challenges and ethical hurdles remain. 

It’s surprisingly easy to stumble into a relationship with an AI chatbot

Chatbots were everywhere this year, and reporter Rhiannon Williams chronicled how quickly people can develop bonds with one. That’s all right for some people, she notes, but dangerous for others. Some folks even describe unintentionally forming romantic relationships with chatbots. This is a trend we’ll definitely be keeping an eye on in 2026. 

Is this the electric grid of the future?

The electric grid is bracing for disruption from more frequent storms and fires, as well as an uncertain policy and regulatory landscape. And in many ways, the publicly owned utility Lincoln Electric System in Nebraska is an ideal lens through which to examine this shift as it works through the challenges of delivering service that’s reliable, affordable, and sustainable.

Exclusive: A record-breaking baby has been born from an embryo that’s over 30 years old

This year saw the birth of the world’s “oldest baby”: Thaddeus Daniel Pierce, who arrived on July 26. The embryo he developed from was created in 1994 during the early days of IVF and had been frozen and sitting in storage ever since. The new baby’s parents were toddlers at the time, and the embryo was donated to them decades later via a Christian “embryo adoption” agency.  

How these two brothers became go-to experts on America’s “mystery drone” invasion

Twin brothers John and Gerald Tedesco teamed up to investigate a concerning new threat—unidentified drones. In 2024 alone, some 350 drones entered airspace over a hundred different US military installations, and many cases went unsolved, according to a top military official. This story takes readers inside the equipment-filled RV the Tedescos created to study mysterious aerial phenomena, and how they made a name for themselves among government officials. 

10 Breakthrough Technologies of 2025 

For over 20 years, our newsroom has published this annual look at advances that will matter in the long run. This year’s list featured generative AI search, cleaner jet fuel, long-acting HIV prevention meds, and other emerging technologies that our journalists think are worth watching. We’ll publish the 2026 edition of the list on January 12, so stay tuned. (In the meantime, here’s what didn’t make the cut.)

The paints, coatings, and chemicals making the world a cooler place

2025-12-26 19:00:00

It’s getting harder to beat the heat. During the summer of 2025, heat waves knocked out power grids in North America, Europe, and the Middle East. Global warming means more people need air-conditioning, which requires more power and strains grids. But a millennia-old idea (plus 21st-century tech) might offer an answer: radiative cooling. Paints, coatings, and textiles can scatter sunlight and dissipate heat—no additional energy required.

“Radiative cooling is universal—it exists everywhere in our daily life,” says Qiaoqiang Gan, a professor of materials science and applied physics at King Abdullah University of Science and Technology in Saudi Arabia. Pretty much any object will absorb heat from the sun during the day and radiate some of it back at night. It’s why cars parked outside overnight are often covered with condensation, Gan says—their metal roofs dissipate heat into the sky, cooling the surfaces below the ambient air temperature. That’s how you get dew.

Humans have harnessed this basic natural process for thousands of years. Desert peoples in Iran, North Africa, and India manufactured ice by leaving pools of water exposed to clear desert skies overnight, when radiative cooling happens naturally; other cultures constructed “cool roofs” capped with reflective materials that scattered sunlight and lowered interior temperatures. “People have taken advantage of this effect, either knowingly or unknowingly, for a very long time,” says Aaswath Raman, a materials scientist at UCLA and cofounder of the radiative-cooling startup SkyCool Systems.

Modern approaches, as demonstrated everywhere from California supermarket rooftops to Japan’s Expo 2025 pavilion, go even further. Normally, if the sun is up and pumping in heat, surfaces can’t get cooler than the ambient temperature. But back in 2014, Raman and his colleagues achieved radiative cooling in the daytime. They customized photonic films to absorb and then radiate heat at infrared wavelengths between eight and 13 micrometers—a range of electromagnetic wavelengths called an “atmospheric window,” because that radiation escapes to space rather than getting absorbed. Those films could dissipate heat even under full sun, cooling the inside of a building to 9 °F below ambient temperatures, with no AC or energy source required.
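To see why that 8-to-13-micrometer band matters, a quick back-of-the-envelope calculation helps. The Python sketch below, which assumes an idealized blackbody (real engineered films only approximate one), estimates how much of a room-temperature surface’s thermal emission falls inside the atmospheric window:

```python
import numpy as np

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength, temp):
    """Blackbody spectral radiance at wavelength (m) and temperature (K)."""
    return (2 * H * C**2 / wavelength**5) / np.expm1(H * C / (wavelength * KB * temp))

T = 300.0                                 # roughly ambient temperature, K
wl = np.linspace(1e-6, 100e-6, 100_000)   # 1-100 micrometer grid
dwl = wl[1] - wl[0]

radiance = planck(wl, T)
total = radiance.sum() * dwl                                  # emission over the whole grid
window = radiance[(wl >= 8e-6) & (wl <= 13e-6)].sum() * dwl   # emission in the window

print(f"Fraction emitted in the 8-13 um window at {T:.0f} K: {window / total:.0%}")
```

The answer comes out to roughly a third: a surface near 300 K radiates a large share of its heat at exactly the wavelengths the atmosphere lets through, which is what makes passive daytime cooling possible.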

Those films were proof of concept; today, Raman says, the industry has mostly shifted away from advanced photonics that use the atmospheric-window effect to simpler sunlight-scattering materials. Ceramic cool roofs, nanostructure coatings, and reflective polymers all offer the possibility of diverting more sunlight across all wavelengths, and they’re more durable and scalable.

Now the race is on. Startups such as SkyCool, Planck Energies, Spacecool, and i2Cool are competing to commercially manufacture and sell coatings that reflect at least 94% of sunlight in most climates, and above 97% in humid tropical ones. Pilot projects have already provided significant cooling to residential buildings, reducing AC energy needs by 15% to 20% in some cases. 

This idea could go way beyond reflective rooftops and roads. Researchers are developing reflective textiles that can be worn by people most at risk of heat exposure. “This is personal thermal management,” says Gan. “We can realize passive cooling in T-shirts, sportswear, and garments.” 

A thermal image captured during a SkyCool installation shows treated areas (white, yellow) that are roughly 35 °C cooler than the surrounding rooftop.
COURTESY OF SKYCOOL SYSTEMS

Of course, these technologies and materials have limits. Like solar power grids, they’re vulnerable to weather. Clouds prevent reflected sunlight from bouncing into space. Dust and air pollution dim materials’ bright surfaces. Lots of coatings lose their reflectivity after a few years. And the cheapest and toughest materials used in radiative cooling tend to rely on Teflon and other fluoropolymers, “forever chemicals” that don’t biodegrade, posing an environmental risk. “They are the best class of products that tend to survive outdoors,” says Raman. “So for long-term scale-up, can you do it without materials like those fluoropolymers and still maintain the durability and hit this low cost point?”

As with any other solution to the problems of climate change, one size won’t fit all. “We cannot be overoptimistic and say that radiative cooling can address all our future needs,” Gan says. “We still need more efficient active air-conditioning.” A shiny roof isn’t a panacea, but it’s still pretty cool. 

Becky Ferreira is a science reporter based in upstate New York and author of First Contact: The Story of Our Obsession with Aliens.

AI Wrapped: The 14 AI terms you couldn’t avoid in 2025

2025-12-25 18:00:00

If the past 12 months have taught us anything, it’s that the AI hype train is showing no signs of slowing. It’s hard to believe that at the beginning of the year, DeepSeek had yet to turn the entire industry on its head, Meta was better known for trying (and failing) to make the metaverse cool than for its relentless quest to dominate superintelligence, and vibe coding wasn’t a thing.

If that’s left you feeling a little confused, fear not. As we near the end of 2025, our writers have taken a look back over the AI terms that dominated the year, for better or worse.

And take a moment to brace yourself for what promises to be another bonkers year.

—Rhiannon Williams

1. Superintelligence


As long as people have been hyping AI, they have been coming up with names for a future, ultra-powerful form of the technology that could bring about utopian or dystopian consequences for humanity. “Superintelligence” is the latest hot term. Meta announced in July that it would form an AI team to pursue superintelligence, and it reportedly offered nine-figure compensation packages to lure AI experts away from the company’s competitors.

In December, Microsoft’s head of AI followed suit, saying the company would be spending big sums, perhaps hundreds of billions, on the pursuit of superintelligence. If you think superintelligence is as vaguely defined as artificial general intelligence, or AGI, you’d be right! While it’s conceivable that these sorts of technologies will be feasible in humanity’s long run, the question is really when, and whether today’s AI is good enough to be treated as a stepping stone toward something like superintelligence. Not that that will stop the hype kings. —James O’Donnell

2. Vibe coding

Thirty years ago, Steve Jobs said everyone in America should learn how to program a computer. Today, people with zero knowledge of how to code can knock up an app, game, or website in no time at all thanks to vibe coding—a catch-all phrase coined by OpenAI cofounder Andrej Karpathy. To vibe-code, you simply prompt a generative AI coding assistant to create the digital object of your desire and accept pretty much everything it spits out. Will the result work? Possibly not. Will it be secure? Almost definitely not, but the technique’s biggest champions aren’t letting those minor details stand in their way. Also—it sounds fun! —Rhiannon Williams

3. Chatbot psychosis

One of the biggest AI stories over the past year has been how prolonged interactions with chatbots can cause vulnerable people to experience delusions and, in some extreme cases, can either cause or worsen psychosis. Although “chatbot psychosis” is not a recognized medical term, researchers are paying close attention to the growing anecdotal evidence from users who say it’s happened to them or someone they know. Sadly, the increasing number of lawsuits filed against AI companies by the families of people who died following their conversations with chatbots demonstrates the technology’s potentially deadly consequences. —Rhiannon Williams

4. Reasoning

Few things kept the AI hype train going this year more than so-called reasoning models, LLMs that can break down a problem into multiple steps and work through them one by one. OpenAI released its first reasoning models, o1 and o3, a year ago.

A month later, the Chinese firm DeepSeek took everyone by surprise with a very fast follow, putting out R1, the first open-source reasoning model. In no time, reasoning models became the industry standard: All major mass-market chatbots now come in flavors backed by this tech. Reasoning models have pushed the envelope of what LLMs can do, matching top human performances in prestigious math and coding competitions. On the flip side, all the buzz about LLMs that could “reason” reignited old debates about how smart LLMs really are and how they really work. Like “artificial intelligence” itself, “reasoning” is technical jargon dressed up with marketing sparkle. Choo choo! —Will Douglas Heaven

5. World models 

For all their uncanny facility with language, LLMs have very little common sense. Put simply, they don’t have any grounding in how the world works. Book learners in the most literal sense, LLMs can wax lyrical about everything under the sun and then fall flat with a howler about how many elephants you could fit into an Olympic swimming pool (exactly one, according to one of Google DeepMind’s LLMs).

World models—a broad church encompassing various technologies—aim to give AI some basic common sense about how stuff in the world actually fits together. In their most vivid form, world models like Google DeepMind’s Genie 3 and Marble, the much-anticipated new tech from Fei-Fei Li’s startup World Labs, can generate detailed and realistic virtual worlds for robots to train in and more. Yann LeCun, Meta’s former chief scientist, is also working on world models. He has been trying to give AI a sense of how the world works for years, by training models to predict what happens next in videos. This year he quit Meta to focus on this approach in a new startup called Advanced Machine Intelligence Labs. If all goes well, world models could be the next big thing. —Will Douglas Heaven

6. Hyperscalers

Have you heard about all the people saying no thanks, we actually don’t want a giant data center plopped in our backyard? The data centers in question—which tech companies want to build everywhere, including space—are typically referred to as hyperscalers: massive buildings purpose-built for AI operations and used by the likes of OpenAI and Google to build bigger and more powerful AI models. Inside such buildings, the world’s best chips hum away training and fine-tuning models, and they’re built to be modular and grow according to needs.

It’s been a big year for hyperscalers. OpenAI announced, alongside President Donald Trump, its Stargate project, a $500 billion joint venture to pepper the country with the largest data centers ever. But it leaves almost everyone else asking: What exactly do we get out of it? Consumers worry the new data centers will raise their power bills. Such buildings generally struggle to run on renewable energy. And they don’t tend to create all that many jobs. But hey, maybe these massive, windowless buildings could at least give a moody, sci-fi vibe to your community. —James O’Donnell

7. Bubble

The lofty promises of AI are levitating the economy. AI companies are raising eye-popping sums of money and watching their valuations soar into the stratosphere. They’re pouring hundreds of billions of dollars into chips and data centers, financed increasingly by debt and eyebrow-raising circular deals. Meanwhile, the companies leading the gold rush, like OpenAI and Anthropic, might not turn a profit for years, if ever. Investors are betting big that AI will usher in a new era of riches, yet no one knows how transformative the technology will actually be.

Most organizations using AI aren’t yet seeing the payoff, and AI work slop is everywhere. There’s scientific uncertainty about whether scaling LLMs will deliver superintelligence or whether new breakthroughs need to pave the way. But unlike their predecessors in the dot-com bubble, AI companies are showing strong revenue growth, and some are even deep-pocketed tech titans like Microsoft, Google, and Meta. Will the manic dream ever burst? —Michelle Kim

8. Agentic

This year, AI agents were everywhere. Every new feature announcement, model drop, or security report throughout 2025 was peppered with mentions of them, even though plenty of AI companies and experts disagree on exactly what counts as being truly “agentic,” a vague term if ever there was one. No matter that it’s virtually impossible to guarantee that an AI acting on your behalf out in the wide web will always do exactly what it’s supposed to do—it seems as though agentic AI is here to stay for the foreseeable. Want to sell something? Call it agentic! —Rhiannon Williams

9. Distillation

Early this year, DeepSeek unveiled its new model DeepSeek R1, an open-source reasoning model that matches top Western models but costs a fraction of the price. Its launch freaked Silicon Valley out, as many suddenly realized for the first time that huge scale and resources were not necessarily the key to high-level AI models. Nvidia stock plunged by 17% the day after R1 was released.

The key to R1’s success was distillation, a technique that makes AI models more efficient. It works by getting a bigger model to tutor a smaller model: You run the teacher model on a lot of examples and record the answers, and reward the student model as it copies those responses as closely as possible, so that it gains a compressed version of the teacher’s knowledge.  —Caiwei Chen
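For the technically curious, the core of the classic recipe (in the style of Hinton and colleagues’ original distillation work) fits in a few lines of PyTorch. This is a minimal sketch, not DeepSeek’s actual training code; the temperature, batch size, and vocabulary size are illustrative:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Push the student's output distribution toward the teacher's softened one."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The temperature**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature**2

# Toy usage: random logits stand in for real teacher and student forward passes.
vocab_size = 32_000
teacher_logits = torch.randn(8, vocab_size)                      # recorded teacher "answers"
student_logits = torch.randn(8, vocab_size, requires_grad=True)  # smaller model's guesses

loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow only into the student; the teacher stays frozen
```

Minimizing this loss rewards the student for matching the teacher’s full probability distribution over answers, not just its top pick, which is how the compressed model inherits so much of the larger one’s behavior.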

10. Sycophancy

As people across the world spend increasing amounts of time interacting with chatbots like ChatGPT, chatbot makers are struggling to work out the kind of tone and “personality” the models should adopt. Back in April, OpenAI admitted it’d struck the wrong balance between helpful and sniveling, saying a new update had rendered GPT-4o too sycophantic. Having it suck up to you isn’t just irritating—it can mislead users by reinforcing their incorrect beliefs and spreading misinformation. So consider this your reminder to take everything—yes, everything—LLMs produce with a pinch of salt. —Rhiannon Williams

11. Slop

If there is one AI-related term that has fully escaped the nerd enclosures and entered public consciousness, it’s “slop.” The word itself is old (think pig feed), but “slop” is now commonly used to refer to low-effort, mass-produced content generated by AI, often optimized for online traffic. A lot of people even use it as a shorthand for any AI-generated content. It has felt inescapable in the past year: We have been marinated in it, from fake biographies to shrimp Jesus images to surreal human-animal hybrid videos.

But people are also having fun with it. The term’s sardonic flexibility has made it easy for internet users to slap it on all kinds of words as a suffix to describe anything that lacks substance and is absurdly mediocre: think “work slop” or “friend slop.” As the hype cycle resets, “slop” marks a cultural reckoning about what we trust, what we value as creative labor, and what it means to be surrounded by stuff that was made for engagement rather than expression. —Caiwei Chen

12. Physical intelligence

Did you come across the hypnotizing video from earlier this year of a humanoid robot putting away dishes in a bleak, gray-scale kitchen? That pretty much embodies physical intelligence: the idea that advancements in AI can help robots better move around the physical world.

It’s true that robots have been able to learn new tasks faster than ever before, everywhere from operating rooms to warehouses. Self-driving-car companies have seen improvements in how they simulate the roads, too. That said, it’s still wise to be skeptical that AI has revolutionized the field. Consider, for example, that many robots advertised as butlers in your home are doing the majority of their tasks thanks to remote operators in the Philippines.

The road ahead for physical intelligence is also sure to be weird. Large language models train on text, which is abundant on the internet, but robots learn more from videos of people doing things. That’s why the robot company Figure suggested in September that it would pay people to film themselves in their apartments doing chores. Would you sign up? —James O’Donnell

13. Fair use

AI models are trained by devouring millions of words and images across the internet, including copyrighted work by artists and writers. AI companies argue this is “fair use”—a legal doctrine that lets you use copyrighted material without permission if you transform it into something new that doesn’t compete with the original. Courts are starting to weigh in. In June, Anthropic’s training of its AI model Claude on a library of books was ruled fair use because the technology was “exceedingly transformative.”

That same month, Meta scored a similar win, but only because the authors couldn’t show that the company’s literary buffet cut into their paychecks. As copyright battles brew, some creators are cashing in on the feast. In December, Disney signed a splashy deal with OpenAI to let users of Sora, the AI video platform, generate videos featuring more than 200 characters from Disney’s franchises. Meanwhile, governments around the world are rewriting copyright rules for the content-guzzling machines. Is training AI on copyrighted work fair use? As with any billion-dollar legal question, it depends. —Michelle Kim

14. GEO

Just a few short years ago, an entire industry was built around helping websites rank highly in search results (okay, just in Google). Now search engine optimization (SEO) is giving way to GEO—generative engine optimization—as the AI boom forces brands and businesses to scramble to maximize their visibility in AI, whether that’s in AI-enhanced search results like Google’s AI Overviews or within responses from LLMs. It’s no wonder they’re freaked out. We already know that news companies have experienced a colossal drop in search-driven web traffic, and AI companies are working on ways to cut out the middleman and allow their users to visit sites directly from within their platforms. It’s time to adapt or die. —Rhiannon Williams

Four bright spots in climate news in 2025

2025-12-24 19:00:00

Climate news hasn’t been great in 2025. Global greenhouse-gas emissions hit record highs (again). This year is set to be either the second or third warmest on record. Climate-fueled disasters like wildfires in California and flooding in Indonesia and Pakistan devastated communities and caused billions in damage.

In addition to these worrying indicators of our continued contributions to climate change and their obvious effects, the world’s largest economy has made a sharp U-turn on climate policy this year. The US under the Trump administration withdrew from the Paris Agreement, cut funds for climate research, and scrapped billions of dollars in funding for climate tech projects.

We’re in a severe situation with climate change. But for those looking for bright spots, there was some good news in 2025. Here are a few of the positive stories our climate reporters noticed this year.

China’s flattening emissions


One of the most notable and encouraging signs of progress this year occurred in China. The world’s second-biggest economy and biggest climate polluter has managed to keep carbon dioxide emissions flat for the last year and a half, according to an analysis in Carbon Brief.

That’s happened before, but only when the nation’s economy was contracting, including in the midst of the covid-19 pandemic. But emissions are now falling even as China’s economy is on track to grow about 5% this year, and electricity demands continue to rise.

So what’s changed? China has now installed so much solar and wind, and put so many EVs on the road, that its economy can continue to expand without increasing the amount of carbon dioxide it’s pumping into the atmosphere, decoupling the traditional link between emissions and growth.

Specifically, China added an astounding 240 gigawatts of solar power capacity and 61 gigawatts of wind power in the first nine months of the year, the Carbon Brief analysis noted. That’s nearly as much solar power as the US has installed in total, in just the first three quarters of this year.

It’s too early to say China’s emissions have peaked, but the country has said it will officially reach that benchmark before 2030.

To be clear, China still isn’t moving fast enough to keep the world on track for meeting relatively safe temperature targets. (Indeed, very few countries are.) But it’s now both producing most of the world’s clean energy technologies and curbing its emissions growth, providing a model for cleaning up industrial economies without sacrificing economic prosperity—and setting the stage for faster climate progress in the coming years.

Batteries on the grid


It’s hard to articulate just how quickly batteries for grid storage are coming online. These massive arrays of cells can soak up electricity when sources like solar are available and prices are low, and then discharge power back to the grid when it’s needed most.
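The economics behind that charge-and-discharge cycle are simple enough to sketch in a few lines of Python. Every number below (hourly prices, battery size, round-trip efficiency) is invented for illustration:

```python
# Toy grid-battery arbitrage: charge in the cheapest hours, sell in the priciest.
prices = [32, 28, 22, 18, 20, 30, 55, 95, 120, 80, 45, 35]  # $/MWh, one per hour

POWER_MW = 25      # maximum charge/discharge rate per hour
HOURS = 4          # a 100 MWh battery runs 4 hours at full power
EFFICIENCY = 0.88  # round-trip efficiency: some energy is lost along the way

by_price = sorted(range(len(prices)), key=lambda h: prices[h])
charge_hours = by_price[:HOURS]      # buy during the 4 cheapest hours
discharge_hours = by_price[-HOURS:]  # sell during the 4 most expensive hours

cost = sum(prices[h] for h in charge_hours) * POWER_MW
revenue = sum(prices[h] for h in discharge_hours) * POWER_MW * EFFICIENCY

print(f"Bought {POWER_MW * HOURS} MWh for ${cost:,}")
print(f"Sold {POWER_MW * HOURS * EFFICIENCY:.0f} MWh for ${revenue:,.0f}")
print(f"Daily arbitrage margin: ${revenue - cost:,.0f}")
```

Real operators face forecasting error, battery degradation, and market rules that this toy ignores, but the basic spread between cheap midday solar and expensive evening demand is what makes grid batteries pay.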

Back in 2015, the battery storage industry had installed only a fraction of a gigawatt of battery storage capacity across the US. That year, it set a seemingly bold target of adding 35 gigawatts by 2035. The sector passed that goal a decade early this year and then hit 40 gigawatts a couple of months later. 

Costs are still falling, which could help maintain the momentum for the technology’s deployment. This year, battery prices for EVs and stationary storage fell yet again, reaching a record low, according to data from BloombergNEF. Battery packs specifically used for grid storage saw prices fall even faster than the average; they cost 45% less than last year.

We’re starting to see what happens on grids with lots of battery capacity, too: in California and Texas, batteries are already helping meet demand in the evenings, reducing the need to run natural-gas plants. The result: a cleaner, more stable grid.

AI’s energy funding influx


The AI boom is complicated for our energy system, as we covered at length this year. Electricity demand is ticking up: the amount of power that utilities supplied to US data centers jumped 22% this year and will more than double by 2030.

But at least one positive shift is coming out of AI’s influence on energy: It’s driving renewed interest and investment in next-generation energy technologies.

In the near term, much of the energy needed for data centers, including those that power AI, will likely come from fossil fuels, especially new natural-gas power plants. But tech giants like Google, Microsoft, and Meta all have goals on the books to reduce their greenhouse-gas emissions, so they’re looking for alternatives.

Meta signed a deal with XGS Energy in June to purchase up to 150 megawatts of electricity from a geothermal plant. In October, Google signed an agreement that will help reopen Duane Arnold Energy Center in Iowa, a previously shuttered nuclear power plant.

Geothermal and nuclear could be key pieces of the grid of the future, as they can provide constant power in a way that wind and solar don’t. There’s a long way to go for many of the new versions of the tech, but more money and interest from big, powerful players can’t hurt.

Good news, bad news


Perhaps the strongest evidence of collective climate progress so far: We’ve already avoided the gravest dangers that scientists feared just a decade ago.

The world is on track for about 2.6 °C of warming over preindustrial conditions by 2100, according to Climate Action Tracker, an independent scientific effort to track the policy progress that nations have made toward their goals under the Paris climate agreement.

That’s a lot warmer than we want the planet to ever get. But it’s also a whole degree better than the 3.6 °C path that we were on a decade ago, just before nearly 200 countries signed the Paris deal.

That progress occurred because more and more nations passed emissions mandates, funded subsidies, and invested in research and development—and private industry got busy cranking out vast amounts of solar panels, wind turbines, batteries, and EVs. 

The bad news is that progress has stalled. Climate Action Tracker notes that its warming projections have remained stubbornly fixed for the last four years, as nations have largely failed to take the additional action needed to bend that curve closer to the 2 °C goal set out in the international agreement.

But having shaved off a degree of danger is still demonstrable proof that we can pull together in the face of a global threat and address a very, very hard problem. And it means we’ve done the difficult work of laying down the technical foundation for a society that can largely run without spewing ever more greenhouse gas into the atmosphere.

Hopefully, as cleantech continues to improve and climate change steadily worsens, the world will find the collective will to pick up the pace again soon.