IEEE Spectrum

What It Will Really Take to Electrify All of Africa

2025-08-15 22:05:36



I dined recently with Joe, a Nigerian who manages a 400-hectare rice farm in the north of his country. Nigeria imports about 2.4 million metric tons of rice annually, according to the U.S. Department of Agriculture. Farmers like Joe are helping to move his country of 237 million people toward self-sufficiency in rice.

But farmer Joe has a handicap. “For me, the power grid is a fiction,” he says. “I don’t get any electricity from the grid, and I never will.”

Five years ago, Joe installed solar panels to power his farm’s irrigation system, which draws water from a nearby river. His milling and bagging machines, meanwhile, still run on diesel generators. When Nigeria ended its fuel subsidy in 2023, Joe’s fuel costs soared, reducing the money he can invest in more land and other improvements.

What is holding back Africa’s electrification?

Joe’s predicament is not unique. In sub-Saharan Africa, 600 million people—about 53 percent—still have no access to electricity. Even this grim statistic understates the problem, because “access” can mean just enough wattage to illuminate a few LED lightbulbs some of the time. It’s not what Western Europeans or North Americans would consider electricity.

And traditional power grids in sub-Saharan Africa are hampered by poor reliability and frequent outages. Even when offered electricity, many customers can’t afford to pay, and so theft of service is endemic. Where grids do exist, “they are outdated, unstable, and lack customer connections,” the United Nations Conference on Trade and Development (UNCTAD) reported in 2023.

“I’m a bit tired of imprecise measures of access if that access doesn’t translate into the potential for substantial improvements and increases in consumption,” says Christopher D. Gore, a professor of politics and public administration at Toronto Metropolitan University, who studies electricity usage in the region. “Our latest research shows that [sub-Saharan] households are happy to have any electric light but remain dissatisfied with the minimal supply, the price, and the quality of both grid and solar power.”


The electricity deficit may well be worsening. In a 2024 report on universal energy access in Africa, researchers from the Center for Strategic & International Studies, in Washington, D.C., concluded that “demand is significantly outstripping supply, and the energy crisis is deepening.”

To address this dire shortage, the World Bank and the African Development Bank announced an initiative last year called Mission 300, to bring electricity to 300 million people in sub-Saharan Africa—about half the number who lack access now—by 2030. Such a rapid expansion means bringing electricity to an additional 4.2 million people every month on average.

While plausible, the expansion faces headwinds, most notably sub-Saharan Africa’s net population gain of about 2.5 million people per month. If that growth continues for all six years of the initiative, an additional 180 million people will require electricity access.
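The arithmetic behind those figures is straightforward. A quick sketch (the six-year window is an assumption based on the 2030 deadline):

```python
# Back-of-the-envelope check of the Mission 300 figures cited above.
TARGET = 300_000_000          # people to connect by 2030
MONTHS = 6 * 12               # roughly six years from the announcement

new_connections_per_month = TARGET / MONTHS
print(f"{new_connections_per_month / 1e6:.1f} million connections/month")  # ~4.2 million

# Population growth over the same window adds to the backlog.
POP_GAIN_PER_MONTH = 2_500_000
extra_people = POP_GAIN_PER_MONTH * MONTHS
print(f"{extra_people / 1e6:.0f} million additional people")  # 180 million
```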

“The challenge is large. Africa’s population is projected to double by 2050,” says Barry MacColl, a senior regional manager at the Electric Power Research Institute (EPRI), who covers Africa from Johannesburg. “Expanding national grids can be expensive and slow, especially in rural and remote areas, where most of the unelectrified people live.” For example, South Africa’s main utility, Eskom Holdings, estimates it will need to spend 390 billion rand (US $22 billion) over the next decade to expand and upgrade its aging power grid and prevent future blackouts.


Large differences in electricity access persist among and within African countries. According to a 2020 report from Germany’s Federal Ministry for Economic Cooperation and Development, in the East, West, and Southern African regions, about half the people have access to electricity, but the percentage falls to a mere 30 percent in Central Africa, where nearly 100 million people have no electricity access. And according to the World Bank, about 82 percent of urban residents had electricity access in 2023, but only 33 percent in rural areas. (The North African countries aren’t part of the sub-Saharan region, and, except for Libya, have electrification rates of 100 percent.)

Off-grid solar’s untapped potential in Africa

Fossil fuels still play a big role in Africa’s power generation. Natural gas is the single largest source of electricity generation, while coal is significant only in South Africa. Together, they account for roughly two-thirds of the continent’s electricity production, according to BloombergNEF. While new gas-fired plants continue to be built, the trend is shifting toward renewable energy sources.

An electronics shop in Kenya sells solar panels. Off-grid solar has been a big part of the country’s successful push to increase electricity access. James Wakibia/SOPA Images/LightRocket/Getty Images

Small-scale off-grid technologies, especially solar power, are widely viewed as the strongest path to expanding electricity access to rural communities and underserved urban areas. UNCTAD estimates that Africa has 60 percent of the world’s best global solar resources. That translates to a solar potential of over 10 terawatts. “Off-grid solar and storage is taking off in a big way,” says Sonia Dunlop, CEO of the Global Solar Council in London. “There are already about 600 million people, almost all in sub-Saharan Africa, who use off-grid solar and storage at least once a week.” Dunlop expects to see a 40 percent increase in solar installations next year in the region.

Off-grid solar power lends itself to bottom-up bootstrapping in rural areas by communities, small farms, businesses, and residential customers. To make the technology more affordable, the expansion of microfinancing will be key, as Mwoya Byaro and Nanzia Florent Mmbaga point out in a 2022 study in Scientific African.

I know firsthand the difference off-grid solar can make. My Nigerian-born wife and I own a walled compound of three homes in southern Nigeria, where members of her family live. We recently installed solar lights atop 5-meter-tall poles. They now illuminate communal areas that were formerly dark at night. The compound and the neighborhood aren’t connected to the grid, though, so for indoor electricity, our relatives still rely on diesel generators.

The future of hydropower in sub-Saharan Africa

While off-grid solar could bring electricity to millions of people, hydropower is “Africa’s renewable-electricity powerhouse, largely thanks to excellent resources in the East and Central regions of the continent,” BloombergNEF reported in 2024. Six countries, led by Ethiopia, get most of their electricity from hydropower.

Engineers monitor the Kariba Dam, on the Zambia-Zimbabwe border. Hydropower could play a big role in expanding electricity access in sub-Saharan Africa, but construction is expensive and changing rainfall patterns are making hydropower output unpredictable. The Washington Post/Getty Images

“The hydro space is a huge growth area target,” says MacColl of EPRI. As with solar, Africa uses only a small fraction of its hydropower potential. Mini hydropower dams from 100 kilowatts to 1 megawatt are important for remote and small communities of around 50 to 500 homes, MacColl says. Large dams are under construction or have been recently completed in Angola, Ethiopia, Nigeria, and Zambia.
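The mini-hydro sizing MacColl describes implies a consistent per-household figure, sketched here (the pairing of plant sizes to community sizes is an assumption based on the ranges he gives):

```python
# Implied capacity per home for the mini-hydro range described above
# (100 kW to 1 MW serving roughly 50 to 500 homes).
for plant_kw, homes in [(100, 50), (1000, 500)]:
    per_home_kw = plant_kw / homes
    print(f"{plant_kw} kW / {homes} homes = {per_home_kw:.0f} kW per home")
# Both ends of the range work out to about 2 kW of capacity per household.
```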


But constructing hydropower dams is costly and carries the risk of corruption and mismanagement that comes with big projects, as well as the cost of connecting a new power source to the power grid. For instance, Nigeria’s $5.8 billion, 3,050-MW Mambilla dam, which will become the largest source of electricity in the country, has been in the planning stage for over 40 years, and completion isn’t expected before 2030. Climate change’s impact on rainfall and temperature is also upending estimates of how much electricity hydropower dams across the region can produce.

Could nuclear power help electrify Africa?

Even nuclear power may play a role in closing Africa’s electricity gap. The African Energy Chamber, an industry group based in Johannesburg, notes in its 2025 Outlook Report that “a significant number of countries in Africa are considering embarking on nuclear power programmes.”


Today, only South Africa has nuclear power. But Ghana, which runs a research reactor, is planning its first nuclear power plant with assistance from China, Japan, and the United States. Uganda has chosen a site for its first reactors, as has Kenya. And the Nigerian Nuclear Regulatory Authority says it has signed technical agreements on nuclear power with France, India, Russia, and South Korea. But in all these cases, generating electricity from nuclear power is at least a decade away, according to the World Nuclear Association.

Kenya’s electrification success story

Ultimately, increased access to electricity in sub-Saharan Africa will come from a variety of sources. One success story is Kenya, where off-grid electricity, primarily from solar, is complementing expanded grid access. The government’s Last Mile Connectivity Project aims to extend the grid to an additional 280,000 residences, 30,000 businesses, and health centers and schools in all 47 counties, according to the African Development Bank, which helped fund the effort. Previously, the national utility, Kenya Power, succeeded in increasing the number of grid-connected households in the poorest urban areas from 3,000 to 150,000. Kenya also has the largest wind farm in Africa, the Lake Turkana Wind Power Project. The 310-MW plant’s 365 turbines account for about 15 percent of Kenya’s installed electricity capacity.

These sustained efforts doubled Kenya’s electrification access rate between 2013 and 2023 to 79 percent. Kenya Power now aims to achieve universal electricity access by 2030.

Meanwhile, in Nigeria, the most populous sub-Saharan country, the outlook for electricity access is cloudier. Joe, the Nigerian rice farmer, is considering installing more solar on his farm, to expand his mill. With more electricity, he says, “we can grow more rice, and mill and bag more for our people.” If the power grid won’t—or can’t—come to him, at least he has the means to generate his own electricity to meet his own needs.

Designing for Functional Safety: A Developer's Introduction

2025-08-15 19:52:07



Welcome to your essential guide to functional safety, tailored specifically for product developers. In a world where technology is increasingly integrated into every aspect of our lives—from industrial robots to autonomous vehicles—the potential for harm from product malfunctions makes functional safety not just important, but critical.

This webinar cuts through the complexity to provide a clear understanding of what functional safety truly entails and why it’s critical for product success. We’ll start by defining functional safety not by its often-confusing official terms, but as a structured methodology for managing risk through defined engineering processes, essential product design requirements, and probabilistic analysis. The “north star” goals? To ensure your product not only works reliably but, if it does fail, it does so in a safe and predictable manner.

We’ll dive into two fundamental concepts: the Safety Lifecycle, a detailed engineering process focused on design quality to minimize systematic failures, and Probabilistic, Performance-Based Design using reliability metrics to minimize random hardware failures. You’ll learn about IEC 61508, the foundational standard for functional safety, and how numerous industry-specific standards derive from it.

The webinar will walk you through the Engineering Design phases: analyzing hazards and required risk reduction, realizing optimal designs, and ensuring safe operation. We’ll demystify the Performance Concept and the critical Safety Integrity Level (SIL), explaining its definition, criteria (systematic capability, architectural constraints, PFD), and how it relates to industry-specific priorities.

Discover key Design Verification techniques like DFMEA/DDMA and FMEDA, emphasizing how these tools help identify and address problems early in development. We’ll detail the FMEDA technique showing how design decisions directly impact predictions like safe and dangerous failure rates, diagnostic coverage, and useful life. Finally, we’ll cover Functional Safety Certification, explaining its purpose, process, and what adjustments to your development process can set you up for success.
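The FMEDA outputs mentioned above roll up into the standard IEC 61508 metrics. A minimal sketch, using hypothetical failure rates (the metric definitions follow IEC 61508; every number here is illustrative, not from the webinar):

```python
# Illustrative FMEDA roll-up using hypothetical failure rates in FITs
# (failures per 10^9 hours). IEC 61508 splits the total failure rate into
# safe, dangerous-detected, and dangerous-undetected components.
lambda_safe = 400.0   # failures with no dangerous effect
lambda_dd   = 450.0   # dangerous failures caught by diagnostics
lambda_du   = 50.0    # dangerous failures that go undetected

lambda_total = lambda_safe + lambda_dd + lambda_du

# Diagnostic coverage: fraction of dangerous failures the diagnostics detect.
dc = lambda_dd / (lambda_dd + lambda_du)

# Safe failure fraction: everything except dangerous-undetected failures.
sff = (lambda_safe + lambda_dd) / lambda_total

# Average probability of failure on demand for a simple 1oo1 channel,
# assuming a one-year proof-test interval (8,760 hours).
FIT = 1e-9
pfd_avg = lambda_du * FIT * 8760 / 2

print(f"DC = {dc:.0%}, SFF = {sff:.0%}, PFDavg = {pfd_avg:.1e}")
```

Design decisions show up directly in these numbers: adding diagnostics moves failures from the dangerous-undetected to the dangerous-detected column, raising DC and SFF and lowering PFDavg.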

Register now for this free webinar!

From Stage and Screen to High-Tech Service Sector

2025-08-15 02:00:04



When children dream of being entertainers, their backup career isn’t typically engineering. But that was Gokul Pandy’s plan if he didn’t make it in Kollywood, a segment of Indian cinema dedicated to producing motion pictures in the Tamil language. As a youngster growing up in Chennai, India, Pandy loved acting and dancing, but he also excelled in math and science in school.

His parents, both educators, supported their son’s passion but encouraged him to consider studying engineering. They told him that if engineering didn’t work out, he could always try show business.

Gokul Pandy


Employer:

Accenture in Richmond, Va.

Title:

Application development manager

Member grade:

Senior member

Alma maters:

Anand Institute of Higher Technology, Chennai, India; Alagappa University, in Karaikudi, India


He took their advice and, as a high school student, began watching YouTube videos of engineers solving real-world problems. Those videos inspired him to enroll in the engineering program at Anand Institute of Higher Technology, in Chennai. During his second year there, a career in engineering won out over one in entertainment. His engineering career has had its challenges, however.

Today, Pandy is an application development manager at Accenture, in Richmond, Va. The IEEE senior member also serves as the chair of the IEEE Richmond Section.

He was inducted this month into the IEEE–Eta Kappa Nu honor society.

From IT support to robotic automation expert

Pandy graduated from Anand in 2008 with a bachelor’s degree in electronics and communications engineering. It was at the height of the Great Recession, which began in 2007 and ended in 2009. Many countries, including India, had high unemployment rates then, making it tougher to find a full-time job. To boost his chances, Pandy earned a variety of certifications including lead system architect, AWS Cloud Practitioner, and robotic process automation (RPA). He also enrolled in the online MBA program at Alagappa University, in Karaikudi, India, which he completed in 2011.

In 2008 he was hired by Accenture in Bangalore as a contract employee, working in IT support. Thanks to his consistent hard work, he says, the company promoted him to full-time software engineer, which was “life-changing.”

He was transferred to the company’s Chennai office and eventually was promoted to senior software engineer.

He became an expert on automation frameworks: structured sets of guidelines, tools, and practices designed to streamline and manage repetitive tasks.

To address inefficiencies in systems, he and his team designed an open-source automation tool that automates processes such as data entry and validation and verifies compliance across multiple levels.

By automating repetitive administrative tasks, business users could focus on other important duties.

In 2015 his company moved Pandy to the United States to support a major health care transformation initiative.

“That move marked a turning point in my career,” he says, “propelling me into the international spotlight as a thought leader in enterprise automation.” He went from manual tester to automation subject-matter expert for two major health care organizations, he says.

His efforts have been recognized. He received a Global Recognition Award this year for his “outstanding leadership and innovative contributions to robotic process automation (RPA) in health care technology.”

He also received a 2024 Claro Platinum Award for excellence in technology. The Claro Awards recognize people doing outstanding work in AI, data analytics, and technology.

Pandy is a Fellow of several organizations including the British Computer Society, Hackathon Raptors, the Institution of Engineering and Technology, and the Soft Computing Research Society.

He has authored or coauthored more than 20 papers, several of which are in the IEEE Xplore Digital Library. He has peer-reviewed more than 200 papers for major conferences, and he was invited as a keynote speaker in IEEE and other conferences.

Running an IEEE section to acquire new skills

Pandy joined IEEE last year and was quickly elevated to senior member.

He says he joined because “the organization offers me a platform to collaborate with global technologists.”

Never Lose Hope


Gokul Pandy offers young professionals this advice for having a successful career:

  • Be patient, work hard, and never lose hope. Today’s young adults want everything instantly—which isn’t realistic in the workplace.
  • Keep learning. Today’s technologies, such as artificial intelligence, are evolving quickly, and you need to stay on top of the latest innovations.
  • Don’t wait for the perfect opportunity; rather, create it.
  • Don’t view yourself as “just” a software engineer or “just” an associate engineer. Go the extra mile, even if you aren’t paid to do so. That way, you’ll always be one step ahead of the competition.
  • Build solutions that will have a real-world impact, not just theoretical value. Theoretical value is good to study, but make the application practical. When you do practical work, you’ll come to know the difficulties, the failures, and the pros and cons.
  • Staying true to yourself will help you grow in your career.
  • Stay humble, and don’t be afraid to ask questions, even if you think they are silly.
  • Get involved in a professional organization, such as IEEE, early in your career. You’ll get to know about cutting-edge technologies, meet great leaders, and pick up skills that will help your career.

“IEEE stays updated on the frontier of technologies and gives back to the engineering community,” he adds. “It also provides services to students and initiates talks where we can explore topics with respect to our skills and knowledge, and mentor others.”

Interested in volunteering and mentoring, he decided to nominate himself for the position of IEEE Richmond Section chair. The section has more than 800 members and covers six cities and 26 counties.

Being chair is his first volunteer leadership position. He says he believed that running the section would enhance his leadership skills. Winning the election “was a miracle,” he says. “It was another enrichment opportunity for my career.”

He began his term in January and was initially anxious about the additional workload, he says, but so far his term has been an “amazing experience.” He credits IEEE Senior Member Allen Jones, a past chair of the section, for helping him through the transition.

Pandy has been busy since taking office. The section formed a life members affinity group and has held events with the IEEE Virginia Commonwealth University student branch in Richmond. The section also held career-development events there, allowing IEEE members to mentor engineering students, provide them with networking tips, and help them refine their résumés.

The IEEE Magnetics Society and the IEEE Computer Society chapters partnered with VCU, AI Ready RVA, and the Richmond Institute of Technology and Science to increase artificial intelligence literacy for preuniversity students in underserved communities. The initiative provides training to educators and offers workshops to students demonstrating the technology and its practical applications.

Through leading all those activities, Pandy says, he has picked up important skills including strategic planning, public speaking, team collaboration, and budget planning, which he can apply at work.

“These experiences have helped me drive large-scale IT transformation projects in my professional role,” he says, “and improved my ability to lead diverse and cross-functional teams.

“Volunteering fuels my energy rather than drains it. Also, it’s kind of a small, rewarding part of my weekly routine. I feel that I’ve done a little bit to help someone somewhere move a step ahead in their career—which gives me great satisfaction.”

Pandy hasn’t lost his desire to be an entertainer. He performs whenever possible at cultural events and the section’s social gatherings.

“Dancing and acting are now hobbies,” he says.

Is the World Adopting Post-Quantum Cryptography Fast Enough?

2025-08-13 23:01:19



A year ago today, the National Institute of Standards and Technology (NIST) published the first official standard for post-quantum cryptography (PQC) algorithms. The standard was the result of a 2022 memorandum from the Biden administration that requires federal agencies to transition to PQC-based security by 2035.

Cryptography relies on math problems that are nearly impossible to solve, but easy to check if a solution is correct. Armed with such math problems, only the holder of a secret key can check their solution and get access to the secret data. Today, most online cryptography relies on one of two such algorithms: either RSA or elliptic curve cryptography.
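The asymmetry described above can be sketched with trivially small numbers (real RSA moduli are thousands of bits, so the "hard" direction below becomes computationally hopeless at scale):

```python
# Toy illustration of the one-way math cryptography relies on: multiplying
# two primes is instant, but recovering them from the product requires search.
p, q = 104723, 104729            # two primes (the secret)
n = p * q                        # the public modulus: easy to compute

def factor(n):
    """Brute-force trial division: the 'hard' direction,
    exponential in the bit length of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

assert factor(n) == (p, q)       # feasible here, hopeless at 2048 bits
```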

The cause for concern is that quantum computers, if a large enough one is ever built, would make easy work of the “hard” problems underlying current cryptographic methods. Luckily, there are other math problems that appear to be equally hard for quantum computers and their existing classical counterparts. That’s the basis of post-quantum cryptography: cryptography that’s secure against hypothetical quantum computers.

With the mathematics behind PQC ironed out, and standards in hand, the work of adoption is now underway. This is no easy feat: every computer, laptop, smartphone, self-driving car, or IoT device will have to fundamentally change the way they run cryptography.

Ali El Kaafarani is a research fellow at the Oxford Mathematical Institute who contributed to the development of NIST’s PQC standards. He also founded a company, PQShield, to help bring post-quantum cryptography into the real world by assisting original equipment manufacturers in implementing the new protocols. He spoke with IEEE Spectrum about how adoption is going and whether the new standards will be implemented in time to beat the looming threat of quantum computers.

What has changed in the industry since the NIST PQC standards came out?

Ali El Kaafarani. PQShield

Ali El Kaafarani: Before the standards came out, a lot of people were not talking about it at all, in the spirit of “If it’s working, don’t touch it.” Once the standards were published, the whole story changed, because now it’s not hypothetical quantum hype; it’s a compliance issue. There are standards published by the U.S. government. There are deadlines for the adoption. And the 2035 [deadline] came together with the publication from [the National Security Agency] and was adopted in formal legislation passed by Congress, so there is no way around it. Now it’s a compliance issue.

Before, people used to ask us, “When do you think we’re going to have a quantum computer?” I don’t know when we’re going to have a quantum computer. But that’s the issue, because we’re talking about a risk that can materialize any time. Some other, more intelligent people who have access to a wider range of information decided in 2015 to categorize quantum computing as a real threat. So this year was a transformational year, because the question went from “Why do we need it?” to “How are we going to use it?” And the whole supply chain started looking into who’s going to do what, from chip design to the network security layer, to the critical national infrastructure, to build up a post-quantum-enabled network security kit.

Challenges in PQC Implementation

What are some of the difficulties of implementing the NIST standards?

El Kaafarani: You have the beautiful math, you have the algorithms from NIST, but you also have the wild west of cybersecurity. That infrastructure goes from the smallest sensors and car keys, etc., to the largest server sitting there and trying to crunch hundreds of thousands of transactions per second, each with different security requirements, each with different energy consumption requirements. Now that is a different problem. That’s not a mathematical problem, that’s an implementation problem. This is where you need a company like PQShield, where we gather hardware engineers, and firmware engineers, and software engineers, and mathematicians, and everyone else around them to actually say, “What can we do with this particular use case?”

Cryptography is the backbone of cybersecurity infrastructure, and worse than that, it’s the invisible piece that nobody cares about until it breaks. If it’s working, nobody touches it. They only talk about it when there’s a breach, and then they try to fix things. In the end, they usually put bandaids on it. That’s normal, because enterprises can’t sell the security feature to customers. They were only using it when governments forced them to, as with a compliance issue. And now it’s a much bigger problem, as someone is telling them, “You know what, all the cryptography that you’ve been using for the past 15 or 20 years, you need to change it, actually.”

Are there security concerns for the PQC algorithm implementations?

El Kaafarani: Well, we haven’t done it before. It hasn’t been battle-tested. And now what we’re saying is, “Hey, AMD and the rest of the hardware or semiconductor world go and put all those new algorithms in hardware, and trust us, they’re going to work fine, and then nobody’s going to be able to hack them and extract the key.” That’s not easy, right? Nobody has the guts to say this.

That’s why, at PQShield, we have vulnerability teams that are trying to break our own designs, separately from the teams who are designing things. You have to do this. You need to be one step ahead of attackers. That’s all you need to do, and that’s all you can do, because you can’t say, “Okay, I’ve got something that is secure. Nobody can break it.” If you say that, you’re going to eat humble pie in 10 years’ time, because maybe someone will come up with a way to break it. You need to just do this continuous innovation and continuous security testing for your products.

Because PQC is new, we still haven’t seen all the creativity of attackers trying to bypass the beautiful mathematics and come up with those creative and nasty side-channel attacks that just laugh at the mathematics. For example, some attacks look at the energy consumption of the algorithm running on your laptop and extract the key from differences in energy consumption. Or there are timing attacks that look at how long it takes for you to encrypt the same message 100 times and how that varies, and they can actually extract the key. So there are different ways to attack algorithms, and that’s not new. We just don’t have billions of devices in our hands yet that run post-quantum cryptography that people have tested.
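The timing attacks El Kaafarani describes exploit data-dependent execution time. A minimal sketch of the general idea (not specific to any PQC algorithm): a naive comparison leaks how much of a guess is correct, while the standard mitigation compares in constant time.

```python
import hmac

# A naive byte-by-byte comparison returns as soon as bytes differ, so its
# running time depends on how many leading bytes the attacker guessed right.
def naive_compare(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:          # early exit leaks the mismatch position via timing
            return False
    return True

# The standard mitigation: compare in time independent of the contents.
def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    return hmac.compare_digest(secret, guess)

secret = b"correct horse battery"
assert naive_compare(secret, secret)
assert constant_time_compare(secret, secret)
assert not constant_time_compare(secret, b"wrong horse battery!!")
```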

Progress in PQC Adoption

How would you say adoption has been going so far?

El Kaafarani: The fact that a lot of companies only started when the standards were published puts us in a position where some are well advanced in their thinking, their processes, and their adoption, and others are totally new to it because they were not paying attention and were just kicking the can down the road. The majority of those kicking the can down the road are the ones that don’t sit high up in the supply chain, because they felt it was someone else’s responsibility. But they didn’t understand that they had to influence their suppliers when it comes to requirements, timelines, integration, and so many other things they have to prepare. This is what’s going on now: A lot of them are doing a lot of work.

Now, those who sit high up in the supply chain, quite a few of them have made great progress and started embedding post-quantum cryptography designs into new products, and are trying to work out a way to upgrade products that are already on the ground.

I don’t think that we’re in in a great place, where everyone is doing what they’re supposed to be doing. That’s not the case. But I think that from last year, when many people were asking “When do you think we’re going to have a quantum computer?” and are now asking “How can I be compliant? Where do you think I should start? And how can I evaluate where the infrastructure to understand where the most valuable assets are, and how can I protect them? What influence can I exercise on my suppliers?” I think huge progress has been made.

Is it enough? It’s never enough in security. Security is damn difficult. It’s a multi-disciplinary topic. There are two types of people: Those who love to build security products, and those who would love to break them. We’re trying to get most of those who love to break them into the right side of history so that they can make products stronger rather than actually making existing ones vulnerable for exploitation.

Do you think we’re going to make it by 2035?

El Kaafarani: I think that the majority of our infrastructure should be post-quantum secure by 2035, and that’s a good thing. That’s a good thought to have. Now, what happens if quantum computers happen to become reality before that? That’s a good topic for a TV series or for a movie. What happens when most secrets are readable? People are not thinking hard enough about it. I don’t think that anyone has an answer for that.

How AI’s Sense of Time Will Differ From Ours

2025-08-13 22:00:02



An understanding of the passage of time is fundamental to human consciousness. While we continue to debate whether artificial intelligence (AI) can possess consciousness, one thing is certain: AI will experience time differently. Its sense of time will be dictated not by biology, but by its computational, sensory, and communication processes. How will we coexist with an alien intelligence that perceives and acts in a very different temporal world?

What Simultaneity Means to a Human

Clap your hands while looking at them. You see, hear, and feel the clap as a single multimodal event—the visual, audio, and tactile senses appear simultaneous and define the “now.” Our consciousness plays out these sensory inputs as simultaneous, although they arrive at different times: Light reaches our eyes faster than sound reaches our ears, while our brain processes audio faster than it does complex visual information. Still, it all feels like one moment.

That illusion stems from a built-in brain mechanism. The brain defines “now” through a brief window of time during which multiple sensory perceptions are collected and integrated. This span of time, usually up to a few hundred milliseconds, is called the temporal window of integration (TWI). Films shown at 24 frames per second exploit a similar temporal grid to create an illusion of continuous movement.

But the human TWI has its limits. See a distant lightning flash and you’ll hear the rumble of thunder seconds later. The human TWI evolved to stitch together sensory information only for events within roughly 10 to 15 meters. That’s our horizon of simultaneity.
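The arithmetic behind that horizon is easy to check. Sound covers 15 meters in roughly 44 milliseconds, within a plausible integration window, while thunder from 3 kilometers away lags the flash by almost 9 seconds. The speeds below are standard physical constants; the distances are the article’s examples.

```python
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C
SPEED_OF_LIGHT = 3.0e8   # m/s

def audio_visual_lag(distance_m):
    """Gap between seeing and hearing the same event at a given distance."""
    return distance_m / SPEED_OF_SOUND - distance_m / SPEED_OF_LIGHT

print(f"{audio_visual_lag(15) * 1000:.0f} ms")    # ~44 ms: fusable as one moment
print(f"{audio_visual_lag(3000):.1f} s")          # ~8.7 s: flash and rumble split
```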

Alien Intelligence in the Physical World

AI is poised to become a standard part of robots and other machines that perceive and interact with the physical world. These machines will use sensors hardwired to their bodies, but also remote sensors that send digital data from afar. A robot may receive data from a satellite orbiting 600 km above Earth and treat the data as real-time, as transmission takes only 2 ms—far faster than the human TWI.

A human’s sensors are “hardwired” to the body, which establishes two premises for how the brain interacts with the physical world. First, the propagation delay from each sensor to the brain is predictable. When a sound occurs in the environment, the unpredictable factor is the distance between the sound source and our ears; the time delay from the ears to the brain is fixed. Second, each sensor is used by only one human brain. The human horizon of simultaneity evolved through millions of years under these premises, optimized to help us assess opportunities and threats. A lion at 15 meters was worth worrying about, but thunder at 3 kilometers was likely not.

These two premises won’t always be valid for intelligent machines with multimodal perception. An AI system may receive data from a remote sensor with unpredictable link delays. And a single sensor can provide data to many different AI modules in real time, like an eye shared by multiple brains. As a result, AI systems will evolve their own perception of space and time and their own horizon of simultaneity, and they’ll change much faster than the glacial pace of human evolution. We will soon coexist with an alien intelligence that has a different perception of time and space.

The AI Time Advantage

Here’s where things get strange. AI systems are not limited by biological processing speeds and can perceive time with unprecedented precision, discovering cause-and-effect relationships that occur too quickly for human perception.

In our hyperconnected world, this could lead to wide-scale Rashomon effects, where multiple observers give conflicting perspectives on events. (The term comes from a classic Japanese film in which several characters describe the same incident in dramatically different ways, each shaped by their own perspective.)

Imagine a traffic accident in the year 2045 at a busy city intersection, witnessed by three observers: a human pedestrian, an AI system directly connected to street sensors, and a remote AI system receiving the same sensory data over a digital link. The human simply perceives a robot entering the road just before a car crashes into it. The local AI, with immediate sensor access, records the precise order: the robot moving first, then the car braking, then the collision. Meanwhile, the remote AI’s perception is skewed by communication delays, perhaps logging the braking before it perceives the robot stepping into the road. Each perspective offers a different sequence of cause and effect. Which witness will be considered credible, a human or a machine? And which machine?
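The divergence between the two machine witnesses comes down to per-link delays reordering arrivals. A minimal simulation makes the mechanism concrete; all event times and delays below are invented for illustration.

```python
# Three physical events with their true occurrence times (seconds).
events = [("robot enters road", 0.00),
          ("car brakes",        0.08),
          ("collision",         0.15)]

def perceived_order(events, delays):
    """Order events by arrival time, given a per-event link delay."""
    arrivals = [(t + delays[name], name) for name, t in events]
    return [name for _, name in sorted(arrivals)]

# Local AI: small, uniform sensor latency -> sees the true causal order.
local = perceived_order(events, {"robot enters road": 0.001,
                                 "car brakes": 0.001,
                                 "collision": 0.001})

# Remote AI: the feed carrying the robot's movement stalls for 120 ms,
# so the braking "arrives" first and causality appears inverted.
remote = perceived_order(events, {"robot enters road": 0.120,
                                  "car brakes": 0.005,
                                  "collision": 0.010})

print(local)   # ['robot enters road', 'car brakes', 'collision']
print(remote)  # ['car brakes', 'robot enters road', 'collision']
```

Nothing in the remote AI’s data is false; only the arrival order is, which is what makes such disagreements hard to adjudicate.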

People with malicious intent could even use high-powered AI systems to fabricate “events” using generative AI, and could insert them in the overall flow of events perceived by less capable machines. Humans equipped with extended-reality interfaces might be especially vulnerable to such manipulations, as they’d be continuously taking in digital sensory data.

If the sequence of events is distorted, it can disrupt our sense of causality, potentially disrupting time-critical systems such as emergency response, financial trading, or autonomous driving. People could even use AI systems capable of predicting events milliseconds before they occur to confuse and confound. If an AI system predicted an event and transmitted false data at precisely the right moment, it could create a false appearance of causality. For example, an AI that could predict movements of the stock market could publish a fabricated news alert just before an anticipated sell-off.

Computers Put Timestamps, Nature Does Not

The engineer’s instinct might be to solve the problem with digital timestamps on sensory data. However, timestamps require precise clock synchronization, which requires more power than many small devices can handle.

And even if sensory data is timestamped, communication or processing delays may cause it to arrive too late for an intelligent machine to act on the data in real time. Imagine an industrial robot in a factory tasked with stopping a machine if a worker gets too close. Sensors detect a worker’s movement and a warning signal—including a timestamp—travels over the network. But there’s an unexpected network hiccup and the signal arrives after 200 milliseconds, so the robot acts too late to prevent an accident. The timestamps don’t make communication delays predictable, but they can help to reconstruct what went wrong after the fact.
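Timestamps can still bound the damage: a receiver can at least detect that data is too old to act on as if it were current, and fall back to a safer behavior. A sketch of such a staleness guard follows; the 100-millisecond budget and the response strings are arbitrary examples, not a real safety standard.

```python
import time

MAX_AGE = 0.100  # acting on data older than this is unsafe (illustrative)

def handle_warning(event_timestamp, now=None):
    """Act only if the warning is fresh; otherwise fail safe."""
    now = time.time() if now is None else now
    age = now - event_timestamp
    if age <= MAX_AGE:
        return "stop machine"
    # Too stale to trust as real time: trigger a broader fail-safe
    # instead of treating a 200-ms-old world as the current one.
    return "emergency shutdown, log for post-incident analysis"

print(handle_warning(0.0, now=0.050))  # fresh: stop the machine normally
print(handle_warning(0.0, now=0.200))  # late:  take the fail-safe path
```

This requires the sender’s and receiver’s clocks to agree to well within MAX_AGE, which loops back to the synchronization problem above.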

Nature, of course, does not put timestamps on events. We infer temporal flow and causality by comparing the arrival times of event data and integrating it with the brain’s model of the world.

Albert Einstein’s special theory of relativity showed that simultaneity depends on the observer’s frame of reference and can vary with motion. However, it also showed that the causal order of events (the sequence in which causes lead to effects) remains consistent for all observers. Not so for intelligent machines. Because of unpredictable communication delays and variable processing times, intelligent machines may perceive events in a different causal order altogether.

In 1978, Leslie Lamport addressed this issue for distributed computing, introducing logical clocks to determine the “happened before” relation among digital events. To adapt this approach to the intersection of the physical and digital worlds, we must grapple with unpredictable delays between a real-world event and its digital timestamp.
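Lamport’s scheme fits in a few lines: each process keeps a counter, increments it on every local event, attaches it to outgoing messages, and on receipt jumps ahead of the sender’s value, so a message’s receive is always ordered after its send. A minimal sketch:

```python
class Process:
    """Minimal Lamport logical clock (after Lamport, 1978)."""
    def __init__(self, name):
        self.name, self.clock = name, 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1            # sending is itself an event
        return self.clock          # timestamp carried by the message

    def receive(self, msg_ts):
        # Jump past the sender's clock so receive > send, always.
        self.clock = max(self.clock, msg_ts) + 1
        return self.clock

a, b = Process("A"), Process("B")
a.local_event()          # A's clock: 1
ts = a.send()            # A's clock: 2; message stamped 2
b.local_event()          # B's clock: 1
rcv = b.receive(ts)      # B's clock: max(1, 2) + 1 = 3
print(ts, rcv)           # 2 3 -> send ordered before receive
```

The catch the article points to: this guarantees a consistent order only from the moment an event gets a digital timestamp. The hop from the physical world to that first timestamp remains outside the scheme.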

This crucial tunneling from the physical to the digital world happens at specific access points: a digital device or sensor, WiFi routers, satellites, and base stations. As individual devices or sensors can be hacked fairly easily, the responsibility for maintaining accurate and trustworthy information about time and causal order will fall increasingly on large digital infrastructure nodes.

This vision aligns with developments within 6G, the forthcoming wireless standard. In 6G, base stations will not only relay information, they will also sense their environments. These future base stations must become trustworthy gateways between the physical and the digital worlds. Developing such technologies could prove essential as we enter an unpredictable future shaped by rapidly evolving alien intelligences.

Power Grid Congestion Is a Problem. Here’s a Solution

2025-08-13 20:00:03



On a cloudy day in mid-July, a crew of technicians carries a shiny metal orb, about the size and weight of a kid’s bowling ball, under a 110-kilovolt transmission line running near Hamburg. Their task: to attach the orb to the line where it will track real-time environmental conditions affecting the capacity of the wire.

To hoist the orb onto the overhead line, a crew member attaches it to a quadcopter drone and pilots it up with a remote control. As it nears the wire, one side of the orb slides open, like a real-life Pac-Man about to chomp a power pellet, and then clamps down. The process takes about 10 seconds and requires no electrical downtime.

Oslo-based Heimdall Power, the manufacturer of the orb, has installed over 200 of them on the transmission lines of SH-Netz, the grid operator in the northernmost part of Germany. Together, the devices form a system that calculates how much current these high-voltage lines can safely carry based on real-time weather conditions. The hotter it is, the lower their capacity.

Historically, grid operators have estimated the capacity of lines based on average seasonal temperatures—a fixed value called static line rating. For safety, the estimates must be highly conservative, assuming near-worst-case warm weather for the season. So on all but the hottest days, transmission lines could carry significantly more electricity, if grid operators only knew the actual temperature of the wires.

Tools like Heimdall’s orb, dubbed the Neuron, can fill in that blank. By tracking information such as the sag of the line and ambient temperature, the system can more accurately determine the temperature—and therefore the real-time capacity—of the line. This allows grid operators to take advantage of the unused headroom by maximizing the current in that line. The system uses weather forecasts to predict the temperature and capacity of the lines for the next day, which is especially useful for day-ahead planning.
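The idea can be sketched as a toy steady-state heat balance: the allowable current is the one whose resistive heating exactly matches the cooling available under current weather. Real ratings follow standards such as IEEE 738; every coefficient below is invented for illustration, not taken from Heimdall or any standard.

```python
import math

def ampacity(t_max, t_ambient, wind_cooling, resistance_per_m, base_cooling):
    """Toy thermal rating: I^2 * R heating = cooling * (T_max - T_ambient).
    Coefficients are illustrative only, not IEEE 738 values."""
    cooling = base_cooling + wind_cooling             # W per meter per deg C
    heat_budget = cooling * (t_max - t_ambient)       # W per meter
    return math.sqrt(heat_budget / resistance_per_m)  # amps

# Static rating: assume a hot, still day for the whole season.
static = ampacity(t_max=80, t_ambient=35, wind_cooling=0.0,
                  resistance_per_m=7e-5, base_cooling=1.0)

# Dynamic rating: it is actually 15 deg C with a cooling breeze.
dynamic = ampacity(t_max=80, t_ambient=15, wind_cooling=0.8,
                   resistance_per_m=7e-5, base_cooling=1.0)

print(f"headroom: {100 * (dynamic / static - 1):.0f}%")
```

Because current enters the heat balance squared, even modest extra cooling buys a disproportionate amount of extra ampacity, which is why mild, breezy weather unlocks so much headroom.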

Dynamic Line Rating Offers Grid-Congestion Solution

The strategy, called dynamic line rating, or DLR, is being rapidly adopted in North America and Europe as an antidote—at least in the short term—to grid congestion woes. New transmission lines are needed to accommodate the explosion of AI data centers, electrification, and renewable energy generation, but building them is a notoriously lengthy task often requiring a decade or more. In the interim, grid operators must do more with existing infrastructure.

Dynamic line rating and other grid-enhancing technologies, or GETs, offer a quick and inexpensive fix—a bridge until new transmission can be built. Heimdall is one of many companies, including Linevision in Boston and Gridraven in Tallinn, Estonia, providing the technology.

“Transmission operators aren’t maximizing the potential of our power lines, leading to unnecessarily high energy costs for consumers,” said Caitlin Marquis, managing director at Advanced Energy United, an industry advocacy group, in a statement. “Dynamic line ratings are one of the most cost-effective tools we have for getting more out of our existing power-grid infrastructure.”

Heimdall Power’s orb tracks line sag, ambient temperature, sunlight intensity, and other metrics on a transmission line in southwestern Norway. [Photo: Heimdall Power]

Heimdall’s Grid-Enhancing Technologies

In Heimdall’s approach, one of the key metrics its sensors track is line sag; as metal wires get hotter, they expand and make the line droop. Measuring sag helps determine how much capacity drops. The sensors also track other information such as ambient temperature and sunlight intensity. Heimdall combines the information with local weather-forecast data—particularly wind speed, which has a cooling effect on power lines and can thus increase their capacity.
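The sag-to-temperature link rests on thermal expansion: a hotter wire is slightly longer, and for a shallow span the extra length shows up as sag, roughly d = sqrt(3 * S * dL / 8) under the standard parabolic approximation (S is the span, dL the slack). A sketch with illustrative numbers; the expansion coefficient and span are typical values, not data from any real line.

```python
import math

ALPHA = 1.9e-5   # thermal expansion coefficient, 1/deg C (typical ACSR)
SPAN = 300.0     # tower-to-tower span in meters (illustrative)

def sag(slack_m):
    """Parabolic approximation: sag of a wire slack_m longer than the span."""
    return math.sqrt(3 * SPAN * slack_m / 8)

def sag_at(temp_c, ref_temp_c=20.0, ref_slack_m=1.0):
    """Sag after thermal expansion, from a reference slack at ref_temp_c."""
    slack = ref_slack_m + ALPHA * SPAN * (temp_c - ref_temp_c)
    return sag(slack)

for t in (20, 50, 80):
    print(f"{t} deg C: sag {sag_at(t):.2f} m")
```

A DLR system runs this relationship in reverse: measured sag gives the slack, and thus the conductor temperature, and thus the remaining thermal headroom.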

Heimdall’s machine learning uses the data to help grid operators plan how they’ll route electricity for the next day. In urgent situations, they can use real-time data from its sensors to adapt on the fly.

After working mostly with utilities in Europe, the company last year opened a headquarters in Charlotte, N.C., to better access the U.S. market. Its first major U.S. project, in Minnesota, is helping the utility Great River Energy increase its capacity by 25 percent nearly 70 percent of the time, according to the utility. Heimdall has deals with six additional U.S. utilities, bringing its technology to 13 states.

“We’ve spent the last 80-plus years building the North American power grid, and we’re basically still running it the way we did in the beginning,” says Heimdall U.S. President Brita Formato. “DLR lets us bring the existing grid into the digital age—essentially overnight, and with relatively low cost and effort.”

Linevision’s Lidar Sensors Enhance Power-Line Monitoring

Linevision, another dynamic line rating provider, uses lidar sensors to monitor line sag, and combines the data with weather forecasting and computer analysis of how wind is affected by objects near power lines. Instead of putting its sensors directly on the power lines, the company mounts them on the towers. This makes installation and operation easier, according to Linevision.

The company originally used electromagnetic sensors to indirectly measure loading on each line but pivoted to lidar as it became cheaper and more widely deployed in self-driving cars. “When we made that switch, we were riding the wave of autonomous vehicles,” says Jon Marmillo, Linevision’s cofounder and chief business officer.

Linevision’s LUX sensor uses lidar to monitor line sag and is installed on transmission-line towers. [Photo: Linevision]

For the wind prediction, Linevision starts with a detailed, publicly available spatial map that includes buildings and trees, then uses machine learning to interpret how obstructions change wind near power lines. Marmillo says it takes about 90 days of learning for the system to create accurate predictions of line capacities after it’s installed. Utilities then integrate that into their grid-management software.

After completing an earlier project with Linevision, National Grid, the grid operator for England and Wales, in June announced a new, bigger project with the company, on 263 kilometers of 400-kV lines. The first project increased capacity by 31 percent on average, freeing lines to carry an additional gigawatt of power and saving customers £14 million annually (US $19 million). The new project is expected to save customers £20 million annually ($27 million).

Gridraven’s Wind Forecasting Boosts Dynamic Line Rating

In Estonia, DLR provider Gridraven takes a different tack: It doesn’t use hardware at all. Instead, it relies on machine learning to make accurate, hyperlocal wind predictions. The predictions are based on weather forecasts and a detailed terrain map made from satellite and lidar scans.

Georg Rute, the CEO and cofounder, was working for Estonia’s national grid operator in 2018 when he noticed that forecasts of wind around power lines were “really bad,” he says. Rute came back to study the question in 2023 with Gridraven’s other two cofounders, finding that two-thirds of the error in wind prediction was caused by the landscape—mainly buildings and trees.

Gridraven’s CEO, Georg Rute, founded the company after realizing that hyperlocal wind forecasting wasn’t accurate enough for dynamic line rating. [Photo: Gridraven]

Gridraven began its first large rollout last month, covering 700 km of Finland’s 400-kV lines, and plans to expand to the country’s entire 5,500-km network of high-voltage transmission lines. “With DLR, it is particularly possible to support the integration of wind power into the grid,” said Arto Pahkin, a manager at Fingrid, Finland’s transmission system operator. “This makes DLR a strategically important part of the integration of renewable energy.” Gridraven’s system can increase capacity of power lines by 30 percent on average, according to the company.

DLR companies are quick to point out that the technology is not a cure for all that ails our grids; over the long term, we will simply need more transmission. Congestion is already raising electricity prices and increasing outages. Since 2021, grid congestion has cost American consumers $12–$21 billion per year, depending on electricity prices and weather.

Other grid-enhancing technologies can help in the meantime. For example, reconductoring existing lines with advanced materials can double their capacity. But that approach also takes the lines out of use while the new materials are installed, and it’s more expensive than DLR.

“The potential of DLR is to unlock up to one-third more capacity in the existing grid globally,” said Rute. “This would boost economic growth and increase affordability right away while more grid is being built.”