
IEEE Society Helps Researchers Meet Their Next Corporate Backer

2026-05-15 02:00:02



The Research Collaboration Pitch Session initiative of the IEEE Communications Society (ComSoc) is proving to be a catalyst for meaningful engagement between academic researchers and industry innovators. Launched last year, the program connects promising researchers with industry leaders who can offer them funding, mentorship, and connections to bring interesting ideas closer to real-world deployment.

Rather than relying on chance encounters at conferences, the pitch sessions create a focused environment. Five academic presenters share their work with five industry representatives, known as “innovation scouts”: senior leaders primarily chosen from ComSoc’s Corporate Program partner companies such as Ericsson, Intel, Keysight, and Nokia. The curated format ensures that each idea receives dedicated attention from professionals who are seeking new concepts aligned with their organization’s priorities.

The initiative was launched in November at the IEEE Middle East Conference on Communications and Networking (MECOM) in Cairo and appeared in December at the IEEE Global Communications Conference (GLOBECOM) in Taipei, Taiwan.

AI-driven communication network

One of the most compelling outcomes came from the inaugural session in Cairo. Angela Waithaka, a student member and biomedical engineering student at Kenyatta University, in Nairobi, Kenya, presented her paper “AI-Driven Predictive Communication Networks for Enhanced Performance in Resource-Constrained Environments.” You can view her presentation along with others on IEEE.tv.

Waithaka’s research tackles a critical challenge: Next-generation communication systems increasingly rely on artificial intelligence and machine learning, yet most existing architectures demand abundant computational and energy resources, which are not always available in developing regions.

Waithaka proposed lightweight, adaptive AI/machine learning models capable of delivering predictive, reliable communication performance even under tight resource constraints.
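To make the idea of a lightweight predictor concrete, here is a minimal sketch in Python. It is purely illustrative and not Waithaka’s actual model: an exponentially weighted moving average (EWMA) forecaster of link throughput that needs O(1) memory and no training infrastructure, which is the kind of footprint resource-constrained deployments demand.

```python
# Illustrative sketch only -- not the model from the paper. It shows the
# general shape of a lightweight predictor: an exponentially weighted
# moving average (EWMA) that forecasts link throughput with constant
# memory, suitable for inexpensive, power-constrained hardware.

class EwmaLinkPredictor:
    def __init__(self, alpha=0.3):
        self.alpha = alpha      # smoothing factor: higher = more reactive
        self.estimate = None    # current throughput estimate (e.g., kbit/s)

    def update(self, observed):
        """Fold one throughput observation into the running estimate."""
        if self.estimate is None:
            self.estimate = observed
        else:
            self.estimate = (self.alpha * observed
                             + (1 - self.alpha) * self.estimate)
        return self.estimate

    def predict(self):
        """Predicted throughput for the next interval."""
        return self.estimate

predictor = EwmaLinkPredictor(alpha=0.3)
for sample in [120, 110, 40, 45, 50]:   # measured kbit/s over five intervals
    predictor.update(sample)
print(round(predictor.predict(), 1))
```

A scheduler could use such a forecast to defer bulk transfers when predicted throughput is low; the sample values and units here are invented for illustration.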

Her vision resonated with Ruiqi “Richie” Liu, a master researcher at ZTE in China. ZTE is a global leader in integrated information and communication technology solutions. Liu says he recognized the relevance of Waithaka’s proposal to his company’s work with the International Telecommunication Union. He invited her to establish an ITU account so she could participate in the organization’s meetings discussing global telecommunications standardization projects—which would elevate her work to an international stage.

Simplifying data center protocols

The momentum continued at GLOBECOM. Among the presenters was Nirmala Shenoy, a professor at the Rochester Institute of Technology, in New York. Shenoy, an IEEE member, spoke on the topic of simplifying data center network protocols. She highlighted the growing complexity of these critical networks, which underpin cloud services, enterprise IT, and emerging AI workloads.

Shenoy’s focus on reducing protocol complexity while maintaining scalability, resilience, and low latency caught the attention of an innovation scout from Nokia, who heads the company’s eXtended Reality Lab in Madrid. He connected Shenoy with the key person at Nokia to discuss her research, which led her to record a video for the company detailing her approach and its potential applications.

A model for accelerating innovation

The early success stories demonstrate the power of intentional, structured engagement. By bringing researchers and industry leaders together in a format designed for discovery, ComSoc is helping accelerate innovation and expand opportunities for collaboration. The pitch sessions are not merely conference events; they are becoming a bridge between academic creativity and industry implementation.

This year, sessions will be held during the IEEE International Conference on Communications, in Glasgow, from 24 to 28 May. More are scheduled during the IEEE International Mediterranean Conference on Communications and Networking, in Sardinia, from 6 to 9 July, and at GLOBECOM, in Macau, from 7 to 11 December.

As the program continues to grow, it could become a signature ComSoc initiative, one that strengthens the research ecosystem, supports emerging talent, and ensures that promising ideas find pathways to real-world impact.

Accelerating Chipmaking Innovation for the Energy-Efficient AI Era

2026-05-14 18:00:01



This sponsored article is brought to you by Applied Materials.

At pivotal moments in history, progress has required more than individual brilliance. The most consequential breakthroughs — such as those achieved under the Human Genome Project — required a new operating paradigm: Concentrate the world’s best talent around a single mission, establish a common platform, share critical infrastructure, and collapse feedback loops. When stakes are high and timelines are compressed, sequential and siloed innovation simply cannot keep pace.

Today’s AI era is creating an engineering race with similar demands. Every company is pushing to deliver higher-performance AI systems, faster. But performance is no longer defined by compute alone. AI workloads are increasingly dominated by the movement of data: In many cases, moving bits consumes as much — or more — energy than compute itself. As a result, reducing energy per bit can extend system‑level performance alongside gains in peak compute.
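The data-movement point above can be made tangible with back-of-envelope arithmetic. The figures below are illustrative, order-of-magnitude estimates (loosely in the range of commonly cited 45 nm-era numbers, not taken from this article); exact values vary widely by process node and design, but the imbalance between moving bits and computing on them is the point.

```python
# Back-of-envelope sketch with assumed, order-of-magnitude energy figures.
# Real values depend heavily on the process node, memory type, and design.

PJ_PER_BIT_DRAM  = 20.0   # off-chip DRAM access, picojoules per bit (assumed)
PJ_FP32_MULT_ADD = 4.6    # one 32-bit multiply-add, picojoules (assumed)

def fma_energy_ratio(operand_bits=2 * 32):
    """Energy to fetch the operands of one multiply-add from DRAM,
    divided by the energy of the multiply-add itself."""
    fetch_pj = operand_bits * PJ_PER_BIT_DRAM
    return fetch_pj / PJ_FP32_MULT_ADD

ratio = fma_energy_ratio()
print(f"Moving the data costs ~{ratio:.0f}x the compute itself")
```

Under these assumptions, fetching two 32-bit operands from off-chip memory costs hundreds of times the energy of the arithmetic they feed, which is why reducing energy per bit moved extends system-level performance.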

The path to energy‑efficient AI therefore runs through system‑level engineering, spanning three tightly interconnected domains:

  • Logic, where performance per watt depends on efficient transistor switching, low‑loss power, and signal delivery through dense wiring stacks.
  • Memory, where surging bandwidth and capacity demands expose the memory wall, with processor capability advancing faster than memory access.
  • Advanced packaging, where 3D integration, chiplet architectures, and high‑density interconnects bring compute and memory closer together — enabling system designs monolithic scaling can no longer sustain.

These domains can no longer be optimized independently. Gains in logic efficiency stall without sufficient memory bandwidth. Advances in memory bandwidth fall short if packaging cannot deliver proximity within thermal and mechanical constraints. Packaging, in turn, is constrained by the precision of both front‑end device fabrication and back‑end integration processes.

In the angstrom era, the hardest problems arise at the boundaries — between compute and memory in the package, front‑end and back‑end integration, and the tightly coupled process steps needed for precise 3D fabrication. And it is precisely this boundary‑driven complexity where the traditional innovation model breaks down.

The Traditional R&D Workflow Is Too Slow for Angstrom‑Era AI

For decades, the semiconductor industry’s R&D model has resembled a relay race. Capabilities are developed in one part of the ecosystem, handed off downstream through integration and manufacturing, evaluated by chip and system designers, and only then fed back for the next iteration. That model worked when progress was dominated by relatively modular steps that could be scaled independently and simply dropped into the manufacturing flow.

But the AI timeline has upended these rules. At angstrom‑scale dimensions, the physics enforces inescapable coupling across the entire stack: materials choices shape integration schemes; integration defines design rules; design rules dictate power delivery; wiring sets thermal budgets; and thermals ultimately constrain packaging scaling. System architects simply cannot wait 10–15 years for each major semiconductor technology inflection to mature.


A long‑term perspective is essential to align materials innovation with emerging device architectures — and to develop the tools and processes required to integrate both with manufacturable precision. At Applied Materials, together with our customers, we are charting a course across the next 3–4 generations, extending as far as 10 years down the roadmap.

The angstrom era demands that we break down silos and bring together the industry’s best minds — from leading companies to leading academic institutions. If the problem is coupled, the solution must be coupled. If the timeline is compressed, the learning loop must be compressed. It’s not enough to just innovate — we must innovate how we innovate.

EPIC: A Center and Platform for High‑Velocity Co‑Innovation

This is the challenge that Applied Materials EPIC Center is designed to solve.

Representing a roughly US $5 billion investment, EPIC is the largest commitment to advanced semiconductor equipment R&D in U.S. history. When it opens in 2026, it will deliver state‑of‑the‑art cleanroom capabilities built from the ground up to shorten the path from early‑stage research to full‑scale manufacturing. But the facilities are only one component of the model. EPIC is also a platform, an operating system for high-velocity co‑innovation that revolutionizes how ideas move from the lab to the fab.

[Image: Diagram comparing traditional and EPIC chip innovation timelines, showing a 2x faster path.] EPIC is a platform, an operating system for high-velocity co‑innovation that revolutionizes how ideas move from the lab to the fab. Credit: Applied Materials

The EPIC model compresses the traditional workflow. Customer engineers work side‑by‑side with Applied technologists from day one — moving beyond isolated process optimization and downstream handoffs. Within a shared, secure environment, EPIC tightly integrates atomistic modeling, test vehicles, process development, validation, and metrology feedback. Constraints that once surfaced late in development are identified and addressed early.

The result is a potentially 2x faster path that benefits the entire ecosystem under one roof:

  • Chipmakers gain earlier access to Applied’s R&D portfolio, faster learning cycles, and accelerated transfer of next‑generation technologies into high‑volume manufacturing.
  • Ecosystem partners gain earlier access to advanced manufacturing technology and collaboration opportunities that expand what is possible through materials innovation.
  • Academic institutions gain opportunities to strengthen the lab‑to‑fab pipeline and help develop future semiconductor talent.

Building on decades of co‑development, we are reinventing the innovation pipeline with our partners across logic, memory, and advanced packaging to deliver the next leap in energy‑efficient AI.

Accelerating Advanced Logic

Logic remains the engine of AI compute. In the angstrom era, however, system‑level gains are increasingly constrained by power and energy. Extending AI performance now depends on architectures that deliver more performance per watt — accelerating the move to 3D devices such as gate‑all‑around (GAA) transistors, which boost density within a compact footprint while preserving power efficiency.

[Image: Evolution from FinFET to GAA, backside power, isolated GAA, and CFET transistors.] Architectures that deliver more performance per watt are accelerating the move to 3D devices such as gate‑all‑around (GAA) transistors, and further out, complementary FETs (CFETs), which push density scaling even more. Credit: Applied Materials

These architectural shifts are unfolding at unprecedented scale, with the logic roadmap already extending beyond first‑generation GAA toward more advanced designs. One key example is GAA with backside power delivery, which relocates thick power lines to the backside of the wafer, reducing resistive losses and freeing front‑side routing for tighter logic cell integration. Another example brings adjacent GAA PMOS and NMOS transistors closer together while inserting a dielectric isolation wall between them to minimize electrical interference. Further out, complementary FETs (CFETs) push density scaling even more by stacking PMOS and NMOS devices directly atop one another.

While these architectures deliver compelling gains in performance per watt and logic density without relying solely on tighter lithography, they significantly raise integration complexity. Manufacturing a single GAA device today can involve more than 2,000 tightly interdependent process steps. At the same time, wiring stacks continue to grow taller and denser to connect these advanced logic devices. Modern leading‑edge GPUs now in development pack more than 300 billion transistors into an area little larger than a postage stamp, interconnected by over 2,000 miles of wiring.

[Image: Diagram of an advanced AI chip showing layered wiring and a 3D stack of copper interconnects.] Modern leading‑edge GPUs now in development pack more than 300 billion transistors into an area little larger than a postage stamp, interconnected by over 2,000 miles of wiring. Credit: Applied Materials

At this level of complexity, the process steps used to create these precise 3D devices and wiring stacks cannot be optimized independently. Design and process must evolve in lockstep, and materials innovation and fabrication methods must advance alongside device architecture. EPIC’s co‑innovation model is designed to accelerate exactly this convergence — enabling logic compute to continue advancing the frontiers of AI at the pace the roadmap demands.

Powering the Memory Roadmap

At the same time, the AI computing era is fundamentally reshaping how data is generated, moved, and processed — making memory technologies, especially DRAM, central to delivering the energy‑efficient performance AI systems require. As models grow larger and more data‑hungry, the DRAM roadmap is shifting toward architectures that deliver higher density, greater bandwidth, and faster access per watt.

[Image: Diagram of DRAM cell scaling from 8F² to a stacked 3D DRAM architecture.] At the DRAM cell level, AI performance requirements are driving a transition from 6F² buried‑channel array transistors (BCAT) to more compact 4F², and beyond that, architectures that move past what 2D scaling alone can deliver. Credit: Applied Materials

At the DRAM cell level, this shift is driving a transition from 6F² buried‑channel array transistors (BCAT) to more compact 4F² architectures, which orient the transistor vertically to boost density and reduce chip area. Looking beyond 4F², sustaining gains in performance per watt will require moving past what 2D scaling alone can deliver. The industry is therefore turning to 3D DRAM, stacking memory cells vertically to add capacity within a constrained footprint. As these structures grow taller and aspect ratios intensify, high-mobility materials engineering in three dimensions becomes increasingly critical to performance and reliability.
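The 6F² to 4F² shift is simple arithmetic: F is the minimum feature size, and the cell footprint is a multiple of F², so shrinking the multiplier raises bit density even at a fixed F. A quick sketch (the feature size chosen here is illustrative, not a specific product node):

```python
# Quick arithmetic behind the 6F^2 -> 4F^2 transition. F is the minimum
# feature size; DRAM cell area is expressed as a multiple of F^2.

def cell_area_nm2(f_nm, multiplier):
    """DRAM cell footprint in nm^2 for a given feature size and nF^2 layout."""
    return multiplier * f_nm ** 2

f = 14  # illustrative feature size in nm (assumed, not a real product node)
area_6f2 = cell_area_nm2(f, 6)   # buried-channel array transistor (BCAT) cell
area_4f2 = cell_area_nm2(f, 4)   # vertical-transistor cell

density_gain = area_6f2 / area_4f2
print(area_6f2, area_4f2, density_gain)  # 1176 784 1.5
```

The 1.5x density gain at constant F is why the 4F² transition matters: it buys capacity without requiring a lithography shrink.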

Beyond the memory cell array, another powerful lever for DRAM scaling is shrinking the peripheral circuitry, which includes logic transistors and interconnect wiring. One emerging approach places select periphery functions beneath the DRAM array by bonding two wafers — one optimized for the DRAM cells and the other for CMOS logic — using multiple wiring layers.

[Image: Diagram of transistor and interconnect technology progressing to FinFET and advanced Cu links.] Beyond the memory cell array, another powerful lever for DRAM scaling is shrinking the peripheral circuitry, which includes logic transistors and interconnect wiring. Credit: Applied Materials

In parallel, DRAM performance is being extended by leveraging logic‑proven enhancers in the memory periphery. These include mobility boosters such as embedded silicon germanium and stress films, along with wiring upgrades like improved low‑k dielectrics and advanced copper interconnects. Memory manufacturers are also transitioning periphery transistors from planar devices to FinFET architectures, following the logic roadmap to further improve I/O speed. These valuable inflections are central to EPIC’s mission — where they can be co-developed and rapidly validated for next‑generation memory systems.

Driving System Scaling With Advanced Packaging

As data movement becomes the dominant energy cost in AI systems, advanced packaging has emerged as a critical lever for improving system‑level efficiency—shortening interconnect distances, increasing bandwidth density, and reducing the power required to move data between logic and memory.

[Image: Diagram of an AI accelerator with surrounding HBM chips and enlarged stacked HBM memory.] The rise of 3D packages such as high‑bandwidth memory (HBM) underscores why advanced packaging is becoming central to the AI era. Credit: Applied Materials

High‑bandwidth memory (HBM) marks a major inflection along this path. By stacking DRAM dies — scaling to 16 layers and beyond — and placing memory much closer to the processor, HBM enables rapid access to ever‑larger working datasets. This delivers step‑function gains in both bandwidth and energy efficiency.
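The capacity side of die stacking is straightforward multiplication. The per-die capacity below is an assumed, plausible figure for illustration (actual products vary by generation and vendor):

```python
# Simple capacity arithmetic for DRAM die stacking. The per-die capacity
# is an assumed illustrative figure, not a spec for any particular part.

GBIT_PER_DIE = 24          # assumed DRAM die capacity in gigabits
BITS_PER_BYTE = 8

def stack_capacity_gb(layers, gbit_per_die=GBIT_PER_DIE):
    """Capacity of an HBM-style stack in gigabytes."""
    return layers * gbit_per_die / BITS_PER_BYTE

for layers in (8, 12, 16):
    print(layers, stack_capacity_gb(layers))  # 8 24.0 / 12 36.0 / 16 48.0
```

Scaling from 8 to 16 layers doubles capacity in roughly the same footprint, which is the step-function gain the article describes; the bandwidth gains come from the wide, short interconnects that stacking makes possible.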

More broadly, the rise of 3D packages such as HBM underscores why advanced packaging is becoming central to the AI era. Packaging now addresses system‑level constraints that logic and memory device scaling alone can no longer overcome. It also enables a move away from monolithic systems‑on‑chip toward chiplet‑based architectures, as AI workloads increasingly demand flexible designs that combine logic, memory, and specialized accelerators optimized for specific tasks.

A vital technology powering this roadmap is hybrid bonding. With interconnect pitches approaching those of on‑chip wiring, conventional bumps and microbumps run into fundamental limits in density, power, and signal integrity. Hybrid bonding removes these barriers by allowing dramatically higher interconnect and I/O density, supporting a broad range of chiplet architectures — from memory stacking to tighter compute‑memory integration.

[Image: Colorful 3D cross-section of a stacked computer chip package with connectors.] EPIC tackles high‑value advanced‑packaging challenges through early, parallel co‑innovation across materials, integration, and manufacturing. Credit: Applied Materials

As bonded structures like HBM stacks grow larger and more complex, warpage control, die placement, stack alignment, and thermal management become first‑order challenges. EPIC tackles these and other high‑value advanced‑packaging challenges through early, parallel co‑innovation across materials, integration, and manufacturing.

Bringing It All Together

Across logic, memory, and advanced packaging, our industry faces an ambitious roadmap that promises significant gains in energy efficiency for AI systems. But realizing that potential demands breakthrough materials innovation at a time when feature sizes are shrinking, interfaces are multiplying, and process interdependencies are escalating. These challenges cannot be solved on 10–15‑year timelines under the traditional relay‑race model. We must break down silos, align earlier across the ecosystem, and parallelize learning to keep pace with AI’s demands.

In the AI era, progress will be defined by the speed at which lightbulb moments turn into manufacturing and commercialization reality. The only viable path forward is a new innovation model — and EPIC is how we are driving it.

Why RF Coexistence Testing Is Critical for Shared Spectrum

2026-05-14 18:00:01



A comprehensive review of how spectrum congestion, dynamic sharing, and cognitive radio systems are reshaping RF coexistence testing for military and commercial applications.

What Attendees Will Learn

  1. Why spectrum congestion threatens wireless reliability — Explore how over 30 billion connected devices, more than 4,000 allocation changes worldwide, and the expansion from 11 to over 80 cellular bands are intensifying contention for finite RF spectrum resources.
  2. How real-world coexistence failures affect safety-critical systems — Understand the interference risks between 5G C-band transmitters and aircraft radar altimeters, and between terrestrial L-band networks and GPS receivers that were not designed for adjacent high-power signals.
  3. Why tiered spectrum sharing frameworks are essential — Examine how CBRS uses a cloud-based Spectrum Access System (SAS) and environmental sensing to dynamically protect incumbent Navy radar while enabling commercial cellular services across three priority tiers.
  4. What coexistence test architectures look like in practice — Learn how controlled environment testing with anechoic chambers, over-the-air signal generation, and standards such as ANSI C63.27 enable repeatable evaluation of RF device performance under real-world interference conditions.
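The tiered sharing described in item 3 can be sketched as a toy arbitration rule. This is not the actual CBRS Spectrum Access System protocol (which is defined in FCC rules and WInnForum specifications); it only illustrates the priority concept: incumbents preempt Priority Access License (PAL) holders, who in turn preempt General Authorized Access (GAA) users.

```python
# Toy sketch of the tiered-priority idea behind a Spectrum Access System
# (SAS) -- not the real CBRS/WInnForum protocol, just the arbitration
# concept: incumbent > PAL > GAA for access to a contested channel.

TIER_PRIORITY = {"incumbent": 0, "PAL": 1, "GAA": 2}  # lower = higher priority

def arbitrate(requests):
    """Given (user, tier) requests for one channel, return who transmits
    and who must vacate, in descending priority order."""
    ranked = sorted(requests, key=lambda r: TIER_PRIORITY[r[1]])
    winner, losers = ranked[0], ranked[1:]
    return winner[0], [user for user, _ in losers]

winner, vacated = arbitrate([("carrier-A", "GAA"),
                             ("carrier-B", "PAL"),
                             ("navy-radar", "incumbent")])
print(winner, vacated)  # navy-radar ['carrier-B', 'carrier-A']
```

In the real system, environmental sensing detects incumbent radar activity and the cloud-based SAS then reassigns or suspends lower-tier grants dynamically; the user names here are invented for illustration.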

IEEE Program Aims to Connect the Billions Who Are Still Offline

2026-05-13 02:00:02



Given how integral the Internet has become to everyday tasks such as shopping, paying bills, and holding virtual meetings, it’s interesting that nearly 30 percent of the global population still has no access to it. More than 2 billion people are still offline, according to a report released in November by the International Telecommunication Union.

More and more people are being connected, though, thanks to IEEE Future Networks’ Connecting the Unconnected (CTU) program and similar efforts. Since 2021, the technical community has been working to accelerate the development, standardization, and deployment of 5G, 6G, and future generations.

Every year, CTU holds a worldwide competition to seek out innovators who are in the early stages of developing technologies or applications to provide greater access. It also holds an annual summit that brings together experts, community leaders, and other interested parties to discuss strategies to expand access and foster digital inclusion.

CTU expanded in several ways last year. It launched regional summits to focus on local connectivity issues, organized community-focused events, and established an expanded mentorship program to further support contest winners and the next generation of technological innovators impacting humanity. The program also partners with the IEEE Standards Association (IEEE SA) to develop guidelines for some of the submitted innovations.

“IEEE Future Networks has created a community to bring all these initiatives working on digital connectivity together in a single platform and leverage the IEEE brand to help raise the visibility of their work,” says IEEE Life Fellow Sudhir Dixit, a CTU cochair and a cofounder of the Basic Internet Foundation, which also works to expand Internet access.

A contest for new connectivity methods

The CTU challenge, launched in 2021, typically receives 200 to 300 submissions each year, Dixit says. Last year 245 projects from 52 countries were submitted. Participants include academics, nonprofit organizations, startups, and students.

Projects can be entered into one of three categories. The Technology Applications category is for new connectivity methods or innovations that broaden broadband access. Those who improve the affordability of Internet services can enter the Business Model category. The Community Enablement category is for strategies that promote public broadband adoption.

After selecting a category, entrants choose between two tracks based on their project’s maturity. The proof-of-concept route is for early-stage but functional technology that has already produced results. The conceptual path is for projects in the theoretical phase that have not undergone full testing.


Last year’s challenge submission period was from March to June, with judging phases from June through November. The 20 winners presented their solutions in December at a virtual Winners Summit. Fourteen projects received prize money, ranging from US $500 to $2,500. Six finalists earned an honorable mention at the summit.

The award amounts have varied over the years, depending on sponsorship.

Among the winners were a solar-powered community broadband network in Tanzania, a low-cost method for accessing the Internet that uses FM radio and a short message service (SMS), and a strategy for utilizing India’s rural broadband infrastructure to deliver medical services to people living in isolated, tribal, and other underserved regions.

“Our job is to help further develop the technology, look for gaps, and see if it is good enough to be applied to rural villages, like those in Africa and India,” says IEEE Fellow Ashutosh Dutta, who is a CTU cochair and a professor at Johns Hopkins University, in Baltimore. “The idea behind the contest is to make sure the technology actually gets implemented at the grassroots level and is being used by the local community.”

This year’s challenge submission period runs until 19 June, with judging phases from July through October.

[Video: The finalists of the 2025 IEEE Connecting the Unconnected challenge describe their projects. Credit: IEEE Future Networks]

Local connectivity discussions

The CTU program hosted three regional summits last year. The North American event was held in September in Washington, D.C. In November, the Global/Asia-Pacific meeting took place in Bangalore, India; it was co-located with the IEEE Future Networks World Forum. The Europe, Middle East, and Africa summit also was held in November, in Abuja, Nigeria.

Topics discussed at the summits included infrastructure solutions for universal connectivity; sustainable business models; scaling homegrown technologies; and policy, regulation, and financing issues.

As of press time, the dates for this year’s regional summits had not been announced.

Community-focused events

To help bridge the gap between ideas and their deployment, the Connect a Community event was established to demonstrate how some new technologies might benefit people. The inaugural event was held in November in Bengaluru, India. During the daylong program, 10 of the challenge winners demonstrated their connectivity solutions to villagers from seven rural communities.

Dutta credits IEEE Life Fellow Rakesh Kumar with devising the event. Kumar chairs IEEE Future Directions, which was where Future Networks got its start in 2017 as the 5G Initiative.

“Kumar wants to ensure the winning technologies are going to be useful for the community,” Dutta says.

Providing entrepreneurs with business skills

Dixit says the Future Networks team believed that simply conducting a competition and distributing prizes wasn’t enough.

“We wanted to follow up with the winners, monitor their progress, and help them turn their ideas into a business,” he says.

To accomplish that, IEEE launched the Empowerment Through Mentorship program, in which budding entrepreneurs are paired with industry leaders and experienced mentors who provide them with 1,000 days of guidance, coaching them on scaling up their business.

“We launched the mentorship program to further the cause,” Dixit says. “These people may be good at developing technology, but they don’t know the marketing challenges, how to raise money, and other factors.”

The Lemelson Foundation, an organization in Portland, Ore., that partners with IEEE, collaborated on the mentorship program. The foundation’s philanthropic strategy is to cultivate a robust ecosystem for entrepreneurs in East Africa, India, and the United States. It does so by providing the entrepreneurs with tools including financing options and access to communities that share their passion.

The foundation chose to partner with IEEE “because of its powerful international network and focus on electrical engineering, which is a critical element of communications and energy infrastructure globally,” says Kory Murphy, Lemelson’s program officer for U.S. invention and entrepreneurship.

“Other factors include IEEE’s focus on nontraditional or disadvantaged areas in India,” Murphy says, “and its recognition that mentorship is critical for the successful deployment of new technologies.”

IEEE began an early pilot project in 2023 with support of a grant from the Lemelson Foundation, to determine if a sustained entrepreneurship mentorship program was valuable and necessary, he says. It then conducted a survey through 2024 to collect information to better understand the needs of stakeholders, mentors, and entrepreneurs in hard-to-reach areas in India. While the early pilot program was restricted to that country, its intent was to learn from the experience and share the findings globally, he says.


“The foundation’s involvement was aimed at testing certain activities, partnership strategies, and understanding the budgetary requirements for a prepilot program,” he says. “The primary goal of the foundation is to enable conditions for innovation to occur within regional systems, especially addressing the opportunity for sustained, systematic, and relational mentorship in technology innovation.”

The Empowerment Through Mentorship program is structured into three tiers. One focuses on individuals and their needs, the program/technical level focuses on the invention, and the venture level guides participants from the initial concept through product testing and validation. Within each track, participants engage in activities such as networking, securing financial support, and pitching their innovations, Murphy says.

“The 1,000-day approach reflects the belief that it requires a long period of time to coach and support those who traditionally are excluded,” he says.

CTU mentors can be IEEE members or nonmembers who are successful entrepreneurs and own small or large companies, Dixit says. They also can work in academia.

“They need to be passionate about training and mentoring other people,” Dixit says. “We have created a curriculum that covers topics such as ways to get financing from investors and how to turn ideas into a profitable business. It’s not the technology that will make the product successful; it’s everything else that goes into it.”

Rural broadband architecture standards

To determine whether any of the challenge’s submitted projects have the potential to become a standard, the CTU working group collaborates with the IEEE SA Industry Connections program’s 6G Rural Connectivity and Intelligent Village activity. Projects considered for standards do not have to be winners. Any project that has successfully passed the first phase, completed the second-phase requirements, and requested a review may be considered.

Typically, about half of the submitted projects are reviewed for possible standard implications, Dutta says.

“We selected about 60 submissions that could be potentially standardized,” he says. “Out of those, we work with IEEE SA’s rapid reactive standards activity group to narrow them down to five or 10 that can be potentially standardized.

“The CTU program is not only about developing a technology or implementing it, but also standardizing it so that people around the world can use the standard.”

One such project led to the development of IEEE P1962, “Standard for Providing Broadband Connectivity to Rural Infrastructure by Utilizing Solar Panels as Optical Communication Receivers.” It specifies an architecture for an optical receiver that uses solar panels and associated circuitry to provide energy-efficient, affordable, and high-speed optical wireless communication.

“CTU has created a platform for the world to bring their ideas to one single place where people can talk to each other about them,” Dixit says. “We are a unifying force.

“We bring these many dimensions together to connect the unconnected.”

CTU Challenge Winner: Community Radio Bolo


The Connecting the Unconnected program offers contestants benefits that extend beyond the recognition and rewards. One participant who benefited is Ritu Srivastava, a telecommunications engineer and IEEE member. She placed first in the 2022 technical concept category for her project, Community Radio Bolo (CR Bolo). The verb bolo means speak in Hindi.

Internet services in India’s rural areas are either unavailable or have spotty coverage. People there rely on community radio stations to get news about local events and issues. There are about 300 such stations in India, Srivastava says.

To provide broadband Internet access in the Bhadrak district of Odisha, India, she developed a cost-effective hybrid network that uses an online and offline wireless mesh network installed on the tower of community radio station Radio Bulbul. Several transceiver locations, known as access points, are located at schools and community centers that are within a 5- to 7-kilometer radius, connecting them with Radio Bulbul.

CR Bolo includes a plug-and-play interactive voice response system that is coupled with the hybrid wireless network. The automated telephony technology routes callers using voice commands or a telephone’s keypad to the appropriate department. The system also has a direct-to-consumer platform where manufacturers sell their products through websites or mobile apps.

“CR Bolo is a unique method of leveraging rural traditional technologies and infrastructure combined with modern technology to provide meaningful access to communities,” Srivastava says, “improving livelihood opportunities and creating social and economic viability for CR stations.”

She says she plans to expand the project to other rural communities in India. She will incorporate a large language model and offer a learning management system to deliver training programs and educational courses, she says.

Winning CTU inspired her to become a more active IEEE volunteer, she says. She is working with the IEEE Standards Association to develop guidelines for the architecture of broadband technology used in rural areas.

Because of her entrepreneurial experience, CTU hired her in 2023 to assist with the challenge and the Empowerment Through Mentorship program.

Srivastava is a director at Jadeite Solutions in New Delhi. The consulting company provides nonprofit organizations that are developing socioeconomic programs with project evaluation, impact assessment, financial reviews, and similar services.

She credits CTU with giving her and her community-centered model more exposure: “The CTU challenge has given me a lot of other opportunities in terms of networking, funding resources, publishing my research in IEEE journals, and presenting at national and international conferences.”

Neutralizing the Gigascale Problem: How to Solve the Physical Power Paradox of Extreme AI Training Loads

2026-05-13 01:15:15



This sponsored article is brought to you by Ampace.

As AI workloads grow to gigascale levels, the global data center industry has hit a hidden physical wall. The real bottleneck is no longer just the thermal limit of the chip or the capacity of the cooling system — it is the dynamic resilience of the power chain.

Modern AI computing, driven by massive GPU clusters, generates high-frequency, abrupt, and synchronized pulse loads. As rack densities soar beyond 100 kW, these fluctuations are amplified into a “power paradox”: while the digital logic of AI is moving faster than ever, the physical infrastructure supporting it remains tethered to legacy response capabilities.

The drastic, high-frequency load surges from AI GPU clusters at these gigascale sites can trigger transient voltage events and frequency instability, putting the entire local grid at risk. This exposes an infrastructure gap: the utility grid is not robust enough to support these loads, and traditional backup sources, such as diesel generators and gas turbines, simply cannot follow millisecond-level power spikes. Operators are often forced into costly infrastructure oversizing just to buffer the volatility.

AI infrastructure requires energy systems capable of instantaneous response while safeguarding continuity and reliability.

The industry has explored various mitigations — from rack-level BBUs to 800V DC architectures — yet the mature, high-volume, traditional UPS system remains the most viable and scalable foundation for gigawatt-level facilities. Consequently, the UPS-integrated battery system has emerged as the critical “physical buffer” to neutralize these pulses at the source.

At Data Center World 2026 in Washington, D.C., Ampace led a pivotal technical dialogue with Eaton during the session “Powering Giga-scale AI.” Their exchange unveiled a fundamental paradigm shift: To bridge the AI power gap, energy storage must evolve from a passive insurance policy into an active, high-speed stabilizer. By aligning Ampace’s semi-solid-state battery innovation with Eaton’s proven system intelligence, we are moving beyond simple backup to solve the physical paradox of the AI era.

To move beyond simple backup and solve the physical paradox of the AI era, Ampace is aligning its semi-solid-state battery innovation with Eaton’s proven system intelligence. Ampace

The “Shock Absorber” physics: semi-solid chemistry for AI pulses

Conventional power systems were designed for steady-state loads, not the rapid heartbeat of a massive AI GPU cluster. When thousands of GPUs synchronize their computing cycles, they generate high-frequency, abrupt pulse loads that can lead to voltage sags, frequency oscillations, and potential interruptions of critical AI training.

Ampace’s PU Series semi-solid and low-electrolyte cells address this challenge by acting as high-speed “shock absorbers.” Leveraging ultra-low internal resistance (DCR) and high cycle capability, these batteries neutralize millisecond-level power spikes at the source, stabilizing the local power loop before disturbances propagate upstream to the grid or on-site generators. These high-rate cells enable 100 kW+ racks to maintain peak performance without transmitting instability across the power chain.
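The “shock absorber” role described above can be illustrated with a simple discrete-time sketch: the battery sources and sinks the fast residual between the rack’s pulsing load and a slowly varying upstream draw, so the grid never sees the millisecond spikes. All figures below (load levels, pulse timing, averaging window) are illustrative assumptions, not Ampace specifications.

```python
# Illustrative millisecond-scale AI rack load: 80 kW baseline with
# synchronized 40 kW compute bursts (all figures are assumptions).
load_kw = [80 + (40 if (t // 5) % 2 else 0) for t in range(100)]

# Without a buffer, the grid sees the raw pulse train.
grid_unbuffered = load_kw

# With a battery buffer, the grid supplies a slow moving average and
# the battery covers the fast residual at the rack.
window = 20
grid_buffered = []
for t in range(len(load_kw)):
    lo = max(0, t - window + 1)
    grid_buffered.append(sum(load_kw[lo:t + 1]) / (t + 1 - lo))

# Positive values = battery discharging into the load; negative = recharging.
battery_kw = [l - g for l, g in zip(load_kw, grid_buffered)]

print(f"grid swing without buffer: {max(grid_unbuffered) - min(grid_unbuffered):.1f} kW")
print(f"grid swing with buffer:    {max(grid_buffered) - min(grid_buffered):.1f} kW")
```

In this toy model the unbuffered grid swings by the full 40 kW pulse amplitude, while the buffered draw varies far less; a real system would replace the moving average with the UPS’s control loop and respect the battery’s power and SOC limits.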

This capability aligns closely with Eaton’s matured UPS architectures, such as double-conversion topologies and advanced power electronics upgrades, which have long prioritized rapid load responsiveness and high system stability.

Together, these approaches embody a shared industry philosophy: AI infrastructure requires energy systems capable of instantaneous response while safeguarding continuity and reliability.

Ampace’s semi-solid-state chemistry minimizes liquid electrolyte, greatly reducing the risk of leakage and thermal runaway under continuous AI high-load conditions. Ampace

Algorithmic intelligence: synchronizing energy and control

Hardware alone cannot solve the AI power paradox; the system also requires intelligent coordination between energy storage and power management. A sophisticated battery management system (BMS), such as Ampace’s high-precision design, tracks state-of-charge (SOC) with high-speed sampling, even during the rapid, shallow cycling typical of AI workloads.

Complementary algorithmic approaches in modern UPS platforms — such as ramp-rate control and average power management — effectively suppress sub-synchronous oscillations and optimize load smoothing. In large-scale AI training environments, where thousands of GPUs can trigger millisecond-level power pulses, these intelligent layers ensure that batteries buffer high-frequency fluctuations without compromising the mandatory emergency backup reserves.
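Ramp-rate control, one of the algorithmic layers mentioned above, can be sketched in a few lines: the upstream draw is allowed to change by at most a fixed amount per control step, and the battery buffers whatever the load does beyond that. The function name, step size, and load values are hypothetical; vendor implementations add SOC limits, reserve protection, and much finer time steps.

```python
def ramp_rate_limit(load_kw, max_step_kw):
    """Limit how fast the upstream (grid/generator) draw may change per
    control step; a battery covers the gap between the actual load and
    the ramp-limited draw. Illustrative sketch, not a vendor algorithm."""
    draw = [load_kw[0]]
    for target in load_kw[1:]:
        prev = draw[-1]
        # Clamp the change toward the target to +/- max_step_kw.
        step = max(-max_step_kw, min(max_step_kw, target - prev))
        draw.append(prev + step)
    return draw

load = [100, 100, 180, 180, 100, 100, 180, 100]   # abrupt GPU pulses (kW)
draw = ramp_rate_limit(load, max_step_kw=20)      # grid ramps <= 20 kW/step
battery = [l - d for l, d in zip(load, draw)]     # battery buffers the rest
print(draw)
```

The upstream source now ramps smoothly toward each new load level while the battery absorbs the instantaneous difference, which is the essence of load smoothing without touching the mandatory backup reserve.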

By transforming energy storage from passive “standby insurance” into active, schedulable assets, the system simultaneously safeguards continuous AI training and maintains the long-term health of the data center infrastructure. In practical terms, this means that even during peak compute bursts, the infrastructure remains stable, training cycles continue uninterrupted, and operators avoid costly oversizing or grid stress.

Eaton’s dual-layer algorithms serve as a valuable benchmark in this space, demonstrating how advanced control logic can achieve similar objectives, reinforcing Ampace’s approach and philosophy within the broader data center power ecosystem.

Economic scalability: optimizing AI infrastructure efficiently

One of the largest costs in deploying AI infrastructure is “oversizing”: procuring transformers, generators, and UPS systems to handle brief peak spikes. This traditional approach inflates the Total Cost of Ownership (TCO) and leads to wasted capital on underutilized hardware.

Ampace’s turn-key cabinet design, developed through its independent R&D, is engineered for seamless compatibility with mature, high-volume UPS systems. By leveraging Eaton’s double-conversion UPS topologies alongside intelligent ramp-rate and average-power-management algorithms, AI data centers can scale dynamically without costly infrastructure redesigns. This approach allows the UPS and batteries to act as active load-shapers, smoothing AI-driven pulses while strictly maintaining mandatory emergency backup capacity.

By utilizing energy storage as an active, schedulable asset, operators can right-size their infrastructure, avoid unnecessary grid upgrades, and deploy gigascale AI clusters with unprecedented efficiency.

Safety First: Protecting AI Infrastructure While Enabling Innovation

In high-density AI facilities, safety is non-negotiable. Ampace’s semi-solid-state chemistry minimizes liquid electrolyte, greatly reducing the risk of leakage and thermal runaway under continuous AI high-load conditions.

Ampace’s turn-key cabinet design, developed through its independent R&D, carries UL and CE certifications and is engineered for seamless compatibility with mature, high-volume UPS systems. Ampace

At the same time, Eaton’s UPS design emphasizes system-level energy scheduling that never sacrifices mandatory emergency backup reserves, ensuring thermal safety and uninterrupted operation.

This “safety-first” approach ensures that infrastructure can sustain aggressive performance targets without compromising the physical integrity of the facility. Coupled with more than a decade of proven high-cycle-life operation under shallow pulse conditions, these systems can extend operational lifespan, reduce replacement requirements, and give operators confidence that safety and reliability remain uncompromised as compute density continues to grow.

Remaining the scalable backbone of AI data centers

As AI computing scales over the next two to three years, the industry will face stricter grid requirements and even more demanding pulse load characteristics. This evolution demands a forward-looking design philosophy that harmonizes UPS, battery, and grid compatibility.

Ampace views current low-electrolyte semi-solid technologies as the optimal transitional step toward a fully solid-state future — one that promises ultimate safety and performance.

Ampace remains committed to this long-term technological roadmap. Whether through rack-level BBUs, integrated UPS systems, or containerized storage, the universal core of the AI era remains constant: high-speed response, long shallow-cycle life, and refined energy management.

By engaging in deep technical exchanges with Eaton and leading energy innovators, Ampace ensures that its solutions not only meet today’s AI pulse challenges but also harmonize with broader infrastructure strategies and shared industry best practices.

Ultimately, as traditional diesel generators gradually give way to diversified alternatives, the integrated UPS-plus-energy-storage system will become the fundamental infrastructure standard.

The dialogue has just begun. Ampace will continue to engage in strategic exchanges with global industrial automation leaders and digital energy pioneers, co-authoring the playbook for a safer, more efficient, and more resilient AI-ready world.

Why Mastering EVM Is Essential for Next-Generation Wireless Systems

2026-05-11 18:00:01



A comprehensive guide to error vector magnitude (EVM), the primary metric for quantifying modulation accuracy in Wi-Fi, LTE, and 5G NR systems.

What Attendees Will Learn

  1. What error vector magnitude is and how it is calculated — Understand EVM as the distance between ideal and measured constellation points, learn the difference between peak and RMS normalization, and see how EVM is expressed in both percentage and decibel formats.
  2. How digital modulation works and why it matters — Explore the fundamentals of ASK, FSK, PSK, APSK, and QAM modulation schemes, and understand why higher modulation orders increase throughput, while also demanding greater accuracy in signal transmission and reception.
  3. What causes degraded EVM in real-world systems — Examine the four main categories of EVM contributors: amplitude effects (compression, noise, frequency response), phase effects (phase noise), I/Q imperfections (gain imbalance, quadrature error), and configuration issues.
  4. How to diagnose modulation impairments using constellation diagrams — Learn how visual inspection of constellation diagrams can identify phase noise, amplifier compression, noise, in-band spurious signals, and I/Q modulator imperfections as root causes of degraded EVM.
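The EVM definition in point 1 above can be made concrete with a short sketch: EVM is the RMS length of the error vectors (measured minus ideal constellation points) divided by an RMS reference power, expressed in percent or in decibels. This example uses QPSK with additive Gaussian noise and RMS normalization; the noise level and symbol count are arbitrary assumptions, and standards bodies specify their own exact normalizations.

```python
import math
import random

random.seed(1)

# Unit-power QPSK reference constellation.
ideal_points = [complex(i, q) / math.sqrt(2) for i in (1, -1) for q in (1, -1)]

# Transmit 1000 random QPSK symbols through an AWGN-like impairment
# (sigma = 0.05 per I/Q dimension, an arbitrary illustrative value).
tx = [random.choice(ideal_points) for _ in range(1000)]
rx = [s + complex(random.gauss(0, 0.05), random.gauss(0, 0.05)) for s in tx]

# RMS error-vector magnitude over the RMS reference magnitude.
err_rms = math.sqrt(sum(abs(m - i) ** 2 for m, i in zip(rx, tx)) / len(tx))
ref_rms = math.sqrt(sum(abs(i) ** 2 for i in tx) / len(tx))

evm_pct = 100 * err_rms / ref_rms
evm_db = 20 * math.log10(err_rms / ref_rms)
print(f"EVM = {evm_pct:.2f}% ({evm_db:.1f} dB)")
```

With per-dimension noise of 0.05 on a unit-power constellation, the measured EVM lands near 7 percent (roughly -23 dB); tightening or loosening the noise moves the figure directly, which is why EVM serves as a single-number summary of all the amplitude, phase, and I/Q impairments listed in point 3.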