2025-11-15 04:24:06

While there are many useful questions to ask when encountering a new robot, “can I eat it” is generally not one of them. I say ‘generally,’ because edible robots are actually a thing—and not just edible in the sense that you can technically swallow them and suffer both the benefits and consequences, but ingestible, where you can take a big bite out of the robot, chew it up, and swallow it.
Yum.
But so far these ingestible robots have included a very please-don’t-ingest-this asterisk: the motor and battery, which are definitely toxic and probably don’t taste all that good. The problem has been that soft, ingestible actuators run on gas pressure, requiring pumps and valves to function, neither of which are easy to make without plastic and metal. But in a new paper, researchers from Dario Floreano’s Laboratory of Intelligent Systems at EPFL in Switzerland have demonstrated ingestible versions of both batteries and actuators, resulting in what is, as far as I know, the first entirely ingestible robot capable of controlled actuation.
Let’s start with the battery on this lil’ guy. In a broad sense, a battery is just a system for storing and releasing energy. In the case of this particular robot, the battery is made of gelatin and wax. It stores chemical energy in chambers containing liquid citric acid and baking soda, both of which you can safely eat. The citric acid is kept separate from the baking soda by a membrane, and enough pressure on the chamber containing the acid will puncture that membrane, allowing the acid to slowly drip onto the baking soda. This activates the battery and begins to generate CO2 gas, along with sodium citrate (common in all kinds of foods, from cheese to sour candy) as a byproduct.
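For a ballpark sense of how much gas that reaction can supply, textbook chemistry is enough: citric acid and sodium bicarbonate react as C6H8O7 + 3 NaHCO3 → Na3C6H5O7 + 3 H2O + 3 CO2, and the ideal gas law converts moles of CO2 into volume. The sketch below is a generic back-of-the-envelope calculation, assuming reagent masses, temperature, and pressure that are illustrative only and not taken from the EPFL paper.

```python
# Back-of-the-envelope CO2 yield for a citric acid + baking soda gas generator.
# A minimal sketch from textbook stoichiometry; reagent masses, temperature,
# and pressure below are illustrative assumptions, not the paper's values.

M_CITRIC = 192.12  # g/mol, citric acid (C6H8O7)
M_BICARB = 84.01   # g/mol, sodium bicarbonate (NaHCO3)
R = 8.314          # J/(mol*K), ideal gas constant

def co2_volume_liters(citric_g, bicarb_g, temp_k=298.0, pressure_pa=101_325.0):
    """Liters of CO2 from C6H8O7 + 3 NaHCO3 -> Na3C6H5O7 + 3 H2O + 3 CO2."""
    n_acid = citric_g / M_CITRIC
    n_bicarb = bicarb_g / M_BICARB
    # The limiting reagent caps the yield: 1 mol of acid consumes 3 mol of bicarbonate.
    n_co2 = 3 * min(n_acid, n_bicarb / 3)
    return n_co2 * R * temp_k / pressure_pa * 1000  # ideal gas law, m^3 -> L

# Hypothetical loading: 1 g of each reagent at room temperature and 1 atm.
print(f"{co2_volume_liters(1.0, 1.0):.2f} L of CO2")  # ~0.29 L, bicarbonate-limited
```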
The CO2 gas travels through gelatin tubing into the actuator, which is of a fairly common soft robotic design that uses interconnected gas chambers on top of a slightly stiffer base that bends when pressurized. Pressurizing the actuator gets you one single actuation, but to make the actuator wiggle (wiggling being an absolutely necessary skill for any robot), the gas has to be cyclically released. The key to doing this is the other major innovation here: an ingestible valve.
The valve operates based on the principle of snap-buckling, which means that it’s happiest in one shape (closed), but if you put it under enough pressure, it rapidly snaps open and then closed again once the pressure is released. The current version of the robot operates at about four bending cycles per minute over a period of a couple of minutes before the battery goes dead.
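In control terms, the battery-plus-valve combination behaves like a relaxation oscillator: inflowing gas raises the pressure until the valve snaps open, the actuator vents, and the valve snaps shut once the pressure falls. Here is a toy simulation of that cycle; all of the flow rates and snap thresholds are invented round numbers (the paper's measured values aren't reproduced here), tuned only so the output lands near the reported four cycles per minute.

```python
# Toy relaxation-oscillator model of the snap-buckling valve: pressure builds
# until the valve snaps open, vents, and re-closes. All constants are
# illustrative assumptions, not measurements from the EPFL robot.

def simulate(duration_s=120.0, dt=0.01, inflow=500.0, outflow=6000.0,
             p_open=8_000.0, p_close=2_000.0):
    """Return the times (s) at which the valve snaps open. Pressure in Pa (gauge)."""
    p, is_open, snaps, t = 0.0, False, [], 0.0
    while t < duration_s:
        p += (inflow - (outflow if is_open else 0.0)) * dt
        if not is_open and p >= p_open:
            is_open = True           # snap-through: valve pops open
            snaps.append(t)
        elif is_open and p <= p_close:
            is_open = False          # pressure released: valve snaps shut
        t += dt
    return snaps

snaps = simulate()
print(f"{len(snaps)} bending cycles in 2 minutes (~{len(snaps) / 2:.0f} per minute)")
```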
And so there you go: a battery, a valve, and an actuator, all ingestible, makes for a little wiggly robot, also ingestible. Great! But why?
“A potential use case for our system is to provide nutrition or medication for elusive animals, such as wild boars,” says lead author Bokeon Kwak. “Wild boars are attracted to live moving prey, and in our case, it’s the edible actuator that mimics it.” The concept is that you could infuse something like a swine flu vaccine into the robot. Because it’s cheap to manufacture, safe to deploy, completely biodegradable, and wiggly, it could potentially serve as an effective strategy for targeted mass delivery to the kind of animals that nobody wants to get close to. And it’s obviously not just wild boars—by tuning the size and motion characteristics of the robot, what triggers it, and its smell and taste, you could target pretty much any animal that finds wiggly things appealing. And that includes humans!
Kwak says that if you were to eat this robot, the actuator and valve would taste a little bit sweet, since they have glycerol in them, with a texture like gummy candy. The pneumatic battery would be crunchy on the outside and sour on the inside (like a lemon) thanks to the citric acid. While this work doesn’t focus specifically on taste, the researchers have made other versions of the actuator that were flavored with grenadine. They served these actuators to humans earlier this year, and are working on an ‘analysis of consumer experience,’ which I can only assume is a requirement before announcing a partnership with Haribo.
Eatability, though, is not the primary focus of the robot, says PI Dario Floreano. “If you look at it from the broader perspective of environmental and sustainable robotics, the pneumatic battery and valve system is a key enabling technology, because it’s compatible with all sorts of biodegradable pneumatic robots.” And even if you’re not particularly concerned with all the environmental stuff, which you really should be, in the context of large swarms of robots in the wild it’s critical to focus on simplicity and affordability just to be able to usefully scale.
This is all part of the EU-funded RoboFood project, and Kwak is currently working on other edible robots. For example, the elastic snap-buckling behavior in this robot’s valve is sort of battery-like in that it’s storing and releasing elastic energy, and with some tweaking, Kwak is hoping that edible elastic power sources might be the key for tasty little jumping robots that jump right off the dessert plate and into your mouth.
Edible Pneumatic Battery for Sustained and Repeated Robot Actuation, by Bokeon Kwak, Shuhang Zhang, Alexander Keller, Qiukai Qi, Jonathan Rossiter, and Dario Floreano from EPFL, is published in Advanced Science.
2025-11-15 02:30:02

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
Current multirotor drones provide simplicity, affordability, and ease of operation; however, their primary limitation is their low payload-to-weight ratio, which typically falls at 1:1 or less. The DARPA Lift Challenge aims to shatter the heavy lift bottleneck, seeking novel drone designs that can carry payloads more than four times their weight, which would revolutionize the way we use drones across all sectors.
[ DARPA ]
Huge milestone achieved! The world’s first mass delivery of humanoid robots is complete! Hundreds of UBTECH Walker S2 robots have been delivered to our partners.
I really hope that’s not how they’re actually shipping their robots.
[ UBTECH ]
There is absolutely no reason to give robots hands if you can just teach them to lasso stuff instead.
[ ArcLab ]
Saddle Creek deployed Carter in its order fulfillment operation for a beauty client. It helps to automate and optimize tote delivery operations between multiple processing and labeling lines and more than 20 designated drop-off points. In this capacity, Carter functions as a flexible, non-integrated “virtual conveyor” that streamlines material flow without requiring fixed infrastructure.
[ Robust.ai ]
This is our latest work on an aerial–ground robot team, the first time a language–vision hierarchy achieves long-horizon navigation and manipulation on the real UAV + quadruped using only 2D cameras. The article is published open-access in Advanced Intelligent Systems.
[ DRAGON Lab ]
Thanks, Moju!
I am pretty sure that you should not use a quadrupedal robot to transport your child. But only pretty sure, not totally certain.
[ DEEP Robotics ]
Building Behavioral Foundation Models (BFMs) for humanoid robots has the potential to unify diverse control tasks under a single, promptable generalist policy. However, existing approaches are either exclusively deployed on simulated humanoid characters, or specialized to specific tasks such as tracking. We propose BFM-Zero, a framework that learns an effective shared latent representation that embeds motions, goals, and rewards into a common space, enabling a single policy to be prompted for multiple downstream tasks without retraining.
[ BFM-Zero ]
Welcome to the very, very near future of manual labor.
[ AgileX ]
MOMO (Mobile Object Manipulation Operator) has been one of KIMLAB’s key robots since its development about two years ago and has featured as a main actor in several of our videos. The design and functionalities of MOMO were recently published in IEEE Robotics & Automation Magazine.
We are excited about our new addition to our robot fleet! As a shared resource for our faculty members, this robot will facilitate multiple research activities within our institute that target significant future funding. Our initial focus for this robot will be on an agricultural application but we have big plans for the robot in human-robot interaction projects.
[ Ingenuity Labs ]
The nice thing about robots that pick grapes in vineyards is that they don’t just eat the grapes, like I do.
[ Extend Robotics ]
How mobile of a mobile manipulator do you need?
Robotics professor, Dr. Christian Hubicki, talks about the NEO humanoid announcement on October 29th, 2025. While explaining the technical elements and product readiness, he refuses to show any emotion whatsoever.
2025-11-14 23:00:02

There’s a class of consumer that wants something they know they cannot have. For some of those people, a Macintosh computer not made by Apple has long been a desired goal.
For most of the Mac’s history, you could only really get one from Apple, if you wanted to go completely by the book. Sure, there were less-legit ways to get Apple software on off-brand hardware, and plenty of people were willing to try them. But there was a short period, roughly 36 months, when it was possible to get a licensed Mac that had the blessing of the team in Cupertino.
They called it the Mac clone era. It was Apple’s direct response to a PC market that had come to embrace open architectures—and, over time, made Apple’s own offerings seem small.
During that period, from early 1995 to late 1997, you could get legally licensed Macs from a series of startups now forgotten to history, as well as one of Apple’s own major suppliers at the time, Motorola. And it was great for bargain hunters who, for perhaps the first time in Apple’s history, had a legit way to avoid the Apple tax.
But that period ended fairly quickly, in large part thanks to the man whose fundamental aversion to clone-makers likely caused the no-clones policy in the first place: Steve Jobs.
“It was the dumbest thing in the world to let companies making crappier hardware use our operating system and cut into our sales,” Jobs told Walter Isaacson in his 2011 biography.
Apple has generally avoided giving up its golden goose because the company was built around vertical integration. If you went into a CompUSA and bought a Mac, you were buying the full package, hardware and software, made by Apple. This had benefits and downsides. Because Apple charged a premium for its devices (unlike other vertical integrators, such as Commodore and Atari), it tended to relegate the company to a smaller part of the market. On the other hand, that product was highly polished.
That meant Apple needed to be good at two wildly disparate skill sets—and protect others from stealing Apple’s software prowess for their own cheaper hardware.
While historians can point to the rise of unofficial Apple II clones in the ‘80s, and modern Apple fans can still technically build Hackintoshes on Intel hardware, Apple’s own Mac clone program came and went in just a few short years.
It was a painful lesson.
This Outbound Notebook wasn’t sold with Apple features, but allowed users to insert a Mac ROM, a component that helped Apple limit cloning. However, the ROM had to come from a genuine, working Apple computer. Chaosdruid/Wikimedia Commons
For years, companies attempted to wrangle the Mac out of Apple’s hands, corporate blessing or no. Apple, highly focused on vertical integration, used its ROM chips as a way to limit the flow of MacOS to potential clone-makers. This mostly worked, as the Mac’s operating system was far more complex and harder to reverse-engineer than the firmware used by the IBM PC.
But plenty still tried. For example, a Brazilian company named Unitron sold a direct clone of the Macintosh 512K, which fell off the market only after a Brazilian trade body intervened. Later, a company named Akkord Technology attempted to sell a reverse-engineered device called the Jonathan, but ended up attracting a police raid instead.
Somewhat more concerning for Apple’s exclusivity: Early Macs shared much of their hardware architecture with other popular machines, particularly the Commodore Amiga and Atari ST, each of which received peripherals that introduced Mac software support and made it easier to work across platforms.
But despite claims that this ROM-based approach was technically legal, it’s not like any of this was explicitly allowed by Apple. At one point, Infoworld responded to a letter to the editor about this phenomenon with a curt note: “Apple continually reaffirms its intention to protect its ROM and to prevent the cloning of the Mac.”
So what was Apple OK with? Full-on conversions, which took the hardware of an existing Mac, and rejiggered its many parts into an entirely new product. There are many examples of this throughout Apple’s history—such as the ModBook, a pre-iPad Mac tablet—but the idea started with Chuck Colby.
Colby, an early PC clone-maker who was friends with Apple team members like Steve Wozniak, was already offering a portable Mac conversion called the MacColby at one of the Mac’s introductory events in 1984. (Apparently Apple CEO John Sculley bought two—but never paid for them.)
One of Colby’s later conversions, the semi-portable Walkmac, had earned a significant niche audience. A 2013 CNET piece notes that the rock band Grateful Dead and news anchor Peter Jennings were both customers, and that Apple would actually send Colby referrals.
So, why did Colby get the red-carpet treatment while other clone-makers were facing lawsuits and police raids? You still needed a Mac to do the aftermarket surgery, so Apple still got its cut. One has to wonder: Would Apple have been better off just giving Chuck Colby, or any other interested party, a license to make their own clones? After all, it’s not like Colby’s ultra-niche portables were going to compete with Apple’s experience. Right?
During the 1980s, this argument was basically a nonstarter—the company even went so far as to change its dealer policy to limit the resale of its system ROMs for non-repair purposes. But by the 1990s, things were beginning to thaw.
You can thank a firm named NuTek for the nudge. The company, like Apple, was based in Cupertino, California, and it spent years talking up its reverse-engineering plans.
“Nutek will do for Mac users what the first IBM-compatible developers did in the early 1980s: open up the market to increased innovation and competition by enabling major independent third-party manufacture,” explained Benjamin Chou, the company’s CEO, in a 1991 ComputerWorld piece.
And by 1993, it had built a “good enough” analogue of the Mac that could run most, but not all, Mac programs. “We’ve tested the top 15 software applications and 13 of those worked,” Chou told InfoWorld, an impressive boast until you hear the non-working apps are Microsoft Works and Excel.
It failed to make a splash, but NuTek’s efforts nonetheless exposed a thaw in Apple’s thinking. A 1991 MacWorld piece on NuTek’s reverse engineering attempt quoted Apple Chief Operating Officer Michael Spindler as saying, “It is not a question of whether Apple will license its operating system, but how it will do this.”
Meanwhile, Windows was finally making inroads in the market, and Apple was ready to bend.
There was a time when it looked like MacOS was about to become a Novell product. Really. In 1992, Apple held very serious talks with the networking software provider about selling it the Mac operating system, and it almost happened. Then Michael Spindler became Apple’s CEO in 1993 and killed the Novell experiment, but not the idea of licensing MacOS. It just needed the right partner.
It found one with Stephen Kahng’s Power Computing. Kahng, a veteran of the clone wars, first made waves in the PC market with the clone-maker Leading Edge, and he wanted to repeat that feat with the Mac. His new firm offered Apple an inroad to potentially score similar success.
And so, in the waning days of 1994, just before the annual MacWorld conference, the news hit the wires: Apple was getting an authorized clone-maker. It turns out that the key was just to wait for the right CEO to take over, then ask nicely.
Though the idea may have looked rosy at first, some saw some dark clouds over the whole thing. Famed tech columnist John C. Dvorak suggested that Kahng was more dangerous than he seemed. “Apple is not going to know what hit them,” he told The New York Times.
And there were other signs that Apple was starting to lose its identity. A PC Magazine analysis from early 1995 perhaps put the biggest frowny-face on the story:
Apple’s decision to create a clone market may or may not be successful, but it didn’t really have a choice. At the recent MacWorld conference, one of the most popular technical seminars was given by Microsoft. It covered how Mac programmers can learn to write Windows applications.
One can see why Apple might have been attracted to this model, in retrospect. The company was a bit lost in the market at the time, and needed a strategy to expand its shrinking base of users.
But the clone market did not expand its base. Instead, it invited a price war.
A PowerCenter Pro 210, a Macintosh clone manufactured by Power Computing Corporation. Angelgreat/Wikimedia Commons
The best time for Apple to introduce a clone program was probably a decade earlier, in 1985 or 1986. At the time, people like Chuck Colby were inventing new kinds of Macs that didn’t directly compete with what Apple was making. Furthermore, the concept of a Mac was new, just as desired for its form factor as for its software.
In hindsight, it’s clear that 1995 wasn’t a good time to do so. The decision held a mirror up to Apple’s own offerings, which attempted to hit every possible market segment—47 different device variants that year alone, per EveryMac.
This didn’t reflect well on Apple—and companies like Power Computing exploited that to offer cheaper hardware. The company’s Power 100, for example, scored basically identical performance to the Macintosh 8100/100, while cutting more than US $1,000 off the Apple product’s $4,400 price tag. Meanwhile, other machines, such as the DayStar Genesis MP, outpaced Apple’s own ability to hit the high end.
Both of these machines, in their own ways, hit at a key problem with Apple’s mid-’90s industrial design. Before the iMac revolutionized Apple computers upon its 1998 release, Macs simply didn’t have enough of a “wow factor” driving the industrial design. It made the Mac about the software, not the hardware.
Within a year or two, it was clear that Apple had begun to undermine its own bottom line. When Chuck Colby put a Mac motherboard in a new chassis, Apple kept its high margins. But Power Computing’s beige boxes ate into Apple’s market share, and Apple, as the maker of MacOS, got a far smaller cut of each sale.
There likely was a magic point at which Power Computing’s scale would have made up for the loss in hardware revenue. But in the era of Windows 95, Apple needed a partner that would go toe-to-toe with Packard Bell. Instead, these cut-rate Macs only attracted the already converted, undercutting Apple along the way.
“I would guess that somewhere around 99 percent of their sales went to the existing customer base,” then-CFO Fred Anderson told Wired in 1997.
The company only figured this part out after Steve Jobs returned to the fold.
The course correction got messy: Jobs, in the midst of trying to fix this situation in his overly passionate way, might have harmed the evolution of the PowerPC chip, for example. A 1998 piece from the Wall Street Journal notes that Jobs’ tough negotiations over clones damaged Apple’s relationship with Motorola, its primary CPU supplier, to the point that the company pledged it would no longer go the extra mile for Apple.
“They will be just another customer,” a Motorola employee told the paper.
Power Computing—which had an apparent $500 million in revenue in 1996 alone—got a somewhat softer landing, though not without its share of drama. Apple had pushed the company to agree to a new licensing deal even before Jobs took over as CEO, and once he did, it was clear the companies would not see eye to eye. The company’s then-president, Joel Kocher, attempted to take the battle to MacWorld, where he forced a public confrontation over the issue. The board disagreed with Kocher’s actions, Kocher quit, and ultimately the company sold most of its assets to Apple for $100 million, effectively killing the market entirely.
The only clone-maker that Apple seemed willing to play ball with was the company UMAX. The reason? Its SuperMac line had figured out how to hit the low-end market, an area Apple had famously struggled to reach. Apple wanted UMAX to focus on the sub-$1,000 market, especially in parts of the world where Apple lacked a foothold. But UMAX didn’t want the low end if it couldn’t keep a foothold in the more lucrative high end, and it chose to dip out on its own.
The situation highlighted the ultimate problems with cloning—a loss of control, and a lack of alignment between licensor and licensee.
Apple restricted the licenses to System 7, leaving these clones, for the most part, unable to (legally) upgrade to Mac OS 8. It did the trick—and starved the clone-makers out.
That would be the end of the Apple clone story, except for one dangling thread: Steve Jobs once attempted to make an exception to his aversion to clones. In the early 2000s, Jobs pitched Sony on the idea of putting Mac OS X on its VAIO desktops and laptops, essentially because he felt it was the only product line that matched what Apple was doing from a visual standpoint.
Jobs looked up to Sony and its cofounder Morita Akio, even offering a eulogy for Morita after his passing. (Nippon, upon Jobs’ passing, called the Apple founder’s appreciation for the country and its companies “a reciprocal love affair.”) But Sony had already done the work with Windows, so it wasn’t to be.
On Sony’s part, it sounds like the kind of prudent decision Jobs made when he killed the clones a few years earlier.
2025-11-14 22:36:28

Healthcare is rapidly evolving with a growing reliance on portable medical devices in both clinical and home-care environments. These devices—used for diagnostics, monitoring, and life-support functions like ventilators—improve accessibility and outcomes by enabling continuous monitoring and timely interventions. However, their mobility and usage in high-impact environments demand rugged, compact, and high-speed components, particularly reliable internal connectors that can withstand shock, vibration, and physical stress.
This white paper highlights how the growth of portable and in-home medical devices has pushed the need for miniaturized, high-performance connectors. It explores how connector technology must balance reduced size, high data speeds, rugged durability, and simplified assembly to support modern healthcare demands.
2025-11-14 03:00:03

The IEEE Board of Directors shapes the future direction of IEEE and is committed to ensuring IEEE remains a strong and vibrant organization—serving the needs of its members and the engineering and technology community worldwide while fulfilling the IEEE mission of advancing technology for the benefit of humanity.
This article features IEEE Board of Directors members Antonio Luque, Ravinder Dahiya, and Joseph Wei.
Director and vice president, Member and Geographic Activities
Antonio Luque
Luque is a professor of electronics engineering at the Universidad de Sevilla, Spain, where he mentors students on digital electronics, devices, and cyber-physical systems.
His work has focused on electronics, sensors, and microsystems for biomedical applications. He also has worked on the creation of disposable smart microsystems for safe production of radiopharmaceuticals applied to medical imaging.
More recently, Luque has been working on cybersecurity and connectivity applied to the Internet of Things and real-time systems.
He holds master’s and doctorate degrees in electrical engineering from the Universidad de Sevilla.
Luque has been an active IEEE volunteer since 2002, when he first became involved in the IEEE Industrial Electronics Society’s technical conferences, and developed software to streamline many of the society’s operations. He is also a member of the IEEE Electron Devices and IEEE Education societies.
He was also a coordinator for the IEEE Young Professionals group for the IEEE Spain Section. He later served as section chair.
Luque served as the Region 8 director in 2020–2021 and as director and vice president of IEEE Member and Geographic Activities in 2024. He also has served on the IEEE Governance Committee and the IEEE European Public Policy Committee.
He served as an associate editor of the IEEE Journal of Microelectromechanical Systems from 2013 to 2019 and has been an associate editor of IEEE Transactions on Industrial Electronics since 2014. During his career, he has authored 20 journal articles, 40 conference papers, three book chapters, and a textbook.
Luque received the 2007 Young Researcher Award from the Academia Europaea, which recognizes promising young scholars at the postdoctoral level.
Director, Division X
Ravinder Dahiya IEEE Sensors Council
Dahiya is a professor of electrical and computer engineering at Northeastern University, in Boston. He leads the university’s Bendable Electronics and Sustainable Technologies group.
His research interests include flexible and printed electronics, robotic tactile sensing, electronic skin technology, haptics, wearables, and intelligent interactive systems.
Dahiya developed the first energy-generating tactile skin, which, in addition to providing touch feedback, generates energy that can operate actuators used by robots. His robotic tactile sensing research was recognized by IEEE through his elevation to the grade of Fellow.
During the COVID-19 pandemic, Dahiya and his research team developed a low-cost DIY ventilator and a smart bandage that accelerated healing and helped detect the signs of coronavirus through respiratory feedback.
He holds a bachelor’s degree in electrical engineering from Kurukshetra University in Kurukshetra, India, a master’s degree in electrical engineering from the Indian Institute of Technology in Delhi, and a doctorate in humanoid technologies from the Istituto Italiano di Tecnologia and the Università di Genova, Italy.
He served as the 2022–2023 president of the IEEE Sensors Council, where he launched several initiatives, including journals (the IEEE Journal on Flexible Electronics and the IEEE Journal of Selected Areas in Sensors), conferences (the IEEE International Conference on Flexible Printable Sensors and Systems and IEEE Biosensors), and the Sensors in Spotlight networking event. During his time as president, he led the council’s 25th-anniversary events.
He was a member of the editorial board of the IEEE Sensors Journal from 2012 to 2020 and of IEEE Transactions on Robotics from 2011 to 2017, and he was the founding editor-in-chief of the IEEE Journal on Flexible Electronics from 2022 to 2023. He has authored or coauthored more than 550 research publications, as well as eight books, and he has been granted several patents. He has presented more than 250 keynote addresses and lectures worldwide, including a 2016 TEDx talk on “Animating the Inanimate World.”
Dahiya received an Engineering and Physical Sciences Research Council fellowship and a Marie Curie fellowship. He was recognized with the 2016 IEEE Sensors Council Technical Achievement Award. He is also a Fellow of the Royal Society of Edinburgh.
Director, Region 6: Western U.S.
Joseph Wei Two Dudes Photo/FMS Conference
A veteran of Silicon Valley, Wei combines his more than 40 years of experience in the entrepreneurial and information technology space with his passion for investing in and mentoring startups. He is a frequent speaker at global startup conferences on entrepreneurship and technology.
Wei’s commitment to driving innovation and technological advancement through mentorship has yielded significant results. One of his portfolio healthcare startups recently debuted on the public market, valued at over US $3 billion.
He played a key role in advancing global connectivity through his involvement with the IEEE Standards Association and its development of the IEEE 802.11 standard. Wi-Fi has become the foundation of modern wireless communication, transforming industries, enabling the digital economy, and bridging communities worldwide.
Wei’s career-long efforts to accelerate the widespread adoption of open-source software have helped empower businesses of all sizes to innovate, reduce their technology costs, and foster global collaboration in software development.
He holds a bachelor’s degree in electrical engineering from Tufts University in Medford, Massachusetts.
He has served as chair of the IEEE Santa Clara Valley (California) Section’s board of governors, chair of the IEEE Consumer Technology Society’s Santa Clara Valley chapter, and chair of the IEEE Engineering in Medicine and Biology Society’s chapter in California, which in 2023 became the society’s largest chapter.
He received a special Section Award from the Santa Clara Valley Section in 2020 for his outstanding volunteerism and service as a positive role model, as well as a Director’s Special Award in 2015 for his outstanding performance as chair of the Santa Clara Valley Section, the largest section in the world, and for supporting major IEEE Region 6 initiatives.
Wei credits the exceptional training and extensive network of experts he’s amassed through his IEEE volunteer work for enabling him to provide valuable insights, industry connections, and expertise that help him guide startups and innovators.
2025-11-14 00:00:02

Are you finally ready to hang a computer screen on your face?
Fifteen years ago, that would have seemed like a silly question. Then came the much-hyped and much-derided Google Glass in 2012, and frankly, it still seemed a silly question.
Now, though, it’s a choice consumers are beginning to make. Tiny displays, shrinking processors, advanced battery designs, and wireless communications are coming together in a new generation of smart glasses that display information that’s actually useful right in front of you. But the big question remains: Just why would you want to do that?
Some tech companies are betting that today’s smart glasses will be the perfect interface for delivering AI-supported information and other notifications. The other possibility is that smart glasses will replace bulky computer screens, acting instead as a private and portable monitor. But the companies pursuing these two approaches don’t yet know which choice consumers will make or what applications they really want.
Smart-glasses skeptics will point to the fate of Google Glass, which was introduced in 2012 and quickly became a prime example of a pricey technology in search of practical applications. It had little to offer consumers, aside from being an aspirational product that was ostentatiously visible to others. (Some rude users were even derided as “glass-holes.”) While Glass was a success in specialized applications such as surgery and manufacturing until 2023—at least for those organizations that could afford to invest around a thousand dollars per pair—it lacked any compelling application for the average consumer.
Smart-glasses technology may have improved since then, but the devices are still chasing a solid use case. From the tech behemoths to little brands you’ve never heard of, the hardware once again is out over its skis, searching for the right application.
During a Meta earnings call in January, Mark Zuckerberg declared that 2025 “will be a defining year that determines if we’re on a path toward many hundreds of millions and eventually billions” of AI glasses. Part of that determination comes down to a choice of priorities: Should a head-worn display replicate the computer screens that we currently use, or should it work more like a smartwatch, which displays only limited amounts of information at a time?
Head-worn displays fall into two broad categories: those intended for virtual reality (VR) and those suited for augmented reality (AR). VR’s more-immersive approach found some early success in the consumer market, such as the Meta Quest 2 (originally released as the Oculus Quest 2 in 2020), which reportedly sold more than 20 million units before it was discontinued. According to the market research firm Counterpoint, however, the global market for VR devices fell by 12 percent year over year in 2024—the third year of decline in a row—because of hardware limitations and a lack of compelling use cases. As a mass consumer product, VR devices are probably past their moment.
In contrast, AR devices allow the wearer to stay engaged with their surroundings as additional information is overlaid in the field of view. In earlier generations of smart glasses, this information added context to the scene, such as travel directions or performance data for athletes. Now, with advances in generative AI, AR can answer questions and translate speech and text in real time.
Many analysts agree that AI-enhanced smart glasses are a market on the verge of massive growth. Louis Rosenberg, CEO and chief scientist with Unanimous AI, has been involved in AR technology from its start, more than 30 years ago. “AI-powered smart glasses are the first mainstream XR [extended reality] devices that are profoundly useful and will achieve rapid adoption,” Rosenberg told IEEE Spectrum. “This, in turn, will accelerate the adoption of immersive versions to follow. In fact, I believe that within five years, immersive AI-powered glasses will replace the smartphone as the primary mobile device in our digital lives.”
Major tech companies, including Google and Apple, have announced their intentions to join this market, but have yet to ship a product. One exception is Meta, which released the Meta Ray-Ban Display in September, priced at US $799. (Ray-Ban Meta glasses without a display have been available since 2023.)
A number of smaller companies, though, have forged the path for true AR smart glasses. Two of the most promising models—the One Pro from Beijing-based Xreal and the AI glasses from Halliday, based in Singapore—represent the two different design concepts evolving in today’s smart-glasses market.
Halliday’s smart glasses are a lightweight, inconspicuous device that looks like everyday eyewear. The glasses have a single small microLED projector placed above the right lens. This imager beams a monochrome green image directly to your eye, with a level of light dim enough to be safe but bright enough to be seen against ambient light. What the user sees is a virtual 3.5-inch (8.9-centimeter) screen in the upper right corner of their field of view. Like a typical smartwatch screen, it can display up to 10 or so short lines of text and basic graphics, such as arrows when showing turn-by-turn navigation instructions, sufficient to provide an interface for an AI companion.
In press materials, Halliday describes its glasses as “a hidden superpower to tackle life’s challenges.” And hidden it is. The display technology is much more discreet than those of other designs that use waveguides or prismatic lenses, which will often reveal a reflected image or a noticeable rainbow effect. Because it projects the image directly to the eye, the Halliday device doesn’t produce any such indications. The glasses can even be fitted with standard prescription lenses.
By contrast, the Xreal One Pro has two separate imagers—one for each eye—that show full-color, 1080p images that fill 57 degrees of the user’s field of view. This allows the One Pro to display the same content you’d see on a notebook or desktop screen. (A more typical field of view for AR glasses is 30 to 45 degrees. Halliday’s virtual screen occupies only a small portion of the user’s field of view.)
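A little trigonometry makes those field-of-view figures concrete: a flat screen of width w viewed from distance d subtends an angle of 2·atan(w/2d), so running the formula in reverse tells you what size monitor a given FOV feels like. In the sketch below, the half-meter viewing distance and 16:9 aspect ratio are my assumptions, not the vendors’ specifications.

```python
import math

# What does a given field of view "feel like" as a monitor? A rough geometric
# sketch: a screen subtending angle fov at distance d has width 2*d*tan(fov/2).
# The 0.5 m viewing distance and 16:9 aspect ratio are assumptions.

def equivalent_monitor_inches(fov_deg, distance_m=0.5, aspect=(16, 9)):
    width_m = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    diag_m = width_m * math.hypot(*aspect) / aspect[0]  # width -> 16:9 diagonal
    return diag_m / 0.0254                              # meters -> inches

print(f"57-degree FOV (Xreal One Pro): ~{equivalent_monitor_inches(57):.0f}-inch monitor at 0.5 m")
print(f"35-degree FOV (typical AR):    ~{equivalent_monitor_inches(35):.0f}-inch monitor at 0.5 m")
```

By that measure, the One Pro’s 57 degrees is roughly a 25-inch monitor at arm’s length, while a 35-degree display feels closer to a 14-inch laptop screen.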
Xreal’s One Pro smart glasses consist of many layers that work together to create a full-color, high-resolution display. Xreal
In fact, the One Pro is intended to eliminate those notebook and desktop screens. “We’re now at the point where AR glasses’ spatial screens can truly replace physical monitors all day long,” Xreal CEO and cofounder Chi Xu said in a December 2024 press release. But it’s not a solution for use when you’re out and about; the glasses remain tethered to your computer or mobile device by a cable.
The glasses use microLED imagers that deliver good color and contrast performance, along with lower power consumption than an OLED. They also use a “flat prism” lens that is 11 millimeters thick—less than half the thickness of the prisms in some other AR smart glasses, but three to four times as thick as typical prescription lenses.
The flat-prism technology is similar to the “bird bath” prisms in Xreal’s previous glasses, which used a curved surface to reflect the display image to the wearer’s eye, but the flat prism’s thinner and lighter design offers a larger field of view. It also has advantages over the refraction-based waveguides used by other glasses, which can introduce visible artifacts such as colored halos.
In order to improve the visibility of the projected image, the glasses block much of the ambient light from the surroundings. Karl Guttag, a display-industry expert and author of the KGOnTech blog, says that the Xreal One Pro blocks about 78 percent of real-world light and is “like dark sunglasses.”
The One Pro also has a built-in spatial computer coprocessor, which enables the glasses to position an image relative to a direction in your view. For example, if you have an application that shows one set of information to your left, another in the middle, and a third to the right, you would simply turn your head to look at a different set. Or you could position an image in a fixed location—as with the Halliday glasses—so that it remains in front of you when you turn your head.
Having separate imagers for each eye makes it possible to create stereoscopic 3D effects. That means you could view a 3D object in a fixed location in your room, making for a more immersive experience.
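The difference between head-locked and world-locked (spatially anchored) content reduces, in the simplest one-axis case, to whether a panel’s position is expressed relative to your head or to the room. Xreal hasn’t published how its coprocessor does this, so the following is purely a conceptual sketch, with invented panel positions; only the 57-degree field of view comes from the article.

```python
# Conceptual sketch of head-locked vs. world-locked AR panels, reduced to one
# axis (head yaw). The panel layout is hypothetical; this is not based on
# Xreal's actual spatial-computing implementation.

def panel_offset_deg(panel_yaw, head_yaw, world_locked=True):
    """Angle of a panel from the display's center, in degrees."""
    if not world_locked:
        return panel_yaw                             # head-locked: moves with your head
    return (panel_yaw - head_yaw + 180) % 360 - 180  # world-locked, wrapped to +/-180

FOV = 57.0                                           # One Pro horizontal field of view
panels = {"left": -60.0, "center": 0.0, "right": 60.0}  # hypothetical layout

for head_yaw in (0.0, 60.0):
    visible = [name for name, yaw in panels.items()
               if abs(panel_offset_deg(yaw, head_yaw)) < FOV / 2]
    print(f"head at {head_yaw:+.0f} deg -> visible: {visible}")
# Turning your head 60 degrees to the right swaps the center panel for the right one.
```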
All these features come at a cost. Xreal’s glasses draw too much power to run on a battery, and they need a high-speed data connection to access the display data on a laptop or desktop computer. This connection provides power and enables high-resolution video streaming to the glasses, but it keeps the user tethered to the host device. The Halliday glasses, by contrast, run off a battery that the company states can last up to 12 hours between charges.
Another key difference is weight. Early AR glasses were so heavy that they were uncomfortable to wear for long periods of time. The One Pro is relatively light at 87 grams, or a little less than the weight of a small smartphone. But the Halliday’s simpler design and direct projector yield a device that weighs less than half that, at 35 grams—a weight similar to that of many regular prescription glasses.
Customers try out Halliday’s smart glasses at an expo in China in July 2025. Ying Tang/NurPhoto/Getty Images
In both cases, this new generation of consumer-oriented smart glasses costs much less than enterprise AR systems, which run several thousand dollars. The One Pro lists for $649, while the Halliday lists for $499.
Currently, neither Halliday nor Xreal has a camera built into its glasses, which instead communicate through voice control and audio feedback. This eliminates extra weight and power consumption, helps keep costs down, and sidesteps the privacy concerns that proved to be one of the main sticking points for Google Glass.
There are certainly applications where a camera can be helpful, however, such as for image recognition or when users with impaired vision want to hear the text of signs read aloud. Xreal does offer an optional high-resolution camera module that mounts at the bridge of the nose. Whether to include a built-in camera in future models is yet another trade-off these companies will need to consider.
Clearly, these two models of smart glasses represent very different design strategies and applications. The Halliday glasses exist largely as a mobile platform for an AI companion that you can use discreetly throughout the day, the way you would use a smartwatch. The One Pro, on the other hand, can act as a replacement for your computer’s monitor—or several monitors, thanks to the spatial computing feature. The high resolution and full color deliver the same information that you’re used to getting from the larger displays, with the trade-off that you’re physically tethered to your computer.
Is either of these scenarios the killer app for smart glasses that we’ve been waiting for?
With the rise of generative AI agents, people are growing increasingly comfortable with easy access to all sorts of information all the time. Smart speakers such as Amazon Echo have trained us to get answers to just about anything simply by asking. Wearing a device on your face that can discreetly present information on demand, like Halliday’s glasses, will certainly appeal to some consumers, especially when it’s priced affordably.
Chris Chinnock, founder of Insight Media, thinks this is the path for the future. “I am not convinced that a display is needed for a lot of applications, or if you have a display, a small [field of view] version is sufficient. I think audio glasses coupled with AI capabilities could be very interesting in the near term, as the optics [or the] display for more full-featured AR glasses are developed.”
On the other hand, many people may be seeking a compact and convenient alternative to the large, bulky external monitors that come with today’s laptops and desktops. On an airplane, for instance, it’s difficult to comfortably open your laptop screen enough to see it, and there’s little expectation of privacy on a crowded flight. But with smart glasses that project multiple virtual screens, you may actually be able to do some useful work on a long flight.
For now, companies like Halliday and Xreal are hoping that there’s room for both strategies in the consumer market. And with multiple choices now available at consumer-friendly prices, we will soon start to see how much interest there is. Will consumers choose the smart AI companion, or a compact and private replacement for computer screens? In either case, your glasses are likely to become a lot smarter.
| Model | Display technology | Number of displays | Resolution | Price (US $) |
|---|---|---|---|---|
| Halliday | Monochrome microLED | Right eye only | Low | 499 |
| Xreal One Pro | Full-color microLED | Both eyes | High | 649 |
| Meta Ray-Ban Display | Full-color liquid crystal on silicon (LCoS) | Right eye only | High | 799 |
| TCL RayNeo X3 Pro | Full-color microLED | Both eyes | High | 1,250 |
| Even Realities Even G1 | Monochrome microLED | Both eyes | Low | 599 |
| Rokid Max 2 | Full-color micro-OLED | Both eyes | High | 529 |