2025-06-25 02:00:04
Last year marked the 25th anniversary of the IEEE Presidents’ Scholarship. Since its inception, the prestigious US $10,000 award has been given annually to one exceptional high school student participating in the Regeneron (formerly Intel) International Science and Engineering Fair. The ISEF is the world’s largest international STEM research competition for high school students.
Finalists for the scholarship are selected by a team of IEEE volunteer judges. The scholarship is funded by the IEEE Foundation and administered by IEEE Educational Activities.
To commemorate the scholarship’s anniversary, I asked past winners how the award impacted their life and career, and what they are doing today.
Elena Glassman received the scholarship in 2004 for her Brain-Computer Interface for the Muscularly Disabled project. She wrote code to collect EEG wavelets that predicted her own right or left arm movement with an accuracy rate of 73 percent.
Today Glassman is an assistant professor of computer science at Harvard, where she teaches human-computer interaction. She is also a new mother. The scholarship supported her education at MIT, where she earned a bachelor’s degree in electrical engineering. She says the scholarship was among the most memorable awards she received.
“When your project is being evaluated by IEEE judges who understand the work,” she says, “that’s what was so meaningful about receiving the award.” With encouragement from her father, a lifelong IEEE member, she submitted a paper about her project to IEEE Transactions on Biomedical Engineering, which published it.
In her current work, she says, she enjoys focusing on the “human side of programming.” She adds that her electrical engineering background is useful in tackling all sorts of projects.
Adam Sidman received the 2005 scholarship for Camera Stabilization: Take Two. His project centered on the development of a handheld, servo-based camera-stabilization device. The film and TV producer in Los Angeles says his invention is a “go-to everyday technology for filmmakers on sets around the world.”
Sidman is chief executive of Timur Bekmambetov’s production company, Bazelevs, where he has overseen a variety of movies including The Current War, Hardcore Henry, Searching, and Unfriended.
Receiving the scholarship was “a tremendous honor,” he says, “validating my passion to combine the arts and sciences.” He graduated from Harvard with a bachelor’s degree in mechanical engineering and visual and environmental studies.
Last year he collaborated with ISEF organizers to establish a new category of projects, Technology Enhances the Arts, and he continues to serve as a judge.
Rahul Kumar Pandey, a software engineer-turned-entrepreneur, received the scholarship in 2007. His startup, Taro, helps software engineers navigate the professional world, providing advice on job searching, negotiation, promotions, and leadership. The platform boasts more than 100,000 users. Pandey is a writer for IEEE’s Careers newsletter.
The scholarship supported his degree in computer science at Stanford.
He credits his science-fair experience with giving him the confidence to innovate and advance the field.
His winning project, A Microwave Metamaterial Lens With Negative Index of Refraction, focused on building a lens array to transmit microwave signals and on testing how the lens affected the propagation of electromagnetic waves.
“When I heard my name called, I couldn’t stop smiling,” he recalls, “because an organization like IEEE believed in me.”
Pandey advises high school students that “the world is your oyster, if you have curiosity. You don’t have to wait until you feel ready.”
Harikrishna “Hari” Rallapalli, the 2008 scholarship recipient, is a research fellow at the U.S. National Institute of Neurological Disorders and Stroke, in Bethesda, Md.
Rallapalli says he plans to research techniques that enable gene expression imaging in humans, a method that allows for the visualization and quantification of the activity of specific genes.
His winning project, Low-Cost Total Internal Reflection Fluorescence Microscopy, focused on building a microscope for classrooms, both for demonstrations and student-level research.
The scholarship helped support his education at the University of California, Davis, where he earned a bachelor’s degree in biomedical engineering.
“It felt amazing to have my work recognized by anyone, let alone an organization as prestigious as IEEE,” he says. “It was an early indication that I might cut it as a scientist.”
Jessica Richeri, the 2011 recipient, is a software design engineer at Fluke in Everett, Wash. She is designing and developing a supervisory control and data acquisition (SCADA) system for one of the company’s factories. The system collects data from equipment and creates analytics dashboards and reports.
Richeri’s winning entry, Autonomous Robotic Vehicle: Saving Lives, Preventing Accidents, One at a Time, centered on building a vehicle and software to support it. Ultimately, she says, her design and its use of sensors and software could be incorporated into vehicles to prevent traffic accidents.
“The scholarship meant the world [to me]. I felt so honored that I was chosen to receive the award for all the hard work I put into my project,” she says.
A year after receiving the scholarship, she was invited to the California Capitol, in Sacramento, to present her project and discuss promoting STEM fields with her U.S. representative.
The money supported her education at California Baptist University, in Riverside, where she earned a bachelor’s degree in electrical and computer engineering.
She advises aspiring engineers that “the journey might be challenging, but the sense of accomplishment and the impact you’ll make in the world are more than worth the effort.”
The 2014 scholarship recipient, George Morgan, presented A Multi-Architectural Approach to the Development of Embedded Software. His aim, he says, was to make hardware and software development more accessible. He transformed his project into a suite of development tools for embedded-systems engineers to expedite operations.
“I remember walking on stage and feeling the excitement of being recognized for my project,” Morgan says. “In that instant, all my hard work felt validated, and I knew someone understood the level of difficulty and commitment required to reach that point.”
The scholarship supported his education at the Rochester Institute of Technology, in New York, where he graduated with a bachelor’s degree in computer engineering in 2017.
He began working at Tesla in 2018 on the AI team, dealing with the software and hardware that powers the electric vehicles’ autopilot.
Recently named among Forbes’ 30 Under 30 in AI, he founded Symbolica, a research-focused startup that develops foundational AI models and alternatives to the transformer architecture used in ChatGPT.
Alex Tacescu received the 2015 scholarship for Project Maverick (now known as Mavdrive). The project let users stand upright while moving around on a wheeled, motorized 0.6- by 0.6-meter platform. It is similar to a Segway but more stable, with four wheels instead of two, each powered by an independently controlled motor. He says it serves as a pathway to learn new technologies including control engineering, robot autonomy, simulation, and AI-powered machine vision.
“Winning the scholarship gave me confidence that my engineering passion could become a career,” Tacescu says. “It was the start of my incredibly fun and exciting professional journey. I will never forget the moment I was called up on that stage.”
He earned bachelor’s and master’s degrees in robotics engineering, both from Worcester Polytechnic Institute, in Massachusetts. He joined SpaceX as a Falcon flight software engineer and contributed to two groundbreaking missions: DART, the first human-made object to measurably move a celestial body, and Inspiration4, the first all-civilian mission to orbit. The documentary Countdown: Inspiration4 Mission to Space is now on Netflix.
After nearly three years at SpaceX, Tacescu joined Inversion Space as a flight software engineer. The startup is focused on developing re-entry vehicles.
He advises high school seniors to “find their passion and keep at it. It’s going to be hard, and there will be very rough moments, but having it part of your passion makes it so much more fun, especially when those accomplishments start coming in.”
Kerem Bayhan, the 2021 scholarship recipient, won for a project focused on a system to help prevent underride car crashes, which occur when a vehicle collides with the rear or side of a large truck and gets stuck under it.
Driven by a desire to make an impact on human lives, Bayhan says, he shifted his focus from engineering to medicine. He is currently a student at Hacettepe University, in Ankara, Türkiye.
“Winning the IEEE Presidents’ Scholarship award was an incredible and unexpected honor,” he says. “To have my project recognized by IEEE, one of the largest and most prestigious organizations in engineering, was immensely rewarding.”
He says he believes engineering skills are invaluable across many fields: “The analytical thinking, problem-solving abilities, and creativity at the core of engineering can set individuals apart, no matter what path they choose to follow.”
Amon Schumann was the 2022 scholarship recipient for his Small Radiosondes on a Great Mission project, an eco-friendly, cost-effective weather balloon that stays in the air longer than traditional ones. Now he’s studying electrical engineering at Technische Universität Berlin, where he has discovered an interest in high-frequency technology and circuit design.
After receiving the scholarship, Schumann enhanced his ISEF project by adding a live-streaming camera and additional sensors.
“My system initially allowed for flights lasting a few weeks,” he says. “I’ve since refined it to enable the balloons to collect data in the stratosphere for several months.”
Winning the scholarship was a significant motivator, he says, because it validated his basement-developed project and showed its potential for broad impact.
“Receiving the scholarship gave me the opportunity to become deeply involved with IEEE,” he says, “opening doors to connect with key decision-makers in science and technology and leading to my student researcher role” at a Berlin-based research institute specializing in high-frequency electronics.
Rohan Kalia, the 2023 scholarship recipient, is a senior at Wheeler High School in Marietta, Ga. For his winning project, he developed EyePal, an inexpensive tool for early glaucoma detection.
After receiving the scholarship, he continued to refine his project, enhancing its functionality.
“I open-sourced the parts, so if anyone wants to build on my device, they can,” Kalia says.
He was honored to receive the award, he says, knowing that past scholarship winners had made such creative projects.
“I really enjoyed the conversation with my interviewers,” he says. “We discussed the technical aspects of possible solutions and their trade-offs.”
He advises high school students to “keep an open mind” about their interests.
“Be curious about many different things,” he suggests, “as they connect in interesting ways. Once you find a topic you can’t stop thinking about, start a project to explore it.”
Angelina Kim was last year’s scholarship recipient. She is a high school senior at the Bishop’s School in La Jolla, Calif., where she is president of the All Girls STEM Society, a nonprofit, student-led organization that holds free monthly workshops for girls in grades 3 to 8 across San Diego. She plans to study electrical engineering at MIT.
Kim won the scholarship for her Autonomous Scout and Rescue UAVs for Ocean Safety. She developed a drone that could survey the shoreline, taking photographs and analyzing them to identify rip currents.
To continue her work on the project, she is chief executive of AngelTech, a startup dedicated to enhancing public safety through innovative technologies.
“Through AngelTech, I’m partnering with local lifeguards to deploy my lifeguard scout and rescue drones on nearby beaches,” she says.
She is also developing new technologies to enhance public safety, she says, including a synchronized display created from several device screens. She holds a patent for the invention.
She says she was thrilled to receive the scholarship because she knew it would help her develop valuable contacts within IEEE and provide support for her future research.
“I hope to use the connections I’ve made through the scholarship to share my research and networking experiences with fellow engineers and companies, and to serve as a mentor for young girls who have limited access to STEM resources,” she says.
An article about this year’s recipient is scheduled to be published in The Institute in August.
2025-06-25 01:52:19
As foundational AI models grow in power and reach, they also expose new attack surfaces, vulnerabilities, and ethical risks. This white paper by the Secure Systems Research Center (SSRC) at the Technology Innovation Institute (TII) outlines a comprehensive framework to ensure security, resilience, and safety in large-scale AI models. By applying Zero-Trust principles, the framework addresses threats across training, deployment, inference, and post-deployment monitoring. It also considers geopolitical risks, model misuse, and data poisoning, offering strategies such as secure compute environments, verifiable datasets, continuous validation, and runtime assurance. The paper proposes a roadmap for governments, enterprises, and developers to collaboratively build trustworthy AI systems for critical applications.
2025-06-24 00:38:43
More and more satellites are being added to low Earth orbit (LEO) every month. As that number continues to increase, so do the risks of that critical region around Earth becoming impassable, trapping us on the planet for the foreseeable future. Ideas from different labs have presented potential solutions to this problem, but one of the most promising, electrodynamic tethers (EDTs), has only now begun to be tested in space. A new CubeSat mission called the Spacecraft for Advanced Research and Cooperative Studies (SPARCS), from researchers at the Sharif University of Technology in Tehran, hopes to contribute to that effort by testing an EDT and an intersatellite communication system, as well as collecting real-time data on the radiation environment of its orbital path.
SPARCS actually consists of two separate CubeSats. SPARCS-A is a 1U CubeSat designed primarily as a communications platform, with the mission design requiring it to talk to SPARCS-B, a 2U CubeSat that, in addition to the communication system, contains an EDT. That tether, which can measure up to 12 meters in length, is deployed via a servomotor, with a camera watching to ensure proper deployment.
EDTs are essentially giant poles with electric current running through them. They use this current, and the force that Earth’s magnetic field exerts on it (the Lorentz force), to push against the planet’s magnetosphere. This allows the satellite to adjust its orbit without the use of fuel, simply by orienting its EDT in a specific direction (which the EDT itself can assist with) and then using the Lorentz force either to push itself up into a higher orbit or, more significant for the purposes of technology demonstration, to slow the CubeSat down to the point where it can make a controlled entry into the atmosphere.
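To get a feel for the forces involved, here is a rough back-of-the-envelope sketch in Python. Only the 12-meter tether length comes from the mission description above; the current and the assumption that the tether sits perpendicular to the geomagnetic field are illustrative guesses, not SPARCS specifications.

```python
current_A = 0.5        # assumed tether current (illustrative, not a SPARCS figure)
tether_m = 12.0        # tether length from the mission description
b_field_T = 30e-6      # rough geomagnetic field strength in LEO, tesla

# Force on a straight current-carrying conductor: F = I * (L x B).
# Assuming the tether is perpendicular to the field, |F| = I * L * |B|.
force_N = current_A * tether_m * b_field_T
print(f"Lorentz force on the tether: {force_N * 1e6:.0f} micronewtons")
# ~180 micronewtons: tiny, but applied continuously it can raise or lower
# a CubeSat's orbit, or deorbit it, without burning any propellant.
```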
That controlled-entry feature is why EDTs have garnered so much attention. Previous missions, such as KITE from JAXA and MiTEE from the University of Michigan, have already attempted to use EDTs to change their orbits. Unfortunately, neither mission successfully operated its EDT, though a follow-up mission called MiTEE-2 is in the works with an even longer EDT than SPARCS’s.
The final piece of SPARCS’ kit is its dosimeter, which is intended to monitor the radiation environment of its orbit. As anyone familiar with spacecraft design knows, radiation hardening of electronics is critical to the success of a mission, but it is also expensive and time-consuming, so it is best done only to the level a mission actually requires. Understanding the radiation environment of this popular orbital path can help future engineers make better, and hopefully less expensive, design decisions tailored to operation in this specific region.
Engineers have already finalized the design for the mission and have run simulations showing its expected operations. They have now moved on to building an engineering model of the two CubeSats, allowing them to validate their design and test the real-world implementation before it is ready for launch. Given the current turmoil in that region of the world, there is a chance that conflict could halt development of the system. But if it is successfully tested and launched, SPARCS could deliver the first successful orbital demonstration of an EDT in the not-too-distant future.
2025-06-23 20:22:12
This is a sponsored article brought to you by POWER Engineers, Member of WSP.
Digital transformation is reshaping industries across the globe, and the power delivery sector is no exception. As demand for reliable and efficient energy supply continues to grow, the need to modernize and optimize operations becomes increasingly critical. By leveraging digital tools and technologies, utilities are unlocking unprecedented opportunities to enhance precision, efficiency and resilience throughout the power delivery value chain—from generation to distribution.
However, while digitalization offers transformative potential, the power delivery industry continues to grapple with substantial technical and operational challenges. Many utilities still operate with legacy or manual security protocols that rely on reactive rather than proactive strategies. The slow pace of technology adoption further compounds these issues, increasing the vulnerability of critical assets to inefficiencies, downtime and physical threats. Overcoming these obstacles requires a strategic shift toward innovative solutions that drive measurable improvements in safety, reliability and operational optimization.
Meerkat takes the guesswork out of substation security by integrating high-fidelity data with real-time 3D mitigation modeling. This sophisticated approach identifies all line-of-sight vulnerabilities and delivers robust protection for critical infrastructure in an increasingly complex threat landscape. Video: POWER Engineers, Member of WSP
Physical attacks on substations are becoming increasingly prevalent and sophisticated. As technology evolves, so do the bad actors that are trying to take down the grid. Many mitigation methods are no longer sufficient against modern methods of attack. These facilities, which are crucial to keeping the grid operational, must be able to comprehensively assess and adapt to new threats. Digital transformation is the key to this goal.
Physical breach events, defined here as physical attacks, vandalism, theft and suspicious activity, accounted for more than half of all electric disturbance events reported to the United States Department of Energy in 2023. POWER Engineers, Member of WSP
Conventional site analysis methods in power delivery are often inefficient and prone to inaccuracies, particularly at substations, where the shortcomings can lead to significant vulnerabilities.
Physical site walkthroughs to identify areas of vulnerability, for example, are inherently subjective and susceptible to human error. Compounding matters, safety concerns in high-voltage environments, coordination challenges and access restrictions to areas not owned by the substation can result in incomplete assessments and evaluations fraught with delays.
Static analysis is also limited by outdated or erroneous publicly available data, hindering precise assessments and delaying decision-making processes. For instance, assets captured in publicly available data may misrepresent recent construction near the site, which may create new lines of sight to critical assets.
Meerkat, developed by POWER Engineers, Member of WSP, leverages advanced technology to enhance threat assessment accuracy, significantly reducing assessment times, lowering mitigation costs and improving overall protection at substation facilities.
The Vulnerability of Integrated Security Analysis (VISA) method attempts to address some of these shortcomings by leveraging expert collaboration. Yet, it too has limitations—expertise variability among participants can lead to unrepresented perspectives, and reliance on static drawings and resources hampers effective visualization during sessions.
In contrast, some utilities opt for no analysis at all, erecting perimeter walls around facilities without pinpointing specific vulnerabilities. This approach often results in overbuilding and overspending while potentially leaving critical assets exposed due to overlooked threats from neighboring structures or terrain features.
Communication silos between stakeholders can also exacerbate these inefficiencies.
Emerging tools and technologies have the ability to address the longstanding inefficiencies in physical substation security.
Integrating cutting-edge technologies such as real-time data analytics and remote sensing, for example, can significantly enhance the precision and efficiency of security assessments. These tools provide dynamic insights into potential vulnerabilities, enabling proactive measures that adapt to emerging threats.
Transitioning from subjective assessments to data-backed evaluations ensures that decisions are grounded in accurate information rather than intuition alone. Robust datasets allow for thorough risk analyses that prioritize high-impact vulnerabilities while optimizing resource allocation.
Embracing flexible solutions capable of scaling with evolving infrastructure requirements or regulatory changes over time ensures continued relevance amid shifting industry landscapes driven by technological advancements or policy shifts.
To solve the insufficiencies found within conventional site assessment methodologies, POWER Engineers, Member of WSP, designed a transformative threat assessment tool called Meerkat. Meerkat harnesses high-quality data and advanced modeling techniques to deliver comprehensive vulnerability assessments customized to each unique facility. It is offered alongside an industry-leading team of experts who can help break down costs, explore alternative mitigations and address operational concerns.
Meerkat revolutionizes physical substation security by offering a more accurate and thorough analysis compared to conventional approaches. It mitigates the risk of human error inherent in manual inspections and overcomes access limitations through advanced remote sensing capabilities. Additionally, Meerkat facilitates seamless collaboration among stakeholders by providing dynamic, easily interpretable visualizations that enhance communication and decision-making processes. Analyses can even be performed in a secure, online workshop, allowing subject matter experts to skip the travel delays and jump right into the action.
By using Meerkat in substation security projects, utilities can transition from reactive to proactive strategies that anticipate and counter potential vulnerabilities before they are exploited. This shift not only ensures compliance with regulatory standards but also aligns security enhancements with financial objectives, ultimately safeguarding both assets and investments in a rapidly changing technological landscape.
The Meerkat assessment features real-time mitigation modeling, optimizes camera placement, and identifies all vulnerabilities that could be exploited by malicious actors. POWER Engineers, Member of WSP
Meerkat starts with data collection. When pre-existing data of the site is available and of good quality and accuracy, it can be used for this process. However, when there is not sufficient data available, the Meerkat team collects its own high-fidelity data of the study area. This includes the substation facility, property and all surrounding terrain and infrastructure within an established radius of concern.
Next, the high-quality data is transformed into an interactive 3D model in a virtual environment. The model is so accurate that it can facilitate virtual site visits. Users can navigate around the substation environment by clicking and dragging on screen and can visualize the site from any point ranging from a bird’s-eye view to the perspective of a potential bad actor looking into the station.
This interactive model serves as a virtual sandbox where mitigation strategies can be tested in real time. It can comprehensively and objectively map all line-of-sight vulnerabilities—big and small—that a bad actor might use to attack critical components. Then, existing or proposed mitigation strategies, if available, can be tested and validated within the system. This stage is great for testing what-if scenarios and seeing how multiple mitigations interact if combined before construction even comes into play.
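Meerkat’s modeling engine is proprietary, but the core idea of a line-of-sight check can be sketched in a few lines. The snippet below is a simplified illustration on a 2D height grid, with hypothetical function and variable names of my own; a real assessment works on a full 3D model of terrain, structures, and proposed mitigations.

```python
import numpy as np

def has_line_of_sight(terrain, observer, target, samples=200):
    """Illustrative line-of-sight test on a 2D height grid.

    terrain:  2D array of ground/obstruction heights (meters)
    observer: (row, col, height_above_ground) of a potential vantage point
    target:   (row, col, height_above_ground) of a critical asset
    Returns True if no sampled point along the path rises above the sight line.
    """
    r0, c0, h0 = observer
    r1, c1, h1 = target
    z0 = terrain[r0, c0] + h0
    z1 = terrain[r1, c1] + h1
    for t in np.linspace(0.0, 1.0, samples):
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        sight_z = z0 + t * (z1 - z0)       # height of the sight line at this point
        if terrain[r, c] > sight_z:        # something blocks the view
            return False
    return True

# Toy example: a hypothetical 5-meter wall between an off-site vantage point
# and a transformer. The wall breaks the line of sight, so the check returns False.
grid = np.zeros((100, 100))
grid[50, 10:90] = 5.0
print(has_line_of_sight(grid, (10, 50, 1.7), (90, 50, 3.0)))
```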
POWER’s team of industry-leading experts use their knowledge to guide iterative solutions that bring substation owners and operators closer to the best-cost solutions for their substations. Sometimes moving or changing the height of a proposed wall is all it takes to drastically improve protections without drastically changing the price. A built-in cost estimator can also give a rough idea of how material costs change as the design does.
Meerkat is an industry-leading technology that offers unparalleled benefits in conducting thorough vulnerability assessments for critical assets at substations. By leveraging sophisticated algorithms and high-quality data, Meerkat delivers precise evaluations that pinpoint potential weaknesses with exceptional accuracy. This comprehensive approach means that every aspect of a substation’s physical security is meticulously analyzed, leaving no stone unturned.
One of the key advantages of Meerkat is its ability to significantly enhance efficiency in the assessment process. This not only reduces the time and resources required for site assessments but also ensures consistent and reliable results.
Meerkat also compresses an evaluation and design process that can otherwise take months of back-and-forth communication into just a handful of hour-long workshops.
Accuracy is another hallmark of Meerkat, as it eliminates the guesswork associated with human-based evaluations. By leveraging advanced modeling techniques, Meerkat provides actionable insights that empower utilities to make informed decisions regarding security upgrades and mitigations. This precision facilitates proactive risk management strategies, allowing stakeholders to address vulnerabilities before they manifest into tangible threats.
Ultimately, by improving both efficiency and accuracy in vulnerability assessments, Meerkat enables better decision-making processes that enhance overall risk management. Utilities can confidently implement targeted security measures tailored to each site’s unique needs, ensuring robust protection against emerging threats while optimizing resource allocation. In a landscape where rapid technological advancements challenge conventional practices, Meerkat stands as a vital tool for safeguarding critical infrastructure with foresight and precision.
The following case study has been sanitized of identifying information to maintain the security of the facility.
Background
A client faced a critical decision regarding the security of their substation, which was surrounded by a chain-link fence spanning 3,523 linear feet. Concerned about potential line-of-sight attacks on their critical assets, they planned to construct a new 15 ft tall concrete masonry unit (CMU) wall around the entire perimeter. Before proceeding with this significant investment, they sought validation from physical security experts at POWER and used the advanced threat assessment capabilities of Meerkat.
Security Plan Validation
To assess the effectiveness of the proposed security plan, Meerkat was employed to model the 15 ft wall within a highly accurate digital representation of the facility and its surroundings. The comprehensive data-backed threat assessment revealed lingering vulnerabilities despite the proposed construction. With estimated costs between $12 million and $15 million—and additional expenses for ballistic rated gates—the financial implications were substantial.
Working Backward
Recognizing that the original plan might not sufficiently mitigate risks, the client collaborated with Meerkat experts and key personnel across disciplines—including electrical engineers, civil engineers and transmission planners—to explore alternative strategies. Through a series of concise workshops over several days, they reimagined security designs by focusing on protecting critical assets identified as essential to system stability.
Meerkat enabled real-time modeling and testing of diverse mitigation strategies. Its interactive features allowed stakeholders to dynamically adjust protective measures—such as repositioning or resizing ballistic barriers—with immediate insights into effectiveness against vulnerabilities. This iterative process prioritized achieving the optimal balance between cost efficiency and robust protection.
The Results
Through strategic analysis using Meerkat, it became clear that constructing two separate 166 ft long, 25 ft tall walls at targeted locations around critical assets offered superior protection compared to encircling the entire perimeter with a single structure. This solution significantly enhanced security while reducing the estimated implementation costs to approximately $3.4 million—about a quarter of the cost of the initial projections.
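As a rough sanity check, the figures quoted in this case study reduce to a per-foot comparison. The arithmetic below simply reuses the numbers above; the taller targeted walls cost considerably more per linear foot, but there is far less wall to build.

```python
# Back-of-envelope comparison using only the figures quoted in the case study.
perimeter_ft = 3523                # original chain-link fence length
perimeter_cost = (12e6, 15e6)      # estimated cost of a 15-ft wall around it all

targeted_ft = 2 * 166              # two walls protecting only the critical assets
targeted_cost = 3.4e6              # estimated cost of the revised design

print(f"Perimeter wall: ${perimeter_cost[0] / perimeter_ft:,.0f}-"
      f"${perimeter_cost[1] / perimeter_ft:,.0f} per linear foot")
print(f"Targeted walls: ${targeted_cost / targeted_ft:,.0f} per linear foot")
print(f"Total: ${targeted_cost / 1e6:.1f}M vs. $12M-$15M, roughly a quarter of the midpoint")
```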
Ultimately, the revised approach not only lowered risk profiles but also prevented unnecessary expenditure on inadequate defenses. By leveraging the advanced technology provided by Meerkat, the client successfully optimized resource allocation, comprehensively safeguarding their vital infrastructure.
Any entity interested in learning more about Meerkat and its applications can request a free demonstration from our team of experts at meerkat.powereng.com.
2025-06-23 12:01:02
Night is falling on Cerro Pachón.
A view of the NSF-DOE Vera C. Rubin Observatory beneath the Milky Way galaxy. NSF-DOE Vera C. Rubin Observatory/H. Stockebrand
Stray clouds reflect the last few rays of golden light as the sun dips below the horizon. I focus my camera across the summit to the westernmost peak of the mountain. Silhouetted within a dying blaze of red and orange light looms the sphinxlike shape of the Vera C. Rubin Observatory.
“Not bad,” says William O’Mullane, the observatory’s deputy project manager, amateur photographer, and master of understatement. We watch as the sky fades through reds and purples to a deep, velvety black. It’s my first night in Chile. For O’Mullane, and hundreds of other astronomers and engineers, it’s the culmination of years of work, as the Rubin Observatory is finally ready to go “on sky.”
Rubin is unlike any telescope ever built. Its exceptionally wide field of view, extreme speed, and massive digital camera will soon begin the 10-year Legacy Survey of Space and Time (LSST) across the entire southern sky. The result will be a high-resolution movie of how our solar system, galaxy, and universe change over time, along with hundreds of petabytes of data representing billions of celestial objects that have never been seen before.
Stars begin to appear overhead, and O’Mullane and I pack up our cameras. It’s astronomical twilight, and after nearly 30 years, it’s time for Rubin to get to work.
The top of Cerro Pachón is not a big place. Spanning about 1.5 kilometers at 2,647 meters of elevation, its three peaks are home to the Southern Astrophysical Research Telescope (SOAR), the Gemini South Telescope, and for the last decade, the Vera Rubin Observatory construction site. An hour’s flight north of the Chilean capital of Santiago, these foothills of the Andes offer uniquely stable weather. The Humboldt Current flows just offshore, cooling the surface temperature of the Pacific Ocean enough to minimize atmospheric moisture, resulting in some of the best “seeing,” as astronomers put it, in the world.
It’s a complicated but exciting time to be visiting. It’s mid-April of 2025, and I’ve arrived just a few days before “first photon,” when light from the night sky will travel through the completed telescope and into its camera for the first time. In the control room on the second floor, engineers and astronomers make plans for the evening’s tests. O’Mullane and I head up into a high bay that contains the silvering chamber for the telescope’s mirrors and a clean room for the camera and its filters. Increasingly exhausting flights of stairs lead to the massive pier on which the telescope sits, and then up again into the dome.
I suddenly feel very, very small. The Simonyi Survey Telescope towers above us—350 tonnes of steel and glass, nestled within the 30-meter-wide, 650-tonne dome. One final flight of stairs and we’re standing on the telescope platform. In its parked position, the telescope is pointed at the horizon, meaning that it’s looking straight at me as I step in front of it and peer inside.
The telescope’s enormous 8.4-meter primary mirror is so flawlessly reflective that it’s essentially invisible. Made of a single piece of low-expansion borosilicate glass covered in a 120-nanometer-thick layer of pure silver, the huge mirror acts as two different mirrors, with a more pronounced curvature toward the center. Standing this close means that different reflections of the mirrors, the camera, and the structure of the telescope all clash with one another in a way that shifts every time I move. I feel like if I can somehow look at it in just the right way, it will all make sense. But I can’t, and it doesn’t.
I’m rescued from madness by O’Mullane snapping photos next to me. “Why?” I ask him. “You see this every day, right?”
“This has never been seen before,” he tells me. “It’s the first time, ever, that the lens cover has been off the camera since it’s been on the telescope.” Indeed, deep inside the nested reflections I can see a blue circle, the r-band filter within the camera itself. As of today, it’s ready to capture the universe.
Back down in the control room, I find director of construction Željko Ivezić. He’s just come up from the summit hotel, which has several dozen rooms for lucky visitors like myself, plus a few even luckier staff members. The rest of the staff commutes daily from the coastal town of La Serena, a 4-hour round trip.
To me, the summit hotel seems luxurious for lodgings at the top of a remote mountain. But Ivezić has a slightly different perspective. “The European-funded telescopes,” he grumbles, “have swimming pools at their hotels. And they serve wine with lunch! Up here, there’s no alcohol. It’s an American thing.” He’s referring to the fact that Rubin is primarily funded by the U.S. National Science Foundation and the U.S. Department of Energy’s Office of Science, which have strict safety requirements.
Originally, Rubin was intended to be a dark-matter survey telescope, to search for the 85 percent of the mass of the universe that we know exists but can’t identify. In the 1970s, astronomer Vera C. Rubin pioneered a spectroscopic method to measure the speed at which stars orbit around the centers of their galaxies, revealing motion that could be explained only by the presence of a halo of invisible mass at least five times the apparent mass of the galaxies themselves. Dark matter can warp the space around it enough that galaxies act as lenses, bending light from even more distant galaxies as it passes around them. It’s this gravitational lensing that the Rubin observatory was designed to detect on a massive scale. But once astronomers considered what else might be possible with a survey telescope that combined enormous light-collecting ability with a wide field of view, Rubin’s science mission rapidly expanded beyond dark matter.
Trading the ability to focus on individual objects for a wide field of view that can see tens of thousands of objects at once provides a critical perspective for understanding our universe, says Ivezić. Rubin will complement other observatories like the Hubble Space Telescope and the James Webb Space Telescope. Hubble’s Wide Field Camera 3 and Webb’s Near Infrared Camera have fields of view of less than 0.05 square degrees each, equivalent to just a few percent of the size of a full moon. The upcoming Nancy Grace Roman Space Telescope will see a bit more, with a field of view of about one full moon. Rubin, by contrast, can image 9.6 square degrees at a time—about 45 full moons’ worth of sky.
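A quick back-of-the-envelope check of that full-moon comparison, assuming a typical apparent lunar diameter of about 0.5 degrees (the exact value varies over the moon’s orbit), and using the approximate field-of-view figures given above:

```python
import math

full_moon_diameter_deg = 0.5                              # assumed apparent diameter
moon_area = math.pi * (full_moon_diameter_deg / 2) ** 2   # ~0.196 square degrees

fields_of_view = {
    "Hubble WFC3 / Webb NIRCam (approx. upper bound)": 0.05,
    "Roman (planned)": moon_area,
    "Rubin LSST camera": 9.6,
}
for name, fov in fields_of_view.items():
    print(f"{name}: {fov:.3g} sq deg = {fov / moon_area:.2g} full moons")
# Rubin works out to roughly 49 moon-areas, in the same ballpark as the
# "about 45 full moons" figure quoted above.
```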
That ultrawide view offers essential context, Ivezić explains. “My wife is American, but I’m from Croatia,” he says. “Whenever we go to Croatia, she meets many people. I asked her, ‘Did you learn more about Croatia by meeting many people very superficially, or because you know me very well?’ And she said, ‘You need both. I learn a lot from you, but you could be a weirdo, so I need a control sample.’ ” Rubin is providing that control sample, so that astronomers know just how weird whatever they’re looking at in more detail might be.
Rubin Observatory’s Skyviewer app lets you explore its stunning first images by interactively navigating a vast, detailed view of the cosmos — you can zoom in and out and move around to examine the rich tapestry of stars and galaxies in extraordinary detail. The area observed includes the southern region of the Virgo Cluster — approximately 55 million light-years from Earth — as well as closer stars in the Milky Way and much more distant galaxy groups. This image, built from over 3 trillion pixels of data collected in just seven nights, contains millions of galaxies. Eventually, the full Legacy Survey of Space and Time (LSST) will catalog about 20 billion galaxies of all types, and from all times in the history of the Universe.
Every night, the telescope will take a thousand images, one every 34 seconds. After three or four nights, it’ll have the entire southern sky covered, and then it’ll start all over again. After a decade, Rubin will have taken more than 2 million images, generated 500 petabytes of data, and visited every object it can see at least 825 times. In addition to identifying an estimated 6 million bodies in our solar system, 17 billion stars in our galaxy, and 20 billion galaxies in our universe, Rubin’s rapid cadence means that it will be able to delve into the time domain, tracking how the entire southern sky changes on an almost daily basis.
Achieving these science goals meant pushing the technical envelope on nearly every aspect of the observatory. But what drove most of the design decisions is the speed at which Rubin needs to move (3.5 degrees per second)—the phrase most commonly used by the Rubin staff is “crazy fast.”
Crazy fast movement is why the telescope looks the way it does. The squat arrangement of the mirrors and camera centralizes as much mass as possible. Rubin’s oversize supporting pier is mostly steel rather than mostly concrete so that the movement of the telescope doesn’t twist the entire pier. And then there’s the megawatt of power required to drive this whole thing, which comes from huge banks of capacitors slung under the telescope to prevent a brownout on the summit every 30 seconds all night long.
Rubin is also unique in that it utilizes the largest digital camera ever built. The size of a small car and weighing 2,800 kilograms, the LSST camera captures 3.2-gigapixel images through six swappable color filters ranging from near infrared to near ultraviolet. The camera’s focal plane consists of 189 4K-by-4K charge-coupled devices grouped into 21 “rafts.” Every CCD is backed by 16 amplifiers that each read 1 million pixels, bringing the readout time for the entire sensor down to 2 seconds flat.
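The camera’s headline numbers are internally consistent, as a little arithmetic shows (treating “4K” as 4,096 pixels on a side, which is an assumption on my part):

```python
ccds = 189                       # 4K-by-4K CCDs in the focal plane
pixels_per_ccd = 4096 * 4096     # ~16.8 million pixels each
amplifiers_per_ccd = 16          # each reads out its own segment in parallel

total_pixels = ccds * pixels_per_ccd
print(f"Total pixels: {total_pixels / 1e9:.2f} gigapixels")                 # ~3.17
print(f"Pixels per amplifier: {pixels_per_ccd / amplifiers_per_ccd / 1e6:.2f} million")
print(f"Amplifiers reading out in parallel: {ccds * amplifiers_per_ccd}")   # 3,024
# Reading ~1 million pixels per amplifier across 3,024 amplifiers at once is
# what brings the full readout down to about 2 seconds.
```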
As humans with tiny eyeballs and short lifespans who are more or less stranded on Earth, we have only the faintest idea of how dynamic our universe is. To us, the night sky seems mostly static and also mostly empty. This is emphatically not the case.
In 1995, the Hubble Space Telescope pointed at a small and deliberately unremarkable part of the sky for a cumulative six days. The resulting image, called the Hubble Deep Field, revealed about 3,000 distant galaxies in an area that represented just one twenty-four-millionth of the sky. To observatories like Hubble, and now Rubin, the sky is crammed full of so many objects that it becomes a problem. As O’Mullane puts it, “There’s almost nothing not touching something.”
One of Rubin’s biggest challenges will be deblending—identifying and then separating things like stars and galaxies that appear to overlap. This has to be done carefully by using images taken through different filters to estimate how much of the brightness of a given pixel comes from each object.
At first, Rubin won’t have this problem. At each location, the camera will capture one 30-second exposure before moving on. As Rubin returns to each location every three or four days, subsequent exposures will be combined in a process called coadding. In a coadded image, each pixel represents all of the data collected from that location in every previous image, which results in a much longer effective exposure time. The camera may record only a few photons from a distant galaxy in each individual image, but a few photons per image added together over 825 images yields much richer data. By the end of Rubin’s 10-year survey, the coadding process will generate images with as much detail as a typical Hubble image, but over the entire southern sky. A few lucky areas called “deep drilling fields” will receive even more attention, with each one getting a staggering 23,000 images or more.
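Here is a minimal, illustrative sketch of coadding with NumPy. It simply averages aligned exposures and shows how a source far too faint to see in any single 30-second image becomes obvious after 825 visits; the real LSST pipelines also warp images onto a common grid, weight by image quality, and reject outliers, none of which is modeled here.

```python
import numpy as np

def coadd(images):
    """Average aligned exposures pixel by pixel (illustrative only)."""
    return np.stack(images, axis=0).mean(axis=0)

rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[32, 32] = 2.0   # a faint source, well below the per-image noise level of 5
exposures = [truth + rng.normal(0, 5, truth.shape) for _ in range(825)]

single, deep = exposures[0], coadd(exposures)
print(f"Source pixel in one exposure:  {single[32, 32]:.1f}  (noise ~5)")
print(f"Source pixel after 825 visits: {deep[32, 32]:.2f}  (noise ~{5 / np.sqrt(825):.2f})")
# Averaging N exposures suppresses random noise by roughly sqrt(N), which is why
# the coadded survey approaches Hubble-like depth over the whole southern sky.
```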
Rubin will add every object that it detects to its catalog, and over time, the catalog will provide a baseline of the night sky, which the observatory can then use to identify changes. Some of these changes will be movement—Rubin may see an object in one place, and then spot it in a different place some time later, which is how objects like near-Earth asteroids will be detected. But the vast majority of the changes will be in brightness rather than movement.
Every image that Rubin collects will be compared with a baseline image, and any change will automatically generate a software alert within 60 seconds of when the image was taken. Rubin’s wide field of view means that there will be a lot of these alerts—on the order of 10,000 per image, or 10 million alerts per night. Other automated systems will manage the alerts. Called alert brokers, they ingest the alert streams and filter them for the scientific community. If you’re an astronomer interested in Type Ia supernovae, for example, you can subscribe to an alert broker and set up a filter so that you’ll get notified when Rubin spots one.
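Exact alert formats and broker interfaces vary, but the filtering idea amounts to a predicate applied to a stream of alert records. The sketch below uses made-up fields and thresholds purely for illustration; real Rubin alerts are richly structured packets with image cutouts and light-curve history.

```python
# Hypothetical alert records; real brokers ingest far richer packets.
alerts = [
    {"id": 1, "kind": "SN Ia candidate", "mag": 21.3, "dec_deg": -30.1},
    {"id": 2, "kind": "variable star",   "mag": 17.8, "dec_deg": -12.4},
    {"id": 3, "kind": "SN Ia candidate", "mag": 22.9, "dec_deg": -55.0},
]

def my_filter(alert):
    """Keep only bright Type Ia supernova candidates in the far southern sky."""
    return (alert["kind"] == "SN Ia candidate"
            and alert["mag"] < 22.5
            and alert["dec_deg"] < -20.0)

for alert in filter(my_filter, alerts):
    print(f"Notify: alert {alert['id']} ({alert['kind']}, mag {alert['mag']})")
```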
Many of these alerts will be triggered by variable stars, which cyclically change in brightness. Rubin is also expected to identify somewhere between 3 million and 4 million supernovae—that works out to over a thousand new supernovae for every night of observing. And the rest of the alerts? Nobody knows for sure, and that’s why the alerts have to go out so quickly, so that other telescopes can react to make deeper observations of what Rubin finds.
After the data leaves Rubin’s camera, most of the processing will take place at the SLAC National Accelerator Laboratory in Menlo Park, Calif., over 9,000 kilometers from Cerro Pachón. It takes less than 10 seconds for an image to travel from the focal plane of the camera to SLAC, thanks to a 600-gigabit fiber connection from the summit to La Serena, and from there, a dedicated 100-gigabit line and a backup 40-gigabit line that connect to the Department of Energy’s science network in the United States. The 20 terabytes of data that Rubin will produce nightly makes this bandwidth necessary. “There’s a new image every 34 seconds,” O’Mullane tells me. “If I can’t deal with it fast enough, I start to get behind. So everything has to happen on the cadence of half a minute if I want to keep up with the data flow.”
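Those quoted figures pencil out to a sustained throughput of only a few gigabits per second on average, as this rough calculation shows (it spreads the nightly 20 terabytes evenly across a thousand 34-second visits, ignoring bursts and overheads):

```python
nightly_data_TB = 20
images_per_night = 1000
seconds_per_image = 34

avg_per_image_GB = nightly_data_TB * 1000 / images_per_night   # ~20 GB per visit
sustained_gbps = avg_per_image_GB * 8 / seconds_per_image       # bits, not bytes
print(f"~{avg_per_image_GB:.0f} GB per image, ~{sustained_gbps:.1f} Gbit/s sustained")
# Comfortably inside the dedicated 100-gigabit line, but the 34-second cadence
# leaves no slack: fall behind once and the backlog only grows.
```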
At SLAC, each image will be calibrated and cleaned up, including the removal of satellite trails. Rubin will see a lot of satellites, but since the satellites are unlikely to appear in the same place in every image, the impact on the data is expected to be minimal when the images are coadded. The processed image is compared with a baseline image and any alerts are sent out, by which time processing of the next image has already begun.
As Rubin’s catalog of objects grows, astronomers will be able to query it in all kinds of useful ways. Want every image of a particular patch of sky? No problem. All the galaxies of a certain shape? A little trickier, but sure. Looking for 10,000 objects that are similar in some dimension to 10,000 other objects? That might take a while, but it’s still possible. Astronomers can even run their own code on the raw data.
“Pretty much everyone in the astronomy community wants something from Rubin,” O’Mullane explains, “and so they want to make sure that we’re treating the data the right way. All of our code is public. It’s on GitHub. You can see what we’re doing, and if you’ve got a better solution, we’ll take it.”
One better solution may involve AI. “I think as a community we’re struggling with how we do this,” says O’Mullane. “But it’s probably something we ought to do—curating the data in such a way that it’s consumable by machine learning, providing foundation models, that sort of thing.”
The data management system is arguably as much of a critical component of the Rubin observatory as the telescope itself. While most telescopes make targeted observations that get distributed to only a few astronomers at a time, Rubin will make its data available to everyone within just a few days, which is a completely different way of doing astronomy. “We’ve essentially promised that we will take every image of everything that everyone has ever wanted to see,” explains Kevin Reil, Rubin observatory scientist. “If there’s data to be collected, we will try to collect it. And if you’re an astronomer somewhere, and you want an image of something, within three or four days we’ll give you one. It’s a colossal challenge to deliver something on this scale.”
The more time I spend on the summit, the more I start to think that the science that we know Rubin will accomplish may be the least interesting part of its mission. And despite their best efforts, I get the sense that everyone I talk to is wildly understating the impact it will have on astronomy. The sheer volume of objects, the time domain, the 10 years of coadded data—what new science will all of that reveal? Astronomers have no idea, because we’ve never looked at the universe in this way before. To me, that’s the most fascinating part of what’s about to happen.
Reil agrees. “You’ve been here,” he says. “You’ve seen what we’re doing. It’s a paradigm shift, a whole new way of doing things. It’s still a telescope and a camera, but we’re changing the world of astronomy. I don’t know how to capture—I mean, it’s the people, the intensity, the awesomeness of it. I want the world to understand the beauty of it all.”
Because nobody has built an observatory like Rubin before, there are a lot of things that aren’t working exactly as they should, and a few things that aren’t working at all. The most obvious of these is the dome. The capacitors that drive it blew a fuse the day before I arrived, and the electricians are off the summit for the weekend. The dome shutter can’t open either. Everyone I talk to takes this sort of thing in stride—they have to, because they’ve been troubleshooting issues like these for years.
I sit down with Yousuke Utsumi, a camera operations scientist who exudes the mixture of excitement and exhaustion that I’m getting used to seeing in the younger staff. “Today is amazingly quiet,” he tells me. “I’m happy about that. But I’m also really tired. I just want to sleep.”
Just yesterday, Utsumi says, they managed to finally solve a problem that the camera team had been struggling with for weeks—an intermittent fault in the camera cooling system that only seemed to happen when the telescope was moving. This was potentially a very serious problem, and Utsumi’s phone would alert him every time the fault occurred, over and over again in the middle of the night. The fault was finally traced to a cable within the telescope’s structure that used pins that were slightly too small, leading to a loose connection.
Utsumi’s contract started in 2017 and was supposed to last three years, but he’s still here. “I wanted to see first photon,” he says. “I’m an astronomer. I’ve been working on this camera so that it can observe the universe. And I want to see that light, from those photons from distant galaxies.” This is something I’ve also been thinking about—those lonely photons traveling through space for billions of years, and within the coming days, a lucky few of them will land on the sensors Utsumi has been tending, and we’ll get to see them. He nods, smiling. “I don’t want to lose one, you know?”
Rubin’s commissioning scientists have a unique role, working at the intersection of science and engineering to turn a bunch of custom parts into a functioning science instrument. Commissioning scientist Marina Pavlovic is a postdoc from Serbia with a background in the formation of supermassive black holes created by merging galaxies. “I came here last year as a volunteer,” she tells me. “My plan was to stay for three months, and 11 months later I’m a commissioning scientist. It’s crazy!”
Pavlovic’s job is to help diagnose and troubleshoot whatever isn’t working quite right. And since most things aren’t working quite right, she’s been very busy. “I love when things need to be fixed because I am learning about the system more and more every time there’s a problem—every day is a new experience here.”
I ask her what she’ll do next, once Rubin is up and running. “If you love commissioning instruments, that is something that you can do for the rest of your life, because there are always going to be new instruments,” she says.
Before that happens, though, Pavlovic has to survive the next few weeks of going on sky. “It’s going to be so emotional. It’s going to be the beginning of a new era in astronomy, and knowing that you did it, that you made it happen, at least a tiny percent of it, that will be a priceless moment.”
“I had to learn how to calm down to do this job,” she admits, “because sometimes I get too excited about things and I cannot sleep after that. But it’s okay. I started doing yoga, and it’s working.”
My stay on the summit comes to an end on 14 April, just a day before first photon, so as soon as I get home I check in with some of the engineers and astronomers that I met to see how things went. Guillem Megias Homar manages the adaptive optics system—232 actuators that flex the surfaces of the telescope’s three mirrors a few micrometers at a time to bring the image into perfect focus. Currently working on his Ph.D., he was born in 1997, one year after the Rubin project started.
First photon, for him, went like this: “I was in the control room, sitting next to the camera team. We have a microphone on the camera, so that we can hear when the shutter is moving. And we hear the first click. And then all of a sudden, the image shows up on the screens in the control room, and it was just an explosion of emotions. All that we have been fighting for is finally a reality. We are on sky!” There were toasts (with sparkling apple juice, of course), and enough speeches that Megias Homar started to get impatient: “I was like, when can we start working? But it was only an hour, and then everything became much more quiet.”
“It was satisfying to see that everything that we’d been building was finally working,” Victor Krabbendam, project manager for Rubin construction, tells me a few weeks later. “But some of us have been at this for so long that first photon became just one of many firsts.” Krabbendam has been with the observatory full-time for the last 21 years. “And the very moment you succeed with one thing, it’s time to be doing the next thing.”
Since first photon, Rubin has been undergoing calibrations, collecting data for the first images that it’s now sharing with the world, and preparing to scale up to begin its survey. Operations will soon become routine, the commissioning scientists will move on, and eventually, Rubin will largely run itself, with just a few people at the observatory most nights.
But for astronomers, the next 10 years will be anything but routine. “It’s going to be wildly different,” says Krabbendam. “Rubin will feed generations of scientists with trillions of data points of billions of objects. Explore the data. Harvest it. Develop your idea, see if it’s there. It’s going to be phenomenal.”
As part of an experiment with AI storytelling tools, author Evan Ackerman—who visited the Vera C. Rubin Observatory in Chile for four days this past April—fed over 14 hours of raw audio from his interviews and other reporting notes into NotebookLM, an AI-powered research assistant developed by Google. The result is a podcast-style audio experience that you can listen to here. While the script and voices are AI-generated, the conversation is grounded in Ackerman’s original reporting, and includes many details that did not appear in the article above. Ackerman reviewed and edited the audio to ensure accuracy, and there are minor corrections in the transcript. Let us know what you think of this experiment in AI narration.
0:01: Today we’re taking a deep dive into the engineering marvel that is the Vera C. Rubin Observatory.
0:06: And and it really is a marvel.
0:08: This project pushes the limits, you know, not just for the science itself, like mapping the Milky Way or exploring dark energy, which is amazing, obviously.
0:16: But it’s also pushing the limits in just building the tools, the technical ingenuity, the, the sheer human collaboration needed to make something this complex actually work.
0:28: That’s what’s really fascinating to me.
0:29: Exactly.
0:30: And our mission for this deep dive is to go beyond the headlines, isn’t it?
0:33: We want to uncover those specific Kind of hidden technical details, the stuff from the audio interviews, the internal docs that really define this observatory.
0:41: The clever engineering solutions.
0:43: Yeah, the nuts and bolts, the answers to challenges nobody’s faced before, stuff that anyone who appreciates, you know, complex systems engineering would find really interesting.
0:53: Definitely.
0:54: So let’s start right at the heart of it.
0:57: The Simonyi survey telescope itself.
1:00: It’s this 350 ton machine inside a 600 ton dome, 30 m wide, huge. [The dome is closer to 650 tons.]
1:07: But the really astonishing part is its speed, speed and precision.
1:11: How do you even engineer something that massive to move that quickly while keeping everything stable down to the submicron level? [Micron level is more accurate.]
1:18: Well, that’s, that’s the core challenge, right?
1:20: This telescope, it can hit a top speed of 3.5 degrees per second.
1:24: Wow.
1:24: Yeah, and it can, you know, move to basically any point in the sky.
1:28: In under 20 seconds, 20 seconds, which makes it by far the fastest moving large telescope ever built, and the dome has to keep up.
1:36: So it’s also the fastest moving dome.
1:38: So the whole building is essentially racing along with the telescope.
1:41: Exactly.
1:41: And achieving that meant pretty much every component had to be custom designed like the pier holding the telescope up.
1:47: It’s mostly steel, not concrete.
1:49: Oh, interesting.
1:50: Why steel?
1:51: Specifically to stop it from twisting or vibrating when the telescope makes those incredibly fast moves.
1:56: Concrete just wouldn’t handle the torque the same way. [The pier is more steel than concrete, but it's still substantially concrete.]
1:59: OK, that makes sense.
1:59: And the power needed to accelerate and decelerate, you know, 300 tons, that must be absolutely massive.
2:06: Oh.
2:06: The instantaneous draw would be enormous.
2:09: How did they manage that without, like, dimming the lights on the whole mountaintop every 30 seconds?
2:14: Yeah, that was a real concern, constant brownouts.
2:17: The solution was actually pretty elegant, involving these onboard capacitor banks.
2:22: Yep, slung right underneath the telescope structure.
2:24: They can slowly sip power from the grid, store it up over time, and then bam, discharge it really quickly for those big acceleration surges.
2:32: Like a giant camera flash, but for moving a telescope, sort of, yeah.
2:36: It smooths out the demand, preventing those grid disruptions.
2:40: Very clever engineering.
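A quick way to see why onboard energy storage smooths the grid draw is to model a bursty slew load fed from a buffer that charges at a steady trickle. The numbers below are illustrative placeholders, not Rubin's actual drive or capacitor specifications; only the peak-shaving idea is the point.

```python
# Illustrative sketch: peak shaving with an onboard energy buffer.
# All numbers are made-up placeholders, not Rubin specifications.

BURST_POWER_W = 200_000   # hypothetical peak draw during a fast slew
BURST_SECONDS = 4          # hypothetical acceleration time per slew
IDLE_SECONDS = 30          # hypothetical time between slews

cycle = BURST_SECONDS + IDLE_SECONDS
energy_per_slew_j = BURST_POWER_W * BURST_SECONDS

# Without a buffer, the grid must supply the burst power directly.
grid_peak_unbuffered_w = BURST_POWER_W

# With a capacitor bank, the grid only needs to replace the energy
# used per slew, averaged over the whole cycle.
grid_draw_buffered_w = energy_per_slew_j / cycle

print(f"energy per slew: {energy_per_slew_j/1e3:.0f} kJ")
print(f"grid peak without buffer: {grid_peak_unbuffered_w/1e3:.0f} kW")
print(f"steady grid draw with buffer: {grid_draw_buffered_w/1e3:.0f} kW")
```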
2:41: And beyond the movement, the mirrors themselves, equally critical, equally impressive, I imagine.
2:47: How did they tackle designing and making optics that large and precise?
2:51: Right, so the main mirror, the primary mirror, M1M3.
2:55: It’s a single piece of glass, 8.4 m across, low expansion borosilicate glass.
3:01: And that 8.4 m size, was that just like the biggest they could manage?
3:05: Well, it was a really crucial early decision.
3:07: The science absolutely required something at least 7 or 8 m wide.
3:13: But going much bigger, say 10 or 12 m, the logistics became almost impossible.
3:19: The big one was transport.
3:21: There’s a tunnel on the mountain road up to the summit, and a mirror, much larger than 8.4 m, physically wouldn’t fit through it.
3:28: No way.
3:29: So the tunnel actually set an upper limit on the mirror size.
3:31: Pretty much, yeah.
3:32: Building a new road or using some other complex transport method.
3:36: It would have added enormous cost and complexity.
3:38: So 8.4 m was that sweet spot between scientific need.
3:42: And, well, physical reality.
3:43: Wow, a real world constraint driving fundamental design.
3:47: And the mirror itself, you said M1 M3, it’s not just one simple mirror surface.
3:52: Correct.
3:52: It’s technically two mirror surfaces ground into that single piece of glass.
3:57: The central part has a more pronounced curvature.
3:59: It’s M1 and M3 combined.
4:00: OK, so fabricating that must have been tricky, especially with what, 10 tons of glass just in the center.
4:07: Oh, absolutely novel and complicated.
4:09: And these mirrors, they don’t support their own weight rigidly.
4:12: So just handling them during manufacturing, polishing, even getting them out of the casting mold, was a huge engineering challenge.
4:18: You can’t just lift it like a dinner plate.
4:20: Not quite, and then there’s maintaining it, re-silvering.
4:24: They hope to do it every 5 years.
4:26: Well, traditionally, big mirrors like this need it more often, like every 1.5 to 2 years, and it's a risky, weeks-long job.
4:34: You have to unbolt this priceless, unique piece of equipment, move it.
4:39: It’s nerve-wracking.
4:40: I bet.
4:40: And the silver coating itself is tiny, right?
4:42: Incredibly thin, just a few nanometers of pure silver.
4:46: It takes about 24 g for the whole giant surface, bonded with the adhesive layers that are measured in Angstroms. [It's closer to 26 grams of silver.]
4:52: It’s amazing precision.
4:54: So tying this together, you have this fast moving telescope, massive mirrors.
4:59: How do they keep everything perfectly focused, especially with multiple optical elements moving relative to each other?
5:04: that’s where these things called hexapods come in.
5:08: Really crucial bits of kit.
5:09: Hexapods, like six feet?
5:12: Sort of.
5:13: They’re mechanical systems with 6 adjustable arms or struts.
5:17: A simpler telescope might just have one, maybe on the camera for basic focusing, but Rubin needs more because it's got the 3 mirrors plus the camera.
5:25: Exactly.
5:26: So there’s a hexapod mounted on the secondary mirror, M2.
5:29: Its job is to keep M2 perfectly positioned relative to M1 and M3, compensating for tiny shifts or flexures.
5:36: And then there’s another hexapod on the camera itself.
5:39: That one adjusts the position and tilt of the entire camera’s sensor plane, the focal plane.
5:43: To get that perfect focus across the whole field of view.
5:46: And these hexapods move in 6 ways.
5:48: Yep, 6 degrees of freedom.
5:50: They can adjust position along the X, Y, and Z axes, and they can adjust rotation or tilt around those 3 axes as well.
5:57: It allows for incredibly fine adjustments, micron-precision stuff.
6:00: So they’re constantly making these tiny tweaks as the telescope moves.
6:04: Constantly.
6:05: The active optics system uses them.
6:07: It calculates the needed corrections based on reference stars in the images, figures out how the mirror might be slightly bending.
6:13: And then tells the hexapods how to compensate.
6:15: It’s controlling like 26 g of silver coating on the mirror surface down to micron precision, using the mirror’s own natural bending modes.
6:24: It’s pretty wild.
6:24: Incredible.
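For a concrete sense of what six degrees of freedom means, here is a minimal numpy sketch that turns a small (x, y, z, rx, ry, rz) correction into a rigid-body transform and applies it to a point on an optic. The correction values are arbitrary examples, not real hexapod commands from the active optics system.

```python
import numpy as np

def pose_matrix(dx, dy, dz, rx, ry, rz):
    """Build a 4x4 rigid transform from small translations (m) and rotations (rad)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [dx, dy, dz]
    return T

# Example correction: a few microns of translation and microradians of tilt.
T = pose_matrix(2e-6, -1e-6, 5e-6, 1e-6, 0.0, 2e-6)
point_on_mirror = np.array([1.0, 0.5, 0.0, 1.0])  # meters, homogeneous coordinates
print(T @ point_on_mirror)
```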
6:25: OK, let’s pivot to the camera itself.
6:28: The LSST camera.
6:29: Biggest digital camera ever built, right?
6:31: Size of a small car, 2800 kg, captures 3.2 gigapixel images, just staggering numbers.
6:38: They really are, and the engineering inside is just as staggering.
6:41: That focal plane where the light actually hits.
6:43: It’s made up of 189 individual CCD sensors.
6:47: Yep, 4K by 4K CCDs grouped into 21 rafts.
6:50: They fit together like tiles, and each CCD has 16 amplifiers reading it out.
6:54: Why so many amplifiers?
6:56: Speed.
6:56: Each amplifier reads out about a million pixels.
6:59: By dividing the job up like that, they can read out the entire 3.2 gigapixel sensor in just 2 seconds.
7:04: 2 seconds for that much data.
7:05: Wow.
7:06: It’s essential for the survey’s rapid cadence.
7:09: Getting all those 189 CCDs perfectly flat must have been, I mean, are they delicate?
7:15: Unbelievably delicate.
7:16: They’re silicon wafers only 100 microns thick.
7:18: How thick is that really?
7:19: about the thickness of a human hair.
7:22: You could literally break one by breathing on it wrong, apparently, seriously, yeah.
7:26: And the challenge was aligning all 189 of them across this 650 millimeter wide focal plane, so the entire surface is flat.
7:34: To within just 24 microns, peak to valley.
7:37: 24 microns.
7:39: That sounds impossibly flat.
7:40: It’s like, imagine the entire United States.
7:43: Now imagine the difference between the lowest point and the highest point across the whole country was only 100 ft.
7:49: That’s the kind of relative flatness they achieved on the camera sensor.
7:52: OK, that puts it in perspective.
7:53: And why is that level of flatness so critical?
7:56: Because the telescope focuses light terribly fast.
7:58: It's an f/1.2 system, which means it has a very shallow depth of field.
8:02: If the sensors aren’t perfectly in that focal plane, even by a few microns, parts of the image go out of focus.
8:08: Gotcha.
8:08: And the pixels themselves, the little light buckets on the CCDs, are they special?
8:14: They’re custom made, definitely.
8:16: They settled on 10 micron pixels.
8:18: They figured anything smaller wouldn’t actually give them more useful scientific information.
8:23: Because you start hitting the limits of what the atmosphere and the telescope optics themselves can resolve.
8:28: So 10 microns was the optimal size, right?
8:31: balancing sensor tech with physical limits.
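A back-of-the-envelope check on why 10-micron pixels are a sensible stopping point: with an 8.4-meter aperture at roughly f/1.2, the plate scale works out to about 0.2 arcseconds per pixel, already comparable to the best atmospheric seeing. The focal length here is approximated from the f-number rather than taken from the actual optical design.

```python
import math

aperture_m = 8.4
f_number = 1.2              # approximate; the real optical design differs slightly
pixel_m = 10e-6

focal_length_m = aperture_m * f_number          # ~10 m effective focal length
arcsec_per_radian = 180 / math.pi * 3600
pixel_scale_arcsec = pixel_m / focal_length_m * arcsec_per_radian

print(f"approx. focal length: {focal_length_m:.1f} m")
print(f"approx. pixel scale: {pixel_scale_arcsec:.2f} arcsec/pixel")
```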
8:33: Now, keeping something that sensitive cool, that sounds like a nightmare, especially with all those electronics.
8:39: Oh, it’s a huge thermal engineering challenge.
8:42: The camera actually has 3 different cooling zones, 3 distinct temperature levels inside.
8:46: 3.
8:47: OK.
8:47: First, the CCDs themselves.
8:49: They need to be incredibly cold to minimize noise.
8:51: They operate at -125 °C.
8:54: -125C, how do they manage that?
8:57: With a special evaporator plate connected to the CCD rafts by flexible copper braids, which pulls heat away very effectively.
9:04: Then you’ve got the cameras, electronics, the readout boards and stuff.
9:07: They run cooler than room temp, but not that cold, around -50 °C.
9:12: OK.
9:12: That requires a separate liquid cooling loop delivered through these special vacuum insulated tubes to prevent heat leaks.
9:18: And the third zone.
9:19: That’s for the electronics in the utility trunk at the back of the camera.
9:23: They generate a fair bit of heat, about 3000 watts, like a few hair dryers running constantly.
9:27: Exactly.
9:28: So there’s a third liquid cooling system just for them, keeping them just slightly below the ambient room temperature in the dome.
9:35: And all this cooling, it’s not just to keep the parts from overheating, right?
9:39: It affects the images, absolutely critical for image quality.
9:44: If the outer surface of the camera body itself is even slightly warmer or cooler than the air inside the dome, it creates tiny air currents, turbulence right near the light path.
9:57: And that shows up as little wavy distortions in the images, messing up the precision.
10:02: So even the outside temperature of the camera matters.
10:04: Yep, it’s not just a camera.
10:06: They even have to monitor the heat generated by the motors that move the massive dome, because that heat could potentially cause enough air turbulence inside the dome to affect the image quality too.
10:16: That’s incredible attention to detail, and the camera interior is a vacuum you mentioned.
10:21: Yes, a very strong vacuum.
10:23: They pump it down about once a year, first using turbopumps spinning at like 80,000 RPM to get it down to about 10⁻² torr.
10:32: Then they use other methods to get it down much further.
10:34: 10⁻⁷ torr, that's an ultra-high vacuum.
10:37: Why the vacuum?
10:37: Keep frost off the cold part.
10:39: Exactly.
10:40: Prevents condensation and frost on those negative-125-degree CCDs and generally ensures everything works optimally.
10:47: For normal operation, day to day, they use something called an ion pump.
10:51: How does that work?
10:52: It basically uses a strong electric field to ionize any stray gas molecules, mostly hydrogen, and trap them, effectively removing them from the vacuum space, very efficient for maintaining that ultra-high vacuum.
11:04: OK, so we have this incredible camera taking these massive images every few seconds.
11:08: Once those photons hit the CCDs and become digital signals, What happens next?
11:12: How does Rubin handle this absolute flood of data?
11:15: Yeah, this is where Rubin becomes, you know, almost as much a data processing machine as a telescope.
11:20: It’s designed for the data output.
11:22: So photons hit the CCDs, get converted to electrical signals.
11:27: Then, interestingly, they get converted back into light signals, photonic signals back to light.
11:32: Why?
11:33: To send them over fiber optics.
11:34: They’re about 6 kilometers of fiber optic cable running through the observatory building.
11:39: These signals go to FPGA boards, field programmable gate arrays in the data acquisition system.
11:46: OK.
11:46: And those FPGAs are basically assembling the complete image data packages from all the different CCDs and amplifiers.
11:53: That sounds like a fire hose of data leaving the camera.
11:56: How does it get off the mountain and where does it need to go?
11:58: And what about all the like operational data, temperatures, positions?
12:02: Good question.
12:03: There are really two main data streams. All that telemetry you mentioned, sensor readings, temperatures, actuator positions, commands sent, everything about the state of the observatory, that all gets collected into something called the Engineering Facility Database, or EFD.
12:16: They use Kafka for transmitting that data.
12:18: It’s good for high volume streams, and store it in an influx database, which is great for time series data like sensor readings.
12:26: And astronomers can access that.
12:28: Well, there’s actually a duplicate copy of the EFD down at SLAC, the research center in California.
12:34: So scientists and engineers can query that copy without bogging down the live system running on the mountain.
12:40: Smart.
12:41: How much data are we talking about there?
12:43: For the engineering data, it’s about 20 gigabytes per night, and they plan to keep about a year’s worth online.
12:49: OK.
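The retention numbers for the engineering database are easy to sanity-check. Assuming the bulk of the telemetry lands during a roughly 10-hour observing night (an assumption for illustration), the sustained write rate is modest even though a year kept online adds up to several terabytes:

```python
gb_per_night = 20
nights_online = 365
observing_hours = 10          # assumed length of a night of telemetry

online_tb = gb_per_night * nights_online / 1000
write_rate_mb_s = gb_per_night * 1000 / (observing_hours * 3600)

print(f"engineering data kept online: ~{online_tb:.1f} TB")
print(f"average write rate: ~{write_rate_mb_s:.2f} MB/s")
```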
12:49: And the image data, the actual science pixels.
12:52: That takes a different path. [All of the data from Rubin to SLAC travels over the same network.]
12:53: It travels over dedicated high-speed network links, part of ESnet, the research network, all the way from Chile, usually via Boca Raton, Florida, then Atlanta, before finally landing at SLAC.
13:05: And how fast does that need to be?
13:07: The goal is super fast.
13:09: They aim to get every image from the telescope in Chile to the data center at SLAC within 7 seconds of the shutter closing.
13:15: 7 seconds for gigabytes of data.
13:18: Yeah.
13:18: Sometimes network traffic bumps it up to maybe 30 seconds or so, but the target is 7.
13:23: It’s crucial for the next step, which is making sense of it all.
13:27: How do astronomers actually use this, this torrent of images and data?
13:30: Right.
13:31: This really changes how astronomy might be done.
13:33: Because Rubin is designed to generate alerts, real-time notifications about changes in the sky.
13:39: Alerts like, hey, something just exploded over here.
13:42: Pretty much.
13:42: It takes an image, compares it to the previous images of the same patch of sky, and identifies anything that's changed: appeared, disappeared, moved, gotten brighter or fainter.
13:53: It expects to generate about 10,000 such alerts per image.
13:57: 10,000 per image, and they take an image every 20 seconds or so on average, including readouts. [Images are taken every 34 seconds: a 30 second exposure, and then about 4 seconds for the telescope to move and settle.]
14:03: So you’re talking around 10 million alerts every single night.
14:06: 10 million a night.
14:07: Yep.
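Those alert numbers hang together if you assume the roughly 34-second visit cadence noted in the correction above and a night of about 10 hours of observing:

```python
alerts_per_image = 10_000
seconds_per_visit = 34          # 30 s exposure + ~4 s for the telescope to move and settle
observing_hours = 10            # rough length of an observing night

images_per_night = observing_hours * 3600 / seconds_per_visit
alerts_per_night = images_per_night * alerts_per_image

print(f"images per night: {images_per_night:.0f}")
print(f"alerts per night: {alerts_per_night/1e6:.1f} million")
```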
14:08: And the goal is to get those alerts out to the world within 60 seconds of the image being taken.
14:13: That’s insane.
14:14: What’s in an alert?
14:15: It contains the object's position, brightness, how it's changed, and little cut-out images, postage stamps, from the last 12 months of observations, so astronomers can quickly see the history.
14:24: But surely not all 10 million are real astronomical events satellites, cosmic rays.
14:30: Exactly.
14:31: The observatory itself does a first pass filter, masking out known issues like satellite trails, cosmic-ray hits, and atmospheric effects, with what they call real/bogus classification.
14:41: OK.
14:42: Then, this filtered stream of potentially real alerts goes out to external alert brokers.
14:49: These are systems run by different scientific groups around the world.
14:52: Yeah, and what did the brokers do?
14:53: They ingest the huge stream from Rubin and apply their own filters, based on what their particular community is interested in.
15:00: So an astronomer studying supernovae can subscribe to a broker that filters just for likely supernova candidates.
15:06: Another might filter for near Earth asteroids or specific types of variable stars.
15:12: so it makes the fire hose manageable.
15:13: You subscribe to the trickle you care about.
15:15: Precisely.
15:16: It’s a way to distribute the discovery potential across the whole community.
15:19: So it’s not just raw images astronomers get, but these alerts and presumably processed data too.
15:25: Oh yes.
15:26: Rubin provides the raw images, but also fully processed images, corrected for instrument effects and calibrated, called processed visit images.
15:34: And also template images, deep combinations of previous images used for comparison.
15:38: And managing all that data, 15 petabytes you mentioned, how do you query that effectively?
15:44: They use a system called Keyserve. [The system is "QServ."]
15:46: It’s a distributed relational database, custom built basically, designed to handle these enormous astronomical catalogs.
15:53: The goal is to let astronomers run complex searches across maybe 15 petabytes of catalog data and get answers back in minutes, not days or weeks.
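Queries against a catalog like this are ordinary SQL. A hedged example, with simplified table and column names rather than the actual LSST schema, just to show the shape of a typical sky-region-and-color search:

```python
# Hypothetical catalog query; table and column names are simplified,
# not the actual LSST schema.
query = """
SELECT objectId, ra, dec, g_mag, r_mag
FROM Object
WHERE ra BETWEEN 150.0 AND 151.0
  AND dec BETWEEN -2.5 AND -1.5
  AND g_mag - r_mag > 1.0
"""
# On the science platform this string would be handed to the catalog
# service (for example via a TAP client); printed here just to show the shape.
print(query)
```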
16:02: And how do individual astronomers actually interact with it?
16:04: Do they download petabytes?
16:06: No, definitely not.
16:07: For general access, there’s a science platform, the front end of which runs on Google Cloud.
16:11: Users interact mainly through Jupyter notebooks.
16:13: Python notebooks, familiar territory for many scientists.
16:17: Exactly.
16:18: They can write arbitrary Python code, access the catalogs directly, and do analysis.
16:27: For really heavy-duty stuff, like large-scale batch processing, they can submit jobs to the big compute cluster at SLAC, which sits right next to the data storage.
16:33: That’s much more efficient.
16:34: Have they tested this?
16:35: Can it handle thousands of astronomers hitting it at once?
16:38: They’ve done extensive testing, yeah, scaled it up with hundreds of users already, and they seem confident they can handle up to maybe 3000 simultaneous users without issues.
16:49: And a key point.
16:51: After an initial proprietary period for the main survey team, all the data and importantly, all the software algorithms used to process it become public.
17:00: Open source algorithms too.
17:01: Yes, the idea is, if the community can improve on their processing pipelines, they’re encouraged to contribute those solutions back.
17:08: It’s meant to be a community resource.
17:10: That open approach is fantastic, and even the way the images are presented visually has some deep thought behind it, doesn’t it?
17:15: You mentioned Robert Lupton's perspective.
17:17: Yes, this is fascinating.
17:19: It’s about how you assign color to astronomical images, which usually combine data from different filters, like red, green, blue.
17:28: It’s not just about making pretty pictures, though they can be beautiful.
17:31: Right, it should be scientifically meaningful.
17:34: Exactly.
17:35: Lepton’s approach tries to preserve the inherent color information in the data.
17:40: Many methods saturate bright objects, making their centers just white blobs.
17:44: Yeah, you see that a lot.
17:46: His algorithm uses a different mathematical scaling, more like a logarithmic scale, that avoids this saturation.
17:52: It actually propagates the true color information back into the centers of bright stars and galaxies.
17:57: So, a galaxy that’s genuinely redder, because it’s red shifted, will actually look redder in the image, even in its bright core.
18:04: Precisely, in a scientifically meaningful way.
18:07: Even if our eyes wouldn’t perceive it quite that way directly through a telescope, the image renders the data faithfully.
18:13: It helps astronomers visually interpret the physics.
18:15: It’s a subtle but powerful detail in making the data useful.
18:19: It really is.
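The core of Lupton-style color mapping is to scale all three filter images by a shared, asinh-like stretch of the total intensity, so bright cores compress without losing their color ratios. Here is a simplified numpy sketch of that idea; astropy ships a full implementation as make_lupton_rgb, and the stretch and Q values below are arbitrary:

```python
import numpy as np

def lupton_like_rgb(r, g, b, stretch=0.5, Q=8.0):
    """Map three filter images to RGB with a shared asinh stretch,
    preserving color ratios in bright regions (simplified sketch)."""
    i = (r + g + b) / 3.0
    i = np.where(i == 0, 1e-12, i)                 # avoid divide-by-zero
    scale = np.arcsinh(Q * i / stretch) / (Q * i)  # same factor for all bands
    rgb = np.stack([r * scale, g * scale, b * scale], axis=-1)
    return np.clip(rgb / rgb.max(), 0, 1)

# Tiny example: a "red" source stays red even where it is bright.
r = np.array([[100.0, 5.0]])
g = np.array([[40.0, 5.0]])
b = np.array([[20.0, 5.0]])
print(lupton_like_rgb(r, g, b))
```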
18:20: Beyond just taking pictures, I heard Rubin's wide view is useful for something else entirely: gravitational waves.
18:26: That’s right.
18:26: It’s a really cool synergy.
18:28: Gravitational-wave detectors like LIGO and Virgo, they detect ripples in space-time, often from merging black holes or neutron stars, but they usually only narrow down the location to a relatively large patch of sky, maybe 10 square degrees or sometimes much more.
18:41: Ruben’s camera has a field of view of about 9.6 square degrees.
18:45: That’s huge for a telescope.
18:47: It almost perfectly matches the typical LIGO alert area.
18:51: so when LIGO sends an alert, Rubin can quickly scan that whole error box, maybe taking just a few pointings, looking for any new point of light.
19:00: The optical counterpart, the kilonova explosion, or whatever light accompanies the gravitational-wave event.
19:05: It’s a fantastic follow-up machine.
19:08: Now, stepping back a bit, this whole thing sounds like a colossal integration challenge.
19:13: A huge system of systems, many parts custom built, pushed to their limits.
19:18: What were some of those big integration hurdles, bringing it all together?
19:22: Yeah, classic system of systems is a good description.
19:25: And because nobody’s built an observatory quite like this before, a lot of the commissioning phase, getting everything working together involves figuring out the procedures as they go.
19:34: Learning by doing on a massive scale.
19:36: Pretty much.
19:37: They’re essentially, you know, teaching the system how to walk.
19:40: And there’s this constant tension, this balancing act.
19:43: Do you push forward, maybe build up some technical debt, things you know you’ll have to fix later, or do you stop and make sure every little issue is 100% perfect before moving on, especially with a huge distributed team?
19:54: I can imagine.
19:55: And you mentioned the dome motors earlier.
19:57: That discovery about heat affecting images sounds like a perfect example of unforeseen integration issues.
20:03: Exactly.
20:03: Marina Pavvich described that.
20:05: They ran the dome motors at full speed, something maybe nobody had done for extended periods in that exact configuration before, and realized, huh, the heat these generate might actually cause enough air turbulence to mess with our image quality.
20:19: That’s the kind of thing you only find when you push the integrated system.
20:23: Lots of unexpected learning then.
20:25: What about interacting with the outside world?
20:27: Other telescopes, the atmosphere itself?
20:30: How does Rubin handle atmospheric distortion, for instance?
20:33: that’s another interesting point.
20:35: Many modern telescopes use lasers.
20:37: They shoot a laser up into the sky to create an artificial guide star, right, to measure.
20:42: Atmospheric turbulence.
20:43: Exactly.
20:44: Then they use deformable mirrors to correct for that turbulence in real time.
20:48: But Rubin cannot use a laser like that.
20:50: Why?
20:51: Because its field of view is enormous.
20:53: It sees such a wide patch of sky at once.
20:55: A single laser beam, even a pinpoint from another nearby observatory, would contaminate a huge fraction of Rubin's image.
21:03: It would look like a giant streak across, you know, a quarter of the sky for Rubin.
21:06: Oh, wow.
21:07: OK.
21:08: Too much interference.
21:09: So how does it correct for the atmosphere?
21:11: Software.
21:12: It uses a really clever approach called forward modeling.
21:16: It looks at the shapes of hundreds of stars across its wide field of view in each image.
21:21: It knows what those stars should look like, theoretically.
21:25: Then it builds a complex mathematical model of the atmosphere’s distorting effect across the entire field of view that would explain the observed star shapes.
21:33: It iterates this model hundreds of times per image until it finds the best fit. [The model is created by iterating on the image data, but iteration is not necessary for every image.]
21:38: Then it uses that model to correct the image, removing the atmospheric blurring.
21:43: So it calculates the distortion instead of measuring it directly with a laser.
21:46: Essentially, yes.
21:48: Now, interestingly, there is an auxiliary telescope built alongside Rubin, specifically designed to measure atmospheric properties independently.
21:55: Oh, so they could use that data.
21:57: They could, but currently, they’re finding their software modeling approach using the science images themselves, works so well that they aren’t actively incorporating the data from the auxiliary telescope for that correction right now.
22:08: The software solution is proving powerful enough on its own.
22:11: Fascinating.
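In spirit, the atmospheric correction is a regression problem: assume a smooth model for how the blur varies across the field, fit it to the measured shapes of reference stars, then evaluate the model wherever you need it. A heavily simplified sketch of that fit, with synthetic data standing in for real star measurements (an illustration, not Rubin's actual pipeline):

```python
import numpy as np

# Simplified stand-in for atmospheric forward modeling: fit a smooth 2-D
# polynomial to measured star widths, then predict the blur anywhere in
# the field. All numbers here are synthetic.
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)       # star positions
true_width = 0.7 + 0.1 * x - 0.05 * y + 0.08 * x * y          # "true" seeing field
measured = true_width + rng.normal(0, 0.02, 500)               # noisy measurements

# Design matrix for a low-order polynomial model of the distortion field.
A = np.column_stack([np.ones_like(x), x, y, x * y])
coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)

# Predict the blur at an arbitrary point in the field of view.
xp, yp = 0.3, -0.6
prediction = np.array([1, xp, yp, xp * yp]) @ coeffs
print(f"fitted coefficients: {np.round(coeffs, 3)}")
print(f"predicted width at ({xp}, {yp}): {prediction:.3f}")
```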
22:12: And they still have to coordinate with other telescopes about their lasers, right?
22:15: Oh yeah.
22:15: They have agreements about when nearby observatories can point their lasers, and sometimes Rubin might have to switch to a specific filter, like the i-band, which is less sensitive to the laser light, if one is active nearby while they're trying to focus.
22:28: So many interacting systems.
22:30: What an incredible journey through the engineering of Rubin.
22:33: Just the sheer ingenuity from the custom steel pier and the capacitor banks, the hexapods, that incredibly flat camera, the data systems.
22:43: It’s truly a machine built to push boundaries.
22:45: It really is.
22:46: And it’s important to remember, this isn’t just, you know, a bigger version of existing telescopes.
22:51: It’s a fundamentally different kind of machine.
22:53: How so?
22:54: By creating this massive all-purpose data set, imaging the entire southern sky over 800 times, cataloging maybe 40 billion objects, it shifts the paradigm.
23:07: Astronomy becomes less about individual scientists applying for time to point a telescope at one specific thing and more about statistical analysis, about mining this unprecedented ocean of data that Rubin provides to everyone.
23:21: So what does this all mean for us, for science?
23:24: Well, it’s a generational investment in fundamental discovery.
23:27: They’ve optimized this whole system, the telescope, the camera, the data pipeline.
23:31: For finding, quote, exactly the stuff we don’t know we’ll find.
23:34: Optimized for the unknown, I like that.
23:36: Yeah, we’re basically generating this incredible resource that will feed generations of astronomers and astrophysicists.
23:42: They’ll explore it, they’ll harvest discoveries from it, they’ll find patterns and objects and phenomena within billions and billions of data points that we can’t even conceive of yet.
23:50: And that really is the ultimate excitement, isn’t it?
23:53: Knowing that this monumental feat of engineering isn’t just answering old questions, but it’s poised to open up entirely new questions about the universe, questions we literally don’t know how to ask today.
24:04: Exactly.
24:05: So, for you, the listener, just think about that.
24:08: Consider the immense, completely unknown discoveries that are out there just waiting to be found when an entire universe of data becomes accessible like this.
24:16: What might we find?
2025-06-21 00:30:03
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
This is the first successful vertical takeoff of a jet-powered flying humanoid robot, developed by Artificial and Mechanical Intelligence (AMI) at Istituto Italiano di Tecnologia (IIT). The robot lifted ~50 cm off the ground while maintaining dynamic stability, thanks to advanced AI-based control systems and aerodynamic modeling.
We will have much more on this in the coming weeks!
As a first step toward our mission of deploying general-purpose robots, we are pushing the frontiers of what end-to-end AI models can achieve in the real world. We’ve been training models and evaluating their capabilities for dexterous sensorimotor policies across different embodiments, environments, and physical interactions. We’re sharing capability demonstrations on tasks stressing different aspects of manipulation: fine motor control, spatial and temporal precision, generalization across robots and settings, and robustness to external disturbances.
Thanks, Noah!
Ground Control Robotics is introducing SCUTTLE, our newest elongate multilegged platform for mobility anywhere!
Teleoperation has been around for a while, but what hasn’t been is precise, real-time force feedback. That’s where Flexiv steps in to shake things up. Now, whether you’re across the room or across the globe, you can experience seamless, high-fidelity remote manipulation with a sense of touch.
This sort of thing usually takes some human training, for which you’d be best served by robot arms with precise, real-time force feedback. Hmm, I wonder where you’d find those...?
[Flexiv]
The 1X World Model is a data-driven simulator for humanoid robots built with a grounded understanding of physics. It allows us to predict—or “hallucinate”—the outcomes of NEO’s actions before they’re taken in the real world. Using the 1X World Model, we can instantly assess the performance of AI models—compressing development time and providing a clear benchmark for continuous improvement.
[1X]
SLAPBOT is an interactive robotic artwork by Hooman Samani and Chandler Cheng, exploring the dynamics of physical interaction, artificial agency, and power. The installation features a robotic arm fitted with a soft, inflatable hand that delivers slaps through pneumatic actuation, transforming a visceral human gesture into a programmed robotic response.
I asked, of course, whether SLAPBOT slaps people, and it does not: “Despite its provocative concept and evocative design, SLAPBOT does not make physical contact with human participants. It simulates the gesture of slapping without delivering an actual strike. The robotic arm’s movements are precisely choreographed to suggest the act, yet it maintains a safe distance.”
[SLAPBOT]
Thanks, Hooman!
Inspecting the bowels of ships is something we’d really like robots to be doing for us, please and thank you.
[Norwegian University of Science and Technology] via [GitHub]
Thanks, Kostas!
H2L Corporation has unveiled a new product called “Capsule Interface,” which transmits whole-body movements and strength, enabling new shared experiences with robots and avatars. The company also released a product introduction video showing a kind of synchronization never before experienced by humans.
[H2L Corp.] via [RobotStart]
How do you keep a robot safe without requiring it to look at you? Radar!
[Paper] via [IEEE Sensors Journal]
Thanks, Bram!
We propose Aerial Elephant Trunk, an aerial continuum manipulator inspired by the elephant trunk, featuring a small-scale quadrotor and a dexterous, compliant tendon-driven continuum arm for versatile operation in both indoor and outdoor settings.
[Adaptive Robotics Controls Lab]
This video demonstrates a heavy weight lifting test using the ARMstrong Dex robot, focusing on a 40 kg bicep curl motion. ARMstrong Dex is a human-sized, dual-arm hydraulic robot currently under development at the Korea Atomic Energy Research Institute (KAERI) for disaster response applications. Designed to perform tasks flexibly like a human while delivering high power output, ARMstrong Dex is capable of handling complex operations in hazardous environments.
[Korea Atomic Energy Research Institute]
Micro-robots that can inspect water pipes, diagnose cracks, and fix them autonomously—reducing leaks and avoiding expensive excavation work—have been developed by a team of engineers led by the University of Sheffield.
We’re growing in size, scale, and impact! We’re excited to announce the opening of our serial production facility in the San Francisco Bay Area, the very first purpose-built robotaxi assembly facility in the United States. More space means more innovation, production, and opportunities to scale our fleet.
[Zoox]
Watch multipick in action as our Pickle robot rapidly identifies, picks, and places multiple boxes in a single swing of an arm.
[Pickle]
And now, this.
[Aibo]
Cargill’s Amsterdam Multiseed facility enlists Spot and Orbit to inspect machinery and perform visual checks, enhanced by all-new AI features, as part of their “Plant of the Future” program.
This ICRA 2025 plenary talk is from Raffaello D’Andrea, entitled “Models are Dead, Long Live Models!”
Will data solve robotics and automation? Absolutely! Never! Who knows?! Let’s argue about it!