Spaceborne Computer brings supercomputing capabilities to ISS astronauts

Hewlett Packard Enterprise (HPE), the enterprise-focused company spun off from Hewlett-Packard in 2015, announced the Spaceborne Computer experiment in August 2017, which saw the Palo Alto firm team up with the National Aeronautics and Space Administration (NASA) and SpaceX to launch a supercomputer into space. Today, a little over a year later, HPE says it's making the Spaceborne Computer's high-performance computing (HPC) capabilities available to astronauts aboard the International Space Station (ISS).
The new “above-the-cloud” services, as HPE has cheekily dubbed them, will enable scientists in the ISS’ U.S. National Laboratory to run analyses on-station without having to beam data to a terrestrial way station for processing.
It promises valuable time and bandwidth savings, explained Dr. Eng Lim Goh, chief technology officer and vice president at HPE. Beyond the 400-to-1,000-mile band above Earth's surface, communication latencies to and from the planet grow with distance and can stretch to 20 minutes, enough to threaten the success of future missions in deep space. Currently, an outsized portion of network bandwidth in space is consumed by transmitting large datasets, leaving little leeway for emergency transmissions and other critical communications.
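That latency figure is easier to grasp as simple light-travel arithmetic: signal delay is bounded by the speed of light and grows linearly with distance. The snippet below is a rough, illustrative calculation using approximate round-number distances; none of the figures come from HPE or NASA.

```python
# Illustrative one-way signal latency at the speed of light.
# Distances are approximate round numbers, not mission data.

SPEED_OF_LIGHT_KM_S = 299_792  # km/s in vacuum

distances_km = {
    "ISS (low Earth orbit)": 400,
    "Moon": 384_400,
    "Mars at closest approach": 54_600_000,
    "Mars at farthest": 401_000_000,
}

for target, km in distances_km.items():
    seconds = km / SPEED_OF_LIGHT_KM_S
    if seconds >= 60:
        print(f"{target}: {seconds / 60:.1f} minutes one way")
    else:
        print(f"{target}: {seconds:.3f} seconds one way")
```

A signal to the ISS arrives in about a millisecond, while a one-way trip to Mars takes roughly 3 to 22 minutes depending on orbital positions, which is why on-board processing matters for deep-space missions.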
If all goes according to plan, ISS space explorers will be able to dissect and process large volumes of data with the Spaceborne Computer.
“Our mission is to bring innovative technologies to fuel the next frontier, whether on Earth or in space, and make breakthrough discoveries we have never imagined before,” Dr. Goh said. “After gaining significant learnings from our first successful experiment with Spaceborne Computer, we are continuing to test its potential by opening up above-the-cloud HPC capabilities to ISS researchers, empowering them to take space exploration to a new level.”
The Spaceborne Computer, one of the first commercial off-the-shelf (COTS) computer systems launched into orbit, whirred to life in September 2017 and became the first COTS system to run at one teraflop in zero gravity. As Mark Fernandez, HPC technology officer at HPE, noted in a July blog post, it has had to contend not only with flaky network connectivity but also with unpredictable radiation, inconsistent power, solar flares, subatomic particles, micrometeoroids, and irregular cooling that can damage components that aren't properly idled beforehand.
Its software throttles the system in real time based on current conditions and can mitigate errors, HPE said. It has already weathered two anomalies: a two-hour shutdown while astronauts replaced an electrical component on another system, and a 16-hour emergency shutdown triggered by a false fire alarm.
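HPE has not published how that throttling logic works; the sketch below is a purely hypothetical illustration of condition-based throttling, in which a node's power cap is lowered as the environment degrades. Every name, threshold, and number here is invented for illustration.

```python
# Hypothetical condition-based throttling sketch; not HPE's actual software.
from dataclasses import dataclass

@dataclass
class Conditions:
    power_margin_watts: float        # headroom left on the station power feed
    inlet_temp_c: float              # cooling-loop inlet temperature
    corrected_errors_per_hour: int   # e.g. ECC corrections, a rough radiation proxy

def choose_power_cap(c: Conditions, max_cap_watts: float = 2000.0) -> float:
    """Return a per-node power cap, backing off as conditions worsen."""
    cap = max_cap_watts
    if c.power_margin_watts < 500:
        cap *= 0.5    # station power is tight: halve the draw
    if c.inlet_temp_c > 35:
        cap *= 0.75   # cooling is struggling: slow down
    if c.corrected_errors_per_hour > 100:
        cap *= 0.5    # elevated error rate: throttle and recheck results
    return max(cap, 300.0)  # never drop below a safe floor

# Example: tight power and warm coolant lead to a heavily throttled node.
print(choose_power_cap(Conditions(400, 37, 20)))  # 750.0
```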
The Spaceborne Computer is based on HPE's Apollo pc40 server, a dual-socket Intel Xeon Scalable processor platform featuring up to four Nvidia Tesla graphics processing units (GPUs), twelve 2,666MHz DDR4 DIMMs, two small form factor (SFF) hard drives or solid-state drives, and dual 2,000-watt power supplies. But it's more rugged than off-the-shelf models: it had to pass more than 146 safety tests and certifications before being launched into space.
It is optimized for deep learning workloads, with HPC-integrated nodes that can be managed in cluster configurations supporting an array of network topologies. So far, it has completed more than 300 benchmark experiments aboard the ISS.
HPE is no stranger to supercomputers: it codeveloped Columbia, a 10,240-processor supercluster that ranked as the second-fastest supercomputer in the world on the 2004 TOP500 list. But HPE has its eyes set on the stars.
“[The Spaceborne Computer] has laid the groundwork for performing compute-intensive experiments without aid from Earth, which will be necessary to advance space exploration on journeys millions of miles away from our home planet,” Fernandez wrote. “More importantly, this experiment enables us to apply our learnings to advance earth-bound technologies and further increase their reliability and robustness.”
In that respect, the Spaceborne Computer is intended as a sort of precursor for hardware capable of traveling to Mars and beyond, with future incarnations potentially adopting what HPE calls a "memory-driven computing" architecture. In May 2017, HPE demonstrated a system containing 160 terabytes of interconnected memory, an architecture that it says can theoretically scale to 4,096 yottabytes, roughly 250,000 times the entire world's digital data.
Such computers, it claims, would significantly shrink the time needed to process complex problems from days to seconds.
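The 250,000x comparison checks out as rough arithmetic if the world's digital data in 2017 is taken to be on the order of 16 zettabytes, an assumed estimate that does not appear in the article itself:

```python
# Rough sanity check of HPE's scaling claim.
# The 16-zettabyte world-data figure is an assumption, not from the article.

YOTTABYTE = 10 ** 24  # bytes
ZETTABYTE = 10 ** 21  # bytes

claimed_capacity = 4_096 * YOTTABYTE   # HPE's theoretical ceiling
world_data_2017 = 16 * ZETTABYTE       # assumed global data volume, circa 2017

print(claimed_capacity / world_data_2017)  # 256000.0, i.e. on the order of 250,000x
```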
“When we can analyze that much data at once, we can begin to discover correlations we could have never conceived before,” Kirk Bresniker, chief architect at Hewlett Packard Labs, wrote in a blog entry. “And that ability will open up entirely new frontiers of intellectual discovery.”
Source: VentureBeat