Categories: Supercomputing

In Tampa, the conference center’s roof leaked. In Austin, the airport flooded. In Reno, conference organizers had to wait for a motorcycle rally to end before they could start some of their setup. In the run-up to the SC Conference, a supercomputing meeting, there’s always something getting in the way of networking. But the conference, held annually in November, is perhaps more sensitive to water, delays, and herds of bikes than your average gathering. That’s because every year, a group of volunteers shows up weeks in advance to build, from the literal ground up, the world’s fastest temporary network. The conference’s attendees and exhibitors, from scientific researchers to industry bigwigs, need superfast, reliable connections to stream in the results of their simulations and data analyses. Called SCinet, the network “takes one year to plan, three weeks to build, one week to operate, and less than 24 hours to tear down,” according to its catchphrase. After all, what good is high-performance computing if its results can’t reach the wider world?

This year, in Denver, one difficulty was elevation: not of the city itself, but of the exhibit hall. The 188 volunteers built up the network’s 13 equipment racks on the floor below the big main space, constructing infrastructure that could eventually handle around 3.6 terabits per second of traffic. (For reference, that’s probably around 400,000 times faster than your personal connection.) And then, after construction, they had to move those millions of dollars of delicate equipment: down a hall, into an elevator, up a floor, and across the exhibit hall. On November 8, volunteers moved the equipment on customized rack lifts. “Welcome to the crazy,” someone said, unprompted, as he rushed past. The SCinetters moved like tightrope walkers, servers in tow, toward the elevators.
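That 400,000× figure is easy to sanity-check. Here’s a minimal back-of-envelope sketch, assuming a typical home broadband connection of roughly 9 megabits per second (an assumption for illustration, not a figure from the conference):

```python
# Back-of-envelope check of SCinet's quoted capacity against home broadband.
scinet_bps = 3.6e12   # SCinet: 3.6 terabits per second
home_bps = 9e6        # assumed typical home connection: 9 megabits per second

ratio = scinet_bps / home_bps
print(f"SCinet is roughly {ratio:,.0f} times faster")
# prints: SCinet is roughly 400,000 times faster
```

With a faster assumed home connection, say 90 Mbit/s, the multiplier shrinks to about 40,000, so the article’s “around 400,000 times” is sensitive to what counts as a typical connection.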
One floor up, a guy wearing a Scooby Doo hat pulled up with a forklift, gingerly skewered one rack, and began to lift it to the central stage. As the rack approached the platform, other volunteers put their hands on it, like digital pallbearers. When they were done, eight racks sat on the stage: the beating, blinking heart of the network. Among other duties, that core coordinates with the five other racks scattered strategically around the room, ready for the exhibitors that need 100 gigabit connections, and those requiring mere 1 or 10 gigabit hookups.

The demonstrations started on November 13. NASA brought out a simulation of how shockwaves from meteorites affect the atmosphere, and how their effects then reach the ground, from impacts to tsunamis. Also on board: a simulation showing how person-transporting drones could work, and a global weather prediction model. The Department of Energy presented on particle accelerators, quantum computing in science, and cancer surveillance. The company Nyriad Limited, meanwhile, has aligned its stars with the International Centre for Radio Astronomy Research to develop a “science data processing” operating system for a telescope called the Murchison Widefield Array, itself a precursor to the Square Kilometer Array. The Square Kilometer Array will require more computing power than any telescope before it: Its data rate will exceed today’s global internet traffic. At the conference, Nyriad revealed its first commercial offering, spun out of its SKA work: a fast, low-power storage solution useful beyond the world of astronomy.

But their talks would have been all talk were it not for the homebuilt network that let them show and tell. In the weeks leading up to the conference itself, the SCinet volunteers laid 60 miles of fiber and set up 280 WiFi access points for the nearly 13,000 attendees and their attendant devices.
Oh, also, they had to have a network service provider dig up a road to light a dark fiber connection.
