
While not new, the challenges presented by computer cabling and PCB circuit routing design – cost, performance, space requirements, and power management – have coalesced into a major headache in advanced HPC system design, said Jimmy Daley, head of HPC engineering for HPE, at the HPC User Forum last week. What's required, and sooner rather than later, is broader adoption of optical cable technology as well as mid-board optics, said Daley.

Daley's roughly 15-minute jaunt through the major copper-versus-optical issues was a good reminder of this persistently problematic area. He bulleted out three sets of challenges posed by cabling and briefly reviewed how optical solutions could help overcome them:

- Cost and Signaling
- Density and Egress
- Power and Thermals

The slide below contrasts the size difference between copper and optical cable. Like many, Daley is looking for cabling options, and currently there aren't many. His presentation was both a refresher and a look ahead at cable interconnect issues and opportunities. Copper cable, of course, predominates, but it's not cheap and has plenty of constraints. Active optical cable (AOC) is powerful but also on the order of 4-6X more expensive than copper; Daley called AOC HPC's nemesis. Broader use of passive optical cable would solve many problems and be less expensive, but there is a fair amount of work to be done before that is practical. Today, AOC is the backbone of most very large supercomputers.

"Even though these AOC cables struggle with costs and other things, the adoption of these cables is still very aggressive. We have to [use them] in order to get our science done," he said. "At 50 gigs (see chart below), we as an industry pretty much only use them sparingly.
As we get to 100-gig speeds we are already at about a 40 percent mix of active optical cables (AOC) and copper, and as we move to 200 gigs and beyond, we are predicting that most of what we deploy is going to have to be active optical cable."

Daley pointed out that it is not the fiber in AOC that's so expensive – the fiber itself is actually less expensive than copper – but everything else required (various components and materials) for active optical cable that boosts its cost.

"[Given] the cost of copper cable, I'm not sure I am going to be able to do a meaningful 400-gig copper cable, which means I am now at a list price of $2,000-plus for AOC. Go build a fabric out of that and see how far the wallet goes," said Daley.

The big win would be being able to deploy passive cable, and efforts are ongoing to develop better passive optical cable solutions.

Printed circuit boards present similar challenges as signaling demands rise. "Where we live today (PCIe Gen 3), I can get everything using fairly standard PCB material. As we start talking about PCIe Gen 4…[w]e are going to have to start looking at more and more exotic routing material. We have a lot of smart guys in HPE and industry and this box will start to shift up ever so slightly, but it will not shift up meaningfully. And my red box (chart below) here is just a lost cause," he said.

Clearly optical cable has many advantages – physical size and bandwidth are the obvious ones. The growth in system scale has exacerbated pressure on interconnects throughout systems, and Daley's slide below, illustrating the difference in copper versus optical cabling required for a 256-node system, makes the point nicely: the cross section of the copper cable bundle is roughly the size of a CD, while its optical counterpart is the size of a watch face. Moving to optical as well as "mid-board optics," he said, is a necessary step for larger systems.
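Daley's fabric-cost worry can be sketched with simple back-of-envelope arithmetic. The $2,000-plus AOC list price and the 4-6X copper-to-AOC price ratio come from the talk; the fabric cable count and the exact mix fractions below are hypothetical, chosen only to make the trend visible.

```python
# Back-of-envelope fabric cabling cost as the AOC share of the cable mix
# grows. AOC_PRICE and the ~4-6x ratio are from Daley's talk; the cable
# count and mix fractions are illustrative assumptions, not his figures.

AOC_PRICE = 2000   # list price cited for a 400-gig AOC, in USD
COPPER_RATIO = 5   # AOC is roughly 4-6x copper; take the midpoint

def fabric_cable_cost(n_cables: int, aoc_fraction: float) -> float:
    """Total cable cost for a fabric with a given AOC/copper mix."""
    copper_price = AOC_PRICE / COPPER_RATIO
    n_aoc = n_cables * aoc_fraction
    n_copper = n_cables - n_aoc
    return n_aoc * AOC_PRICE + n_copper * copper_price

# A hypothetical 1,024-cable fabric at mixes like those Daley described:
for speed, frac in [("50G", 0.05), ("100G", 0.40), ("200G+", 0.90)]:
    print(f"{speed}: ${fabric_cable_cost(1024, frac):,.0f}")
```

Even at these rough assumed numbers, moving from a mostly-copper mix to a mostly-AOC mix multiplies the cabling bill several times over, which is the "see how far the wallet goes" problem in miniature.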
He also argued for accelerated development of passive optical approaches, which are not only cheaper but also more suitable for modular systems.

"If I move to passive optical cable, that also frees us up. Forget the economics and physics of things. It frees you guys up to wire the datacenter one time; you put the wires in and the optics actually come with the thing that you are plugging in. Today we are very driven to making sure that within a rack you hit the right switch radix, because if you leave switch ports empty, those are costs that have to get amortized across all of the other nodes in the rack, and that becomes very, very painful.

"If I move to optical cable where distances aren't [the issue], then I don't have to have a top-of-rack switch; I have a mid-row switch, and we just make sure that within that row we design and get the right things there. Then I can start to design to power constraints and thermal constraints besides switch radixes. The biggest thing in my mind is that it allows us to start to optimize our topologies for the workload we are trying to do, as opposed to optimizing for 'how do I get as many copper cables into this picture as I can because I can't afford optical cables as we move forward.'"
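The empty-port amortization Daley describes is simple division: the switch's fixed cost is spread over however many nodes actually occupy its ports. The switch price and port counts below are hypothetical, used only to show why stranded ports sting.

```python
# Amortizing a switch's fixed cost across the nodes attached to it.
# The $15,000 price and 48-port radix are assumed for illustration;
# they are not figures from Daley's talk.

def cost_per_node(switch_price: float, ports_used: int) -> float:
    """Amortized switch cost borne by each attached node."""
    return switch_price / ports_used

full = cost_per_node(15000, 48)     # every port filled
partial = cost_per_node(15000, 32)  # 16 ports stranded by rack limits
print(f"full rack: ${full:.2f}/node, stranded ports: ${partial:.2f}/node")
```

Stranding a third of the ports raises each remaining node's share by half, which is why rack designs today chase the switch radix so aggressively, and why a mid-row switch reachable over optics relaxes that constraint.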
