In today's fast-changing data center environment, where compute performance and storage volume increase significantly every year, it is worth stepping back to review why top-of-rack (TOR) fabric is deployed by more than 50% of data centers worldwide. Servers become obsolete sooner, and software takes on more and more of the workload. Clearly, this is not a "set it and forget it" kind of business.
Using a TOR architecture is more relevant than ever to keep up with these changes. TOR supports and enables change as part of your data center infrastructure management (DCIM) plan, a paramount strategy in today's conditions.
The objective of this article is to review the benefits of TOR from the perspective of today's data center environment. You may have heard that data centers must be agile, ready for migration, and ready for the future. All of this is true. This article will dig a little deeper into the benefits of TOR, specifically detailing current technology, availability, and future network demands.
The "lights out" model is more important now than ever before. As compute and storage performance climbs along with the number of serves deployed, small changes in efficiency have massive impacts. TOR allows us to isolate cross connects, utilize direct attach copper (DAC) cables, produce a scalable design, decongest the increasing volume of overhead fiber, lower initial and long-term migration costs, and use less fiber overall.
A cross connect is an isolation point. Adopted from the strict physical connectivity of the telecom world (which requires redundancy), cross connects have historically been implemented in conventional data center buildouts for the purpose of testing, or even as a "physical" switch location. With the increase of virtual data center functions at Layers 2 and 3, this is less of a reason, but it is still worth noting.
Nowadays, there are two primary reasons to isolate at the top of a rack for both server and fiber connectivity. First, if an optical component fails, a modular and easily serviceable link is a significant advantage. Replacing a jumper from the patch panel to the leaf switch is fast, easy, and efficient. Without a repairable link, returning components to service requires time-consuming and costly replacement of, or additions to, the overhead fiber connectivity with an MPO or jumper cable. "Serviceable" jumper connections preserve the quality of link connectivity.
The benefit of lower utilization in the overhead raceway is obvious. Having capacity for adds and changes as fiber counts increase is paramount, and failing to match the facility infrastructure to network demands as components and performance increase is what tends to become the problem in the long run. Bottom line: use MPO or higher fiber counts in the overhead raceway or basket. We will also see more investigation into standardizing on high-fiber-count options as newer fabrics move toward 400G. Based on IEEE 802.3bs, there is a compelling reason to standardize on a 32-fiber MTP (MPO) connector.
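To see where the 32-fiber figure comes from, the quick arithmetic below assumes the 400GBASE-SR16 option defined in IEEE 802.3bs: 16 parallel lanes at a nominal 25 Gb/s, with one fiber per lane in each direction.

```python
# Rough arithmetic behind a 32-fiber MTP/MPO for 400G (assuming 400GBASE-SR16):
# 16 parallel lanes, nominally 25 Gb/s per lane, one fiber per lane per direction.

lanes = 16
lane_rate_gbps = 25          # nominal lane rate for this sketch
directions = 2               # transmit and receive

print(f"Aggregate rate : {lanes * lane_rate_gbps} Gb/s")   # 400 Gb/s
print(f"Fibers required: {lanes * directions}")            # 32 fibers
```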
The physical patch panel is still a failsafe for connectivity, but on its own it does not ensure that costly downtime will be brief or limited. If the path is easy to isolate, fast to replace, and ends up costing less, it is a win-win.
Secondly, due to the cost of physical components within the rack, using copper when possible is a preferred option. Adding, aggregating, and replacing the server or leaf components must be agile. The industry-preferred method is to use fiber for home-runs and copper in the form of direct attach copper (DAC) for the 10G-40G connectivity in the rack.
What is DAC? DAC cables are a passive pluggable form factor and a suitable replacement for two transceivers and a fiber cable. DACs are an easy plug-and-play option commonly deployed in spine-leaf fabrics. The limitation of DAC is that it is only functional within a rack or to an adjacent rack, with a maximum effective reach of about 3 m (roughly 10 ft), depending on the speed.
This is ideal for connectivity from the TOR leaf switch to the server nodes. With fiber connectivity within the rack, not only is the port-to-port cost significantly higher, but the maintenance cost also becomes a crucial factor. With DAC, there are fewer failure points such as connectors and adapters. Furthermore, DAC is tested at the factory for performance and is fast and easy to connect. In the long run, DAC has significant cost advantages compared to fiber connectivity when deployed within about 3 m (10 ft).
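As a rough illustration of the cost argument, the sketch below compares a single in-rack link built with a passive DAC against the same link built with a fiber jumper plus a pair of transceivers. Every price and the per-rack link count are hypothetical placeholders, not vendor figures; the point is simply that the two transceivers dominate the cost of the fiber-based link.

```python
# Illustrative per-link and per-rack cost comparison: passive DAC vs fiber + optics.
# All dollar amounts and the link count are assumed placeholders; use your own quotes.

DAC_CABLE_COST = 30.0          # passive 10G DAC, ~3 m (assumed)
TRANSCEIVER_COST = 60.0        # one 10G SR optical transceiver (assumed)
FIBER_JUMPER_COST = 15.0       # MM LC duplex jumper (assumed)
LINKS_PER_RACK = 40            # example: 40 server uplinks to the TOR leaf

dac_link = DAC_CABLE_COST
fiber_link = 2 * TRANSCEIVER_COST + FIBER_JUMPER_COST

print(f"Per link : DAC ${dac_link:.0f} vs fiber + optics ${fiber_link:.0f}")
print(f"Per rack : DAC ${dac_link * LINKS_PER_RACK:,.0f} "
      f"vs fiber + optics ${fiber_link * LINKS_PER_RACK:,.0f}")
```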
DAC also uses about a quarter of the power of an active optical cable (AOC). For the modern data center server, power should be spent on system performance. The "lights out" model is real, and using DAC helps top data center providers deliver the most competitive PUE. Burning power on hardware that does not add to your compute or storage performance adds no value.
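A quick back-of-the-envelope sketch of that power difference is below. The per-end wattages are assumptions for illustration only (passive DACs draw essentially nothing, while active optical cables draw power at each end); scale the link and rack counts to your own deployment.

```python
# Illustrative power comparison for in-rack links: passive DAC vs AOC.
# Per-end wattages, link count, and rack count are assumptions, not measured values.

AOC_WATTS_PER_END = 1.5     # assumed active optical cable draw per end
DAC_WATTS_PER_END = 0.1     # passive DAC draws essentially nothing (assumed)
LINKS_PER_RACK = 40
RACKS = 100

def rack_watts(per_end_watts, links=LINKS_PER_RACK):
    return 2 * per_end_watts * links   # two ends per link

saved_per_rack = rack_watts(AOC_WATTS_PER_END) - rack_watts(DAC_WATTS_PER_END)
print(f"Per rack: {saved_per_rack:.0f} W saved")
print(f"Across {RACKS} racks: {saved_per_rack * RACKS / 1000:.1f} kW saved")
```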
Because of the physical change from copper to fiber at the leaf switch location, users can consolidate overhead fiber and patch or splice. Multimode (MM) fiber is the most cost-effective solution here, though the specific choice depends on the length of the span. Most applications utilize MM wherever possible because of its lower cost and greater availability. The cost per meter of MM fiber jumpers can run lower than that of singlemode (SM) fiber, and combined with the lower cost of a pair of MM transceivers, the MM solution costs less overall. MM fiber is deployed in most data centers globally, which increases its availability both directly from manufacturers and through distribution channels. MM fiber is also easier to terminate at the factory, yielding slightly higher-quality results within manufacturing tolerances.
Whether to use SM or MM fiber for homerun connectivity to spine aggregation switches depends on the length and link demands of your connectivity. Variables include the size of the facility and the performance demands of those links. Keep in mind that an MM fiber solution is less expensive than an SM solution, so selecting aggregation switch locations early in the facility layout and design can go a long way toward limiting the total length of fiber connections within the larger facility. Invariably, though, changes will need to be made.
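As a minimal sketch of how reach can drive this choice, the helper below applies a simple rule of thumb: stay on MM with SR4-class optics while the homerun is within roughly 100 m (the typical 100GBASE-SR4 reach over OM4), and move to SM optics beyond that. The threshold and optic names are assumptions for illustration; check your optics' datasheets and your own cost model.

```python
# Minimal media-selection rule of thumb for spine homeruns, based on reach.
# The 100 m threshold reflects typical 100GBASE-SR4 reach over OM4; adjust as needed.

def choose_fiber(link_length_m: float) -> str:
    """Suggest a fiber type for a spine homerun of the given length."""
    OM4_SR4_REACH_M = 100
    if link_length_m <= OM4_SR4_REACH_M:
        return "multimode (OM4) with SR4-class optics"   # lower transceiver cost
    return "singlemode with PSM4/LR4-class optics"       # needed for longer spans

for length in (30, 90, 250):
    print(f"{length:>4} m -> {choose_fiber(length)}")
```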
That said, demand in the hyperscale market has driven significant shifts in where and how MM and SM fiber are deployed. When a hyperscale provider standardizes on a certain fiber spec, the high marketplace demand and aggressive price pressure can artificially lower the unit price. We will likely continue to see certain applications adopted faster than usual, at lower-than-expected prices and with better market availability.
It is simple and unavoidable: at some point, the facility will need to be upgraded. It is common to see spine connectivity of 40-100 gigabits per second and climbing, often built on an SR4 or PSM4 fabric with generally low fiber counts. Consolidating the fiber connectivity into loose-tube or tight-buffered options of 12 or more fibers can drastically reduce congestion in overhead rack-to-rack or room-to-room connectivity.
The primary reason for a scalable design is that when you make changes to the servers or switches, the physical work of changing the network is isolated to the rack via the TOR cross-connect patch panel.
Currently, 10 gigabit is the most common maximum connectivity cascading out to the server node. Will the next generation of server nodes support 40-plus gigabit connectivity? If compute power increases enough, we will absolutely see continued adoption of higher-traffic connectivity within the rack.
It is not if, it is when. Be one step ahead of your connectivity and prepare for the next generation of connectivity now.
The physical realities of fiber congestion and the practice of good cable management matter. The value of using less glass is clear, but it is also crucial to manage fiber in the best way possible. This inevitably leads to the subjects of labeling, troubleshooting, and testing links.
Labeling and testing links within the rack is quick and easy as long as you have a test port, cross connect, or patching location. Testing links to a different row or room is a more complex process and requires at least two people, and sometimes Layer 2 functions as well. Isolating at the top of the rack with a patch panel or a modular consolidation solution streamlines the testing process and keeps labor to a minimum.
Going a little deeper into the design, using a breakout fiber solution drastically reduces the need for a high volume of overhead fiber and consolidates easily into 8-, 12-, or 24-fiber cables. Likewise, using LC connectors to patch to the IO module on the front of the rack eliminates the need for a costly fanout solution, where all 12 fibers would require replacement if a single link failed (potentially due to factory quality issues or damage during installation). One example of this approach is a 144-fiber pre-terminated solution using pliable ribbon.
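To put some numbers on trunk sizing, the sketch below counts how many parallel-optic links fit in common trunk sizes, assuming an SR4 or PSM4 link consumes 8 fibers (4 transmit plus 4 receive) and a duplex LC link consumes 2. The 40-uplink example at the end is purely illustrative.

```python
# Quick fiber-budget arithmetic for trunk sizing.
# Assumption: SR4/PSM4 links use 8 fibers each; duplex LC links use 2.

import math

FIBERS_PER_SR4_LINK = 8
FIBERS_PER_DUPLEX_LINK = 2

def links_supported(trunk_fibers: int, fibers_per_link: int) -> int:
    return trunk_fibers // fibers_per_link

def trunks_needed(links: int, trunk_fibers: int, fibers_per_link: int) -> int:
    return math.ceil(links * fibers_per_link / trunk_fibers)

for trunk in (8, 12, 24, 144):
    print(f"{trunk:>3}-fiber trunk: {links_supported(trunk, FIBERS_PER_SR4_LINK)} SR4/PSM4 links "
          f"or {links_supported(trunk, FIBERS_PER_DUPLEX_LINK)} duplex LC links")

# Example: 40 SR4 uplinks across 144-fiber trunks (hypothetical count)
print(f"40 SR4 uplinks need {trunks_needed(40, 144, FIBERS_PER_SR4_LINK)} x 144-fiber trunks")
```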
There is a little ambiguity regarding the responsibilities of Layers 1, 2, and 3 as we see further virtualization. Yes, there is more Layer 2 and 3 traffic with virtualization, but the benefits are clear. Virtualization can automatically perform many functions that formerly required dedicated equipment. Common examples include balancing network traffic, resulting in better utilization of hardware.
The shift to virtualization also pushes more functions to the leaf switches. Leaf switches are estimated to run about 75% lower in cost per port than aggregation switches. Having leaf switches take on this "heavy lifting" again leads to smarter power utilization and better performance.
With the combination of strong NFV and SDN deployments in today's data centers, the effects of the increased traffic from AI and machine learning can be isolated to certain diverse racks within a data center, using only "available" bandwidth. There is still some ambiguity about how Layers 1, 2, and 3 will function in the future. A physical connectivity failure is less of an issue than it once was, but it can still have a huge impact on operating costs.
As the demands of IoT, AI, and machine learning continue to require higher performance in the data center, and as data center operations are pushed to improve power usage effectiveness (PUE), using a smart TOR design for Layer 1 is more important today than it has ever been.
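For readers newer to the metric, PUE is simply total facility power divided by IT equipment power, with values closer to 1.0 being better. The figures in the sketch below are illustrative only.

```python
# Worked PUE example. Both power figures are illustrative assumptions.
# PUE = total facility power / IT equipment power; closer to 1.0 is better.

it_load_kw = 1000.0        # assumed IT load (compute, storage, network)
facility_kw = 1350.0       # assumed total facility draw, incl. cooling, lighting, etc.

pue = facility_kw / it_load_kw
print(f"PUE = {pue:.2f}")  # 1.35 in this example
```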
The smart TOR model described above isolates cross connects, utilizes DAC cables, produces a scalable design, decongests the increasing volume of overhead fiber, lowers initial and long-term migration costs, and uses less fiber overall.
In conclusion, using a TOR architecture in your data center provides efficient and effective connectivity for today while enabling flexibility for the future. It is a key strategic component of a successful DCIM plan.
Contact our Application Engineering team at firstname.lastname@example.org