Data Center Ethernet

English: Fibre Channel over Ethernet I/O consolidation (Photo credit: Wikipedia)

Data Center Ethernet, or DCE, is a term that was coined by Cisco Systems to refer to Data Center Bridging (or DCB). DCE refers to products that implement the DCB standards — a set of Ethernet enhancements.

Fibre Channel over Ethernet cannot tolerate frame loss, yet conventional Ethernet drops frames under congestion even at 10 Gigabit Ethernet (10GbE) speeds. Data Center Ethernet is designed to treat different kinds of traffic on the same network as though they were traveling on different paths.

When a server or network is busy, information is often lost in transit, and signals end up being dropped or overlooked.

The goal of Data Center Ethernet is to prevent this from happening by opening up more “paths” on which data can travel, and by making better use of bandwidth so as to reduce traffic congestion on local area networks.

While DCB relies on 10GbE at the moment, 40 Gigabit Ethernet and 100 Gigabit Ethernet are the data speeds of the future.

As speeds increase, the hope is that these Ethernet enhancements will better handle traffic and make relaying information from sender to recipient(s) easier.

Data Center Ethernet in the News

Data Center Ethernet is a collection of technologies that are designed to take Ethernet technology from where it is now and transport it into the future. To this end, Data Center Ethernet addresses three areas:

  • truly differentiated classes of service,
  • greater aggregate bandwidth out of Ethernet, through congestion management and multipathing, and
  • a set of management paradigms (such as protocols) that allow these features to be employed operationally.

Ethernet defines eight priority levels; a switch can pause one priority while allowing traffic at the other priorities to continue to its intended target. These different classes of service are applied to different applications.
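The eight priorities live in the 3-bit Priority Code Point (PCP) field of the IEEE 802.1Q VLAN tag. A minimal sketch of how that field is packed and read (the function names here are my own, for illustration):

```python
# Toy illustration (not a real network stack): the eight Ethernet
# priorities occupy the top 3 bits (PCP) of the 16-bit 802.1Q
# Tag Control Information (TCI) field.

def build_tci(priority: int, vlan_id: int, dei: int = 0) -> int:
    """Pack priority (0-7), drop-eligible bit, and VLAN ID (0-4095)
    into a 16-bit TCI value."""
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 4095
    return (priority << 13) | (dei << 12) | vlan_id

def priority_of(tci: int) -> int:
    """Extract the 3-bit PCP (priority) from a TCI value."""
    return (tci >> 13) & 0b111

tci = build_tci(priority=5, vlan_id=100)
print(priority_of(tci))  # 5
```

Because the priority is carried in every tagged frame, a switch can classify and queue traffic per priority without inspecting the payload.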

In multipathing, multiple paths are used simultaneously, meaning that more information can reach its intended target faster than before. In the past, multipathing was not possible: when one path was active, the redundant paths were “turned off” (blocked) and carried no traffic.
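One common way to spread flows across several active paths is hash-based selection, in the spirit of equal-cost multipath (ECMP). A small sketch, with made-up path names:

```python
# Illustrative sketch of hash-based multipath selection: each flow is
# hashed onto one of several equal-cost paths, so different flows use
# different paths at the same time. Path names are hypothetical.

import hashlib

PATHS = ["link-A", "link-B", "link-C", "link-D"]

def pick_path(src: str, dst: str, src_port: int, dst_port: int) -> str:
    """Hash the flow's addressing info onto one of the available paths.
    All packets of one flow take the same path (avoiding reordering),
    while different flows spread across all paths."""
    key = f"{src}:{dst}:{src_port}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return PATHS[digest[0] % len(PATHS)]
```

Because the hash is deterministic, packets of a single flow never switch paths mid-stream, yet the aggregate traffic uses every link.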

Cisco’s Nexus 5000 has two “40” modes, one for Ethernet and one for Fibre mode. These two 40 modes allow the device to serve as a multi-address host that can reach multiple destinations at once without a halt or break in data transmission signals.

Data Center Ethernet does have practicality for everyday life. One long-standing problem has been uploading large files alongside control messages.

Ethernet was once run over a tree network setup, where traffic is forwarded hierarchically. With differentiated classes of service, large files can be placed on one path and control messages can be placed on another — without the large file causing the control messages to stall until the upload finishes.

Now, both the control message and the large file can be sent at the same time without a halt in the transmission process.

There are three Data Center Ethernet standards that were under discussion as early as 2009.

  • Priority-Based Flow Control — IEEE 802.1Qbb
  • Link Scheduling (Enhanced Transmission Selection) — IEEE 802.1Qaz
  • Congestion Notification — IEEE 802.1Qau

Cisco has been working with both its partners and its competitors to make sure that these three standards, now at the center of Data Center Bridging, are met in its products.

Priority-Based Flow Control

Priority-based flow control (or PFC, IEEE 802.1Qbb) has one goal in mind: to create a lossless Ethernet environment so that Fibre Channel can be transported over Ethernet. PFC builds on the priority field defined by IEEE 802.1Q, pausing traffic on a single priority rather than the whole link, and the two standards work in tandem.
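The core idea can be sketched as per-priority queues with per-priority pause — a toy model, not the 802.1Qbb wire format, and the threshold value here is arbitrary:

```python
# Minimal sketch of the PFC idea: each of the eight priorities has its
# own queue; when one queue crosses a threshold, only that priority is
# paused, so other priorities keep flowing and no frame is dropped.

PAUSE_THRESHOLD = 4  # frames; illustrative toy value

class PfcPort:
    def __init__(self):
        self.queues = {p: [] for p in range(8)}  # one queue per priority
        self.paused = set()  # priorities the peer has been asked to pause

    def receive(self, priority, frame):
        self.queues[priority].append(frame)
        if len(self.queues[priority]) >= PAUSE_THRESHOLD:
            self.paused.add(priority)  # pause this priority only

    def drain_one(self, priority):
        """Forward one frame; resume the priority once its queue drops
        back below the threshold."""
        if not self.queues[priority]:
            return None
        frame = self.queues[priority].pop(0)
        if len(self.queues[priority]) < PAUSE_THRESHOLD:
            self.paused.discard(priority)
        return frame
```

Filling the storage priority to its threshold pauses only that priority; frames on every other priority are still accepted and forwarded, which is exactly the lossless-per-class behavior Fibre Channel needs.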

Link Scheduling, or Enhanced Transmission Selection (IEEE 802.1Qaz), is a process by which the link’s bandwidth is divided among traffic classes so that certain pieces of data are transmitted to their intended destination at certain times. The goal of this standard is to make sure two or more classes of traffic do not conflict on the link.
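A rough sketch of the bandwidth-sharing idea behind ETS — the class names and percentage shares below are made up for illustration:

```python
# Illustrative sketch of ETS-style link scheduling: each traffic class
# gets a guaranteed share of the link, and shares a class does not use
# are redistributed to classes that still have demand.

SHARES = {"storage": 50, "lan": 30, "management": 20}  # percent of link

def schedule(demands, link_capacity):
    """Grant each class min(demand, guaranteed share), then hand the
    unused bandwidth to classes whose demand exceeds their share."""
    grant, leftover = {}, 0
    for cls, pct in SHARES.items():
        guaranteed = link_capacity * pct // 100
        grant[cls] = min(demands.get(cls, 0), guaranteed)
        leftover += guaranteed - grant[cls]
    for cls in SHARES:  # work-conserving redistribution, fixed order
        extra = min(demands.get(cls, 0) - grant[cls], leftover)
        if extra > 0:
            grant[cls] += extra
            leftover -= extra
    return grant
```

For example, if storage wants 60% of a fully loaded link while LAN and management are nearly idle, storage can borrow the idle capacity — but the moment the other classes have traffic, their guaranteed shares are honored.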

Congestion Notification (IEEE 802.1Qau) is used to determine the optimal time to send data or to refrain from doing so. In order to make data transmission more efficient, congestion notification is set up to prevent data from being lost in transmission or sent back to the sender.

When the network is congested or busy, sending more data could lead to overload. Congestion notification tells the sender to slow down until other data have been sent; once traffic dies down, your data is transmitted to its intended destination.
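The sender’s reaction can be sketched as rate control: cut the rate sharply on each congestion notification, then recover gradually. The constants below are illustrative, not taken from the 802.1Qau specification:

```python
# Toy sketch of a congestion-notification reaction point: on each
# notification the sender cuts its rate multiplicatively; a periodic
# timer recovers the rate additively toward line rate.

class ReactionPoint:
    def __init__(self, line_rate):
        self.line_rate = line_rate
        self.rate = line_rate  # start at full speed

    def on_congestion_notification(self, feedback):
        """feedback in (0, 1]: stronger congestion -> bigger cut."""
        self.rate *= (1 - 0.5 * feedback)

    def on_timer(self):
        """Periodic additive recovery toward line rate."""
        self.rate = min(self.line_rate, self.rate + 0.05 * self.line_rate)
```

This decrease-fast, recover-slow shape is what lets senders back off before queues overflow, so frames are neither dropped nor bounced back.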

Data Center Ethernet consists of the above technologies, all designed to make the data center network a more efficient traffic environment.
