Understanding Redfish Protocol: A Guide for Data Center Managers
As you update legacy technologies and components for energy efficiency, introduce new hardware to cool your systems, or build new installations, remember this: Redfish®.
Why Redfish®?
When you provision your data center, you need to take the long view. Your plan must support current needs and take both future expansion and modifications into account, including the integration of components from different manufacturers.
The Redfish API was created by DMTF (formerly the Distributed Management Task Force) to enable the interoperability of data center components. Procuring products that support Redfish can ensure successful integration, enabling you to manage your system and its energy consumption, including its thermal equipment.
Keeping Cool
As data centers process data, they generate heat. To function efficiently, individual electronic components need to operate within their optimal temperature ranges; this also optimizes component-level energy consumption. When temperatures rise, internal resistance rises as well; component performance can suffer while overall power consumption increases. In addition, heat must be dissipated to prevent damage to components. Overcooling the system, on the other hand, wastes energy.
According to a recent report sponsored by the U.S. Department of Energy’s Federal Energy Management Program (FEMP), approximately 40% of data center energy consumption is devoted to cooling. Thermal management strategies, cooling design, and equipment selection are therefore critical to reducing power consumption and its costs, and the medium and method of cooling are themselves crucial choices.
Some of the methods include:
- Air-based cooling: CRAC (Computer Room Air Conditioning) units and CRAH (Computer Room Air Handler) units. Smaller centers often rely on these, as do some older systems.
- Liquid cooling technologies: these offer better cooling efficiency and energy savings, and are increasing in use.
- Water-cooled centers: compete for the same pure, potable water that people drink. In drought-stricken Uruguay and Chile, Google’s plans to build data centers sparked protests.
- Direct-to-Chip Cooling (Cold Plate Cooling): a highly efficient method, where coolant circulates through pipes to cold plates mounted on heat-generating components. The cold plates absorb heat and transfer it to the coolant, which is pumped away.
- Immersion Cooling: heat-generating components are submerged in a tank of liquid that absorbs heat and is circulated out of the tank for cooling. This includes single-phase immersion, where the liquid remains liquid as it absorbs heat, and two-phase immersion, where the liquid boils upon absorbing heat and the vapor is condensed back into a liquid. Specialized dielectric fluids are particularly effective at heat transfer.
- Rear Door Heat Exchangers (RDHX): installed on the rear door of a server rack. Hot air exits the rack and passes through the heat exchanger, where liquid coolant absorbs the heat before the air is released back into the data center.
- Hybrid Cooling Systems: these combine liquid cooling with air cooling to optimize the efficiency of both. Liquid cooling might, for example, handle the bulk of the heat load while air cooling manages the residual heat.
Understanding the Redfish Protocol
The Redfish protocol is a RESTful interface that provides a standardized approach to management, enables seamless integration, and offers enhanced security, scalability, and ease of use, making it ideal for modern data center environments.
The Redfish resource tree, starting at the Redfish Service Root, contains all the resources used to model a rack-based Cooling Distribution Unit (CDU). Other types of cooling systems, such as immersion cooling units, follow the same model.
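The sketch below shows one way a management client might walk that tree. The ThermalEquipment and CDUs resource names are assumptions based on recent Redfish schema releases, the service address is hypothetical, and authentication is omitted for brevity; verify the paths against your own service root.

```python
# A minimal sketch of walking the resource tree from the Redfish Service Root
# to the CDU collection. The ThermalEquipment/CDUs path is an assumption from
# recent Redfish schema releases; check it against your service.
import requests

BASE = "https://cdu-manager.example.local"  # hypothetical Redfish service endpoint

def get(path: str) -> dict:
    """Fetch a Redfish resource and return its decoded JSON body."""
    # Management interfaces often use self-signed certificates, hence verify=False.
    resp = requests.get(f"{BASE}{path}", verify=False, timeout=5)
    resp.raise_for_status()
    return resp.json()

root = get("/redfish/v1/")                            # Redfish Service Root
thermal = get(root["ThermalEquipment"]["@odata.id"])  # thermal equipment resource
cdus = get(thermal["CDUs"]["@odata.id"])              # collection of CDUs

for member in cdus["Members"]:
    print(member["@odata.id"])  # e.g. /redfish/v1/ThermalEquipment/CDUs/1
```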
Cooling Units
The Redfish CoolingUnit schema supports different types of gear, such as Cooling Distribution Units (CDUs), connected through Cooling Loops that service the equipment in a single rack. Cooling Loop definitions contain product information and show the connections to and from the loop. This enables software to follow the flow of coolant through its entire cycle in a facility-level cooling system, as well as providing basic inventory functions.
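As a rough illustration of those inventory functions, the snippet below reads one CDU and follows its coolant connectors. Property names such as PrimaryCoolantConnectors and CoolantConnectorType are assumptions drawn from the published CoolingUnit and CoolantConnector schemas; the exact set exposed depends on the implementation.

```python
# A sketch of basic inventory over the CoolingUnit model: read a CDU's product
# information, then follow its primary coolant connectors to see where the
# loop connects. Property names are assumptions; adjust to your service.
import requests

BASE = "https://cdu-manager.example.local"  # hypothetical Redfish service endpoint

def get(path: str) -> dict:
    resp = requests.get(f"{BASE}{path}", verify=False, timeout=5)
    resp.raise_for_status()
    return resp.json()

cdu = get("/redfish/v1/ThermalEquipment/CDUs/1")
print(cdu.get("Manufacturer"), cdu.get("Model"), cdu.get("SerialNumber"))

# Follow the coolant connectors to trace where the loop comes from and goes to.
connectors = get(cdu["PrimaryCoolantConnectors"]["@odata.id"])
for member in connectors["Members"]:
    conn = get(member["@odata.id"])
    print(conn.get("Name"), conn.get("CoolantConnectorType"))
```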
PLCs—Managing Cooling Loops
Programmable logic controllers (PLCs) can both control coolant flow and regulate temperature. PLCs typically use protocols like Modbus, Profibus, or EtherNet/IP for industrial communication. To interface with Redfish, a communication bridge or gateway is used to translate these industrial protocols into Redfish-compliant RESTful commands.
The bridge allows real-time data exchange between the PLC and the data center’s IT management systems via Redfish. For instance, the PLC might report coolant temperatures, pump statuses, or alarms to the Redfish interface.
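A minimal version of such a bridge can be sketched in a few lines. In the example below, read_plc_temperature() is a hypothetical stand-in for the industrial-protocol read (for instance, a Modbus register read); a production gateway would add HTTPS, authentication, and the full Redfish schema.

```python
# A minimal sketch of a PLC-to-Redfish bridge: translate a PLC reading into a
# Redfish-style JSON resource served over HTTP. read_plc_temperature() is a
# hypothetical placeholder for a Modbus/Profibus/EtherNet-IP read.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_plc_temperature() -> float:
    """Placeholder for an industrial-protocol register read."""
    return 24.5  # coolant supply temperature in degrees Celsius (dummy value)

class BridgeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/redfish/v1/Thermal":
            payload = {
                "@odata.id": "/redfish/v1/Thermal",
                "Temperatures": [
                    {"Name": "CoolantSupply",
                     "ReadingCelsius": read_plc_temperature()}
                ],
            }
            body = json.dumps(payload).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), BridgeHandler).serve_forever()
```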
For example, an IT server can initiate an HTTP GET request to the /redfish/v1/Thermal endpoint exposed for the PLC, and the PLC can respond with a JSON payload containing thermal information.
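On the client side, that exchange might look like the following; the endpoint path and field names simply mirror the example above, the bridge address is hypothetical, and a real Redfish service would normally require HTTPS and session authentication.

```python
# A client-side sketch of the GET request described above, using the requests
# library against a hypothetical bridge address.
import requests

BRIDGE = "http://plc-bridge.example.local:8000"  # hypothetical gateway address

resp = requests.get(f"{BRIDGE}/redfish/v1/Thermal", timeout=5)
resp.raise_for_status()

for sensor in resp.json().get("Temperatures", []):
    print(f"{sensor['Name']}: {sensor['ReadingCelsius']} °C")
```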
Conclusion
Implementing the Redfish protocol for comprehensive data center monitoring offers numerous benefits, including enhanced visibility, improved energy efficiency, and secure remote management, and it ensures that your data center components and controls communicate with each other to efficiently balance energy requirements.
A final word: watch for the development of PLCs that natively support Redfish, PLCs that will efficiently balance thermal management and keep your data centers cool.
Complete information regarding Redfish is available in the Redfish Data Model Specification, maintained by the DMTF (formerly known as the Distributed Management Task Force). The DMTF creates open manageability standards spanning diverse emerging and traditional IT infrastructures including cloud, virtualization, network, servers and storage. Member companies and alliance partners worldwide collaborate on standards to improve the interoperable management of information technologies.
List of Data Center Acronyms
ASHRAE American Society of Heating, Refrigerating and Air-Conditioning Engineers
CDU cooling distribution unit
CER cooling efficiency ratio
CoE Center of Expertise
CRAC computer room air conditioning
CRAH computer room air handlers
CUE carbon usage effectiveness
DCEP data center energy practitioner
DX direct expansion
EMCS energy monitoring and control system
EPEAT Electronic Product Environmental Assessment Tool
ERE energy reuse effectiveness
ERF energy reuse factor
FEMP Federal Energy Management Program
Gflop gigaflop
ITEEsv IT Equipment Energy Efficiency for servers
ITEUsv IT Equipment Utilization for servers
MERV minimum efficiency reporting value
NEBS Network Equipment Building System
NREL National Renewable Energy Laboratory
PDU power distribution unit
PUE power usage effectiveness
RCI rack cooling index
REF renewable energy factor
TPU tensor processing unit
VSD variable speed drive
WUE water usage effectiveness