Originally posted on Data Center POST

Byline: Ilissa Miller

DatacenterDynamics held its Connect>New York event in New York City last week. It was a well-attended event with a number of tracks and opportunities to gain insights, learn, and challenge the status quo in the data center industry.

One of the questions pondered during a roundtable discussion at the event was:  What is the biggest challenge and opportunity that could arise in the future due to data center demand driven by AI?

The panelists had great insights, and so too did AI!

Perplexity AI, a conversational AI similar to ChatGPT that uses advanced algorithms and the latest machine learning techniques to handle large amounts of data, understand context, and give coherent answers, responded as follows:

“The biggest challenge that could arise in the future due to data center demand driven by AI is the shortage of wholesale data center space. The AI boom is creating a resource-constrained world, leading to stupendous demand for GPUs, data centers, and AI expertise. Data center developers are effectively running out of wholesale data center space in many major markets, with hyperscale users occupying most of the capacity under development. This scarcity of data center space poses a significant challenge for enterprises looking to run AI workloads.

On the other hand, the opportunity that could arise from this increased demand is the growth and innovation in digital infrastructure to meet these challenges. The digital infrastructure sector has matured, has strong capital support, and a history of innovation to address the challenges posed by the surge in demand for data centers driven by AI. Companies are focusing on expanding the use of alternative energy, smart grids, hybrid grids, and innovative data center designs to deliver reliable solutions while lessening negative impacts on the environment. This presents an opportunity for advancements in technology and sustainability at the intersection of high-density cooling and power infrastructure to support the evolving needs of AI-driven data centers.”

Not to be outdone, its popular counterpart, ChatGPT, offered the following insights:

“One of the biggest challenges posed by the increasing demand for data centers driven by AI is the strain on energy resources and environmental sustainability. AI requires massive amounts of computational power, which in turn demands substantial energy consumption. Balancing this energy usage with the need for sustainability will be crucial in the future.

On the flip side, this demand also presents a significant opportunity for innovation in energy-efficient computing technologies. As the need for data centers grows, so does the incentive to develop more efficient hardware and cooling systems. This could spur advancements in renewable energy integration, novel cooling techniques, and optimized architectures, leading to more sustainable and environmentally friendly data center operations. Additionally, the growing demand for AI-driven services could drive economic growth and job creation in the tech sector.”

Deploying Inference versus Learning AI Nodes

As humans discussed the question, a few key points came up.

Inference and learning AI have different requirements. AI inference is when a trained model applies the knowledge gained during the learning phase to make predictions, classifications, or decisions based on new data; AI training is the process that enables a model to make accurate inferences. Each approach has distinct requirements for ongoing operations. According to the conversation, inference AI calls for edge deployment, and as more and more companies adopt AI tools and inference, urban ‘edge’ data centers will be on the rise.
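To make the distinction concrete, here is a minimal, hypothetical sketch in PyTorch (the model, data, and parameters are illustrative, not drawn from the roundtable discussion) contrasting a training step, which is compute- and memory-hungry, with an inference step, which is lightweight but latency-sensitive:

    import torch
    import torch.nn as nn

    # A toy model standing in for a real AI workload (illustrative only).
    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

    # Training ("learning"): can run nearly anywhere with enough compute and
    # memory; gradients and optimizer state multiply the resource footprint.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    inputs = torch.randn(32, 64)           # a batch of training data
    labels = torch.randint(0, 10, (32,))   # matching labels
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()                        # backpropagation: the expensive step
    optimizer.step()

    # Inference: applies the trained weights to new data; no gradients and far
    # less memory, but often latency-sensitive, hence the pull toward the edge.
    model.eval()
    with torch.no_grad():
        new_sample = torch.randn(1, 64)    # e.g., one reading from a sensor
        prediction = model(new_sample).argmax(dim=1)

The backward pass and optimizer state are what drive training's heavy power and memory demands, while the inference step's cost is dominated by moving data to wherever the model lives, which is why the two workloads favor different facilities.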

Inference AI requires real-time processing; in other words, minimal latency. It also consumes a lot of bandwidth when large volumes of raw data from IoT devices, sensors, or other edge devices are transmitted to a central data center, which can strain networks and incur significant variable costs. Finally, certain applications, especially those involving sensitive data or compliance requirements, may necessitate local data processing for privacy and security.

On the other hand, learning AI can run nearly anywhere. Training can happen offline or be distributed across various computing resources, including cloud servers, on-premises data centers, and even personal devices. Learning AI does require data availability, meaning it must be able to access large amounts of data relevant to the task at hand, along with substantial compute resources, primarily power and memory. The physical location doesn't matter as long as the compute power and memory can support the training workload.

As for the question posed, the opportunity and demand driven by AI are vast and, at best, unpredictable. Key takeaways include:

  • The development of new business models to differentiate between large-scale deployments and general enterprise requirements. For example, Equinix formed a joint venture called xScale to cater to hyperscale customers, which buy large amounts of space and power but don't deliver the margins investors expect; the joint venture was set up to provide a more consistent and reliable return.
  • Retrofitting data centers is expensive; as a result, new builds purposely designed for AI's massive compute requirements are the more cost-effective way to roll out large-scale AI models.
  • Data center providers will need to figure out what to do with the white space in their facilities. Many today average a balance of roughly 60% infrastructure and 40% white space; for AI-specific builds, the balance would be closer to 75% infrastructure and 25% white space.

To continue reading the full article, please click here.