Managing Unpredictability Without Breaking the Bank
In an environment of massive, unpredictable growth, balancing responsiveness with cost efficiency is a key challenge.
This is the fourth in a series of five blog posts reflecting the top-of-mind issues discussed during the most recent Infrastructure Masons Advisory Council meeting.
The digital infrastructure industry is fueled by (or, you could say, a victim of) its own success. Moore’s Law may be dead, but the growth of the digital universe continues unabated.
Industry leaders are still trying to rein it in. “I’m trying to keep up with scale while trying to get to a standard, all while running the infrastructure that serves the business,” explained one Advisory Council member, an end user. “We’re growing and we never seem to get our forecast right.”
Forecasting is difficult, yet the industry craves some level of predictability, said another end user. “You have to attempt to forecast. Otherwise you’re leaving money on the table – yours (colocation provider) or ours (end user).”
A ‘least common denominator’ problem
In a data center build, the largest capital costs come in the form of upfront outlays for facility-level power and cooling. Will that change? “Spending all this money upfront doesn’t make sense anymore. The trend is to shift all that to the rack,” said one partner member. “Then you can provide the right level of reliability, cooling, storage, etc. at the rack level so you don’t have the big investment upfront.”
In other words, the partner suggested, power and cooling gear will get smaller, and as it does, more responsive. “That’s the way the data center of the future will look.”
Today, one end user said, “there’s a huge disconnect between what the facility provides and what the system actually needs. There’s an onus on facility owners to provide what’s needed at that time and dynamically provide power and cooling to the racks that need it.”
But for colocation providers, catering to very specific needs, or being dynamically responsive, can be cost-prohibitive. “We have a least common denominator problem,” said one partner member. “We don’t know who’s going to be in the [data center] room. So we have to build to serve the widest range of possible needs, without overbuilding – which costs too much.”
There has to be some balance, the members agreed, between meeting specific needs and controlling costs. As another partner member explained, “There is a tendency to say, ‘Go microscale and engineer systems that have shorter lifespans and adapt faster and have smaller blast radiuses.’ But then you have more labor requirements, more deployment, more logistics issues. You’re just shifting the cost somewhere else.”
Industry-wide standardization
Standardization across the digital infrastructure ecosystem – network, power, cooling, hardware, software – could be key to solving the problems of unpredictable scale. That’s because standardization introduces components into the system that are changeable, scalable, and repeatable.
In the data center, standardization could allow for optimal space configuration, minimizing the scope of upfront costs. But it’s not easy. As one end user explained, “The common denominators or consistent elements include power, cooling, space, and speed of light. The block has to be modular. But every colo provider’s modular is different. When we sent out RFPs they came back with 13 different designs and everyone has to compromise for something.”
One goal of standardization could be to curtail custom designs that create challenges for other components, without inhibiting the customization that helps businesses keep up with the pace of change. “We need an easier way to control [customization],” one end user said. “We need loose standards. Otherwise the components will remain one-offs.”
One solution, according to one end user, may be to have the government put money behind standardization. “Vendors have trouble because everyone’s chasing the 90-day cycle,” he said. When market failures exist – as in, for example, the gap between data center lifecycles and hardware lifecycles – the government often has to step in to correct them. “That’s not optimal but it is in fact the way our economy works.”
Beyond future-proofing
The Advisory Council members agreed on the need to continue discussing the impact of a rapidly changing digital economy and looking for solutions to the unpredictability that comes with those changes.
As one partner member said, “The 4th industrial revolution will be as impactful to society as the agricultural revolution was. And the core of it is the underlying infrastructure. We need to radically fix the underlying infrastructure. We need to be constantly challenged to not just do the same as we’ve always done.”
Stay tuned for the final installment in our 2018 top-of-mind series and hear from the Advisory Council on software redundancy versus data center resiliency.
Previous posts in the 2018 top-of-mind series: