5 Key Trends: IT and the Future of the Data Center
Colocation, IoT, AI and other trends point to a hybrid approach as key to managing growing workloads

The world of the data center is changing significantly. Rising infrastructure costs, cloud adoption, and emerging technology trends such as the Internet of Things (IoT), artificial intelligence (AI), edge computing, and high-performance computing now demand that enterprises implement more flexible and scalable approaches to manage today's and tomorrow's workloads.

Here are just a few of the challenges facing computing and IT leaders:

  • IT must support an array of applications, each with different performance and security requirements. Not every workload is suitable for the cloud or can easily be moved there: while 93% of companies say they are running in the cloud, only 25% are running production workloads there.
  • Edge computing and the proliferation of IoT are driving the need for an increasingly distributed infrastructure vastly different from existing data center designs. The computing power required for AI and high-performance computing stands to overwhelm traditional data center models.
  • The aging infrastructure of on-premises facilities and the capital-intensive cost of building a new data center combine to create a compelling business case for adding colocation outsourcing.

Trend #1: "Exploding" Cloud Adoption

Cloud-based computing, networking and storage infrastructure and cloud-native applications are standard choices for CIOs in most industries, and in companies of all sizes. According to a recent survey of IT professionals, 83% of enterprise workloads will be in the cloud by 2020, with 41% of enterprise workloads running on public cloud platforms (AWS, Google Cloud Platform, IBM Cloud, Microsoft Azure and others), 20% running on private cloud, and the remaining 22% running on hybrid cloud platforms.

This trend toward the cloud within the enterprise is certainly reflected in growth numbers of key players. Microsoft’s cloud revenue nearly doubled in 4 quarters, and was up 90% year-over-year in Q3 2017. Amazon added nearly $10 billion in revenue in six quarters and was up 45% in Q3 2017 vs. the prior year.

Of the various options available to IT professionals, the hybrid cloud approach remains the most popular among IT organizations. According to research, almost two-thirds of enterprise-sized firms have a hybrid cloud strategy or pilot program in place, and over 80 percent of enterprises deploy workloads using a mix of cloud types. A hybrid IT model and multi-cloud strategy are perceived to be most beneficial because they offer IT professionals more options, easier and faster disaster recovery, and increased flexibility to spread workloads across endpoints.

“If someone asks me what cloud computing is, I try not to get bogged down with definitions. I tell them that, simply put, cloud computing is a better way to run your business.”
– Marc Benioff, Founder, CEO and Chairman of Salesforce

Colocation + Cloud Maximizes Business Flexibility

For many businesses, transformation to the cloud is not a linear journey with one endpoint for all workloads. Rather, cloud transformation is complicated: different applications require different environments. For many organizations, the end goal is to place web servers, media servers and other servers with spiky usage on two public clouds and run a direct connection back to colocation or to an in-house data center where steadier, more predictable applications such as storage or backup are running.

Larger companies and companies with well-developed information technology departments will often opt for colocation, enabling them to rapidly expand the business and respond to changing business conditions while retaining high levels of reliability. In truth, many cloud computing vendors themselves house their systems in existing colocation facilities.

Some companies will also opt for colocation over cloud. In fact, according to recent data, 30% of enterprises operating in the cloud are moving back to an on-premises model as a result of concerns around latency and performance in the public cloud, data sovereignty changes, improved on-premises clouds, and cost.

Trend #2: Enterprise Connectivity

Connectivity is one of the most critical components of a hybrid strategy. To maintain the flow of business, data must move rapidly, cost-effectively, and securely. As applications and end users become increasingly distributed, with workloads hosted at different points across the IT infrastructure (by nature more hybrid), the speed, quality of service (QoS), and the availability and security of network connections become even more critical.

A hybrid model incorporating colocation offers fast, secure interconnections to top public cloud services for fast-lane information exchanges and the ability for businesses to easily extend their corporate WAN when running high volume networks, or where there’s a need to connect to multiple clouds.

Colocation providers also typically offer clients a large choice of Internet Service Providers, so services are always delivered to customers with the lowest possible latency, and provide direct connections between leased data centers for consistent, high-quality connectivity.

Here are some connectivity models to consider when building out your hybrid strategy:

  • Dedicated compute, storage, and connectivity are critical for many workloads hosted at different points across an IT architecture. These workloads are becoming increasingly stretched and more hybrid, and run the risk of becoming disconnected from other resources. In building a connectivity strategy, companies should seek out a data center provider and network provider that work together, or a partner that can provide both services under one roof, to guarantee network quality from doorstep to data center. This improves IT resiliency and performance, and avoids the finger-pointing that can occur when problems arise across multiple providers.
  • With a secure direct connection, everything happens at data center speed. There is no need to go out on the WAN to access external services and wait for them to respond, nor are you at the whim of the Internet. At worst, connectivity between the enterprise and the service provider will be at standard Ethernet speed (1-10 Gbps), but it is entirely possible to plug into a 40 Gbps or even 100 Gbps backbone, or into a virtualized fabric network where port aggregation can provide even greater bandwidth.
  • In this connectivity model, a secure private connection is established between cloud providers and customers outside of the colocation facility. This type of interconnection ‘ecosystem’ is growing in popularity, particularly in industry sectors where groups of companies need to share large data sets (for instance, oil and gas, movie production, pharmaceuticals, genomics) or need to trade information (such as financial services trading systems). As more organizations compute on and share large data sets, demand for colocation-hosted communal meeting points will continue to grow. Colocation facilities offer one of the most reliable routes into the public cloud, backed by guaranteed SLAs to ensure QoS.
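To put those link speeds in perspective, here is an illustrative back-of-the-envelope calculation (assuming ideal throughput with no protocol overhead, which real networks never achieve) of the time needed to move a 1 TB dataset at each tier:

```python
# Illustrative transfer-time estimates for a 1 TB dataset at common
# data center link speeds. Assumes ideal line-rate throughput and no
# protocol overhead -- real-world figures will be somewhat slower.

DATASET_BYTES = 1 * 10**12  # 1 TB (decimal)

LINK_SPEEDS_GBPS = {"1 GbE": 1, "10 GbE": 10, "40 GbE": 40, "100 GbE": 100}

for name, gbps in LINK_SPEEDS_GBPS.items():
    bytes_per_second = gbps * 10**9 / 8      # convert gigabits/s to bytes/s
    seconds = DATASET_BYTES / bytes_per_second
    print(f"{name:>8}: {seconds / 60:6.1f} minutes")
# 1 GbE takes over two hours; 100 GbE moves the same data in about 80 seconds.
```

The two-orders-of-magnitude spread is why direct high-bandwidth interconnects matter so much for hybrid workloads that shuttle data between sites.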

Trend #3: The Internet of Things (IoT) and Edge Computing

By 2020, data generated by the Internet of Things (IoT) could reach 600 zettabytes, or 275X the amount of data generated by IP traffic. From sensors used in industrial manufacturing to driverless vehicles, to smart cities and smart building controls, IoT is creating significant new demands on IT infrastructure giving rise to the concept of edge computing.

Edge computing is essentially a “mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet.” Edge devices and micro data centers process data locally, or near the source of data generation, and send results on to a central data center (or cloud), thereby reducing traffic to the core.
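The pattern described above can be sketched in a few lines. This is a minimal, hypothetical example (the sensor readings and summary fields are invented for illustration): raw data is reduced at the edge so only a compact summary travels upstream.

```python
# Minimal sketch of the edge computing pattern: process raw sensor
# readings locally and forward only a compact summary to the central
# data center, reducing upstream traffic. Readings are hypothetical.

def summarize_at_edge(readings):
    """Local processing at a micro data center: keep only what's needed."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

raw = [21.4, 21.6, 22.1, 21.9, 35.0, 21.5]   # e.g. temperature samples
summary = summarize_at_edge(raw)              # a few fields replace the full stream
print(summary)
```

The central site receives four numbers instead of the entire stream, which is the traffic reduction the edge model is after.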

By 2020, IT spending on edge infrastructure will reach up to 18% of total IoT infrastructure spend. This spending will be driven by the deployment of converged IT and OT systems, which reduce the time to value of data collected from connected devices and give enterprises visibility into, and control of, customer and operational realities. The returns from IoT, however, will far outpace this spending over the same timeframe: it is estimated that by 2022, businesses and their customers will save $1 trillion a year in maintenance and services as a direct result of IoT.

“The Internet will disappear. There will be so many IP addresses, so many devices, sensors, things that you are wearing, things that you are interacting with, that you won’t even sense it.”
– Eric Schmidt, Chairman, Google


While software platforms and solutions will act as a bridge between highly specialized sensor, actuator, compute and networking technology for real-world objects and related business software, deploying an edge strategy and infrastructure presents its own share of risks for an IT organization:

  • Extending the footprint using edge computing presents a security risk as the surface area for attacks is exponentially greater.
  • As new IoT endpoints proliferate, scalability concerns arise.
  • Protocol diversity and immature standards may create obstacles or delay deployment.


Edge computing will replace neither cloud computing nor the data center. Rather, the three will continue to co-exist and complement one another. For instance, sensors in an autonomous vehicle might interact directly with the car’s data processing center to keep vehicle data flowing quickly and smoothly and to reduce strain on the cloud, while the core data center performs deep analytics on data sets forwarded from the edge.

As demand for edge devices and applications increases, this part of the network could see big growth. There are currently 263 million registered passenger vehicles in the United States.

If 10% of US drivers adopt self-driving cars, each generating roughly 4 terabytes of data per day, that fleet would produce about 38.4 zettabytes of data annually. This data must move wirelessly, at scale, across a new digital infrastructure stretching nearly everywhere cars can drive.
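The arithmetic behind that estimate can be checked directly from the figures in the text:

```python
# Back-of-the-envelope check of the autonomous-vehicle data estimate.
# Inputs from the text: 263 million registered US passenger vehicles,
# 10% adoption, 4 TB of data per vehicle per day.

vehicles = 263_000_000
adoption = 0.10
tb_per_vehicle_per_day = 4

daily_tb = vehicles * adoption * tb_per_vehicle_per_day   # ~105 million TB/day
annual_zb = daily_tb * 365 / 10**9                        # 1 ZB = 10^9 TB
print(f"{annual_zb:.1f} zettabytes per year")             # prints "38.4 zettabytes per year"
```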

For this distributed model to happen, IT leaders agree that colocation providers will likely play a role in the next two to three years, with colocation data centers serving as “on ramps” and connected locations for workloads not placed in the cloud.

More than a quarter (26%) of IT professionals surveyed said they would use “mostly colocation providers’ data centers,” while another 38% said they’d use a mix of their own and colocation data centers for data processing that is either less time-sensitive or not needed by the device itself, for instance, running big data analytics on data from devices.

Trend #4: Artificial Intelligence

According to industry analyst firm Forrester Research, 70% of enterprises expect to implement AI this year, and 20% plan to deploy AI to make decisions. AI and machine learning are forecast to be game changers for the next century. Within this context, it will be vital that organizations have the supporting infrastructure to develop AI applications and to provide the speed, performance and computing power needed to manage the massive amounts of Big Data generated by AI and IoT.


IT infrastructures in support of AI will require a true hybrid model comprised of a wide array of resources (compute, storage, input/output and applications), real and virtual, owned and leased, residing across public and private clouds – all interconnected via local and wide area networks.

The heavy computational demands of AI will put the traditional data center model to the test. By its nature, AI is resource intensive, generating large peak demands: even a nominal AI implementation at peak can require 10 times more power and cooling than a typical compute environment, and implementations greater than 50 kilowatts per rack are already being seen. The resulting power densities are often well beyond the capabilities of in-house data centers. A colocation provider whose data centers support GPU-based processing is an ideal partner for enterprises running deep learning applications.
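The rack figures above hang together on simple arithmetic. The 5 kW baseline below is an assumed planning figure for a typical enterprise compute rack (not stated in the text); the 10x multiplier is from the text:

```python
# Rough power-density math behind the AI rack figures.
# ASSUMPTION: ~5 kW per rack as a baseline for a typical compute rack
# (a common planning figure, not from the source). The 10x peak
# multiplier for AI workloads is taken from the text.

baseline_kw_per_rack = 5           # assumed typical enterprise rack
ai_peak_multiplier = 10            # per the text: ~10x power and cooling

ai_kw_per_rack = baseline_kw_per_rack * ai_peak_multiplier
print(f"AI rack at peak: ~{ai_kw_per_rack} kW")   # ~50 kW, consistent with reported deployments
```

Many in-house facilities were designed to cool only a few kilowatts per rack, which is why densities in this range push enterprises toward purpose-built colocation space.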

Trend #5: High Performance Computing

The world of supercomputing is growing at breakneck speed. Connected devices now far exceed the number of humans on the planet, linking us to the world through billions of things that sense, think, and act across a global network. High performance computing (HPC) is considered critical by world governments for national security, scientific leadership and economic prosperity, and is being employed to tackle some of the world’s biggest and most pressing issues, from modeling global warming to predicting natural disasters such as earthquakes.

In HPC, computing power is aggregated in a way to deliver much higher performance than one could get out of a typical desktop computer or workstation to solve the large problems presented in science, engineering, or business.

By 2021, 50 percent of data centers will use solid state arrays (SSAs) for high-performance computing and big data workloads – up from less than 10 percent today.

High performance computers are actually clusters of computers, or nodes. These nodes work together to solve problems larger than a single computer can easily handle (for instance, analyzing large datasets), communicating over a high-speed interconnect to coordinate their work.
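That divide-and-combine idea can be illustrated in miniature. In this sketch, worker processes stand in for cluster nodes (real HPC clusters use many machines and an MPI-style interconnect, not a single host): each "node" analyzes a slice of the data, and the partial results are combined at the end.

```python
# Miniature analogy for HPC cluster computing: worker processes stand in
# for cluster nodes, each analyzing a slice of a large dataset, with
# partial results combined at the end (a simple map-reduce pattern).
from multiprocessing import Pool

def analyze_slice(numbers):
    """Each 'node' computes a partial sum of squares over its slice."""
    return sum(x * x for x in numbers)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_nodes = 4
    chunk = len(data) // n_nodes
    slices = [data[i * chunk:(i + 1) * chunk] for i in range(n_nodes)]

    with Pool(n_nodes) as pool:
        partials = pool.map(analyze_slice, slices)   # slices run in parallel

    total = sum(partials)   # combine, as cluster nodes do over the interconnect
    print(total)
```

The same answer could be computed on one machine; the point of the cluster is that each node only holds and processes a fraction of the data, so the problem size can grow far beyond any single computer's capacity.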

In a hybrid model, HPC environments are paired with the traditional on-premise data center, which helps accelerate data workloads.

“There are problems you cannot solve, (not) today, you can’t solve them with Exascale … But in some cases, machine learning combined with quantum computing may be a way to solve problems that otherwise would never be solved with normal digital computing techniques.”
– Karl Freund, consulting lead HPC and deep learning, Moor Insights & Strategy


The technologies required for HPC have become more commoditized in recent years, which now makes colocation an option, particularly for organizations with more generalized IT needs. If your business demands near real-time data analytics and requires HPC-strength processing to answer highly complex questions, seek out a colocation vendor with HPC cluster computing and in-house managed services expertise in managing big data workloads.

That said, colocation for HPC still has specific requirements. If considering turning to a third-party colocation provider, take time to evaluate their capabilities against the power, emergency power and cooling requirements demanded by HPC.


Is your business embracing the cloud, seeking to improve enterprise connectivity, or impacted by broad technology advancements (IoT, AI, edge, HPC)? Are you lacking the compute resources and IT skill sets required to manage applications internally? If so, it may be time to seek out a partner to guide you in formulating a flexible, modern hybrid solution. Partnering with a data center colocation vendor with HPC cluster computing, cloud expertise, and in-house managed services can help enterprises more efficiently and skillfully strategize, design, build and implement a hybrid strategy to manage the big data workloads driven by these significant compute trends.

No two organizations are alike, and each IT strategy is unique. With the right partner, however, organizations can align their IT strategy, locations, vendors, technologies, and applications to ensure IT performs optimally and is positioned to power future workloads.

Start The Conversation

Want to learn more about how to unlock the potential of your data infrastructure? Talk to an infrastructure solutions expert today and find out how Aptum can help!

Get Started