The Future of the Data Center

By Ashmeet Sidana

February 2016

I was a teenager on a typical hot, dusty afternoon in Pilani, India when I walked into a data center for the first time—and it was love at first sight! In front of me was an early computer, an IBM 1130. I was mesmerized by the blinking lights, the rows of switches and the clattering punch-card reader, so much so that I was oblivious to the shock of the 30-degree temperature drop I had just experienced.

Data centers have come a long way since the days of the IBM 1130. However, their purpose hasn’t changed since the world’s first one took shape at Bletchley Park, UK in 1943. It began with the world’s first programmable, electronic, digital computer, the Colossus, built by Tommy Flowers on foundations laid by Alan Turing’s codebreaking work, to crack the “unbreakable” Lorenz cipher used by the German High Command during World War II. They didn’t call it that in 1943, but the room around the Colossus was, of course, the world’s first data center. In fact, even before the Colossus, Turing had already written his famous 1936 paper laying the underpinnings of modern computer science: all programmable computers are essentially equivalent, and it follows that a complete computer can, in principle, be built around a single instruction. So why do we build data centers, when they are all, in effect, just running one instruction over and over again?
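That last claim refers to what computer architects now call a one-instruction set computer (OISC). A minimal sketch in Python of the classic “subleq” instruction (subtract and branch if less than or equal to zero) shows that a single operation really is enough to compute with; the program layout and halt convention below are just illustrative choices:

```python
# A one-instruction computer: every instruction is the same triple
# (a, b, c), meaning  mem[b] -= mem[a]; jump to c if the result <= 0.
# A negative address is used here as a halt convention.

def run_subleq(mem, pc=0):
    while 0 <= pc and pc + 2 < len(mem):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        if a < 0 or b < 0:             # negative address: halt
            break
        mem[b] -= mem[a]               # the one and only operation
        pc = c if mem[b] <= 0 else pc + 3

# Program: add the numbers stored at X and Y into Z, using nothing but
# subleq. Cells 0-8 are code, 9-11 are a halt, 12-15 are data.
X, Y, Z, T = 12, 13, 14, 15            # T is a scratch cell starting at 0
prog = [
    X, T, 3,        # T -= X   -> T = -X,       fall through
    Y, T, 6,        # T -= Y   -> T = -X - Y,   fall through
    T, Z, 9,        # Z -= T   -> Z = X + Y,    fall through
    -1, -1, -1,     # halt
    5, 7, 0, 0,     # X = 5, Y = 7, Z = 0, T = 0
]
run_subleq(prog)
print(prog[Z])      # prints 12
```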

Make Magic Possible

Data centers are built to process data, and it is their technology that makes the Facebook news feed so addictive, Google search so powerful, and Amazon shopping so lucrative. In the beginning, data centers, such as those running the IBM 1130, were built around a single machine. Each machine was expensive and demanded constant care and feeding from skilled specialists to keep it running. This evolved into groups of machines in the 1980s, but it remained an essentially high-maintenance endeavor.

Then came what would later become Google as we know it today. It all started with research at Stanford in the 1990s to scale a search engine, not by one or two orders of magnitude, but by three and even four. The founders needed a new way to index data at a scale that had never before been considered, and it led to a reimagining of the data center. At the same time, VMware commercialized and popularized an old idea, virtualization, which loosened the bonds between software and hardware and made servers more fluid. Since then, data centers have changed dramatically on several levels: physically, conceptually and in the marketplace.

The most obvious change in data centers can be seen at the physical level. The early machines required many administrators to manage and maintain them. The ratio of humans to machines flipped when we moved to minicomputers, and then to workstations, which were easier to use and cheaper to maintain. Today, data centers are composed of vast rooms, and often entire warehouses, of machines maintained almost entirely through automation. A modern data center routinely contains tens of thousands of machines.

Pet or Cattle?

This physical change is symbolic of a conceptual change. Originally, the data, the programs and the people all worked together in the same physical location. With the advent of cloud computing, a program is a virtual entity whose execution may be distributed across the globe. There’s a well-worn phrase in the IT world that cheekily captures this paradigm shift: “Is your server a pet or is it cattle?” The data center of old demanded high-touch management of and care for each box, like a beloved pet, whereas today’s data center consists of interchangeable computers, none of which is special. The latter requires far less administration and, therefore, fewer people.
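A toy sketch makes the cattle model concrete. The functions provision, destroy and is_healthy below are hypothetical stand-ins for a cloud provider’s API, not any real one; what matters is the logic: an unhealthy server is destroyed and replaced from a common image, never repaired.

```python
import random
import uuid

GOLDEN_IMAGE = "web-server-v42"   # every server is stamped from one image

# Hypothetical stand-ins for a cloud provider's API (illustration only).
def provision(image: str) -> str:
    return f"{image}-{uuid.uuid4().hex[:6]}"

def destroy(server: str) -> None:
    print(f"terminated {server}")

def is_healthy(server: str) -> bool:
    return random.random() > 0.1   # pretend 10% of servers fail checks

def reconcile(fleet: list[str], desired: int) -> list[str]:
    """The 'cattle' model: cull unhealthy servers and provision identical
    replacements; never nurse an individual box back to health."""
    healthy = [s for s in fleet if is_healthy(s)]
    for server in set(fleet) - set(healthy):
        destroy(server)
    while len(healthy) < desired:
        healthy.append(provision(GOLDEN_IMAGE))
    return healthy

fleet = [provision(GOLDEN_IMAGE) for _ in range(5)]
fleet = reconcile(fleet, desired=5)   # run periodically; no pets survive
```

A pet, by contrast, would get a login session and a sysadmin’s afternoon.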

When the boxes changed, the leading companies in the space changed along with them. In the 1980s and 1990s there was a standard formula of equipment in every data center: servers from HP, Dell or IBM; networking from Cisco or Juniper; your database was Oracle, and your operating system was Microsoft. Most of these incumbents are nowhere to be found in the new landscape. Microsoft looks to be the only legacy player that will continue to thrive, thanks to its belated but rapid embrace of this new reality. It sits alongside Amazon Web Services (AWS), Google, Facebook and Alibaba as the new masters of the data center universe. These companies built their own proprietary systems, often using open source, and are even beginning to build their own chips. They have leveraged these systems into massive businesses, renting out their technology to other companies that lack the scale to build and manage it themselves. This is the so-called “Cloud” (of computers), and it has made computing tremendously efficient for everyone, from startups and enterprises to governments.

What lies ahead for data centers? We can look at this both qualitatively and quantitatively. The qualitative future of data centers lies in their conceptual evolution. While there will always need to be a room full of computers somewhere, a data center is now available on demand, as a service, to anyone with a credit card and the technical knowledge to leverage it. The magic and power of the new data center is that it is an enabler for human imagination, and there are no limits to the solutions the human imagination can create. Just as the modern data center freed tech workers to spend less time administering and more time solving problems, the solutions they develop will bring us more freedom in our everyday lives.
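Concretely, “a credit card and the technical knowledge” can mean just a few lines of code. Here is a sketch using AWS’s boto3 library; the AMI ID and region are placeholders, and it assumes valid AWS credentials are already configured.

```python
import boto3  # the AWS SDK for Python

# Rent ten servers, on demand, from a data center you will never see.
ec2 = boto3.client("ec2", region_name="us-west-2")
response = ec2.run_instances(
    ImageId="ami-12345678",        # placeholder machine image ID
    InstanceType="t2.micro",
    MinCount=10,
    MaxCount=10,
)
for instance in response["Instances"]:
    print("launched", instance["InstanceId"])
```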

Quantitatively, the commercial reality of the data center market is hundreds of billions of dollars every year. This massive prize has attracted a plethora of entrepreneurs and startups. Innovation very often comes from startups, and the rapid and revolutionary changes we’re seeing in data centers provide fertile ground for them. As the founder of Engineering Capital, I have the privilege of investing in and working with some of the leading entrepreneurs and startups in this space. This provides a unique vantage point, and based on what I’m seeing, I expect several new, independent, billion-dollar companies to be created from such startups.

Massive Prize and Big Behemoths

Big players have also been attracted to this market. AWS leaped to an early lead in infrastructure. It has amassed enormous scale and rich functionality and, consequently, holds a significant advantage. AWS is focused on traditional IT, looking to lift and shift existing workloads to the Cloud.

Google, on the other hand, has powerful technology in-house, though they have not yet applied it to traditional IT. Their machine learning technology, in particular, is excellent, and they will attempt to leverage it as their competitive advantage. Google is focused on cloud-native applications, which are built from the ground up to run not on one machine, but on tens or hundreds of machines in parallel, in a distributed fashion.
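One famous published example of this style is Google’s MapReduce. Below is a toy sketch of the pattern in Python, with a local process pool standing in for a fleet of machines; the point is that the same map and reduce functions would scale unchanged from four workers to thousands of servers.

```python
from collections import Counter
from concurrent.futures import ProcessPoolExecutor

def map_count(shard: list[str]) -> Counter:
    """Map phase: each worker counts words in its own shard of documents."""
    counts = Counter()
    for doc in shard:
        counts.update(doc.lower().split())
    return counts

def reduce_counts(partials: list[Counter]) -> Counter:
    """Reduce phase: merge every worker's partial counts into one result."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    docs = [
        "the cloud is the new data center",
        "the data center is now a service",
        "rent the cloud with a credit card",
    ]
    shards = [docs[i::4] for i in range(4)]   # split the corpus 4 ways
    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(map_count, shards))
    print(reduce_counts(partials).most_common(3))
```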

I already mentioned Microsoft as the one legacy player that appears to be crossing the divide into this new market. They are leveraging their franchises in Office, SQL Server and SharePoint, any of which could prove a silver bullet given their deep relationships with enterprises – small, medium and large.

Each of the incumbents sits on tens of billions of dollars in cash. They are not accustomed to losing, so I expect a fierce fight.

Let the games begin! 
