There are still plenty of unanswered questions about 5G, but it’s becoming evident that it will leave a lasting mark on our world. With peak speeds of 10 gigabits per second, it stands to be roughly 100 times faster than 4G. Then there’s the greater bandwidth and less-than-a-millisecond latency. The implications for mobile internet connectivity, driverless cars, drones, smart homes and cities, and electric grids, plus many other technologies, are impressive indeed.
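
To put that 100x figure in perspective, here’s a back-of-the-envelope comparison in Python. The 4G rate of roughly 100 megabits per second is an assumption for illustration; real-world throughput varies widely.

```python
# Rough transfer-time comparison: ~100 Mbps (assumed 4G) vs. 10 Gbps (5G peak).
FILE_SIZE_GB = 2  # e.g., an HD movie

def transfer_seconds(size_gb: float, rate_gbps: float) -> float:
    """Seconds to move size_gb gigabytes at rate_gbps gigabits per second."""
    return (size_gb * 8) / rate_gbps

print(f"4G (~0.1 Gbps): {transfer_seconds(FILE_SIZE_GB, 0.1):.0f} s")  # ~160 s
print(f"5G (10 Gbps):   {transfer_seconds(FILE_SIZE_GB, 10):.1f} s")   # ~1.6 s
```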

But what about the venerable data center? How will these important structures change once 5G becomes an official part of the mainstream? First, we need to understand the limitations of our current infrastructure and what they mean for the rollout of 5G.

What’s Getting in the Way of 5G’s Rollout?

Fault tolerance is one of the biggest reasons 5G seems to be in a holding pattern right now. The systems mentioned above, like smart P2P electric grids (plus some we haven’t mentioned, like telemedicine and remote robotic surgery), are extremely intolerant of faults. Surgeons are already performing successful operations via robotic arms and 5G connectivity from as far as 30 miles away from the operating room. Needless to say, systems like these cannot fail during use. And neither can communication between autonomous cars and smart city infrastructure.

With 4G, our devices connect to one piece of infrastructure at a time: one cell tower, which hands off to the next tower, and so on. But with 5G, our devices and equipment will have to communicate with multiple cell towers and other pieces of infrastructure at once, so that the failure of any single link can’t interrupt service. You may hear this arrangement referred to as “spherical coverage.”
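
To see why multiple simultaneous links help, consider a toy simulation (the tower names and 1% failure rate below are invented for illustration, not real figures). If each link fails independently, a transmission only fails when every link goes down at once:

```python
# Toy model of "spherical coverage": a transmission succeeds if at least
# one of the device's tower links is up. All failure rates are invented.
import random

def link_up(failure_rate: float) -> bool:
    """Simulate whether a single tower link works for this transmission."""
    return random.random() > failure_rate

def success_rate(links: dict[str, float], trials: int = 100_000) -> float:
    """Fraction of transmissions that get through on at least one link."""
    wins = sum(any(link_up(r) for r in links.values()) for _ in range(trials))
    return wins / trials

one_tower = {"tower_a": 0.01}
three_towers = {"tower_a": 0.01, "tower_b": 0.01, "tower_c": 0.01}
print(f"one tower:    {success_rate(one_tower):.4%}")     # ~99%
print(f"three towers: {success_rate(three_towers):.4%}")  # ~99.9999%
# Independent failures compound: 0.01 ** 3 is one total outage in a million.
```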

A lot of the showiest demonstrations of 5G so far involve the “last mile” of internet connectivity: the link between handheld or IoT devices and local network routers or cell towers. But most of the rest of the internet’s backend, our servers and data centers, isn’t yet equipped to handle either the speeds or the low latency of 5G connections.

The simple way to describe the problem is that data processing and server facilities need to move closer to the edge of our networks.

How Data Centers Must Change to Accommodate 5G

Internet speeds over 4G are slow enough that most of us don’t notice the latency that comes from shuttling data packets across hundreds or thousands of miles. At 5G’s sub-millisecond targets, though, distance itself becomes the bottleneck: a signal in fiber physically cannot travel much more than a hundred kilometers and back within a single millisecond. What this means is that data centers across the world will have to become far less centralized, and much more geographically diffuse, than they are today. There simply isn’t any other way to realize the blindingly fast speeds and low latency that 5G promises.
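
A rough propagation calculation makes the point. Assuming signals in fiber travel at about two-thirds the speed of light (and ignoring every other source of delay):

```python
# Best-case round-trip time from distance alone, assuming fiber carries
# signals at ~2/3 the speed of light. Real paths are slower still.
FIBER_KM_PER_S = 300_000 * 2 / 3  # ~200,000 km/s

def round_trip_ms(distance_km: float) -> float:
    """Milliseconds for a signal to reach a site distance_km away and return."""
    return (2 * distance_km / FIBER_KM_PER_S) * 1000

for km in (10, 100, 1000):
    print(f"{km:>5} km away: {round_trip_ms(km):.2f} ms round trip")
# 10 km: 0.10 ms | 100 km: 1.00 ms | 1000 km: 10.00 ms
# Past ~100 km, propagation alone exhausts a 1 ms latency budget.
```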

“Micro data centers,” also called “containerized” data centers, are one way forward. To get a sense of what this could look like in practice, picture a miniature data center attached to each cell tower site. Then picture many more cell towers than we have now.

Building out this kind of infrastructure will get us part of the way there and will make it possible to deploy 5G-powered Internet of Things devices across a considerable geographic area without noticeable latency. But what about larger, industrial-scale data processing tasks? This is a slightly different story.

Enterprises that rely on large-scale data transmission and processing will be able to strategically place new data centers with relative ease. As this happens, smaller companies will either be left behind or will have to turn to colocation services to piggyback their traffic onto this new infrastructure as it comes together.

Whether large or small, the goal for companies that rely on the accumulation, analysis, and distribution of data is to move the processing equipment closer to where the data is generated: the end user. Under our existing computational models, our services and devices have to send data out to the cloud, then on to “core” data processing infrastructure, and then back again. But this model simply isn’t fast enough for 5G, nor for the remarkable functionality and emerging technologies it will help us unlock.
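
As a purely hypothetical latency budget, here’s what that difference can look like. Every hop value below is an assumption for illustration, not a measurement:

```python
# Hypothetical hop-by-hop latency budgets (all values are illustrative
# assumptions) contrasting today's cloud-and-core round trip with
# processing at an on-site edge facility.
core_path_ms = {
    "device to cell tower": 1.0,
    "tower to regional cloud": 5.0,
    "cloud to core data center": 15.0,
    "processing": 2.0,
    "return trip": 21.0,
}
edge_path_ms = {
    "device to cell tower": 1.0,
    "tower to on-site micro data center": 0.1,
    "processing": 2.0,
    "return trip": 1.1,
}

for name, hops in (("cloud-and-core", core_path_ms), ("edge", edge_path_ms)):
    print(f"{name}: {sum(hops.values()):.1f} ms total")
# cloud-and-core: 44.0 ms vs. edge: 4.2 ms -- the long-haul hops dominate.
```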

What 5G Really Represents

The term “general-purpose technology,” or GPT, refers to a branch of technology with the ability to impact, disrupt and improve an entire nation’s economy. As we come to terms with 5G’s practical requirements and build out our shared and proprietary infrastructure accordingly, it will enter the realm of GPTs alongside previous entries like the steam engine, interchangeable parts, automobiles and the internet itself. Even with the growing pains, we’re looking at a sea change in how people and data services communicate with one another.

By some industry projections, humankind will possess 75 billion internet-connected devices by 2025. And with 5G, all of those devices will be able to transmit more wireless data, more quickly, than at any point in our history.

But reaching that point will require the public and private sectors to lay the groundwork — and that means rethinking the size and placement of data centers and the emergence of new business models for sharing our data transmission and processing capabilities.

About the Author

Nathan Sykes writes about business technologies on his blog, Finding an Outlet. To read his latest articles, follow him on Twitter @nathansykestech.