A new development on the Jersey Shore is signaling a shift in how and where AI infrastructure will grow. A subsea cable landing station has announced plans for a data hall built specifically for AI, complete with liquid-cooled GPU clusters and an advertised PUE of 1.25. That number reflects a well-designed facility, but it also highlights an emerging reality. PUE only tells us how much of a facility's power reaches the IT load; at 1.25, roughly 80 percent of incoming power does. It tells us nothing about how much work that power actually produces.
As more “AI-ready” landing stations come online, the industry is beginning to move beyond energy efficiency alone and toward compute productivity. The question is no longer just how much power a facility uses, but how much useful compute it generates per megawatt. That is the core of Power Compute Effectiveness, PCE. When high-density AI hardware is placed at the exact point where global traffic enters a continent, PCE becomes far more relevant than PUE.
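To make the distinction concrete, here is a minimal sketch of how the two metrics answer different questions. It assumes PCE is expressed as delivered compute per megawatt of facility power; the exact definition, units, and the illustrative numbers below are assumptions, not figures from any specific facility.

```python
# Rough sketch of how PUE and PCE answer different questions.
# Assumption: PCE is modeled here as delivered compute per megawatt of
# facility power; real-world definitions and units may vary by operator.

def pue(total_facility_mw: float, it_load_mw: float) -> float:
    """PUE: total facility power divided by IT equipment power."""
    return total_facility_mw / it_load_mw

def pce(delivered_petaflops: float, total_facility_mw: float) -> float:
    """PCE (as sketched here): useful compute produced per MW of facility power."""
    return delivered_petaflops / total_facility_mw

# Illustrative numbers: a 10 MW facility delivering 8 MW to its IT load.
print(pue(10.0, 8.0))    # 1.25 -- the advertised efficiency figure
print(pce(400.0, 10.0))  # 40.0 PFLOPS per MW -- the productivity figure
```

Two facilities can share the first number while differing sharply on the second, which is the gap the rest of this piece explores.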
To understand why this matters, it helps to look at the role subsea landing stations play. These are the locations where international undersea cables come ashore. They carry banking transactions, streaming video, enterprise applications, gaming traffic, and government communications. Most people never notice them, yet they are the physical beginning of the global internet.
For years, large data centers moved inland, following cheaper land and more available power. But as AI shifts from training to real-time inference, location again influences performance. Some AI workloads benefit from sitting directly on the network path instead of hundreds of miles away. This is why placing AI hardware at a cable landing station is suddenly becoming not just possible, but strategic.
A familiar example is Netflix. When millions of viewers press Play, the platform makes moment-to-moment decisions about resolution, bitrate, and content delivery paths. These decisions happen faster and more accurately when the intelligence sits closer to the traffic itself. Moving that logic to the cable landing station reduces distance, delay, and the potential for bottlenecks. The result is a smoother user experience.
Governments have their own motivations. Many countries regulate which types of data can leave their borders. This concept, often called data sovereignty, simply means that certain information must stay within the nation’s control. Placing AI infrastructure at the point where international traffic enters the country gives agencies the ability to analyze, enforce, and protect sensitive data without letting it cross a boundary.
This trend also exposes a challenge. High-density AI hardware produces far more heat than traditional servers. Most legacy facilities, especially multi-tenant carrier hotels in large cities, were never built to support liquid cooling, reinforced floors, or the weight of modern GPU racks. Purpose-built coastal sites are beginning to fill this gap.
And here is the real eye-opener. Two facilities can each draw 10 megawatts, yet one may produce twice the compute of the other. PUE will give both of them the same high efficiency score because it cannot see the difference in output. Their actual productivity, and even their revenue potential, could be worlds apart.
PCE and ROIP, Return on Invested Power, expose that difference immediately. PCE reveals how much compute is produced per watt, and ROIP shows the financial return on that power. These metrics are quickly becoming essential in AI-era build strategies, and investors and boards are beginning to incorporate them into their decision frameworks.
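The two-facility comparison can be worked through in a few lines. The sketch below uses purely illustrative numbers, and the revenue-per-petaflop rate is a hypothetical stand-in; none of these figures come from the article. The point is the shape of the result: identical PUE, but a twofold gap in PCE and ROIP.

```python
# Two facilities, each drawing 10 MW with identical PUE, but with
# different compute output. All numbers are illustrative assumptions.

facilities = {
    "Site A": {"facility_mw": 10.0, "it_mw": 8.0, "petaflops": 400.0},
    "Site B": {"facility_mw": 10.0, "it_mw": 8.0, "petaflops": 800.0},
}

REVENUE_PER_PETAFLOP = 1_000.0  # hypothetical $/PFLOPS of delivered capacity

for name, f in facilities.items():
    pue = f["facility_mw"] / f["it_mw"]       # identical for both sites
    pce = f["petaflops"] / f["facility_mw"]   # compute produced per MW
    roip = (f["petaflops"] * REVENUE_PER_PETAFLOP) / f["facility_mw"]  # $/MW
    print(f"{name}: PUE={pue:.2f}  PCE={pce:.0f} PFLOPS/MW  ROIP=${roip:,.0f}/MW")
```

Both sites print a PUE of 1.25, yet Site B shows double the PCE and double the ROIP, which is exactly the difference PUE alone cannot surface.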
What is happening at these coastal sites is the early sign of a new class of data center. High density. Advanced cooling. Strategic placement at global entry points for digital traffic. Smaller footprints but far higher productivity per square foot.
The industry will increasingly judge facilities not by how much power they receive, but by how effectively they turn that power into intelligence. That shift is already underway, and the emergence of AI-ready landing stations is the clearest signal yet that compute productivity will guide the next generation of infrastructure.
# # #
About the Author
Paul Quigley is the former President and current Chief Strategic Partnership Officer of Airsys Cooling Technologies, and a global advocate for high-density, energy-efficient data center design. With more than three decades in HVAC and mission-critical cooling, he focuses on practical solutions that connect energy stewardship with real-world compute performance. Paul writes and speaks internationally about PCE, ROIP, and the future of data center health in the age of AI.