You may not know the term “hyperscaler,” but you certainly know who they are: Meta, Google, Amazon, Apple, and Microsoft. These companies have stratospheric market caps and spend billions annually on AI and cloud computing. Their combined influence on, and spending across, the technological landscape may shape the future of research computing in several notable ways:
- The “smallest” hyperscaler – Meta – has a market capitalization (~$1.25 trillion) roughly equivalent to the combined market capitalization of nearly all “traditional” computing companies, such as Intel, AMD, Cisco, and IBM. If money talks, the combined fiscal power of hyperscalers means that they will control the direction of research computing moving forward. Even research computing purchases of millions of dollars in a given year are little more than “noise” compared to the budgets of hyperscalers.
- One consequence of this is that the research computing needs of everyone else – including higher education and the rest of industry – are at risk of being ignored and left largely unsupported.
- Hyperscalers are already creating proprietary hardware that will never be for sale to end users, meaning that access to such technology will require use of their compute ecosystems. This likewise challenges the relevance to research computing of traditional companies that manufacture components such as processors, GPUs, and networking equipment.
- Critically, this raises questions about the reproducibility and portability of basic scientific and research workflows. A core tenet of science is that results must be testable and reproducible by others. But if a research workflow can only run within the “walled garden” of a particular hyperscaler, what does that mean?
Will any or all of this come to pass? It is unclear. Other pressures beyond the scope of this post, such as the seeming end of Moore’s Law and exponentially growing energy demands, are also creating challenges for everyone, including hyperscalers. Murky as it may be, the future of research computing at any scale is certainly interesting!