
Hyperscale computing


In computing, hyperscale is the ability of an architecture to scale appropriately as demand on the system increases.

This typically involves the ability to seamlessly provision and add compute, memory, networking, and storage resources to a given node or set of nodes that make up a larger computing, distributed computing, or grid computing environment. Hyperscale computing is needed to build robust and scalable cloud, big data, MapReduce, or distributed storage systems, and it is often associated with the infrastructure required to run large distributed sites such as Google, Facebook, Twitter, Amazon, Microsoft, IBM Cloud, or Oracle Cloud. Companies such as Ericsson, AMD, and Intel provide hyperscale infrastructure kits to IT service providers,[1] while companies such as Scaleway, Switch, Alibaba, IBM, QTS, Digital Realty Trust, Equinix, Oracle, Meta, Amazon Web Services, SAP, Microsoft, and Google build data centers for hyperscale computing.[2][3][4][5] Such companies are sometimes called "hyperscalers"; they are recognized for their massive scale in cloud computing and data management, operating environments that require extensive infrastructure for large-scale data processing and storage.[6]
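
The following is a minimal, hypothetical sketch (not taken from any particular vendor's system) of the horizontal-scaling idea described above: capacity is adjusted by adding or removing identical nodes as demand changes, rather than by enlarging a single machine. All names, thresholds, and figures here are illustrative assumptions only.

    import math

    # Illustrative assumption: one node comfortably serves about 100 requests/second.
    TARGET_LOAD_PER_NODE = 100
    MIN_NODES = 1

    def desired_node_count(current_demand: float) -> int:
        """Return how many nodes keep per-node load near the target."""
        needed = math.ceil(current_demand / TARGET_LOAD_PER_NODE)
        return max(MIN_NODES, needed)

    def reconcile(cluster_nodes: list[str], current_demand: float) -> list[str]:
        """Grow or shrink the node pool so capacity tracks demand."""
        desired = desired_node_count(current_demand)
        while len(cluster_nodes) < desired:      # scale out: provision another node
            cluster_nodes.append(f"node-{len(cluster_nodes)}")
        while len(cluster_nodes) > desired:      # scale in: release a surplus node
            cluster_nodes.pop()
        return cluster_nodes

    if __name__ == "__main__":
        nodes = ["node-0"]
        for demand in (50, 450, 1200, 300):      # simulated demand over time
            nodes = reconcile(nodes, demand)
            print(f"demand={demand:>5} rps -> {len(nodes)} node(s)")

In practice this reconciliation loop would be driven by monitoring data and would provision real machines or virtual instances; the sketch only shows the scale-out/scale-in decision itself.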

References

  1. ^ "Ericsson to sell Intel's hyperscale kit to network operators".
  2. ^ "Hyperscale data center expert Switch files for IPO". www.datacenterdynamics.com. Retrieved 2019-08-19.
  3. ^ "GIC to Fuel Equinix's Hyperscale Market Ambition". Data Center Knowledge. 2019-07-02. Retrieved 2019-08-19.
  4. ^ "QTS Delivers Hyperscale Data Center to Non-Hyperscale Client in Ashburn". Data Center Knowledge. 2018-08-13. Retrieved 2019-08-19.
  5. ^ "Hyperscaling is trending". Journal du Net (in French). 2020-09-14. Retrieved 2020-09-14.
  6. ^ "How Hyperscalers Can Maximize Data Storage Capabilities". parachute.cloud. Retrieved 2023-12-26.