Compute, Costs, and Competition: The Economics of AI Hardware

We’ve all noticed Artificial Intelligence being integrated into ever more forms of technology. This is putting pressure on a key resource: computational power.

The Growing Demand for AI Compute

Recent developments in AI, such as reasoning-based LLMs and image generation models, are fuelling an ever-growing need for high-performance computing.

To quantify this growth, let’s look at some figures from a Gartner press release published in July 2025:

  • Last year, an estimated $333.4 billion was spent on data centre systems worldwide
  • This year, spending on data centre systems is expected to reach $474.9 billion, a 42.4% increase driven primarily by AI demand
  • “… spending on AI optimized servers, which was virtually non-existent in 2021, expected to triple that of traditional servers by 2027” (Lovelock)
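Gartner’s 42.4% growth figure follows directly from the two spending totals, as this short arithmetic check shows (the variable names are ours, not Gartner’s):

```python
# Sanity check on the quoted growth rate in data centre systems spend.
spend_last_year = 333.4  # estimated worldwide spend, $ billions
spend_this_year = 474.9  # forecast worldwide spend, $ billions

growth_pct = (spend_this_year - spend_last_year) / spend_last_year * 100
print(f"{growth_pct:.1f}%")  # → 42.4%
```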

To provide some specific examples, Microsoft announced in a blog post that it is on track to invest approximately $80 billion in its 2025 financial year to build out “AI-enabled datacenters to train AI models and deploy AI and cloud-based applications around the world”. Mark Zuckerberg, founder, chairman and CEO of Meta (formerly known as Facebook), posted on social media that Meta would “invest hundreds of billions of dollars into compute to build superintelligence” and announced that it is building “several multi-GW [gigawatt] clusters”.

Clearly, companies are competing to build the largest and most powerful data centres as a means to build the most capable AI models, and this competition is a primary driver of ongoing advances in AI hardware.

Commercial Interest and Advantages of Specialised AI Hardware

Several companies have developed custom specialised hardware to push ahead of their competitors, such as Google’s Tensor Processing Unit (TPU), Amazon’s AWS Trainium and AWS Inferentia, and Graphcore’s Intelligence Processing Unit (IPU).

Innovation in AI-specialised hardware has the following benefits:

  • Increasing the total compute available for AI applications, which enables larger and higher-performance AI models
  • Lowering the cost per unit of compute, which lowers the barrier to entry and enables broader development in the AI field
  • Lowering the demand for electricity, which can reduce power usage or free up more compute within the available power
  • Reducing environmental impact by cutting the power consumed in providing AI applications and services

For example, Google uses its own TPUs to perform training and inference faster and more efficiently, enabling it to offer its Gemini LLMs at more competitive price points and with higher performance. Similarly, Amazon provides compute for AI workloads on AWS at lower cost using its custom hardware specialised for training or inference.

The ever-increasing spend on AI hardware may not be sustainable from a cost or energy perspective, and Google and Amazon are showing that more efficient systems may reap the greatest rewards.

Interest in AI-specialised hardware can also be gauged from the investments flowing into the new wave of companies focussed on creating specialised hardware for AI applications. For example:

  • Graphcore was acquired by SoftBank Group in 2024, having previously raised $222 million in a Series E funding round in 2020. Across all funding rounds, it raised $767 million.
  • Groq, Inc. builds ASICs for accelerating AI inference, notably its Language Processing Unit (LPU). In 2024 it raised $640 million in a Series D funding round led by Cisco Investments, bringing its total raised across all funding rounds to $1 billion.

Conclusion

Demand for AI compute capacity will inevitably continue to grow, so innovation in the underlying hardware will be necessary to support the expansion of AI. That innovation must improve the efficiency of AI systems; greater efficiency will reduce the environmental impact of AI, sustain progress in its development, and cut the cost of everything from training to inference, for everyone.

As with all innovation, patents play a major role in protecting these developments and enabling proprietors to commercialise and benefit from their work. As AI hardware innovation races to keep up with ever bigger and better AI models, the patent landscape around specialised AI hardware is shaping up to be a crowded and contested space. We look forward to seeing who comes out on top over the coming years.

This blog was co-authored by Rebecca Frith, Henry Suen and Luke Jones.

Rebecca Frith

Rebecca is a patent attorney working in our engineering team at Mewburn Ellis. She has a first-class MEng degree in General Engineering from Durham University, where she specialised in electronics. After graduating, she worked for three years at a technology consulting firm as an electronics and firmware engineer. As a technology consultant, Rebecca worked on a variety of research and development projects for the defence and aerospace industries, including projects in computer vision, data security in machine learning, sensing devices, radar modelling, radio communications and safety-assured electronics design.

Email: rebecca.frith@mewburn.com 

Henry Suen

Henry is a trainee patent attorney in the Engineering practice group at Mewburn Ellis. His areas of expertise include Artificial Intelligence (AI) software, networks, distributed systems and data storage devices. Henry graduated with a first-class degree in Computer Science with Artificial Intelligence from the University of Leeds. His final year project explored methods of reconstructing audio from spectrogram images using algorithmic and machine-learning approaches, proposing this as an alternative form of audio compression.

Email: henry.suen@mewburn.com