New TUM training model slashes AI energy consumption

Artificial intelligence (AI) has become an integral part of modern life, powering applications from chatbots to image recognition.

However, the energy consumption of AI systems, particularly large language models (LLMs), has raised concerns about sustainability.

These systems rely on data centres, which require vast amounts of electricity for computing, storage, and data transmission. In Germany alone, data centres consumed approximately 16 billion kWh in 2020 – accounting for around 1% of the country’s total energy usage.

By 2025, this figure is projected to rise to 22 billion kWh, reflecting the increasing demand for AI-powered services.

To combat this issue, experts at the Technical University of Munich (TUM) have developed a novel training method that slashes AI energy consumption significantly.

What drives AI energy consumption?

It is increasingly evident that the energy consumption of AI poses a significant environmental challenge.

The core of this issue lies in the immense computational power required to train and operate advanced AI models. These models necessitate processing vast datasets, leading to prolonged and intensive use of powerful hardware such as GPUs and TPUs, which consume large amounts of electricity.

This high energy demand is further amplified by the reliance on AI operations in data centres, which require substantial power for both computation and cooling.

According to research cited by Built In, generating a single image with an AI image generator can use as much energy as fully charging a smartphone – a tangible illustration of how much power AI consumes.

Additionally, the International Energy Agency (IEA) has highlighted that interactions with AI systems like ChatGPT could consume significantly more electricity than standard search engine queries.

The IEA also estimates that the growth in electricity consumption by data centres, cryptocurrencies, and AI between 2022 and 2026 could be equivalent to the electricity consumption of Sweden or, at the upper end, Germany – underlining the scale of the problem.

Furthermore, reports project a substantial increase in data centre energy consumption in the coming years, driven largely by the proliferation of AI.

For example, McKinsey & Company projects that power demand from data centres in the United States will reach 606 terawatt-hours (TWh) by 2030, up from 147 TWh in 2023 – a roughly fourfold increase, driven in large part by AI.

To address this challenge, TUM researchers have developed a revolutionary training method that is 100 times faster while maintaining accuracy comparable to existing techniques.

This breakthrough has the potential to significantly reduce AI energy consumption, making large-scale AI adoption more sustainable.

Understanding neural networks

AI systems rely on artificial neural networks, which are inspired by the human brain. These networks consist of interconnected nodes – artificial neurons – that process input signals.

Each connection is weighted with specific parameters, and when the input exceeds a threshold, the signal is passed forward.

Training a neural network involves adjusting these parameters through repeated iterations to improve predictions. However, this process is computationally expensive and contributes to high electricity usage.
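For illustration, the sketch below shows what conventional iterative training looks like in practice: a small NumPy network fitted by gradient descent, where every weight is nudged a little over thousands of loop iterations. The network size, learning rate, and toy task are arbitrary choices for this example, not details from the TUM work.

```python
import numpy as np

# Illustrative only: a minimal one-hidden-layer network trained by
# gradient descent, to show why iterative training is computationally costly.

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(x) on [-3, 3]
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X)

# Parameters of a network with 64 hidden neurons
W1 = rng.normal(0, 1, size=(1, 64))
b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, size=(64, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(2000):          # many iterations: this loop is the expensive part
    # Forward pass: weighted sums passed through a nonlinearity
    H = np.tanh(X @ W1 + b1)      # hidden activations
    pred = H @ W2 + b2            # network output
    err = pred - y

    # Backward pass: gradients of the mean squared error
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = err @ W2.T * (1 - H**2)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    # Parameter update: every weight is adjusted slightly, thousands of times
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final training error:", float((err**2).mean()))
```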

A more efficient training method

Felix Dietrich, a professor specialising in physics-enhanced machine learning, and his research team have introduced an innovative approach to neural network training.

Instead of relying on traditional iterative methods, their technique employs probabilistic parameter selection.

The method identifies critical points in the training data – locations where values change rapidly and significantly – and assigns parameter values there by drawing from probability distributions.

By targeting key locations in the dataset, this approach dramatically reduces the number of required iterations, leading to substantial energy savings.
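As a rough sketch of how such a sampling-based scheme can work, the example below scores pairs of training points by how steeply the target changes between them, samples hidden-layer weights from the resulting probability distribution, and then fits only the linear output layer with a single least-squares solve. The specific pair-sampling rule, weight construction, and readout step here are assumptions made for illustration; they are not taken from the TUM publication.

```python
import numpy as np

# Simplified illustration of probabilistic, data-driven weight selection.
# The pair-sampling rule and least-squares readout below are assumptions
# for the sake of the sketch, not the TUM group's actual algorithm.

rng = np.random.default_rng(0)

# Same toy task as above: learn y = sin(x) on [-3, 3]
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X)

n_hidden = 64

# 1. Score random pairs of data points by how steeply the target changes
#    between them; steep pairs ("critical points") get higher probability.
i = rng.integers(0, len(X), size=2000)
j = rng.integers(0, len(X), size=2000)
keep = i != j
i, j = i[keep], j[keep]
steepness = np.abs(y[i] - y[j]).ravel() / (np.linalg.norm(X[i] - X[j], axis=1) + 1e-9)
prob = steepness / steepness.sum()

# 2. Sample hidden weights from that distribution: each hidden neuron is
#    built from one sampled pair, oriented along the direction of change.
picks = rng.choice(len(i), size=n_hidden, p=prob, replace=False)
diff = X[j[picks]] - X[i[picks]]                      # direction of change
scale = 1.0 / (np.linalg.norm(diff, axis=1, keepdims=True)**2 + 1e-9)
W1 = (diff * scale).T                                 # shape (1, n_hidden)
b1 = -np.sum(W1.T * X[i[picks]], axis=1)              # zero-crossing at the pair's first point

# 3. No iterative training of the hidden layer: only the linear output layer
#    is fitted, in closed form, with a single least-squares solve.
H = np.tanh(X @ W1 + b1)
A = np.column_stack([H, np.ones(len(X))])
W2, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ W2
print("training error:", float(((pred - y)**2).mean()))
```

Because the hidden weights are sampled rather than optimised, the only fitting step left is the single linear solve at the end. In approaches of this kind, that replacement of thousands of gradient updates with one solve is where the savings in iterations – and hence energy – come from.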

Real-world applications

This new training technique holds immense potential for a variety of applications. Energy-efficient AI models could be used in climate modelling, financial market analysis, and other dynamic systems that require rapid data processing.

By reducing the energy footprint of AI training, this method not only lowers operational costs but also aligns AI development with global sustainability goals.

A greener AI future

The rapid expansion of AI applications necessitates a sustainable approach to energy consumption.

With data centre electricity usage expected to rise, adopting energy-efficient training methods is crucial. The breakthrough by the TUM team marks a significant step towards making AI more environmentally friendly without compromising performance.

As the technology evolves, innovations like this will play a pivotal role in shaping a more sustainable digital future.
