Blog

Summer blog - In the pursuit of a greener GenAI

31.07.2024

During the summer, the Knowledge Centre Data & Society offers a platform for partner organisations. This week, researcher Alexandra Papageorgiou from the KU Leuven Centre for IT & IP Law (CiTiP) shares her insights on Generative AI and sustainability.

The use of Generative AI (GenAI) is undoubtedly becoming an indispensable part of our lives. But as its models grow more complex, their appetite for vital, already scarce resources like water and energy keeps growing. Our survival seems more threatened by the development and operation of resource-hungry AI models than by sci-fi androids turning against humanity. The recently published EU AI Act includes sustainability-related provisions, but their reach is rather limited. Fortunately, there are solutions through which both GenAI itself and we ourselves can contribute to solving the problem.


Green leaves on circuit board
Image by Balaraw via Adobe Stock

GenAI: ally or enemy in the fight against climate change?

Just a couple of months following its release in November 2022, ChatGPT was found to be the fastest-growing app in history, with an astonishing total of 180.05 million users as of June 2024. GenAI has significantly altered the way we work, express our creativity, and handle everyday tasks, but it has also shown huge potential in helping us mitigate bigger challenges, such as climate change, disaster response, and cultural heritage preservation, as well as in accelerating scientific development in areas such as agriculture, education and healthcare.

However, despite the potential benefits of GenAI technologies, there are serious concerns about their massive environmental impact. Big data centres, filled with supercomputers, are in constant, and growing, need of two otherwise valuable resources: water and energy. Amongst other things, these supercomputers are used to run AI models like ChatGPT. According to a recent study, training GPT-3 alone in Microsoft's US data centre can consume up to 5.4 million litres of water, and a simple conversation of 10-50 responses with the model can consume roughly an additional 0.5-litre bottle of water. These amounts increase with each new version.
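A quick back-of-envelope calculation illustrates what these figures mean at scale. The sketch below reuses the study's estimates cited above; the daily conversation count is a purely hypothetical assumption chosen for illustration, not a figure from the study.

```python
# Back-of-envelope water-footprint estimate using the figures cited above.
TRAINING_WATER_L = 5_400_000       # est. water to train GPT-3 (litres), per the study
WATER_PER_CONVERSATION_L = 0.5     # est. per 10-50 response conversation (litres)

# Hypothetical usage volume -- an assumption for illustration only.
ASSUMED_DAILY_CONVERSATIONS = 10_000_000

daily_inference_water_l = ASSUMED_DAILY_CONVERSATIONS * WATER_PER_CONVERSATION_L
days_to_match_training = TRAINING_WATER_L / daily_inference_water_l

print(f"Daily inference water use: {daily_inference_water_l:,.0f} L")
print(f"Days of use matching the training footprint: {days_to_match_training:.2f}")
```

Under these assumptions, everyday use would match the entire training footprint in about a day, a reminder that inference, not just training, drives consumption at scale.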

Data center
Image from Rawpixel

These numbers are particularly worrisome when one thinks of the current state of affairs: heat records have continued to break in recent years, with the Global South, and especially its vulnerable populations, suffering the most from the effects of climate change. Not that the North is able to escape the catastrophes that come with the disturbance of ecosystems: unprecedented droughts and deadly heatwaves have been increasingly tormenting, amongst others, many southern European countries, the US, and Canada. Growing AI energy consumption also puts a huge strain on the global energy supply, testing the limits of utilities.

A common misconception should be clarified at this point: although the role of AI is dual, with AI systems helping us fight climate change while also contributing to it, GenAI technologies designed to combat climate change (such as smart grids, climate modelling and precision agriculture) are not the same as the ones that depend heavily on resources (such as Large Language Models (LLMs) like ChatGPT).


Image by KlausKre via Commons Wikimedia

Sustainability in the AI Act

Matters linked to the resource consumption of AI systems could not be left out of the recently adopted EU AI Act. Included in its subject matter, environmental protection is listed among the main interests the Act seeks to safeguard:

Technical documentation.
Responding to calls for greater transparency of AI model development, one of the obligations incumbent on providers of general-purpose AI (GPAI) models is to report the 'known or estimated energy consumption of the model' as part of their technical documentation. This documentation has to be kept up to date and readily available in case the AI Office or national competent authorities request it at any point. Surprisingly enough, this obligation does not apply to high-risk AI systems.

Codes of conduct for voluntary application of specific requirements.
In collaboration with the Member States, the AI Office can draw up codes of conduct including clear objectives and KPIs, which can seek to measure the minimisation of AI impact on sustainability (e.g., through energy-efficient programming and techniques for the efficient design, training and use of AI). The application of those codes remains, nevertheless, voluntary.

Waiver of conformity assessments.
For reasons of environmental protection, products that have not undergone a conformity assessment could be authorised to enter the market or be put into service.

Regulatory sandboxes.
Aiming at encouraging the development of sustainable AI systems, AI technologies that prioritise sustainability are granted preferential access to regulatory sandboxes.

Despite taking a first decisive step towards recognising the importance of sustainability in the regulation of AI systems, the AI Act's provisions are rather mild and thus inadequate to address the issue sufficiently. Supplementary policy proposals have already been put forward in an attempt to reinforce sustainable AI regulation.

Solutions that come from AI development itself

In response to such concerns, the scientific community has started shifting its attention to technical solutions that make GenAI training and operation less 'hungry', or at least 'feed' it more wisely.

More computationally efficient algorithms. As most of the energy consumed for the training and functioning of AI systems is due to the constant moving of data around, scientists drew inspiration from the human brain and developed an AI model whose layers are trained independently. When fed with new information, the model adjusts ‘on the spot’, thus avoiding the need for this information to ‘run’ across all layers.

Developing hardware and software that require less energy. Industry and academia should direct their efforts towards developing less energy-dependent hardware and software.

Training smaller models with less data. Although larger datasets lead to more accurate models, the rate of improvement in accuracy slows notably beyond a certain point. Against this backdrop, significantly smaller models that require less data and less training have started to gain momentum, and can even outperform larger LLMs in terms of accuracy and efficiency.

Better choices of when and where to train models. The time and place where AI models are trained and operated can decisively influence their 'appetite' for resources. Training during night hours, when energy demand falls, or in water-efficient data centres (located in cooler regions rather than hotter ones, or using seawater instead of potable water) can significantly lower AI models' water and energy footprint.
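The idea of scheduling training for low-impact hours can be sketched very simply. The function and the hourly intensity profile below are illustrative assumptions, not tools or data from the article: given (hypothetical) hourly grid carbon-intensity values, the job is placed in the consecutive window with the lowest total intensity.

```python
# Illustrative sketch of impact-aware scheduling: pick the lowest-impact
# start hour for a training job, given hourly grid carbon intensity (gCO2/kWh).
def best_training_window(hourly_intensity, window_hours):
    """Return the start hour whose consecutive window has the lowest total intensity."""
    best_start, best_total = 0, float("inf")
    for start in range(len(hourly_intensity) - window_hours + 1):
        total = sum(hourly_intensity[start:start + window_hours])
        if total < best_total:
            best_start, best_total = start, total
    return best_start

# Hypothetical day: intensity dips in the small hours and peaks in the evening.
intensities = [210, 190, 180, 175, 180, 200, 250, 300, 340, 360, 350, 330,
               320, 310, 320, 340, 380, 420, 430, 400, 350, 300, 260, 230]
print(best_training_window(intensities, 4))  # → 1 (a 4-hour job starts at 01:00)
```

Real carbon-aware schedulers weigh forecasts, deadlines and data-centre location, but the principle is the same: shift flexible workloads to the hours and places where resources are cheapest for the environment.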

. . . but from us as well

While companies bear the largest responsibility, individuals also play a role in this resource-reduction equation. ChatGPT's recently imposed usage limit was vaguely justified by fair access and prevention of system abuse; we cannot help but see in such restrictions an opportunity for resource preservation. Optimising the use of GenAI services by avoiding unnecessary or redundant questions, and instead focusing on comprehensive, well-thought-out ones, can help users get the most out of their interaction with the system while minimising its resource consumption. A universal yet fair fee could also help further incentivise responsible use of LLMs like ChatGPT.

The idea of the human race competing with GenAI and other technologies for essential goods feels not only daunting, but also like an oxymoron.

Technology should work for humanity, not against it – the challenge before us should not be a battle for resources, but to find ways to develop GenAI without compromising our commitment to sustainability.

Author


Alexandra Papageorgiou is a researcher at the KU Leuven Centre for IT and IP Law (CiTiP).



The Knowledge Centre Data & Society does not bear responsibility for the content of the blog posts featured in our 'Summer' blog series, and therefore, we will not engage in any correspondence regarding the content. In case of questions, comments, or concerns: contact the author(s)!