Addressing the massive climate and energy costs of AI

Scientists from the University of Cambridge say the research community needs to take responsibility for the environmental impacts of cloud computing and artificial intelligence (AI) to avert potentially uncontrolled increases in greenhouse gas emissions.

Writing in Nature Computational Science, the public health researchers argue that the environmental impact of relying on high-powered computing and artificial intelligence in science is often overlooked.

Co-author and biomedical data scientist Dr Loïc Lannelongue says scientists need to act to reduce the carbon footprint of their work to “ensure that the benefits of our discoveries are not outweighed by their environmental costs.”

In 2020, the Information and Communication Technologies sector was estimated to have made up between 1.8% and 2.8% of global greenhouse gas emissions – more than aviation (1.9%), the report says. 

Those emissions result largely from the electricity used to manufacture hardware and power data centres, and from the energy embodied in raw materials.

Co-author Professor Michael Inouye says, “while new hardware, lower-energy data centres and more efficient high performance computing systems can help reduce their impact, the increasing ubiquity of artificial intelligence and data science more generally means their carbon footprint could grow exponentially in coming years if we don’t act now.”

The authors have proposed a set of principles to guide researchers in the use of AI and computing tools, using the acronym GREENER (Governance, Responsibility, Estimation, Energy and embodied impacts, New collaborations, Education and Research).

The principles highlight the need for scientists to understand and quantify, report on and take responsibility for the environmental impacts of computing used in their research. 

Scientists can then make choices to reduce those impacts, such as ensuring computation and data storage – both onsite and outsourced – are powered by renewable energy. Coding choices, too, can affect carbon footprints.
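Lannelongue has co-developed a calculator in this spirit (Green Algorithms), which estimates a job's footprint as its energy use multiplied by the carbon intensity of the local grid. The sketch below is a rough, first-order illustration of that kind of estimate; the function names and all default figures (power per core, memory power draw, data-centre PUE, grid carbon intensity) are illustrative assumptions, not measured values.

```python
# Rough sketch of an energy-times-carbon-intensity estimate for a
# computing job. All defaults below are illustrative assumptions.

def job_energy_kwh(runtime_h, n_cores, power_per_core_w, mem_gb,
                   power_per_gb_w=0.3725, usage=1.0, pue=1.67):
    """Energy in kWh: runtime x (core + memory power draw) x data-centre PUE."""
    power_w = n_cores * power_per_core_w * usage + mem_gb * power_per_gb_w
    return runtime_h * power_w * pue / 1000.0

def carbon_footprint_g(energy_kwh, carbon_intensity_g_per_kwh=475.0):
    """Emissions in grams of CO2-equivalent: energy x grid carbon intensity."""
    return energy_kwh * carbon_intensity_g_per_kwh

if __name__ == "__main__":
    # A hypothetical day-long job on 16 cores with 64 GB of memory.
    e = job_energy_kwh(runtime_h=24, n_cores=16, power_per_core_w=12, mem_gb=64)
    print(f"energy: {e:.2f} kWh, footprint: {carbon_footprint_g(e) / 1000:.2f} kg CO2e")
```

Even this crude arithmetic makes the levers visible: running on a lower-carbon grid, in a more efficient data centre (lower PUE), or for fewer core-hours all shrink the footprint multiplicatively.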

As experts draw attention to the energy cost of computing, particularly generative AI, they are also coming forward with solutions.

Friederike Rohde, based at Germany’s Institute for Ecological Economy Research, studies the social and environmental impacts of digitisation and machine learning.

The environmental and climate impacts of machine learning are growing as AI models become larger and more complex, she says.

Rohde has collaborated on sustainability criteria and indicators for AI systems, published in SustAIn Magazine.

The approach takes a “holistic sustainability perspective”, considering the ecological, social and economic impacts of machine learning AI systems, and includes criteria and indicators as well as an assessment tool for organisations developing and deploying those systems.

Criteria include step-by-step social, environmental and economic considerations at each phase in the development and use of machine learning systems: planning and design; data; development and implementation.

Social sustainability refers to basic needs, human rights and the empowerment of people, along with aspects discussed in AI ethics such as transparency, accountability and non-discrimination, she says.

Environmental sustainability encompasses energy consumption, greenhouse gas emissions and the sustainability potential of the application.

Economic criteria include issues such as market concentration and worker conditions.

Given its high energy and environmental cost, Rohde wants people considering developing or using machine learning to first ask whether it is necessary or appropriate for the task.

“Are the risks outweighing the problems we want to solve with this technology?”
