Environment

Training a Single Artificial-Intelligence Model Can Emit as Much Carbon as Five Cars in Their Lifetimes

Researchers at the University of Massachusetts, Amherst, performed a life-cycle assessment for training several common large AI models. They found that the process can emit more than 626,000 pounds of carbon dioxide equivalent—nearly five times the lifetime emissions of the average American car.

[Image: banks of servers inside a data center. Credit: Dean Mouhtaropoulos/Getty; edited by MIT Technology Review.]

The artificial-intelligence (AI) industry is often compared to the oil industry: Once mined and refined, data, like oil, can be a highly lucrative commodity. Now it seems the metaphor may extend even further. Like its fossil-fuel counterpart, the process of deep learning has an outsize environmental impact.

In a new paper, researchers at the University of Massachusetts, Amherst, performed a life-cycle assessment for training several common large AI models. They found that the process can emit more than 626,000 pounds of carbon dioxide equivalent—nearly five times the lifetime emissions of the average American car (and that includes manufacture of the car itself).

It’s a jarring quantification of something AI researchers have suspected for a long time. “While probably many of us have thought of this in an abstract, vague level, the figures really show the magnitude of the problem,” said Carlos Gómez-Rodríguez, a computer scientist at the University of A Coruña in Spain, who was not involved in the research. “Neither I nor other researchers I’ve discussed them with thought the environmental impact was that substantial.”

The Carbon Footprint of Natural-Language Processing

The paper specifically examines the model training process for natural-language processing (NLP), the subfield of AI that focuses on teaching machines to handle human language. In the last two years, the NLP community has reached several noteworthy performance milestones in machine translation, sentence completion, and other standard benchmarking tasks. OpenAI’s infamous GPT-2 model, as one example, excelled at writing convincing fake news articles.

But such advances have required training ever-larger models on sprawling data sets of sentences scraped from the Internet. The approach is computationally expensive—and highly energy intensive.

The researchers looked at four models in the field that have been responsible for the biggest leaps in performance: the Transformer, ELMo, BERT, and GPT-2. They trained each one on a single GPU for up to a day to measure its power draw. They then used the number of training hours listed in each model's original paper to calculate the total energy consumed over the complete training process. That number was converted into pounds of carbon dioxide equivalent based on the average energy mix in the US, which closely matches the energy mix used by Amazon's AWS, the largest cloud services provider.
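To make that conversion concrete, here is a minimal Python sketch of the calculation, assuming illustrative values throughout: the power draw, data-center overhead (PUE), training hours, and grid-emissions factor below are placeholder assumptions standing in for the measured and published numbers the researchers actually used.

```python
# Back-of-the-envelope version of the estimate described above.
# All values are illustrative assumptions, not the paper's reported measurements.

measured_power_kw = 1.4      # assumed average hardware power draw during the benchmark run, in kW
pue = 1.58                   # assumed data-center overhead (power usage effectiveness)
training_hours = 120_000     # assumed total training time taken from a model's original paper
co2e_lb_per_kwh = 0.954      # assumed average US grid emissions factor, in lb CO2e per kWh

# Total energy: measured draw, scaled by data-center overhead, over the full training time
energy_kwh = measured_power_kw * pue * training_hours

# Convert energy consumed into pounds of carbon dioxide equivalent
co2e_lb = energy_kwh * co2e_lb_per_kwh

print(f"Estimated energy consumed: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {co2e_lb:,.0f} lb CO2e")
```

Whatever the specific inputs, the structure of the estimate stays the same: energy is power times overhead times hours, and emissions are energy times the grid's carbon intensity.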
