Generative AI is the hot new technology behind chatbots and image generators. But how hot is it making the planet?
As an AI researcher, I often worry about the energy costs of building artificial intelligence models. The more powerful the AI, the more energy it takes. What does the emergence of increasingly powerful generative AI models mean for society’s future carbon footprint?
“Generative” refers to the ability of an AI algorithm to produce complex data. The alternative is “discriminative” AI, which chooses between a fixed number of options and produces just a single output, such as a label or a score.
Generative AI can create much more complex outputs. It has long been used in applications such as smart speakers to generate audio responses, or in autocomplete to suggest a search query. However, it only recently gained the ability to generate humanlike language and realistic photos.
The exact energy cost of a single AI model is difficult to estimate. In 2019, researchers found that creating a generative AI model called BERT with 110 million parameters consumed the energy of a round-trip transcontinental flight for one person. The number of parameters refers to the size of the model, with larger models generally being more skilled. Researchers estimated that creating the much larger GPT-3, which has 175 billion parameters, consumed 1,287 megawatt hours of electricity and generated 552 tons of carbon dioxide equivalent, about as much as 123 gasoline-powered passenger vehicles driven for one year. And that’s before any consumers start using it.
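The car comparison is a straightforward conversion, which a short calculation can make concrete. The sketch below assumes a per-car figure of roughly 4.6 metric tons of CO2 per year, a commonly cited U.S. EPA estimate; the study behind the article may have used a slightly different conversion factor.

```python
# Back-of-the-envelope check of the GPT-3 training figure cited above.
TRAINING_EMISSIONS_TONS = 552   # tons of CO2 equivalent, from the study
CAR_TONS_PER_YEAR = 4.6         # assumed annual emissions of one passenger car

# Divide total training emissions by one car's annual emissions
# to get "cars driven for one year" equivalents.
car_years = TRAINING_EMISSIONS_TONS / CAR_TONS_PER_YEAR
print(round(car_years))  # → 120, close to the 123 figure in the article
```

The small gap between 120 and 123 simply reflects the exact per-car factor the researchers used.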
Size is not the only predictor of carbon emissions. The open-access BLOOM model, developed by the BigScience project in France, is similar in size to GPT-3 but has a much lower carbon footprint. A study by Google found that for the same size, using a more efficient model architecture and processor and a greener data center can reduce the carbon footprint by 100 to 1,000 times.
Larger models do use more energy during their deployment. There is limited data on the carbon footprint of a single generative AI query, but some industry figures estimate it to be four to five times higher than that of a search engine query. As chatbots and image generators become more popular, and as Google and Microsoft incorporate AI language models into their search engines, the number of queries they receive could grow exponentially.
If chatbots become as popular as search engines, the energy costs of deploying the AIs could really add up.
Another problem is that AI models need to be continually updated. If ChatGPT had to be recreated regularly to update its knowledge, the energy costs would grow even larger.
One upside is that asking a chatbot can be a more direct way to get information than using a search engine. Getting to the information quicker could potentially offset the increased energy use compared with a search engine.
The future is hard to predict, but large generative AI models are here to stay, and people will probably increasingly use them.
While a single large AI model is not going to ruin the environment, if a thousand companies develop slightly different AI bots for different purposes, each used by millions of customers, energy use could become an issue. The good news is that AI can run on renewable energy. By bringing the computation to where green energy is more abundant, or scheduling computation for times of day when renewable energy is more available, emissions can be reduced by a factor of 30 to 40.
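The scheduling idea above — running computation when and where green energy is plentiful — is sometimes called carbon-aware scheduling. The sketch below illustrates the principle with made-up numbers; the forecast values and function are hypothetical, not from any real grid API.

```python
# A minimal sketch of carbon-aware scheduling: given a hypothetical hourly
# forecast of grid carbon intensity (grams of CO2 per kWh), start the job
# in the cleanest hour instead of immediately.
def greenest_hour(forecast):
    """Return the index of the hour with the lowest forecast carbon intensity."""
    return min(range(len(forecast)), key=lambda h: forecast[h])

# Hypothetical 6-hour forecast: intensity dips midday as solar output peaks.
forecast = [450, 380, 120, 90, 300, 420]
print(greenest_hour(forecast))  # → 3, the 90 g/kWh slot
```

Real systems would pull such forecasts from grid operators and also weigh deadlines and data-center location, but the core idea is this simple comparison.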
Kate Saenko is a professor in Boston University’s Department of Computer Science. Distributed by The Associated Press.