Not surprisingly, researchers are working on new methods with a view to reducing the carbon footprint of these machines.
In June, the American company OpenAI unveiled the world’s largest text generator. Called GPT-3, the new artificial intelligence (AI) model can, among other things, write creative fiction and translate legal jargon into plain English, two capabilities made possible by deep learning. However, beyond these technological breakthroughs, it is important to bear in mind that creating this new tool generated an enormous amount of pollution.
Deep learning can be polluting
The extent to which deep learning and computing pollute is often overlooked. A recent study by the University of Massachusetts showed that training a single deep learning model, which can take hours or even days, can produce up to 283,000 kilograms of carbon dioxide equivalent. That is roughly the emissions of five automobiles over their entire lifetimes.
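The arithmetic behind that comparison can be checked in a couple of lines; the per-car figure below is simply the study's total divided evenly across five cars, not a number from the study itself.

```python
# Illustrative arithmetic only: splitting the reported training
# emissions evenly across the five-car lifetime comparison.
training_emissions_kg = 283_000   # CO2-equivalent from the study
cars = 5                          # lifetime emissions of five automobiles

per_car_lifetime_kg = training_emissions_kg / cars
print(per_car_lifetime_kg)        # 56600.0 kg per car over its lifetime
```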
Since the 1950s, the development of artificial intelligence has required more and more energy, which in turn has led to a steep rise in the pollution it creates. We have now reached a point where the scientific community in the field fears that if the issue is not resolved soon, the very future of deep learning will be at risk. So if deep learning is to survive, the machines, programs and computing equipment behind it will have to use much less energy. Researchers are therefore working on new methods to make the process radically more efficient.
New greener algorithms
Drawing inspiration from the human brain, which does not attend to every detail of a visual scene, researchers at the Massachusetts Institute of Technology (MIT) have announced a technique that unpacks a scene in a few glances, as humans do, by selecting only the most relevant data.
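The underlying idea of processing only the most informative parts of a scene can be sketched as scoring candidate regions and keeping the top few. The grid of random "saliency scores" and the top-k rule below are placeholder assumptions for illustration, not MIT's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "image": a 6x6 grid of region scores, where a high value
# represents a visually salient region worth a closer look.
regions = rng.random((6, 6))

def glance(regions, k=3):
    """Keep only the k most salient regions and skip the rest,
    mimicking how a quick human glance ignores low-information detail."""
    flat = regions.ravel()
    keep = np.argsort(flat)[-k:]           # indices of the k highest scores
    mask = np.zeros_like(flat, dtype=bool)
    mask[keep] = True
    return mask.reshape(regions.shape)

mask = glance(regions, k=3)
print(mask.sum(), "of", regions.size, "regions processed")  # 3 of 36
```

Processing 3 regions instead of 36 is where the energy saving would come from: downstream computation runs only on the selected data.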
Cutting back on the quantity of data these models require reduces the energy deep learning machines use and makes them less polluting. At the same time, researchers are seeking to create leaner models through an automated process called neural architecture search. Finally, in a third method, dubbed the lottery ticket approach, researchers train only a small subnetwork within a larger network, one that can handle recognition tasks more rapidly with far fewer parameters and thus less power.
Scientists are also working on developing more energy efficient hardware, notably optical computers that store and transmit data using photons instead of electrons.
In the hope of reaping further benefits in terms of speed and energy savings, scientists are attempting to reproduce the frugality of the brain. A robot powered by an AI model uses 500,000 times more energy than a human does to think and solve problems. In a bid to radically narrow this gap, projects are under way to replace the binary on-off transistors in computers with analog devices that mimic the way synapses in the brain strengthen and weaken during learning and forgetting.
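The strengthening-and-weakening behaviour those analog devices aim to reproduce can be illustrated in software with a simple Hebbian-style update rule. The learning and decay rates below are arbitrary assumptions, not a model of any real device.

```python
# A toy synapse: its "conductance" strengthens when the two neurons it
# connects fire together, and slowly decays (forgets) otherwise — the
# analog behaviour the new hardware devices aim to reproduce physically.
def update(weight, pre_fired, post_fired, lr=0.1, decay=0.02):
    if pre_fired and post_fired:
        weight += lr * (1.0 - weight)   # strengthen, saturating at 1
    else:
        weight -= decay * weight        # weaken through disuse
    return weight

w = 0.5
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]:
    w = update(w, pre, post)
print(round(w, 3))  # 0.614 — stronger than the 0.5 it started at
```

Because the device itself stores and updates the weight, no energy is spent shuttling data between separate memory and processor, which is where much of a conventional computer's power goes.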
(Main and featured image: monsitj/Istock.com)