Lola Farias

Artificial Intelligence and Sustainability

The United Nations' Intergovernmental Panel on Climate Change (IPCC) has already warned us about the urgency of developing solutions and damage-control measures for global warming, as our planet is currently heading toward climate collapse.

We frequently read about the numerous risks that accelerated technological development brings to humanity. With Artificial Intelligence (AI), the concerns are no different.

One of the earliest works on AI was published in the mid-20th century by Alan Turing, who envisioned machines that could learn from experience and solve problems through algorithms.

At that time, the hardware needed to run such complex algorithms did not exist, but it was only a matter of time before humans made it a reality. Roughly fifty years later, IBM's Deep Blue could evaluate 200 million chess positions per second and look around 14 moves ahead, running on high-performance POWER2-based hardware.

Today, in the 21st century, we live with constant concern, already directly affected by global warming. The ethical discussions around AI cover several topics: algorithmic racism, social inequality, privacy, security and, most worryingly of all, global warming.

Large-scale data processing requires a lot of energy. Researchers at the University of Massachusetts Amherst published a study estimating that training and developing a single large AI model can emit 283,950 kg of carbon dioxide into the atmosphere. To put this in perspective, that is roughly five times the lifetime carbon emissions of an average car, including its manufacturing. And it does not stop there: deploying an AI model increases this cost even further.
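To make the scale of such numbers a bit more concrete, a rough back-of-the-envelope estimate multiplies the energy a training run draws by the carbon intensity of the power grid feeding it. The sketch below does exactly that; every input value (GPU count, power draw, training time, data-center overhead, grid intensity) is an illustrative assumption of mine, not a figure from the UMass Amherst study.

```python
# Rough back-of-the-envelope CO2 estimate for one training run.
# All inputs are illustrative assumptions, not figures from the UMass Amherst study.

NUM_GPUS = 8                 # assumed number of accelerators
AVG_POWER_W = 300            # assumed average draw per GPU, in watts
TRAINING_HOURS = 24 * 14     # assumed two weeks of continuous training
PUE = 1.5                    # assumed data-center power usage effectiveness (overhead)
GRID_KG_CO2_PER_KWH = 0.4    # assumed grid carbon intensity, kg CO2e per kWh

energy_kwh = NUM_GPUS * AVG_POWER_W * TRAINING_HOURS / 1000 * PUE
co2_kg = energy_kwh * GRID_KG_CO2_PER_KWH

print(f"Estimated energy: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {co2_kg:,.0f} kg CO2e")
```

Under these toy assumptions, a single run already lands in the hundreds of kilograms of CO2e; hyperparameter sweeps and architecture searches, like the one behind the figure above, multiply that by the number of runs.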

In the media, these figures on the climate impact of new technologies have sparked panic. The use and development of AI has become controversial, treated as if it were the great nemesis of planet Earth and all the life inhabiting it. There are already anti-AI movements, all concerned with our future. And rightly so: as mentioned earlier, we are facing a critical juncture of environmental and technological challenges. Economically, we are living through a huge, perhaps silent, crisis. There are already debates about declining birth rates, with young people giving up on having children out of fear for the future. Even worse, they are abandoning their dreams and leaving hope behind.

Considering all these issues, does it make sense, ethically, to use AI for the well-being and evolution of humanity? Do the impressive technological feats of this new era justify the chaos they are causing? Will the benefits outweigh the negative consequences once ethics is taken into account?

To discuss and reflect on AI ethics, we can use the example of studies by the great philosopher Hans Jonas. A Jewish man born in Germany in 1903, Jonas fled his country during Hitler's rise to power. He served in the British Army and witnessed great tragedies, which led him to reflect on humanity, technological progress, and ethics. He became an important figure in ethical studies, particularly regarding the responsibility of current generations towards future ones. Jonas warned about technological advancements and unrestrained scientific development, which, in his view, no longer aimed to improve human quality of life and societal well-being but rather to uphold scientific progress.

In his work The Imperative of Responsibility, Jonas expresses concern for future generations and the environment. For him, it makes no sense to think about development and progress without prioritizing human well-being. Furthermore, he emphasizes the importance of considering how today's decisions will impact those who will live tomorrow. Echoing Kant's ethical philosophy, he asks: how can we make decisions that benefit us today without considering their long-term effects on life and the planet? No decision should be made today that endangers humanity or the Earth.

Here arises a significant dilemma for technology professionals. We, who aim to reach new heights in human history and use technology to our advantage, also have the hope and desire to see better days, reverse catastrophic situations, and help people cope with debilitating diseases. Are we on the right path, or are we blinded?

Jonas describes science and technology as non-sentient entities, meaning they don't feel, experience pleasure, or have emotions. These entities cannot grasp complex concepts like human life and the importance of the other. They don't have the same understanding of community that we, as human beings, do. Yet we allow these entities to define the present and the future in which we live. They are not people or living beings, but companies, tools, formulas and machines, and our efforts may be blindly focused on their development.

However, these are not simple issues to resolve. We can’t just pull the plug and turn off these machines. We also live in a system that will continue to depend on these tools. We must analyze how to deal with this reality, confront the consequences of our decisions, and make enough noise to be heard.

Throughout human history, knowledge, albeit in different forms, has been treated as taboo. The moment we vilify something, it becomes obscure, difficult to access, and marginalized. The problem is that when we cease to understand something fully, we leave it for others to wield and use as they please. At the same time, nothing is more beautiful than humanity's ability to break barriers and use knowledge and reason to its advantage. If we are dealing with something of great importance that will radically shape our future (for better or worse), how can we allow the decisions to be made by others, often people incapable of thinking universally about their actions?

By making our relationship with these machines transparent, there is nothing to fear, for fear only exists in darkness, in ignorance, in not knowing what will happen.

We need to shed light on the ethical dilemma surrounding AI. People must participate in discussions and lead the paths it will forge in human history. How will we turn the tide and transform this immense concern into a solution for our problems, ensuring the quality of life for future beings on this planet? We already have some answers.

The Massachusetts Institute of Technology (MIT) is already prioritizing research on reducing AI's carbon footprint. Just as Alan Turing lacked hardware powerful enough for his AI research (imagine how the world might have changed if he had it!), and just as IBM pushed that hardware forward by the end of the 20th century, researchers and professionals are now studying how to make models and processors efficient enough to run algorithms on far less energy. One MIT system is estimated to cut the associated emissions to roughly 1/1,300 of what today's approaches require. This shows that scientists are already concerned about the pollution caused by AI and are prioritizing more sustainable ways to use it.
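To get a feel for what a 1/1,300 reduction would mean in practice, we can place it next to the training-emissions estimate quoted earlier. The two figures come from different studies and setups, so the snippet below is only an order-of-magnitude illustration, not a like-for-like comparison.

```python
# Order-of-magnitude illustration only: the baseline and the reduction factor
# come from different studies, so this is not a direct comparison.
baseline_kg_co2 = 283_950    # UMass Amherst training-emissions estimate cited earlier
reduction_factor = 1_300     # efficiency factor reported for the MIT approach

print(f"~{baseline_kg_co2 / reduction_factor:.0f} kg CO2")  # prints ~218 kg CO2
```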

Additionally, Columbia University students have created a project that uses AI to clean plastics and their residues from the oceans, which have a significant impact on our quality of life. Although considerable effort already goes into this task, some debris is too small to be filtered or removed by conventional means. The technology developed by these students uses computer vision to analyze colors, identify this trash, and help sweep it from the water. The project is called “Precision Plastic Waste Cleanup and Monitoring: AI-Enhanced Solution for Sustainable Waterways and Ocean Health.”
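The article does not go into the students' implementation, but the kind of color-based detection it describes can be sketched in a few lines of OpenCV: convert each frame to HSV, threshold a color range, and keep the connected regions large enough to be worth targeting. The HSV range, the minimum area, and the file names below are placeholder assumptions for illustration, not values from the Columbia project.

```python
# Minimal sketch of color-based debris detection with OpenCV (OpenCV 4).
# Not the Columbia project's actual pipeline; the HSV range, minimum area,
# and file names are placeholder assumptions for illustration only.
import cv2
import numpy as np

def detect_debris(frame_bgr, lower_hsv=(90, 60, 60), upper_hsv=(130, 255, 255), min_area=200):
    """Return bounding boxes of regions whose color falls in the given HSV range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv, dtype=np.uint8), np.array(upper_hsv, dtype=np.uint8))
    # Remove speckle noise before looking for connected regions.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

if __name__ == "__main__":
    frame = cv2.imread("water_sample.jpg")  # hypothetical input image
    for (x, y, w, h) in detect_debris(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imwrite("detections.jpg", frame)
```

A real system would combine color cues like these with a trained detector and would have to cope with lighting, reflections, and turbidity; the sketch only shows the basic idea of isolating debris by color.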

There are thousands of other studies focused on ensuring a good quality of life for future generations on our planet, such as using data analysis to monitor pollution and deforestation, detecting and diagnosing diseases such as cancer, and restoring quality of life to humans and animals with disabilities.

Despite its promise and the hope it brings, this technology also has catastrophic potential if misused. This is why we must be vigilant, especially as technology professionals (though I emphasize that everyone has the right and duty to be knowledgeable and clear about the subject). Let me highlight a few crucial points:

  • General policies and regulations for the legal use of AI;
  • Systems aligned with life and human values;
  • Minimizing negative impacts when they cannot be avoided;
  • Advisory and monitoring committees for AI use, involving all areas of knowledge and all countries;
  • Technology education and awareness of its effect on our lives;
  • Security and privacy in data usage and sharing;
  • Diversity in the metrics and data used;
  • Studies and research focused on the long-term effects of these innovations;
  • Special protection for emerging countries, which are likely to be the most affected by technology.

All this demonstrates how responsible technology and innovation professionals need to be with their actions and decisions. This brings us back to ethics: every human being has a duty to act not for their own well-being but for universal well-being. Only then can we count on a future for humanity.

Humans did not evolve as solitary animals but as social beings — otherwise, we would not be here today. The concept of community was fundamental to the evolution of the human race and our moral compass. Empathy and cooperation are embedded in our genes. Ignoring this fact will doom us to extinction, as we would abandon the very concept that brought us this far.

References:

MIT News. Artificial intelligence's growing carbon footprint. Available at: https://news.mit.edu/2020/artificial-intelligence-ai-carbon-footprint-0423

Scientific American. With limited time left, new IPCC report urges climate adaptation. Available at: https://www.scientificamerican.com/article/with-limited-amount-of-time-left-new-ipcc-report-urges-climate-adaptation/

Harvard Business Review. How companies can mitigate AI's growing environmental footprint. Available at: https://hbr.org/2024/07/how-companies-can-mitigate-ais-growing-environmental-footprint

MIT News. Shrinking deep learning's carbon footprint. Available at: https://news.mit.edu/2020/shrinking-deep-learning-carbon-footprint-0807

UNESCO. Recommendation on the Ethics of Artificial Intelligence. Adopted by all 193 Member States in November 2021 as UNESCO's first global standard on AI ethics. Available at: https://www.unesco.org/en/legal-standards/recommendation-ethics-artificial-intelligence
