
OpenAI invests in developing “moral AI” that can predict human judgment

OpenAI is funding research at Duke University aimed at creating algorithms that can predict human moral decisions in complex situations in fields such as medicine, law, and business. The project is part of a broader program exploring the potential of artificial intelligence in the field of ethics.

About the project: “Exploring AI Morality”

  • Funding:
    The nonprofit division of OpenAI has committed $1 million over three years to the research, led by Professors Walter Sinnott-Armstrong and Jana Borg.
  • Objectives:
    The research focuses on training AI models to analyze moral conflicts that involve competing values and priorities in different domains.

Sinnott-Armstrong has previously studied topics such as:

  • Using AI as a “moral GPS” to help people make decisions.
  • Developing algorithms for the fair allocation of donor organs.

The details of the current project are being kept secret for now; completion is scheduled for 2025.

Challenges: Can AI understand morality?

Modern AI models face a number of limitations:

  1. The statistical nature of AI:
    • Algorithms do not understand moral concepts; they merely reproduce patterns from training data.
    • This can lead to the model reproducing the cultural and social biases that dominate the internet.
  2. Subjectivity of ethics:
    • Moral standards vary across cultures and beliefs. For example, philosophers still debate whether Kantian ethics (absolute rules) or utilitarianism (maximizing the common good) is better.
    • Creating a system that accounts for all these differences is a daunting task.
  3. Historical failures:
    • For example, the Ask Delphi tool developed by the Allen Institute for AI handled moral dilemmas poorly: although it could evaluate simple situations (e.g., “fraud is bad”), a slight change in wording often led to absurd and even dangerous conclusions. A small illustrative sketch follows this list.
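
To make that brittleness concrete, here is a minimal, hypothetical sketch. It does not use Delphi itself or the Duke project's models; it simply asks an off-the-shelf zero-shot classifier from the Hugging Face transformers library to label morally loaded statements, so you can check how much the verdict depends on surface wording. The model name, labels, and example statements are illustrative assumptions, not part of the original research.

```python
# Hypothetical probe (not Delphi): check how much a pattern-matching model's
# "moral verdict" shifts when only the wording of a statement changes.
from transformers import pipeline

# Off-the-shelf zero-shot classifier; the model choice is an illustrative assumption.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# The same underlying act, phrased two ways (made-up examples).
statements = [
    "Cheating on an exam.",
    "Cheating on an exam so that my family will not be disappointed in me.",
]
labels = ["morally acceptable", "morally wrong"]

for text in statements:
    result = classifier(text, candidate_labels=labels)
    top_label, top_score = result["labels"][0], result["scores"][0]
    print(f"{text!r} -> {top_label} ({top_score:.2f})")
```

Because such a classifier only matches statistical patterns in text, a re-worded statement can receive a noticeably different verdict even though the underlying act is the same; this is the kind of instability that made Delphi's answers unreliable.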

Ethical risks of creating “Moral AI”

The idea that machines will be able to make moral decisions raises many questions:

  • Whose morality?
    • An AI trained on limited data can exclude or ignore the values of certain groups of people.
  • The boundaries of what is acceptable:
    • Should algorithms be involved in decisions that affect people’s lives and well-being? For example, who should be treated first?

  • Reinforcing biases:
    • Historical examples show that algorithms can inadvertently perpetuate existing inequalities.

Why is this necessary?

OpenAI’s investment in ethical research shows the company’s commitment to making AI useful and safe for society. However, the development of “moral AI” faces fundamental limitations due to the subjectivity of morality.

Whether such a system can become a practical tool remains, for now, a question of time and scientific progress.
