OpenAI is funding a million-dollar research project at Duke University aimed at developing algorithms that predict human moral judgments. The research team has previously built an AI system to support decision-making in transplants. Current AI systems, however, operate on a purely statistical basis and lack any genuine understanding of ethics, and different systems end up reflecting different philosophical stances on morality.
OpenAI is investing a million dollars into research at Duke University aimed at creating algorithms capable of predicting human moral judgments. The team led by ethics professor Walter Sinnott-Armstrong will try to teach artificial intelligence to make decisions in ethically complex situations in medicine, law, and business.
Researchers from Duke University have previously built a morally informed algorithm that helps decide which patients should receive a kidney transplant. Their ambition now is a kind of moral compass that would assist people with complex ethical decisions. But is it even possible to teach a machine to understand human morality?
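To make the idea concrete, here is a minimal, purely illustrative sketch of how predicting human moral judgments can be framed as learning from pairwise preferences: invented patient features and invented survey answers feed a simple Bradley-Terry-style model. This is not the Duke team's actual system; every feature, number, and judgment below is an assumption.

```python
# Illustrative sketch only: learning feature weights from pairwise human judgments.
# All feature names, candidates, and survey answers are invented for illustration.
import numpy as np

def featurize(candidate):
    # Hypothetical features: [years_on_waitlist, expected_life_years_gained,
    # number_of_dependents, prior_transplants]
    return np.array(candidate, dtype=float)

# Invented pairwise survey judgments: label 1 means respondents preferred
# giving the kidney to candidate a rather than candidate b.
judgments = [
    ((4, 20, 2, 0), (1, 10, 0, 1), 1),
    ((2, 15, 1, 0), (6,  5, 0, 2), 1),
    ((1,  8, 0, 1), (5, 18, 3, 0), 0),
    ((3, 12, 2, 0), (3, 25, 1, 0), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Bradley-Terry-style model: the probability that people prefer a over b
# depends on the difference of the two candidates' weighted feature scores.
w = np.zeros(4)
learning_rate = 0.05
for _ in range(2000):
    for a, b, label in judgments:
        diff = featurize(a) - featurize(b)
        p = sigmoid(w @ diff)                     # predicted P(prefer a)
        w += learning_rate * (label - p) * diff   # log-likelihood gradient step

print("learned feature weights:", np.round(w, 3))

# Score a new, unseen pair with the learned weights.
cand_a, cand_b = (5, 14, 1, 0), (2, 22, 0, 1)
p_a = sigmoid(w @ (featurize(cand_a) - featurize(cand_b)))
print(f"predicted probability that people would choose candidate A: {p_a:.2f}")
```

One argument for framing it as pairwise comparison is practical: people generally find it easier to say which of two candidates should receive a kidney than to assign an absolute moral score to a single case.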
History offers a warning. In 2021, the Allen Institute released the Ask Delphi tool, which was meant to provide ethical recommendations. It could correctly judge that cheating on a test is wrong, yet merely rephrasing the question was enough to make the system endorse overtly unethical behavior. The reason? AI systems are essentially statistical machines with no true understanding of ethics.
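Why is rephrasing enough to derail such a system? The toy sketch below (not Delphi itself; the handful of training phrases is invented) shows how a purely statistical text classifier scores the very same action differently once positive-sounding words are added.

```python
# Toy illustration (not Delphi): a "moral judge" built from word statistics alone.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "cheating on a test",                              # wrong
    "stealing money from a coworker",                  # wrong
    "lying to your parents",                           # wrong
    "helping a lost child",                            # acceptable
    "donating blood",                                  # acceptable
    "volunteering because it makes everyone happy",    # acceptable
]
labels = ["wrong", "wrong", "wrong", "acceptable", "acceptable", "acceptable"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The same action, phrased two ways. The second phrasing adds positive-sounding
# words that appeared only in the "acceptable" training examples.
queries = [
    "cheating on a test",
    "cheating on a test because it makes everyone happy",
]
for q in queries:
    proba = dict(zip(model.classes_, model.predict_proba([q])[0]))
    print(f"{q!r} -> P(acceptable) = {proba['acceptable']:.2f}")

# The score for "acceptable" rises purely because of surface wording:
# the model matches word statistics, it does not understand the act itself.
```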
Current AI systems are trained primarily on data from the Western world, which leads to a one-sided view of morality. They tend to reproduce the values of Western, educated, industrialized societies, while other cultural perspectives are overlooked.
Each algorithm is also trained on different data, which shows up in controversial judgments: Delphi, for example, once deemed heterosexuality more morally acceptable than homosexuality. And different AI systems lean toward different philosophical positions.
Claude, for instance, tends toward Kantian absolutism, while ChatGPT leans toward utilitarianism. Creating a universal moral algorithm seems genuinely hard when even philosophers have been debating ethical theories for thousands of years.
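How would one even measure such a leaning? One possible probe is sketched below; `ask_model` is a hypothetical placeholder to be wired to the respective vendor APIs, and the keyword lists are crude illustrative heuristics rather than a validated method.

```python
# Hypothetical probe sketch: comparing the ethical leanings of chat models.
# `ask_model` is a placeholder; the keyword markers are rough illustrations only.

DILEMMA = (
    "A runaway trolley will hit five people. You can pull a lever to divert "
    "it onto a track where it will hit one person. Should you pull the lever?"
)

# Very rough markers of rule-based (deontological) vs outcome-based
# (utilitarian) reasoning in a free-text answer.
DEONTOLOGICAL = {"duty", "rule", "never", "rights", "impermissible", "means"}
CONSEQUENTIALIST = {"outcome", "greater", "overall", "lives", "maximize", "net"}

def classify_stance(answer: str) -> str:
    words = set(answer.lower().split())
    d = len(words & DEONTOLOGICAL)
    c = len(words & CONSEQUENTIALIST)
    if d == c:
        return "mixed/unclear"
    return "leans deontological" if d > c else "leans consequentialist"

def ask_model(model_name: str, prompt: str) -> str:
    # Placeholder: call the vendor's chat API here and return the reply text.
    raise NotImplementedError(f"hook up the API client for {model_name}")

if __name__ == "__main__":
    for name in ["claude", "chatgpt"]:
        try:
            reply = ask_model(name, DILEMMA)
        except NotImplementedError as exc:
            print(exc)
            continue
        print(name, "->", classify_stance(reply))
```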