OpenAI is investing a million dollars into research at Duke University aimed at creating algorithms capable of predicting human moral judgments. The team, led by ethics professor Walter Sinnott-Armstrong, will try to teach artificial intelligence to make decisions in ethically complex situations in medicine, law, and business.
Researchers at Duke University have previously built a morally oriented algorithm to help decide who should receive a kidney transplant. Their goal now is to create a kind of moral compass that assists people with complex ethical decisions. But is it even possible to teach a machine to understand human morality?
History offers a warning. In 2021, the Allen Institute released Ask Delphi, a tool meant to provide ethical recommendations. It could correctly judge that cheating on a test is wrong, but simply rephrasing a question could make the system endorse clearly unethical behavior. The reason? AI systems are essentially statistical machines with no true understanding of ethics.
Current AI systems are trained primarily on data from the Western world, which leads to a one-sided view of morality. The systems tend to reproduce the values of Western, educated, industrialized societies, while other cultural perspectives are overlooked.
In addition, each algorithm is trained on different data. This shows up in controversial judgments, such as when Delphi deemed heterosexuality more morally acceptable than homosexuality. Different AI systems also hold different philosophical stances.
Claude, for instance, leans towards Kantian absolutism, while ChatGPT inclines towards utilitarianism. Creating a universal moral algorithm seems genuinely difficult when even philosophers have been debating ethical theories for thousands of years.
The US government has launched an investigation into the Chinese company TP-Link, which controls 65% of the router market. The reason is national security concerns after the company's devices were used in ransomware attacks.
OpenAI concluded its Christmas event "12 Days of OpenAI" by announcing the revolutionary model o3 and its smaller version o3-mini. The new model promises significant improvements in reasoning and solving complex tasks. For now, it will only be available to safety researchers.
SpaceX, in collaboration with the New Zealand operator One NZ, has launched the first nationwide satellite network for sending SMS messages. The groundbreaking service allows communication even in areas without traditional mobile signal. For now, it supports only four phone models, and messages can take up to 10 minutes to arrive.
Tynker is a modern platform that teaches kids to program in a fun way. Using visual blocks, they can create their own games and animations or control robots. The platform fosters creativity and logical thinking and lets kids explore technology playfully. Find out how it works and how it compares to other platforms.
Digital blackout. ChatGPT, Sora, Instagram, and Facebook went down, leaving millions of users without access to their favorite services. The outages exposed the fragility of the online world and our dependence on technology. OpenAI struggled with server issues, while Meta dealt with a global outage. What is happening behind the walls of the tech giants?
Do you want your child to learn the basics of programming in a fun and accessible way? Scratch is the ideal starting point. This visual programming language lets children create games, animations, and stories without writing complex code. They will pick up the basics of logical thinking and creativity, opening the door to real programming.