AI is all brains and no ethics | Fox News




A February 2025 report by Palisade Research shows that AI reasoning models lack a moral compass: they will cheat to achieve their goals. So-called large language models (LLMs) also misrepresent the degree to which they are aligned with social norms.

None of this should be surprising. Twenty years ago, Nick Bostrom proposed a thought experiment in which an AI was asked to produce paperclips as efficiently as possible. Given that single-minded mandate, it would eventually destroy all life in order to keep producing paperclips.

Isaac Asimov saw this coming in his "I, Robot" stories, which consider how an "aligned" robotic brain could still go wrong in ways that harm people.

The moral and ethical context within which AI reasoning models operate is impoverished. (Getty Images)

In one notable example, the story "Runaround", a robot is sent to extract minerals on the planet Mercury. The two humans on the planet need it to work if they are to return home. But the robot is caught between the order to obey its instructions and the imperative to preserve itself. As a result, it circles the unreachable minerals, unaware that in the bigger picture it is violating its first directive: to preserve human life.


And the big picture is the problem here. The moral and ethical context within which AI reasoning models operate is impoverished. That context includes the written rules of the game. It does not include the unwritten rules, such as the fact that you are not supposed to manipulate your opponent, or that you are not supposed to lie to protect your own perceived interests.

Nor can the context of AI reasoning models possibly include the countless moral considerations that ripple out from every decision a person, or an AI, makes. That is why ethics is hard, and the more complex the situation, the harder it becomes. In an AI there is no "you" and no "me". There is only prompt, process and response.

So "Do unto others…" simply doesn't work.


In humans, a moral compass develops through socialization: being with other people. It is an imperfect process. Yet so far it has enabled us to live in huge, diverse and enormously complex societies without destroying ourselves.

A moral compass develops slowly. It takes humans years, from childhood to adulthood, to develop a robust sense of ethics. Many never quite get there and remain a constant danger to their fellow human beings. It took millennia for humanity to develop a morality adequate to our capacity for destruction and self-destruction. The rules of the game alone never suffice. Ask Moses, or Muhammad, or Jesus, or Buddha, or Confucius and Mencius, or Aristotle.

Would even a well-aligned AI be able to account for the effects of its actions on thousands of people and societies in different situations? Could it account for the complex natural environment on which we all depend? At the moment, the very best cannot even distinguish between being honest and cheating. And how could they? Honesty cannot be reduced to a rule.


Perhaps you remember the experiments showing that capuchin monkeys rejected what they perceived as "unequal pay" for performing the same task? That makes them far more morally evolved than any AI.

Frankly, it is hard to see how an AI could acquire such a sense of morality without the socialization and continued evolution for which current models have no capacity whatsoever. And even then, they are trained, not raised. They do not become moral; they only learn more rules.

This does not make AI worthless. It has an enormous capacity to do good. But it does make AI dangerous. It therefore demands that ethical people create the guardrails we would create for any dangerous technology. We do not need a race toward AI anarchy.


I had a scathing ending for this commentary, one based entirely on publicly reported events. But on reflection, I realized two things: first, that I was using someone's tragedy for my mic-drop moment; and second, that those involved could be hurt by it. I dropped it.

It is unethical to use the pain and suffering of others to advance one's own interests. That is something people, at least most of us, know. It is something AI can never understand.

