China, Iran find new ways to hijack American AI models to sow discord in American society


Threat actors, likely based in China and Iran, are finding new ways to hijack and use American artificial intelligence (AI) models for malicious purposes, including covert influence operations, according to a new report from OpenAI.

The February report details two disruptions involving threat actors that appear to be based in China. According to the report, these actors used, or at least attempted to use, models built by OpenAI and Meta.

In one example, OpenAI banned a ChatGPT account that generated comments critical of Chinese dissident Cai Xia. The comments were posted on social media by accounts claiming to be people based in India and the U.S., but the posts did not appear to attract substantial online engagement.

The same actor also used the ChatGPT service to generate long-form Spanish-language news articles that "denigrated" the U.S., which were then published by mainstream news outlets in Latin America. The bylines on these stories were attributed to an individual and, in some cases, a Chinese company.



Threat actors around the world, including those based in China and Iran, are finding new ways to use American AI models for malicious purposes. (Bill Hinton/Philip Fong/AFP/Maksim Konstantinov/SOPA Images/LightRocket via Getty Images)

During a recent press briefing that included Fox News Digital, Ben Nimmo, principal investigator on OpenAI's Intelligence and Investigations team, said that on at least one occasion a translation was labeled as sponsored content, suggesting that someone had paid for it.

OpenAI says this is the first instance in which a Chinese actor has successfully planted long-form articles in mainstream media to target Latin American audiences with anti-U.S. narratives.

“Without a view of that use of AI, we would not have been able to make the connection between the tweets and the web articles,” Nimmo said.

He added that threat actors sometimes give OpenAI a glimpse of what they are doing in other parts of the internet because of how they use its models.

“This is a pretty disturbing glimpse into the way one non-democratic actor tried to use democratic or U.S.-based AI for non-democratic purposes, according to the materials they themselves generated,” he continued.



The flag of China flies behind a pair of surveillance cameras outside the Central Government Offices in Hong Kong, China, on Tuesday, July 7, 2020. Hong Kong leader Carrie Lam defended the national security legislation imposed by China the previous week, hours after her government claimed broad new police powers, including warrantless searches, online surveillance and property seizures. (Roy Liu/Bloomberg via Getty Images)

The company also banned a ChatGPT account that generated tweets and articles that were subsequently posted on third-party assets publicly linked to known Iranian influence operations (IOs). IOs are covert campaigns that seek to shape public opinion, often through fake personas and coordinated accounts.

The two operations had previously been reported as separate efforts.

“The discovery of a potential overlap between these operations, albeit small and isolated, raises the question of whether there is collaboration among these Iranian IOs, where one operator may work on behalf of several distinct networks,” the threat report states.

In another example, OpenAI banned a number of ChatGPT accounts that used OpenAI models to translate and generate comments for a romance-baiting scam network, also known as “pig butchering,” on platforms including X, Facebook and Instagram. After OpenAI reported these findings, Meta indicated that the activity appeared to originate from a “newly stood up scam compound in Cambodia.”



The OpenAI ChatGPT logo is seen on a mobile phone in this photo illustration taken on May 30, 2023, in Warsaw, Poland. (Jaap Arriens/NurPhoto via Getty Images)

Last year, OpenAI became the first AI research lab to publish reports on its efforts to prevent abuse by adversaries and other malicious actors, in support of the U.S., allied governments, industry partners and other stakeholders.

OpenAI says it has greatly expanded its investigative capabilities and its understanding of new types of abuse since publishing its first report, and has disrupted a wide range of malicious uses.

The company believes that, among other benefits of its disruption work, AI companies can glean substantial insights into threat actors when that information is shared with upstream providers, such as hosting and software companies, as well as downstream distribution platforms, such as social media companies and open-source researchers.


OpenAI emphasizes that its investigations also benefit greatly from work shared by industry peers.

“We know that threat actors will continue to test our defenses. We are determined to keep identifying, preventing, disrupting and exposing attempts to abuse our models for harmful ends,” OpenAI stated in the report.
