X users treating Grok like a fact-checker spark concerns over misinformation

Some users on Elon Musk’s X are turning to Musk’s AI bot Grok for fact-checking, raising concerns among human fact-checkers that this could fuel misinformation.

Earlier this month, X enabled users to call out xAI’s Grok and ask questions on different things. The move was similar to Perplexity, which has been running an automated account on X to offer a similar experience.

Soon after xAI created Grok’s automated account on X, users started experimenting with asking it questions. Some people in markets including India began asking Grok to fact-check comments and questions that target specific political beliefs.

Fact-checkers are concerned about using Grok, or any other AI assistant of this sort, in this manner because the bots can frame their answers to sound convincing, even when they are not factually accurate. Instances of Grok spreading fake news and misinformation have been seen in the past.

In August last year, five state secretaries urged Musk to implement critical changes to Grok after misleading information generated by the assistant surfaced on social networks ahead of the U.S. election.

Other chatbots, including OpenAI’s ChatGPT and Google’s Gemini, were also seen generating inaccurate information about the election last year. Separately, disinformation researchers found in 2023 that AI chatbots including ChatGPT could easily be used to produce convincing text with misleading narratives.

“AI assistants, like Grok, they’re really good at using natural language and give an answer that sounds like a human being said it. And in that way, the AI products have this claim on naturalness and authentic-sounding responses, even when they are potentially very wrong,” Angie Holan, director of the International Fact-Checking Network (IFCN) at Poynter, told TechCrunch.

Grok was asked by a user on X to fact-check claims made by another user

Unlike AI assistants, human fact-checkers use multiple, credible sources to verify information. They also take full accountability for their findings, with their names and organizations attached to ensure credibility.

Pratik Sinha, co-founder of India’s non-profit fact-checking website Alt News, said that although Grok currently appears to have convincing answers, it is only as good as the data it is supplied with.

“Who’s going to decide what data it gets supplied with, and that is where government interference, etc., will come into the picture,” he noted.

“There is no transparency. Anything which lacks transparency will cause harm because anything that lacks transparency can be molded in any which way.”

Could be misused – to spread misinformation

In one of the responses posted earlier this week, Grok’s account on X acknowledged that it “could be misused – to spread misinformation and violate privacy.”

However, the automated account does not show any disclaimers to users when they get its answers, leaving them open to being misinformed if it has, for instance, hallucinated the answer, which is a potential disadvantage of AI.

Grok’s response on whether it can spread misinformation (translated from Hinglish)

“It may make up information to provide a response,” Anushka Jain, a research associate at Goa-based multidisciplinary research collective Digital Futures Lab, told TechCrunch.

There’s also some question about how much Grok uses posts on X as training data, and what quality control measures it uses to fact-check such posts. Last summer, it pushed out a change that appeared to allow Grok to consume X user data by default.

The other concerning area of AI assistants like Grok being accessible through social media platforms is their delivery of information in public, unlike ChatGPT or other chatbots that are used privately.

Even if a user is well aware that the information it gets from the assistant could be misleading or not completely correct, others on the platform might still believe it.

This could cause serious social harms. Instances of that were seen earlier in India when misinformation circulated over WhatsApp led to mob lynchings. However, those severe incidents occurred before the arrival of GenAI, which has made synthetic content generation even easier and made it appear more realistic.

“If you see a lot of these Grok answers, you’re going to say, hey, well, most of them are right, and that may be so, but there are going to be some that are wrong. And how many? Research studies have shown that AI models are subject to 20% error rates… and when it goes wrong, it can go really wrong with real-world consequences,” IFCN’s Holan told TechCrunch.

AI vs. Real Fact-Checkers

While AI companies including xAI are refining their AI models to make them communicate more like humans, they still are not, and cannot, replace humans.

For the last few months, tech companies have been exploring ways to reduce reliance on human fact-checkers. Platforms including X and Meta have started embracing the new concept of crowdsourced fact-checking through so-called Community Notes.

Naturally, such changes also cause concern among fact-checkers.

Sinha of Alt News optimistically believes that people will learn to differentiate between machines and human fact-checkers and will value the accuracy of humans more.

“We’re going to see the pendulum swing back eventually toward more fact-checking,” IFCN’s Holan said.

However, she noted that in the meantime, fact-checkers will likely have more work to do with AI-generated information spreading swiftly.

“A lot of this issue depends on, do you really care about what is actually true or not? Are you just looking for the veneer of something that sounds and feels true without actually being true? Because that’s what AI assistance will get you,” she said.

X and xAI didn’t respond to our request for comment.