
X will test the ability to create “Community Notes” using AI chatbots


Community Notes is a feature dating back to the Twitter era that was expanded under Elon Musk after the service was renamed X. Users participating in the fact-checking program can leave notes that add context to certain posts; these notes are verified by other members of the community before they become visible. An example might be a note on an AI-generated video that doesn’t disclose that it’s synthetic, or one explaining a misleading post by a politician.


Notes become public only after consensus has been reached among previously divided user groups.

The feature has been successful enough to inspire Meta*, TikTok, and YouTube to launch similar initiatives. Meta*, for example, has abandoned third-party fact-checking entirely in favor of this community-driven model.



It remains to be seen, however, whether the use of AI will be beneficial or detrimental to the process.

AI notes can be generated using Grok – X’s own AI – or other tools connected to the platform via APIs. All AI-generated notes will be treated the same as human-generated comments, including going through a validation process.


The effectiveness of AI in this role is questionable, however, especially given the models’ tendency to “hallucinate” – creating facts that are not based on reality.

A paper published this week by the Community Notes developers recommends that humans and large language models (LLMs) work in tandem: feedback from humans can improve AI note generation through reinforcement learning, while human raters remain the final filter before publication.
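The pipeline the paper describes can be sketched in a few lines: an LLM drafts a note, human raters act as the final gate before publication, and their ratings double as a feedback signal for the drafting model. This is an illustrative sketch only — the function names, rating thresholds, and data shapes below are assumptions, not X’s actual API or the paper’s algorithm.

```python
# Hypothetical sketch of the human-in-the-loop pipeline: LLM drafts,
# human consensus gates publication, ratings feed back as a reward.
from dataclasses import dataclass, field

@dataclass
class Note:
    post_id: str
    text: str
    ratings: list = field(default_factory=list)  # "helpful" / "not helpful"

def draft_note(post_id: str) -> Note:
    # Stand-in for a call to Grok or another API-connected model.
    return Note(post_id, f"Context for post {post_id}: ...")

def is_published(note: Note, min_ratings: int = 3, threshold: float = 0.66) -> bool:
    # Human consensus gate: the note only goes live if enough raters
    # agree it is helpful (a simplified stand-in for X's rating system).
    if len(note.ratings) < min_ratings:
        return False
    helpful = sum(1 for r in note.ratings if r == "helpful")
    return helpful / len(note.ratings) >= threshold

def feedback_signal(note: Note) -> float:
    # Ratings collapse to a scalar reward that could drive
    # reinforcement learning on the note-writing model.
    if not note.ratings:
        return 0.0
    return sum(1 if r == "helpful" else -1 for r in note.ratings) / len(note.ratings)

note = draft_note("12345")
note.ratings = ["helpful", "helpful", "not helpful"]
print(is_published(note))     # True: 2 of 3 raters found it helpful
print(feedback_signal(note))  # positive reward for the drafting model
```

The key design point from the paper survives even in this toy version: the model never publishes directly; humans stay between generation and visibility.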

“The goal is not to create an AI assistant that tells users what to think, but to build an ecosystem that allows people to think critically and better understand the world around them,” the paper says. “LLMs and humans can work together in a productive loop.”

Even with human involvement, however, there remains a risk of over-reliance on AI, especially given the possibility of plugging in third-party models. ChatGPT, for example, has recently faced criticism for being overly accommodating. If an LLM prioritizes “agreeableness” over accuracy, the resulting notes could simply be unreliable.


There’s another problem: the flood of AI-generated comments could overwhelm volunteer reviewers, reducing their motivation to perform thorough reviews.

Users won’t see AI-generated notes right away: X plans limited testing of the feature for a few weeks, after which a decision will be made about a broader launch.

* Meta is recognized as an extremist organization in Russia, and its activities there are banned.

The article “X will test the ability to create ‘Community Notes’ using AI chatbots” was first published on ITZine.ru.
