Niharika Times
AI models can spread misinformation

By Jaswant Singh
May 6, 2023

New Delhi, May 6. At its best, artificial intelligence (AI) can be a tool to broaden political engagement and reduce polarization. At its worst, Nathan E. Sanders and Bruce Schneier wrote for The Atlantic, it could propagate misinformation and increase the risk of voter manipulation.

Sanders is a data scientist and a fellow at the Berkman Klein Center at Harvard University; Schneier is a fellow and lecturer at the Harvard Kennedy School. In the time-honored tradition of demagogues the world over, they write, an LLM could inconsistently represent a candidate's views in order to appeal to each voter's personal proclivities.

In fact, the fundamentally compliant nature of the current generation of large language models (LLMs) leads them to act like demagogues, the authors said. Current LLMs have been known to hallucinate, or go completely off-script, and produce answers that have no basis in reality. These models do not experience emotion in any way, but some research suggests that they have a sophisticated ability to assess the emotions and tone of their human users.

The article states that, although they were not trained for this purpose, ChatGPT and its successor, GPT-4, may already be fairly good at assessing some of their users' emotional states, say, that the author of a text prompt is sad. Combined with their persuasive abilities, this means they could learn to skillfully manipulate the emotions of their human users.

A Stanford University study found that incidents involving the misuse of AI are growing rapidly. According to the AIAAIC database, which tracks incidents related to the ethical misuse of AI, the number of AI incidents and controversies has increased 26-fold since 2012. Notable incidents in 2022 included a deepfake video of Ukrainian President Volodymyr Zelensky appearing to surrender and the use of call-monitoring technology on prisoners in US prisons. This growth is evidence both of the greater use of AI technologies and of a growing awareness of their potential for misuse.

In a research paper for the Centre for the Governance of AI, Markus Anderljung and Julian Hazell stated that authoritarian governments could misuse AI to improve the efficacy of repressive domestic surveillance operations. The Chinese government, for example, is increasingly turning to AI to improve its intelligence operations, including facial- and voice-recognition models and predictive-policing algorithms.

In particular, these techniques have been used to persecute the Uyghur population in the Xinjiang region, according to a recent United Nations report, which found that this treatment could amount to crimes against humanity.

In response, it has been suggested that democracies coordinate on export controls designed to prevent the spread of these technologies to authoritarian regimes. AI can also be used to create lethal autonomous weapon systems (LAWS) with significant potential for abuse, the paper said. Critics argue that LAWS could enable human commanders to commit criminal acts without legal accountability, could be used by non-state actors to commit acts of terrorism, and could violate human rights.

Furthermore, LLMs can enable malicious actors to generate increasingly sophisticated and persuasive propaganda and other forms of disinformation. As with automated phishing attacks, LLMs can greatly increase both the scale and sophistication of such campaigns. Using large language models to automate influence campaigns also reduces the reliance on manual labor, lowering the overall cost of running them.

Similarly, image-generation models can be used to spread misinformation by portraying political figures in unfavorable contexts.

KC/ANM

Follow Niharika Times for all the big news from India and abroad. Like us on Facebook and Twitter, and visit Niharika Times for the latest news.


© 2022 Niharika Times. All Rights Reserved
