Social bot

Software agent that communicates on social media

A social bot, also described as a social AI or social algorithm, is a software agent that communicates autonomously on social media. The messages it distributes (e.g. tweets) can be simple, and bots can operate in groups and in various hybrid configurations that combine algorithmic control with partial human oversight. Social bots can also use artificial intelligence and machine learning to express messages in more natural, human-like dialogue.

Uses

Social bots are used for a large number of purposes on a variety of social media platforms, including Twitter, Instagram, Facebook, and YouTube. One common use of social bots is to inflate a social media user’s apparent popularity, usually by artificially manipulating their engagement metrics with large volumes of fake likes, reposts, or replies. Social bots can similarly be used to artificially inflate a user’s follower count with fake followers, creating a false perception of a larger and more influential online following than is the case.[1] The use of social bots to create the impression of a large social media influence allows individuals, brands, and organizations to attract a higher number of human followers and boost their online presence. Fake engagement can be bought and sold in the black market of social media engagement.[2]

Corporations typically use automated customer service agents on social media to affordably manage high volumes of support requests.[3] Social bots are used to send automated responses to users' questions, sometimes prompting the user to private message the support account with additional information. The increased use of automated support bots and virtual assistants has led some companies to lay off customer-service staff.[4]
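The automated responses described above are often simple keyword-triggered templates. The following is a minimal sketch of such a responder; the trigger phrases and reply templates are illustrative assumptions, not any platform's real API.

```python
# Keyword-triggered support auto-responder (illustrative sketch).
RULES = [
    ({"refund", "charge"}, "Sorry to hear that! Please DM us your order number."),
    ({"password", "login"}, "Please DM us the email on your account so we can help."),
]
FALLBACK = "Thanks for reaching out! Please DM us with more details."

def auto_reply(message: str) -> str:
    """Return the first canned reply whose trigger words appear in the message."""
    words = set(message.lower().split())
    for keywords, reply in RULES:
        if words & keywords:  # any trigger word present
            return reply
    return FALLBACK
```

A real deployment would route unmatched messages to a human agent rather than always falling back to a template.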

Social bots are also often used to influence public opinion. Autonomous bot accounts can flood social media with large numbers of posts expressing support for certain products, companies, or political campaigns, creating the impression of organic grassroots support.[5] This can create a false perception of the number of people who support a certain position, which may also have effects on the direction of stock prices or on elections.[6][7] Messages with similar content can also influence fads or trends.[8]

Many social bots are also used to amplify phishing attacks. These malicious bots are used to trick a social media user into giving up their passwords or other personal data. This is usually accomplished by posting links that claim to lead to news articles but in fact direct users to malicious websites containing malware.[9] Scammers often use URL shortening services such as TinyURL and bit.ly to disguise a link's domain address, increasing the likelihood of a user clicking the malicious link.[10] The presence of fake social media followers and high levels of engagement helps convince the victim that the scammer is in fact a trusted user.
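Because a shortened link hides its real destination, a simple countermeasure is to flag posts whose links use known shortener domains. The sketch below uses Python's standard URL parser; the shortener list is a small assumed sample, not an exhaustive one.

```python
from urllib.parse import urlparse

# Illustrative sample of common URL-shortener domains (not exhaustive).
SHORTENER_DOMAINS = {"tinyurl.com", "bit.ly", "t.co", "goo.gl"}

def is_shortened(url: str) -> bool:
    """Flag a link whose host is a known URL-shortening service."""
    host = urlparse(url).netloc.lower()
    return host.removeprefix("www.") in SHORTENER_DOMAINS
```

Flagging is only a heuristic: shortened links are also widely used legitimately, so such a check is usually one signal among many.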

Social bots can be a tool for computational propaganda.[11] Bots can also be used for algorithmic curation, algorithmic radicalization, and/or influence-for-hire, a term that refers to the selling of an account on social media platforms.

History

Bots have coexisted with computer technology since the earliest days of computing. Social bots have their roots in the 1950s with Alan Turing, whose work focused on machine intelligence with the development of the Turing Test. The following decades saw further progress towards the goal of creating programs capable of mimicking human behavior, notably with Joseph Weizenbaum's creation of ELIZA.[12] Considered one of the first chatbots, ELIZA could simulate natural conversations with human users through pattern matching. Its most famous script was DOCTOR, a simulation of a Rogerian psychotherapist that was programmed to chat with patients and respond to questions.[13]

With the growth of social media platforms in the early 2000s, these bots could be used to interact with much larger user groups in an inconspicuous manner. Early instances of autonomous agents on social media could be found on sites like MySpace, with social bots being used by marketing firms to inflate activity on a user’s page in an effort to make them appear more popular.[14]

Social bots have been observed on a large variety of social media websites, with Twitter being one of the most widely observed examples. The creation of Twitter bots is generally against the site’s terms of service when used to post spam or to automatically like and follow other users, but some degree of automation using Twitter’s API may be permitted if used for “entertainment, informational, or novelty purposes.”[15] Other platforms such as Reddit and Discord also allow for the use of social bots as long as they are not used to violate policies regarding harmful content and abusive behavior. Social media platforms have developed their own automated tools to filter out messages that come from bots, although they cannot detect all bot messages.[16]

Regulation

[Image: Twitter bots posting similar messages during the 2016 United States elections]

Due to the difficulty of recognizing social bots and separating them from "eligible" automation via social media APIs, it is unclear how legal regulation can be enforced. Social bots are expected to play a role in shaping public opinion by autonomously acting as influencers. Some social bots have been used to rapidly spread misinformation, manipulate stock markets, influence opinion on companies and brands, promote political campaigns, and engage in malicious phishing campaigns.[17]

In the United States, some states have started to implement legislation in an attempt to regulate the use of social bots. In 2019, California passed the Bolstering Online Transparency Act (the B.O.T. Act) to make it unlawful to use automated software to appear indistinguishable from humans for the purpose of influencing a social media user’s purchasing and voting decisions.[18] Other states such as Utah and Colorado have passed similar bills to restrict the use of social bots.[19]

The Artificial Intelligence Act (AI Act) in the European Union is the first comprehensive law governing the use of Artificial Intelligence.[20] The law requires transparency in AI to prevent users from being tricked into believing they are communicating with another human. AI-generated content on social media must be clearly marked as such, preventing social bots from using AI in a manner that mimics human behavior.[21]

Detection

The first generation of bots could sometimes be distinguished from real users by their often superhuman posting rates. Later developments have succeeded in imprinting more "human" activity and behavioral patterns on the agents. With enough bots, it may even be possible to manufacture artificial social proof. To reliably identify social bots, a variety of criteria[22] must be applied together using pattern-detection techniques, some of which are:[23]

  • cartoon figures used as profile pictures
  • profile pictures copied from random real users (identity fraud)
  • reposting rate
  • temporal patterns[24]
  • sentiment expression
  • followers-to-friends ratio[25]
  • length of user names
  • variability in (re)posted messages
  • engagement rate (like/followers rate)
  • analysis of the time series of social media posts[26]

Social bots are becoming increasingly difficult to detect and understand. Their human-like, ever-changing behavior, together with the sheer volume of bots across every platform, makes removing them challenging.[27] Social media sites such as Twitter are among the most affected, with CNBC reporting that up to 48 million of the 319 million users (roughly 15%) were bots in 2017.[28]

Botometer[29] (formerly BotOrNot) is a public Web service that checks the activity of a Twitter account and gives it a score based on how likely the account is to be a bot. The system leverages over a thousand features.[30][31] An early method for detecting spam bots was to set up honeypot accounts that post nonsensical content, which may get reposted (retweeted) by the bots.[32] However, bots evolve quickly, and detection methods have to be updated constantly, because otherwise they may become useless after a few years.[33] One method uses Benford's Law, which predicts the frequency distribution of significant leading digits, to detect malicious bots online; this approach was first introduced at the University of Pretoria in 2020.[34] Another approach is artificial-intelligence-driven detection, with sub-categories including active learning loop flow, feature engineering, unsupervised learning, supervised learning, and correlation discovery.[27]
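The Benford's-law idea mentioned above can be sketched concretely: compare the observed leading digits of some per-account count (for instance, follower totals across a network) with Benford's expected frequencies, and flag a large divergence. The divergence statistic below is an illustrative choice, not the published method.

```python
import math
from collections import Counter

# Benford's expected frequency for each leading digit d: log10(1 + 1/d).
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(n: int) -> int:
    """First significant digit of a nonzero integer."""
    return int(str(abs(n))[0])

def benford_divergence(counts: list[int]) -> float:
    """Sum of squared deviations between observed and Benford frequencies."""
    digits = Counter(leading_digit(n) for n in counts if n != 0)
    total = sum(digits.values())
    return sum((digits.get(d, 0) / total - BENFORD[d]) ** 2
               for d in range(1, 10))
```

Natural, organically grown counts tend to follow Benford's law, while artificially generated ones often do not, which is what makes the divergence a useful signal.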

Some bot operations work together in a synchronized way. For example, ISIS used numerous orchestrated Twitter accounts to amplify its content, pushing items onto the Hot List[35] and thereby exposing the selected news to a larger audience.[36] Such synchronized bot accounts can be used as a tool of propaganda as well as of stock market manipulation.[37]
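Synchronized amplification of this kind leaves a detectable footprint: many distinct accounts posting the same text within a short time window. The sketch below groups posts that way; the window size and account threshold are assumptions for the example.

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # assumed coordination window
MIN_ACCOUNTS = 3     # assumed threshold for "suspiciously many" accounts

def coordinated_texts(posts):
    """posts: iterable of (account, timestamp_seconds, text).
    Return the texts posted by >= MIN_ACCOUNTS distinct accounts
    within the same time window."""
    buckets = defaultdict(set)  # (text, window index) -> accounts
    for account, ts, text in posts:
        buckets[(text, ts // WINDOW_SECONDS)].add(account)
    return {text for (text, _), accounts in buckets.items()
            if len(accounts) >= MIN_ACCOUNTS}
```

Real coordination detectors also handle near-duplicate text and posts straddling window boundaries, which this fixed-bucket sketch ignores.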

Platforms

Instagram

Instagram reached one billion active monthly users in June 2018,[38] but it was estimated that up to 10% of those accounts were being run by automated social bots. While malicious propaganda-posting bots are still popular, many individual users use engagement bots to propel themselves to a false virality, making them seem more popular on the app. These engagement bots can like, watch, follow, and comment on users' posts.[39]

Around the time the platform reached one billion monthly users, Facebook (the parent company of Instagram and WhatsApp) planned to hire 10,000 people to provide additional security for its platforms, including combatting the rising number of bots and malicious posts.[40] Due to increased security and the detection methods used by Instagram, some botting companies have reported problems with their services: Instagram imposes interaction limit thresholds based on past and current app usage, and many payment and email platforms deny the companies access to their services, preventing potential clients from purchasing them.[41]

Twitter

Twitter's bot problem stems from how easy bots are to create and maintain. The simplicity of account creation and the many APIs that allow complete automation of accounts have led large numbers of organizations and individuals to use these tools for their own ends.[28][42] CNBC reported that about 15% of the 319 million Twitter users in 2017, roughly 48 million accounts, were bots.[28] As of July 7, 2022, Twitter claims to remove 1 million spam bots from its platform every day.[43]

Some bots are used to automate scheduled tweets, download videos, set reminders and send warnings of natural disasters.[44] Those are examples of bot accounts, but Twitter's API allows for real accounts (individuals or organizations) to use certain levels of bot automation on their accounts and even encourages the use of them to improve user experiences and interactions.[45]
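The benign automation described above, such as posting a scheduled message, can be sketched with Python's standard-library scheduler. `post_tweet` here is a hypothetical stand-in for a real platform API call; it only records the message.

```python
import sched
import time

sent = []  # record of "posted" messages (stands in for the platform)

def post_tweet(text: str) -> None:
    # Hypothetical placeholder for a real API call (e.g. via a platform SDK).
    sent.append(text)

# Schedule a post a fraction of a second from now, then run the scheduler.
scheduler = sched.scheduler(time.time, time.sleep)
scheduler.enter(0.01, 1, post_tweet, ("Daily reminder: back up your data!",))
scheduler.run()  # blocks until all scheduled posts have fired
```

A production bot would use the platform's sanctioned API with authentication and rate-limit handling rather than a bare scheduler loop.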

Meta

In 2025, Meta announced it would be creating an AI product that helps users create AI characters on Instagram and Facebook, allowing these characters to have bios and profile pictures and to generate and share "AI-powered content" on the platforms.[46][47][48] Bot accounts managed by Meta began to be identified by the public around January 1, 2025,[49][50] with social media users noting that they appeared to be unblockable by human accounts and carried blue ticks indicating they had been verified by Meta as trustworthy profiles.[51]

SocialAI

SocialAI, an app launched on September 18, 2024, was created for the sole purpose of chatting with AI bots, without any human interaction.[52] Its creator is Michael Sayman, a former product lead at Google who also worked at Facebook, Roblox, and Twitter.[53] An article on the Ars Technica website linked SocialAI to the Dead Internet Theory.[54]

See also

References
