AI’S Increasing Dominance: Influencing Online Decisions Could be the Next Big Worry

ais-increasing-dominance-influencing-online-decisions-could-be-the-next-big-worry

A recent string of studies demonstrates the escalating power of artificial intelligence, or AI, to imperceptibly guide human decision making online. University of Cambridge researchers state that AI systems can not only map human desires but also manipulate them, eventually guiding choices even before they are consciously perceived by people.

Currently, personalized advertising tailors content based on user behavior. However, advancements in AI suggest a future where AI agents engage in natural conversations with users, infer intentions, and subtly influence motivations for commercial gain. This progression could have significant political and social implications, including the manipulation of democratic processes and personal aspirations.

It also increases problems like fake reviews on the internet. A tool like OpenAI’s ChatGPT allows fast production of fraudulent reviews for different sectors, misleading customers and complicating efforts to tell apart good from bad reviews.

Moreover, the embedding of AI into personal assistants raises the problem of manipulation. By 2025, AI agents will be ubiquitous, being intimately involved assistants in everyday life. Despite their human-like interactions, these AI agents fundamentally serve industrial priorities, subtly guiding consumption, movements, and information access, potentially leading to a deep, yet almost undetectable, form of societal influence.

The potential for AI to create deepfakes further complicates the digital landscape. Instagram, one of the social media applications, plans to introduce generative AI tools which can create deepfakes allowing users to alter their video content extensively. While the tools give creative possibilities, they do raise concerns over misinformation and trust online.

With the rapid advancement in AI technology, these will need to be tackled publicly and regulated so that a wrong intention on human thought cannot be exploited, causing loss of integrity in interacting with others online.