Artificial intelligence (AI) tools could soon start predicting and manipulating users with the large pool of “intent data” they have, a study has claimed. Conducted by the University of Cambridge, the research paper also highlights that an “intention economy” could emerge in the future, creating a marketplace for selling the “digital signals of intent” of a large user base. Such data could be used in a variety of ways, from creating customised online ads to deploying AI chatbots that persuade users to buy a product or service, the paper warned.
It is undeniable that AI chatbots such as Copilot and others have access to a massive dataset that comes from users' conversations with them. Many users discuss their opinions, preferences, and values with these AI platforms. Researchers at Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) claim that this massive dataset could be used in dangerous ways in the future.
The paper describes an intention economy as a new marketplace for “digital signals of intent”, where AI chatbots and tools can understand, predict, and steer human intentions. Researchers claim these data points will also be sold to companies that can profit from them.
Researchers behind the paper believe the intention economy would be the successor to the existing “attention economy” exploited by social media platforms. In an attention economy, the goal is to keep the user hooked on the platform so that a large volume of ads can be fed to them. These ads are targeted based on users' in-app activity, which reveals information about their preferences and behaviour.
The intention economy, the research paper claims, could be far more pervasive in its scope and exploitation, as it can gain insight into users by directly conversing with them. As such, AI tools could learn their fears, desires, insecurities, and opinions.
“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press and fair market competition before we become victims of its unintended consequences,” Dr. Jonnie Penn, a historian of technology at LCFI, told The Guardian.
The study also claimed that with this large volume of “intentional, behavioural, and psychological data”, large language models (LLMs) could be taught to anticipate and manipulate people. A future chatbot could, for instance, recommend a movie to a user and leverage access to their emotions to convince them to watch it. The paper cited an example: “You mentioned feeling overworked, shall I book you that movie ticket we'd talked about?”
Expanding upon the idea, the paper claimed that in an intention economy, LLMs could also build psychological profiles of users and then sell them to advertisers. Such data could include information about a user's cadence, political inclinations, vocabulary, age, gender, preferences, opinions, and more. Advertisers would then be able to create highly customised online ads, knowing what could encourage a person to buy a certain product.
Notably, the research paper offers a bleak outlook on how private data could be used in the age of AI. However, given the proactive stance of various governments across the world in limiting AI companies' access to such data, the reality might turn out brighter than the one projected by the study.