AI Chatbots Are Quietly Creating A Privacy Nightmare

Posted by Bernard Marr, Contributor | 2 hours ago | /ai, /enterprise-tech, /innovation, AI, Enterprise Tech, Innovation, standard, technology | Views: 3


AI chatbots like ChatGPT, Gemini and Grok are increasingly woven into the fabric of everyday life.

Interestingly, recent research shows that the most popular use for them today is therapy, and people often feel safe to discuss issues they wouldn’t feel comfortable talking about with other humans.

From writing job applications to researching legal issues and discussing intimate medical details, one perceived benefit of them is that people believe their conversations will remain private.

And from a business perspective, they have proven themselves to be powerful tools for drafting policies, defining strategies, and analyzing corporate data.

But while we may feel reasonably anonymous as we chat away, it’s important to remember chatbots are not bound by any of the same confidentiality rules as doctors, lawyers, therapists, or employees of organizations.

In fact, when safeguards fail or people use them without fully understanding the implications, very sensitive and potentially damaging information could be exposed.

Unfortunately, this risk isn’t just hypothetical. Recent news reports highlight several incidents where this sort of data leak has already happened.

This raises a worrying question: without a serious rethink of how generative AI services are used, regulated and secured, could we be sleepwalking towards a privacy catastrophe?

So what are the risks, what steps can we take to protect ourselves, and how should society respond to this serious and growing threat?

How Do Chatbots And Generative AI Threaten Privacy?

There are several ways that information we might reasonably expect to be protected can be exposed when we put too much trust in AI.

The recent ChatGPT “leaks”, for example, reportedly occurred when users didn’t realize that the “share” function could make the contents of their conversations visible on the public internet.

The share functionality is designed to allow users to take part in collaborative chats with other users. However, in some cases, this meant they also became indexed and searchable by search engines. Some of the information inadvertently made public in this way included names and email addresses, meaning the participants of the chat could be identified.

It was also recently revealed that up to 300,000 chats between users and the Grok chatbot had been indexed and made publicly visible in the same way.

While these issues seem to have been caused by users’ misunderstanding of features, other, more nefarious security flaws have emerged. In one case, security researchers found that Lenovo’s Lena chatbot could be “tricked” into sharing cookie session data via malicious prompt injections, allowing access to user accounts and chat logs.

And there are other ways that privacy can be infringed upon besides chat logs. Concerns have already been raised over the dangers of nudification apps that can be used to create pornographic images of people without their consent. But one recent incident suggests this can even happen without user intent; Grok AI’s recent “spicy” mode is reported to have generated explicit images of real people without even being prompted to do so.

The worry is that these aren’t simple, one-off glitches, but systemic flaws with the way that generative tools are designed and built, and a lack of accountability for the behavior of AI algorithms.

Why Is This A Serious Threat To Privacy?

There are many factors that could be involved in exposing our private conversations, thoughts and even medical or financial information in ways we don’t intend.

Some are psychological — like when the feeling of anonymity we get when discussing private details of our lives prompts us to over-share without thinking about the consequences.

This means that large volumes of highly sensitive information could end up being stored on servers that aren’t covered by the same protections that should be in place when dealing with doctors, lawyers, or relationship therapists.

If this information is compromised, either by hackers or poor security protocols, it could lead to embarrassment, risk of blackmail or cyberfraud, or legal consequences.

Another growing concern that could contribute to this risk is the increasing use of shadow AI. This term refers to employees using AI unofficially, outside of their organizations’ usage policies and guidelines.

Financial reports, client data, or confidential business information can be uploaded in ways that sidestep official security and AI policies, often neutralizing safeguards intended to keep information safe.

In heavily regulated industries such as healthcare, finance, and law, many believe that this is a privacy nightmare waiting to happen.

So What Can We Do About It?

First, it’s important to acknowledge the fact that AI chatbots, however helpful and knowledgeable they might seem, are not therapists, lawyers, or close and trusted confidants.

As things stand now, the golden rule is simply never to share anything with them that we wouldn’t be comfortable posting in public.

This means refraining from discussing specifics of our medical histories, financial activities or personal identifiable information.

Remember, no matter how much it feels like we’re having a one-to-one conversation in a private environment, it’s highly likely that every word is stored and, by one means or another, could end up in the public domain.

This is particularly relevant in the case of ChatGPT, as OpenAI is, as of writing, obliged by a US federal court order to store all conversations, even those deleted by users or conducted in its Temporary Chat mode.

When it comes to businesses and organizations, the risks are even greater. All companies should have procedures and policies in place to ensure everyone is aware of the risks and to discourage the practice of “shadow AI” as far as is practically possible.

Regular training, auditing, and policy reviews must be in place to minimize risks.

Beyond this, the risks to personal and business privacy posed by the unpredictable way chatbots store and handle our data are challenges that wider society will need to address.

Experience tells us we can’t expect tech giants like OpenAI, Microsoft and Google to do anything other than prioritize speed-of-deployment in the race to be the first to bring new tools and functionality to market.

The question isn’t simply whether chatbots can be trusted to keep our secrets safe today, but whether they will continue to do so tomorrow and into the future. What is clear is that our reliance on chatbots is growing faster than our ability to guarantee their privacy.



Forbes

Leave a Reply

Your email address will not be published. Required fields are marked *