The Microsoft 365 website on a laptop in New York, USA, on Tuesday, June 25, 2024.
Bloomberg | Getty Images
The start of the year is a great time to do some basic cyber hygiene. We’ve all been told to patch, change passwords and update software. But one concern that has come increasingly to the fore is the sometimes-quiet integration of potentially privacy-invading AI into software.
“The rapid integration of AI into our software and services has raised, and should continue to raise, important questions about privacy policies that predated the AI era,” said Lynette Owens, vice president of global consumer education at the cybersecurity company Trend Micro. Many programs we use today—be they email, accounting, or productivity tools, as well as social media and streaming apps—may be governed by privacy policies that lack clarity about whether our personal data can be used to train AI models.
“This leaves us all vulnerable to the use of our personal information without proper consent. It’s time for every app, website or online service to take a hard look at the data they’re collecting, who they’re sharing it with, how they’re sharing it and whether or not it can be accessed to train AI models,” Owens said. “There’s a lot of work to be done.”
Where AI is already inside our daily lives online
Owens said the potential problems cut across most of the programs and applications we use every day.
“Many platforms have been integrating AI into their operations for years, long before AI became a buzzword,” she said.
As an example, Owens points out that Gmail has used AI for spam filtering and predictive text with its “Smart Compose” feature. “And streaming services like Netflix rely on AI to analyze viewing habits and recommend content,” Owens said. Social media platforms like Facebook and Instagram have long used AI for facial recognition in photos and personalized content feeds.
“While these tools offer convenience, consumers should consider potential privacy tradeoffs, such as how much personal data is being collected and how it is used to train AI systems. Everyone should carefully review their privacy settings, understand what data is being shared, and check regularly for updates to the terms of service,” Owens said.
One tool that has come in for special scrutiny is Microsoft’s Connected Experiences, which has been around since 2019 and is enabled by default, with opting out left as a choice for the user. It was recently highlighted in press reports — inaccurately, according to the company, as well as some outside cybersecurity experts who have looked into the matter — as a feature that was new or had changed settings. Headlines aside, privacy experts worry that advances in AI could lead to the potential for data and words in programs like Microsoft Word to be used in ways that privacy settings don’t adequately cover.
“When tools like connected experiences evolve, even if the underlying privacy settings haven’t changed, the implications of data usage can be much broader,” Owens said.
A Microsoft spokesperson wrote in a statement to CNBC that the company does not use customer data from Microsoft 365 consumer and commercial applications to train its underlying large language models. The spokesperson added that in certain cases, customers may consent to the use of their data for specific purposes, such as custom model development explicitly requested by some commercial customers. Additionally, the setting enables cloud-based features that many people have come to expect from productivity tools, such as real-time co-authoring, cloud storage and tools like Editor in Word that offer spelling and grammar suggestions.
The default privacy settings are a problem
Ted Miracco, CEO of security software company Approov, said features like Microsoft’s connected experiences are a double-edged sword — promising improved productivity but raising significant privacy red flags. Because the setting is on by default, Miracco said, it could opt people into data collection they’re not necessarily aware of, and organizations may also want to think twice before leaving the feature on.
“Microsoft’s assurance offers only partial relief, but it still can’t alleviate some real privacy concerns,” Miracco said.
Perception may be Microsoft’s real problem, according to Kaveh Vahdat, founder of RiseOpp, an SEO marketing agency.
“Enabling these features by default shifts the dynamic significantly,” said Vahdat. “Automatically enabling these features, even with good intentions, essentially puts the onus on users to review and modify their privacy settings, which may feel intrusive or manipulative to some.”
His view is that companies need to be more transparent, not less, in an environment where there is a lot of mistrust and doubt about AI.
Companies, including Microsoft, should emphasize opt-in consent rather than on-by-default settings, and can provide more detailed, non-technical information about how personal content is handled, because perception can become reality.
“Even if the technology is completely safe, public perception is shaped not only by facts, but by fear and assumptions – especially in the age of AI where users often feel powerless,” he said.
Default settings that enable sharing make sense for business reasons, but are bad for consumer privacy, according to Jochem Hummel, assistant professor of information systems and management at Warwick Business School at the University of Warwick in England.
Companies are able to improve their products and stay competitive with more data sharing by default, Hummel said. From the user’s perspective, however, prioritizing privacy by adopting an opt-in model for data sharing would be “a more ethical approach,” he said. And as long as the extra features offered through data collection are not essential, users can choose whichever option best matches their interests.
There are real benefits to the current tradeoff between AI-enhanced tools and privacy, Hummel said, based on what he’s seeing in the work submitted by his students. Students who have grown up with webcams, sharing their lives in real time on social media and pervasive technology are often less concerned about privacy, Hummel said, and are enthusiastically embracing these tools. “My students, for example, are creating better presentations than ever,” he said.
Risk management
In areas such as copyright law, fears of mass copying by LLMs have been overblown, according to Kevin Smith, director of libraries at Colby College, but the evolution of AI intersects with fundamental privacy concerns.
“Many of the privacy concerns that are currently being raised about AI have existed for years; the rapid deployment of AI trained with large language models has just focused attention on some of those issues,” Smith said. “Personal information is about relationships, so the risk that AI models could uncover data that was more secure in a more ‘static’ system is the real change we need to find ways to manage,” he added.
In most programs, disabling AI features is an option buried in the settings. For example, with connected experiences, open a document and then click “file” and then go to “account” and then find privacy settings. Once there, go to “manage settings” and scroll down to connected experiences. Click the box to turn it off. After doing so, Microsoft warns: “If you turn this off, some experiences may not be available to you.” Microsoft says that leaving the setting enabled will allow for more communication, collaboration, and AI-served suggestions.
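For IT administrators who would rather make this change programmatically than click through Word’s menus, the same family of controls can be written as Office privacy policy values in the Windows registry. The sketch below is illustrative only, not official guidance: it assumes a Windows machine, the Office 16.0 registry path, and the policy value names Microsoft documents for Microsoft 365 Apps (with a value of 2 meaning “disabled”); verify the names and behavior against Microsoft’s current documentation before relying on it.

```python
# Illustrative sketch only: turn off Office "connected experiences" for the
# current user by writing the documented privacy policy values. Assumptions
# to verify: Windows, the Office 16.0 key path, and these value names, where
# a DWORD of 2 means the experience is disabled.
import winreg

PRIVACY_KEY = r"SOFTWARE\Policies\Microsoft\office\16.0\common\privacy"

SETTINGS = {
    "disconnectedstate": 2,                   # connected experiences overall
    "usercontentdisabled": 2,                 # experiences that analyze your content
    "downloadcontentdisabled": 2,             # experiences that download online content
    "controllerconnectedservicesenabled": 2,  # optional connected experiences
}

# Create (or open) the per-user policy key and write each value.
with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, PRIVACY_KEY) as key:
    for name, value in SETTINGS.items():
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
        print(f"Set {name} = {value}")
```

As with the in-app toggle, turning these experiences off can disable features such as real-time co-authoring and Editor’s spelling and grammar suggestions, so the tradeoff Microsoft describes still applies.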
In Gmail, open the app, tap the menu, go to settings, click the account you want to change, scroll to the “general” section and uncheck the boxes next to the various “smart features” and personalization options.
As cybersecurity vendor Malwarebytes put it in a blog post about Microsoft’s feature: “Disabling this option may result in lost functionality if you’re working on the same document as other people in your organization. ... If you want to disable these settings for privacy reasons and you don’t use them much anyway, by all means do so. In any case, all the settings can be found under Privacy Settings for a reason, whether or not these connected experiences are used to train AI models.”
While these instructions are easy enough to follow, and learning more about what you’ve agreed to is probably a good idea, some experts say the burden shouldn’t be on the consumer to disable these settings. “When companies implement features like these, they often present them as options for improved functionality, but users may not fully understand the scope of what they’re agreeing to,” said Wes Chaar, a data privacy expert.
“The crux of the issue lies in vague disclosures and a lack of clear communication about what ‘connected’ includes and how deeply their personal content is analyzed or stored,” Chaar said. “For those outside of tech, it can be compared to inviting a helpful assistant into your home, only to find out later that they’ve been taking notes on your private conversations for a training manual.”
The decision to manage, limit or even revoke access to data highlights the imbalance in the current digital ecosystem. “Without robust systems that prioritize user consent and provide control, individuals remain vulnerable to the re-use of their data in ways they neither anticipate nor benefit from,” Chaar said.