Originally published on Cyber News, December 16, 2021
In the wake of unprecedented changes in 2021, experts offer their takes on how the online privacy landscape will evolve in 2022 and beyond.
As the year draws to a close, the momentum of pro-privacy sentiment among consumers and lawmakers seems to be at an all-time high. Data brokers and marketers are being forced to adapt to new realities, while online privacy advocates are celebrating both fresh and upcoming data protection regulations.
With some notable exceptions, it might seem that 2021 was a great year for privacy. New regulations are set to go into effect in Japan and Australia, while Saudi Arabia and China have already signed their own bills into law. Meanwhile, countries like Canada, India, and South Korea are likely to implement additional privacy protections soon.
Alongside stricter regulations from lawmakers, tech giants like Apple are making their software more privacy-friendly, while the privacy protection market is set to reach new heights. What will these changes and trends mean for consumers and organizations in the coming year?
I asked experts about what to expect for privacy in 2022. Here’s what they said.
False privacy promises
Dr. Chris Pierson, CEO of digital protection firm BlackCloak, notes that data collectors and advertising giants are well aware of rising consumer privacy awareness and the mainstream discontent that is bubbling up.
“More and more people are demanding unprecedented transparency, and manufacturers and operators will have no choice but to act. That’s why in 2022, many carriers will begin to provide more transparency to the end-users,” says Pierson.
As a prime example of improved transparency, he points to Apple’s release of iOS 15.2, whose App Privacy Report lets millions of users see how often apps access their data and sensors, and which domains those apps contact.
“But not all data collectors will follow Apple’s lead of greater transparency,” cautions Pierson. “Instead, look for others to adopt new technologies to help collect the same type of information, but do so in a less overt way, and to form coalitions for data sharing that enable the value of records to remain high.”
Similarly, Professor Michael Huth, Head of the Department of Computing at Imperial College London and Chief Research Officer of Xayn, adds that users should be on the lookout for false privacy promises and stay critical.
“Some companies might try to sell people services they don’t actually need by highlighting false advantages, such as how an always-on camera could give them hands-free access to their smartphone,” warns Professor Huth. “Hands-free access can be provided when cameras are off by default, and such a feature opens the door for total user surveillance, meaning there is no real value proposition for this.”
“The takeaway is this: The most important tool a user has to protect their privacy is to check, prior to using a service, whether using that service is necessary and worth a potential privacy trade-off.” -Professor Michael Huth
A farewell to cookies
Rob Shavell, the CEO of personal information removal company Abine/DeleteMe, points to the inevitable death of the ubiquitous cookie as something to look forward to in 2022.
“Their demise will be caused by the structural changes to the ad-tech landscape driven by the new approaches being taken by Apple and Google, as well as major browsers ending support of third-party tracking overall,” Shavell suggests, noting that it won’t all happen at once.
“Chrome has already postponed the end of cookie tracking until 2023, but other major browsers have already phased the technology out, and it's already clear that the ‘old way’ of siphoning information about online consumers is quickly coming to a close.”
According to Shavell, the death of the cookie will prompt a scramble among data-driven businesses to find new methods to farm user data.
“While some are resorting to relatively benign ‘opt-in’ methods, such as vendors offering ‘points’ systems with benefits in exchange for user information, others are more invasive, like various changing methods of browser fingerprinting,” he commented. “While the latter tech is often blocked in the Google and Apple ecosystems, it is more of a whack-a-mole process that may persist.”
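Fingerprinting works without storing anything on the user's device: it combines many individually innocuous browser attributes into a quasi-stable identifier. The TypeScript sketch below is illustrative only, not any specific vendor's code, and shows the kind of signals such scripts typically hash together.

```typescript
// Illustrative sketch: combining browser attributes into a quasi-stable
// identifier, with no cookie stored. Runs in a browser context.

async function sha256Hex(input: string): Promise<string> {
  const data = new TextEncoder().encode(input);
  const digest = await crypto.subtle.digest("SHA-256", data);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

async function collectFingerprint(): Promise<string> {
  // Each signal is harmless on its own; combined, they can narrow a browser
  // down to a very small set of users.
  const signals = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    String(navigator.hardwareConcurrency),
  ];
  return sha256Hex(signals.join("|"));
}

collectFingerprint().then((id) => console.log("fingerprint:", id));
```

Real-world scripts draw on far more signals (canvas and audio rendering quirks, installed fonts, and so on), which is why blocking them tends to be the whack-a-mole process Shavell describes.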
On the other hand, Shavell speculates that in response to both the increasingly closed-off mobile platforms, as well as the looming enforcement of privacy regulations, invasive technology may become even more sophisticated in 2022.
The rise of ubiquitous biometric authentication
Speaking of invasive authentication technologies, Rob Shavell notes that major tech platforms have increasingly begun to see things like passwords and two-factor authentication as “outdated processes that are still too easily abused and exploited.”
“The double-edged sword of identity authentication is that any process too complex or new to users risks limiting new user growth, or complicating existing user experiences,” he claims, adding that authenticator apps – themselves still relatively new to the market – are already seen as less than ideal.
“What the wider industry would prefer is a single form of ID verification that people use in all circumstances, and the most obvious is biometric authentication.” -Rob Shavell, CEO of DeleteMe
“While fingerprint, eye scan, and facial recognition technologies are improving dramatically and provide many user conveniences that passwords don’t, we still consider the growing prevalence of digitized biometrics to be a longer-term source of privacy risk,” Shavell says.
According to him, when the same unique piece of data is used to unlock online accounts and at the same time provides access to financial assets, it creates a skeleton key for fraudsters.
“Biometric authentication carries many of the same risks that overuse of social security numbers created, but with even greater potential for long-term damage in the event of data breaches, which will be increasingly common in 2022,” Shavell warns.
Going beyond privacy-by-design
On a more positive note, Dan DeMers, CEO of dataware platform Cinchy, predicts we might finally see tech companies embrace the principle of privacy-by-design in 2022.
“The notion of privacy-by-design has actually been around since the 1970s,” reminds DeMers, adding that the term has taken on far greater significance in the current era. “While too many organizations still only pay lip service to this priority, we may reach critical mass in 2022, as the majority of product and software developers finally embrace this principle.”
DeMers believes that there are signs that we may go even further than mere privacy-by-design. “The notion of ‘zero-copy integration’ – a dramatically different approach that eliminates traditional copy-based data integration in favor of meaningful access controls – is set to become a national standard in Canada, and start to develop traction everywhere,” he tells us.
With that said, DeMers notes that data privacy shouldn’t begin with technologies or processes.
“It’s really about defining control and access. Policies need to govern who can access data, what data can be accessed, how it can be accessed, and when it can be accessed.” -Dan DeMers, CEO of Cinchy
According to him, in 2022, data privacy controls may include the power to ‘port’ data from one hosting environment to another, and to delete data as necessary.
“A lot of data is now routinely copied via the process known as data integration, and this is where the problems mount,” DeMers explains. “Data is a singularly unique commodity, just like currency, identity cards, and intellectual property – and there are strict laws in place to prevent the duplication of those assets. In 2022, organizations will need to govern and secure their data with that mindset.”
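To make DeMers' who/what/how/when framing concrete, here is a minimal TypeScript sketch of a policy check along those four dimensions. The types and names are hypothetical illustrations, not Cinchy's API or any standard.

```typescript
// Hypothetical access-policy check covering who, what, how, and when.
type Action = "read" | "export" | "port" | "delete";

interface AccessPolicy {
  allowedRoles: string[];   // who may access the data
  allowedFields: string[];  // what data may be accessed
  allowedActions: Action[]; // how it may be accessed
  validFrom: Date;          // when access is permitted...
  validUntil: Date;         // ...and when it expires
}

interface AccessRequest {
  role: string;
  field: string;
  action: Action;
  at: Date;
}

function isPermitted(policy: AccessPolicy, req: AccessRequest): boolean {
  return (
    policy.allowedRoles.includes(req.role) &&
    policy.allowedFields.includes(req.field) &&
    policy.allowedActions.includes(req.action) &&
    req.at >= policy.validFrom &&
    req.at <= policy.validUntil
  );
}

// Example: an analyst may read, but not export, email addresses during 2022.
const policy: AccessPolicy = {
  allowedRoles: ["analyst"],
  allowedFields: ["email"],
  allowedActions: ["read"],
  validFrom: new Date("2022-01-01"),
  validUntil: new Date("2022-12-31"),
};

const when = new Date("2022-06-01");
console.log(isPermitted(policy, { role: "analyst", field: "email", action: "read", at: when }));   // true
console.log(isPermitted(policy, { role: "analyst", field: "email", action: "export", at: when })); // false
```

The point of the sketch is that access is granted against the live record rather than by handing out copies, which is the shift away from copy-based data integration that DeMers describes.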
AI-powered Big Brother in the workplace
When it comes to the worrying proliferation of workplace surveillance tools, Theo Wills, Senior Director of Privacy at privacy and security consulting company Kuma, believes that the use of AI to monitor workers will continue to expand in 2022.
“These tools can monitor everything from behavior to biometrics, but we are going to find ourselves increasingly confronted with the fact that AI and algorithms aren’t perfect, and learn to understand and adapt to their limitations,” Wills explains. “We’ve seen, for example, that AI sometimes interprets the actions of members of minority groups differently.”
As the use of AI technology grows, Wills predicts that – somewhat paradoxically – this forward motion will require bosses to take a step back and consider how to use this powerful tool ethically.
“What are the ethical boundaries of AI and workplace surveillance, especially now that so many of our workplaces are in our homes? How will AI be used appropriately when it comes to disciplinary decisions?” -Theo Wills, Senior Director of Privacy at Kuma
According to Wills, companies will need to get thoughtful and clear about their corporate policies and the appropriate use of AI in their organizations.