The homeowners association (HOA) of my townhouse community is debating the cost of replacing the shrubberies that separate our shared patios with privacy walls. I don't use my patio as often as my neighbors do, so I've been passively intrigued by the passionate rhetoric some homeowners have used at HOA meetings to argue in favor of the privacy walls. Although I appreciate the desire for personal privacy (and wish some of my sunbathing neighbors would value their privacy a little bit more), as I connected to my WiFi, I couldn't help but wonder why my neighbors aren't as concerned about their Internet firewalls as they are about their patio walls. Most of my neighbors' wireless networks are either unsecured or protected only by a default password. (DSL networks through our local landline telephone provider use your home phone number as the default password.)
This is just one example of how, in the age of big data, where nearly every aspect of our personal and professional lives is captured as data, we rarely seem concerned about data privacy. Not only do we use unsecured WiFi connections, but we also freely give away countless bytes of our personal data in exchange for free Internet/mobile-based services. And even when we do pay for services, how often do we consider how our data will be used? Where does the responsibility for data privacy lie?
Seeing through transparency
One of the most lauded principles of data privacy is transparency – but Omri Ben-Shahar dispelled its mythology. As he explained, the theory says “if firms are required to tell people what information they collect, and do so in a simple and conspicuous manner, people would be able to avoid doing business with those that inflict abusive privacy practices.” But there's a problem with the transparency solution – studies have shown no evidence that it works. It's tempting to conclude that this is because privacy policies are often verbose and vague. Yet “even when the notices about the ways firms collect, use and share information are delivered in the simplest and most concise manner, people still don’t read the notices and don’t change their behavior. If simple notices are not read or used by people, the hopes for informed choice crumble. Users are not going to opt out of Google’s personalized ads or personalize Facebook’s privacy settings. These consumers might comparison-shop among services based on various quality and service measures, but not on the basis of privacy features.”
Saving private data
So it would seem transparency alone isn't the answer, since the majority of consumers don't prioritize data privacy. Even when they do, consumers must still rely on the organizations storing their data to safeguard its privacy. Whether externally transparent or not, organizations need to protect data privacy by having clearly defined data privacy policies detailing exactly what data is to be saved and how that data should be used. Doing this provides a well-defined way to identify which data is sensitive and therefore subject to protection. Organizations also need to reduce the risk of unauthorized access to sensitive data by understanding how data privacy affects different organizational roles, and by masking sensitive data to provide differential privacy that still enables essential analytics without needlessly exposing sensitive information.
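To make those last two ideas concrete, here is a minimal Python sketch of what masking identifiers and releasing a differentially private aggregate can look like. The record fields, clamping bounds and epsilon value are illustrative assumptions on my part, not a description of any particular product or policy.

```python
import hashlib
import math
import random

# Hypothetical customer records -- the field names are illustrative only.
customers = [
    {"name": "Ada Park", "email": "ada@example.com", "spend": 120.00},
    {"name": "Ben Ruiz", "email": "ben@example.com", "spend": 75.50},
    {"name": "Cay Wong", "email": "cay@example.com", "spend": 210.25},
]

def mask_record(record):
    """Mask direct identifiers while keeping analytic fields usable."""
    masked = dict(record)
    # A one-way hash keeps records linkable across datasets
    # without revealing who they belong to.
    masked["name"] = hashlib.sha256(record["name"].encode()).hexdigest()[:12]
    # Redact everything but the email domain.
    masked["email"] = "***@" + record["email"].split("@")[1]
    return masked

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via the inverse CDF."""
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_mean(values, lower, upper, epsilon):
    """Differentially private mean using the Laplace mechanism.

    Clamping every value to [lower, upper] bounds the sensitivity of
    the mean at (upper - lower) / n, which calibrates the noise.
    """
    clamped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / len(clamped)
    return sum(clamped) / len(clamped) + laplace_noise(sensitivity / epsilon)

masked = [mask_record(r) for r in customers]          # safe to share internally
avg_spend = dp_mean([r["spend"] for r in customers],  # safe to report
                    lower=0.0, upper=500.0, epsilon=1.0)
print(masked)
print(round(avg_spend, 2))
```

The epsilon parameter governs the privacy/utility trade-off: smaller values add more noise and stronger privacy, larger values yield more accurate analytics. Whether a hash-and-noise approach like this sketch is sufficient depends entirely on the threat model, which is exactly why the clearly defined privacy policies mentioned above have to come first.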
What say you?
Can we save private data? Or in the age of big data, is this a war we have already lost? Please share your perspective and experience regarding data privacy by posting a comment below.
Download a paper about SAS Data Management and the General Data Protection Regulation
2 Comments
Your article brings up some important questions; thank you for this thoughtful piece. I would argue that the main reason transparency does not affect whether or not people agree to forfeit their data is the lack of choice and alternatives. Google, Facebook, LinkedIn, internet service providers, and so many other services have become staples of our society; you can't just opt out. And because all of these groups require you to forfeit your data, you do it. Not because you want to. Not because you trust them to protect you. But because you have NO CHOICE. So as to your question on data privacy, there are really two possibilities that I see. We either find a way to enforce regulations that make companies protect the data or pay meaningful consequences. Or we develop alternative services that don't require you to forfeit your data. I would leave Facebook in a heartbeat if there were an alternative that wasn't so invasive. And while some people are simply jaded about the lack of data privacy, I think there are many others who feel the same as me.
Thanks for your comment, Elizabeth. I agree that the reason “informed choice” doesn’t work is that our reality is truly “no choice”: opting out is not really an option since, as you said, services such as Google and Facebook are too ingrained in our society and no real alternatives currently exist. I also agree with your perspective on alternatives. I, perhaps pessimistically, doubt the efficacy of enforcing regulations since technology always evolves faster than governments’ political consensus on how to regulate it. I would also leave Facebook in an instant if a real alternative became available. My cynicism makes me doubt this will ever happen, since only a paid service could challenge the profitability of free services that are free to exploit our data; and, back to your earlier point, given how ingrained Facebook is in our society, not enough people would switch to a paid service.