When Privacy Law Goes “Bump in the Night”

Summary

Is privacy dead? Are the laws that are supposed to protect our privacy effective? This article discusses the concept of privacy in general and a specific privacy law that may do more harm than good. The Minnesota “Internet Privacy” (2016) statute may, at first glance, appear to protect consumers from internet service providers disclosing personally identifiable information. Upon further examination, however, the statute does not adequately define the types of data that can be used to identify users, and it allows the release of consumer browsing information to law enforcement agencies without a warrant or court order. Even if the law were effectively written, it might not be enough to truly protect consumers’ privacy, as violations of privacy law often go unprosecuted. Free-market capitalism also fails to protect privacy because privacy issues are often hard to understand, which prevents consumers from making informed decisions.

The last 50 years have brought massive changes to technology and society. The rise of big data allows quicker and more efficient access to information than ever before and has enabled significant breakthroughs across many parts of society, but the privacy violations that surged alongside this boom have been left largely unchecked. In recent times, when someone mentions “privacy” on the internet, they are almost always met with one of two responses: fear and paranoia, or a quick scoff followed by “There is no such thing as privacy on the internet! And besides, do you see the irony of talking about privacy while you are on [insert your social media platform of choice here]?” If society stands a chance at keeping privacy violations in check, laws such as the Minnesota “Internet Privacy” (2016) statute must be updated to reflect current definitions of personally identifiable information (PII) while removing excuses and adding real consequences for privacy violations. The reality is that privacy is not dead, but rather poorly regulated and enforced.

While Minnesota’s “Internet Privacy” (2016) statute is often touted as protecting Minnesota residents by prohibiting internet service providers (ISPs) from disclosing PII, the reality is that the statute falls drastically short by listing exceptions that allow (and in many cases even require) ISPs to disclose information containing PII (MN Stat § 325M, 2016). It also defines PII in a way that allows individuals to be easily tracked across the web. Legislators may mean well when they create these types of laws, but they are often uninformed about the technology from which they are trying to protect their citizens.

To make informed laws protecting PII, one must first understand that computers are not human; they identify people differently than other people do. When a person meets someone for the first time, they attempt to associate that person’s face with their name. With varying success, this allows them to recognize that person and identify them by name later. Research has shown that in typically developing adults, a face is recognized not by its individual features but as a whole (although this is not necessarily true for many people with autism spectrum disorders) (Faja et al., 2012, p. 2).

By contrast, computers identify people using unique information rather than general features. This is evident in computer facial recognition, a complex task that requires over 80 points of data for each template (Thornburg, 2002, p. 325). Even with the complexity of those facial templates, a 2018 report on police use of facial recognition showed an approximately 92% false-positive rate when attempting to identify criminals (Fingas, 2018). Even a person who claims to be terrible at remembering names would have to work hard to achieve a failure rate that high.

This does not mean that computers are bad at tracking or identifying people or devices. Rather, computers use different methods of tracking and identification than humans do, and they do not perform as well when employing human methods. Computers track best when referencing unique and persistent information about a person. A computer also does not necessarily need to track a person at all: tracking a device that the person uses is often more than adequate for many forms of tracking.

This unique and persistent information can be obtained in a variety of ways, such as by combining multiple points of contact information about a person (name, address, email address, and phone number) or by collecting larger amounts of less unique information such as ZIP code, age, and gender. Many online services also set tracking cookies in a user’s browser. These cookies can be used for legitimate purposes such as authentication and saving a user’s preferences, but they can also be used to track a person across the web by storing seemingly random unique identifiers. Devices also carry unique identifiers such as serial numbers, MAC addresses, and IP addresses, which can be accessed in a variety of ways, especially whenever an application is installed on the device.
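
To make the mechanics concrete, here is a minimal Python sketch of how several pieces of individually “common” information can be combined into a single persistent identifier of the kind a tracking cookie might carry. The field names and values are hypothetical and chosen only for illustration, not taken from any particular tracking product:

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine individually 'harmless' attributes into one stable identifier.

    None of these fields names a person directly, but together they are
    often unique enough to follow the same visitor across sites.
    """
    # Sort the keys so the same set of attributes always yields the same hash.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Hypothetical visitor attributes of the sort a site can observe passively.
visitor = {
    "zip": "55401",
    "birth_year": "1987",
    "gender": "F",
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/124.0",
    "screen": "1920x1080",
    "timezone": "America/Chicago",
}

print(fingerprint(visitor))  # Same visitor, same string, on every visit.
```

The point of the sketch is simply that nothing in the dictionary is a name or an address, yet the resulting value behaves exactly like one for tracking purposes.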

These identifiers, along with any other uniquely identifying information, make it easy for a computer to attribute tracked activity to a single entity. Tracking to a single entity may not sound too scary if that entity’s name and contact information are not included in the profile. There are even processes to “de-identify” data, removing all references to information that humans can easily use to identify a person and replacing them with unique identifiers. These seemingly random identifiers may not seem like something that could be used to link data back to users, but studies have shown that “any information that distinguishes one person from another can be used for re-identifying data” (Narayanan & Shmatikov, 2010, p. 26).
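
As a toy illustration of why de-identification alone is not enough, consider the following Python sketch, in which a “de-identified” browsing log is joined against a second data set that still carries names. All records here are invented; the sketch only demonstrates the linkage idea, not any real data set:

```python
# Dataset A has been "de-identified": names are gone, replaced by a random ID.
deidentified_browsing = [
    {"id": "a91f3c", "zip": "55401", "birth_year": 1987, "site": "clinic.example.org"},
    {"id": "77be02", "zip": "55105", "birth_year": 1962, "site": "news.example.com"},
]

# Dataset B is public or purchased and still carries names.
voter_roll = [
    {"name": "Pat Smith", "zip": "55401", "birth_year": 1987},
    {"name": "Lee Jones", "zip": "55105", "birth_year": 1962},
]

# Joining the two on the remaining quasi-identifiers puts the names back.
for record in deidentified_browsing:
    for person in voter_roll:
        if (record["zip"], record["birth_year"]) == (person["zip"], person["birth_year"]):
            print(f'{person["name"]} visited {record["site"]}')
```

The “anonymous” log never contained a name, yet a simple join is enough to re-attach one, which is exactly the re-identification risk Narayanan and Shmatikov describe.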

Once one understands that seemingly random letters and numbers can identify a person when they are unique to that person, the importance of protecting such identifiers in the same way as other PII becomes much clearer. In fact, other laws already provide legal precedent for defining PII this way. The Children’s Online Privacy Protection Act (COPPA) is one of the strongest examples. Its definition of personal information includes the following: “A persistent identifier that can be used to recognize a user over time and across different Web sites or online services. Such persistent identifier includes, but is not limited to, a customer number held in a cookie, an Internet Protocol (IP) address, a processor or device serial number, or unique device identifier” (Children’s Online Privacy Protection Act, 2013).

           While COPPA provides one of the strongest definitions of PII, it is not the only law to include such references. The California Consumer Privacy Act (CCPA) states the following: “‘Personal information’ means information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household. Personal information includes, but is not limited to, the following if it identifies, relates to, describes, is reasonably capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household: … Identifiers such as a real name, alias, postal address, unique personal identifier, online identifier, internet protocol address, email address, account name, social security number, driver’s license number, passport number, or other similar identifiers.” (California Consumer Privacy Act, 2018)

           For regulatory bodies outside of the United States, the General Data Protection Regulation (GDPR) is a good example of a legal precedent for unique identifiers being considered PII. The GDPR states that “‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.” (General Data Protection Regulation, 2016)

By contrast, the Minnesota Internet Privacy statute defines PII as information that identifies: “(1) a consumer by physical or electronic address or telephone number; (2) a consumer as having requested or obtained specific materials or services from an Internet service provider; (3) Internet or online sites visited by a consumer; or (4) any of the contents of a consumer’s data-storage devices” (MN Stat § 325M, 2016). Not only does the statute fail to define unique identifiers as PII, it also requires ISPs to disclose the protected data when a subpoena is issued (including an administrative subpoena, which any law enforcement agency can issue without a court order): “An Internet service provider shall disclose personally identifiable information concerning a consumer: … pursuant to subpoena, including an administrative subpoena, issued under authority of a law of this state or another state or the United States” (MN Stat § 325M, 2016).

           Even if the Minnesota “Internet Privacy” (2016) statute did adequately define PII and did not require ISPs to give up PII according to the whims of law enforcement agencies, the enforcement of privacy laws would still be difficult. The legal system is largely based on reviewing past cases and opinions to establish precedents, but the area of privacy law (although growing) is rather new to the legal system. Due to its novelty, privacy law lacks many of the established precedents that other areas of law have.

In addition to the lack of precedents, regulators are motivated to enforce laws but often lack the resources to do so for every infringement. This leads to a situation where only the most drastic violations are prosecuted. When only the most egregious violations are dealt with, companies are largely free to operate as they please, and rights continue to erode as the lack of enforcement persists.

Free-market capitalism also fails in the area of privacy rights, more so than many would argue it does in other areas. When a company is “outed” for a specific incident that goes against the beliefs of a large group of people, such as exploiting child labor or other humanitarian abuses, the outcry from that group can cause devastating losses for the company (Steinman & Wolfrom, 2012, p. 32). This is because specific incidents of deviation from social norms are easy to understand. Privacy issues, by contrast, are often not easy to understand and frequently require special skill sets.

For instance, consider privacy policies. Anyone can read a privacy policy, but such policies are intimidating, and it takes time to get used to reading them. In addition, the average consumer has no way of verifying that a company is being honest. If someone reads a return policy and the company violates that policy, the violation is easy to detect. However, if someone reads a privacy policy and the company violates it, that person would often have no way of knowing until it was too late (unless they have the skills to intercept and analyze the traffic their devices send).

If regulators are overworked and the self-policing free market is ineffective on this issue, what is the solution? Private right of action (which allows consumers, rather than only government regulators, to file lawsuits for violations of certain laws) and allowing consumers to stand up for themselves may be part of the answer. While some plaintiffs’ firms may attempt to abuse this power by filing lawsuits over the smallest infractions (thus taking the focus away from the bigger picture), the threat that any law firm that discovers an infraction could file suit would likely cause companies to self-police more and thus stay more conscious about protecting the privacy rights of their users.

           Ezra Thayer proposed a thought experiment on this theory of enforcement through private right of action when he wrote the following: “Suppose, for example, that an ordinance makes it a misdemeanor punishable by a fine to leave a horse unhitched on a highway. The defendant leaves his horse unhitched contrary to the ordinance and it runs away and injures the plaintiff. How does the fact that he has violated the ordinance affect his civil liability?” (Thayer, 1914, p. 319) Thayer went on to suggest that because private right of action was not built into the hypothetical law, it should not increase the defendant’s civil liability within common law (Thayer, 1914, pp. 319-321).

While Thayer was a well-respected legal scholar, when one examines this thought experiment within the context of privacy law, one must essentially remove common law from the equation, as common law does not protect privacy in the current technology landscape. Without civil liability within common law, the plaintiff would be injured, and although the defendant would be acting outside the law, the plaintiff would have no way of seeking redress. Even if a regulator decided to prosecute the defendant, the plaintiff would still receive no restitution from the judgment. Using this thought experiment, it seems clear that the plaintiff has the strongest motive for ensuring the law is enforced and should be able to claim at least part of any restitution awarded by the court, since the plaintiff, rather than the government, was the one harmed.

However, private right of action may not be the entire answer. One problem with allowing private right of action in today’s threat landscape is that it tends to increase the number of binding arbitration agreements that companies insert into their terms of service. In the US, binding arbitration clauses effectively stifle the ability of private law firms to enforce laws and contracts because cases must be litigated on an individual basis (9 U.S.C. § 301, 1925), which is a very inefficient way of handling claims that affect millions of people.

Arbitration clauses do help reduce the workload of the court system. The problem is that, ultimately, they lead to situations where laws continue to go unenforced unless regulators decide to get involved. When regulators are forced to get involved, consumers may not receive any direct restitution, as the damages awarded are often punitive rather than compensatory.

Binding arbitration clauses have been heavily challenged in recent years but have ultimately been upheld by the Supreme Court of the United States (Epic Systems Corp. v. Lewis, 2018). Some plaintiffs’ firms have attempted to get around this by filing massive numbers of arbitrations for issues that would otherwise be handled as class action or mass action lawsuits. This has led to the laughable situation where the company that forced the arbitration often attempts to get out of it because the company is responsible for the increased costs associated with arbitration (McClenon et al. v. Postmates Inc., 2020). While admirable in making companies think twice before enforcing arbitration, these tactics are ultimately very inefficient and end up costing more than the damages awarded, making them ineffective at enacting any real change (as the number of firms willing to pursue this tactic is small).

           The reality is that there is no perfect solution when it comes to forcing companies to respect privacy. Privacy is not dead yet, but it is on life support. The status quo is not sustainable and will quickly lead to a complete and utter collapse of privacy. This is the point at which many people start to question whether or not privacy really matters.

Some may think they have nothing to hide, but research suggests otherwise. Studies have shown that people who know they are being watched behave differently than those who do not (Beaman et al., 1979; Nettle et al., 2012; Bateson et al., 2013). This implies that there are simply some behaviors people wish to keep private.

Even if someone personally does not care that big tech companies know their shopping history (or would even say they like the personalized recommendations these companies make with that information), others may care, and it should be their right to make a conscious and informed decision about what information they share with third parties. It is important to remember that Facebook and Google have only a limited view of a person’s browsing history: they can see it only on websites that participate with them (although, admittedly, participating websites make up the majority of the web these days). ISPs, by contrast, have an unrestricted view of all browsing activity that flows through their networks.

Even when traffic is encrypted, ISPs can still learn which websites someone is visiting (Dubin et al., 2016; Rieke et al., 2016). VPNs can help hide traffic from an ISP if set up correctly, but they simply move the threat model from one provider to another. In some cases, VPNs can even be a security risk or blatantly collect information about browsing habits (Semrau, 2018). Because VPNs can be a security and privacy risk, people are often no better off with a VPN than with their ISP.
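
As one concrete example of how much remains visible on the network path even when every web page is served over HTTPS, the following Python sketch prints the hostnames found in plaintext DNS queries leaving a machine. It assumes the third-party scapy package is installed and that the script is run with packet-capture privileges; it is an illustration of what an observer in the ISP’s position can see, not a description of any ISP’s actual tooling:

```python
from scapy.all import sniff, DNSQR  # requires scapy and capture privileges

def show_query(pkt):
    # DNS questions are normally sent in the clear (unless DNS-over-HTTPS/TLS
    # is in use), so the queried hostname is visible even for encrypted sites.
    if pkt.haslayer(DNSQR):
        print(pkt[DNSQR].qname.decode(errors="replace"))

# Capture outbound DNS lookups; every printed name is a site about to be visited.
sniff(filter="udp port 53", prn=show_query, store=False)
```

Encrypting DNS or using a VPN changes who can run this kind of capture, but, as noted above, it does not eliminate the observer; it only moves the trust to a different provider.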

The amount of data an ISP can collect should be scary; it should keep people up at night. In Minnesota, that data can be used against citizens without a judge ever being involved, and ISPs can sell information about consumers to third parties because the statute’s definition of PII is outdated. Minnesota legislators attempted to do the right thing but relied on outdated information when creating the law. Consequently, the law should be updated to properly define PII and to require a court order before information is released to third parties or law enforcement agencies.

One way US citizens can protect their privacy is to make it easier for legislators to update laws as technology changes. Privacy should not be a partisan movement. Citizens should put pressure on legislators to properly protect their privacy and to keep laws current as technology evolves. However, having laws in place is not enough; such laws need to be properly enforced. Privatizing enforcement may help, but by itself it is not enough.

It is time to take a good hard look at how arbitration is used in the US and find a way to hold companies responsible for their actions. It is time to start reading privacy policies and to think about whether taking an online quiz is worth giving up our privacy rights. It is time to put pressure on legislators to make up-to-date and effective privacy laws. Privacy is not dead; it is simply poorly regulated and enforced.

References

9 U.S.C. § 301.

Bateson, M., Callow, L., Holmes, J. R., Roche, M. L. R., & Nettle, D. (2013). Do images of ‘watching eyes’ induce behaviour that is more pro-social or more normative? A field experiment on littering. PLoS ONE, 8(12).

Beaman, A. L., Klentz, B., Diener, E., & Svanum, S. (1979). Self-awareness and transgression in children: Two field studies. Journal of Personality and Social Psychology, 37(10), 1835–1846.

Cal. Civ. Code § 1798.100 (2018).

Children’s Online Privacy Protection Rule, 16 C.F.R. § 312 (2013).

Dubin, R., Dvir, A., Pele, O., & Hadar, O. (2016). I know what you saw last minute: The Chrome browser case. Presentation at Black Hat Europe 2016.

Epic Systems Corporation v. Lewis, 584 U.S. 1, 7 (2018).

Faja, S., Webb, S. J., Jones, E., Merkle, K., Kamara, D., Bavaro, J., … Dawson, G. (2012). The effects of face expertise training on the behavioral performance and brain activity of adults with high functioning autism spectrum disorders. Journal of Autism and Developmental Disorders, 42(2), 278–293.

Fingas, J. (2018, May 6). Police face recognition misidentified 2,300 as potential criminals. Retrieved from https://www.engadget.com/2018-05-06-police-face-recognition-misidentified-2300-as-criminals.html.

General Data Protection Regulation (2016) OJ L119/1.

McClenon et al. v. Postmates, Inc. (United States District Court for the Northern District of Illinois, Eastern Division, July 20, 2020).

MN Stat § 325M (2016).

Narayanan, A., & Shmatikov, V. (2010). Myths and fallacies of “Personally Identifiable Information.” Communications of the ACM, 53(6), 24–26.

Nettle, D., Nott, K., & Bateson, M. (2012). ‘Cycle thieves, we are watching you’: Impact of a simple signage intervention against bicycle theft. PLoS ONE, 7(12).

Rieke, A., Robinson, D., & Yu, H. (2016, March). What ISPs can see; Clarifying the technical landscape of the broadband privacy debate. Retrieved from https://www.upturn.org/reports/2016/what-isps-can-see/.

Semrau, B. (2018, January 17). Put away the tinfoil hat (the truth about using a VPN). Retrieved from https://semsec.net/2018/01/16/vpn/

Steinman, D. R. B., & Wolfrom, B. T. (2012). The effect of brands’ unethical actions on consumers’ attitudes in the fast moving consumer goods domain. Business Management Dynamics, 2(3), 32–39.

Thayer, E. R. (1914). Public wrong and private action. Harvard Law Review, 27(4), 317–343.

Thornburg, R. H. (2002). Face recognition technology: The potential Orwellian implications and constitutionality of current uses under the Fourth Amendment. The John Marshall Journal of Information Technology & Privacy Law, 20(2), 321–346.

Disclaimer

I am not an attorney, nor do I play one on TV or Facebook. Nothing stated in this article should be considered legal advice.
