
The Privacy Blog

Helping the Hans Brinkers of Cybersecurity

The story of "The Silver Skates" celebrates the bravery of a little Dutch boy who stuck his finger in the dyke to prevent a terrible breach. Today, cyber incident responders are modern Hans Brinkers who use their fingers on computer keyboards to stop personal information hemorrhages.


July 1, 2021, began like any other day. A quick perusal of the morning news revealed the usual "big" data breach: the exposure of 700 million LinkedIn users' (92% of the user base) private information. A similar incident had affected 500 million LinkedIn subscribers not two months earlier. I sighed.


Data breaches are today nearly as predictable as afternoon rain in the tropics. They've become so routine that we hardly notice them, lazily accepting them as a way of life.


Suppose every night after you parked your car, it routinely malfunctioned and sprayed dirty motor oil all over your driver's seat. Instead of trying to fix it, you decided to live with it. Now you have to get up two hours earlier to clean up the mess each morning. Or you put a clean sheet of plastic on the seat each day. Or you start wearing ratty clothes that become saturated with oil, so you smell like an auto garage. Is this the right way to solve such a problem?


Accepting private information breaches as a norm is like tolerating the oily driver's seat. It's unacceptable. You need to take your car to the shop. In cyberspace, you need to start insisting on better personal information protection in computer systems and networks.


The European Union (EU) already insists. Its General Data Protection Regulation (GDPR) protects "fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data." It creates rules for the processing and movement of Europeans' personal data. Should you doubt the Europeans' intent here, you can trace the GDPR's lineage back to the 1948 UN Declaration of Human Rights. Ratified by the fledgling UN in the wake of World War II, this declaration affirmed that "the inherent dignity and … equal and inalienable rights of all … is the foundation of freedom, justice and peace in the world."


These values are consistent with our American democratic ideals. Unfortunately, the EU Court of Justice has ruled twice (in 2015 and 2020) that current US laws and regulations are insufficient for GDPR compliance. Each US company seeking to do business in Europe must individually achieve GDPR compliance instead of being grandfathered in under a US-EU agreement.


The pressure on Washington to pass new privacy legislation does not come from just overseas. In the domestic arena, state legislatures are considering and enacting new privacy laws of their own. State laws like the California Consumer Privacy Act (CCPA, 2018) and the California Privacy Rights Act (CPRA, 2020) ratcheted up national awareness about consumer privacy. CCPA gave Californians

  • The right to know about the personal information a business collects about them and how it is used and shared;
  • The right to delete personal information collected from them (with some exceptions);
  • The right to opt-out of the sale of their personal information; and
  • The right to non-discrimination for exercising their CCPA rights.

CPRA expanded those rights, introduced new ones, and adopted select GDPR principles.


US companies now face some complex privacy law challenges. Without federal privacy legislation, they will have to comply with different privacy laws in up to 50 individual states. Or Congress could pass a federal privacy bill that preempts all (or most of) the provisions of state privacy laws. Which alternative do you think the commercial world would prefer?


US companies also need the government to find a way to expedite GDPR compliance. No doubt about it, it's a thorny issue. It may be solved by a trade agreement or revisions to the law. The bottom line is that GDPR compliance is dulling our economic competitiveness in Europe, and this issue needs to be resolved.


We have a responsibility as citizens to be well informed on important issues and provide feedback to our leaders. Most scholars agree that privacy is essential to democracy. So think about it. Is privacy still important to us as a democratic nation? Why do computer breaches happen so often? What privacy provisions could protect us better? Do we need a new federal privacy law or is additional regulation sufficient? How does GDPR fit into all this?


Congress needs to hear from us, its constituents, before it can pass new privacy legislation. Let's live up to our responsibilities. Let them know we believe in privacy so they can enact additional measures we need to protect it.






Neurotechnology: Magic Heaven or Privacy Hell?

Everyone experiences magic in their lives. Small children experience magic frequently, using their guileless wonder to discover the world around them.


Most adults have suppressed their sense of wonder. Preoccupied with grown-up concerns, they sometimes feel the need for some magic and seek it out.  And what better place to find magic than in the middle of a desert where a thriving town shouldn't exist.


Through a strange confluence of circumstances, the desert town of Las Vegas has all the water and electricity it needs. It sustains a super-size tourist population of over-indulging adults in its many energy-gobbling hotels and casinos. It must be magic.


In 2020, I sought out my magic at the Consumer Electronics Show (CES) in Las Vegas. CES is the Rolls Royce of consumer electronic events worldwide. Attended that year by 250,000 people, the show took up the Las Vegas Convention Center and three hotels.


With the excitement of children on Christmas morning, geeks swarm from all corners of the globe to attend this event. They are a sympathetic audience that loves consumer electronics, and the displays do not disappoint. The exhibit halls deliver multisensory video and audio assaults using the largest video displays you have ever seen.


I wanted to believe that CES would be heavenly and magical. And in some ways, it was. I got to see the latest technology before it came to market. There were companion robots that looked like dogs and cats. There were robots for kids to play with and learn about becoming inventors. There was a ping pong-playing robot. There was a GPS-enabled combine harvester that operated autonomously.


But I started seeing cracks in the magic façade. The first thing that tipped me off was the staircase. Proudly displayed on the riser of each step, an ad said: "SHOP WITH YOUR DNA Booth 43742". At first, I thought this was outrageous. Upon reflection, I decided it was wrong and evil. Why should anyone want to share their DNA with a stranger? What kind of a tchotchke could tempt someone to give away her DNA? What's wrong with these people? Haven't they ever heard of privacy? Strike one in maintaining the illusion.


Scientists and engineers love the challenge of building things. They sometimes forget to think about whether these things should be created and unleashed on the world. For example, the atomic scientists had second thoughts after their bombs were dropped on Hiroshima and Nagasaki. At CES, just because they had the technology to take DNA samples and amass the data doesn't mean they should. Unsuspecting donors would end up donating their most personal data while the vendor commoditized it.


The next thing was the headsets. Many vendors displayed wearable headsets that could sense neural activity in the brain or provide virtual/augmented reality experiences. The applications included many medical uses, like detecting and treating psychological conditions such as depression and addiction. Learning specialists could use them in clinical settings to determine if a child had special learning needs. A teacher could employ them in a classroom to determine the level of engagement of each child. Wait, what?


Neural activity is also biometric. HIPAA protects the data collected from such headsets when they are used in a medical or clinical setting. In other locations, however, like a classroom, the data has no privacy or security protections. Once again, the technologists didn't stop to think about whether marketing this product to the non-medical community posed any risks, such as legal exposure for violating individual privacy.


Neuro and other biometric technologies hold out promise for alleviating people's suffering and improving their lives. But as new technologies, they're still being developed and refined. Many have no track record of performance. They rely on sophisticated algorithms that use artificial intelligence (AI) and machine learning (ML). Therefore, these technologies inherit all the liabilities of AI and ML. For example, the technologists have to train the algorithms using real data. The design for training needs to be grounded in the best practices of the scientific method. Where do we get the data? Can old data be reused?  Are the identities of the individuals that provided the data protected? Does the data need to come from a person that resembles the patient in specific ways? Will the system perform better if one uses the actual patient's data to train the device? The point is that the technologists won't know until enough time has elapsed for an established performance record to exist. There is reason to be optimistic, but the nascent technologies are not yet the promised panaceas.


Humans often try to take the shortcut of seeing things as black and white. They want to believe that technology just solves problems without introducing others. In truth, the issues are not black and white but represent the yin and yang of complementary and competing forces.  Many technological capabilities come at the expense of privacy. We should jealously protect all of our private information, especially our biometric data, as precious possessions.


Why Social Media Censorship Feels Right

The First Amendment is pretty much irrelevant on the internet. It prohibits only government censorship of free speech and a free press. It leaves others – notably social media providers – free to censor internet content, which they do routinely. Why do we tolerate this?


The first reason is that without censorship, the internet would be a sewer. Its creators may have originally envisioned it as a pure mountain stream that would allow knowledge to flow freely to all humankind. Unfortunately, the humans – well, at least some of the humans – came along and polluted it with content that many find objectionable. The internet became a cesspool of sexual explicitness, unmitigated violence, hate speech, etc. Censorship began as a way to repress the content most people didn't want to see or the law prohibited. Now, it's a habitual practice to which we are accustomed.


Of course, in the United States, Section 230 of the 1996 Communications Decency Act gives social media providers a free pass to host (broadcast-publish) whatever content their customers want to originate. Specifically, it says: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Unlike publishers, social media providers have no liability for the content – even objectionable content – posted by their subscribers.


So what's a publisher? A publisher is someone who produces published work, like a newspaper. They pay individuals to create content for their publications. Every newspaper has a different look and feel, cultivating an image – often using a motto – to convey its values. For example, the New York Times motto has been "All the News That's Fit to Print" since 1897. Such publishers have three goals: to print truthful stories, beat competitors to the reporting punch, and make money.


Newspaper publishers assume liability for the material they publish. They can become involved in costly legal proceedings, but the positive publicity generated for righteously defending free speech can only boost their reputations and bottom lines. Sometimes papers get into legal strife because they tread on national security issues while exercising their First Amendment rights. The New York Times and The Washington Post both faced some very sticky legal issues when they respectively chose to publish stories about the Pentagon Papers and the Watergate burglary, but their publishing prestige skyrocketed.


How is a social media provider different? Thanks to Section 230, social media providers have no liability for the content that appears on their platforms. They also don't pay for their content. They may do some cursory fact-checking, but reporting the truth is not among their mantras.  Social media companies market themselves with platitudes of altruistic intention, using mottos like "to bring the world closer together." In truth, social media is really about making money.


This is the second reason for the wide acceptance of censorship on social media. These platforms make money through advertising. Advertisers will only (continue to) buy advertising if the social media platform demonstrates high ad viewing and click rates. If a subscriber starts seeing too much irrelevant or "objectionable" content, she may be motivated to stop using the platform. Since this threatens revenue, social media companies moderate – some might say censor – content to attract and retain customers and advertisers.


It is a myth that social media provides a 21st-century "speakers' corner" where a person can share her uncensored views. Social media platforms may pretend to espouse democratic ideals, but in reality, profits drive them like any other money-making institution.


Build your Chinese Social Credit Score in Canada

Imagine a world where your every word, facial expression, and action might be recorded. Imagine further that the government has the resources to not only store and digest this data but munge it with your credit rating and the legal records containing your name. Now combine this with your basic identity information (name, address, employer, parents, birth date, place of birth, etc.) and statistics about how and with whom you spend your time. Voilà, welcome to dystopia, a future China where an all-encompassing social credit score can coerce your behavior by threatening to blacklist you.


This could never happen in a democratic nation, right? Wrong. Last week, Gordon G. Chang of the Gatestone Institute published an article about a Chinese restaurant in Vancouver, Canada, with over 60 surveillance cameras that watch 30 tables and send data back to China. The restaurant is near the quarters of the support staff for Meng Wanzhou, the Chief Financial Officer of Huawei, who is fighting extradition to the United States for alleged banking fraud. Presumably, China feels it needs to keep an eye on the people around this personage. And what better way than by making sure their social credit scores are up to date?


The first question that springs to mind is, "Why does Canada let the Chinese do this?" No democratic nation should tolerate that level of surveillance of its citizenry, especially by another country.


The next thought might be, "The United States government would never do anything like this." Well, it already did. National security concerns trumped individual rights repeatedly during the 20th century. The Church and Pike committee reports of the mid-1970s documented how US law enforcement and the intelligence community overreached before and during the Cold War. The main villain was the FBI, but there was plenty of blame to go around. An overzealous J. Edgar Hoover employed surveillance techniques of questionable legality to compile dossiers on both citizens and immigrants. The FBI used this information to prepare blacklists of potential enemies of the nation. It's a cautionary tale about how high tech used in the name of national security can erode American freedoms. The committees' revelations remain relevant today, harbingers of how technology in anyone's hands – including those of privacy pirates like Facebook and Google – can be used to hijack individual rights like free speech and privacy.


Could the Chinese be surveilling us in our own country? Yes, they could.


To be fair, China is driven by a very different heritage than the United States. Once a great imperial power, China feels entitled to a leading role on the world stage. Furthermore, centuries of experience have taught the Chinese to be suspicious of Westerners. The European explorers who first came to China in the 16th century sought aggrandizement opportunities for their monarchs and personal fortunes for themselves. They offered very little in return to China's sophisticated culture. For example, the Chinese had been making paper since the 2nd century CE, whereas the Europeans didn't get around to "inventing" it until 12 centuries later.


While its geographic size is on par with the US, China's population dwarfs ours. Around the beginning of the Common Era, China's population was already 60 million. It remained in the range of 37 to 60 million for the next 1000 years, when it began to skyrocket. To meet the needs of so many people, it's not surprising that first Communist Party Chairman Mao Zedong would have been tempted to adopt a Marxist model to take "from each according to his ability" and provide "to each according to his needs." Today, China hosts a population of nearly 1.5 billion people, more than four times that of the United States.


Maybe this gives you a sense of why the collective good - not individual rights - is emphasized in China. The individual rights we Americans cherish - like privacy, freedom of speech, and intellectual property – are anathema to the Chinese psyche. China's collective good drives its actions in everything it does. In the Chinese frame of reference, it's entirely logical to leverage control of natural resources worldwide, underbid to win infrastructure contracts overseas, steal intellectual property, control thinking and behavior in Chinese diasporas, and carefully message benign intentions in foreign lands.


Like other totalitarian nations, China maintains a steady gaze on the prizes she wants. During the 156 years that Britain held Hong Kong, China never wavered from her course to recover it. From the Chinese point of view, reunification was a fait accompli waiting to happen. Have no doubt, Taiwan is next. Such persistent focus on the goal gives China a significant advantage over our democratic process, where everyone has an opinion, and long-term plans fall fatally from their cradles between administrations.


Queen Elizabeth I famously said, "I have no desire to make windows into men's souls." I love that our constitutional tradition, rooted in English law, allows me to enjoy our five First Amendment freedoms – speech, press, religion, assembly, and the right to petition – and the implicit right to privacy. I love that I am free to think and do what I want within the limits of not violating another person's rights or breaking the law. We should never allow anyone – the government, the Privacy Pirates, the Chinese – to use technology to try to steal our precious privacy.


Unlike Queen Elizabeth I, however, the Chinese do indeed seek a window on each human's soul. Fortuitously, no one has developed the technology to connect our brains to a machine and suck out our beliefs and intentions. Yet.


My Smart HVAC Controller Sucks and Blows


Yesterday, I lost my privacy battle against the smart home.


This past winter, we had a new central HVAC system installed. The original one had been installed in 1984, so it was time. As the season began to change from winter to spring, the upstairs indoor temperature would soar to 80 degrees Fahrenheit in the afternoon.


We tried pretending it wasn't a problem. My husband, son, and I all fiddled with the fancy-dancy digital controller on the wall. It assured us it was cooling the house. We checked all its menus. They indicated that all systems were functioning normally.


Days dragged into weeks. The entire family continued to grumble about the oppressive heat. My son – whose office is upstairs – put his English degree to good use by describing his barbaric working conditions in Dickensian terms.


Let's face it. Humans are weak. Our bodies tolerate only a narrow range of temperatures in our climate-controlled environments. You feel nobly uncomfortable when you set your thermostat to 64 in the winter to "save energy" (reduce heating costs). And 80 is just too hot, indoors anyway.


So we arranged for a $79 service call to diagnose the problem. And the problem was simple and inexpensive. And unexpected. We had "neglected" to bring our smart HVAC controller online. Because of that, the system was running an outdated version of software, which led to the erratic temperature swings described.


First of all, we didn't "neglect" to put our HVAC online. I was actually hoping it wouldn't come to joining the smart home set. As a privacy author, I see infringements on my privacy everywhere. Some might think I'm paranoid, but once you start to understand how invasive technology is, you'll see boogeymen behind every app and transaction.


Now our HVAC is online so that software updates can arrive via our wireless network. The reason I caved was that this appeared to be the only way to keep the software up to date and the temperatures comfortable. But what else is this wireless connection doing?


Living at the bottom of the hill, we have a pump that pumps greywater back up the hill to the metropolitan sewage system. Our pump has a modem, a device that allows it to wirelessly communicate with a headquarters monitoring station. Should it stop functioning, all sorts of bells and whistles go off. Aside from the low-tech light on top of the controller box that flashes, I get alerts on my cell phone that something is amiss. The monitoring station starts trying to contact me. I'm sure if I got the super-de-duper deluxe plan, they'd come out and diagnose and fix it without involving me at all, as long as they had the details to charge my bank account.


Now don't get me wrong. When this pump fails, it is a household catastrophe. Sure, there's fresh water in the house, but there's nowhere for the used water to go once the holding tank is full. Eww. I speak with authority because the last pump reached its end of life two summers ago. We had to move out of the house for eight days while the new one was being sourced and installed. With luck, all this pump connectedness will help us keep our basement from being flooded with, well, you-know-what.


The bottom line is that the smart home is unavoidable in this day and age. Even if you resist it with every fiber of your being, there will come a point when your lifestyle risks serious compromise if you don't get with the program.


But there is a privacy compromise here as well. I put my HVAC online for better climate control in my home. What else does the network know now? Could the energy usage pattern the system senses predict when we are home or away? If that were correlated with the daily weather report, one could refine that assessment. Is the link secure? Who has access to the information? What could a criminal do with it?
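To see how little analysis such an inference would take, consider this hypothetical sketch. The function, the threshold, and the readings are all invented for illustration; real occupancy-detection models are more sophisticated, but the underlying idea is this simple:

```python
# Hypothetical illustration: hourly smart-thermostat energy readings (kWh)
# can reveal an occupancy pattern. All numbers below are invented.

def likely_away_hours(hourly_kwh, threshold=0.5):
    """Flag hours whose energy use falls below a crude threshold,
    a rough proxy for 'nobody home'."""
    return [hour for hour, kwh in enumerate(hourly_kwh) if kwh < threshold]

# One invented day of readings: usage drops from 8:00 to 16:00
# while the house sits empty.
day = [1.2, 1.1, 1.0, 1.0, 1.1, 1.3, 1.8, 1.5,
       0.3, 0.2, 0.2, 0.3, 0.2, 0.3, 0.2, 0.3,
       1.6, 1.9, 2.0, 1.8, 1.7, 1.5, 1.3, 1.2]

print(likely_away_hours(day))  # the quiet daytime stretch stands out
```

A few days of data like this, and anyone with access to the feed can guess a household's daily routine. That is the privacy cost hiding inside "better climate control."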


Excuse me, I've got to go check and see what my smart thermostat is up to. I think it's becoming self-aware.



Welcome to The Privacy Blog by Dr Leslie Gruis!

Welcome to my Privacy Blog!  Here you'll see advance updates of developments and ideas in the world of data privacy to keep you a step ahead of the game.


Whether it's your account on Facebook, your banking details, those mysterious "phishing" emails we all get or the customer ledgers of a giant corporation, these organizations are not there to preserve your privacy.  They're there to make money, and when privacy and money conflict, you're on your own.


Take responsibility for your and your family's data privacy by knowing what's going on and demanding your rights!


Happy reading!


