
The Privacy Blog

Neurotechnology: Magic Heaven or Privacy Hell?

Everyone experiences magic in their lives. Small children experience magic frequently, using their guileless wonder to discover the world around them.


Most adults have suppressed their sense of wonder. Preoccupied with grown-up concerns, they sometimes feel the need for some magic and seek it out. And what better place to find magic than in the middle of a desert where a thriving town shouldn't exist?


Through a strange confluence of circumstances, the desert town of Las Vegas has all the water and electricity it needs. It sustains a super-size tourist population of over-indulging adults in its many energy-gobbling hotels and casinos. It must be magic.


In 2020, I sought out my magic at the Consumer Electronics Show (CES) in Las Vegas. CES is the Rolls-Royce of consumer electronics events worldwide. Attended that year by well over 150,000 people, the show filled the Las Vegas Convention Center and three hotels.


With the excitement of children on Christmas morning, geeks swarm from all corners of the globe to attend this event. They are a sympathetic audience that loves consumer electronics, and the displays do not disappoint. The exhibit halls deliver multisensory video and audio assaults using the largest video displays you have ever seen.


I wanted to believe that CES would be heavenly and magical. And in some ways, it was. I got to see the latest technology before it came to market. There were companion robots that looked like dogs and cats. There were robots for kids to play with and learn about becoming inventors. There was a ping pong-playing robot. There was a GPS-enabled combine harvester that operated autonomously.


But I started seeing cracks in the magic façade. The first tip-off was the staircase. Proudly displayed on the riser of each step, an ad read: "SHOP WITH YOUR DNA Booth 43742". At first, I thought this was outrageous. Upon reflection, I decided it was wrong and evil. Why would anyone want to share their DNA with a stranger? What kind of tchotchke could tempt someone to give away her DNA? What's wrong with these people? Haven't they ever heard of privacy? Strike one in maintaining the illusion.


Scientists and engineers love the challenge of building things. They sometimes forget to ask whether those things should be created and unleashed on the world. The atomic scientists, for example, had second thoughts after their bombs were dropped on Hiroshima and Nagasaki. At CES, just because a vendor had the technology to take DNA samples and amass the data didn't mean it should. Unsuspecting donors would end up giving away their most personal data while the vendor commoditized it.


The next thing was the headsets. Many vendors displayed wearable headsets that could sense neural activity in the brain or provide virtual/augmented reality experiences. The applications included many medical uses, like detecting and treating psychological conditions such as depression and addiction. Learning specialists could use them in clinical settings to determine if a child had special learning needs. A teacher could employ them in a classroom to determine the level of engagement of each child. Wait, what?


Neural activity is also biometric data. HIPAA protects the data collected from such headsets when they are used in a medical or clinical setting. In other settings, however, like a classroom, the data has no privacy or security protections. Once again, the technologists didn't stop to think about whether marketing this product to the non-medical community posed any risks, like violating individual privacy.


Neuro and other biometric technologies hold out promise for alleviating people's suffering and improving their lives. But as new technologies, they're still being developed and refined. Many have no track record of performance. They rely on sophisticated algorithms that use artificial intelligence (AI) and machine learning (ML), so they inherit all the liabilities of AI and ML. For example, the technologists have to train the algorithms on real data, and the training design needs to be grounded in the best practices of the scientific method. Where does the data come from? Can old data be reused? Are the identities of the individuals who provided the data protected? Does the data need to come from a person who resembles the patient in specific ways? Will the system perform better if it is trained on the actual patient's data? The point is that the technologists won't know until enough time has elapsed for an established performance record to exist. There is reason for optimism, but these nascent technologies are not yet the promised panaceas.


Humans often take the shortcut of seeing things as black and white. They want to believe that technology simply solves problems without introducing others. In truth, the issues are not black and white but represent the yin and yang of complementary and competing forces. Many technological capabilities come at the expense of privacy. We should guard all of our private information, especially our biometric data, as jealously as we guard our most precious possessions.


Why Social Media Censorship Feels Right

The First Amendment is pretty much irrelevant on the internet. It prohibits only government censorship of free speech and a free press. It leaves others – notably social media providers – free to censor internet content, which they do routinely. Why do we tolerate this?


The first reason is that without censorship, the internet would be a sewer. Its creators may have originally envisioned it as a pure mountain stream that would let knowledge flow freely to all humankind. Unfortunately, the humans – well, at least some of the humans – came along and polluted it with content that many find objectionable. The internet became a cesspool of sexual explicitness, unmitigated violence, hate speech, and more. Censorship began as a way to suppress the content that most people didn't want to see or that the law prohibited. Now it's a habitual practice to which we are accustomed.


Of course, in the United States, Section 230 of the 1996 Communications Decency Act gives social media providers a free pass to host whatever content their users choose to post. Specifically, it says: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." Unlike publishers, social media providers have no liability for the content – even objectionable content – posted by their subscribers.


So what's a publisher? A publisher is someone who produces published work, like a newspaper. Publishers pay individuals to create content for their publications. Every newspaper has a distinct look and feel, cultivating an image – often expressed in a motto – to convey its values. For example, The New York Times's motto has been "All the News That's Fit to Print" since 1897. Such publishers have three goals: to print truthful stories, to beat competitors to the reporting punch, and to make money.


Newspaper publishers assume liability for the material they publish. They can become embroiled in costly legal proceedings, but the positive publicity generated by righteously defending free speech can only boost their reputations and bottom lines. Sometimes papers land in legal trouble because they tread on national security issues while exercising their First Amendment rights. The New York Times and The Washington Post both faced very sticky legal issues when they chose to publish stories about the Pentagon Papers and the Watergate burglary, respectively, but their publishing prestige skyrocketed.


How is a social media provider different? Thanks to Section 230, social media providers have no liability for the content that appears on their platforms. They also don't pay for their content. They may do some cursory fact-checking, but reporting the truth is not among their mantras. Social media companies market themselves with platitudes of altruistic intention, using mottos like "to bring the world closer together." In truth, social media is really about making money.


This is the second reason for the wide acceptance of censorship on social media. These platforms make money through advertising. Advertisers will only (continue to) buy ads if the platform demonstrates high ad-viewing and click rates. If a subscriber starts seeing too much irrelevant or "objectionable" content, she may be motivated to stop using the platform. Since that threatens revenue, social media companies moderate – some might say censor – content to attract and retain customers and advertisers.


It is a myth that social media provides a 21st-century "speakers' corner" where a person can share her uncensored views. Social media platforms may pretend to espouse democratic ideals, but in reality, profits drive them like any other money-making institution.
