Cyber-Ethics and Technology in the Age of Predictive Analysis

Connect, but be very careful

Where do we draw the line between privacy and security?

This question is becoming more and more relevant with advances in Artificial Intelligence (AI) and machine learning. As technology progresses, the question we will have to ask is “should we?” rather than “can we?”

Author Gerd Leonhard states: “The most advanced security technology will be useless if those who hold the key and those who use it, act unethically, with evil intent, or with negligence. In fact, the very same technology that is employed to protect consumers and users can be used to spy on them” (Perlman, 2018).

Not only is this technology based on what we do, it is also based on what we are, from our physical bodies to our thoughts. Technology is transforming our world and becoming integrated with humans themselves: think “nanobots in our bloodstream monitoring and even regulating cholesterol levels, or the ability to connect our brains directly to the Internet to transform thoughts into actions” (Perlman, 2018).

But what if we could predict how someone will behave? You’re thinking, “I’ve seen that movie from 2002 with Tom Cruise.” You’re right! Minority Report is about a police force that uses technology to arrest criminals before they commit a crime. That seems like pure sci-fi. But then this year, the military rolled out its new plan for employee monitoring…

The Defense Security Service (DSS) plans to use AI and machine learning on data obtained from and about people who work for the military and hold a clearance. “The goal is not just to detect employees who have betrayed their trust, but to predict which ones might — allowing problems to be resolved with calm conversation rather than punishment” (Tucker, 2019). Granted, DSS, the military, and the federal government as a whole have a tough time with employee vetting and monitoring, as seen in events such as the OPM hack and the security leaks involving Julian Assange, Edward Snowden, Chelsea Manning, and others. To be fair, they need a better solution. The question here is: is this solution the right one?

The pilot activity for DSS involves capturing an individual’s “digital footprint” (Tucker, 2019). We do voluntarily provide a lot of information via Facebook, Twitter, Snapchat, Instagram, and other platforms. Is it our responsibility to consider what we post that is publicly available? By posting, are we giving away the right to any use of that data? I think the answer may be yes.
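The pilot’s actual methods are not public, so any concrete picture is speculation. Still, to make the idea tangible, here is a purely hypothetical Python sketch of what reducing a “digital footprint” to a single risk score might look like. Every feature name and weight below is invented for illustration; nothing here reflects how the DSS system actually works.

```python
# Purely hypothetical sketch: illustrates only the general idea of
# collapsing a "digital footprint" into one number. All features and
# weights are invented; this does not describe the DSS pilot.

from dataclasses import dataclass

@dataclass
class FootprintFeatures:
    # Invented example features; a real system would use far more signals.
    late_night_posts_per_week: float
    sentiment_shift: float   # change relative to the person's own baseline
    new_foreign_contacts: int

def risk_score(f: FootprintFeatures) -> float:
    """Weighted sum of invented features, squashed into [0, 1].
    The weights are arbitrary illustrative values, not from any source."""
    raw = (0.2 * f.late_night_posts_per_week
           + 0.5 * abs(f.sentiment_shift)
           + 0.3 * f.new_foreign_contacts)
    return min(raw / 10.0, 1.0)

print(risk_score(FootprintFeatures(3.0, -0.4, 1)))  # ~0.11
```

Even this toy version shows the ethical problem: the weights encode someone’s judgment about which behaviors count as “suspicious,” and the person being scored never sees them.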

Another recent example of using information made public is the jump in solving cold cases via DNA matching, drawing on DNA that people voluntarily submit to sites such as Ancestry, 23andMe, and others. When people submit their DNA, they are typically thinking only of finding lost relatives or tracing a family tree (Snow and Schuppe, 2018). But this DNA evidence gives law enforcement a new avenue to find criminals through their relatives, possibly without the informed consent of those who submitted the samples. In the troves of papers and agreements people sign to access ancestry and genealogy services, they are likely signing away their privacy rights regarding the use of their DNA.
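For the curious, a toy sketch can show why one relative’s voluntary upload can implicate someone who never submitted anything. Real forensic genealogy compares hundreds of thousands of genetic markers and estimates the amount of shared DNA; the handful of loci and allele values below are made up purely for illustration.

```python
# Toy illustration only: real forensic genealogy is far more sophisticated.
# Each profile maps a locus name to the pair of inherited allele values.
crime_scene = {"D8S1179": (12, 14), "D21S11": (28, 30), "TH01": (6, 9), "FGA": (21, 24)}
relative    = {"D8S1179": (12, 15), "D21S11": (30, 31), "TH01": (6, 7), "FGA": (22, 25)}

def shared_allele_fraction(a: dict, b: dict) -> float:
    """Fraction of loci where the two profiles share at least one allele."""
    shared = sum(1 for locus in a if set(a[locus]) & set(b[locus]))
    return shared / len(a)

# Close relatives share roughly half their DNA, so a strong partial match
# can point investigators toward a family rather than an individual.
print(f"{shared_allele_fraction(crime_scene, relative):.0%} of loci share an allele")  # 75%
```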

Is there an upside to this? Proponents of the DSS pilot suggest that the same monitoring could prevent harm: by detecting “micro-changes” in a person’s behavior, it could help address the recent upswing in veteran suicides, in addition to its planned role in preventing insider threats (Tucker, 2019).
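Tucker’s article does not say how such “micro-changes” would be detected. A common, generic approach to this kind of problem is to flag values that drift far from a person’s own rolling baseline; the sketch below is a minimal, assumed version of that idea using invented data.

```python
# Hedged sketch: a generic rolling-baseline anomaly detector, not the
# method DSS actually uses. All data below is invented.

import statistics

def flag_micro_changes(series: list[float], window: int = 7, threshold: float = 2.0) -> list[int]:
    """Return indices where a value sits more than `threshold` standard
    deviations from the mean of the preceding `window` observations."""
    flags = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

# Invented data: a stable pattern followed by a sudden behavioral shift.
daily_messages = [20, 22, 19, 21, 20, 23, 21, 22, 20, 21, 4]
print(flag_micro_changes(daily_messages))  # -> [10]
```

Note how much rides on the invented threshold: set it too low and ordinary bad days get flagged; set it too high and the early-warning signal proponents are counting on disappears.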

What can we do about this? Leonhard suggests an ethics model to guide technological decisions, built on five key principles: (1) that we have the right to refuse technology modification or implementation (we can choose to remain “natural”); (2) that efficiency does not become more important than humanity; (3) that we can choose to disconnect and not be monitored or tracked; (4) that we have a right to anonymity; and (5) that companies should not face penalties for choosing to employ humans over machines (Perlman, 2018).

In the case of the DSS pilot, people who have clearances do give up a certain amount of privacy to have the privilege of working with classified information. So, in that respect, we are talking about a small subset of people who willingly make that trade. But if these sorts of monitoring programs take off in the federal government community, you have to wonder how long it will be until private corporations and law enforcement start considering similar options.


References

Perlman, A. (2018). Man vs. Machine: The New Ethics of Cybersecurity. Retrieved from: https://www.securityroundtable.org/new-ethics-of-cybersecurity/

Snow, K. and Schuppe, J. (2018). ‘This is just the beginning’: Using DNA and genealogy to crack years-old cold cases. Retrieved from: https://www.nbcnews.com/news/us-news/just-beginning-using-dna-genealogy-crack-years-old-cold-cases-n892126

Tucker, P. (2019). The US Military Is Creating the Future of Employee Monitoring. Retrieved from: https://www.defenseone.com/technology/2019/03/us-military-creating-future-employee-monitoring/155824/