Thursday, July 14, 2016

Artificial Intelligence: Ethics, Privacy and Conscience


'Conscious' artificial intelligence (AI) may be far in the future, but decentralized AI is here. From health programs to predictive policing and everything in between, algorithmic systems are making perceptible inroads into human life (Crawford, 2016; Pasquale, 2015). Crucially, however, decentralized AI should be distinguished from individualized robots: it lacks the integration and 'consciousness' foretold in science fiction. Interactions between algorithmic systems remain lumbering, disjointed and underdeveloped, with one exception: the capitalist aim of corporate gains.

Algorithmic systems have the potential to support, as well as undermine, dignity of life. The focus on corporate financial gains diverts attention from critical issues, cantankerous for their complexity (Chandler, 2014), toward convenience and marketability. Designers appear locked into cycles of upgrades geared toward trivialities and monopolization rather than compatibility, ease of use and support for well-being. Damania (2016) describes the typical interactions with software that many experience on a day-to-day basis. Even when working directly with systems, upgrades can feel like coders moving the furniture around in your living room so that you trip over it at night. What does that suggest about the data collected and the decisions made that guide people with or without their knowledge?

Despite an industry that claims significant global financial resources and touts solutions, the most pressing crises are not being solved, leading to greater inequality, unrest and a higher probability of war. One does not need to be a genius to know that inclusive, affordable housing, education, employment, health care and care for the environment would result in a healthier, more sustainable society and world.

It seems essential to develop systems that correct for, rather than encourage, power asymmetries, including corporate and media use of our online data shadows, a use that, at times, seems oblivious to issues of privacy, dignity and ethics (Cohen, 2013; Crawford, 2016; Pasquale, 2015; Zuboff, 2016). Data collection and sharing without knowledge or approval (e.g., Al-Rodhan, 2016; Fiveash, 2016; Newman et al., 2014; Tenney & Sieber, 2016) is increasingly invasive, and yet the beneficiaries are increasingly corporations and governments rather than citizens. Thus far, increases in information have increased bureaucracy and obfuscation while sacrificing citizen privacy and choice (boyd, 2016; Larson et al., 2016; Cohen, 2013; Floridi, 2011; Zuboff, 2016). Echoing Zuboff, increases in surveillance and behavior modification on the basis of data not situated in perspective and context compromise the freedom of self-determination on which our countries are founded. They shift responsibility to those doing the modifying, a legal issue our courts have barely begun to address. Even the European Union's recently adopted General Data Protection Regulation (Claburn, 2016; Goodman & Flaxman, 2016) does not go far enough, though it is a start. Algorithms rarely work in isolation, so it is not clear where responsibility for outcomes would fall. Moreover, an explanation is neither a solution nor a reparation for harms. Harms to individuals ripple through communities and institutions, and extend back to the trustworthiness of corporations, agencies and governments.

One plausible solution to the capitalist focus of "black-boxed" algorithmic systems would be an algorithmic filter that evaluates the potential ethical consequences of their use. Several physiological analogies come to mind. Borrowing from (a) the retina and (b) the interacting excitatory and inhibitory neural circuits that coordinate (a) visual perception and (b) movement, an ethical algorithmic filter would provide an inhibitory "surround" to regulate a competitive, capitalist "excitatory" output.
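
To make the analogy concrete, a minimal sketch in Python follows. Every name in it (the candidate decisions, the profit and harm weights, the clipping rule) is a hypothetical illustration of the center-surround idea, not a description of any existing system.

    import numpy as np

    # Hypothetical illustration of the center-surround analogy: an
    # "excitatory" profit signal is dampened by an "inhibitory" estimate
    # of ethical harm, and decisions whose estimated harm outweighs
    # their profit signal are suppressed entirely.

    def excitatory_score(decisions, profit_weights):
        # The purely profit-driven ranking ("excitation").
        return decisions @ profit_weights

    def inhibitory_surround(decisions, harm_weights):
        # An estimate of ethical harm per decision ("inhibition").
        return decisions @ harm_weights

    def filtered_output(decisions, profit_weights, harm_weights, inhibition=1.0):
        # Net output, as in a retinal center-surround receptive field:
        # excitation minus weighted inhibition, clipped at zero.
        net = (excitatory_score(decisions, profit_weights)
               - inhibition * inhibitory_surround(decisions, harm_weights))
        return np.clip(net, 0.0, None)

    # Three candidate actions described by two features each (all invented).
    decisions = np.array([[1.0, 0.2], [0.8, 0.9], [0.3, 0.1]])
    profit_weights = np.array([2.0, 1.0])  # hypothetical revenue signal
    harm_weights = np.array([0.5, 3.0])    # hypothetical harm estimate
    print(filtered_output(decisions, profit_weights, harm_weights))
    # -> [1.1  0.   0.25]: the middle action is profitable but suppressed.

Raising the inhibition parameter strengthens the "surround", the analogue of giving the ethical filter more veto power over the profit signal.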

An ethical AI filter would likely improve on the implicit and explicit biases and the reactivity of humans. Ideally, the filter could also access, and be updated with, status reports, research findings and legal arguments. It would weigh information according to its likely validity and search for missing perspectives and assumptions. Indeed, intentions to digitize research findings and literature, thereby increasing accessibility, were laudable, though the effort remains incomplete given the potential for misuse and the proliferation of misinterpretation (e.g., Grove, 2016). In the case of science, one step beyond open access publishing would be coding data for perspective, compatibility and assumptions, thereby freeing scientific data from technical jargon.
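
As a toy sketch of what weighing information by likely validity might look like, consider the following; the source names, validity scores and required perspectives are all invented for illustration.

    # Invented example of validity-weighted evidence, plus a check for
    # missing perspectives; none of these names refer to a real system.

    sources = [
        {"claim": "treatment works", "validity": 0.9, "perspective": "clinical trial"},
        {"claim": "treatment works", "validity": 0.4, "perspective": "press release"},
        {"claim": "treatment fails", "validity": 0.7, "perspective": "replication study"},
    ]

    # Perspectives the filter would expect before trusting a conclusion.
    REQUIRED_PERSPECTIVES = {"clinical trial", "replication study", "patient report"}

    def weigh(sources):
        # Sum validity scores per claim rather than counting sources equally.
        scores = {}
        for s in sources:
            scores[s["claim"]] = scores.get(s["claim"], 0.0) + s["validity"]
        return scores

    def missing_perspectives(sources):
        # Flag viewpoints absent from the evidence base.
        return REQUIRED_PERSPECTIVES - {s["perspective"] for s in sources}

    print(weigh(sources))                 # {'treatment works': 1.3, 'treatment fails': 0.7}
    print(missing_perspectives(sources))  # {'patient report'}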

Could ethical AI be mandated to restore the protections for privacy and security, and against discrimination, that have been increasingly compromised in recent years? Similar concerns prompted the regulation of human subjects research and genetic engineering. Even institutional review boards and government agencies have often been too narrow, failing to examine the combined, overall effects of their decisions. Furthermore, as an economic example for those with business interests, airline deregulation did the industry more harm than good (Arria, 2016). Intriguingly, an ethical AI filter might function as an AI conscience, with individual, as well as global, dignity in 'mind' (Al-Rodhan, 2015; Burke and Fishel, 2016).

Contrary to the expectation that algorithms are impartial, numerous concerns have been reported over the last ten years. Algorithmic systems risk entrenching rather than eliminating discriminatory practices as a function of (a) their capitalist aim, (b) the implicit and explicit biases of their designers, (c) biases in the way data have been collected and combined, (d) ordering effects or (e) technical assumptions underlying operational paradigms that fade into the background with time or are no longer valid under changed circumstances. An essential feature of the proposed ethical algorithmic filter, therefore, is that it be created by an independent group of researchers. The filter would examine the data eliminated by "excitatory" algorithmic sorting processes through a lens sensitive to various biases and potential intersectional effects (Kitchin, 2014). The ultimate goal would be an ethical AI mechanism that could be provided to corporations and government agencies for use in their own design processes, thereby minimizing the proprietary black-box argument, increases in bureaucracy and the need for regulatory oversight.
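
A minimal sketch of such an audit might compare the group composition of the records a sorter kept against those it eliminated. The group labels and the four-fifths threshold used below are assumptions borrowed from employment-discrimination practice, not from any particular deployed system.

    from collections import Counter

    # Hypothetical audit: compare selection rates across groups between
    # the records an "excitatory" sorter kept and those it eliminated.

    def selection_rates(kept, eliminated, group_key="group"):
        kept_counts = Counter(r[group_key] for r in kept)
        elim_counts = Counter(r[group_key] for r in eliminated)
        rates = {}
        for g in set(kept_counts) | set(elim_counts):
            total = kept_counts[g] + elim_counts[g]
            rates[g] = kept_counts[g] / total if total else 0.0
        return rates

    def flag_disparate_impact(rates, threshold=0.8):
        # The common "four-fifths" rule: flag any group whose selection
        # rate falls below 80% of the best-off group's rate.
        best = max(rates.values()) or 1.0
        return {g: rate / best < threshold for g, rate in rates.items()}

    kept = [{"group": "A"}] * 80 + [{"group": "B"}] * 30
    eliminated = [{"group": "A"}] * 20 + [{"group": "B"}] * 70
    rates = selection_rates(kept, eliminated)
    print(rates)                         # e.g. {'A': 0.8, 'B': 0.3}
    print(flag_disparate_impact(rates))  # group B is flagged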

Another way to examine today's algorithmic systems also draws on visual processing. Could the data points (people) eliminated during algorithmic sorting be tagged, without loss of privacy, and continue to be processed in parallel? Crucially, from the perspective of equity, what resources could be provided to increase self-determination and creativity, and to equalize, rather than judge and eliminate, individuals in the output, thereby optimizing human potential?
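
One hedged sketch of such tagging: replace each identifier with a salted hash, so eliminated records can be followed through a parallel equity-review stage without revealing who they are. The salt handling, field names and pipeline here are all assumptions for illustration.

    import hashlib
    import os

    # Hypothetical pseudonymous tagging of eliminated records. A salted
    # hash yields a stable tag for parallel processing without exposing
    # the underlying identity (the salt stays with the auditing body).

    SALT = os.urandom(16)

    def pseudonymous_tag(identifier: str) -> str:
        return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:12]

    def tag_eliminated(records, eliminated_ids):
        # Strip identity from eliminated records, attaching a tag instead,
        # so they can continue through a parallel (equity-review) stage.
        return [
            {"tag": pseudonymous_tag(r["id"]),
             **{k: v for k, v in r.items() if k != "id"}}
            for r in records
            if r["id"] in eliminated_ids
        ]

    records = [{"id": "alice", "score": 0.2}, {"id": "bob", "score": 0.9}]
    print(tag_eliminated(records, eliminated_ids={"alice"}))
    # -> [{'tag': '3f9c...', 'score': 0.2}] (tag value varies with the salt)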

A third suggestion is to limit the ability of "excitatory" algorithmic systems to interrogate people, redirecting their focus to an interrogation of the systems and infrastructure that contribute to the health of society and the planet. The primary aim would be an equitable allocation of resources and opportunity.

At a time when the world is in crisis, it seems a shame that the potential of algorithmic systems to resolve the most pressing issues is being diverted toward more short-sighted aims that, however well-intentioned, enhance rather than reduce inequalities. Many are concerned (boyd, 2016; Crawford, 2016; Cohen, 2013; Floridi, 2011; Zuboff, 2016; Pasquale, 2015; Pedziwiatr & Engelmann, 2016) that the speed at which institutions, corporations and governments are deploying algorithmic systems exceeds the capacity of the mechanisms of ethical and legal oversight on which people, and the continued existence of a habitable planet, depend.



Credit for this comment goes to an unknowable multitude of 'research assistants', in addition to the authors below:

Nayef Al-Rodhan (2015) Proposal of a Dignity Scale for Sustainable Governance, Journal of Public Policy (Blog), 29 November. https://jpublicpolicy.com/2015/11/29/proposal-of-a-dignity-scale-for-sustainable-governance/

Nayef Al-Rodhan (2016) Behavioral Profiling and the Biometrics of Intent. Harvard International Review, 17 June. http://hir.harvard.edu/behavioral-profiling-politics-intent/

Michael Arria (2016) The Surprising Collection of Politicos Who Brought Us Destructive Airline Deregulation. Alternet, 3 July. http://www.alternet.org/labor/how-liberals-deregulated-airline-industry

danah boyd (2016) Be Careful What You Code For. Medium, 14 June. https://points.datasociety.net/be-careful-what-you-code-for-c8e9f3f6f55e#.4sobpvbe9

Anthony Burke and Stefanie Fishel (2016) Politics for the planet: why nature and wildlife need their own seats at the UN. The Conversation, 30 June. https://theconversation.com/politics-for-the-planet-why-nature-and-wildlife-need-their-own-seats-at-the-un-59892#

David Chandler (2014) Beyond neoliberalism: resilience, the new art of governing complexity. Resilience, 2:1, 47–63. DOI: 10.1080/21693293.2013.878544

Thomas Claburn (2016) EU Data Protection Law May End the Unknowable Algorithm. InformationWeek, 18 July. http://www.informationweek.com/government/big-data-analytics/eu-data-protection-law-may-end-the-unknowable-algorithm/d/d-id/1326294

Julie Cohen (2013) What Privacy Is For. Harvard Law Review, 126: 1904. http://harvardlawreview.org/wp-content/uploads/pdfs/vol126_cohen.pdf

Kate Crawford (2016) Know Your Terrorist Credit Score. Presented at re:publica, 2 May. https://re-publica.com/en/16/session/know-your-terrorist-credit-score

Zubin Damania (2016) We need to demand technology that lets doctors be doctors. KevinMD, 1 February. http://www.kevinmd.com/blog/2016/02/need-demand-technology-lets-doctors-doctors.html

Kelly Fiveash (2016) Google AI given access to health records of 1.6 million English patients. Ars Technica UK, 3 May. http://arstechnica.co.uk/business/2016/05/google-deepmind-ai-nhs-data-sharing-controversy/

Luciano Floridi (2011) The Informational Nature of Personal Identity. Minds and Machines, Vol. 21, Issue 4: 549. DOI: 10.1007/s11023-011-9259-6 https://www.academia.edu/9352388/The_Informational_Nature_of_Personal_Identity

Bryce Goodman & Seth Flaxman (2016) European Union regulations on algorithmic decision-making and a "right to explanation". ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY. arXiv:1606.08813. http://arxiv.org/abs/1606.08813

Jack Grove (2016) Beware ‘nefarious’ use of open data, summit hears. Times Higher Education, 11 July. https://www.timeshighereducation.com/news/beware-nefarious-use-of-open-data-summit-hears

Rob Kitchin (2014) Big Data, new epistemologies and paradigm shifts. Big Data & Society, April–June 2014: 1–12. DOI: 10.1177/2053951714528481 http://m.bds.sagepub.com/content/1/1/2053951714528481.full.pdf

Jeff Larson, Surya Mattu, Lauren Kirchner & Julia Angwin (2016) How We Analyzed the COMPAS Recidivism Algorithm. ProPublica, 23 May. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

Joe Newman, Joseph Jerome & Christopher Hazard (2014) Press Start to Track?: Privacy and the New Questions Posed by Modern Videogame Technology. American Intellectual Property Law Association (AIPLA) Quarterly Journal, 1 August. http://ssrn.com/abstract=2483426

Frank Pasquale (2015) The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, Cambridge, MA.

Samuel Pedziwiatr & Severin Engelmann (2016) Blueprints for the Infosphere: Interview with Luciano Floridi. fatum, 4 June, p. 25. http://www.fatum-magazin.de/ausgaben/intelligenz-formen-und-kuenste/internationale-perspektiven/blueprints-for-the-infosphere.html

Matthew Tenney & Renee Sieber (2016) Data-Driven Participation: Algorithms, Cities, Citizens, and Corporate Control. Urban Planning, Volume 1, Issue 2: 101–113. DOI: 10.17645/up.v1i2.645. http://cogitatiopress.com/ojs/index.php/urbanplanning/article/download/645/645

Shoshana Zuboff (2016) The Secrets of Surveillance Capitalism. Frankfurter Allgemeine Feuilleton, 3 May. http://m.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-ofsurveillance-capitalism-14103616.html


Updated 20 July 2016, 11:31 AM









