Thursday, July 14, 2016
'Conscious' artificial intelligence (AI) may be far in the future; decentralized AI, however, is already here. From health programs to predictive policing and everything in between, algorithmic systems are making perceptible inroads into human life (Crawford, 2016; Pasquale, 2015). Crucially, however, decentralized AI should be distinguished from individualized robots: it lacks the integration and 'consciousness' foretold in science fiction. Interactions between various algorithmic systems remain lumbering, disjointed, and underdeveloped, with one exception: the capitalist aim of corporate gains.
Algorithmic systems have the potential to encourage, as well as prevent, dignity of life. The focus on corporate financial gains diverts attention from critical issues -- cantankerous for their complexity (Chandler, 2014) -- to convenience and marketability. Designers appear locked into cycles of upgrades geared toward trivialities and monopolization, rather than compatibility, ease of use, and support for causes of well-being. Damania (2016) describes the typical interactions with software that many experience on a day-to-day basis: even when working directly with systems, upgrades can feel like coders moving the furniture around in your living room so you trip over it at night. What does that suggest about the data collected and the decisions made that guide people with or without their knowledge?
Despite an industry that claims significant global financial resources and touts solutions, the most pressing crises are not being solved, leading to greater inequality, unrest, and the probability of war. A person does not need to be a genius to know that inclusive, affordable housing, education, employment, health care, and care for the environment would result in a healthier, more sustainable society and world.
It seems essential to develop systems that correct for, rather than encourage, power asymmetries, including corporate and media use of our online data shadows -- use that, at times, seems oblivious to issues of privacy, dignity, and ethics (Cohen, 2013; Crawford, 2016; Pasquale, 2015; Zuboff, 2016). Data collection and sharing, often without knowledge or approval (e.g., Al-Rodhan, 2016; Fiveash, 2016; Newman et al., 2014; Tenney & Sieber, 2016), is increasingly invasive, and yet the beneficiaries are increasingly corporations and governments instead of citizens. Thus far, increases in information have increased bureaucracy and obfuscation while sacrificing citizen privacy and choice (boyd, 2016; Larson et al., 2016; Cohen, 2013; Floridi, 2016; Zuboff, 2016). Echoing Zuboff, increases in surveillance and behavior modification on the basis of data not situated in perspective and context compromise the freedom of self-determination on which our countries are founded. They also remove responsibility from those doing the modifying, a legal issue our courts have barely begun to address. Even the European Union's recently adopted General Data Protection Regulation (Claburn, 2016; Goodman & Flaxman, 2016) doesn't go far enough, though it's a start. Algorithms rarely work in isolation, so it is not clear where responsibility for outcomes would fall. Moreover, an explanation is not a solution or reparation for harms. Harms to individuals ripple through communities and institutions, and extend back to the trustworthiness of corporations, agencies, and governments.
One plausible solution to the capitalist focus of "black-boxed" algorithmic systems would be an algorithmic filter that determines the potential ethical consequences of their use. Several physiological analogies come to mind. Borrowing from the center-surround organization of the retina, or from the interacting excitatory and inhibitory neural circuits that coordinate visual perception and movement, an ethical algorithmic filter would provide an inhibitory "surround" to regulate a competitive, capitalist "excitatory" output.
An ethical AI filter would likely improve on the implicit and explicit biases and reactivity of humans. Ideally, the filter could also access, and be updated with, status reports, research findings, and legal arguments. It would weigh information according to likely validity and search for missing perspectives and assumptions. Indeed, intentions to digitize research findings and literature, thereby increasing accessibility, were laudable, though the work remains incomplete given the potential for misuse and the proliferation of misinterpretation (e.g., Grove, 2016). In the case of science, one step beyond open access publishing would be coding data for perspective, compatibility, and assumptions, thereby freeing scientific data from technical jargon.
Could ethical AI be mandated to restore the protections for privacy and security, and against discrimination, that have been increasingly compromised in recent years? Similar concerns prompted the regulation of human subjects research and genetic engineering. Even institutional review boards and government agencies have often been too narrow, failing to examine the combined overall effects of their decisions. Furthermore, as an economic example for those with business interests, airline deregulation did more harm to the industry than good (Arria, 2016). Intriguingly, an ethical AI filter might function as an AI conscience -- with individual, as well as global, dignity in 'mind' (Al-Rodhan, 2015; Burke and Fishel, 2016).
Contrary to the expectation that algorithms are indifferent, several concerns have been reported over the last ten years. Algorithmic systems risk enhancing rather than eliminating discriminatory practices as a function of (a) their capitalist aim, (b) the implicit and explicit biases of their designers, (c) biases in the way data have been collected and combined, (d) ordering effects, or (e) technical assumptions underlying operational paradigms that fade into the background with time or are no longer valid under changing circumstances. An essential feature of the proposed ethical algorithmic filter, therefore, is that it be created by an independent group of researchers. The filter would examine data eliminated by "excitatory" algorithmic sorting processes using a lens sensitive to various biases and potential intersectional effects (Kitchin, 2014). The ultimate goal would be to develop an ethical AI mechanism that corporations and government agencies could use in their own design processes, thereby minimizing the proprietary black-box argument, increases in bureaucracy, and the need for regulatory oversight.
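To make the proposal concrete, here is a minimal sketch of what such a second-pass audit could look like. Everything in it is a hypothetical illustration: the scoring rule, the 0.5 threshold, the group labels, and the 20-point disparity limit are assumptions standing in for whatever a real system and an independent review group would actually use.

```python
from collections import Counter

def primary_keep(record, threshold=0.5):
    """Stand-in for a black-boxed 'excitatory' sorter: keep high scorers."""
    return record["score"] >= threshold

def audit_exclusions(records, group_key="group", disparity_limit=0.2):
    """The 'inhibitory surround': measure who the sorter drops, by group.

    Returns per-group exclusion rates and a flag raised when the gap
    between the most- and least-excluded groups exceeds disparity_limit.
    """
    totals = Counter(r[group_key] for r in records)
    dropped = Counter(r[group_key] for r in records if not primary_keep(r))
    rates = {g: dropped.get(g, 0) / n for g, n in totals.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap > disparity_limit

# Toy data: group B is excluded twice as often as group A.
records = [
    {"group": "A", "score": 0.9}, {"group": "A", "score": 0.8},
    {"group": "A", "score": 0.4}, {"group": "B", "score": 0.3},
    {"group": "B", "score": 0.2}, {"group": "B", "score": 0.6},
]
rates, flagged = audit_exclusions(records)
```

The point of the sketch is that the audit never needs to see inside the proprietary sorter; it only needs the records and the keep/drop outcomes, which is why an independent research group could run it.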
Another way to examine today's algorithmic systems also draws from visual processing. Could data points (people) eliminated during algorithmic sorting be tagged, without loss of privacy, and continue to be processed in parallel? Crucially, from the perspective of equity, what resources could be provided to increase self-determination and creativity, and to equalize, rather than judge and eliminate, individuals in the output, thereby optimizing human potential?
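One way such tagging might work -- sketched here as an assumption, not a description of any existing system -- is keyed pseudonymization: an independent auditor, rather than the sorting system, holds the key, so eliminated records can be followed in parallel without exposing who they are.

```python
import hashlib
import hmac
import secrets

# Key held by an independent auditor, never by the sorting system itself.
AUDIT_KEY = secrets.token_bytes(32)

def pseudonymous_tag(record_id: str) -> str:
    """Derive a stable pseudonym for an eliminated record.

    The same record always maps to the same tag within an audit run, so
    its downstream treatment can be tracked in parallel; without the key,
    the tag cannot be linked back to the person it represents.
    """
    return hmac.new(AUDIT_KEY, record_id.encode(), hashlib.sha256).hexdigest()[:16]

same = pseudonymous_tag("applicant-42") == pseudonymous_tag("applicant-42")
different = pseudonymous_tag("applicant-42") != pseudonymous_tag("applicant-43")
```

Keyed hashing (HMAC) rather than a plain hash is the important design choice: a plain hash of a small identifier space can be reversed by brute force, whereas the keyed version is only linkable by whoever holds the key.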
A third suggestion is to limit "excitatory" algorithmic systems' ability to interrogate people. In this case, the method would be to redirect focus to an interrogation of systems and infrastructure contributing to the health of society and the planet. The primary aim would be an equitable allocation of resources and opportunity.
At a time when the world is in crisis, it seems a shame that the potential of algorithmic systems to resolve the most pressing issues is being diverted toward more short-sighted aims that, however well-intentioned, enhance rather than reduce inequalities. Many are concerned (boyd, 2016; Crawford, 2016; Cohen, 2013; Floridi, 2016; Zuboff, 2016; Pasquale, 2015; Pedziwiatr & Engelmann, 2016) that the speed at which institutions, corporations, and governments are deploying algorithmic systems exceeds the mechanisms of ethical and legal oversight on which people, and the continued existence of a habitable planet, depend.
Credit for this comment goes to an unknowable multitude of 'research assistants', in addition to the authors below:
Nayef Al-Rodhan (2015) Proposal of a Dignity Scale for Sustainable Governance, Journal of Public Policy (Blog), 29 November. https://jpublicpolicy.com/2015/11/29/proposal-of-a-dignity-scale-for-sustainable-governance/
Nayef Al-Rodhan (2016) Behavioral Profiling and the Biometrics of Intent. Harvard International Review 17 Jun. http://hir.harvard.edu/behavioral-profiling-politics-intent/
Michael Arria (2016) The Surprising Collection of Politicos Who Brought Us Destructive Airline Deregulation. Alternet, 3 July. http://www.alternet.org/labor/how-liberals-deregulated-airline-industry
danah boyd (2016) Be Careful What You Code For. Medium, 14 June https://points.datasociety.net/be-careful-what-you-code-for-c8e9f3f6f55e#.4sobpvbe9
Anthony Burke and Stefanie Fishel (2016) Politics for the planet: why nature and wildlife need their own seats at the UN. The Conversation, 30 June. https://theconversation.com/politics-for-the-planet-why-nature-and-wildlife-need-their-own-seats-at-the-un-59892#
David Chandler (2014) Beyond neoliberalism: resilience, the new art of governing complexity, Resilience, 2:1, 47-63, DOI: 10.1080/21693293.2013.878544 http://dx.doi.org/10.1080/21693293.2013.878544
Thomas Claburn (2016) EU Data Protection Law May End the Unknowable Algorithm. InformationWeek, 18 July. http://www.informationweek.com/government/big-data-analytics/eu-data-protection-law-may-end-the-unknowable-algorithm/d/d-id/1326294
Julie Cohen (2013) What Privacy is for. Harv. L. Rev., 126, 1904. http://harvardlawreview.org/wp-content/uploads/pdfs/vol126_cohen.pdf
Kate Crawford (2016) Know Your Terrorist Credit Score. Presented at re:publica, 2 May. https://re-publica.com/en/16/session/know-your-terrorist-credit-score
Zubin Damania (2016) We need to demand technology that lets doctors be doctors. KevinMD, 1 February. http://www.kevinmd.com/blog/2016/02/need-demand-technology-lets-doctors-doctors.html
Kelly Fiveash (2016) Google AI given access to health records of 1.6 million English patients. Ars Technica UK, 3 May. http://arstechnica.co.uk/business/2016/05/google-deepmind-ai-nhs-data-sharing-controversy/
Luciano Floridi (2016) The Informational Nature of Personal Identity. Minds and Machines, Vol. 21, Issue 4 (2011): 549. DOI:10.1007/s11023-011-9259-6 https://www.academia.edu/9352388/The_Informational_Nature_of_Personal_Identity
Jack Grove (2016) Beware ‘nefarious’ use of open data, summit hears. TimesHigherEducation, 11 July. https://www.timeshighereducation.com/news/beware-nefarious-use-of-open-data-summit-hears
Bryce Goodman & Seth Flaxman (2016) European Union regulations on algorithmic decision-making and a "right to explanation" ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY. eprint arXiv:1606.08813. http://arxiv.org/abs/1606.08813
Rob Kitchin (2014) Big Data, new epistemologies and paradigm shifts Big Data & Society April–June 2014: 1–12. DOI: 10.1177/2053951714528481 http://m.bds.sagepub.com/content/1/1/2053951714528481.full.pdf
Jeff Larson, Surya Mattu, Lauren Kirchner & Julia Angwin (2016) How We Analyzed the COMPAS Recidivism Algorithm. ProPublica, 23 May. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
Joe Newman, Joseph Jerome & Christopher Hazard (2014) Press Start to Track?: Privacy and the New Questions Posed by Modern Videogame Technology. American Intellectual Property Law Association (AIPLA) Quarterly Journal, 1 August. http://ssrn.com/abstract=2483426
Frank Pasquale (2015) “The Black Box Society: The Secret Algorithms That Control Money and Information.” Harvard University Press, Cambridge MA.
Samuel Pedziwiatr & Severin Engelmann (2016) Blueprints for the Infosphere: Interview with Luciano Floridi. fatum 4 June, S. 25. http://www.fatum-magazin.de/ausgaben/intelligenz-formen-und-kuenste/internationale-perspektiven/blueprints-for-the-infosphere.html
Matthew Tenney & Renee Sieber (2016) Data-Driven Participation: Algorithms, Cities, Citizens, and Corporate Control. Urban Planning (ISSN: 2183-7635), Volume 1, Issue 2, 101-113. DOI: 10.17645/up.v1i2.645. http://cogitatiopress.com/ojs/index.php/urbanplanning/article/download/645/645
Shoshana Zuboff (2016) The Secrets of Surveillance Capitalism. Frankfurter Allgemeine Feuilleton, 3 May. http://m.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-ofsurveillance-capitalism-14103616.html
Updated 20.July.2016 11:31 AM
Posted by Gisela Wilson, 真行 at 12:37
Sunday, June 26, 2016
We, citizens of the United States of America, stand at a unique point in history, a point that none of the political pundits would have predicted, least of all David Cameron.
We should pay attention to the cautionary tale of the Brexit referendum because it offers obvious insights for our own election.
Brexit was driven by the reactivity of a mostly rural, too often neglected British citizenry after years of austerity. It's not really their fault. Many politicians, pundits, and media outlets directed attention away from years of deprivation by channeling anger and blame toward a largely innocent and struggling population, when the real issue is the government's and economic establishment's own credibility and its failure to listen to and provide for its citizenry.
Several insightful articles, worth reading, have been written in recent days.
All of them speak to the similarity of the Brexit and Trump campaigns.
I particularly appreciate Will Davies' Thoughts on the Sociology of Brexit for its echo of The Politics of Resentment by Katherine Cramer. Both describe how an urban-centric corporate political establishment has alienated a substantial portion of citizenry.
Other insightful commentary:
Brexit is only the Latest Proof of the Insularity and Failure of Western Establishment Institutions by Glenn Greenwald
Brexit will reconfigure the UK economy by Martin Wolf
For a complete disaster report, see The Financial Times: http://www.ft.com/home/us
What part of the message are the Democratic National Committee, Hillary Clinton, and even Bernie Sanders missing? In what way can our government, and the corporations behind it, put people before the interests of competitive capitalism for accelerating profit?
People around the world are feeling the stresses of globalization and interdependence. These changes happened gradually under our noses as a function of industrialization, as corporations sought cheaper labor and financial perks. Crucially, closing national borders will fall far short of extricating nations from the complex web of interdependence built over the years. It's only an excuse for further spending via increased militarization (think robots!). Most corporations and governments are more focused on protecting information and assets than people. The attempt to fortify national borders only serves to distract from the largest long-term threat to humankind -- the climate crisis.
We are all on the same lifeboat — the planet Earth. As the Brexit vote demonstrates, in every country, there are people that have been treading water for years.
People across the country need real alternatives. It's difficult for voters to believe in and invest their efforts in long term changes when their short term livelihoods have been under threat for such a long time.
The time to speak to and provide alternatives on both sides of the political party divide is now.
Posted by Gisela Wilson, 真行 at 16:31
Tuesday, May 24, 2016
Pull up a chair.
I just finished reading Why Women Earn Less: Just Two Factors Explain Post-PhD Pay Gap by Helen Shen in Nature. It's based on a recent study by Catherine Buffington et al. (2016) that examines data obtained from the UMETRICS project at the University of Michigan. Buffington et al. (2016), Nature, and a similar commentary in Science all do a serious disservice to the human race and to academia by publishing and commenting on the study in its present form.
Firstly, it has been known for many years that the pipeline problem for women in STEM, of which the pay gap is one component, is the result of a gradual accumulation of negative effects (Valian, 1998). Studies such as Buffington et al. (2016) that examine too short a time frame are likely to seriously distort the influence of contributing factors. Although both Mason and Weinberg mention problems with time-frame duration, even five to ten years is not long enough. She Figures 2015 (European Commission, 2015; Figure 6.1), as one example, shows that the predominant effects of gender occur over a more prolonged time window.
Secondly, let's examine data selection and statistics. Buffington et al. (2016) lump together variables that have intersectional effects, such as race and age, and omit categories with potential for insight, such as single mothers or fathers with childcare responsibilities. Other variables, such as sexual orientation and couples that chose not to declare their relationship status, are not scored in the original data sets at all.
The authors also appear to disregard graduate students who, for reasons of insufficient support, harassment, locational restrictions, or otherwise not fitting the traditional academic mold, dropped out during their graduate years in spite of substantial investment in training. In addition, the study neglects undergraduate students who failed to be admitted to graduate programs due to financial and location-specific stressors, or merit-based, ivory-tower biases (Goldrick-Rab & Kendall, 2016). These longitudinal biases reflect societal stressors associated with race, gender, class, disability, age, and veteran status that produce substantial downward spirals over time. Thus, even though Buffington et al. (2016) include some key variables in their study, examining them over this short time frame without regard for interactions is nothing short of misrepresentation.
Another problem of data selection and representation concerns the numbers for gender. Many studies, including the European Commission report, have indicated that the numbers of PhDs granted to women and men have been nearly equal in recent years. The fact that there are more than double the number of men in Buffington et al. (2016) calls the UMETRICS and Census Bureau Protected Identification Key (PIK) data into serious question. Numbers for other variables such as race are not reported, and the percentages mentioned in the Methods show that Black and Hispanic populations are negligibly represented. Because least squares regression minimizes the overall squared error, the relative numbers representing variables such as gender and race will affect the results.
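A back-of-envelope illustration of why these relative numbers matter: in a two-group least squares fit, the standard error of the group-difference coefficient grows as the groups become more imbalanced, so estimates for under-represented groups are inherently noisier. The sketch below assumes homoscedastic errors with a common standard deviation sigma, a simplification for illustration only; the group sizes are made up.

```python
import math

def coef_standard_error(n_small, n_large, sigma=1.0):
    """Standard error of the group-difference coefficient in a two-group
    OLS fit (equivalently, a two-sample comparison of means), assuming
    homoscedastic errors with standard deviation sigma."""
    return sigma * math.sqrt(1.0 / n_small + 1.0 / n_large)

balanced = coef_standard_error(500, 500)   # equal groups, total n = 1000
skewed = coef_standard_error(150, 850)     # same total n, one small group
# The skewed design yields a noisier estimate from the same total sample.
```

The same total sample size thus buys much less precision about the smaller group, which is why unreported or negligible counts for race undermine any coefficient attributed to it.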
Third, I would like to point to an effect that goes unnoticed by Buffington et al. (2016), perhaps unsurprisingly given the normalization of sexist bias in societies across the world; namely, that women consistently shoulder the burden and blame for motherhood, or lack thereof. Quoting Buffington et al., "Nineteen percent of females and 24 percent of males had children at the time of the 2010 census." A five percentage point difference in parental status is, in the words Buffington uses for other observations, a robust effect. More importantly, the observation suggests more men 'get pregnant' than women. Considering the effect of motherhood on women's careers, the finding could further be taken to imply that men are conditioned by capitalist patriarchal expectations to neglect their wives and children. It's beyond time capitalist patriarchy acknowledged and accepted responsibility for its share of care.
Finally, Buffington et al. (2016) claim that the large majority of the pay gap in their study is accounted for by choice of STEM field. However, they neglect to mention other studies (see Miller, 2016; Slaughter, 2015; and Short Takes on Slaughter in Signs, 2016) showing that fields are devalued as more women enter them, and vice versa.
The titles and recognition resulting from the Nature and Science commentaries will also undoubtedly contribute to the longevity of the interpretation and to its citational rankings. The study is likely to cause a serious waste of research dollars, result in unnecessary delays, and negatively affect the lives of dedicated women researchers in STEM all over the world. What a shame.
...A more formal rebuttal will be submitted after I'm finished with my move.
Posted by Gisela Wilson, 真行 at 15:09
Sunday, May 8, 2016
What is intersectionality? Intersectionality occurs when two or more biases or prejudices interact in such a way that an individual is marginalized, or omitted from consideration, in ways that neither bias alone would produce.
The harmful effects of intersectionality are most obvious for the combination of gender and race. Due to Crenshaw's (1989) examination of United States anti-discrimination law and, more importantly, the frequency and potential extensiveness of harm and marginalization, black feminism has a unique claim to intersectionality. The reason should be obvious. In terms of bias, the effects of race and gender operate instantaneously based on appearance. I include all women of color: Black, Asian, Latina, Indian and Aboriginal women in the westernized world and colonized countries; though the intersectional effects of gender and race are location-specific and, therefore, also apply to Nepalese and Asian women in the Middle East and Rohingya in Myanmar to name a few non-western examples.
I grew increasingly aware of the intersectional effects of race and gender while reading and signing petitions, though initially I hadn't read enough of the current work in feminism to have a term for it. One petition was for a black college professor who had been manhandled, thrown down on the asphalt, and arrested for trying to avoid sidewalk construction on university grounds after dark. I observed all of her hard work disappear in the blink of an eye. Since then I've read more stories than I can count: Black women arrested for failing to signal a turn. Black women arrested for bringing a child along to a job interview given on short notice, in spite of being hired. Black women arrested for self-protection. It took more than six months for the energy of #BlackLivesMatter to include women with the addition of #SayHerName. It took years for the disappearance, assault, and murder of indigenous women to raise an eyebrow (Elwood, 2016; Huntley, 2015; Chief Elk-Young Bear, 2015). The intersection of different legal jurisdictions, and the effect of race and class assumptions on data collection and statistical analysis, are particularly relevant for considering the potential for harm and the complexity of the issues that must be taken into account. For women of color, intersectionality is readily observed in the instantaneous transformation of seemingly trivial circumstances into matters of life and death.
It is not that intersectionality does not operate elsewhere, in a variety of circumstances. Implicit and explicit biases and their intersectional effects are part of how minds work. Death and imprisonment are very visible results. Death and imprisonment are easily obtainable statistics. Yet intersectionality is a process that can render attributes legally non-functional in determination of underlying causes (Crenshaw, 1989; Elwood, 2016). Moreover, in the current age of big data and data science, proprietary black-boxed algorithms could easily hard-wire discriminatory practices based on race and gender (Executive Office of the President, 2016). The attentional delay or omission in investigation of matters of women is particularly worrisome.
Marginalization due to intersecting biases is constantly happening and has long-term cumulative effects that remain invisible to many of those operating within the bounds of "white" capitalist patriarchal institutions. As noted by Ahmed (2014), "Patriarchy: it's quite a system. It works. Whiteness too: it works...Whiteness is invisible to those who inhabit it. For those who don't inhabit it, whiteness appears as a solid." Or it appears as "bricks" that accumulate to build "walls," requiring extra affective and physical labor that not only goes unnoticed and uncredited, but for which individuals are further penalized.
There are a number of characteristics that can operate intersectionally with gender besides race. Characteristics that are instantaneously "visible" and have gender-specific intersectional effects are age and disability. There also are characteristics about which a person can, to some extent, choose to be "visible": class, sexual orientation, religion, weight and language. These characteristics are not necessarily instantaneously identifiable in the same way as race. These are characteristics about which individuals are claimed to have a 'choice'. Muslim women might choose to wear a hijab because of their religion and/or for a sense of safety and belonging to their community. By wearing the hijab they are doing their best to create their equivalent of a "white" space, at least within their community. What safety and community can the western world provide that would overcome racial bias?
For Caucasian women in the west, bricks and walls are realized more gradually, since their operation has been normalized by society and within communities and institutions. Bricks and walls can masquerade as 'choices.' For white women navigating the boundaries of white patriarchy, deleterious effects can go unnoticed or disregarded for quite some time, if recognized at all. Sexism, abuse, and motherhood are examples. The citation practices discussed by Ahmed are another*.
Due to years of concerted effort shared by women and feminists world-wide (including some men), the western world is beginning -- only beginning, and in ways dependent on race and socio-economic status -- not to disadvantage women for the care-work of parenting. Slaughter's (2015) Unfinished Business discusses the undervaluing of care work typical in society. The critiques that follow in Signs and Dissent highlight some of the serious differences as a function of class. But nowhere, nowhere, do I see mention of the fact that a woman of color can be arrested for bringing her child along to a job interview, a "choice" that was the best that could be managed under the circumstances. (Few people would argue that paid employment at a living wage wouldn't substantially improve quality of care over the long term.) Nowhere, nowhere, do I see mention that officials of Detroit saw fit to remove children from their parents' care because of water shutoffs over a failure to pay a $100-$200 bill, when corporations in the area owed hundreds of thousands. At the same time that middle- and upper-class parenting is beginning to receive some consideration, black and indigenous women and families are criminalized for shortfalls in the care of governments, society, and white patriarchal systems (Jaffe, et al, 2014; Banchiri, 2016). Welfare-to-Work and Right-to-Work programs aren't a living wage -- especially with children, and with the tearing apart of families and support systems by the additional criminalization of black and indigenous fathers. Worse yet, the end cost to society in terms of child care, foster care, psychiatry, lawyer fees, and the prison industrial complex is much greater than the cost of a living wage or Universal Basic Income for every child born would be.
Today I am not a mother and all the community and conversation that represents. Yet by whose choice? If you've read my pipeline posts, your answer might not be as immediate.
*Although a writer in the humanities might be able to choose to cite only women, that choice is not available for women in STEM, especially given the multi-authorship of so many papers under the competitive pressures for multi-disciplinarity in increasingly corporatized academia and funding agencies.
Posted by Gisela Wilson, 真行 at 03:18
Friday, April 15, 2016
Democracy is failing. Some scholars and politicians suggest that democracy remains in name only as the stresses and strains of economic, political, and environmental upheavals intensify. Democracy is dead. The necessary conversations either aren't taking place or fail to take the views and needs of the most vulnerable and marginalized into account. Problems resulting from inequality have a way of percolating up because they deny dignity of life (Al-Rodhan, 2015). As seen with ISIS, and with police departments and BlackLivesMatter, extremism takes root and recruits when long-standing inequality and resentment are allowed to flourish. The long-term harm and cost to society that result from fixity of view, blaming, and exclusion are incalculable.
For the last year I have been exploring alternative pathways for communication and negotiation -- pathways in which sociological concerns can be expressed distinct from the racial, religious, and partisan politics that often infuse debate. Media have a polarizing effect and increase, rather than dismantle, affective resistance and bias, so that cognitive positions become more extreme (Mock & Homer-Dixon, 2015), especially in the absence of mechanisms that create a sense of citizenship and community. As a result, borders and biases are buttressed to reinforce partisan, institutional, and system identities. The debate over the state of the European Union is one current example. In addition, because media depend on market and attentional value, resistive posturing often shifts focus away from actual needs. More resources are often spent maintaining brand and image than in redress for those without resources and most at risk. Institutions, think tanks, and corporations have been created -- and many a publication and book written -- in avoidance of actual conversation and negotiation. The recent trend toward algorithmic decision-making based on Big Data (Pasquale, 2015) threatens to strip human agency of responsibility and to enforce an artificial consensus based on habitual tendencies abstracted from their context, rather than provide for equality of opportunity and cultural diversity.
Boundaries that have proved particularly problematic in public discourse concern (a) the divide between government and academic institutions and (b) the divide between academic institutions and the public (Macilwain, 2011; 2016). The rise in reactionary populist nationalism deepens these schisms by reinforcing racial, religious and partisan bias. Relying on populist movements to motivate change emphasizes polarization across race, gender, and the religions and institutions on which society depends. Populist emphasis of ideological biases in public discourse increases political instability since it reinforces exclusionary processes.
Humility in intellectual discourse is the opposite of reactionary populist nationalism. Nonviolent communication, developed by Marshall Rosenberg (Rosenberg, 2003), is a good example that allows for agonistic pluralism as well as cultural diversity (Mouffe, 1999) while overcoming biases that can falsely frame perceptions of needs and solutions.
Nonviolent Communication is but one of several negotiation methods that differ from the western mode of election and governance; the African Indaba process, for example, was used during the recent Paris climate talks (Rathi, 2015). It can take years of practice to master the methods of Nonviolent Communication unilaterally and, in cases of persistent inequality, it can be difficult to persuade individuals to follow the process, which depends on identifying emotions and then linking those emotions to basic needs. Identifying underlying needs increases the possible range of pragmatic solutions. People in positions of power often fail to see the positive benefits of a nonviolent communication process that goes beyond the assignment of blame and/or dismissal due to ignorance. Moreover, giving voice to emotions is full of risk, since emotions can be weaponized.
Neither politicians nor corporations, nor even academics, have an accurate view of the various dimensions of any given problem in our society. All voices need to be invited and heard in finding dignified solutions to the multi-dimensional problems of our time. The largest oversight in negotiation, design, and planning is that rarely are all affected peoples included at the table -- an oversight that typically results in partial solutions excluding the concerns and needs of the most vulnerable. As observed by Cynthia Enloe and Annick Wibben, the individuals most often omitted from negotiation processes, and whose perspectives therefore go unrecognized, are women. Including women more fully informs the diversity of viewpoints and the range of possible solutions.
Al-Rodhan, N. (2015) Proposal of a Dignity Scale for Sustainable Governance, Journal of Public Policy (Blog), 29 November http://isnblog.ethz.ch/politics/proposal-of-a-dignity-scale-for-sustainable-governance
Enloe, C. (2014) Bananas, Beaches, and Bases: Making Feminist Sense of International Politics. University of California Press, Berkeley and Los Angeles, CA
Macilwain, C. (2011) Science’s attitudes must reflect a world in crisis. Nature News 479, 447. http://www.nature.com/news/science-s-attitudes-must-reflect-a-world-in-crisis-1.9419
Macilwain, C. (2016) The elephant in the room we can’t ignore. Nature News 531, 277. http://www.nature.com/news/the-elephant-in-the-room-we-can-t-ignore-1.19561
Mock, S. & Homer-Dixon, T. (2015) The Ideological Conflict Project: Theoretical and Methodological Foundations, CIGI Papers, v74.
Mouffe, C. (1999) Deliberative Democracy or Agonistic Pluralism? Social Research, 66(3), 745-758.
Pasquale, F. (2015) The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press; Cambridge, MA
Rathi, A. (2015) This simple negotiation tactic brought 195 countries to consensus. Quartz (Blog), 12 December http://qz.com/572623/this-simple-negotiation-tactic-brought-195-countries-to-consensus-in-the-paris-climate-talks/
Rosenberg, M.B. (2003) Nonviolent Communication: A Language of Life. PuddleDancer Press; Encinitas, CA
Wibben, A.T.R. (2011) Feminist Security Studies: A Narrative Approach. Routledge - Taylor & Francis Group, New York, NY
Posted by Gisela Wilson, 真行 at 17:37
Friday, March 25, 2016
Most of my Sangha has been gone for the week, attending a retreat at Rochester Zen Center, which means there have only been informal sittings here in the morning.
So this morning after coffee, I thought I'd check out the new Cat Cafe that recently opened:
Posted by Gisela Wilson, 真行 at 10:34
Friday, March 11, 2016
This morning I decided to visit the Twitter web application settings page -- something I don't do very often since I don't have wireless. I wanted to adjust the new timeline settings and to shut down the automatic play of videos. I was also pleased to find a setting for sensitive images -- much like what I requested in my prior post. Thanks to Twitter.
Posted by Gisela Wilson, 真行 at 17:38