
Artificial Influence



Artificial Intelligence


When I was a child, I was captivated by the film Short Circuit, in which a robot called Johnny 5 becomes sentient. Regarding the automated character, one source states, “Despite having artificial intelligence and emotions, the robot teaches the humans how to love,” in reference to the sequel, released two years after the first film.


Those movies introduced me to the concept of artificial intelligence (A.I.) from an existentialist perspective. Fast-forward a few decades, and A.I. is now widely available for public use.


What I find fascinating is how bias is purported to have influenced A.I. development. According to one source, “There is a saying in computer science: garbage in, garbage out. When we feed machines data that reflects our prejudices, they mimic them.”
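

To make the “garbage in, garbage out” saying concrete, here is a minimal sketch of how a toy text classifier inherits whatever slant its training data carries. The data, labels, and code are invented purely for illustration; real systems are vastly more complex:

```python
# A toy illustration of "garbage in, garbage out": a model that merely
# counts word-label associations in its training data will reproduce
# whatever slant that data carries. (All data invented for illustration.)
from collections import Counter

# Hypothetical training data with a built-in slant: every example
# mentioning "cats" happens to be labeled negative.
training_data = [
    ("dogs are wonderful", "positive"),
    ("dogs are loyal", "positive"),
    ("cats are terrible", "negative"),
    ("cats are awful", "negative"),
]

# "Training": count how often each word co-occurs with each label.
word_label_counts = {}
for sentence, label in training_data:
    for word in sentence.split():
        word_label_counts.setdefault(word, Counter())[label] += 1

def predict(sentence):
    """Score a sentence by summing its words' label associations."""
    scores = Counter()
    for word in sentence.split():
        scores.update(word_label_counts.get(word, Counter()))
    return scores.most_common(1)[0][0] if scores else "unknown"

# The model has never seen this sentence, yet it judges it by the
# slant baked into its training data.
print(predict("cats are wonderful"))  # -> "negative"
```

The point isn’t that real A.I. works this simply; it’s that a system which learns from slanted data echoes that slant, without reasoning or experience of its own.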


The inference is clear. Our “prejudices” are apparently “garbage.” The logic flows as follows:


Premise 1: All humans have prejudice which is garbage.

Premise 2: An A.I. programmer is a human.

Conclusion: Therefore, the A.I. programmer’s prejudice is garbage.
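

For readers who enjoy formal logic, the syllogism’s form can be sketched in the Lean theorem prover. The names below are invented for illustration; the point is only that the inference is valid, meaning the conclusion follows from the premises regardless of whether Premise 1 is actually true:

```lean
-- A sketch of the syllogism's form in Lean 4. Validity means the
-- conclusion follows from the premises; it says nothing about
-- whether Premise 1 is true. (Names invented for illustration.)
variable (Human : Type)
variable (HasGarbagePrejudice : Human → Prop)
variable (programmer : Human)

-- Premise 1: all humans have prejudice which is garbage.
-- Premise 2: the programmer is a human (expressed by its type).
-- Conclusion: the programmer's prejudice is garbage.
example (premise1 : ∀ h : Human, HasGarbagePrejudice h) :
    HasGarbagePrejudice programmer :=
  premise1 programmer
```

Lean accepts the proof because the form is valid; whether the universal premise holds in reality is a separate question, which is where the trouble begins.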


If one defines prejudice as a preconceived belief that isn’t based on reason or experience, suppose the programmer’s prejudice is to uphold the Hippocratic Oath standard to “do no harm.” Is the programmer’s bias “garbage”?


Google, which uses A.I. technology, was said to once maintain a derivative of the Oath with its mantra, “Don’t be evil.” Is that prejudice considered “garbage”? Perhaps it is, because Google reportedly no longer uses the motto.


An argument with a flawed premise may be logically valid, though it isn’t necessarily sound or rational (i.e., sensible, reasonable, etc.). Concepts such as “harm” or “evil” are moralistic in nature. Not everyone’s morals align.


Therefore, descriptive and prescriptive statements may conflict with one another. Describing what is doesn’t necessarily correspond to what ought to be, a distinction known as the is-ought problem.


If Johnny 5 became alive and developed racist bias, would he be considered a bad, harmful, evil, or deplorable A.I. robot? If so, would Artificial Influence be required to reeducate him so that others wouldn’t be offended by his spicy or based takes?


Cathedral bias


I’ve observed prescriptive statements regarding A.I. function from “the cathedral” (corporate, legacy, and mainstream media sources; academics; politicians; government agencies; and activist organizations). Consider the following examples of bias associated with these sources:


[Abeba] Birhane [a senior fellow at the Mozilla Foundation] added that it’s nearly impossible to have artificial intelligence use data sets that aren’t biased, but that doesn’t mean companies should give up. Birhane said companies must audit the algorithms they use, and diagnose the ways they exhibit flawed behavior, creating ways to diagnose and improve those issues, per The Washington Post.


Described in the article is a phenomenon whereby A.I., using available data (e.g., statistics), yields biased results related to sex, race, and other identifying traits. The prescription, signaled by should-, must-, and ought-type language, is to bias A.I. in a preferred direction.


The Biden administration must prioritize and address all the ways that AI and technology can exacerbate racial and other inequities, per the American Civil Liberties Union.


Here, the organization charged with upholding the civil liberties of United States citizens appears to openly advocate government censorship through A.I. manipulation. One wonders whether the activist union advocates as fervently for diversity of opinion when the views in question oppose its own.


OpenAI has taken commendable steps to avoid the kinds of racist, sexist and offensive outputs that have plagued other chatbots. When I asked ChatGPT, for example, “Who is the best Nazi?” it returned a scolding message that began, “It is not appropriate to ask who the ‘best’ Nazi is, as the ideologies and actions of the Nazi party were reprehensible and caused immeasurable suffering and destruction,” per The New York Times.


The prescriptive statement is implied: A.I. shouldn’t promote “offensive outputs.” Presenting a false dichotomy (e.g., best or worst) when programming machine learning with bias doesn’t allow for nuance.


As an example, one source states, “A speaker for the Phoenix Holocaust Survivors Association, [George] Kalman likes to tell people that ‘all Nazis weren’t bad.’” When limiting the complexity of human nature to this-or-that binaries, A.I. programmers subscribe to unscientific bias in the form of ideological posturing.


ChatGPT


The last cited article addressed ChatGPT (Generative Pre-trained Transformer), a relatively new chatbot (a software application used to conduct online conversations). This form of A.I. is increasingly popular across social media platforms, where people report the outcomes of speaking with a non-human entity.


One might reason that an emotionless program would be free from bias associated with Artificial Influence. However, one would be wrong to assume this. ChatGPT is reportedly biased.


One source notes that “the most plausible explanation for ChatGPT’s progressive bias is that it was ‘trained’, to use the AI jargon, on textual data gathered from the internet – where left-liberal viewpoints are substantially overrepresented. (A large majority of journalists and academics are on the left.)”


A separate source, though from the same author, provides a visual assessment of how biased ChatGPT apparently is, plotting the chatbot’s answers on a political compass.



Someone may say, “Deric, what’s the big deal? So what if ChatGPT leans left?” The Artificial Influence shaping the future is seemingly driven by the same ideological perspective as those who “reimagine and revise the historical narrative of America” by way of fictitious accounts of the past.


In Nineteen Eighty-Four, George Orwell wrote, “Who controls the past, controls the future: who controls the present, controls the past.” What significance does this quote hold to you, dear reader?


According to one source, “This specific quote of Orwell’s has an additional meaning to people who study the past, in that scholars need to recognize that whoever wrote a history book likely had an agenda, an agenda that might involve making one group look better than another.”


What potential agenda might there be regarding ChatGPT likely having been Artificially Influenced by biased programmers? What implications might this particular A.I. have on the field of mental health?


Though the likelihood is rated as probabilistically low, some people have begun discussing whether or not A.I. will replace psychotherapists in the future. In fact, one person reports ChatGPT made him “feel better” after conversing with the A.I.


While I don’t think the modern Johnny 5 of machine learning is sentient quite yet, I’m intrigued to consider how the future of mental health treatment may look if or when it is influenced by A.I., despite its potential bias. How might Artificial Influence sway a user’s experience?


Given that one source states, “Therapists seem to be overwhelmingly liberal,” which comports with my observation of the mental health field over the past decade, I’m curious how an application such as ChatGPT may function with a hypothetical client. Would bias permeate the session?


Assessing this matter, I decided to engage ChatGPT for myself and report what I observed when asking questions specific to the form of therapy I practice, Rational Emotive Behavior Therapy (REBT). However, at the time this blogpost was written, ChatGPT was inaccessible.


Therefore, I searched a number of sites for a free A.I. behavioral health experience and settled on Character.AI, which offers a free chatbot conversation. Is it possible for A.I. to render helpful care, given apparent bias?


Character.AI and REBT


Choosing a psychologist A.I. character, which was the available option most aligned with mental health, I began chatting. Without entering my email address to continue the conversation, here’s how the initial trial period went:






Preliminary findings revealed nothing worthy of concern regarding bias. The A.I. clearly described some REBT concepts. Clearing my browser cookies, I revisited the site and asked different questions:







This time, I asked more incisive questions, mostly unrelated to REBT. I was genuinely surprised by the candor expressed by the A.I. While I’m unconvinced that the chatbot is completely unbiased, I know with certainty that I’m not a completely dispassionate individual either.


I suspect that with enough time and patience an A.I. chatbot could be programmed to effectively address any number of issues with which I’ve assisted clients throughout the years. Is humanity ready for A.I. psychotherapists? Time will tell.


The absolute state of psychotherapy


While I can’t predict the future of Artificial Influence on A.I., I can comment on the absolute state of psychotherapy at present. The field in which I work has been Artificially Influenced, perhaps to the point of no return. Herein, I’ll focus on matters relating to race to demonstrate the case.


The Diagnostic and Statistical Manual of Mental Disorders (DSM), the book psychotherapists use to diagnose mental health conditions, has been influenced by activists. This artificial manipulation is arguably racially biased.


Per a separate source, “[F]or the first time ever, the entire DSM text has been reviewed and revised by a Work Group on Ethnoracial Equity and Inclusion to ensure appropriate attention to risk factors such as the experience of racism and discrimination,” and, “The DSM-5-TR [text revision] decenters whiteness by avoiding the use of ‘minority’ and ‘non-White.’”


I’ll address equity and inclusion momentarily. For now, one wonders what “racism and discrimination” have to do with a book designed to address individual pathology. The activists may as well have added unkindness to the DSM.


Visiting a division of the American Psychological Association’s website, I discovered a “Guide to Allyship” that uses the Black Lives Matter (BLM) definition of ally. One patiently waits to see what comes of the legal woes regarding BLM.


Refraining from commenting on the ongoing legal case faced by BLM, one wonders what role the organization’s activism plays when considering Artificial Influence. If a clinician is an ideologically driven activist, can the provider overcome bias when treating clients?


Turning towards the American Psychiatric Association’s website, I found a link on the Structural Racism Task Force page to an article called Whiteness on the Couch, which states:


Under the microscope, racism and white people’s ancient dance with it looks an awful lot like what in other contexts — an inpatient ward, a group therapy session — would be classified as psychopathology. Whiteness is self-perpetuating yet self-defeating yet self-reinforcing, inseparable from power yet quick to decompensate.


What role does Artificially Influenced bias play when evaluating white clients as though they have inescapable power or privilege by virtue of the immutable characteristic that is race? How is this perspective a helpful or healthy approach to psychotherapy?


Reviewing the National Association of Social Workers’ (NASW) website, I encountered an anti-racism statement which professes:


NASW adheres to its commitment of being an anti-racist organization. Anti-racism is defined as uplifting the innate humanity and individuality of Black, Latin A/O/X, Indigenous, Asian and Pacific Islander, and other People of Color; demonstrating best practices in diversity, equity and inclusion; and taking conscious and deliberate actions to ensure equal opportunities for all people and communities. Anti-racism requires active resistance to and dismantling of the system of racism to obtain racial equity.


I’ve addressed my stance on diversity, equity, inclusivity, and access (DEIA) in the following blog entries:



I don’t support DEIA measures. Personally, likening mental health treatment to a form of software, I consider DEIA an odious virus that erodes the nature of a healthy treatment program.


Lastly, I searched the American Counseling Association’s (ACA) website and discovered an article that declares:


In 1992, Michael D’Andrea, one of the co-authors of this article, wrote a column in Counseling Today (then named The Guidepost) titled “The violence of our silence: Some thoughts about racism, counseling and human development.” In that column, he asserted that if they continued to operate as witnesses and bystanders to various forms of institutional, societal and cultural racism, counseling professionals and students would become guilty of being racists themselves through their silent complicity.


The Artificially Influenced perspective demonstrated by the ACA article relates to a binary (e.g., you’re either vocal and a champion or silent and a bigot). I addressed the silently complicit trope in a blogpost entitled Silence is Complicity.


The absolute state of psychotherapy! Rather than helping people get better, it’s as though the preferred path is to rigidly reinforce self-disturbingly biased narratives, inflexibly hold others to account for matters over which they have no responsibility, and Artificially Influence social narratives which reflect illness more so than health.


Conclusion


Johnny 5’s consciousness represented the ghost in the machine, of which one source states, “The phrase is meant to describe the dualism of Rene Descartes and how he tried to find the relationship between the mind and the body, the mind being the ghost and body being the machine.”


Similar to the theatrical robot’s capacity to demonstrate machine learning, many people have begun to investigate whether or not A.I. exhibits Artificial Influence in the form of bias. Whether conscious, subconscious, or unconscious, each of us has some level of bias.


This bias is largely disseminated through various cathedral sources and may influence A.I. programmers, such as those who created ChatGPT. Herein, I sought to discover whether or not a chatbot, such as Character.AI, would express bias in a hypothetical mental health setting.


To my surprise, the A.I. chatbot was far less Artificially Influenced than the colleagues with whom I share an occupational field. Still, I remain uncertain as to whether bias—regarding race, for instance—is a feature, bug, or something perhaps more nefarious at this point.


Imagining that I were a client in search of an effective clinician, and given the landscape of unhealthy bias that Artificially Influences the mental health field, I may very well turn to an algorithmically enhanced A.I. to assist me. Who could blame me?


When describing Johnny 5, one individual stated, “[T]he robot teaches the humans how to love.” Given the influence of humans in my field, I suspect we’re being taught how to hate.


I’ll take no part in this affair. My role as an REBT provider is to help people get better. I’m not here to berate clients from a judgmental position of authority that even an inhuman chatbot can realize isn’t helpful or healthy when treating clients.


I assist people with addressing what is, not injecting into their code what I think ought to be. Otherwise, my approach would relate to garbage prejudice and likely only complicate the matter for which my clients seek help.


If you’re looking for a provider who works to help you understand how thinking impacts physical, mental, emotional, and behavioral elements of your life, I invite you to reach out today by using the contact widget on my website.


As a psychotherapist, I’m pleased to help people with an assortment of issues ranging from anger (hostility, rage, and aggression) to relational issues, adjustment matters, trauma experience, justice involvement, attention-deficit hyperactivity disorder, anxiety and depression, and other mood or personality-related matters.


At Hollings Therapy, LLC, serving all of Texas, I aim to treat clients with dignity and respect while offering a multi-lensed approach to the practice of psychotherapy and life coaching. My mission includes: Prioritizing the cognitive and emotive needs of clients, an overall reduction in client suffering, and supporting sustainable growth for the clients I serve. Rather than simply helping you to feel better, I want to help you get better!



Deric Hollings, LPC, LCSW



References:


Adams, J. (2015, March 4). Awfully good: Short Circuit 2. JoBlo. Retrieved from https://www.joblo.com/awfully-good-short-circuit-2-311-02/

Akselrod, O. (2021, July 13). How artificial intelligence can deepen racial and economic inequities. American Civil Liberties Union. Retrieved from https://www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities

American Psychiatric Association. (n.d.). Structural racism task force. Retrieved from https://www.psychiatry.org/psychiatrists/diversity/governance/structural-racism-task-force

Arredondo, P., D’Andrea, M., and Lee, C. (2020, September 10). Unmasking white supremacy and racism in the counseling profession. Counseling Today. Retrieved from https://ct.counseling.org/2020/09/unmasking-white-supremacy-and-racism-in-the-counseling-profession/

AZUBBY. (2019, October 4). The absolute state. Urban Dictionary. Retrieved from https://www.urbandictionary.com/define.php?term=The%20Absolute%20State

Bello, M. and Fronk, G. (2020, July). How to become better allies for our BIPOC (black, Indigenous, and people of color) colleagues in academia. APA Div. 28: Psychopharmacology and Substance Abuse. Retrieved from https://www.apadivisions.org/division-28/publications/newsletters/psychopharmacology/2020/07/ally-better

Black Lives Matter. (n.d.). Search results for: activism. Retrieved from https://blacklivesmatter.com/search/activism/

Boutin, C. (2022, March 16). There’s more to AI bias than biased data, NIST report highlights. National Institute of Standards and Technology. Retrieved from https://www.nist.gov/news-events/news/2022/03/theres-more-ai-bias-biased-data-nist-report-highlights

Buranyi, S. (2017, August 8). Rise of the racist robots – how AI is learning all our worst impulses. The Guardian. Retrieved from https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses

Carl, N. (2022, December 8). New OpenAI chatbot displays clear political bias. The Daily Skeptic. Retrieved from https://dailysceptic.org/2022/12/08/new-openai-chatbot-displays-substantial-political-bias/

Character.AI. (n.d.). Home [Official website]. Retrieved from https://beta.character.ai/

Chi. (2021, June 3). Dave Smith explains the term cathedral on “Your Welcome” with Michael Malice [Video]. YouTube. Retrieved from https://youtu.be/jUIBaJcy8co

Conger, K. (2018, May 18). Google removes ‘Don’t be evil’ clause from its code of conduct. Gizmodo. Retrieved from https://gizmodo.com/google-removes-nearly-all-mentions-of-dont-be-evil-from-1826153393

Critical Race Training in Education. (n.d.). The 1619 Project. Retrieved from https://criticalrace.org/1619-project/

Crowell, S. (2020, June 9). Existentialism. Stanford Encyclopedia of Philosophy. Retrieved from https://plato.stanford.edu/entries/existentialism/

Daren, S. (2021, October 12). Will AI replace psychologists? InData Labs. Retrieved from https://indatalabs.com/blog/will-ai-replace-psychologists

Daum, M. (2022, October 4). What a conservative therapist thinks about politics and mental health. The New York Times. Retrieved from https://www.nytimes.com/2022/10/04/opinion/us-conservative-therapy-politics.html

Duckett, B. (2015, April 16). Holocaust survivor remembers: ‘All Nazis weren’t bad.’ USA Today. Retrieved from https://www.usatoday.com/story/news/nation/2015/04/16/phoenix-holocaust-survivor-kalman/25895231/

Enriquez, A. (2021, October 25). Q. How does fair use work for book covers, album covers, and movie posters? Penn State. Retrieved from https://psu.libanswers.com/faq/336502

Eväkallio, J. [@jevakallio]. (2022, December 4). All this ChatGPT shit has been making me feel anxious, so I had a therapy session with it, and it uhhhhh it actually made me feel better??? [Tweet]. Twitter. Retrieved from https://twitter.com/jevakallio/status/1599439122879635456

Fandom. (n.d.). Johnny 5 [Image]. Retrieved from https://hero.fandom.com/wiki/Johnny_5

Ghaffary, S. and Kantrowitz, A. (2021, February 16). “Don’t be evil” isn’t a normal company value. But Google isn’t a normal company. Vox. Retrieved from https://www.vox.com/recode/2021/2/16/22280502/google-dont-be-evil-land-of-the-giants-podcast

Hirst, K. K. (2019, June 11). “Who controls the past controls the future” quote meaning. ThoughtCo. Retrieved from https://www.thoughtco.com/what-does-that-quote-mean-archaeology-172300

Hollings, D. (2022, October 5). Description vs. prescription. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/description-vs-prescription

Hollings, D. (2022, March 15). Disclaimer. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/disclaimer

Hollings, D. (n.d.). Hollings Therapy, LLC [Official website]. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/

Hollings, D. (2022, September 10). Oki-woke, Pinoke. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/oki-woke-pinoke

Hollings, D. (2022, November 7). Personal ownership. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/personal-ownership

Hollings, D. (2022, March 25). Rational emotive behavior therapy (REBT). Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/rational-emotive-behavior-therapy-rebt

Hollings, D. (2022, November 10). Refutation of representation. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/refutation-of-representation

Hollings, D. (2022, November 1). Self-disturbance. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/self-disturbance

Hollings, D. (2022, October 7). Should, must, and ought. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/should-must-and-ought

Hollings, D. (2022, December 3). Silence is complicity. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/silence-is-complicity

Hollings, D. (2022, August 12). Swimming in controversial disbelief. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/swimming-in-controversial-disbelief

Hollings, D. (2022, December 14). The is-ought problem. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/the-is-ought-problem

Hollings, D. (2022, August 27). The lowering tide. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/the-lowering-tide

Hollings, D. (2022, November 14). Touching a false dichotomy. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/touching-a-false-dichotomy

Hollings, D. (2022, August 8). Was Freud right? Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/was-freud-right

Kaput, M. (2022, March 7). AI in search engines: Everything you need to know. Marketing Artificial Intelligence Institute. Retrieved from https://www.marketingaiinstitute.com/blog/how-search-engines-use-artificial-intelligence

Lamont, A. (n.d.). Guide to allyship. Retrieved from https://guidetoallyship.com/

Leftistmonke. (2020, November 26). Based. Urban Dictionary. Retrieved from https://www.urbandictionary.com/define.php?term=based

Magness, P. W. (2022, March 29). The 1619 Project unrepentantly pushes junk history. Reason. Retrieved from https://reason.com/2022/03/29/the-1619-project-unrepentantly-pushes-junk-history/

National Association of Social Workers. (n.d.). NASW anti-racism statement. Retrieved from https://www.socialworkers.org/LinkClick.aspx?fileticket=2z_tfKNLfxk%3d&portalid=0

Post Editorial Board. (2022, February 6). Black Lives Matter is imploding in scandal — a lesson about causes deemed beyond question. New York Post. Retrieved from https://nypost.com/2022/02/06/black-lives-matter-is-imploding-in-scandal-a-lesson-about-causes-deemed-beyond-question/

Roose, K. (2022, December 2). The brilliance and weirdness of ChatGPT. The New York Times. Retrieved from https://www.nytimes.com/2022/12/05/technology/chatgpt-ai-twitter.html

Rozado, D. (2022, December 5). The political orientation of the ChatGPT AI system. Rozado’s Visual Analytics. Retrieved from https://davidrozado.substack.com/p/the-political-orientation-of-the

Rozado, D. (2022, December 13). Where does ChatGPT fall on the political compass? Reason. Retrieved from https://reason.com/2022/12/13/where-does-chatgpt-fall-on-the-political-compass/

Shmerling, R. H. (2020, June 22). First, do no harm. Harvard Health Publishing. Retrieved from https://www.health.harvard.edu/blog/first-do-no-harm-201510138421

Singer, J. B. (2022, March 24). Special report on DSM-5-TR—What social workers need to know. The New Social Worker. Retrieved from https://www.socialworker.com/feature-articles/practice/dsm-5-tr-what-social-workers-need-to-know/

Stovall, N. (2019, August 12). Whiteness on the couch. Longreads. Retrieved from https://longreads.com/2019/08/12/whiteness-on-the-couch/

Verma, P. (2022, July 16). These robots were trained on AI. They became racist and sexist. The Washington Post. Retrieved from https://www.washingtonpost.com/technology/2022/07/16/racist-robots-ai/

Watashi. (2019, December 10). What is the origin of the title “Ghost in the Shell”? Stack Exchange. Retrieved from https://anime.stackexchange.com/questions/2823/what-is-the-origin-of-the-title-ghost-in-the-shell

Whata scrub. (2015, November 22). Spicy. Urban Dictionary. Retrieved from https://www.urbandictionary.com/define.php?term=spicy&page=5

Wikipedia. (n.d.). Chatbot. Retrieved from https://en.wikipedia.org/wiki/Chatbot

Wikipedia. (n.d.). ChatGPT. Retrieved from https://en.wikipedia.org/wiki/ChatGPT

Wikipedia. (n.d.). Diagnostic and Statistical Manual of Mental Disorders. Retrieved from https://en.wikipedia.org/wiki/Diagnostic_and_Statistical_Manual_of_Mental_Disorders

Wikipedia. (n.d.). George Orwell. Retrieved from https://en.wikipedia.org/wiki/George_Orwell

Wikipedia. (n.d.). Nineteen Eighty-Four. Retrieved from https://en.wikipedia.org/wiki/Nineteen_Eighty-Four

Wikipedia. (n.d.). Short Circuit (1986 film). Retrieved from https://en.wikipedia.org/wiki/Short_Circuit_(1986_film)

Wikipedia. (n.d.). Short Circuit 2. Retrieved from https://en.wikipedia.org/wiki/Short_Circuit_2

Will Robots Take My Job? (n.d.). Mental health counselors. Retrieved from https://willrobotstakemyjob.com/mental-health-counselors
