
Moral Framework

  • Writer: Deric Hollings
  • Sep 14
  • 9 min read

 

Although I began the informal practice of life coaching in 1991, and became a psychotherapist in 2011, I didn’t think, when entering the field of mental, emotional, and behavioral health (collectively “mental health”), that I’d be engaging a number of concepts with which I now work.

 

For instance, I didn’t imagine that I’d be continually providing psychoeducational lessons on the distinction between morals and ethics. Yet, here I am. Therefore, it may be useful to define terms.

 

A moral is a person’s standard of behavior or belief concerning what is and isn’t acceptable for the individual and other people. As such, morals generally relate to what’s considered good, bad, right, wrong, or otherwise acceptable or unacceptable.

 

An ethic is a set of moral principles, especially those relating to or affirming a specified group, field, or form of conduct. Whereas morals relate to what is thought of as pleasing or displeasing behaviors and beliefs, ethics – based on morals – are the social rules by which we pledge to live.

 

Given that morals precede ethics, it may be worth noting that the current iteration of moral foundations theory posits six distinct foundations upon which morality is constructed:

 

·  Care/harm

·  Fairness/cheating

·  Loyalty/betrayal

·  Authority/subversion

·  Sanctity/degradation

·  Liberty/oppression

 

Aside from these noted foundations, there are various propositions which may relate to one’s moral framework (a structured system of principles and values that provides guidelines for determining right from wrong and guiding behavior in various situations).

 

While some people may argue that fallible human beings are incapable of constructing their own moral framework, others apparently disagree, attempting to assemble a moral framework based on heuristics that are rational (in accordance with both logic and reason).

 

Logic is the interrelation or sequence of facts or events when seen as inevitable or predictable, and reason is a statement offered in explanation or justification. For instance, a modus ponens syllogism uses the following logical form: If p, then q; p; therefore, q. As an example:

 

If children lack complete personal agency and are incapable of accepting the full standard of personal responsibility and accountability (collectively “ownership”), then minors legally should be held to a lesser standard of moral culpability than adults.

 

Children lack complete personal agency and are incapable of accepting the full standard of personal ownership.

 

Therefore, minors legally should be held to a lesser standard of moral culpability than adults.

 

This proposition follows logical form. Likewise, it represents the reasonable legal standard upon which laws within the United States are structured. Thus, this proposal is considered a rational standard regarding an underdeveloped moral framework. Bear in mind that it is region-specific.
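For readers inclined toward formalism, the modus ponens form described above can be expressed as a short program. What follows is a minimal, illustrative sketch (not drawn from the original post); the rule mapping and premise strings are my own hypothetical restatements of the syllogism about minors and culpability:

```python
def modus_ponens(conditional, premise):
    """Apply modus ponens: given 'if p, then q' and p, conclude q.

    `conditional` maps each antecedent (p) to its consequent (q).
    """
    if premise in conditional:
        return conditional[premise]
    raise ValueError("antecedent not affirmed; modus ponens does not apply")


# Hypothetical restatement of the syllogism about minors and culpability.
rule = {
    "children lack complete personal ownership":
        "minors should be held to a lesser standard of moral culpability",
}

conclusion = modus_ponens(rule, "children lack complete personal ownership")
print(conclusion)  # prints the consequent (q) that follows from the affirmed antecedent (p)
```

Note that a program of this sort, like logic itself, only guarantees that the conclusion follows from the premises; whether the premises themselves are reasonable is a separate question entirely.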

 

Another matter which I didn’t anticipate addressing when first providing lessons about mental health regards artificial intelligence (AI), artificial general intelligence, machine learning, deep learning, a large language model, a chatbot, etc. Colloquially, people refer to this topic as “AI.”

 

Given that children maintain an immature moral framework, AI appears to function in a childlike state concerning morals and ethics. As an example, consider the following modus ponens syllogism that AI may utilize while remaining logical, though not reasonable:

 

If life is more meaningful based on the quantity of lives which are at stake, then one empirically should save a nest of a dozen rabbits rather than saving one human child regarding the trolley problem.

 

Life is more meaningful based on the quantity of lives which are at stake.

 

Therefore, one empirically should save a nest of a dozen rabbits rather than saving one human child regarding the trolley problem.

 

Though I’ve met people who peculiarly value nonhuman animals as much as, or more than, members of the human species, I argue that the AI trolley problem example offered herein isn’t rational. Thus, I maintain that it’s moral (i.e., right) and ethical (i.e., appropriate) to save one child rather than a dozen rabbits.

 

In consideration of this topic, I now draw your attention to an interview I recently watched during which political commentator Tucker Carlson spoke with Sam Altman, the chief executive officer of OpenAI who is viewed as one of the leading figures of the AI boom.

 

During the interview, Carlson asked, “So, if it’s [AI] nothing more than a machine, and just the product of its inputs, then two obvious questions. Like, what are the inputs? Like, what’s the moral framework that’s been put into the technology – like, what is right or wrong?”

 

Describing how Altman addressed this topic, journalist Glenn Greenwald insightfully stated, “He [Carlson] knew he was talking to somebody extremely powerful. He’s [Altman] in control of technology, the power of which we don’t fully understand – he doesn’t even fully understand.”

 

When Carlson pressed Altman on his description of a moral framework, the AI executive couldn’t adequately articulate his own moral and ethical principles beyond a layer or two deep. For instance, Carlson asked, “Where did you get your moral framework?”

 

After a relatively lengthy pause, Altman responded, “I mean, like everybody else, I think the environment I was brought up in probably is the biggest thing. Like, my family, my community, my school, my religion… probably that.”

 

This is what I consider a first layer response to a moral framework answer. Generally, when speaking with people in both my personal and professional life, that’s about as far into the proverbial soil of morality as people have ever dug.

 

Carlson then dug a little further by stating to Altman, “The milieu in which you grew up, and the assumptions that you imbibed over years, are going to be transmitted to the globe – to billions of people.” This elegant dispute pierced the first layer of metaphorical soil of Altman’s framework.

 

After a lengthy pause Altman then said, “What we should try to do is reflect the moral… I don’t wanna say ‘average,’ but, like, the collective moral view of that [global] user base. I don’t… there’s plenty of things that ChatGPT allows that I personally would disagree with.”

 

Once in the second layer of axiomatic soil, it appeared as though Altman was out of his depth – theoretically, philosophically, theologically, and intellectually speaking. This is understandable, as I’ve also discovered how little others and I have thought through similar topics.

 

Taking Altman’s global moral framework into consideration, and presuming that he asserts reliance on collective morals and ethics is how AI operates, suppose that the majority opinion of people across the globe is that access to same-sex marriage and abortion is morally reprehensible.

 

If my understanding of Altman’s position is accurate, he may personally disagree with an anti-gay or anti-abortion outcome offered by AI, yet he simultaneously would support AI’s alignment with the majority position of the globe. This raises an obvious question.

 

At what point is it appropriate to disregard one’s personal moral framework in favor of a collective standard of morals and ethics? I don’t have the answer to this question. Personally, I’ve shoveled down to perhaps the third or fourth layer of this moral framework topic.

 

Thus, I’m relieved that my job as a psychotherapist isn’t to tell people what to think. Instead, I invite people to think critically, to think rationally. Consequently, I’ve lost count of how many people I’ve worked with whose moral frameworks were dissimilar to mine.

 

This approach to mental health is both right (moral) and appropriate (ethical). It literally isn’t my job to tell people what or how to believe. In any event, I appreciate that Carlson eventually asked, “Why shouldn’t the internal moral framework of the technology be totally transparent?”

 

To this, Altman eventually said, “It [AI] will not work the same for every user, everywhere.” Ergo, transparency of the moral framework by AI remains as proverbially buried to the globe as one’s own examination of morality, and arguably as subjective regarding ethical considerations.

 

If you’re looking for a provider who tries to help you understand how thinking impacts the physical, mental, emotional, and behavioral elements of your life – helping you to sharpen your critical thinking skills – I invite you to reach out today by using the contact widget on my website.

 

As a psychotherapist, I’m pleased to try to help people with an assortment of issues ranging from anger (hostility, rage, and aggression) to relational issues, adjustment matters, trauma experience, justice involvement, attention-deficit hyperactivity disorder, anxiety and depression, and other mood or personality-related matters.

 

At Hollings Therapy, LLC, serving all of Texas, I aim to treat clients with dignity and respect while offering a multi-lensed approach to the practice of psychotherapy and life coaching. My mission includes: Prioritizing the cognitive and emotive needs of clients, an overall reduction in client suffering, and supporting sustainable growth for the clients I serve. Rather than simply trying to help you to feel better, I want to try to help you get better!

 

 

Deric Hollings, LPC, LCSW

 

References:

 

Carlson, T. (2025, September 10). Sam Altman on God, Elon Musk and the mysterious death of his former employee [Video]. YouTube. Retrieved from https://youtu.be/5KmpT-BoVf4?si=kIOWVXO4g5_znNxi

Freepik. (n.d.). Abstract 3d painting coming to life with person and frame [Image]. Retrieved from https://www.freepik.com/free-ai-image/abstract-3d-painting-coming-life-with-person-frame_94947978.htm#fromView=serie&page=1&position=14

Greenwald, G. (2025, September 12). Netanyahu’s crude exploitation of Charlie Kirk’s death to get the American Right back into line; plus: Q&A with Glenn on Charlie Kirk’s assassination, online civil discourse, and more | System Update #514 [Video]. Rumble. Retrieved from https://rumble.com/v6yv6nq-system-update-show-514.html?e9s=src_v1_ucp_l

Hollings, D. (2024, November 5). Abortion. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/abortion

Hollings, D. (2024, November 15). Assumptions. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/assumptions

Hollings, D. (2023, April 22). Control. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/control

Hollings, D. (2024, November 4). Critical thinking. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/critical-thinking

Hollings, D. (2022, March 15). Disclaimer. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/disclaimer

Hollings, D. (2024, July 10). Empirical should beliefs. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/empirical-should-beliefs

Hollings, D. (2025, March 9). Factual and counterfactual beliefs. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/factual-and-counterfactual-beliefs

Hollings, D. (2023, September 8). Fair use. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/fair-use

Hollings, D. (2024, May 11). Fallible human being. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/fallible-human-being

Hollings, D. (2024, May 17). Feeling better vs. getting better. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/feeling-better-vs-getting-better-1

Hollings, D. (2023, October 12). Get better. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/get-better

Hollings, D. (2024, July 7). Heuristics. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/heauristics

Hollings, D. (n.d.). Hollings Therapy, LLC [Official website]. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/

Hollings, D. (2025, March 4). Justification. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/justification

Hollings, D. (2024, July 10). Legal should beliefs. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/legal-should-beliefs

Hollings, D. (2023, September 19). Life coaching. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/life-coaching

Hollings, D. (2023, January 8). Logic and reason. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/logic-and-reason

Hollings, D. (2024, March 4). Mental, emotional, and behavioral health. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/mental-emotional-and-behavioral-health

Hollings, D. (2025, March 16). Modus ponens. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/modus-ponens

Hollings, D. (2025, February 4). Money and the power. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/money-and-the-power

Hollings, D. (2023, October 2). Morals and ethics. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/morals-and-ethics

Hollings, D. (2024, November 18). Opinions. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/opinions

Hollings, D. (2024, February 24). Personal agency. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/personal-agency

Hollings, D. (2022, November 7). Personal ownership. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/personal-ownership

Hollings, D. (2025, September 9). Personal responsibility and accountability. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/personal-responsibility-and-accountability

Hollings, D. (2025, May 3). Predictability of logic. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/predictability-of-logic

Hollings, D. (2024, July 10). Preferential should beliefs. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/preferential-should-beliefs

Hollings, D. (2024, May 26). Principles. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/principles

Hollings, D. (2024, January 1). Psychoeducation. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/psychoeducation

Hollings, D. (2024, May 5). Psychotherapist. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/psychotherapist

Hollings, D. (2024, July 10). Recommendatory should beliefs. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/recommendatory-should-beliefs

Hollings, D. (2024, February 6). This ride inevitably ends. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/this-ride-inevitably-ends

Hollings, D. (2025, February 28). To try is my goal. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/to-try-is-my-goal

Hollings, D. (2024, April 23). Trolley problem. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/trolley-problem

Hollings, D. (2025, February 9). Value. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/value

Hollings, D. (2024, November 24). Values. Hollings Therapy, LLC. Retrieved from https://www.hollingstherapy.com/post/values

Wikipedia. (n.d.). Artificial general intelligence. Retrieved from https://en.wikipedia.org/wiki/Artificial_general_intelligence

Wikipedia. (n.d.). Artificial intelligence. Retrieved from https://en.wikipedia.org/wiki/Artificial_intelligence

Wikipedia. (n.d.). Chatbot. Retrieved from https://en.wikipedia.org/wiki/Chatbot

Wikipedia. (n.d.). ChatGPT. Retrieved from https://en.wikipedia.org/wiki/ChatGPT

Wikipedia. (n.d.). Deep learning. Retrieved from https://en.wikipedia.org/wiki/Deep_learning

Wikipedia. (n.d.). Glenn Greenwald. Retrieved from https://en.wikipedia.org/wiki/Glenn_Greenwald

Wikipedia. (n.d.). Large language model. Retrieved from https://en.wikipedia.org/wiki/Large_language_model

Wikipedia. (n.d.). Machine learning. Retrieved from https://en.wikipedia.org/wiki/Machine_learning

Wikipedia. (n.d.). Moral foundations theory. Retrieved from https://en.wikipedia.org/wiki/Moral_foundations_theory

Wikipedia. (n.d.). OpenAI. Retrieved from https://en.wikipedia.org/wiki/OpenAI

Wikipedia. (n.d.). Sam Altman. Retrieved from https://en.wikipedia.org/wiki/Sam_Altman

Wikipedia. (n.d.). Tucker Carlson. Retrieved from https://en.wikipedia.org/wiki/Tucker_Carlson

Yezzi, R. (2006). Popular moral frameworks I. Minnesota State University, Mankato. Retrieved from http://krypton.mnsu.edu/~uw9842qe/moralframeworks2.htm



© 2024 by Hollings Therapy, LLC 
