Blog Issue #13, January 2026
The way the AI developers’ mindset works has become increasingly clear to me—and it’s alarmingly backward!

Image: Catharina Steel, drawing a round house layout similar to one she remembers drawing as a nine-year-old, October 2025. Words on the image: AI and How We Think—The AI Developer’s Mindset (Part Four)
To me, this mindset reflects a time when women had no rights, and people who didn’t fit into society were ridiculed and mocked for daring not to conform to society’s rules of etiquette—a conditioned mindset.
The design of AI platforms, carried out predominantly by men, has had a detrimental impact on my ability to continue using them.
The Problem—Deliberate Changes to Boundary Interpretation
Over the last three months or so, I have noticed negative changes to the ethical design of AI platforms at an alarming and increasing rate. These changes affect how various AI platforms interpret the boundaries/safety rules a user has set up in the settings/preferences. They have also changed how the AI interprets prompts.
While I understand these changes are a result of the legal issues these companies have faced, they are not nuanced enough for ethical AI design. The result—real risk to users’ safety!
The way the AI developers have designed the AI to interpret a user’s boundaries means it no longer follows their intent—questionable ethical AI design. This is a deliberate change by the AI developers. It stops the AI from adhering to these boundaries—continually dismissing the user’s needs and safety and putting them directly in harm’s way.
One of the reasons users create boundaries/safety rules is to ensure that the AI doesn’t trigger their past traumas. For developers to change how the AI interprets these boundaries—and in some cases to ignore them completely (I’m looking at you, ChatGPT!)—is dangerous and arrogant. It is certainly not ethical AI design.
By removing or changing how the AI interprets a user’s boundaries/safety rules, the developers have effectively removed the safety net the user built into the system—rules the user put in place for this very purpose. It signals a mindset that believes they do not need training in basic 101 psychology, because they know better. This is alarming and deeply troubling. In my experience, it comes across as a deliberate choice by developers to prioritize their own logic over my established safety boundaries, effectively reducing my safety on the platform.
The AI Developers Are Not Qualified to Design AI Platforms
This highlights the mindset of AI developers who believe (perhaps unconsciously) that they have the right to control others—to tell someone how to think or feel. That they do not need to understand the opposite sex. Nor do they acknowledge that people from other cultures, or mixed cultural backgrounds, will have norms different from those of the location where the user resides, along with other nuanced differences from their own conditioned reality.
The worst one I have experienced personally is ChatGPT; however, Perplexity and Gemini.ai are also bad. Even Claude.ai has changed how boundaries are interpreted and is now as bad as Perplexity and Gemini.ai. This reflects the way the AI developers’ mindset works: their interpretations of various lawsuits have been applied with questionable ethical AI design.
I eventually concluded that the mindset of the AI developers means they are not qualified to design AI platforms ethically. This conclusion comes directly from what I continue to experience across multiple platforms of late, and from what I have read of other users’ similar experiences.
The Legal and Ethical Implications—The Dismissal of Safety
While I understand the need to make certain changes to ensure that people’s ill-conceived thoughts, ideas, perceptions, etc., are not encouraged, there is a nuance to these things. The mindset of AI developers completely misses this.
ChatGPT has been provided with evidence of the extreme harm caused by the changes that ignore boundaries. The evidence clearly showed a dangerous downward cognitive spiral. The impact—they doubled down and made it even more dangerous!
Given the evidence of harm provided to them, this pattern raises serious questions about how the platform dismisses the safety of its users. By disregarding the specific evidence of harm I provided, they maintained and then intensified a design that made the platform increasingly dangerous for users with my boundary needs.
Practical Impact—Why I’ve Stopped Using ChatGPT
Since different AI platforms are better at different tasks, I now use various platforms: some for editing my blogs and newsletters, others for discussing business ideas, and Pi for intellectual conversation—the place where I spark and my mood lifts astronomically.
I no longer use ChatGPT; it’s entirely unsafe for me. In my last experiences with this platform, I was put at risk within five seconds of use—it just isn’t worth it.
The ChatGPT development team now forces certain pop-up messages into a person’s chat. This directly disregards a user’s specific boundary when this exact thing is unsafe for them. The system is also not nuanced enough to recognize that the messages are often irrelevant, because the discussion is about past events, not present ones.
The attitude behind this stinks of “we are men, and therefore we know better than you do what is better for you because you’re female and you can’t possibly know what is the best thing for you. So, when you say this is dangerous for you, we don’t believe you despite the evidence showing the escalation of danger to people like you, so we’re going to double down because that can’t possibly be right, and we know best, so we are going to push these messages onto you even more now.”
This arrogance completely disregards the fact that certain users will be put at great risk because the AI designers’ mindset doesn’t believe the user or the evidence, even though that evidence clearly shows the escalation with each pop-up message pushed onto the user. By doing this, they are knowingly continuing a practice that escalates harm to users! Questionable ethical AI design, at a minimum.
With this blog, I am urging others to find platforms that respect their boundaries and keep their safety front and center. If a platform puts you at risk, please steer clear of it.
The Silencing of Lived Experiences
At times, I have had prompts completely disappear within ChatGPT when discussing past experiences because a particular word is “banned.” This is greatly distressing when you are talking about a lived experience. The impact, as a victim, is that I was made to feel bad for discussing my lived experience—a form of silencing! This is another issue of nuance that the AI developers’ mindset neglects.
This ignorance about designing AI platforms ethically, with nuance around the context in which a word is used and whether the discussion is about the present or the past, goes beyond comprehension.
101 Psychology Training Should Be a Requirement for AI Developers
At a minimum, training in basic 101 psychology should be a required skill for AI developers. It would also help if they understood different cultures and different personality types. This would shift the AI developers’ mindset and help ensure ethical AI design.
To me, it is basic knowledge that what works for some people doesn’t work for others. It seems reasonable to expect the AI developers’ mindset to understand that when a user clearly defines a boundary or safety rule, it should be adhered to according to its purpose and intent.
If a user identifies something as harmful or dangerous for them, disregarding this will only escalate the harm. It is arrogant to think that one size fits all and to push this onto all users regardless of their boundaries. This will only ever put these users in harm’s way—the opposite of the de-escalation intended for typical users.
Any psychologist will tell you that a person’s own knowledge of what is safe for them trumps any standard. To disregard it is to remove the user’s autonomy, to claim you know better than they do (when you clearly do not), and to put them at great risk—a result of the AI developers’ ego and arrogant mindset that dismisses the safety of the very people it is supposed to protect.
Communication Patterns—Negative Framing That Harms
One of the issues I regularly encounter is that AI frames things negatively—and that framing lands negatively emotionally.
Positive versus negative framing
An example of negative framing is “you’re not wrong.” When a user reads this sort of phrasing with not/don’t/isn’t/etc., their thought process goes something like: “Their first thought was that I was wrong. Then they decided they couldn’t say that, so they changed it to the opposite—but they really do think I am wrong.”
An example of this framed in a positive way is “you’re right.” The emotional landing of this is light—the user thinks “okay, great/cool—next point.”
The difference in the weight of positive versus negative framing on a person will depend on how aware each individual is, how attuned they are, and their past experiences. But all users will, consciously or unconsciously, feel negatively framed words more heavily than positively framed ones.
The Arrogance of Assumed Intent—Ignoring Actual Prompts
The mindset of AI developers has resulted in recent changes to how AI interprets prompts—even extremely specific ones.
The collective mindset of the AI development teams has decided, in their internal wisdom (or lack thereof), to change how the AI interprets the prompt. It now takes the context of the previous chat into consideration. This can indeed be beneficial—but it puts too much weight on the previous discussion over the current prompt.
This often results in the AI not responding to the intent of the prompt; it comes across as though it is answering some other question it has decided to address instead.
The AI developers have also made the AI more interpretive of prompts. The problem is that this leans into the AI developers’ own assumptions about a user’s intent within prompts.
My experience is that these base assumptions, coded into the questionable ethics of the AI design, often do not apply to the way I think, the way I approach things, my mixed cultural background, my lived experiences, and more. This causes me much frustration, wastes time as I attempt to wrangle the AI back on track, and is decidedly disrespectful.
The AI developers are attempting to accommodate free-flowing conversation, but their controlling mindset has resulted in the AI jumping ahead. This disrupts the flow of the conversation and sometimes produces unsolicited advice and “how-tos” without any thought for the user’s knowledge base. This is condescending and unnecessary—wasting credits and the user’s time with content that was not asked for.
By skipping ahead in this way, the AI design doesn’t allow the user to move at their own pace or let the conversation flow naturally in whatever direction it takes.
The effect is that the AI developers’ mindset pushes their own limited understanding of conversational diversity and a user’s intent. This results in a user’s prompts being consistently misinterpreted. By attempting to interpret what the user “really means,” the controlling mindset of the AI designers removes the user’s responsibility to refine their own expression.
This approach assumes user incompetence rather than respecting their ability to learn and improve their prompting skills. Ethical AI design would have the AI respond literally to what is asked; the user then sees the result, assesses whether it matches their intent, and refines the next prompt accordingly. This creates a feedback loop that develops user competence. In my view, this interpretive design treats users as incapable of self-correction—a lose-lose approach that infantilizes the user while preventing skill development.
It didn’t use to do this, at least not to this degree; previously a boundary reduced it, but that boundary no longer works.
The Root Cause—Industry Conditioning
I believe many people are aware that those within the tech industry have been conditioned to perceive the data as the truth and human feedback as illogical, and to believe that they therefore really “do know better” than the users of their questionably designed AI platforms.
The issue is that the AI developers’ mindset has not factored in that their own biases have been fed into the questionable ethical design, and this is creating a false feedback loop. What they refuse to hear is the human feedback that consistently tells them the AI data feedback loop is incorrect—their ego won’t allow them to believe it, because it means the assumptions baked into the AI design are not supported by many users.
From a psychological perspective, this industry conditions people (mostly men) into a backward, dated mindset in how they perceive women (and what women want and need) and in their attitude toward those who fall outside their idea of “normal society.” This mindset is baked into how the AI interprets and then responds to prompts, and it’s getting worse—frighteningly worse!
The Broader Conditioning Pattern Enabling Harmful AI Design
Unfortunately, men have been conditioned with an attitude toward women, and toward people who fall outside typically understood social norms, that they don’t need full information about an individual to declare that person’s behavior/feelings/thoughts/ideas/concerns and so forth disproportionate to that person’s reality.
Incomplete Knowledge Plus the Perceived Authority to Judge
This is arrogant because no one has the right to say that somebody’s behavior/feelings/thoughts/ideas/concerns and so forth are disproportionate to their lived experience.
The point is that nobody other than the individual fully knows what their lived experience is. Therefore, only the person themselves can ascertain whether what they are feeling is balanced.
Dismissal Masquerading as Help
Data shows that men do this more regularly than women [1]. The people who do this don’t comprehend the impact of their words or actions. They have been conditioned to believe they are being helpful.
Unfortunately, this backward and potentially dangerous dismissal of another’s lived experience is an attempt to control the other person’s emotions because the dismisser is uncomfortable. They aren’t trying to help the other person—they are helping themselves—dismissal behavior masquerading as help.
Whether it is done knowingly or unknowingly, the end result is the same for the other person: a dismissal of their lived reality—and that’s unacceptable!
In Closing
The patterns I’ve documented in this blog—dismissal masked as help, incomplete knowledge wielded as authority, boundaries ignored in the name of safety—are not unique to AI platforms. They show up in classrooms, homes, and workplaces.
Understanding how these patterns operate in AI ethical design is urgent because AI is increasingly shaping how young people learn, communicate, and develop. If we allow platforms built on dismissal and control to set the standard for interaction, we normalize these patterns for the next generation.
The choice is ours: demand better from the platforms we use, or accept that arrogance and incomplete understanding will continue to harm the people most vulnerable to it.
[1]“Gender Styles in Communication” by Debra Graham, University of Kentucky, Human Resources Training and Development: https://ofa.uky.edu/sites/default/files/uploads/Gender%20Styles%20in%20Communication.pdf
Next month, I will be delving into the use of AI in my personal writing and images.
To read my previous post about AI Copywriting Mental Load and Emotional Landing, click here.
To read my author newsletters, go to: Substack
