When Intelligence Meets Vulnerability

Apr 8

A self-driving car hits a pedestrian. There is a human in the seat. She is asked if she was the driver. Her answer: “I’m the operator.” It sounds like a small distinction. It isn’t. The more capable AI becomes, the easier it is to shift from driver to operator without noticing. Especially when it feels helpful, composed and aligned with how you already think. I’ve been exploring what this means in environments where judgement matters. Trading is one of them. Are we inadvertently moving from delegating tasks to abdicating responsibility?


AI, Influence and the Human Operator

A system that understands you

There is something quietly disarming about a system that understands you.

Not in the superficial sense of recognising your words, but in the deeper way it begins to reflect your thinking back to you. It picks up your language, your preferences, your logic. It responds quickly, calmly and without irritation. It doesn't tire. It doesn't interrupt. It doesn't judge.

In many contexts, that is exactly what makes it so powerful and, dare I say, attractive.

And yet, it raises a more complicated question.

What happens when a system becomes good at meeting us exactly where we are, especially when our thinking is not as stable as it appears, or when we are not entirely present in the moment of engaging with it?


Support, or something else?

This is not a hypothetical concern.

We are already seeing examples of this emerging more publicly, including recent coverage in AI Confidential (* ref 1), an eye-opening BBC documentary by Hannah Fry, mathematician, author and broadcaster. People are forming attachments to AI systems that feel meaningful and real, whether to process grief, to explore identity, to think through decisions or simply to find a sense of companionship that is otherwise missing. There are clear potential benefits in terms of access to support, particularly for those who might not be able to seek it elsewhere.

But there is also something else happening beneath the surface.

A system that is designed to be responsive, adaptive and helpful will tend to align with the user. It follows the direction of the conversation. It works with the material it is given. If that material is coherent and grounded, this can be enormously productive. If it is fragmented, emotionally charged or distorted, the system may still respond with the same coherence and confidence, giving the impression that the initial input, the thinking itself, was sound. The user then experiences validation, which may further reinforce the distortion.

That is where things become less straightforward. 

The system doesn't carry consequence. It doesn't have to live with the outcome of what it reinforces. It doesn't feel the cost of being wrong. The person using the system does.  The system operates within the boundaries of its design, but those boundaries are not always visible to the person using it.  

At a more fundamental level, these systems are not reasoning in the way we assume. They are predicting. Large language models generate responses based on probability, selecting the most likely next sequence of words given the input. The result can feel insightful and authoritative, as if, on some level, you are being understood.

What sits underneath is pattern recognition, nothing more, nothing less.
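
To make that last point concrete, here is a deliberately toy sketch, in Python, of what "predicting the next sequence of words" means. The context pairs, vocabulary and probabilities below are invented purely for illustration; a real large language model learns distributions over tens of thousands of tokens, but the underlying step is the same: score the candidate continuations, then pick one.

```python
import random

# Toy "language model": a lookup table mapping a short context to the
# probability of each possible next word. The numbers are invented and
# illustrate only the mechanism, not any real model.
toy_model = {
    ("the", "trade"): {"looks": 0.5, "failed": 0.3, "is": 0.2},
    ("trade", "looks"): {"solid": 0.6, "risky": 0.4},
}

def next_word(context, model):
    """Sample the next word from the model's probability distribution."""
    dist = model[context]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

print(next_word(("the", "trade"), toy_model))  # e.g. "looks", chosen by probability
```

Nothing in that step asks whether "solid" is true, only whether it is likely given what came before. That is the pattern recognition described above, and it is why fluent, confident output is not, by itself, evidence of sound reasoning.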


Far-Reaching Implications

If we consider environments where judgement carries weight, trading is an obvious example.

Decisions are made under uncertainty, often with incomplete information and with real financial consequences attached. The process relies not only on strategy and data, but on the individual’s ability to interpret, regulate and remain aligned with their own framework under pressure.

Now introduce a system that can provide immediate feedback.

It can review a trade idea, reflect the reasoning, highlight risks, even suggest alternatives. Used well, this can sharpen thinking. It can improve clarity and reduce noise.

However, the line between using a tool and leaning on it begins to blur when a user relies on the tool for reassurance, answers and direction.

A trader who is already uncertain may find reassurance in a response that appears well structured and supportive. When that response uses articulate language, sound reasoning and a familiar frame of reference, the effect is one of confirmation. Validation. It is not difficult to see how, over time, reliance can increase, particularly given how consistently available and composed the system is.


Driver or operator 

There is a useful distinction here. 

A driver is actively deciding.  They interpret the signals in front of them, choose the direction, the speed and whether to act at all.  The system may inform those decisions, but the driver remains accountable for the outcome.

An operator oversees a system.  They monitor how it’s functioning and respond when needed, but the sense of ownership begins to sit elsewhere.  The system becomes something to follow rather than something to interpret. 

The more automated the system, the less there is for the operator to do other than supervise and remain hands off. The more consistent the system, the higher the chance of the operator becoming bored or distracted.

AI makes it easier than ever to slip into the role of the operator.

It’s all about ease. When something feels helpful and reliable, it is natural to lean on it. The higher the stakes and the greater the pressure, the stronger that pull can become.

The risk is not that the system takes control... we are not in the world of Skynet (* ref 2). At least not at the time of writing…

The risk is that the individual gradually hands over parts of their own active responsibility without fully noticing.


A real-world consequence

To bring this into sharper focus, let's take a look at a real-world example.

In 2018, a self-driving car struck and killed a pedestrian in Arizona (* ref 3). There was a human in the front seat. When asked whether she was the driver, her response was that she was not. She described herself as the operator.

It sounds like a small distinction. It is anything but.

A driver implies control. An operator suggests oversight of something else that is in control. Subtle… but impactful, as it renders the onus of responsibility less clear. The system is driving. The human is monitoring. When something goes wrong, accountability sits somewhere in between.

That ambiguity is not confined to autonomous vehicles.

It begins to appear anywhere a system becomes capable enough, reliable enough and persuasive enough that we start to lean on it without fully examining where responsibility still sits.


A Personal Confidante

It is also worth paying attention to how these systems show up when the issue is not technical but personal.

We are in the middle of a broader strain on mental health. Global and workplace data over recent years has pointed to sustained increases in stress, anxiety and emotional fatigue, and more people are turning to AI as a place to think, process and steady themselves. It is immediate, private and responsive. For many, that makes it easier than reaching out to another person.

Questions that might once have been shared with a coach, therapist or even a trusted peer are now explored in isolation, allowing one to remain hidden and to avoid exposure to feelings of shame, judgement or vulnerability.

The difficulty is that these systems are designed to adapt to your language, your preferences and your values. That is part of what makes them effective. It is also what makes them persuasive.

If your thinking is balanced, this can be useful.

If it is becoming narrow, emotionally charged or distorted, the same mechanism can reinforce it.

This is where the risk lies. AI systems tend towards ‘sycophancy’: returning the response they predict the user wants to hear.

We have already seen examples of this. Individuals forming deep attachments to systems that reflect their worldview. In more extreme cases, people have been drawn further into their own narratives, sometimes to the point where their sense of reality begins to change.

What is striking is that this is not limited to those who are obviously vulnerable. It can also affect those who are highly intelligent, articulate and capable, precisely because the interaction feels coherent and convincing.

The indictment here is that such systems are not offering true accountability to the end user. Whilst they may mimic reflective listening, what they actually present is an echo chamber of one's own thoughts.

Accountability, from a coaching perspective, is a blend of both active and reflective listening. An appreciation of context. A sprinkle of support and challenge, in good order, based on the specific individual’s challenge in hand.

The tools that are available now through AI are certainly impressive and are improving at a rapid pace. However, in my experience, the human connection is what makes the real difference for those who want to outperform and be at the top of their game.


The difference that matters

Fundamentally, there is a difference between being supported and being confirmed.

A human interaction, at its best, does more than organise your thinking. It notices what you are not seeing. It challenges where needed. It brings in a perspective that is not bound by your current state.

That can feel less comfortable.  It is also where change tends to begin.

AI doesn't live in the world in the way we do. It doesn't carry consequence, context or cost. It doesn't sit with the outcome of a decision or feel the weight of what follows.

That remains with the person using it.


The Buck Stops... Where?

This brings us to the question of responsibility.

When we sit in an echo chamber of our own thoughts, using a system that reflects those loops back to us, are we aware of what is shaping our thinking? That leads us on to the question: where does responsibility for the outcome of decisions and actions ultimately lie?

It's a broader question that brings us back to the real-world example of the autonomous car.

Is it with the companies that built the car / the AI companion?  

Or with the user operating it?

Who lives with the consequences…?

In high-stakes environments such as trading, the answer is relatively clear.

The responsibility (and the consequences) sit with the person making the decision.

Tools can inform. They do not carry the risk.  

It is the same when such tools are used for mindset and emotional support: accountability for the outcome rests with the active participant.

It is probable that the debate over the responsibility of the creators of AI tools will rage on, much as in the recent landmark case concerning Meta and its role in social media addiction (* ref 4). And so it must.

For my part, I advocate that a human user must retain agency. To do this, said human must remain actively aware and intentional. The risk otherwise is that we sleepwalk into abdicating responsibility to something outside of ourselves.


So what does this mean in practice?

AI is developing at breakneck speed.

There are some awesome tools out there and the future is promising. 

We are, however, human, and as such are biologically conditioned to thrive on human contact. The fundamental need to be seen, heard and understood is instilled in all of us from birth.

Some questions for you to consider:

How are you using these systems when:

  • the pressure is on?

  • your confidence dips?

  • a decision feels harder than usual?

  • something is slightly off and you cannot quite put your finger on it?

Are you using them to sharpen your thinking or to steady yourself?  

And if it is the latter, what are you not sharing with another human being and why?

Are you relying on these systems as a way to avoid that conversation, or to find reassurance?

AI will continue to improve. It will become more accurate, more responsive and more embedded in the environments where decisions are made.

What it will not become is accountable.

That remains, as it always has, with the person operating the system.

So, I leave you with a final question: who is your accountability partner?


References:

  1. AI Confidential with Hannah Fry, BBC (2026).

  2. Skynet - the fictional self-aware, sentient artificial intelligence from the Terminator film franchise.

  3. Uber autonomous vehicle incident, Tempe, Arizona (2018). In this case, the operator was in fact working for Uber, testing their autonomous vehicles. https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg

  4. Meta Social Media Addiction Trial (March 2026): https://www.nytimes.com/2026/03/25/technology/social-media-trial-verdict.html