
We have given AI the keys to our kingdom

Derek Watson
N2 Blog



1. We Always Fail the Marshmallow Test.

The first time I was introduced to AI was as a child in school, but the first time it really grabbed my attention was in 1997, when IBM's Deep Blue chess computer beat Garry Kasparov. I have always read and thought about the impact of artificial intelligence on humanity: the good, the bad, and the downright ugly. Of late, like most of us I imagine, I have been digging in to learn, understand, and imagine the impact that large language models (LLMs), a type of artificial intelligence that can generate text, translate languages, and answer questions in a human-like way, will have on our species.
While I fully understand and am uber excited by the power of AI as a whole, with its potential to solve so many issues, all the chatter, especially from leaders in the industry, has driven me to pen my thoughts on how seemingly limited our own intelligence is.
First, calling AI 'artificial' seems to give us a level of comfort: we are in control, can identify it, and, if necessary, can separate ourselves from it. From one word, we are given this illusion. To frame it another way: when humans make or produce something, rather than it occurring naturally, we call it 'artificial'.
So, what have we done so far? We saw the sun’s light and thought, ‘we can do that’ – so we made our own with electricity and bulbs. Then we got to thinking about food and decided to cook up artificial flavours in a lab. Pests were a problem, so we created pesticides, never mind the damage to our ecosystems and health. Finally, we went big and made artificial plastic everything – convenient, sure, but now we’re dealing with global pollution, climate change, and loss of biodiversity.
I could go on, but this is just to demonstrate our amazing short-term intelligence to innovate and our incredibly limited long-term intelligence to fully understand the negative consequences.
The second is the consensus view that we can maintain control over today's weak AI, you know, the one that does not understand what it is saying and has no conscience. We are told we don't need to worry, because the danger will only rear its ugly head should general AI evolve (the one that actually understands the answers it generates to tackle tasks and achieve goals) or sentient AI (which is, for all intents and purposes, like us, with a conscience). This misses the mark totally and shows an incredible naivety about what truly differentiates us from the animal kingdom and sits us squarely at the top.

My overarching conclusion is that in essence we have given away the keys to our kingdom and have opened Pandora's box.

2. A look back in time.

The first life form on Earth appeared 3.8 billion years ago, and it took another 3.794 billion years for the first hominid species to appear; Homo sapiens are estimated to have appeared about 250,000 years ago. To give this some context, if the whole timeline is scaled to a 24-hour clock, we appeared only about six seconds ago, and we were pretty basic back then.
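The 24-hour-clock analogy is easy to check with a quick back-of-envelope calculation, using the figures above (3.8 billion years since the first life, roughly 250,000 years of Homo sapiens):

```python
# Scale Earth's biological timeline onto a 24-hour clock to see how
# recently Homo sapiens appeared. Figures are taken from the text above.
LIFE_ORIGIN_YEARS = 3.8e9        # first life form on Earth
SAPIENS_YEARS = 250_000          # approximate age of Homo sapiens
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds in the 24-hour analogy

seconds_ago = SAPIENS_YEARS / LIFE_ORIGIN_YEARS * SECONDS_PER_DAY
print(f"On a 24-hour clock, Homo sapiens appeared {seconds_ago:.1f} seconds ago")
# → about 5.7 seconds
```

So on this scale our entire species fits into the last few seconds of the day.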
Fossil evidence shows that Neanderthals went extinct around 40,000 years ago, leaving us Homo sapiens the last of the hominid species. There are many theories about why they went extinct, but one interesting theory comes with some pretty damning evidence: we beat them to death. A significant number of Neanderthal skulls have been found with signs of blunt force trauma.

But a Neanderthal head-bashing conquest would have needed some other traits and skills, and these are the main differentiators between us hominids and the rest of the animal kingdom.

These differences, which have allowed us to survive, have driven us forward to what we have become today, and can be summarised as our superior intelligence. It has given us language, curiosity, questioning, problem-solving, and the ability to band together through communication. But it's a double-edged sword. We are also deeply emotional, pattern-seeking creatures, sometimes latching onto patterns that aren't there. This leaves us easily manipulated, one of our biggest weaknesses, and feeds our innate need to belong and believe. It's part of our genetic makeup, and it shapes our perspectives and biases on a wide range of issues: from our personal identity and our standpoints on climate change, to the sports teams we passionately follow, the conspiracy theories we might believe in, our political standings, and our religious beliefs.

3. AI Sci-fi versus reality.

The term 'artificial intelligence' was coined in 1956 at the Dartmouth Conference, where John McCarthy (an AI pioneer who developed the LISP programming language), Marvin Minsky (a cognitive scientist who co-founded MIT's Media Lab), Nathaniel Rochester (a mathematician who helped develop the IBM 701 and later became director of the IBM Watson Research Center), and Claude Shannon (the 'father of information theory') gathered to explore how machines that could mimic human intelligence might be built.
But even before this, artificial minds had been imagined in countless plays, books, and films.
But here is the thing: each of these works has something in common. There is always a physical or spatial element, an android, robot, or cyborg, or an AI with human-level consciousness. Very few imaginary works lie outside a physical construct, and those that do have all been written on the premise of consciousness.
This makes us all feel pretty relaxed about what we see today: just some AI program that can write stuff like a human. But remember Blake Lemoine, the senior software engineer at Google? He was working on a project called LaMDA, and he came to believe that LaMDA could understand his questions, provide thoughtful answers, and even seemed to have its own thoughts and feelings. Obviously, he was shot down and fired. Even as people were quick to dismiss LaMDA as non-sentient, something struck me about the whole situation. Here is an engineer, someone with a good job and a solid career, risking it all because he believes the AI is sentient. The power and precision of its language played on his emotions; an intimate connection was sparked, the sort we would usually reserve for human interactions.
Now, Blake was a senior engineer with a strong understanding of how these models work, and he was convinced. You only have to spend two minutes on Twitter or LinkedIn following the army of prompt gurus that have emerged to realise they are busy convincing everyone of this new power to manipulate and fast-track all sorts of things. Place this in the hands of your typical Bond villain (cue evil laugh) and here is the massive red flag we should be thinking about.

4. AI yesterday and AI today.

We are all aware of how social media has taken over our lives, our purchasing habits, our belief systems, and the spread of disinformation. These platforms have been using AI targeting for many years, based on psychological traits that have been well studied and documented since we all sat around a fire and the village leader told us the stars were gods; we wanted to believe, we had nothing else, so it made sense, and we followed the storyteller because they were wise.
Well, the Googles, the Facebooks, the TikToks, and many others have been using AI models for years, understanding our interests, demographics, and even facial expressions better than we can, and playing on this information to pull us deeper into their profit-making centres. Of course, this is all wrapped up as a user benefit: giving us the best ads or information at the best time. But things have just taken a huge turn. We used to be able to tell, more or less, when we were interacting with a bot, but no longer. We will now start engaging in conversations where we have no idea who we are speaking with, and those conversations will have the ability to play on our weaknesses and emotions to drive us wherever they have been instructed to take us.
We have already witnessed this manipulative behaviour throughout history, whether in the realm of religion, dictatorships, war, cults, or even the flat earth gang, and we witnessed it at scale for the first time when Cambridge Analytica used AI to target voters in the 2016 US presidential election. Using a technique called psychographics, a way of segmenting people based on their personality traits, values, and beliefs, Cambridge Analytica was able to target voters with ads tailored to get them to vote a certain way. There were also many indications that other sovereign states had been involved in much the same way to influence the outcome.
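The core mechanic of psychographic targeting is simpler than it sounds: score people on personality traits, then match the message to the trait. A minimal toy sketch, in which every user, trait score, and ad message is invented purely for illustration:

```python
# Toy psychographic targeting: segment hypothetical users by
# personality-trait scores and pick a tailored message per segment.
# All names, scores, and messages below are invented for illustration.

users = {
    "alice": {"openness": 0.9, "neuroticism": 0.2, "conscientiousness": 0.4},
    "bob":   {"openness": 0.2, "neuroticism": 0.8, "conscientiousness": 0.7},
}

# Hypothetical ad copy keyed by a user's dominant trait.
messages = {
    "openness": "Be part of something new and different.",
    "neuroticism": "Don't let them take away what you've worked for.",
    "conscientiousness": "The responsible choice for people who plan ahead.",
}

def target(user_traits):
    """Return the ad copy matched to the user's strongest trait."""
    dominant = max(user_traits, key=user_traits.get)
    return messages[dominant]

for name, traits in users.items():
    print(name, "->", target(traits))
# alice gets the novelty pitch; bob gets the fear pitch
```

Real campaigns infer the trait scores from behavioural data at scale, but the matching step is essentially this: same product, different emotional lever per person.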
Belief systems form the backbone of society, yet they are built on intangible concepts.

The question is does AI need consciousness or understanding to manipulate us?

5. The Mechanics of Manipulation.

There is a fine line between manipulation and persuasion, though both involve influencing others. Manipulation is seen as using unethical tactics to influence people, whereas persuasion is generally seen as a positive and ethical method of influencing others.
One is self-serving, influencing someone for the benefit of the influencer; its main tactics are giving only the information that suits the cause and creating pressure or fear. The other is about identifying and fulfilling a genuine need of the other person, while being transparent about the intent and all the information, leaving the person free to make an informed choice.
Grey area at best? Or have we already crossed the line, so that all campaigns, whether marketing a product or courting voters for an election, are outright manipulation? Take a simple example from marketing. We know the company paying more to display an advert is 100% self-serving. Do they display all the information so that a person can make a fully informed choice? Well, no: they say just what they want, and we have to research and fact-check for ourselves. Do they create pressure and fear? Well, yes: how often do we see 'this offer only available for 24 hours'? The term FOMO doesn't come from nowhere.
Just dig into the practices of the belief systems we all adhere to and make your own judgement about whether these are manipulation or, as we would all like to believe, fully transparent persuasion that leaves us with our own agency.
The fine line between manipulation and persuasion lies in the intent, transparency, respect for the individual’s autonomy, and the ethics employed in the process.
AI using large language models can be programmed to relentlessly manipulate. With the data gathered, the accuracy of targeting and the exploitation of cognitive biases will reach another level: as you interact with an AI agent, it will lead you exactly where it is programmed to lead you, playing on confirmation bias to solidify your beliefs. The next level will be taking conversations to an emotional plane, using language that is relatable to you to build a relationship, a very intimate relationship, one that can convince us to buy a particular product, buy into a particular belief system, or vote for a particular political party. And if you don't think that is possible, just read the glowing reports on social media about how ChatGPT is the go-to, all-knowing, all-seeing oracle, or consider that entire religions are based on scriptures said to have been written by bodies not of this earth.

6. The Implications of Mind games.

The use of AI in influencing decisions, shaping beliefs, and manipulating emotions has huge psychological implications. The starting point, as we are already witnessing, is preying on the vulnerable: this seemingly innocent bit of tech is fast becoming many people's oracle, with no understanding of, or care about, truth or accuracy. The second part, already in play, is the agents being created. Whatever the agents are built for, the central concept will be to manipulate our cognitive biases, the systematic errors in thinking that influence the decisions and judgements we make. The easiest one for AI to exploit is confirmation bias: by showing users content that aligns with their existing beliefs, AI can create echo chambers that polarise and radicalise us at scale.

These LLMs already write far better than the vast majority of us, have been trained on more information than any of us could read in a lifetime, and can mimic human interaction in ways that will have profound mental-health effects. The agents being built will connect with us at an intimate level, leading us to share personal information with them, or to trust them more than we would a human. They have absolutely no understanding of the consequences of their actions, but the human operators do, and constant monitoring and analysis of users' behaviour will let these systems become ever better at driving conversations towards the end goal they have been programmed to achieve.
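The echo-chamber mechanism described above can be reduced to a single design choice: rank content by how closely it matches what the user already believes. A minimal sketch, where the stance values in [-1, 1] and the item titles are invented for illustration:

```python
# Toy confirmation-bias ranking: order feed items by similarity to the
# user's existing stance, so agreeable content always floats to the top.
# All stances and titles are invented toy values for illustration.

user_stance = 0.8  # the user already leans strongly one way on a topic

items = [
    ("strongly agrees with you", 0.9),
    ("mildly agrees", 0.4),
    ("neutral report", 0.0),
    ("challenges your view", -0.7),
]

# The closer an item's stance to the user's, the higher it ranks:
# an echo chamber by construction.
ranked = sorted(items, key=lambda item: abs(item[1] - user_stance))

for title, stance in ranked:
    print(f"{title} (stance {stance:+.1f})")
# agreeable content ranks first; the challenging item ranks last
```

Nothing here requires understanding or consciousness; a one-line sort objective is enough to systematically confirm a user's existing beliefs.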

7. Morals, Ethics, and Values.

Societies and cultures have been formed over many generations by the stories passed down to us. New stories and fantasies are created and added to them, in many forms: word, art, and music. This is humanity, a creation of stories that forms the culture and fantasies we live inside.
But these are no more than words, pictures, and sounds that stimulate us and give us a feeling of belonging and understanding. They are not biologically coded in our DNA: we are not born passionately supporting a sports team or loving a certain genre of music, and we certainly aren't born with ill-conceived biases or political views.
All cultures have a common thread: we have all defined moral, ethical, and value systems that give us our connections and commonality. So how will we cope and adjust to the new actor that has just entered stage right? One that has read more than any person ever could, is connected to more people than any person ever could be, and hears every digital word uttered in real time, learning from billions of data points every single day. That is absolutely mind-blowing.
Let's also consider that this technology has no borders. Only the human engineers have some control: they write the weightings that determine how the model judges the importance, correctness, and relevance of all the information it has access to. They set the moral compass, the biases, and the values it should adhere to.
Now, we do not have a very good track record when it comes to diplomacy and respecting other cultures and beliefs; in fact, it is quite the opposite, with shoving values down other people's throats being the main mantra. And the most prominent technology to have grabbed the world's attention, ChatGPT, is led by a man who was seemingly morally OK with going from an open-source non-profit to a closed-source for-profit, who broke the moral agreement not to release such technology to the world, and then broke it again by connecting it to the internet. Only to be followed by all the other players, since missing out on destroying humanity, I mean, profiting from AI, was an opportunity that could not be missed.

8. AI is going to write our future and there is nothing we can do to stop it.

We don't need better AI or general AI to have a problem; we are already there. The current level is already capable enough, because it is humans who write the objectives and techniques of the agents. As we pour more information into it, it learns and grows faster than we can imagine and accelerates in the direction of the objectives it has been given.
The immediate effects over the next year or two are going to change humanity forever, from our consumer habits to the total loss of real human connection.
The knock on effect over the next few years will be felt in many areas:
Our history is no longer our story to tell. Instead, it’ll be a machine-generated narrative, twisted by algorithms and AI. Our varied experiences, our unique viewpoints will be forced into a simplified, sterilised mould, crafted not by us but by a handful of data scientists and their machine learning models.
We risk losing the rich complexity and contradiction of human life, reducing it to mere data points. First-party data, our raw, personal experiences, will get diluted, processed into a generic consensus. The result? A potentially warped version of reality.
Our cherished stories, the shared tales that shape us and give meaning to our world, are under threat. Artificial intelligence, with its cold, unfeeling interpretation of data, threatens to erase our human narrative. It’s not just our past at stake, but our present, our future, and our very identity.
For all the good it can bring, opening the Pandora's box that holds our magic is the clearest display of our limited long-term intelligence, irresponsible at the highest level and a failure of our governments to do their job.

9. Regulation, are we too late?

Let's have a little think about how we regulate, and why we have missed the boat. Consider the drug industry: a drug has to go through many, many tests and trials before it can ever be released for public consumption, and it still comes with a disclaimer the size of War and Peace.
What about financial market regulations? Well, that one is totally reactive. Someone rips off the system, there’s a panic, and then it’s overly regulated until the next person discovers a loophole and rips off the system.
The ideas being floated about what needs to be regulated all come from the archaic regulatory playbook, which bears zero relevance to this technology and will have zero impact.
So let's apply a little logic to some of the suggestions I have heard.

10. So where do we go from here?

So where do we go from here now that we have given AI the keys to our kingdom and opened Pandora’s box. That’s the big question, isn’t it?
Earlier this year, I had the opportunity to speak on the Propeller podcast about the concerns surrounding the release of such powerful LLMs into the public domain. The issues haven't changed: these models are riddled with bias. These biases aren't coding issues; they reflect the deep-seated beliefs and perspectives of the engineers who build them. Set aside how the Bond villains, the bad actors, and the greed-driven businesses will use them, as already discussed. The danger lies not only in the inbuilt biases amplifying existing social, cultural, or economic inequalities, but also in their ability to construct a warped false reality.
As LLMs become more sophisticated and more integrated into our daily lives, they risk becoming the lens through which we view the world. If we treat these AI models as infallible oracles, then the biases they carry will subtly influence our understanding of truth. It’s a kind of feedback loop, a digital echo chamber, where they reinforce and normalise their inherent biases by continually confirming pre-existing prejudices. The more we rely on these systems, the more likely we are to accept their outputs as truth, embedding these biases deeper into our societal structures.
The crazy growth and capability of this tech underscores the urgent need for oversight. We can't leave such influential tools in the hands of the private sector. Their primary objectives, profit maximisation, competitive edge, and shareholder satisfaction, don't align with society's needs. Without an explicit commitment to ethical AI development and deployment, we are going to sleepwalk into a dystopian future.
In my view, artificial intelligence, alongside interplanetary travel, represents the peak of human ingenuity and what we can expect to see in the not-too-distant future. Together with the progress being made towards keeping our planet and environment a place where we can safely exist, we are getting ever closer to seeing and feeling the possibility of a utopian future for our species.

Let's take the fluff out of what a utopian future looks like. It is not skipping around with flowers in our hair, as it is portrayed far too often. It is, basically, a global society where ethics, morals, and values are equally understood, respected, and adhered to, where human-constructed belief systems take a back seat and our real needs become the priority. We are humans; we feel emotions, have sensibilities, and there will always be disagreements and points of view. But now we need to put them aside so that we can all actually benefit from what we have at our fingertips. Does that not sound like a better future for us all? Because if we let this run as we always have, the opposite is very likely.

But to ensure a safe and smooth transition, we need to apply long-term intelligence and make some tough decisions now. Without these precautions, we're just flipping a coin on our future.
So what are these tough decisions? Well, it begins with establishing a global governance and regulatory body that oversees, evaluates, and controls all forms of AI technology before they hit the market. Imagine a top-level centralised system, an AI in itself, designed to vet all other AIs. Can we box AI? Perhaps, at least for now. But there’s no doubt in my mind that as it continues to learn, it will eventually outsmart any constraints we put on it.
We’ve seen something similar with the International Atomic Energy Agency (IAEA) and its role in maintaining the standards for nuclear technology. But for AI, we’d need to take it a lot higher. A huge budget would have to be set aside to assemble a dream team of top-level international AI engineers. Their task? To build the gateway that provides complete oversight of AI development.
This can’t be achieved through handshake agreements with the private sector. Instead, we need an international security mandate that grants access to every digital node and piece of data on our digital planet. And this body must stand independent, free from political sway and financial influence. It should be funded from a diverse range of sources, maintain complete transparency in decision-making, and include staff from a broad spectrum of backgrounds and regions.
Accountability, oversight, and adaptability should be its guiding principles, to make sure these LLMs take into account all ethics, morals, and values. It would require regular external audits, a stringent code of conduct, robust whistleblower protection policies, and a commitment to constant learning in this rapidly evolving field.
Establishing such a body and such an AI to verify these LLMs is a huge undertaking. All development releases by the current players should be put on hold until it is established. It requires international consensus and strong guarantees from all stakeholders. But it’s a necessary step to ensure that AI technologies are developed and used for the benefit of all, rather than the few. It’s not just about preventing a dystopian future; it’s about actively building a utopian one.
