Artificial intelligence: the promises and the perils
The fluid, fast-moving and often conflicting narratives around artificial intelligence (AI) are challenging for all of us. In the popular imagination, the pendulum swings between exciting opportunity and existential threat with dizzying speed.
But what’s hard fact, and what’s speculative fiction? And how should we think about AI as we look to the future?
In a fascinating talk at our headquarters in Edinburgh, Professor Shannon Vallor helped us to form a coherent picture of the promise and perils of AI.
Professor Vallor holds the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute. She started by observing that AI is not some kind of malign robotic predator: humanity has the tools to control how it is developed and used. She then punctured some of the myths and misconceptions around AI before setting out two contrasting visions of our AI-enabled future, the current risks arising from its use and the course correction that’s needed to ensure that AI works for us all.
Myths and misconceptions
As Professor Vallor pointed out, AI is not one thing. There are many kinds of AI tools, and these are not limited to large language models (LLMs) or generative AI (GenAI).
Nor is GenAI innately superior to human beings. Ultimately, GenAI is a mirror, not a mind: it reflects the patterns in the data that’s fed to it.
So, although these tools can do many things that humans do, they also can’t do many things that we can. The idea that they will replace or surpass us doesn’t hold up. For now, and for the foreseeable future, AI tools have no minds or motives of their own. In fact, AI tools simply reflect and magnify human behaviour and errors. They are trained on human data and are just as fallible.
And while GenAI tools are improving, their well-attested capacity for fabrication – making things up – appears integral to how they operate. They are also hard to improve consistently: gains in one area often come at the cost of new problems elsewhere. We simply don’t know how much room is left for improvement.
Diverging paths
So what can AI offer us and where do the main risks lie?
Professor Vallor sees the greatest potential in AI’s ability to amplify and extend human abilities. A striking example is Google DeepMind’s AlphaFold. It has predicted vast numbers of protein structures at a speed no human scientists could match. But it doesn’t replace scientists’ knowledge of proteins or biochemical systems; it extends it. AlphaFold’s focus is narrow, and its limits are well understood. It’s very good at one thing, and that’s all it does.
That contrasts with LLMs. Many of us use these every day, and they are already displacing traditional online search. Unlike AlphaFold, these tools can do many things, but that versatility leaves room for abuse, errors and unpredictability.
How, then, might our AI future play out?
When used productively, AI has huge promise for humanity. It could help us reduce scarcity, restore trust in institutions, cut working hours, foster cooperation through translation tools and renew our faith in the future.
But current political and economic incentives threaten to put these positive outcomes out of reach. Instead, we may have to contend with AI’s considerable negatives. These include unlimited demand for resources, security risks, an unsettling pace of automation and the unpredictable and unsafe behaviour of AI agents.
There are also threats to how we perceive our world, from fake imagery and media manipulation to human deskilling.
Risks and correcting course
As AI companies increasingly shrug off responsibility for the ethical risks, these burdens are falling on highly regulated industries, including finance. We’ve already seen individuals and institutions get into trouble as a result of LLM fabrications. Air Canada provides a recent example: it lost a legal case after its chatbot promised a discount that the airline then refused to honour.
Meanwhile, there is already growing evidence of harm – from employee deskilling to ‘workslop’ to consumer mistrust, security risks and psychological injury. This evidence is amassing much faster than any clear cost savings or productivity improvements.
But what could force a change of course? There are some signs. Evidence of an AI investment bubble is growing, and leaders are looking for more realistic assessments of AI’s promise, especially after a recent MIT study found that 95% of companies have seen no return on their AI investments. Meanwhile, the public want more AI regulation, not less. These factors could short-circuit the hype cycle, giving us an opportunity to restore the ethical guardrails needed for safe, reliable and trustworthy AI. But while we wait, AI is already putting considerable stress on our institutions, our shared sense of reality and our confidence in our own judgment.
The future is not yet written
Although AI won’t try to destroy or enslave us, it does threaten to sap our sense of our own potential – to plunge us into, in Professor Vallor’s words, a “vicious cycle of helplessness”.
But we don’t have to accept this. Although the dominant AI narrative stresses its inevitability, that’s designed to ensure that those who are currently choosing AI’s future remain in the driving seat. If we adjust this view without losing our faith in technology, we can look forward to a better kind of AI and a brighter future for us and our descendants.
For us at Hampden Bank, Professor Vallor’s insights provide much to consider. As we gradually adopt AI technologies in our own business, we are careful to ensure that Bankers remain central to our decision making. We are using AI to achieve operational efficiencies that free colleagues from time-sapping tasks, and we remain entirely focused on enabling them to do what they do best – serving our clients with the understanding and personal touch that only humans can offer.
Professor Shannon Vallor presented ‘Understanding the Ethics of AI’ at Hampden Bank on 19 November 2025.