How the UK can lead in responsible AI development

12th February 2025
Paige West

The use of artificial intelligence is once again at a national crossroads. The UK government’s latest AI Action Plan comes with considerable ambition, promising to transform public services, supercharge the economy, and uphold national security.

Yet, while the plan excels at championing high-performance computing (HPC) and big-data-driven models, it may also be overlooking certain key avenues for genuine, long-term AI leadership. As with the historic Alvey Programme in the 1980s, a government-sponsored research drive that achieved some notable successes but struggled under institutional biases, the current policy environment risks embracing bigger and faster AI at the expense of truly novel approaches. Indeed, if these blind spots remain unaddressed, the future may be dominated by highly capable but opaque, energy-hungry systems of questionable interpretability.

Clive Hudson, CEO of Programify, explores further.

Beyond the illusion of intelligence

A core assumption in much of the current plan is that scaling data and computing power automatically yields superior outcomes. This brute-force approach, epitomised by large language models (LLMs), has shown extraordinary gains in natural language generation and other tasks. But these gains often stem from statistical pattern-matching, not genuine comprehension. As a result, LLMs can produce contradictions and unverifiable claims, behaving as black boxes that provide little insight into how or why they reach their conclusions.
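To make the distinction concrete, consider a deliberately simplified sketch: a bigram model in Python, far cruder than any production LLM, that generates fluent-looking text purely from word co-occurrence statistics. The corpus and wording here are invented for illustration; the point is that nothing in the mechanism represents whether the output is true.

```python
# Minimal illustration (not how production LLMs are built): a bigram model
# that predicts the next word purely from co-occurrence counts. It has no
# representation of truth, so fluency and factual accuracy are unrelated.
import random
from collections import defaultdict

corpus = (
    "the model predicts the next word "
    "the model has no model of the world "
    "the next word follows the previous word"
).split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length=8):
    """Sample a continuation by following bigram statistics alone."""
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        words, weights = zip(*followers.items())
        word = random.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # plausible-looking text, with no comprehension behind it
```

Scaling this principle up yields vastly more fluent output, but fluency and verifiability remain separate properties, which is the crux of the black-box concern.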

The approach seems compelling on the surface, but when analysed through the combined lens of Karl Popper's scientific falsifiability principle and George Miller's work on human cognitive architecture, fundamental issues emerge. Popper argued that scientific claims must be falsifiable and mechanistically explained, while Miller demonstrated that human intelligence relies on structured understanding rather than pure pattern recognition. From this perspective, simply scaling up statistical correlations, no matter how sophisticated, cannot bridge the gap to genuine intelligence or understanding. Governments and industrial players have seized on this apparent 'intelligence', but the underlying logic remains flawed when examined against these established frameworks of genuine understanding.

Energy efficiency: more than an afterthought

In pushing for national AI champions, the UK government hopes to build massive data centres, or what the Action Plan calls ‘AI Growth Zones’. Critics point out that these centres are poised to become the pillars of the UK’s AI infrastructure, yet few seem troubled by their colossal energy footprint. Training and running state-of-the-art deep learning models can demand exponential increases in computational resources, placing a hefty burden on energy grids and raising uncomfortable questions about sustainability.

This blind spot resembles the era of the Alvey Programme, where enthusiasm for large-scale computing overshadowed concerns about longer-term environmental and economic costs. Researchers who have experimented with lower-energy alternatives, such as Patom Theory’s symbolic reasoning and Role and Reference Grammar’s structured linguistic analysis, remain sidelined in broader policy debates. While these frameworks may initially appear less dramatic in their capabilities, they offer a model of efficiency that ought to be scaled up, not ignored.

Supercortical Theory: a novel alternative

In seeking to escape the limits of brute-force AI, initiatives like Supercortical Theory (SCT) present a strikingly novel alternative. By structuring intelligence through context-aware, dynamic reasoning, SCT promises real-time adaptation, richer explainability, and far lower energy demands than many mainstream neural network architectures. Where the UK plan currently prioritises HPC clusters and vast training data pipelines, SCT instead models cognition as a more holistic, logic-driven process that scales intelligently without exponentially increasing resource use.

Yet, frameworks like SCT struggle to gain the direct support that the government’s plan lavishes on large language models. The new policy’s assumption that frontier AI must always rely on data volume and top-tier GPUs may inadvertently shut the door to more creative and cognitively inspired solutions. Unless the institutions in charge remain open to testing their assumptions, they risk perpetuating a cycle in which even radically better designs can be overlooked in favour of more conventional, but less flexible, deep learning approaches.

A future that is explainable and trusted

The pursuit of transparent, auditable AI is likewise caught in a crosscurrent. On one hand, the UK Government’s Action Plan emphasises building public trust, including through initiatives to enhance AI safety research and to broaden the focus of regulators. On the other hand, the Government’s current fascination with frontier language models means many of these systems, by design, remain black boxes incapable of reliably explaining their reasoning. In domains as sensitive as healthcare and financial services, such opacity is untenable.

SCT’s logic-driven framework offers a counterpoint: decisions can be understood and traced at each step, making it far easier to demonstrate compliance or fairness in high-stakes environments. Such clarity is precisely what is lacking in deep learning models that can produce superficially plausible but misleading outputs. Unless the Government acts swiftly to address its own blind spot regarding explainability, the country’s AI systems may become more advanced, yet less trustworthy.
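As a purely illustrative contrast, and not an implementation of SCT, whose internals are not public, the following Python sketch shows what step-by-step traceability can look like in a simple rule-based decision. The rules, thresholds, and loan scenario are hypothetical.

```python
# Illustrative sketch only: a generic rule-based check whose every step is
# recorded, showing what "traceable at each step" can mean in practice.
# This is not Supercortical Theory; rules and thresholds are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool = True
    trace: list[str] = field(default_factory=list)

def assess_loan(income: float, debt: float, defaults: int) -> Decision:
    d = Decision()
    ratio = debt / income if income else float("inf")
    d.trace.append(f"debt-to-income ratio = {ratio:.2f}")
    if ratio > 0.4:
        d.approved = False
        d.trace.append("REJECT: ratio exceeds 0.4 threshold")
    if defaults > 0:
        d.approved = False
        d.trace.append(f"REJECT: {defaults} prior default(s) on record")
    if d.approved:
        d.trace.append("ACCEPT: all rules satisfied")
    return d

result = assess_loan(income=42_000, debt=21_000, defaults=0)
print(result.approved)          # False
print("\n".join(result.trace))  # the full reasoning chain, auditable line by line
```

Every rejection carries the rule that caused it, so a regulator or auditor can replay the reasoning line by line: precisely the property that statistical black boxes lack in high-stakes settings.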

Controlling the UK’s AI destiny

A final blind spot lies in how the UK intends to carve out genuine technological sovereignty. The UK Action Plan speaks of supporting national champions and forging international partnerships. However, most current frontier AI firms, with a few local exceptions, operate in the US or China. Building domestic supercomputers or offering bigger HPC budgets is no guarantee that truly cutting-edge, independent research will flourish.

Once again, SCT emerges as a prime example of a UK-led technology that could bestow both autonomy and enduring competitiveness on the country, were it allowed to scale. Though the Government aspires to reduce reliance on overseas AI, it is unclear whether smaller, home-grown initiatives will secure the support they need amid the race to build bigger data centres and sign transnational tech deals. But if the UK intends to control its AI destiny, frameworks that are more efficient, interpretable, and safer should be prioritised, not relegated to privately-funded side projects.

Towards a smarter AI trajectory

The UK may yet lead in responsible AI innovation, but only if it seriously reckons with the flaws in the current plan, particularly its uncritical embrace of ever-larger compute and data architectures. True progress means embracing the likes of Supercortical Theory and other cognitively inspired models that reject the one-dimensional race to ever-greater scale. Doing so would move the country from simply investing in artificial intelligence to genuinely investing in intelligence: grounded, efficient, transparent, and adaptable. For AI to serve the national interest responsibly, the Government must broaden its strategy and dismantle the blind spots that prevent more radical approaches from being fully tested. If these shortcomings remain, the UK risks another Alvey, spirited but constrained by institutional inertia, rather than a genuine renaissance in AI.
