The promise of AI relies on scaling security as Edge AI booms

15th August 2024
Harry Fowle

The growth of AI is driving an increased focus on security and pushing more use cases to the Edge, according to new research from PSA Certified. But with two-thirds (68%) of technology decision-makers concerned that rapid advances in AI risk outpacing the industry’s ability to secure products, devices and services, the acceleration in AI must be matched by equivalent acceleration in security investment and best practice to ensure trusted AI deployment.

A major factor driving the need for greater AI security is Edge technology. With the ability to process, analyse and store data at the Edge of the network, or on the device itself, Edge devices have efficiency, security and privacy advantages over a centralised cloud-based location. This could be why 85% of device manufacturers (OEMs), design manufacturers (ODMs), SIPs, software vendors and other technology decision-makers believe that security concerns will drive more AI use cases to happen at the Edge. But in this push for added efficiency, the security of Edge devices has become even more crucial, and organisations will need to double down on securing and protecting their devices and AI models in order to meet the demands of deploying AI at scale.

Addressing the AI security lag

Security matters across the supply chain, whether you're a deployer of services, a device vendor or a consumer of those services. Indeed, the survey of 1,260 global technology decision-makers found that security has increased as a priority in the last 12 months for three quarters (73%) of respondents, with 69% now placing more impetus on security as a result of AI.

However, despite AI’s promise to catalyse the importance being placed on security, there is an AI-security lag that needs to be closed if its full potential is to be realised.

Only half (50%) of those surveyed believe they are currently investing enough in security, and a significant proportion are neglecting important security foundations that underpin best practice, such as security certification. Only around half are currently using externally validated security certifications (54%), independent third-party testing or evaluation of products (48%), or threat analysis and threat modelling (51%) to improve the security robustness of their products and services. These security fundamentals are straightforward to implement and should underpin efforts to build consumer trust in AI-driven services.

David Maidment, Senior Director, Market Strategy, at Arm (a PSA Certified co-founder), said: “There is an important interconnect between AI and security: one doesn’t scale without the other. While AI is a huge opportunity, its proliferation also offers that same opportunity to bad actors. It's more imperative than ever that those in the connected device ecosystem don’t skip best-practice security in the hunt for AI features. The entire value chain needs to take collective responsibility and ensure that consumer trust in AI-driven services is maintained. The good news is that the industry recognises the need to prepare, and the criticality of prioritising security investment to future-proof systems against new attack methods and rising security threats linked to rapid adoption of Edge AI.”

AI and security: net positive but both must scale together

With four in five (80%) respondents claiming that security built into products is a driver of the bottom line, there’s a commercial as well as a reputational benefit to continued security investment. The same proportion (80%) also agree that compliance with security regulation is now a top priority, up six percentage points from those listing it as a top-three priority in 2023 (74%).

With Edge AI booming alongside an exponential increase in AI inference, the result is an unprecedented amount of personal data being processed on the billions of individual endpoint devices, with each one needing to be secured. To secure Edge devices and maintain compliance with emerging cybersecurity regulation, stakeholders in the connected device ecosystem must play their part in creating a secure Edge AI life cycle that includes the secure deployment of the device and the secure management of the trusted AI models that are deployed at the Edge.
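Secure management of trusted AI models typically begins with verifying a model's integrity before it is loaded. As a minimal, illustrative sketch (not a PSA Certified mechanism, and with hypothetical file contents and manifest handling), an Edge runtime might compare a model artefact's SHA-256 digest against the digest published in a signed manifest:

```python
# Illustrative sketch: integrity-check an AI model artefact before an
# Edge device loads it. The "model" bytes and manifest digest here are
# hypothetical stand-ins; a real deployment would verify a signature
# over the manifest as well.
import hashlib
import hmac

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of the model bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_model(model_bytes: bytes, expected_digest: str) -> bool:
    """Compare the artefact's digest against the manifest digest using
    a constant-time comparison to avoid timing side channels."""
    actual = sha256_digest(model_bytes)
    return hmac.compare_digest(actual, expected_digest)

# Example: a toy model blob and its known-good digest.
model = b"\x00\x01toy-edge-model-weights"
manifest_digest = sha256_digest(model)  # in practice, shipped in a signed manifest

if verify_model(model, manifest_digest):
    print("model integrity OK - safe to load")
else:
    print("digest mismatch - refuse to load")
```

In practice the manifest itself would be signed (e.g. with a key rooted in device hardware) so that both the model and its expected digest are attested, but the load-time digest check above is the common core of such schemes.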

Despite concerns that rapid advances in AI are outpacing the industry’s ability to secure products, devices and services (68%), organisations broadly feel poised to capitalise on the AI opportunity and are buoyant about security's ability to keep pace: 67% believe their organisation is well-equipped to manage the potential security risks associated with an upsurge in AI. More decision-makers are also placing importance on increasing the security of their products and services (46%) than on increasing their AI readiness (39%), recognising the importance of scaling security and AI in step.

But with a majority of respondents (78%) also agreeing they need to do more to prepare for AI, and concerns around security risks remaining prevalent, security must remain a central pillar of technology strategy. Improving and scaling security in an era of interoperability and Edge AI requires established standards, certification and trusted hardware all businesses can rely on. By embedding security-by-design, organisations can guarantee a benchmark of best practice that will help to protect them against risk both today and in the future.

© Copyright 2024 Electronic Specifier