Cloud Expo | How WFH and AI have highlighted the case for zero trust security
Zero trust is a growing necessity in the age of AI, an auditorium on day one of Cloud Expo was told, as Neil Thacker of cybersecurity company Netskope and Ben Short of law firm Blake Morgan LLP presented how their respective companies, in very different fields, are putting the practice into use.
“One of the challenges we have as we move into this age is to not retrofit controls based on previous ages,” explained Thacker. Tracing the previous technology revolutions, from the computer age through the Internet age to today's AI age, he argued that meeting the challenges of each requires a proactive approach. Thacker therefore advocates implementing zero trust now, with WFH long established and AI visibly coming down the line.
WFH, as Thacker showed, remains a priority for employers. Yet, as Short went on to explain, it poses a problem for those companies' cybersecurity staff. At a legal firm, keeping data secure is a paramount concern, but how do you keep company information secure when employees are all working from their own home networks? Rather than shipping a hardware solution to every individual member of staff, updating the security architecture that all users must follow to access the cloud saves on hardware costs while keeping access secure even if individual endpoint devices are compromised.
This, Short explained, is where a zero trust security model becomes essential. Based on the principle of "never trust, always verify," it replaces traditional perimeter-based security with verification of every device and user. It also enforces least-privilege access to minimise access rights and uses micro-segmentation to protect sensitive areas, reducing the attack surface and strengthening security through strict access control and continuous verification within the network.
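To make those principles concrete, here is a minimal, purely illustrative sketch in Python of what such a policy gate might look like. Every name in it (the token check, the policy table, the segment labels) is a hypothetical stand-in rather than any vendor's actual tooling: each request is re-verified, rights are granted per resource, and segments wall off sensitive areas.

```python
# Illustrative sketch of the three zero trust principles described above.
# Nothing here reflects a specific product's API; it only shows the shape
# of the checks: verify every request, grant least privilege, segment access.

from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    token: str          # short-lived credential, re-checked on every request
    resource: str
    segment: str        # network/application segment the resource lives in

# Hypothetical policy store: each user gets only the minimum rights needed.
LEAST_PRIVILEGE = {
    "alice": {"resources": {"case-files"}, "segments": {"legal"}},
    "bob":   {"resources": {"billing"},    "segments": {"finance"}},
}

def verify_token(user_id: str, token: str) -> bool:
    """Stand-in for continuous verification (MFA, device posture, etc.)."""
    return token == f"valid-{user_id}"   # placeholder check only

def check_access(req: Request) -> bool:
    # "Never trust, always verify": no request is trusted by default,
    # even if it originates from inside the network perimeter.
    if not verify_token(req.user_id, req.token):
        return False
    policy = LEAST_PRIVILEGE.get(req.user_id)
    if policy is None:
        return False
    # Least privilege: the user must be explicitly granted this resource.
    # Micro-segmentation: and it must sit in a segment they may enter.
    return req.resource in policy["resources"] and req.segment in policy["segments"]

# Alice can reach case files inside the legal segment...
print(check_access(Request("alice", "valid-alice", "case-files", "legal")))  # True
# ...but even with a valid token she cannot cross into the finance segment.
print(check_access(Request("alice", "valid-alice", "billing", "finance")))   # False
```

The point of the sketch is the second example: a fully authenticated user is still denied anything outside their granted segment, which is what limits the blast radius of a compromised endpoint.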
Short explained how this security and service can be further enhanced by combining several services to cover the full range of a company's protections: "Some companies try and have a single solution do everything, yet we have best-in-class services that manage different elements – but they have to be integrated together."
AI has made these practices even more pertinent for companies to adopt. On Zoom, Thacker explained, AI assistants are becoming more common, showing up as meeting participants with no clear indication of who they are working on behalf of. Such an unverified participant could act as a listening tool for malicious actors and leak sensitive data from meetings.
Yet AI remains a growing field of interest for businesses, and so does its governance. "We're seeing interest from companies wanting to see the governance of AI," explained Thacker. Using public AI models, like ChatGPT, however, opens a further route for sensitive data to leak, but integrating AI through "integrated suites" can minimise that risk and preserve the integrity of the companies implementing it.