Access point

Hackers and scammers are increasingly using AI to do their dirty work

It looked like Moussa Faki. It sounded like Moussa Faki. And to the European leaders participating in the video calls, it certainly seemed as if they were talking to the chairperson of the AU Commission… Except they weren't. It was a deepfake scam, perpetrated by cybercriminals using phoney email accounts and elaborate, AI-enabled video adjustments in an attempt to set up meetings with some of Europe's leading diplomats.

It wasn't the first time cybercriminals had used AI and deepfakes to pull off a scam. In 2019, the CEO of a UK-based energy company received what he thought was a phone call from his boss, the CEO of the firm's German parent company, instructing him to transfer EUR220 000 into a Hungarian supplier's bank account. You can probably guess what happened next. Although the voice had the right accent and the right 'melody' (according to the firm's insurance company), it was an AI deepfake. Before anyone was any the wiser, the money had been redirected from Hungary to Mexico to … well, who knows where?

And who knows where cybercriminals might strike next? In its September 2023 Cyber Safety Pulse report, cybersecurity firm Norton notes that 'as our digital lives continue to expand, the threat landscape also evolves, presenting new challenges to people worldwide'. Norton's most recent telemetry data reveals that scams are now the most significant cyberthreat, and that they have reached an all-time high. In fact, the report states, 'our data shows scams, phishing and other forms of human manipulation make up more than 75% of all digital threats'.

Genie, the company's newest scam-detection tool, tells the same story. 'We've found scammers are leaning on old methods to lure victims, but they now have a more sophisticated arsenal at their disposal to make these schemes more realistic,' the Norton report states. 'Leveraging advancements in AI, criminals are creating scams that are not only more credible but alarmingly real. This abuse of technology paints a grim picture, where the lines between reality and fabrication are becoming increasingly blurred, ushering in scams that are more convincing and harder to detect.'

Norton's report warns that small businesses are particularly vulnerable to what's known as 'business email compromise' scams. 'Cybercriminals impersonate company executives or vendors, manipulating employees into transferring funds or revealing sensitive information,' it states, adding that 'the use of deepfake audio and video with the help of AI is boosting these attacks'.

It's worth noting here that the AU Commission is by no means a 'small business'. No organisation, no matter how large, is ever completely safe. Then again, it has always been this way. 'Ransomware and cybercrime are nothing new,' according to Kate Mollett, senior director for data protection firm Commvault Africa. But, she says, 'the sophistication of attacks is unprecedented and is constantly increasing, thanks to artificial intelligence and other next-generation technologies. Today, organisations must deal with a whole new level of cyberthreat. Attacks can happen in minutes, and target not just production data but back-ups as well, leaving you with little choice other than to pay in the hope of getting your data back and your business operational again.'

Ironically, while AI is supercharging cybercriminals' ability to compromise security, cybersecurity firms are also using AI to supercharge their defences.
AI can automate repetitive tasks such as data collection and analysis, systems management, vulnerability detection and more, providing context for the security information it displays along with suggestions for how best to respond. When it comes to cybersecurity, AI is on the frontlines of the fight against AI.

'Cybercriminals are using AI to make their attacks more effective and efficient, automating the process of uncovering and exploiting security flaws in networks, systems and applications,' says Saurabh Prasad, senior solution architect at tech consultancy In2IT Technologies. 'In addition, AI-based tools can be used to launch automated attacks and gain unauthorised access to systems. To counter this growing threat, organisations need to make use of AI themselves to implement robust security measures, policies and procedures.'

When it comes to cyberthreats, all companies are vulnerable – regardless of their size – with cybercriminals becoming ever more innovative

Prasad says that AI has powerful potential when it comes to enhancing cybersecurity, 'with machine learning algorithms able to quickly and accurately detect malicious patterns, malicious activity, anomalies and outliers that would have been almost impossible to discover before. AI-enabled technologies can detect intrusions or malicious activities across multiple networks and applications, identify potential malware that has never been seen before, and spot sophisticated phishing and ransomware attacks'.

In effect, AI-enabled cyber defence provides a mirrored response to AI-enabled cyberattacks. AI-powered intrusion detection systems use machine learning to monitor network traffic for suspicious patterns and malicious activities. Meanwhile, AI can perform what Prasad describes as 'user and entity behaviour analytics', learning from user and entity data such as authentication logs, system activities and access control lists to detect suspicious activity.

'By automating certain processes, AI can help reduce security workloads and allow organisations to focus on more strategic elements of their security efforts,' says Prasad. 'This can include AI-powered automated patching, which can track and patch software in real time and dramatically reduce potential exposure to cyberattackers. AI can also automatically identify anomalies in network traffic, improving a company's ability to detect malicious activities. It can be used to detect malicious payloads, suspicious domain communication and unexpected application activity.'

It's the ability to do the drudge work, quickly, that sets AI apart. According to at least one estimate, a new organisation is hit by ransomware every 14 seconds. Another estimate puts it at one cyberattack every 39 seconds. All anybody can say for sure is that in the time it's taken you to read this paragraph, at least one business somewhere in Africa has been hit by a cyberattack. Whether it was able to defend itself appropriately depends largely on its ability to handle massive amounts of information in a matter of seconds.

'One of the most important elements of effective cyber defence today is the ability to sift through the millions of signals to find the alerts that represent a real threat,' says Mollett. 'This requires an intelligent and proactive security approach.' She points to Metallic ThreatWise as an example of this intelligent, AI-powered security. 'It places a series of intelligent decoys that look and feel like real, valuable data,' she explains. 'These span your production and back-up environments, and if an attacker trips one, an alert signal will be sent. ThreatWise also incorporates artificial intelligence to deliver enhanced threat insights and security policies that help you act faster and keep pace with attackers. This real-time intelligence helps you to quickly tighten the perimeter, deploy evasive manoeuvres and secure the back-ups.'
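The decoy approach Mollett describes is, at its core, a digital trip-wire: plant files that no legitimate user or process should ever touch, then raise the alarm the moment one is disturbed. The sketch below illustrates that principle only; it is not ThreatWise, and the decoy file names and alert handling are invented for the example.

```python
import os
import time

# Hypothetical decoy files; the names are invented for this sketch and
# chosen to look like valuable data. A real deployment would scatter
# decoys across production and back-up volumes.
DECOYS = ["payroll_2024_final.xlsx", "customers_backup.sql"]

def plant_decoys():
    """Create the decoy files and record a baseline fingerprint for each."""
    baseline = {}
    for path in DECOYS:
        with open(path, "w") as f:
            f.write("decoy")  # contents are irrelevant; access is the signal
        stat = os.stat(path)
        # st_mtime catches writes; st_atime catches reads, although many
        # filesystems mount with relatime, which delays atime updates
        baseline[path] = (stat.st_mtime, stat.st_atime)
    return baseline

def watch(baseline, interval=5):
    """Poll the decoys and print an alert if anything touches one."""
    while True:
        for path, fingerprint in list(baseline.items()):
            try:
                stat = os.stat(path)
            except FileNotFoundError:
                print(f"ALERT: decoy {path} was deleted")
                del baseline[path]  # stop re-alerting on a removed decoy
                continue
            current = (stat.st_mtime, stat.st_atime)
            if current != fingerprint:
                print(f"ALERT: decoy {path} was read or modified")
                baseline[path] = current
        time.sleep(interval)

if __name__ == "__main__":
    watch(plant_decoys())
```

Commercial deception platforms go much further, emulating credentials, services and entire network shares, but the trip-wire principle is the same: a decoy generates no false positives from normal business activity, so any touch is worth an alert.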
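As for the anomaly-spotting Prasad describes and the signal-sifting Mollett calls essential, the underlying idea can likewise be sketched in a few lines. This illustrative example trains scikit-learn's IsolationForest on made-up login records; the features, numbers and thresholds are assumptions for demonstration, not a production pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative feature vectors derived from authentication logs:
# [hour of login, MB transferred in session, failed attempts before success]
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around mid-morning
    rng.normal(50, 15, 500),  # typical session data volumes
    rng.poisson(0.2, 500),    # the odd mistyped password
])

# Train on ordinary behaviour so the model learns what "normal" looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score new sessions, e.g. a 3am login moving 900 MB after 8 failed attempts.
suspicious = np.array([[3, 900, 8]])
routine = np.array([[11, 45, 0]])
for session in (suspicious, routine):
    label = model.predict(session)  # -1 flags an outlier, 1 looks normal
    print(session[0], "->", "ALERT" if label[0] == -1 else "ok")
```

In practice such a model would be retrained continuously and its alerts fed into a triage workflow, so that analysts see the handful of signals that matter rather than millions of raw events.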
And if 'tighten the perimeter' and 'deploy evasive manoeuvres' sound like the language one might use in a military setting, make no mistake – now that cybercriminals have AI in their arsenal, businesses are, in a sense, at war.

However, Prasad cautions against over-reliance on AI to save the day. 'AI is a powerful tool, but relying too heavily on it can lead to a false sense of security, blinding organisations to security threats and unwanted activities occurring on their networks,' he warns. 'In addition, AI is only as reliable as the data on which it is built. If the data sets used to train AI are biased and/or incomplete, then the intelligence will be biased and/or incomplete as a result. AI systems also need to be protected from data breaches and malicious actors who could use the data to leverage an attack.'

But in all the talk of AI and machine learning, it's worth remembering that the biggest cybersecurity threat is not robots… It's people.

'Ask any cybersecurity specialist about their biggest network safety concern, and it's likely that they'll answer "the human element",' says Carey van Vlaanderen, CEO of digital security firm ESET Southern Africa. 'No matter how resilient or intelligent the cybersecurity solution, it can only be as effective as its weakest link, and people are always a risk.'

Van Vlaanderen estimates that almost 88% of data breaches are caused by an employee mistake, whether it's recycling passwords, losing a company laptop or intentionally overriding company security policies. 'Chief security officers, CIOs and individuals in similar positions of responsibility spend a lot of their time worrying not about technology, but about people,' she says. 'Humans make mistakes. Humans are also capable of negligence, unfortunately.'

So while AI is accelerating the threat of deepfake phone calls and hyper-complex cyberattacks, the biggest chink in the cybersecurity armour remains what it has always been. 'Data leaks that arise because of human error – such as failure to update security patches or correctly configure servers with known vulnerabilities – are on the rise and now occur almost as frequently as direct security attacks. Then there are insider threats, which are unimaginably difficult to detect,' says Van Vlaanderen. 'From malicious employees to an employee whose credentials have been compromised, all of these vulnerabilities share a common root – humans.'

By Mark van Dijk

Images: Gallo/Getty Images