15 Nov 2024

Confronting the cybersecurity headache

With the increased use of AI, fraudsters have become more sophisticated, using deepfakes. To tackle this, Safaricom is using machine learning and AI to detect fraud earlier and stay ahead of the fraudsters.

On 19 July 2024, users of 8.5 million Windows devices across the world encountered the infamous Blue Screen of Death.

This blue screen typically means a Windows computer has crashed so badly that it is unusable, or will take a long time to fix.

Last July, those 8.5 million devices were less than one percent of all Windows machines, but they were an important one percent: they ran airlines and airports, public transport, healthcare, financial services, and media and broadcasting.

The chaos that followed is the stuff of nightmares for anybody running a company that depends on technology and computer efficiency to serve its customers.

Fortunately for Windows users in July, the failure was not caused by a cyberattack but by a faulty update from CrowdStrike, a security vendor whose software runs on Windows.

When you combine the likelihood of occurrence and the severity of the impact it would have, says Safaricom CEO Peter Ndegwa, cybersecurity occupies the top right-hand corner of the risk matrix. For him, it is one of the top three risks.

“Whether you have managed it or not, it doesn’t really matter,” he says.

The reason it doesn’t matter whether it has been managed is that, as the network business that keeps Kenya connected and the operator of M-PESA, the financial service that keeps Kenya going, the company cannot afford to be offline.

“The problem with a cyber-attack is it comes from unknown avenues, and therefore, you’re not sure when it will show up, how it will show up, and the impact it will have,” Ndegwa said at the close of a two-day cybersecurity summit in Nairobi in October.

Cynthia Kropac, the Chief Enterprise Business Officer at Safaricom, put her assessment of cybersecurity risk more bluntly: “Any threats that come in through cyber-attacks have the ability to not only wipe a company out financially but also reputationally.”

And with good reason.

According to the Communications Authority of Kenya (CA), firms offering financial services were the target of 90 percent of the 860 million cybersecurity threats recorded over the past year. That works out to an average of more than two million threats a day.

“This is firstly because that is where the money is. Secondly, it is because to access financial services, you must share personally identifiable information, and right now, data is gold,” said Nicholas Mulila, the Chief Corporate Security Officer at Safaricom.

While going digital is the future, the cybersecurity risk is real, said Mulila.

For Peter, Cynthia and others at the top, AI is a possible tonic.

At Safaricom, for example, AI has proven useful in analysing the vast volumes of transaction data flowing through M-PESA to identify and block fraudsters.

Fraud on M-PESA is mostly perpetrated through social engineering, where fraudsters manipulate individuals into revealing the details that fill gaps in data they have already stolen. A typical example is getting a customer to state the missing digits of their phone number or identity card details, then using the completed profile to swap their SIM.
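The SIM-swap pattern described above suggests one simple defensive rule: treat high-value transactions made soon after a SIM swap as suspect. The sketch below illustrates the idea; the threshold and time window are hypothetical, not Safaricom's actual rules.

```python
from datetime import datetime, timedelta

# Hypothetical rule: hold large transfers made shortly after a SIM swap,
# since fraudsters typically cash out quickly once they control the line.
SWAP_WINDOW = timedelta(hours=48)   # illustrative window
AMOUNT_LIMIT = 10_000               # KES; illustrative threshold

def should_hold(txn_time, txn_amount, last_sim_swap):
    """Return True if the transaction should be held for verification."""
    if last_sim_swap is None:
        return False
    recently_swapped = txn_time - last_sim_swap < SWAP_WINDOW
    return recently_swapped and txn_amount >= AMOUNT_LIMIT

swap = datetime(2024, 11, 1, 9, 0)
print(should_hold(datetime(2024, 11, 1, 15, 0), 25_000, swap))   # large, hours after a swap: hold
print(should_hold(datetime(2024, 11, 10, 15, 0), 25_000, swap))  # swap long past: allow
```

In a real deployment such a rule would sit alongside many other signals, but it captures why the SIM swap itself, not just the transaction, is the event worth watching.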

With the increased use of AI, fraudsters have become more sophisticated, using deepfakes. To tackle this, Safaricom is using machine learning and AI to detect fraud earlier and stay ahead of the fraudsters.

AI can detect threats, analyse behaviour and trigger automated responses at machine scale.
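The kind of behavioural analysis this refers to can be illustrated with a toy anomaly check: score each new transaction against a user's own history and flag sharp deviations. All names and thresholds here are hypothetical; Safaricom's actual models are not public.

```python
from statistics import mean, stdev

def flag_anomalies(history, new_amounts, z_threshold=3.0):
    """Flag transactions that deviate sharply from a user's past behaviour.

    history      -- the user's past transaction amounts
    new_amounts  -- incoming transactions to score
    z_threshold  -- how many standard deviations counts as suspicious
    """
    mu, sigma = mean(history), stdev(history)
    flagged = []
    for amount in new_amounts:
        z = (amount - mu) / sigma if sigma else 0.0
        if abs(z) > z_threshold:
            flagged.append(amount)  # route to automated response or review
    return flagged

# A user who normally moves 1,000-2,000 KES suddenly sends 50,000 KES:
history = [1200, 900, 1500, 1100, 1800, 1000, 1300]
print(flag_anomalies(history, [1400, 50000]))  # only the 50,000 stands out
```

Production systems use far richer features (time of day, device, recipient network), but the principle is the same: the model learns each user's normal and reacts to departures from it automatically.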

But AI is a double-edged sword in cybersecurity, said some at the summit: use it carelessly, and you open yourself to attack; fail to use it, and you cannot defend against attackers who do.

“There is FraudGPT on the dark web,” said Ramakrishna Balagopa, Vice President of Business for EMEA at SISA Infosec.

FraudGPT is a product sold on the dark web that enables bad actors to undertake cyberattacks. It works much like ChatGPT, the key difference being that it has no built-in controls to reject malicious requests.

Balagopa also sees the majority of the risk, which he puts at 60 per cent, coming from insiders in an organisation. “With AI, a rogue insider can harvest company data. There is also a threat from the increased use of generative AI tools like ChatGPT, since employees do not know where the data they are inputting is going,” he said.

A good example of the magnitude of the insider threat comes from the world of savings and credit societies (Saccos), whose regulator, the Sacco Societies Regulatory Authority (Sasra), reported in September 2023 that Saccos had lost over KES 200 million to internal fraud in the preceding two years.

A further bombshell was the revelation that officials of the Kenya Union of Savings and Credit Co-operatives (Kuscco) had fleeced the entity of KES 6.56 billion between February 2013 and April 2024.

George Wanjohi, Chief Information Security Officer at Co-operative Bank of Kenya, recommends several ways to mitigate internal risk: “Least privilege access, where you only grant access to the segment of the ecosystem that a user needs. Pervasive logging and monitoring, where you log and monitor as much as possible. Use of machine learning behavioural analytics to flag suspicious actions.”
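The least-privilege and pervasive-logging controls Wanjohi describes can be sketched in a few lines. This is a toy illustration; the role names and resources are hypothetical, not Co-operative Bank's actual setup.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# Least privilege: each role is granted only the segments of the system it needs.
ROLE_PERMISSIONS = {
    "teller":       {"customer_lookup", "deposit"},
    "loan_officer": {"customer_lookup", "loan_book"},
    "auditor":      {"audit_log"},
}

def authorize(user, role, resource):
    """Allow access only if the role explicitly grants the resource, and log
    every attempt so behavioural analytics can review it later."""
    allowed = resource in ROLE_PERMISSIONS.get(role, set())
    logging.info("user=%s role=%s resource=%s allowed=%s",
                 user, role, resource, allowed)
    return allowed

authorize("jkamau", "teller", "deposit")    # permitted, and logged
authorize("jkamau", "teller", "loan_book")  # denied, and logged for review
```

The log line on every attempt, allowed or not, is what makes the third control possible: a behavioural model can only flag a teller probing the loan book if the probe was recorded in the first place.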

While insiders with IT expertise can be deliberate bad actors, Eugene Wadeya, Systems Security Lead at Stima Sacco, reckons that most non-IT people in an organisation are more likely to inadvertently fall for social engineering or phishing.

“We need to invest more in cyber awareness and training. The bigger part of the organisation is going to believe that email that’s telling them that this is an 80 per cent discount on Bolt, and they’re going to click the link and share the credentials and compromise the system,” he said.

Experts at the summit agreed that the focus should be on company-wide training to improve cyber hygiene, and a mantra of “trust but verify” when using everyday AI tools for work.

Eugene of Stima Sacco said they have found a combination that works.

“To enable us to detect fraud, we have opted for in-house, custom-built cybersecurity solutions that really leverage AI to reduce the noise and increase the scope of the transaction data we can see. This enables us to map patterns, note inconsistencies and through the same AI we can identify and react much faster to fraudulent transactions,” he said.

Rosemary Koech, who heads Data Protection at KCB Group, warned companies against rushing headfirst into the uncharted waters of AI in cybersecurity and data.

“It is great that you are leveraging AI even for risk management, but at the end of the day, always remember that it doesn’t matter what technology you have, there are regulations that give power back to the human being, and that must always be adhered to,” she said.
