Finland’s F-Secure is advocating threat-hunting as a way for organisations in Asia to stay ahead of cyber attacks.
Its head of Asia-Pacific and Japan, Keith Martin, told Computer Weekly in an exclusive interview that, last year alone, F-Secure Labs’ global network of honeypots saw a fourfold increase in attack and reconnaissance traffic.
“Very little of this traffic is the result of manual human activity,” said Martin. “In fact, 99.9% of what we see is from bots, malware and other automated tools. Of course, it’s humans who create these tools and configure them, but the sheer number of attacks – in the hundreds of millions – is made possible by automation.
“The good news is that attack detection has improved considerably over the past few years and continues to improve. However, there is still a large gap between an attack being detected and the appropriate actions being taken to remediate it. As you can imagine, the quicker the response, the less damage caused.”
Martin said that, at present, it takes an average of 69 days to respond to an attack once it has been detected.
“The cost to resolve a breach is €18,000 (US$20,000) per day, not counting the associated costs of system downtime, recovering lost or compromised data, restoring business-critical functions, paying regulatory fines and managing both public relations and an increase in customer queries,” he said.
“My advice for businesses is to assume they have been breached and to tackle this with threat-hunting, which is proactive detection and response conducted by a skilled team trained in the attacker mindset. This new approach to cyber security has a greater impact on the bottom line.”
F-Secure last year acquired MWR InfoSecurity, a cyber security consultancy that Martin said has brought a “whole new dimension to F-Secure’s portfolio of products and services”.
He added: “MWR InfoSecurity has highly skilled experts in offensive techniques who understand the attacker mindset and is well known in the industry for its technical expertise and research.
“Keeping pace with the changing attacker mindset is a huge challenge for businesses today.”
Noting that advanced persistent threats are no longer the sole remit of state-sponsored attackers, Martin said many criminal groups and hacktivists also deploy the same tactics, techniques and procedures, with variants of state-sponsored malware and zero days being deployed throughout the threat landscape.
“When you’re up against such adversaries, you need a strategy in place to respond to an attack at whichever stage you detect it happening,” he said. “Proactive threat-hunting will catch these attacks in the early stages.”
Martin added that a clear response strategy will ensure the right people are equipped with the right information to make the right decisions. This should be defined before it is needed to ensure a swift and competent response when the time comes.
Is AI a good fit for cyber security?
Meanwhile, Luke Jennings, chief research officer at F-Secure’s Countercept division, believes that both artificial intelligence (AI) and machine learning will continue to enhance specific areas of cyber security.
“AI is a good fit for specific areas of cyber security – but not for others,” said Jennings, adding that successful solutions will be focused on enhancing the productivity of skilled cyber security professionals.
Jennings said AI should aid and enhance – rather than replace – skilled threat hunters and incident responders.
Threat hunting has always centred on the human element of threat detection and response, because even the best automation and AI-based tools in the world can be evaded, said Jennings.
“This is a general rule for any new technology, which works best when it significantly enhances the productivity and quality of human labour, rather than completely replacing it – a symbiotic relationship favouring augmentation over replacement.”
Jennings pointed out that for many use cases in the detection and response space, non-AI-based technology remains the most effective option.
For example, he said, expert-crafted rules and simple statistics for anomaly detection can go a long way under the control of defence teams, known in cyber security parlance as blue teams.
AI and machine learning, on the other hand, excel when detection use cases do not lend themselves well to fixed rules or when data patterns are very complex, he said.
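The “simple statistics” that Jennings describes can be as basic as flagging values that sit several standard deviations away from an established baseline. As a hypothetical sketch (the failed-login scenario, threshold and data below are illustrative assumptions, not anything F-Secure has described):

```python
# Hypothetical sketch: flagging anomalous daily failed-login counts with a
# z-score -- the kind of simple, rule-driven statistic a blue team can tune
# and reason about without any machine learning.
from statistics import mean, stdev

def zscore_anomalies(counts, threshold=3.0):
    """Return indices of values more than `threshold` standard deviations from the mean."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# 29 ordinary days of failed-login counts, then one spike on the final day
history = [12, 15, 11, 14, 13, 12, 16, 14, 13, 12,
           15, 11, 14, 13, 12, 16, 14, 13, 12, 15,
           11, 14, 13, 12, 16, 14, 13, 12, 15, 400]
print(zscore_anomalies(history))  # only the final day (index 29) is flagged
```

Because the rule is explicit, an analyst can explain exactly why an alert fired – the transparency that, as Jennings notes below, AI-based detections often lack.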
For example, said Jennings, although there is nothing inherently malicious about a user accessing a system, a person who has gone outside the usual behaviour profile may be worth investigating. “AI can help identify that to allow an expert team to investigate properly,” he added.
Jennings noted that one of the challenges with applying AI and machine learning in threat detection and response is that cyber security is, by nature, an adversarial problem – and attackers will go all out to deceive an AI.
Although the same argument applies to humans, the added difficulty with AI is that it is often hard to understand which criteria the algorithm has deemed important in making its decisions, he said.
“In the more general space, there is research showing how image recognition algorithms can be tricked into misclassifying images of animals as different animals,” he added.
“The alarming point here is that the changes made to the image are imperceptible to humans, and yet can cause an algorithm to classify a panda as a gibbon.”
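The panda-to-gibbon result Jennings refers to exploits the gradient of the model itself: nudging every input dimension a tiny amount in the direction that most increases the wrong class’s score. A toy illustration of that idea on a linear classifier (the weights, input and threshold here are invented for the example; a real attack targets an image network):

```python
# Toy illustration of the gradient-sign idea behind adversarial examples.
# For a linear classifier, shifting each of n inputs by a tiny epsilon in the
# direction of its weight moves the score by epsilon * sum(|w|) -- with many
# dimensions, an imperceptible per-pixel change can flip the classification.

def score(w, x, b=0.0):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

n = 1000                 # number of "pixels"
w = [0.1] * n            # classifier weights
x = [0.5] * n            # a clean input
threshold = 51.0         # score below threshold => class A, above => class B

epsilon = 0.02           # tiny change per pixel (4% of each pixel's value)
x_adv = [xi + epsilon * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print("clean:", "A" if score(w, x) < threshold else "B")
print("adversarial:", "A" if score(w, x_adv) < threshold else "B")
```

Each individual pixel barely changes, yet the 1,000 small shifts add up to move the score past the decision boundary, which is why such perturbations are invisible to humans but decisive to the model.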
The other key problem with AI and cyber security is that “ultimately, we are trying to judge malicious intent”, said Jennings.
“There is nothing in the data that measures the emotional intent of a malicious actor, and so it is something we try to infer from other data,” he said. “Is a large data transfer spike at 2am an attacker stealing an entire database – or is it a routine database backup?
“AI will never be able to make decisions about this on its own, so there will always be a fundamental requirement for skilled humans who can operate outside the bounds of the data that is available.”
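Jennings’ backup-versus-exfiltration example shows why automated triage can narrow the question but not answer it. A hypothetical rule (the maintenance window, backup server address and thresholds below are assumptions an analyst would define, not a real product feature) might look like:

```python
# Hypothetical triage rule for the "2am spike" example: volume anomalies that
# match a known backup window and destination are deprioritised; everything
# else is escalated, because intent cannot be read from the data alone.
from datetime import time

BACKUP_WINDOW = (time(1, 0), time(3, 0))   # assumed scheduled-backup window
BACKUP_DESTINATIONS = {"10.0.0.5"}         # assumed known backup server

def triage_transfer(start, dest_ip, gigabytes, baseline_gb=5.0):
    """Classify a large transfer as routine, or escalate it to a human."""
    if gigabytes <= baseline_gb:
        return "ignore"
    in_window = BACKUP_WINDOW[0] <= start <= BACKUP_WINDOW[1]
    if in_window and dest_ip in BACKUP_DESTINATIONS:
        return "likely-backup"
    return "escalate-to-analyst"

print(triage_transfer(time(2, 0), "10.0.0.5", 120.0))     # likely-backup
print(triage_transfer(time(2, 0), "203.0.113.9", 120.0))  # escalate-to-analyst
```

Even here, the final “escalate” branch hands the decision to a person: the rule encodes context a human chose, and a human still judges the intent behind whatever it surfaces.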