One area where AI and machine learning can be applied is in the ongoing fight against cybercrime. AI provides a significant advantage over traditional systems, which rely on static rules to identify patterns and flag suspicious activities. AI, on the other hand, is always learning—using intelligent algorithms to detect and identify new attack patterns and incorporating that knowledge to flag cyberattacks more quickly and with much higher accuracy.
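The contrast between the two approaches can be made concrete. The sketch below is purely illustrative (it is not any vendor's actual system): a static rule matches only a fixed signature, while a toy adaptive detector re-scores URL tokens as new labeled examples arrive, so it can flag a pattern it was never explicitly programmed to catch. All URLs and names are hypothetical.

```python
from collections import Counter

# Static rule: a fixed signature chosen in advance. It never updates.
def static_rule(url: str) -> bool:
    return "examplebank-login" in url

# Minimal adaptive detector: token scores learned from labeled URLs.
class TokenScorer:
    def __init__(self):
        self.phish_tokens = Counter()
        self.legit_tokens = Counter()

    @staticmethod
    def _tokens(url):
        # Split a URL into crude tokens on common separators.
        for sep in "/.-_?=":
            url = url.replace(sep, " ")
        return url.split()

    def update(self, url, is_phish):
        # Incorporate a newly labeled example into the model.
        counter = self.phish_tokens if is_phish else self.legit_tokens
        counter.update(self._tokens(url))

    def is_suspicious(self, url):
        # Flag a URL whose tokens lean toward the phishing vocabulary.
        score = sum(self.phish_tokens[t] - self.legit_tokens[t]
                    for t in self._tokens(url))
        return score > 0

scorer = TokenScorer()
scorer.update("http://examplebank.com/account", is_phish=False)
scorer.update("http://verify-examplebank.xyz/login", is_phish=True)

# A new attack variant the static rule has never seen:
new_attack = "http://examplebank-verify.xyz/secure"
```

Here the static rule misses the new variant because its signature doesn't match, while the token scorer flags it from overlap with the earlier labeled phishing URL. Real systems use far richer features and models, but the structural difference is the same.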
While AI makes media headlines almost daily, adoption of the technology is still in its nascent stages; a recent study by Accenture and Ponemon noted that in the financial services industry, just over a quarter of firms have deployed AI-based security solutions.
But there’s a defensive case to be made for AI investment, too. A February 2018 report from the Center for a New American Security warned that cybercriminals may soon be leveraging AI and machine learning to increase the scope and efficiency of their attacks. “A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets,” the report found.
Case study: Phishing
One of the older (but no less effective) fraud tactics commonly targeting financial institutions is phishing, where criminals typically use email to solicit sensitive information from their victims, such as passwords, Social Security numbers or financial information. Phishing emails often include a seemingly legitimate URL that directs victims to an external website where the information can be collected. According to the Accenture/Ponemon study, phishing and social engineering scams are the second most costly type of cyberattack facing the financial industry, costing an average of $196,610 per attack.
Phishing has been a tried-and-true fraud tactic for many years using traditional means, but what happens if attackers begin to enhance their methods using AI?
In a recent experiment, a team of researchers at Easy Solutions (which ABA endorses for its anti-phishing and digital threat protection services) attempted to answer that question. Analyzing several million phishing URLs, the team identified several individual threat actors targeting a single institution and analyzed their performance against the bank’s phishing defense systems.
“The average attacker will have a success rate of about 0.3 percent, meaning that our systems are blocking 99.7 percent of their phishing URLs,” says chief data scientist Alejandro Correa Bahnsen. He adds that the team observed higher success rates among some of the more sophisticated attackers—up to 5 percent in one case.
Researchers then assumed the role of hackers, creating an AI URL generator that could create unique phishing URLs. Using this technique, they found that their penetration attempts were significantly more successful; in one case, attack efficiency increased from 0.69 percent to 20.9 percent—a 3,000 percent increase. In another, AI boosted the success rate from 5 percent to 40 percent.
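The article doesn't describe how the researchers' URL generator was built. As a purely illustrative sketch of the underlying idea, not their system, the toy below trains a character-level Markov chain on a handful of known lookalike URLs and samples novel variants from it; each generated URL recombines fragments of the training set, so no static blocklist of the originals would match them. All domains shown are hypothetical.

```python
import random
from collections import defaultdict

def train_markov(urls, order=3):
    """Build a character-level Markov model from example URLs."""
    model = defaultdict(list)
    for url in urls:
        padded = "^" * order + url + "$"  # ^ marks start, $ marks end
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate_url(model, order=3, max_len=60, rng=None):
    """Sample a new URL one character at a time from the model."""
    rng = rng or random.Random()
    state = "^" * order
    out = []
    while len(out) < max_len:
        choices = model.get(state)
        if not choices:
            break
        ch = rng.choice(choices)
        if ch == "$":  # end-of-URL marker reached
            break
        out.append(ch)
        state = state[1:] + ch
    return "".join(out)

# Toy training set of lookalike-domain URLs (all hypothetical).
seed_urls = [
    "http://secure-login.examplebank-verify.com/account",
    "http://examplebank-secure.com/login/verify",
    "http://login-examplebank.net/secure/account",
    "http://verify-examplebank.com/secure/login",
]

model = train_markov(seed_urls)
rng = random.Random(42)
samples = {generate_url(model, rng=rng) for _ in range(20)}
```

A Markov chain is the simplest possible generator; the point is only that even a trivial model emits plausible-looking variants at machine speed, which is the property that makes AI-generated URLs hard for signature-based defenses to keep up with.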
The potential of AI to enhance a cyberattack is significant, but that doesn’t mean that the fight against fraud is a lost cause. “The conclusion should not be that we are doomed as soon as attackers start using AI,” Correa Bahnsen says. Rather, banks need to ensure that they’re incorporating knowledge of these attack techniques into their own phishing detection systems.
While we’re still on the cutting edge of defensive AI technology, Correa Bahnsen notes that it will soon be a must-have for banks. “Right now, if someone doesn’t have AI, in most cases they have some set of rules put together in order to detect any traditional fraud. A very simple [AI-driven attack] is going to be able to bypass those.”