By Craig Colgan

If the names of these hacker groups sounded any less ridiculous, they would not be real hackers, or so the thinking goes. Their capabilities, though, are anything but juvenile.
In December, the Treasury Department took action against the Russia-based cybercriminal organization calling itself Evil Corp. The group is responsible for breaking into systems in 300 banks and financial institutions in more than 40 countries, resulting in more than $100 million in theft, Treasury noted.
As the scale of such breaches continues to grow, banks are hiring their own specialty teams or contracting with vendors, all with one mission: think like a constantly changing set of globally active bad actors. More banks are now running formal attack simulations against their own systems, and a few are working with actual hackers (the non-criminal, or "white hat," kind) at various scales, seeking to benefit from the mindset that drives the banking industry's own cyber-invaders.
“I could compare it to an arms race,” says Nicholas Antill, SVP and senior security manager at PNC Bank, which has grown its vulnerability testing teams in recent years. “We are constantly improving what we do. And as banks become better at security, cyber criminals must improve their skill set to attack banks. It is constant on both sides.”
A common security strategy that involves targeting your own network is penetration testing, or pentesting: attacking an individual application or network to hunt for weaknesses, with the aim of locating security issues that other methods may miss.
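In practice, much of a pentester's early work is scripted reconnaissance, such as checking which network services a target actually exposes. The sketch below shows the idea in Python; the hostname and port list are hypothetical, and checks like this should only ever be run against systems you have explicit permission to test.

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        return s.connect_ex((host, port)) == 0

def scan(host: str, ports: list) -> list:
    """Return the subset of ports that accept a TCP connection."""
    return [p for p in ports if port_is_open(host, p)]

# Hypothetical usage, against a host you are authorized to test:
# scan("test.example.internal", [22, 80, 443, 8080])
```

An open port is not itself a vulnerability, but an unexpected one (a forgotten admin interface, say) is exactly the kind of finding automated scanners and human testers alike start from.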
The next level of self-targeting is "red team" testing, which can be executed in various more formal scenarios to simulate attacks against the company's own defensive "blue team," on a wider scale across the enterprise than pentesting. Red-team testers sometimes adopt the tactics and techniques of a specific, known threat actor to achieve a specific objective against a chosen target, says Caroline Wong, chief strategy officer at Cobalt.io, a security testing firm.
“I recommend banks perform penetration testing first, to get a baseline understanding of the types of security vulnerabilities that exist in banking applications, mobile apps, APIs, networks and cloud infrastructure,” Wong says. Red teaming is typically done by banks that are at a higher level of security maturity overall, she adds.
Aaron Shilts, president and COO of NetSPI, a vulnerability assessment firm based in Minneapolis that works with large financial firms, says the value of penetration testing over scanning software is "that you're adding humans to the mix. With red teaming you act as an outside adversary." In designing a test for a client, Shilts asks some basic questions.
“If we were bad guys, you know, what would we use to get in?” he asks. “How could we get in? What do their defenses really look like? With limited information, it’s kind of a good way to simulate how accessible the crown jewels are from the outside.” Red team projects with NetSPI typically would last about a month, Shilts says.
Common vulnerabilities range from outdated code on a machine to more directly human issues, from phishing emails to physical building security.
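The "outdated code" class of finding often comes down to comparing what a machine is running against the minimum patched release. A minimal sketch of that check is below; the software names, version numbers, and inventory format are all illustrative assumptions, not any real bank's data.

```python
# Minimum patched versions a security team might enforce (illustrative).
MIN_PATCHED = {"openssl": (3, 0, 12), "nginx": (1, 24, 0)}

def parse_version(v: str) -> tuple:
    """Turn '1.24.0' into (1, 24, 0) so versions compare numerically."""
    return tuple(int(x) for x in v.split("."))

def outdated(inventory: dict) -> list:
    """Return the names of installed packages older than the patched baseline."""
    return [name for name, ver in inventory.items()
            if name in MIN_PATCHED and parse_version(ver) < MIN_PATCHED[name]]

# Hypothetical host inventory: this would flag the stale openssl build.
# outdated({"openssl": "1.1.1", "nginx": "1.25.3"})
```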
Red team exercises at PNC commonly last two to six months, Antill points out, all carried out by in-house teams. A blue team defends as the red team attacks.
“After we have an exercise, we have a candid conversation with leadership and say, these are the things we need to improve upon,” he says. “These are the weaknesses we found and here’s the actual picture of what our defensive posture looks like against an attacker, who would want to get into PNC’s network and attack it.”
The focus is beyond just making technical, behavioral or physical fixes. “It’s more than just a matter of tuning the tools to detect the behaviors and activities. We want to make sure that the entire chain of people, processes and technology are solid, to detect every time or more consistently,” Antill says.
Those fixes may range from items pushed out in a day to architecture issues that may require larger discussions and decisions over months. PNC has hired quite a few cybersecurity specialists in the last several years with backgrounds in red team work, often from the federal government, he adds. "Our model is better served by having an internal team," Antill says. "The growth of that team speaks to the support we are getting from our executive leadership."
Come on in
One option is to cast a wider net for expertise than an in-house team or even a vendor can provide.
“While they know it is necessary, not all banks or financial services organizations have the resources or can find the talent to perform in-house testing. Crowdsourced security programs provide smaller security teams access to hundreds of thousands of the best ethical hackers in the world,” notes a report from Bugcrowd, a cybersecurity firm that among other things works to connect a range of clients including financial services firms to what it calls the “ethical hacking community.” Meaning literally anyone who thinks he or she can assist a company by alerting it to a vulnerability.
Hackers can go to the Bugcrowd site and search for firms of all types openly inviting their attention. NWB Bank, based in the Netherlands, invites essentially anyone who wishes to have a go at its computer systems. "If you happen to identify a weak spot in one of NWB Bank's ICT systems, we would like to hear from you so that any necessary measures can be taken swiftly," the bank calmly notes on its website. It then lists instructions for emailing details of the discovered problem, suggesting the message be "encrypted if possible, to prevent the information from falling into the wrong hands." NWB points out it will pay cash, depending on the value of the information.
Working with Bugcrowd, National Australia Bank has established a crowd-sourced cyber-testing outreach effort, but it does not pay for information.
“If you believe you have found a security vulnerability with any of our services, we would like you to let us know right away via our Responsible Disclosure Program,” reads the NAB website. “Note that this program rewards with kudos only—no monetary disbursements for findings will be provided.”
Hackers are not always in it for the money, says Casey Ellis, founder and CTO of Bugcrowd. “You know, fundamentally what we are is a community of about 150,000 hackers at this point,” Ellis says of the community his company connects to clients. When getting paid is not an option, other benefits to those spending hours and hours hunting vulnerabilities in systems across the web include potential career connections, the opportunity to learn about new systems and “social recognition,” Ellis says.
Making the most of hired hackers
Inviting hackers to take a crack at your system—whether by hiring internally, contracting with a vendor or by opening yourself up to the entire internet—is fast becoming a formalized process around the world.
The European Central Bank has developed guidelines for banks participating in red team tests in the EU, basically a set of common elements that financial authorities require supervised institutions to follow. Called the European Framework for Threat Intelligence-based Ethical Teaming, or TIBER-EU, the aim is to standardize practices and reduce challenges across borders.
This type of testing is effective for banks, as long as they “actively look to learn from the results, as opposed to just checking a box, or use these types of tests as a way to point blame or to chastise employees,” says Tyler Leet, director of risk, information security and compliance services at CSI, a core banking and cybersecurity provider.
These testing strategies can have other drawbacks, including overwhelming busy in-house security teams, adds Ernesto DiGiambattista, founder of ZeroNorth, a software security company, and a former VP at a large bank. “It’s critical to orchestrate these tools in a way that correlates results and prioritizes them in accordance with business risk,” he says.
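DiGiambattista's point about correlating and prioritizing results can be sketched simply: deduplicate findings reported by multiple tools, then rank what remains by severity weighted by how critical the affected asset is to the business. The field names and weighting scheme below are assumptions for illustration, not any specific product's schema.

```python
def prioritize(findings: list) -> list:
    """Deduplicate tool findings and rank them by business risk."""
    # Deduplicate by (asset, issue) so the same flaw reported by two
    # scanners surfaces only once for the security team.
    merged = {}
    for f in findings:
        merged.setdefault((f["asset"], f["issue"]), f)
    # Rank by severity weighted by asset criticality: a moderate flaw on a
    # payments system can outrank a severe one on a marketing site.
    return sorted(merged.values(),
                  key=lambda f: f["severity"] * f["asset_criticality"],
                  reverse=True)
```

A scheme like this is what keeps a small in-house team from drowning in raw scanner output: the queue they see is already ordered by what matters to the business.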