Bankers should familiarize themselves with privacy as well as the ways in which it manifests at the epicenter of AI and open finance.
By Ryan Miller
There are few topics that will have a bigger impact on public policy in the near term than privacy. It’s an issue at the heart of two areas that are catapulting to the top of bank leaders’ and policymakers’ priority lists: artificial intelligence and consumer-permissioned data sharing/Dodd-Frank Act Section 1033 (better known as open banking or open finance).
Privacy sits at the center of rapidly changing customer expectations and a more forward-leaning regulatory environment. Non-compliance comes at a steep cost, as evidenced by the Irish Data Protection Commission’s recent $1.3 billion fine against Meta, coupled with an order for the company to cease transferring data from the EU to U.S. servers. That decision also illustrates the challenge businesses face in meeting a patchwork of jurisdictional requirements.
Bankers should familiarize themselves with privacy per se as well as the ways in which it manifests at the epicenter of AI and open finance. This will better equip them to identify the ways the privacy landscape will affect their operations, business strategy and policy objectives. This article delves into these topics and concludes with a list of techniques banks can use to mitigate privacy-related risks. These best practices will also help break down silos and foster a more horizontal, cross-functional culture at the organization.
1. Artificial intelligence
Perhaps no topic looms larger in the minds of business leaders, regulators and the public than AI. Specifically, the advent of generative, prompt-based models has proven to be an epochal event in human history. If that sounds hyperbolic, one need only consider the profound impact the technology has already had and the promise of what is yet to come. For example, many companies are scrambling to figure out how to deploy generative AI in a safe and sound manner, while the “Godfather of AI,” Geoffrey Hinton, left his job at Google in order to warn openly of the dangers the technology poses to society.
While AI governance is not strictly a privacy issue, privacy programs and their stakeholders are key to building effective AI controls. As recently observed by the CFPB, Justice Department, Equal Employment Opportunity Commission and Federal Trade Commission, AI models must comply with existing consumer protection and anti-discrimination laws; the absence of AI-specific laws is not an excuse. Existing privacy laws governing the permissibility of data sharing can have a massive impact on AI, as evidenced by Italy’s data protection authority temporarily blocking ChatGPT under the General Data Protection Regulation, or GDPR.
This action demonstrates the importance of satisfying legal and regulatory requirements and the pressing need for government agencies and businesses to communicate effectively, both through official channels and informally. The CFPB also released a report on potential consumer harm stemming from AI-infused chatbots. Additionally, generative prompt-based AI models such as ChatGPT rely on publicly available information scraped from the web, which could violate sites’ terms of service.
Forthcoming European Union regulations on AI will be seminal, likely equaling the impact of GDPR in the privacy world. The regulations take a distinctly European approach, categorizing activities into buckets of inherent risk (whereas the U.S. tends to treat risk as a continuum, focusing more on outcomes). In any event, both approaches are highly likely to have some impact on the usage of ChatGPT and similar offerings.
ABA is active on the AI policy front and is representing member views to the Biden administration as it determines next steps. Until AI-specific rules are in place, practitioners can begin to explore principles and foundations upon which to build. In addition to leveraging the controls associated with the enterprise privacy program and existing fair lending procedures, good places to start are the newly released National Institute of Standards and Technology AI Risk Management Framework and the White House’s Blueprint for an AI Bill of Rights.
2. Consumer-permissioned data sharing/Dodd-Frank Act Section 1033
The CFPB’s Section 1033 rulemaking activity is continuing apace, and CFPB Director Rohit Chopra has acknowledged that privacy is a huge component of the consumer-permissioned data sharing ecosystem. There are major questions around which data elements are being proliferated, how they are being used and with whom they are being shared. Chopra expects the concept of a “permissible purpose” to be a signature feature of the proposed rule, which is scheduled to be issued in October.
These privacy questions dovetail with the existence of data brokers operating in this space. Director Chopra envisions this rule as a way to empower consumers to “fire” their financial services providers and enable usage of transactional data/cash flow to determine a consumer’s ability to pay under a nontraditional underwriting system.
In addition to privacy, security features are also something he intends to bake into the proposed rule. Chopra wants to ensure information is being shared securely, and he is therefore leery of screen-scraping technology, the default means of sharing consumer personal information in the absence of an application programming interface, or API. There is no single law prohibiting screen scraping as such; anti-hacking statutes are the most common legal theory invoked against the practice when the scraping exceeds the authorized level of access. Standard-setting around APIs is crucial to mitigating the risks of information sharing, and the CFPB has signaled it will look to the market to take the lead (albeit while reserving a significant oversight role for itself). In order to understand these highly technical questions, the CFPB is staffing up on technologists. Technology is the next frontier, and regulators need to be ready.
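To make the contrast concrete, the following is a minimal Python sketch of consumer-permissioned API access, the alternative to screen scraping. The endpoint, token and field names are hypothetical assumptions for illustration; they reflect no particular standard.

```python
import requests

# Screen scraping requires the consumer's banking username and password, which
# gives the data recipient the same all-or-nothing access the consumer has. A
# permissioned API instead uses a scoped, revocable access token obtained via a
# consent flow, so login credentials are never shared.

ACCESS_TOKEN = "example-token"  # hypothetical; issued through an OAuth-style consent flow

response = requests.get(
    "https://api.examplebank.com/v1/accounts/123/transactions",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"from": "2023-01-01"},  # limited to the window the consumer permissioned
)
response.raise_for_status()
transactions = response.json()  # only the data elements the consumer agreed to share
```

The design point is that a token can be limited to specific data elements and revoked at any time without the consumer changing a password, a level of control screen scraping cannot offer.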
Chopra has stated that the potential for fraud is on his mind as rulemaking proceeds; while traditional financial institutions such as banks have historically combated fraud, new entrants such as fintech companies may not have the expertise and institutional knowledge to do so effectively. The situation is further complicated by the fact that Chopra has expressed concerns about gatekeeping and is worried that banks and credit unions will cite “fake” reasons for withholding consumer data, ignoring very real questions of risk management and the duty to safeguard personal information.
Interestingly, Chopra said he believes stablecoins will qualify under the definition of digital wallets and as such fall under the scope of the Section 1033 proposed rule. Another intriguing teaser that bears watching is Chopra’s praise of multi-stakeholder enforcement for areas in which agency bailiwicks overlap.
Compliance best practices
Now that banks understand the ways privacy can introduce risk, they may wonder how to control for it. Nebulous areas that lack bright-line rules, or where there is disagreement over which part of the organization “owns” the process, are a common problem in emerging fields. Privacy is a particular challenge because, in order to provide transparency around data practices, a company must actually understand those practices. That is not nearly as simple as it sounds. It requires true cross-functional collaboration: an alliance among business units, technologists, compliance, information security, procurement/third-party risk management and attorneys who are able to flag issues on each other’s behalf. This can be summed up as: Personnel from various workstreams need to become friends.
Individuals with different roles and responsibilities can learn from each other and change the way they think. Operational staff can begin to see their work through the lens of privacy, data governance and cybersecurity. Second-line functions such as compliance and legal can better understand business goals and ask the right questions. The point is to weigh risks realistically and be credible. This can require big personalities, as well as the ability to be the bigger person. Over time, mutual respect and trust are developed and a genuine partnership is formed.
In addition to a core interdisciplinary team, a bank should find someone removed from the day-to-day work who can discern trends and see what is coming over the horizon. That perspective helps future-proof the program. Privacy by design is a solid means of ensuring that the governance structure reflects the values of an institution. The objective is to eliminate troublesome features and focus on being a responsible steward of data, while remaining mindful that there is room for research and product development. A business has to generate revenue, but it can do so ethically.
This approach can also help develop a positive relationship with regulators, who should likewise take heed of the need to work together constructively.
The single most important thing a financial institution can do to take the next step in its privacy program maturity is to conduct data mapping. This is essential from a compliance standpoint for managing all types of data risk, such as privacy, security and records retention. Additionally, it is a base camp from which all sorts of innovative use cases become possible. An organization cannot launch applications requiring a data lake without first knowing where the elements reside.
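To make this concrete, a data map can begin as a simple structured inventory. The Python sketch below is illustrative only; the fields and values are assumptions for this example, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class DataElement:
    """One row in a data map: what the element is, where it lives, who owns it."""
    name: str                    # e.g., "account_number"
    classification: str          # e.g., "PII" or "transactional"
    systems: list[str]           # systems where the element resides
    owner: str                   # accountable business unit
    shared_with: list[str] = field(default_factory=list)  # third-party recipients
    retention_days: int = 2555   # illustrative seven-year retention period

inventory = [
    DataElement("account_number", "PII", ["core_banking", "data_lake"],
                owner="Deposits", shared_with=["aggregator_x"]),
    DataElement("transaction_memo", "transactional", ["core_banking"],
                owner="Payments"),
]

# Once the map exists, questions like "which PII leaves the bank?" become queries:
pii_shared = [e.name for e in inventory if e.classification == "PII" and e.shared_with]
print(pii_shared)  # ['account_number']
```

Even a modest inventory like this gives privacy, security and records-retention stakeholders a shared source of truth to build on.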
Another critical area to explore is the usage of privacy-enhancing technologies, or PETs, which treat data so it is no longer associated with an identifiable person. Terminology varies, but the most common goals are aggregation (high-level data created by compiling individual data sets), anonymization (scrubbing data of personal information so a particular person is not identifiable), de-identification (anonymization plus additional steps to reasonably prevent future re-identification) and pseudonymization (using a token as a stand-in for personal information so an individual is identifiable but not by name).
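As an illustration of the last of these, here is a minimal Python sketch of pseudonymization using a keyed hash, one common implementation choice; the key and the record fields are placeholders for this example.

```python
import hmac
import hashlib

# Pseudonymization replaces a direct identifier with a stable token. A keyed
# hash is deterministic, so the same customer maps to the same token across
# records, but reversing the mapping requires the secret key, which is held
# separately from the tokenized data set.
SECRET_KEY = b"placeholder-key-store-in-a-vault"  # placeholder for illustration

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"customer": "Jane Doe", "balance": 1250.00}
tokenized = {"customer": pseudonymize(record["customer"]), "balance": record["balance"]}
print(tokenized)  # the individual is still distinguishable, just not by name
```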
These concepts are vague, and interpretations vary from tolerating zero risk of re-identification to accepting that some risk will always be present. Even specialized attorneys lack consensus on these issues in the abstract, never mind on what is technically feasible. Nonetheless, PETs are increasingly favored for sharing information with third parties, and the industry is coalescing around “differential privacy” (adding calibrated statistical noise so that no single individual’s data meaningfully changes a published result) as the preferred method.
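To give a sense of how differential privacy works mechanically, below is a minimal Python sketch of the Laplace mechanism, the textbook building block of the technique; the query, count and epsilon value are illustrative.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    """Return a differentially private count via the Laplace mechanism.

    Adding or removing any one person changes a count by at most 1 (the
    sensitivity), so Laplace noise with scale sensitivity/epsilon bounds
    how much any single individual's presence can shift the output.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# e.g., publishing "how many customers overdrafted last month?" with a privacy guarantee
print(dp_count(4217, epsilon=0.5))
```

A smaller epsilon means more noise and a stronger guarantee; production deployments also track a cumulative privacy budget across repeated queries.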
Best practices in privacy and other innovative spaces require a move away from the traditional regime of periodic assessments and audits toward real-time monitoring and corrective action as necessary. This flows into third-party risk management, as vendors are an extension of the company. Trust is important and should constantly be reassessed. Benchmarking against industry peers is also key to gauging actual risk exposure. Further, it is imperative to document inputs and outputs to keep senior leadership informed and to ensure appropriate levels of resources. Banks uncertain of where to start should consult the NIST Privacy Framework.
Privacy is clearly at the forefront of new and exciting applications. ABA stands ready to assist banks in developing a strategy for these emerging areas, while vigorously advocating for a nurturing policy environment in which to pursue them. For more information, please reach out to [email protected].
Ryan Miller is VP and senior counsel for innovation policy at the American Bankers Association.