Life comes at you fast, and that’s especially true for CISOs grappling with the many compliance risks in cybersecurity. From greater regulatory pressures and heightened privacy standards to increased personal liability, topped off with new rules for artificial intelligence — a lot happened in 2023. With these changes come implications for you in 2024.
Let’s dive into everything that happened in 2023:
The Securities and Exchange Commission adopted new disclosure rules in July requiring public companies to make expanded disclosures about cybersecurity issues. Those rules go into effect at the start of 2024 — and here’s hoping CISOs have already been working on how to comply with those new obligations, because they could be quite challenging.
First are new disclosures about cybersecurity risk that companies will need to make every year in their annual reports. For example, companies will need to describe their process for identifying and addressing cybersecurity risk, and the roles that the board and management play in addressing those risks (including whether the company has a CISO, and what that person does).
Companies will also need to disclose “material cybersecurity incidents” within four business days of determining that an attack is material. You’ll need to discuss “the material aspects of the nature, scope, and timing of the incident,” as the rule says, plus “the material impact or reasonably likely material impact” on your company’s financial condition and operations.
If that standard sounds rather subjective, that’s because it is. The challenge for CISOs in 2024 will be to work with other parts of your enterprise (legal, internal audit, privacy, investor relations, finance, and others) to develop a cyber materiality process that can make those judgments in a rigorous, defensible manner. Then you’ll need to ensure that you can disclose all relevant details within that four-business-day window.
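To make the timing concrete, here is a minimal sketch (purely illustrative, not legal guidance) of how a disclosure team might estimate that filing deadline. The SEC measures the window in business days, so the sketch counts forward four business days from the materiality determination, skipping weekends; it deliberately ignores federal holidays, which a real compliance calendar would have to account for.

```python
from datetime import date, timedelta

def disclosure_deadline(determination: date, business_days: int = 4) -> date:
    """Estimate the disclosure filing deadline: count forward the given
    number of business days, skipping weekends. Illustrative only --
    it ignores federal holidays and is not legal guidance."""
    d = determination
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday (0) through Friday (4)
            remaining -= 1
    return d

# A materiality determination on Thursday, Jan. 4, 2024 puts the
# deadline at Wednesday, Jan. 10, 2024 (the weekend doesn't count).
print(disclosure_deadline(date(2024, 1, 4)))  # 2024-01-10
```

The point of automating even this small step is that the clock starts at the materiality determination, not at discovery of the incident — which is exactly why the materiality process itself needs to be fast and well documented.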
This will force the CISO to engage with those other parts of the enterprise in new ways. Ideally you already have those conversations, simply so the company can manage its cybersecurity issues. Now the company will have the additional task of disclosing those issues to investors, to comply with the new SEC rules.
And how might the SEC respond if your company makes poor disclosures about its cybersecurity risks?
Funny you should ask …
In October, the SEC filed a lawsuit against IT services firm SolarWinds and its CISO, Timothy Brown, for making misleading disclosures about the company’s cybersecurity risks in the 2010s, before SolarWinds suffered a disastrous cyber attack in 2020.
The allegations (and to be clear, they are only allegations; SolarWinds has called the lawsuit “misguided and improper” and vowed to fight it in court) are as follows. Throughout the 2010s, SolarWinds published a “Security Statement” proclaiming that the company employed a secure software development lifecycle, strong password policies, and other cybersecurity practices. Internally, however, employees griped for years that the company was nowhere near achieving those practices — and Brown knew about those shortcomings, the SEC says, since he prepared briefings for senior leadership and the board admitting as much.
The crux of the SEC’s lawsuit is that SolarWinds and Brown knew of serious cybersecurity risks within the company and then allowed those risks to persist, even while the company’s public disclosures assured investors that SolarWinds was operating at a high level of security — assurances proven wrong when SolarWinds suffered its cyber attack in 2020.
That, the SEC says, adds up to a violation of federal securities law.
To a certain extent this lawsuit doesn’t break any new ground; the SEC has been suing CFOs for poor accounting controls and disclosures for years. Now the SEC is trying to hold a CISO (Brown) accountable for poor cybersecurity controls and disclosures.
The question for CISOs at other public companies is whether your own internal procedures are strong enough to shield you from personal liability in the event of a cyber attack. For example, if low-level employees are griping about ineffective security controls at your company, is that discontent captured and relayed up to you, so that you can do something about it? When you share news of cybersecurity shortcomings with senior management and the board, is there a procedure to assess whether your public disclosures should change?
This new front in executive officer liability, plus the new rules for expanded disclosure of cyber risk, could turn into quite the headache for CISOs in 2024 and beyond.
U.S. public companies aren’t the only ones facing new cybersecurity expectations. The state of New York overhauled its cybersecurity regulation for financial services firms (formally known as 23 NYCRR Part 500), introducing tighter requirements for access control, risk assessment, employee training, and other tasks.
In theory this regulation, enforced by the New York Department of Financial Services, only applies to businesses that sell financial products to residents of New York. In practice, that definition can cast a surprisingly wide net: everyone from foreign banks with a branch in New York City, to non-financial firms with subsidiaries that sell financial products in the state. (In 2022, for example, Florida-based Carnival Cruise Lines ran afoul of the rule because it sells travel insurance to New Yorkers.)
The most onerous new requirements will fall on large companies, defined as more than $20 million in revenue from operations in New York or more than $1 billion in total revenue for each of the last two years. Those companies will need to undertake independent annual audits of their cybersecurity program, introduce stronger access monitoring (such as an automated system to block the use of easy-to-guess passwords), and implement centralized logging and event alerting.
All companies subject to the rule, regardless of size, will also need to conduct more regular reviews of their risk assessments and application security, roll out multi-factor authentication much more widely, and provide annual security training to all employees (including dedicated training on social engineering attacks).
In other words, companies covered by Part 500 will need to do a lot. Some of it may overlap with other compliance obligations; some may be wholly new. A tool to guide you through the many new requirements will go a long way toward helping you achieve compliance. (Different parts of the rule take effect on different dates over the next two years.)
In 2024 and beyond, we should also look to see whether other states and federal regulators might adopt more stringent cybersecurity regulations along New York’s lines. The Federal Trade Commission, for example, is a prime candidate to do just that.
California made its own contribution to CISOs’ compliance challenges back at the start of the year: the California Privacy Rights Act (CPRA) went into effect on Jan. 1.
The CPRA is essentially Version 2.0 of the California Consumer Privacy Act (CCPA), which went into effect in 2020. The CPRA gives California residents new rights and protections over their personal data, and imposes new obligations on companies that operate in California. Most notably, the CPRA reaches employee data: a business with even one employee in California must provide CPRA protections to that employee, though the law covers only its California-based employees; all others aren’t covered.
For example, all companies covered by the CPRA need to have an employee privacy policy, publicly posted and readily accessible to employees. That policy should outline what data you collect about employees, and whether that data is ever sold to a third party. Employees also have the right to opt out of data collection or the sale of their data.
One significant question about the CPRA is enforcement, which was originally supposed to begin on July 1, 2023; a California court postponed that deadline to March 29, 2024. California regulators have nonetheless been laying the groundwork, so one can reasonably expect enforcement actions as soon as regulators are able to bring them. Compliance officers should already be well underway in strengthening compliance with both the CPRA and its predecessor, the CCPA.
And as with New York’s expanded cybersecurity rule, compliance professionals should watch whether other states and federal government agencies incorporate any of the CPRA’s provisions into whatever new statutes or regulations they might be contemplating for 2024.
The biggest technology news one year ago was the arrival of generative AI: ChatGPT first, followed almost immediately by Bard, Claude, and a battalion of other apps. Regulators around the world spent much of 2023 trying to understand how they should respond to the myriad cybersecurity, privacy, economic, and ethical risks that AI raises — and at the end of the year, began to take action.
First was the Biden Administration’s executive order on artificial intelligence, issued on Oct. 30. The executive order doesn’t impose any regulations over artificial intelligence directly; rather, it instructs other federal agencies to develop regulations and guidance for AI, following a few basic principles laid down in the executive order and previous Biden Administration pronouncements.
For example, the order instructs the National Institute of Standards and Technology (NIST) to develop standards for red-team testing of AI systems. It also calls for “the most powerful AI systems” to undergo red-team testing and share those results with the government. (Who defines what the “most powerful” AI systems actually are? The Commerce Department, along with several other departments, in regulations to be written at a future date.)
Right now, companies developing or using AI are in a holding pattern as various departments implement the executive order. That said, compliance officers already know those forthcoming regulations will follow the “Blueprint for an AI Bill of Rights” released in 2022. That blueprint cited reliability, privacy, anti-discrimination, and public notice as pillars of good AI governance. While waiting for more specific regulations, compliance officers can always start with those pillars to assess whether your AI project is moving in the desired direction.
The European Union then leap-frogged ahead of the United States with its Artificial Intelligence Act, adopted on Dec. 8. This is the first true regulation of AI, in that it addresses specific use cases and spells out when AI is or isn’t permissible.
Again, many details need to be developed over the next two years — including questions about enforcement, which aren’t fully resolved either. But already compliance officers have some sense of which applications will be subject to higher scrutiny (AI used in medical devices, for example) and which will be flat-out illegal (AI to generate “social scores” governments use to track citizens).
Quite simply, watch this space. There’s much more to come for AI in 2024 and years beyond.
The Federal Trade Commission added to CISOs’ regulatory challenges in October by adopting a revised version of its Safeguards Rule. This is a rule that applies to non-bank financial institutions, such as insurance companies, payday lenders, investment or asset management firms, and even car dealers who offer customer financing. While the Safeguards Rule has been revised before in recent years, this latest update lowers the threshold for when you must report a privacy breach to the FTC.
For example, the FTC’s original proposal set the reporting threshold at breaches affecting 1,000 customers; the final rule lowered it to 500. Notably, reports need to be submitted within 30 days of discovering the breach. Those submissions will typically end up in a public database of breaches that the FTC maintains, which could in turn lead to more scrutiny from state attorneys general, civil litigants, or other parties.
Aside from the reporting obligations, the rule also requires covered companies to encrypt personal customer data both in transit and at rest. (If your encryption key itself is accessed by unauthorized parties, the customer data is automatically considered unencrypted.)
The rule goes into effect in April 2024, so you have a few more months to streamline your compliance and reporting processes to meet the new rule’s demands.
Cloud-service providers bidding on federal government contracts received new marching orders in May, when FedRAMP — the program that acts as a cybersecurity seal of approval for “CSPs” — issued new baseline standards that providers will need to meet to be eligible to bid on those contracts.
FedRAMP operates as a clearinghouse for government agencies that want to use CSPs for their operations. Rather than every agency reviewing every CSP for every contract, the providers can meet FedRAMP cybersecurity standards and then be designated as an approved vendor for government contracts. That keeps things simple for the agency needing a CSP, and smooths the path for CSPs to offer their services to would-be government customers more easily.
This year’s revamp of baseline standards aligned FedRAMP’s requirements with those spelled out in the NIST SP 800-53 cybersecurity standard, which was updated to Revision 5 in 2020; FedRAMP’s new baseline requirements now match that latest revision.
FedRAMP compliance is a complicated thing, beyond the scope of our 2023 summary today. Suffice it to say that cloud service providers will need to examine the new baseline standards promptly, then upgrade their cybersecurity posture to match if they want to remain in the FedRAMP program.
Even companies that aren’t bidding on government contracts have fresh cybersecurity homework to do: the SOC 2 standard to audit a vendor’s cybersecurity standards received an upgrade in the fall, and a potentially large number of companies will need to accommodate those changes.
SOC 2 is a standard developed by the American Institute of Certified Public Accountants (AICPA) to help larger companies assess the cybersecurity of the vendors and service providers they use: law firms, technology providers, outsourced payroll functions, and many others. SOC 2 is based on five “trust services criteria” (security, availability, processing integrity, confidentiality, and privacy) that are used to define the scope of the audit.
SOC 2’s latest revisions provide more clarity on how to perform risk assessments for each of the above criteria and on other disclosure and attestation issues that might arise in an audit. The revisions also further explained the differences between confidentiality and privacy, offered guidance on controls that might be able to fulfill multiple criteria at once, and addressed numerous other issues.
In short, any company that either plans to undergo a SOC 2 audit or wants to ask its vendors to do so will need to incorporate these new SOC expectations into its assurance process. It’s not a total reinvention of the wheel, but you’ll need to proceed carefully to avoid a bumpy ride.
What should compliance officers expect in 2024? Perhaps the biggest event to come will simply be implementation of the big events we’ve seen in 2023. The Biden Administration and Europe will move forward with regulation of AI. New York will move forward with enforcement of its newly updated cybersecurity rule, and California will do the same with its newly updated privacy law. The SEC’s lawsuit against SolarWinds and its CISO will proceed, as will enforcement actions from the FTC.
All of that progress will inform compliance officers’ understanding of how much compliance and enforcement risk your organization is likely to face. From there, you can plan investments or improvements to your compliance and cybersecurity programs accordingly.
That said, we already know that strengthening the fundamentals of your compliance program — more automated workflows, better testing and analytics, more accurate risk assessment, more precise reporting, better audit trails and documentation — will never go out of style no matter what the new year brings. Expect your compliance programs to be challenged to the maximum as usual, and be ready when the surprises of 2024 emerge.
*** This is a Security Bloggers Network syndicated blog from Hyperproof authored by Matt Kelly. Read the original post at: https://hyperproof.io/resource/regulatory-compliance-changes-2023/