Jan. 28, 2026
From Privacy Impact Assessments to Algorithmic Accountability: 2026’s Top Privacy & AI Compliance Priorities
While it’s impossible to agree on whether 2026 will be the Year of Youth Privacy, the Year of the Data Broker, or (another) Year of the Cookie, one thing we can agree on is that it promises not to be boring! Wait…did we forget to mention AI regulation? The privacy, cybersecurity, and AI regulatory space saw so many developments in 2025 that it has become even more challenging to keep up with compliance to-dos and trends to track. In honor of this year’s Data Privacy Day, Nelson Mullins offers considerations for high-priority items to focus on in the coming year. We promise to keep the “cutting down on cookies” New Year’s resolution puns to a minimum.
Overview of Trends
Scrutiny of Sensitive Data: We continue to see a convergence in the regulatory and enforcement space around the use of sensitive data, which feeds into more traditional privacy issues while also spotlighting rapidly emerging issues related to AI safety. A few key tracks on which these tend to run are the use of sensitive data in data brokerage, a focus on youth data (children and teens), the use of location data (by the public and private sectors), and continued attention to the use of health and health-related data.
Continued Focus on Cookies and Privacy Disclosures: Regulators have limited resources, so it is highly likely that we will continue to see enforcement around low-hanging fruit such as consent management and cookie/tracking compliance, external privacy notices, and disclosures related to specific use cases. These compliance items are easy for regulators to spot and enforce, and they invite attention from potential litigants as well.
Increased Regulatory Enforcement: However, do not assume that regulators are too resource-limited to act; we saw significant regulatory investigation and enforcement activity in 2025, and we expect that trend to expand this year. States such as California and Texas have well-funded privacy and cyber divisions and have proved their willingness to take on major investigations and enforcement sweeps in high-priority areas such as data brokers, the use of sensitive data, and children’s data, including investigations that alleged multiple violations and pulled in various parties. California and Texas have been especially active, but many other states are enforcing as well, and we anticipate further action from state enforcement consortiums and international cooperative initiatives. With more state privacy and AI laws in place in 2026, we expect continued enforcement and increased cooperation among state regulators; at least one state jumped into the enforcement fray as soon as its privacy law went into effect. Jurisdictions without comprehensive privacy laws are also proving to be a serious force, as states such as New York passed a flurry of legislation related to AI, children’s data, and more. See here for our previous alert on New York AI laws.
Evolving International Laws and Frameworks: On the global front, we observed serious efforts to grapple with regulating and creating frameworks for AI, along with ongoing tension over the EU’s data-related laws: the landmark EU AI Act is going into effect while controversies over the role of AI “innovation,” the EU’s digital strategy, and the nature of potential reforms continue to develop. It will also be important to monitor whether and how European regulators decide to update GDPR, which has become the foundation for many global privacy programs and global data protection laws.
Increased Risk to the Security of Personal Information: Existing comprehensive privacy laws treat most data as personal information, so companies must understand the risks involved in particular use cases, data types, and sharing arrangements in order to prioritize more resource-intensive compliance efforts. Personal information must be protected, but it is not all created equal: it is not targeted equally by threat actors, nor implicated equally by rapidly developing technology like AI content creation or companion chatbots. Risks from threat actors are growing as the use of AI in cyberattacks becomes more frequent.
Cybersecurity Governance and Board Accountability: A defining trend for 2026 is the heightened expectation that boards of directors take an active, well-documented role in overseeing cybersecurity risk, not simply receiving periodic briefings but demonstrating continuous, informed engagement with cyber governance. The SEC’s 2023 disclosure rules now require public companies to report material cybersecurity incidents within four business days and to provide detailed descriptions of board-level oversight in annual filings, significantly increasing litigation and enforcement exposure for both directors and CISOs. The SEC’s high-profile enforcement action against SolarWinds, its first fraud charge against a sitting CISO, illustrates that regulators will scrutinize not only technical controls but also the accuracy of statements made in SEC filings and on corporate websites regarding cyber readiness. In practice, this means boards must ensure clear lines of communication with security leadership, validate that public-facing assurances align with the company’s actual cyber posture, and confirm that the CISO is directly involved in disclosure processes. Boards should also expect regulators and plaintiffs’ attorneys to probe whether oversight structures are robust, whether known vulnerabilities were escalated appropriately, and whether board minutes reflect meaningful engagement with cyber risk. The trend is unmistakable: cybersecurity governance failures are no longer viewed as merely operational issues; they are potential securities law violations that can reach the boardroom and the CISO personally, elevating the need for coordinated oversight, rigorous internal controls, and proactive incident response readiness.
Data Brokers
State regulators, legislatures, and the FTC have focused on data brokers in recent years, but the landscape in 2026 is more expansive than ever, and regulators have been vocal about their intention to target data broker violations. California continues to pursue a major enforcement effort with its data broker strike force, building on an investigative sweep launched in 2024, and the Texas AG is also pursuing significant investigations and enforcement actions related to data brokers, particularly the sale of sensitive data.
In addition, the scope of “data broker” itself has changed. Both Texas and California expanded their definitions of which entities are considered data brokers, potentially pulling in many companies that were not previously impacted. In California, this includes businesses that maintain information about consumers who have not interacted with them in three years, as well as businesses that sell personal information they did not collect directly from a consumer (even if they have a direct relationship with that consumer).
At the federal level, there is uncertainty around FTC priorities, but companies may be impacted by the DOJ’s bulk transfers rule, which includes an expansive concept of data brokerage.
Lastly, California launched its “DROP” system in January 2026, which requires data brokers to register by January 31; consumers have been able to submit deletion requests since January 1, 2026. See here for our previous alert on California’s DROP system. Several other states currently have data broker registration requirements, more are considering data broker regulations, and global regulators have expressed an interest in enforcing against data brokers as well. Considerations include:
- Determine whether the expanded scope of “data broker” applies to the business. This may mean reviewing CRMs, retention policies, and specific data flows; an illustrative sketch of one such check follows this list.
- For existing data brokers, pay close attention to registration requirements and other obligations, such as deletion requirements under California’s DROP rules. While certain enforcement has targeted high-risk use of sensitive data, other enforcement stems from simply not registering or failing to meet administrative requirements.
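To make the CRM review concrete, here is a minimal TypeScript sketch of California’s three-year “no interaction” test. It is illustrative only: the Contact shape, field names, and the simple day-count math are our assumptions, not any regulator’s formula, and scoping conclusions should be confirmed with counsel.

```typescript
// Hypothetical sketch: flag CRM contacts with no interaction in roughly
// three years, one input into a California data broker scoping analysis.
// The Contact shape and field names are illustrative, not a real CRM API.
interface Contact {
  id: string;
  lastInteraction: Date; // most recent direct interaction with the consumer
}

// Approximate three years in milliseconds; how "three years" is measured
// legally should be confirmed with counsel.
const THREE_YEARS_MS = 3 * 365 * 24 * 60 * 60 * 1000;

function flagStaleContacts(contacts: Contact[], now: Date = new Date()): Contact[] {
  // Contacts with no interaction in ~3 years may bring the business
  // within California's expanded "data broker" definition.
  return contacts.filter(
    (c) => now.getTime() - c.lastInteraction.getTime() > THREE_YEARS_MS
  );
}
```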
Sensitive Data
The focus on the use of sensitive data continues to grow, and there are too many trends to cover adequately in one alert, but the overall takeaway is that companies MUST know what sensitive data they collect, use, and share, and how to mitigate the related risks, which now stem from multiple areas. From a privacy and consumer protection standpoint, this data presents a significant risk of harm from misuse. From an optics standpoint, it makes sense for regulators to target high-risk uses of data from vulnerable populations, such as children, and such enforcement reflects an area of (relative) consensus in a highly politicized regulatory environment. Sensitive data currently under the regulatory microscope includes the following:
- Youth data (children and teens)
- Location data
- Health and health-related data, some of which is particularly high risk
- Data that was not previously considered higher risk, such as immigration status or citizenship
Highlights and considerations:
- Inventory and understand the sensitive data collected, used and shared by the business. Compliance obligations are directly related to the business’ practices regarding sensitive data.
- Identify specific restrictions that may apply to certain data or certain jurisdictions. States appear to be moving toward restricting certain uses and types of sensitive data. Many jurisdictions already have heightened restrictions governing the use of biometric data, and many states regulate the collection and use of youth data in the privacy, social media, and AI contexts. Others are focusing on strengthening protections related to geolocation, reproductive health, or transgender-related data. States such as Maryland have established strict rules on the secondary use of data and the use of sensitive data, along with a complete prohibition on the sale of sensitive data, with no carveout for consent. Still other states, such as Massachusetts, are considering similar legislation.
- Determine where specific consent is needed. It may not be sufficient to describe the use and reuse of sensitive data in a general privacy policy without taking further compliance steps.
- Look at the overall data minimization strategy. An ongoing issue, but one that continues to create increased vulnerability, is the overcollection and retention of data, including sensitive data. As threat vectors for sensitive data expand, consider mitigation strategies, including whether such data is needed in the first place and whether existing data may be purged (a minimal purge sketch follows this list). Setting up a compliant consent management system is time-consuming and resource-intensive; if such data is not needed, eliminating it will significantly reduce the compliance burden.
- Consider how AI capabilities impact the sensitive data risk profile. AI models are rapidly evolving and revealing unanticipated or novel implications for sensitive data, particularly in making inferences, interacting with children and teens, interacting with individuals about health matters, and affecting mental health, among many other areas.
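As a purely illustrative aid for the purge analysis above, the sketch below flags records held past a per-category retention period. The categories, periods, and field names are hypothetical; actual retention schedules are a legal and business decision, not a coding one.

```typescript
// Hypothetical sketch of a retention-based purge check: records held past
// a per-category retention period are flagged for deletion review. The
// categories, periods, and field names are illustrative only.
interface DataRecord {
  id: string;
  category: "health" | "location" | "youth" | "other";
  collectedAt: Date;
}

// Illustrative retention periods in days, per data category.
const RETENTION_DAYS: Record<DataRecord["category"], number> = {
  health: 365,
  location: 90,
  youth: 180,
  other: 730,
};

const MS_PER_DAY = 24 * 60 * 60 * 1000;

function findExpiredRecords(records: DataRecord[], now: Date = new Date()): DataRecord[] {
  return records.filter((r) => {
    const ageDays = (now.getTime() - r.collectedAt.getTime()) / MS_PER_DAY;
    return ageDays > RETENTION_DAYS[r.category]; // flag for deletion review
  });
}
```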
Children and Teen Data
Critics have long decried the deficiencies of COPPA, the federal children’s privacy law, and recent efforts to address those gaps have not kept up with developments in technology. The broad safe harbors for COPPA consent in many states’ comprehensive privacy laws have not addressed those gaps either. With the continuing challenge of attempting to regulate social media and the rapid expansion of generative AI models and their impact on children and teens, there has been a flood of state-level regulation and enforcement related to youth data. The focus is no longer just on children under 13 (as defined by COPPA) but also on teens, in some cases up to age 18. At the U.S. federal level, Congress continues to try to address child privacy and online safety issues but has struggled to pass legislation, though we anticipate this will remain a focus in 2026. The FTC continues to enforce on child privacy issues, though largely within the confines of COPPA and the amended COPPA Rule (with an April compliance deadline for the amendments), and has indicated that it will target issues related to chatbots and AI.
At the state level, 2025 saw significant legislative and regulatory developments, and we fully expect this trend to continue in 2026. While much of this legislation has been or is being litigated, and some laws have been enjoined, a number of state developments are in effect. One key trend is for states to amend and enhance existing privacy laws to create more robust protections for youth data. Another is standalone legislation, in some cases targeting minors’ data, creating age-appropriate design codes, imposing social media restrictions, or establishing app store requirements. At the heart of many of these laws, including many with app store requirements, is the issue of age assurance or age verification.
At the international level, many regulators have announced their intention to focus on youth data, with certain jurisdictions taking stringent measures (such as Australia, which implemented a broad ban on social media for children under 16 years of age). Regulators, including CalPrivacy, also appear to be coordinating on youth privacy issues, particularly as AI has drastically exacerbated issues related to CSAM (child sexual abuse material) and NCII (non-consensual intimate imagery). Click here to see our recent article on youth privacy enforcement. Again, there are many nuances to the youth privacy landscape, but there are a few considerations for tackling this critical area of law:
- Inventory and understand the sensitive data collected, used and shared by the business and identify specific restrictions that may apply to certain data or in certain jurisdictions. This recommendation may look familiar as this is a reiteration of our recommendations above for sensitive data. In many cases, children’s and teens’ data is considered a subset of sensitive data under comprehensive privacy laws, and should be included in data inventories and consent management processes. However, companies should also understand which laws have heightened/amended requirements for youth data, and which jurisdictions have additional laws that impact the business. Certain states require specific impact assessments for children’s data, some restrict sale, targeted advertising and/or profiling, and several (including Connecticut and Oregon) will fully prohibit the sale of minors’ data.
- Determine when and where age assurance might be required and assess available technology and processes for compliance. Age verification is a moving target, as is the technology, but companies must start this process if they are or will be required to implement it (an illustrative sketch of diverging age thresholds follows this list). Assess whether the business is in scope for app store or app developer requirements.
- Don’t assume COPPA compliance will “cut it.” Addressing the vast and complex patchwork of children’s and teen privacy laws means that companies can no longer rely largely on COPPA’s “knowledge” standards, age limits, consent procedures, or age verification. Companies must also pay attention to global youth privacy requirements, which are not necessarily aligned with those in the U.S.
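To illustrate why a single COPPA-style check is insufficient, the sketch below encodes a few of the divergent age thresholds mentioned in this alert (under 13 under COPPA, under 16 for Australia’s social media ban, up to 18 under some state laws). The jurisdiction keys and function are hypothetical, and a self-declared age check of this kind is an age gate, not the more robust age assurance many newer laws require; verify all thresholds against current law.

```typescript
// Hypothetical sketch: youth-data age thresholds diverge by law and
// jurisdiction, so a single COPPA-style under-13 check is not enough.
// Keys and values reflect examples discussed in this alert and must be
// verified against current law before any real use.
type Jurisdiction = "US_COPPA" | "AU_SOCIAL_MEDIA" | "STATE_TEEN_LAWS";

const MINOR_THRESHOLDS: Record<Jurisdiction, number> = {
  US_COPPA: 13,        // COPPA: children under 13
  AU_SOCIAL_MEDIA: 16, // Australia: social media ban for under-16s
  STATE_TEEN_LAWS: 18, // some state laws protect teens up to age 18
};

// Note: a self-declared age check like this is an age *gate*, not age
// *assurance*; many newer laws require more robust verification methods.
function requiresYouthProtections(age: number, jurisdiction: Jurisdiction): boolean {
  return age < MINOR_THRESHOLDS[jurisdiction];
}
```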
Consent Management
Consent management will continue to be a major focus in 2026. This includes implementing the full suite of required opt-ins and opt-outs, cookie/online tracking banners and privacy preferences, and dealing with ongoing issues related to the sale of personal data and targeted advertising. Missing and malfunctioning cookie banners and opt-out links were a significant target for regulators in 2025, a trend that has continued for a number of years and that stems from longstanding requirements under GDPR, ePrivacy, and other global laws, in addition to newer requirements under the CCPA and other state-level laws. The following recommendations are considerations for reducing your company’s exposure to regulatory enforcement or litigation relating to consent management issues:
- Identify and understand the suite of cookies and online tracking technology deployed on websites and apps and develop a consent management solution. Trackers will need to be categorized and an effective consent management solution implemented, often including a cookie banner, links that are specifically required by certain laws (such as the CCPA), and, in many cases, support for the Global Privacy Control (GPC). Technology must be vetted, configured carefully, and tested routinely to make sure it is working as intended; a minimal GPC-aware sketch follows this list.
- Identify other avenues by which personal information is collected that may require opt-in or opt-out consent. This may include customer service, offline interactions, products, services, apps, and other interactions where personal information is collected, particularly sensitive data. Depending on the industry or data type, there may be specific laws requiring consent, such as HIPAA, state health privacy laws, and many others.
- In addition to privacy-related consent management, don’t forget about requirements under TCPA, CAN-SPAM, text/SMS, and auto-renewal laws, and their global counterparts.
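By way of illustration, the following sketch shows one way a site might honor the Global Privacy Control signal before firing non-essential trackers. The GPC property (navigator.globalPrivacyControl) and the Sec-GPC request header come from the published GPC specification; the treatAsOptOut and loadAnalytics functions are hypothetical stand-ins for a real consent management platform.

```typescript
// Minimal sketch: honor the Global Privacy Control (GPC) signal before
// loading non-essential trackers. navigator.globalPrivacyControl (and the
// Sec-GPC request header server-side) come from the GPC specification;
// treatAsOptOut and loadAnalytics are hypothetical stand-ins for a real
// consent management platform.
function gpcEnabled(): boolean {
  // GPC is not yet in TypeScript's standard DOM typings, hence the cast.
  return (navigator as unknown as { globalPrivacyControl?: boolean })
    .globalPrivacyControl === true;
}

function initTracking(hasOptInConsent: boolean): void {
  if (gpcEnabled()) {
    // Under the CCPA and similar laws, GPC must be honored as a valid
    // opt-out of sale/sharing, so suppress those trackers.
    treatAsOptOut();
    return;
  }
  if (hasOptInConsent) {
    loadAnalytics(); // fire categorized trackers only after consent where required
  }
}

// Hypothetical stubs representing a site's own consent management logic.
function treatAsOptOut(): void { /* disable sale/share-related trackers */ }
function loadAnalytics(): void { /* inject analytics/advertising tags */ }
```

Whatever tooling is used, the design point is the ordering: the GPC signal and consent state must be evaluated before any categorized tracker fires, which is precisely where routine testing tends to catch misconfigurations.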
Considerations for How to Focus AI Compliance
We did not conduct an official survey, but suffice it to say that AI dominated the news and conference panels and sparked major activity in the regulatory arena in 2025. As industry and government struggle to reconcile the concepts of “AI safety” and “AI innovation,” and everyone struggles to understand legal and practical obligations with respect to AI, there are a few considerations to bear in mind:
- Assume that the business will be responsible for its use of AI. If basic guardrails are not in place, implement them and iterate as usage and needs evolve. While AI innovation and implementation have been major drivers throughout industry, there are many indications of a related trend toward holding businesses, both deployers and developers, accountable for AI systems and their output. This issue promises to become even more salient this year as AI agents appear poised to assist with more tasks and handle more sensitive data than ever, and as companies prepare for additional regulations governing automated decisionmaking technologies.
- Develop or enhance a robust and effective process for assessing AI models, use cases, and initiatives. This will likely need to be integrated with existing privacy and cybersecurity assessment procedures, but do not assume that current templates will address the full range of AI impact.
- Identify exposure to applicable laws, regulations, and/or industry frameworks. This may vary significantly by company or sector, but there is a growing consensus around focusing on high-risk processing. Depending on the use case, companies may need to implement disclosures where customers interact with AI, or they may need a full suite of compliance obligations. It will also be important to monitor how the federal initiative to restrict enforcement of state-level AI laws could impact a company’s compliance program.
Enforcement and Litigation Risks
In addition to enforcement related to sensitive data, use of generative AI, and several of the areas highlighted above, companies should pay attention to trends in litigation and understand how to mitigate their risks.
- Vendor Management: We anticipate that regulators will continue to scrutinize third-party and vendor relationships, particularly in relation to security incidents (as has been the trend), compliance management (including privacy and cyber vendors), and AI vendors. There is also growing concern about vendor agreements, which often seek to shift burdens in a manner that places significant risk on companies.
- CIPA, Wiretap, and Related Litigation: Directly related to the exposure from cookies, online tracking, and missing or improperly configured consent management solutions, we expect to see further activity on the privacy litigation front in 2026. In 2025, numerous complaints and lawsuits were filed by plaintiffs’ firms seeking to capitalize on existing invasion of privacy and wiretap laws, notably CIPA (the California Invasion of Privacy Act). California did not amend its laws to address ambiguity related to online tracking and is not expected to revisit the issue until later this year. Proper consent management solutions are one aspect of mitigating exposure to this type of claim, though the approach and strategy will depend on the company and its individual risk profile.
- Data Security: Although cybersecurity has been a persistent and serious risk for all companies for many years, 2026 is expected to bring growing sophistication in cybersecurity attacks as threat actors continue to weaponize AI. We expect to see agentic AI-powered attacks, along with incidents caused by agentic AI systems behaving in unintended ways.
In many ways, 2026 is a continuation of the trends and risks we saw in 2025, but with greater uncertainty given the conflicting approaches at the state and federal levels when it comes to regulating AI. Given the political consensus around children’s data, at a minimum, it is clear that this focus will continue in 2026, and we may even see federal legislation on this issue. As in all years, it is best to focus on fundamentals, such as understanding data flows and making sure your public-facing disclosures are in order, to best prepare for the bumpy road that lies ahead in 2026.
See our Privacy & Data Security Alerts page for more detail on issues like state and federal enforcement, updates to the CCPA regulations, cybersecurity requirements for defense contractors, and more.
