We help the world’s largest and most innovative businesses manage the privacy, fairness, security, and transparency of their AI and data — including generative AI systems. Our clients rely on us for concrete, highly technical advice on managing the risks of AI models that affect millions of people around the world. While the majority of our work as a law firm is privileged and confidential, we are frequently asked for representative examples of our work.

Fortune 100 Client

A major technology company was seeking to adopt a foundation GenAI model, fine-tune its own GenAI models, and embed GenAI throughout its products. This “GenAI Transformation” initiative had board-level involvement, and the company retained our firm to build a risk management program for its use of GenAI, including red teaming its most prominent models. We conducted a liability assessment, performed technical red teaming on each major GenAI system, and trained the company’s internal teams to conduct future red teaming without our direct technical involvement. As a result, our client was able to adopt and deploy GenAI widely while ensuring that each model would stand up to legal scrutiny.

Fortune 100 Client

We conducted an independent third-party audit of an AI system used to help select top job applicants for in-person interviews, assessing compliance with nondiscrimination requirements in the employment context, including the EEOC’s Uniform Guidelines on Employee Selection Procedures and quantitative standards for algorithmic bias testing established in existing case law. Our audit identified legal and technical liabilities, and our client was able to use our analysis to retrain its AI system with additional, more representative data and improve its bias testing results.

Technology Client

We reviewed the de-identification practices a major technology company used for its analytics environment and provided detailed legal and technical recommendations for meeting the relevant legal standards for anonymization. Once our recommendations were implemented, we provided a certification of compliance—including statistical analysis—attesting to the reasonably low likelihood that the data could be linked to individuals, in line with relevant legal standards for de-identification.

Fortune 500 Client

We were asked to assess a vendor’s facial recognition and biometrics system, used by a Fortune 500 client to grant physical access to its facilities across the United States, for bias-related risks. Our assessment included an analysis of the legal liabilities generated by the AI system as well as guidance on technical remediation measures. Based on our recommendations, our client was able to work with the vendor to remediate the system and reduce its risks.


Education Technology
Enterprise Software and SaaS
Financial Services
Healthcare and Life Sciences
Information Security
Management Consulting
Public Sector
Technology and Cloud Computing
Venture Capital