All Insights


Where Legal Must Lead: Part VII – Managing Crisis and Response in AI Incidents
No AI system operates indefinitely without challenge. Incidents arise in multiple forms: a data exposure event involving model inputs; a public allegation of biased decision-making; a regulator’s inquiry into automated processes; a client complaint regarding AI-assisted work product; or a civil claim asserting discriminatory or unlawful outcomes. When such events occur, the legal department becomes the central coordinating authority.
Cheryl Johnson
Sep 20, 2025 · 3 min read


Where Legal Must Lead: Part VI - Monitoring and Compliance of AI Systems
AI systems evolve over time. Vendors update underlying models. Enterprise data changes. Use cases expand. Regulatory guidance matures. An AI system that operated within acceptable parameters at launch may drift into misalignment months later. Ongoing oversight therefore requires structured monitoring, and the legal department must remain actively engaged in that process.
Cheryl Johnson
Aug 20, 2025 · 3 min read


Where Legal Must Lead: Part V - Managing Third-Party Risk in AI Procurement
AI systems introduce risks that extend beyond traditional software evaluation. They affect decision-making processes, legal rights, regulatory exposure, and reputational stability. Legal must therefore engage before contractual signature — not after deployment.
Responsible AI procurement operates across three components: vendor assessment, product assessment, and contractual structuring. Legal’s role differs in each, but its presence is essential in all three.
Cheryl Johnson
Jul 10, 2025 · 3 min read


Where Legal Must Lead: Part IV – Managing Intellectual Property Risk in AI Systems
Intellectual property risk in AI environments presents in multiple dimensions. Questions of authorship, ownership, infringement, licensing, trade secret protection, and patent eligibility intersect in ways that remain unsettled in many jurisdictions. The legal department must interpret evolving jurisprudence and regulatory guidance and translate that uncertainty into structured enterprise safeguards.
Cheryl Johnson
May 28, 2025 · 5 min read


Where Legal Must Lead: Part III - Safeguarding Data Privacy and Governance in AI Systems
AI systems do not merely store or transmit information. They ingest, transform, infer, correlate, and sometimes generate new data derived from existing datasets. As a result, AI adoption magnifies privacy and confidentiality risks in ways that differ from traditional software deployments. The legal department’s role in this context is to interpret how evolving privacy obligations apply when data is processed through AI systems and to ensure that those obligations are embedded…
Cheryl Johnson
Apr 25, 2025 · 4 min read


Where Legal Must Lead: Part II - Aligning AI Risk with Legal Accountability
Regulatory interpretation alone does not operationalize governance. The legal department’s role at this stage is both architectural and substantive. Here we examine the legal department’s responsibility to structure enforceable policies and ethical guardrails that govern AI’s use. Legal must determine what governance instruments are required, how they integrate with existing compliance frameworks, and how they convert regulatory and ethical commitments into enforceable enterprise…
Cheryl Johnson
Mar 25, 2025 · 3 min read


Where Legal Must Lead: Part I - Interpreting and Applying Emerging AI Regulation
The legal team is responsible for analyzing complex and evolving laws, regulations, frameworks, directives, and regulatory guidance and translating them into actionable direction for the enterprise. That responsibility extends beyond monitoring developments. It requires a disciplined assessment of how particular AI use cases intersect with statutory classifications, disclosure obligations, documentation requirements, and mandated impact assessments.
Cheryl Johnson
Feb 20, 2025 · 3 min read


Where Legal Must Lead: The Seven Pressure Points of AI Governance
The legal department has long served as the institutional interpreter of risk. Its mandate has never been limited to answering discrete legal questions; rather, it has involved advising leadership on how regulatory, contractual, and reputational exposures shape strategic decisions. In doing so, Legal defines the boundaries within which the organization may operate responsibly and defensibly. This series examines the specific domains in which the legal department’s leadership…
Cheryl Johnson
Feb 15, 2025 · 4 min read


Building Ethical AI Frameworks: Best Practices & Insights
Artificial Intelligence (AI) continues to reshape industries, revolutionizing how businesses operate and interact with their customers. However, as AI becomes more pervasive, the need for ethical frameworks to guide its development and deployment has never been more critical. RAIGE Advisors, a consulting business specializing in Responsible AI, AI Governance, and AI Ethics, is at the forefront of helping organizations navigate these complexities and promote ethical AI practices.
Cheryl Johnson
Dec 18, 2024 · 2 min read


Navigating AI Governance: Key Strategies for Success
In the rapidly evolving landscape of Artificial Intelligence (AI), navigating the complexities of AI governance is a crucial task for businesses aiming to harness the power of AI while upholding ethical standards and regulatory compliance. By implementing robust frameworks and strategies, organizations can foster trust among stakeholders, mitigate risks, and ensure responsible AI practices.
Cheryl Johnson
Aug 15, 2024 · 2 min read


Ensuring Responsible AI Practices: Expert Guidance & Tips
As Artificial Intelligence (AI) becomes more prevalent in organizations across various industries, ensuring responsible AI practices has never been more critical. With the potential to impact society in profound ways, it is essential for businesses to navigate the complexities of AI while upholding ethical standards and building stakeholder trust.
Cheryl Johnson
Jun 15, 2024 · 2 min read


AI Regulation as a Catalyst for Innovation, Not a Barrier to Progress
In the rapid evolution of Artificial Intelligence, debate continues over whether regulation will stifle innovation or strengthen it. While some warn that stringent rules may slow progress, a closer look suggests that well-crafted, common-sense frameworks can build trust, enhance safety, and align AI development with societal and ethical expectations—ultimately supporting more sustainable and responsible innovation.
Cheryl Johnson
Feb 15, 2024 · 8 min read


Exploring US AI Regulatory Headlines from 2023
2023 marked a turning point for AI regulation in the United States. While no single federal AI law exists, Executive Order 14110 and the NIST AI Risk Management Framework reshaped how organizations think about AI governance, privacy, and risk. This article explores the key US AI regulatory headlines of 2023 and what they signal for 2024 and beyond.
Cheryl Johnson
Dec 28, 2023 · 8 min read