Next year, thousands of generative AI pilots will move into production. Despite good intentions and rapidly evolving AI technology, most organizations face very real hurdles in putting AI into production at scale, and AI governance is no longer optional. It is also easier said than done. Here are 4 common AI governance challenges I have found working with customers, and approaches to solving them:

𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲 #𝟭: AI governance collaboration requires a lot of manual work, amplified by changes in data and model versions.
Solution: Automate governance activities as much as possible (a small sketch follows below).

𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲 #𝟮: Companies have models in multiple tools, applications, and platforms, developed both inside and outside the organization.
Solution: Consolidate as much as possible into a single governance platform.

𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲 #𝟯: Governance is not a one-size-fits-all exercise.
Solution: Configure governance to your specific situation.

𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲 #𝟰: Constraining technical teams in their choice of technology or frameworks slows them down.
Solution: Provide an open architecture that wraps around the AI tooling of choice.

As new generative AI models bring both benefits and risks, organizations need an enterprise-wide approach to governing all AI. With regulation impending, they must urgently address risks and govern both old and new AI, no matter who created it. The key: take a proactive approach and address AI governance before regulation requires it.
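A minimal sketch of what the solution to Challenge #1 can look like in practice: an automated pre-promotion check that verifies a new model version carries the governance metadata reviewers would otherwise chase down by hand. The required fields, risk tiers, and registry record shape are illustrative assumptions, not any specific product's schema.

```python
# Hypothetical sketch: automating one governance activity by validating model
# metadata before a new version is promoted. Field names are assumptions.
from dataclasses import dataclass, field

REQUIRED_FIELDS = [
    "owner", "intended_use", "training_data_version",
    "risk_tier", "last_evaluation_date",
]

@dataclass
class ModelRecord:
    name: str
    version: str
    metadata: dict = field(default_factory=dict)

def governance_check(record: ModelRecord) -> list[str]:
    """Return a list of governance violations; empty means the model may be promoted."""
    missing = [f for f in REQUIRED_FIELDS if not record.metadata.get(f)]
    issues = [f"missing metadata field: {f}" for f in missing]
    # Assumed rule: high-risk models need a documented human sign-off.
    if record.metadata.get("risk_tier") == "high" and not record.metadata.get("human_review_signoff"):
        issues.append("high-risk model requires a documented human review sign-off")
    return issues

if __name__ == "__main__":
    candidate = ModelRecord(
        name="credit-scoring",
        version="2.3.0",
        metadata={"owner": "risk-analytics", "intended_use": "loan pre-screening",
                  "training_data_version": "2024-05", "risk_tier": "high",
                  "last_evaluation_date": "2024-06-01"},
    )
    problems = governance_check(candidate)
    print("OK to promote" if not problems else "\n".join(problems))
```

Run in CI/CD on every new model or data version, a check like this keeps governance reviews in step with change instead of trailing behind it.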
AI Governance Practices
-
CISOs are the adult chaperones at the no-holds-barred enterprise AI party. The music’s loud, the tools are multiplying, and someone’s definitely just fine-tuned a model on restricted data. Welcome to GenAI adoption in the wild. Notes from recent conversations with security leaders across industries:

(1) Governance must assume AI is already in use. AI is already inside your company. The question is: do you know how, where, and why it’s being used? Even without formal rollouts, models are seeping in through vendors, team tools, browser extensions, and well-meaning employees. CISOs are shifting from permissioned adoption to presumed presence - layering AI policy atop data classification and updating acceptable-use playbooks accordingly.

(2) Scope creep is inevitable; plan for it. One CISO greenlit a tool for summarizing internal memos, only to find it rewriting legal documents two weeks later. This is just how general-purpose tools work: they generalize. So now there’s a philosophical split:
- One camp says: approve narrowly, monitor tightly, hope for containment.
- The other says: assume it will expand, mitigate broadly, and try to look wise when it inevitably does.
It’s the same debate we saw in early cloud adoption. Once it’s in, it grows. You can’t freeze a moving system. You can only steer it.

(3) Experimentation is the goal, not the threat. Innovation needs room to breathe. Forward-thinking companies are creating sanctioned AI sandboxes: isolated zones where teams can safely test tools with clear usage boundaries, audit logs, and human-in-the-loop review. The bigger lift? Moving from sandbox to production with oversight intact.

(4) AI amplifies old risks more than it invents new ones. DLP gaps, shadow IT, and over-permissioning aren't new. What’s new is the velocity and opacity of AI, which supercharges these risks:
- Third-party models evolve behind closed doors, outside your change management systems.
- Sensitive data can slip through prompts, plugins, and browser extensions before anyone notices.
- Some models carry “latent behaviors” - responses that activate only under specific inputs, like ticking time bombs you didn’t know you deployed.
The problems aren’t unfamiliar. The speed, scale, and unpredictability are.

(5) Policies are only as good as their enforcement. Leaders are moving from principles to practice (see the sketch after this post):
- Embedding violation alerts into workflows
- Mandating enterprise accounts for AI tools
- Training employees on AI hygiene
- Using ROI and behavior metrics (like Copilot usage) to guide decisions
As one CISO told me, with the weary clarity of someone who’s read too many whitepapers: “If your AI governance lives in a PDF, it’s not real.”

TL;DR: AI governance isn’t a new discipline. But it is a faster, messier, higher-stakes remix of the same cybersecurity fundamentals: visibility, classification, enforcement, and education. CISOs aren’t there to kill the vibe. They’re there to make sure the party doesn’t burn the house down.
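As a small illustration of point (5), here is a hedged sketch of an enforcement hook in the request path: outbound prompts are screened for sensitive-data patterns before they reach an external model, and matches raise an alert rather than living in a policy PDF. The regex patterns, the alert mechanism, and the call_external_llm placeholder are assumptions; a real deployment would lean on the organization's own DLP classifiers and security workflow tooling.

```python
import re

# Illustrative prompt-screening hook. Patterns and the alert path are assumptions.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def call_external_llm(prompt: str) -> str:
    # Placeholder for the sanctioned LLM client.
    return f"(model response to: {prompt[:40]}...)"

def send_to_model(prompt: str) -> str:
    violations = check_prompt(prompt)
    if violations:
        # In practice this would raise an alert in the security workflow, not just print.
        print(f"ALERT: prompt blocked, matched patterns: {violations}")
        return "[blocked by AI usage policy]"
    return call_external_llm(prompt)

if __name__ == "__main__":
    print(send_to_model("Summarize this memo marked CONFIDENTIAL before the board meeting."))
```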
-
AI field note: AI needs nothing less (and nothing more) than the security afforded to your data by AWS. It requires the capabilities and culture to train and tune securely. Foundation model weights, the apps built around them, and the data used to train, tune, ground, or prompt them all represent valuable assets containing sensitive business data (like personal, compliance, operational, and financial data). It's imperative these assets stay protected, private, and secure. To do this, we follow three principles:

1️⃣ Complete isolation of the AI data from the infrastructure operator. AWS has no ability to access customer content and AI data, such as AI model weights and data processed with models. This protection applies to all Nitro-based instances, including Inferentia, Trainium, and GPUs like P4, P5, G5, and G6.

2️⃣ Ability for customers to isolate AI data from themselves. We provide mechanisms to allow model weights and data to be loaded into hardware while remaining isolated and inaccessible from customers’ own users and software. With Nitro Enclaves and KMS, you can encrypt your sensitive data using keys that you own and control, store that data in a location of your choice, and securely transfer the encrypted data to an isolated compute environment for inference.

3️⃣ Protected infrastructure communications. Communication between devices in the ML accelerator infrastructure must be protected, and all externally accessible links between the devices must be encrypted. Through the Nitro System, you can cryptographically validate your applications and decrypt data only when the necessary checks pass. This allows AWS to offer end-to-end encryption for your data as it flows through generative AI workloads. We plan to offer this end-to-end encrypted flow in the upcoming AWS-designed Trainium2, as well as GPU instances based on NVIDIA's upcoming Blackwell architecture, both of which offer secure communications between devices.

This approach is industry-leading. It gives customers peace of mind to protect their data while moving quickly with their generative AI programs, across the entire stack. You can tell a lot about how a company makes decisions from its culture. A research organization, for example, will likely make a different set of trade-offs in how it collects and uses data to differentiate and drive its research. There is nothing wrong with this so long as it's transparent, but it's different from how we approach things at Amazon. Alternatively, while generative AI is new, many of the companies providing AI services have been serving customers long enough to establish a history with respect to security (and the culture which underpins it). It's worth taking the time to inspect and understand that history, as past behavior is likely to be indicative of future delivery. I hope you take the time to do that with AWS. More in the excellent blog linked below.
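To make principle 2️⃣ concrete, here is a hedged sketch of envelope-encrypting model weights with a customer-managed KMS key: KMS generates a data key, the key is used locally to encrypt the weights, and only the KMS-encrypted copy of that key is stored alongside the ciphertext. The key ARN, region, and file handling are placeholders, and the Nitro Enclaves attestation flow described in the post is not shown.

```python
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Placeholder values; a real deployment would use your own CMK ARN and region.
kms = boto3.client("kms", region_name="us-east-1")
KEY_ID = "arn:aws:kms:us-east-1:111122223333:key/your-cmk-id"

def encrypt_weights(path: str) -> tuple[bytes, bytes, bytes]:
    """Envelope-encrypt a weights file with a KMS-generated data key."""
    data_key = kms.generate_data_key(KeyId=KEY_ID, KeySpec="AES_256")
    nonce = os.urandom(12)
    with open(path, "rb") as f:
        ciphertext = AESGCM(data_key["Plaintext"]).encrypt(nonce, f.read(), None)
    # Only the KMS-encrypted copy of the data key leaves this function.
    return ciphertext, nonce, data_key["CiphertextBlob"]

def decrypt_weights(ciphertext: bytes, nonce: bytes, encrypted_key: bytes) -> bytes:
    """Decrypt inside the trusted environment; KMS key policy controls who may call Decrypt."""
    plaintext_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
    return AESGCM(plaintext_key).decrypt(nonce, ciphertext, None)
```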
-
The Decision Tree for Responsible AI is a guide developed by AAAS (American Association for the Advancement of Science) to help put ethical principles into practice when creating and using AI, and to aid users and their organizations in making informed choices regarding the development or deployment of AI solutions. The DT is meant to be versatile, but it may not cover every unique situation and might not always have clear yes/no answers. It's advised to consult the chart continually throughout the AI solution's development and deployment, given the changing nature of projects.

Engaging stakeholders inclusively is vital to this framework. Before using the tree, determine who is best suited to answer the questions based on their expertise. To do this, the decision tree refers to Partnership on AI's white paper “Making AI Inclusive” (see: https://lnkd.in/gEeDhe4q) on stakeholder engagement, to make sure that the right people are included and get a seat at the table:
1. All participation is a form of labor that should be recognized
2. Stakeholder engagement must address inherent power asymmetries
3. Inclusion and participation can be integrated across all stages of the development lifecycle
4. Inclusion and participation must be integrated into the application of other responsible AI principles

The decision tree was developed against the backdrop of the NIST AI Risk Management Framework (AI RMF 1.0) and its definition of 7 principles of trustworthy AI: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful biases managed. See: https://lnkd.in/gHp5iE7x

Apart from the decision tree itself, it is worth having a look at the additional resources at the end of the paper:
- 4 overall guiding principles for evaluating AI in the context of human rights (Informed Consent, Beneficence, Nonmaleficence, Justice).
- Examples of groups that are commonly subject to disproportionate impacts.
- Common ways that AI can lead to harm (over-reliance on safety features, inadequate fail-safes, over-reliance on automation, distortion of reality or gaslighting, reduced self-esteem/reputation damage, addiction/attention hijacking, identity theft, misattribution, economic exploitation, devaluation of expertise, dehumanization, public shaming, loss of liberty, loss of privacy, environmental impact, erosion of social & democratic structures). See for more from Microsoft: https://lnkd.in/gCVK9kNe
- Examples of guidance for regular post-deployment monitoring and auditing of AI systems.

#decisiontree #RAI
-
The OMB issues guidance on the use and governance of AI. If actually followed, these guidelines could be quite effective. 👇

🏛️ Issued By: Executive Office of the President, Office of Management and Budget

🔍 Overview:
--------------
AI is transforming government operations, but it's important to manage its risks, especially those affecting public rights and safety. This memo guides federal agencies on how to responsibly use AI, including setting up AI governance, promoting innovation, and managing risks.

👩💼 Strengthening AI Governance:
----------------------------------
Each agency must appoint a Chief AI Officer (CAIO) within 60 days to oversee AI implementation and coordination. Agencies must create AI use case inventories and comply with new requirements for managing AI risks.

💡 Advancing Responsible AI Innovation:
----------------------------------------
Agencies should develop strategies to responsibly adopt AI, including improving IT infrastructure, data access, and workforce skills. Sharing AI models, code, and data is encouraged to foster innovation and transparency.

🛡️ Managing Risks from the Use of AI:
--------------------------------------
Agencies need to follow minimum practices for managing risks from AI that impacts safety and rights. Regular testing, monitoring, and evaluation of AI systems are required to ensure they are safe and effective. (Yes!)

🔎 Scope and Applicability:
----------------------------
The memo applies to all executive agencies, focusing on AI risks related to agency decisions and actions. National Security Systems are not covered by this memorandum.

---
Much of this has been said in other places in the government, but this guidance is very direct.

#aigovernance #airiskmanagement #ai #aiethics
Jeffery Recker, Khoa Lam, Bryan Ilg, Dr. Benjamin Lange, Ali Hasan, Jovana Davidovic, Borhane Blili-Hamelin, PhD
-
It's clear that we're moving beyond the very early days of generative AI; we're now in the midst of an exciting and game-changing technological evolution. As new AI applications emerge and scale, responsible AI has to scale right along with them. Yet more than half of the 756 business leaders we surveyed say that their company does not have a team dedicated to responsible AI. Here are the top four best practices I give executives looking to put this theory into practice:

1. Put your people first and deepen your workforce's understanding of generative AI.
2. Assess risk on a case-by-case basis and introduce guardrails such as rigorous testing. Always test with humans to ensure high confidence in the final results.
3. Iterate across the endless loop that is the AI life cycle. Deploy, fine-tune, and keep improving. Remember, innovation is an ongoing process, not a one-time goal.
4. Test, test again, and then test again. Rigorous testing is the secret strategy behind every innovation (a minimal testing sketch follows below).

Finally, remember there is no one central guardian of responsible AI. While the commitment of organizations and business leaders is vital, this effort is a shared responsibility between tech companies, policymakers, community groups, scientists, and more. https://lnkd.in/gg8anUWn
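Here is a minimal sketch of the testing loop behind practices 2 and 4: a small regression suite scores model outputs against expected answers and routes anything below a threshold to human review. The test cases, similarity score, and threshold are illustrative assumptions; real evaluations would use task-specific metrics and human reviewers.

```python
from difflib import SequenceMatcher

# Assumed example cases and review threshold; replace with your own evaluation set.
TEST_CASES = [
    {"prompt": "What is our refund window?", "expected": "30 days from delivery"},
    {"prompt": "Which regions do we ship to?", "expected": "US, Canada, and the EU"},
]
HUMAN_REVIEW_THRESHOLD = 0.6

def score(output: str, expected: str) -> float:
    """Crude similarity score; production systems would use task-specific evaluators."""
    return SequenceMatcher(None, output.lower(), expected.lower()).ratio()

def run_regression(generate) -> None:
    """Run every test case and flag low-scoring outputs for human review."""
    for case in TEST_CASES:
        output = generate(case["prompt"])
        s = score(output, case["expected"])
        status = "PASS" if s >= HUMAN_REVIEW_THRESHOLD else "NEEDS HUMAN REVIEW"
        print(f"{status} ({s:.2f}): {case['prompt']}")

if __name__ == "__main__":
    # Stand-in model; in practice this calls the deployed system under test.
    run_regression(lambda prompt: "Refunds are accepted within 30 days of delivery.")
```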
-
National Security Agency’s Artificial Intelligence Security Center (NSA AISC) published the joint Cybersecurity Information Sheet Deploying AI Systems Securely in collaboration with CISA, the Federal Bureau of Investigation (FBI), the Australian Signals Directorate’s Australian Cyber Security Centre (ASD ACSC), the Canadian Centre for Cyber Security (CCCS), the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom’s National Cyber Security Centre (NCSC-UK).

The guidance provides best practices for deploying and operating externally developed artificial intelligence (AI) systems and aims to:
1) Improve the confidentiality, integrity, and availability of AI systems.
2) Ensure there are appropriate mitigations for known vulnerabilities in AI systems.
3) Provide methodologies and controls to protect, detect, and respond to malicious activity against AI systems and related data and services.

This report expands upon the ‘secure deployment’ and ‘secure operation and maintenance’ sections of the Guidelines for secure AI system development and incorporates mitigation considerations from Engaging with Artificial Intelligence (AI).

#artificialintelligence #ai #securitytriad #cybersecurity #risks #llm #machinelearning
-
Whether you’re integrating a third-party AI model or deploying your own, adopt these practices to shrink the attack surface you expose:

• Least-Privilege Agents – Restrict what your chatbot or autonomous agent can see and do. Sensitive actions should require a human click-through.
• Clean Data In, Clean Model Out – Source training data from vetted repositories, hash-lock snapshots, and run red-team evaluations before every release.
• Treat AI Code Like Stranger Code – Scan, review, and pin dependency hashes for anything an LLM suggests. New packages go in a sandbox first.
• Throttle & Watermark – Rate-limit API calls, embed canary strings, and monitor for extraction patterns so rivals can’t clone your model overnight.
• Choose Privacy-First Vendors – Look for differential privacy, “machine unlearning,” and clear audit trails, then mask sensitive data before you ever hit Send.

Rapid-fire user checklist: verify vendor audits, separate test vs. prod, log every prompt/response, keep SDKs patched, and train your team to spot suspicious prompts.

AI security is a shared-responsibility model, just like the cloud. Harden your pipeline, gate your permissions, and give every line of AI-generated output the same scrutiny you’d give a pull request (a short sketch of hash-locked snapshots and canary checks follows below). Your future self (and your CISO) will thank you. 🚀🔐
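Below is a short, hedged sketch of two items from the list above: hash-locking a dataset snapshot so training refuses to run if the data silently changes, and scanning model output for planted canary strings as a signal of extraction or data leakage. File paths and canary values are placeholder assumptions.

```python
import hashlib

# Placeholder canary strings assumed to be planted in proprietary data.
CANARY_STRINGS = {"ZX-CANARY-7741", "ZX-CANARY-9002"}

def snapshot_digest(path: str) -> str:
    """Compute a SHA-256 digest of a dataset snapshot so later runs can verify it."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_snapshot(path: str, pinned_digest: str) -> bool:
    """Refuse to train if the snapshot no longer matches the pinned hash."""
    return snapshot_digest(path) == pinned_digest

def contains_canary(model_output: str) -> bool:
    """Flag responses that leak planted canary strings."""
    return any(canary in model_output for canary in CANARY_STRINGS)
```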
-
🤔 Because of the timing of its release, the interim report of the United Nations AI Advisory Body has received less attention. Its focus is on establishing guiding principles for global AI governance.

👉 Of particular interest is their Guiding Principle 3: AI governance should be built in step with data governance and the promotion of data commons.

✅ Quote: “Data is critical for many major AI systems. Its governance and management in the public interest cannot be divorced from other components of AI governance ... Regulatory frameworks and techno-legal arrangements that protect privacy and security of personal data, consistent with applicable laws, while actively facilitating the use of such data will be a critical complement to AI governance arrangements, consistent with local or regional law. The development of public data commons should also be encouraged with particular attention to public data that is critical for helping solve societal challenges including climate change, public health, economic development, capacity building, and crisis response, for use by multiple stakeholders.” See https://lnkd.in/e6z2kGdp

➡️ This resonates a lot with the core recommendations of our essay (with Friederike Schüür), “Interwoven Realms: Data Governance as the Bedrock for AI Governance”. See: https://lnkd.in/eZYgRKE2

👉 Our essay provides six reasons why AI governance is unattainable without a comprehensive and robust framework of data governance.

➡️ In addressing this intersection, the essay aims to shed light on the necessity of integrating data governance more prominently into the conversation on AI, thereby fostering a more cohesive and effective approach to the governance of this transformative technology.

🤔 I am eager to see how the AI Advisory Body will align AI governance with data governance.

#ai #data #aigovernance #datagovernance #artificialintelligence
-
Did you know that 80% of AI projects fail due to a lack of trust? As organizations incorporate AI into their operations and offerings, establishing trust and effectively managing the associated risks needs to be a priority.

My partner in leading Deloitte’s Enterprise Trust work, Clifford Goss, CPA, Ph.D., was recently featured in a great Wall Street Journal article discussing how essential risk management is for successful AI adoption: https://deloi.tt/3TNckVQ. Cliff, along with our colleague Gina Primeaux, is focused on helping organizations manage the risk, regulatory, and compliance aspects of AI.

Cliff shares two ways organizations can strengthen AI trust:
1. Top-down risk management: Establishing strong governance policies and controls empowers organizations to leverage AI confidently while maintaining compliance.
2. Bottom-up risk management: Conducting thorough cyber assessments helps address concerns like unethical data use, data leakage, and misuse, reducing financial and reputational risks.

To keep pace with rapid AI advancements, from generative to agentic AI, risk management programs must remain flexible and responsive to new challenges and regulations. In doing so, organizations can build the trust necessary to fully realize AI’s benefits.