How To Start Asking Tough Questions

Anthony Green is an information security and AI expert, championing responsible innovation, ethics, and secure AI deployments.
AI has evolved from a futuristic novelty into a workhorse with outsized returns on investment for modern businesses. Companies are already using it to power chatbots, analyze massive datasets and streamline critical operations.
Yet, just as we’ve learned from years of cyberattacks and data breaches, rapid innovation can expose serious risks. Biased models, unauthorized data sharing and growing international regulations underscore the urgency of taking AI risk assessments as seriously as we take cybersecurity.
Lessons From Cybersecurity
In the 2000s and even the 2010s, many organizations treated cybersecurity as an afterthought, implementing it only after software was built and clients or regulations demanded it. After a wave of high-profile breaches, mature organizations now consider security at every stage, relying on penetration tests, security audits and vendor questionnaires.
AI must follow the same path. You only need to look at a few real-world incidents to see why AI oversight matters. Amazon’s AI-powered recruiting tool learned to discriminate against female job applicants, reflecting biases in its training data. Samsung employees accidentally pasted proprietary code into a public chatbot, potentially leaking it into the chatbot’s training data.
These events are just a drop in the bucket, highlighting the need for structured AI governance in any organization.
Why AI Risk Assessments Are Essential
AI can have tangible impacts on people’s lives: Unchecked, it might deny loans, misjudge job candidates or breach privacy. Because of those direct impacts, regulations around the globe are quickly starting to catch up:
• Canada: Bill C-27 outlines an AI and Data Act for responsible data handling.
• EU: The EU AI Act classifies AI systems into risk categories.
• U.S.: The NIST AI Risk Management Framework and White House Blueprint for an AI Bill of Rights guide ethical AI use.
• China: Interim Measures for the Management of Generative AI Services cover data usage and transparency.
Building On Existing Frameworks
While new regulations and frameworks will play a role in AI safety, we don’t need to reinvent the wheel. Many established standards and best practices for cybersecurity can be adapted to AI:
• ISO 27001:2022: Requires security planning, risk assessments for changes and controls over third-party vendors.
• SOC 2: Details security and change-management controls relevant to software development.
Instead of treating AI as a stand-alone concern, organizations can fold AI oversight into existing risk management processes. A straightforward way to normalize AI risk assessments is by adding AI-focused questions to your usual security or vendor review. Start with:
1. Data And Bias
• Where did the training data come from? Is it prone to systematic bias?
• How do we test for discrimination or skewed outputs? (See the sketch after this list.)
2. Privacy And Regulatory Compliance
• Does data usage align with GDPR, Canada’s Bill C-27 or other applicable laws?
• If AI automates decisions about individuals, do we meet legal obligations (e.g., the EU AI Act or U.S. guidelines)?
3. Security Controls
• Are we regularly auditing AI models for vulnerabilities, akin to penetration testing?
• How is access to the AI system governed, and is there a dedicated incident response plan?
4. Accountability And Transparency
• Who will be held responsible for errors or bias?
• Are we able to explain AI-driven decisions to users or regulators?
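To make the "skewed outputs" question concrete, here is a minimal sketch of a disparate-impact check in Python. It assumes you can pair each automated decision with a group label; the four-fifths threshold echoes a well-known regulatory heuristic, and the data and function names are illustrative assumptions, not a compliance standard.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    most-favored group's rate (the common 'four-fifths' heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical decisions: (group label, 1 = approved / 0 = denied)
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(disparate_impact(decisions))  # {'B': 0.5} -> well below 0.8, investigate
```

A check like this belongs in the same pipeline as your automated tests, so a skewed model fails a build the same way a broken feature does.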
Practical Steps To Integrate AI Governance
Once those questions have been answered, there are a few key steps organizations can take:
1. Expand existing security audits. Every time you run a cybersecurity assessment, add a dedicated AI section. This ensures AI is never overlooked.
2. Educate your workforce. Incidents like Samsung’s data leak show how quickly untrained employees can trigger privacy or IP issues. Provide clear guidelines on handling sensitive data with AI. This can be integrated into your security or privacy awareness training.
3. Form a cross-functional team. Include compliance officers, attorneys, data scientists and business stakeholders. This group sets AI policies, monitors emerging regulations and decides on assessments for high-risk AI projects.
4. Introduce continual oversight. AI models and the data they process drift over time, and new regulations appear regularly. Establish periodic checkups, like patch cycles in cybersecurity, to catch hidden risks or compliance gaps.
5. Keep records. Maintain a paper trail of AI approvals, bias tests and security checks. Detailed documentation can ease discussions with partners, regulators or customers. (A minimal sketch of such a record follows this list.)
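As a sketch of what step five can look like in practice, the snippet below appends one JSON line per governance event to an append-only log, with a hash that makes after-the-fact tampering evident. The file name, schema and field values are assumptions for illustration, not a prescribed format.

```python
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("ai_governance_log.jsonl")  # hypothetical append-only audit trail

def record_event(system, event, details, reviewer):
    """Append one audit record per governance event (approval, bias test,
    security check) as a single JSON line with a tamper-evident hash."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "event": event,          # e.g. "bias_test", "security_review"
        "details": details,
        "reviewer": reviewer,
    }
    entry["sha256"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage: log a passing bias test for a lending model
record_event("loan-scoring-v2", "bias_test",
             {"metric": "four_fifths_ratio", "result": 0.91, "passed": True},
             reviewer="j.doe")
```

One line per event keeps the trail easy to grep, diff and hand to an auditor, which is exactly the role change logs already play in cybersecurity.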
Responsible AI In Action
Finance and healthcare often lead the pack. Banks thoroughly validate lending models before going live, ensuring they comply with anti-discrimination and financial regulations. Hospitals embed ethics reviews when rolling out AI-driven diagnostics to protect patient privacy.
These examples show that AI governance doesn’t stall innovation; instead, it safeguards it.
Final Takeaway
Just as cybersecurity has evolved over the past decade from an afterthought into a daily priority, AI risk assessments must follow. If you’re deploying AI without checks for bias, privacy and legal compliance, you risk repeating the mistakes that have haunted the cybersecurity industry. Now is the time to:
• Ask AI-specific questions during security audits and vendor reviews.
• Adapt established frameworks (ISO 27001, SOC 2) to include AI.
• Document every step, creating a culture of transparency and accountability.
By embracing AI risk governance alongside standard security measures, organizations can harness AI’s game-changing potential without sacrificing user trust or racking up regulatory fines. Technology evolves at lightning speed, and the organizations that thrive will be those bold enough to lead with responsibility.
And if you’re seeking a practical starting point, here’s a free resource from the Canadian Digital Governance Council that I helped build to guide your efforts: The Generative AI Security Program Questionnaire. This tool focuses on ethics, privacy, and security questions that help pinpoint risks before AI gets deployed in production.
By applying a systematic approach, just as we do with cybersecurity, you’ll be better prepared to innovate responsibly and to protect both your brand and your customers.