Introduction to AI Security and Risk Management Training
AI security and risk management is the practice of identifying, mitigating, and managing security threats, compliance risks, and operational exposures in AI-driven systems and workflows.
This instructor-led, live training (online or onsite) is aimed at beginner-level IT security, risk, and compliance professionals who wish to understand foundational AI security concepts, threat vectors, and global frameworks such as the NIST AI RMF and ISO/IEC 42001.
By the end of this training, participants will be able to:
- Understand the unique security risks introduced by AI systems.
- Identify threat vectors such as adversarial attacks, data poisoning, and model inversion.
- Apply foundational governance models such as the NIST AI Risk Management Framework.
- Align AI usage with emerging standards, compliance guidelines, and ethical principles.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Course Outline
Foundations of AI and Security
- What makes AI systems unique from a security perspective
- Overview of the AI lifecycle: data, training, inference, and deployment
- A basic taxonomy of AI risks: technical, ethical, legal, and organizational
AI-Specific Threat Vectors
- Adversarial examples and model manipulation
- Model inversion and data leakage risks
- Data poisoning during the training phase
- Risks of generative AI (e.g., LLM misuse, prompt injection)
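To make the adversarial-example threat concrete, here is a minimal sketch of an FGSM-style attack against a toy logistic model. The weights, input, and step size are hypothetical values chosen purely for illustration, not a trained model:

```python
import math

# Toy logistic "model" with hand-picked weights (hypothetical values).
w = [2.0, -3.0]
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    """Probability of the positive class."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# A clean input the model classifies as positive (score > 0.5).
x = [1.0, 0.2]
clean_score = predict(x)

# FGSM-style attack: step each feature in the sign of the loss gradient.
# For logistic loss with true label y, dLoss/dx_i = (predict(x) - y) * w_i.
y = 1.0
grad = [(clean_score - y) * wi for wi in w]
eps = 0.4
x_adv = [xi + eps * (1.0 if g > 0 else -1.0) for xi, g in zip(x, grad)]
adv_score = predict(x_adv)

print(f"clean score: {clean_score:.3f}, adversarial score: {adv_score:.3f}")
```

A small, targeted perturbation flips the model's decision even though the input barely changes, which is the core property that makes adversarial examples a security concern.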
Security Risk Management Frameworks
- NIST AI Risk Management Framework (NIST AI RMF)
- ISO/IEC 42001 and other AI-specific standards
- Mapping AI risks to existing enterprise GRC frameworks
AI Governance and Compliance Principles
- Accountability and auditability for AI
- Transparency, explainability, and fairness as security-relevant properties
- Bias, discrimination, and downstream harm
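Fairness properties like the ones above can be measured. As a minimal sketch, the demographic parity difference compares positive-decision rates across groups; the decision data and group names below are made up for illustration:

```python
# Hypothetical model decisions for two groups (1 = positive decision,
# 0 = negative). Both the data and the group labels are invented.
outcomes = {
    "group_a": [1, 1, 0, 1, 0],
    "group_b": [0, 1, 0, 0, 0],
}

def positive_rate(decisions):
    return sum(decisions) / len(decisions)

# Demographic parity difference: the gap in positive-decision rates
# between groups. A large gap is a fairness red flag worth investigating.
dpd = abs(positive_rate(outcomes["group_a"]) - positive_rate(outcomes["group_b"]))
print(f"demographic parity difference: {dpd:.2f}")
```

A gap of this size (40 percentage points) would typically trigger a deeper review of the training data and model behavior.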
Enterprise Readiness and AI Security Policy
- Defining roles and responsibilities in an AI security program
- Policy elements: development, procurement, usage, and decommissioning
- Third-party risk and vendor AI tool usage
Regulatory Landscape and Global Trends
- Overview of the EU AI Act and international regulation
- The U.S. Executive Order on Safe, Secure, and Trustworthy AI
- Emerging national frameworks and industry-specific guidance
Optional Workshop: Risk Mapping and Self-Assessment
- Mapping real-world AI use cases to NIST AI RMF functions
- Performing a basic AI risk self-assessment
- Identifying internal gaps in AI security readiness
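The workshop steps above can be sketched as a simple scoring exercise over the four NIST AI RMF core functions (Govern, Map, Measure, Manage). The self-assessment questions and scores below are invented for this sketch:

```python
# Hypothetical self-assessment answers, scored 0 (not started), 1 (partial),
# or 2 (in place), grouped by the four NIST AI RMF core functions.
# The question names are invented for illustration.
answers = {
    "Govern":  {"ai_policy_exists": 2, "roles_assigned": 1},
    "Map":     {"use_cases_inventoried": 1, "context_documented": 0},
    "Measure": {"risk_metrics_defined": 0, "testing_in_place": 1},
    "Manage":  {"mitigation_plans": 1, "incident_response": 0},
}

def maturity(scores):
    """Percentage of the maximum score achieved per RMF function."""
    return {fn: round(100 * sum(qs.values()) / (2 * len(qs)))
            for fn, qs in scores.items()}

def readiness_gaps(scores, threshold=50):
    """Functions scoring below the threshold are internal readiness gaps."""
    return sorted(fn for fn, pct in maturity(scores).items() if pct < threshold)

print(maturity(answers))
print(readiness_gaps(answers))
```

Real assessments use far richer question sets, but even a coarse score like this makes internal gaps visible at a glance.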
Summary and Next Steps
Requirements
- An understanding of basic cybersecurity principles
- Experience with IT governance or risk management frameworks
- Familiarity with general AI concepts is helpful but not required
Audience
- IT security teams
- Risk managers
- Compliance professionals
Related Courses
AI Governance, Compliance, and Security for Enterprise Leaders
14 hours: This instructor-led, live training in Macau (online or onsite) is aimed at intermediate-level enterprise leaders who wish to understand how to govern and secure AI systems responsibly and in compliance with emerging global frameworks such as the EU AI Act, GDPR, ISO/IEC 42001, and the U.S. Executive Order on AI.
By the end of this training, participants will be able to:
- Understand the legal, ethical, and regulatory risks of using AI across departments.
- Interpret and apply major AI governance frameworks (EU AI Act, NIST AI RMF, ISO/IEC 42001).
- Establish security, auditing, and oversight policies for AI deployment in the enterprise.
- Develop procurement and usage guidelines for third-party and in-house AI systems.
AI Risk Management and Security in the Public Sector
7 hours: Artificial Intelligence (AI) introduces new dimensions of operational risk, governance challenges, and cybersecurity exposure for government agencies and departments.
This instructor-led, live training (online or onsite) is aimed at public sector IT and risk professionals with limited prior experience in AI who wish to understand how to evaluate, monitor, and secure AI systems within a government or regulatory context.
By the end of this training, participants will be able to:
- Interpret key risk concepts related to AI systems, including bias, unpredictability, and model drift.
- Apply AI-specific governance and auditing frameworks such as NIST AI RMF and ISO/IEC 42001.
- Recognize cybersecurity threats targeting AI models and data pipelines.
- Establish cross-departmental risk management plans and policy alignment for AI deployment.
Format of the Course
- Interactive lecture and discussion of public sector use cases.
- AI governance framework exercises and policy mapping.
- Scenario-based threat modeling and risk evaluation.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Building Secure and Responsible LLM Applications
14 hours: This instructor-led, live training in Macau (online or onsite) is aimed at intermediate-level to advanced-level AI developers, architects, and product managers who wish to identify and mitigate risks associated with LLM-powered applications, including prompt injection, data leakage, and unfiltered output, while incorporating security controls like input validation, human-in-the-loop oversight, and output guardrails.
By the end of this training, participants will be able to:
- Understand the core vulnerabilities of LLM-based systems.
- Apply secure design principles to LLM app architecture.
- Use tools such as Guardrails AI and LangChain for validation, filtering, and safety.
- Integrate techniques like sandboxing, red teaming, and human-in-the-loop review into production-grade pipelines.
Privacy-Preserving Machine Learning
14 hours: This instructor-led, live training in Macau (online or onsite) is aimed at advanced-level professionals who wish to implement and evaluate techniques such as federated learning, secure multiparty computation, homomorphic encryption, and differential privacy in real-world machine learning pipelines.
By the end of this training, participants will be able to:
- Understand and compare key privacy-preserving techniques in ML.
- Implement federated learning systems using open-source frameworks.
- Apply differential privacy for safe data sharing and model training.
- Use encryption and secure computation techniques to protect model inputs and outputs.
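Of the techniques this course lists, differential privacy is the easiest to show in a few lines. Below is a minimal sketch of the Laplace mechanism applied to a counting query; the dataset, threshold, and seed are hypothetical values for illustration:

```python
import math
import random

def dp_count(values, predicate, epsilon, rng):
    """Answer a counting query with epsilon-differential privacy via the
    Laplace mechanism. A count has sensitivity 1, so the noise scale
    is 1/epsilon."""
    true_count = sum(1 for v in values if predicate(v))
    # Inverse-CDF sample from Laplace(0, 1/epsilon).
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical record values; the dataset and threshold are made up.
ages = [23, 45, 31, 67, 52, 29, 40]
rng = random.Random(42)  # fixed seed so the sketch is reproducible
noisy = dp_count(ages, lambda a: a > 30, epsilon=1.0, rng=rng)
print(f"noisy count: {noisy:.2f}")
```

The true count here is 5; the released value is perturbed so that no single record's presence or absence can be confidently inferred from the answer.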
Red Teaming AI Systems: Offensive Security for ML Models
14 hours: This instructor-led, live training in Macau (online or onsite) is aimed at advanced-level security professionals and ML specialists who wish to simulate attacks on AI systems, uncover vulnerabilities, and enhance the robustness of deployed AI models.
By the end of this training, participants will be able to:
- Simulate real-world threats to machine learning models.
- Generate adversarial examples to test model robustness.
- Assess the attack surface of AI APIs and pipelines.
- Design red teaming strategies for AI deployment environments.
Securing Edge AI and Embedded Intelligence
14 hours: This instructor-led, live training in Macau (online or onsite) is aimed at intermediate-level engineers and security professionals who wish to secure AI models deployed at the edge against threats such as tampering, data leakage, adversarial inputs, and physical attacks.
By the end of this training, participants will be able to:
- Identify and assess security risks in edge AI deployments.
- Apply tamper resistance and encrypted inference techniques.
- Harden edge-deployed models and secure data pipelines.
- Implement threat mitigation strategies specific to embedded and constrained systems.
Securing AI Models: Threats, Attacks, and Defenses
14 hours: This instructor-led, live training in Macau (online or onsite) is aimed at intermediate-level machine learning and cybersecurity professionals who wish to understand and mitigate emerging threats against AI models, using both conceptual frameworks and hands-on defenses such as robust training and differential privacy.
By the end of this training, participants will be able to:
- Identify and classify AI-specific threats such as adversarial attacks, inversion, and poisoning.
- Simulate attacks and test models using tools such as the Adversarial Robustness Toolbox (ART).
- Apply practical defenses, including adversarial training, noise injection, and privacy-preserving techniques.
- Design threat-aware model evaluation strategies for production environments.