Securing AI Models: Threats, Attacks, and Defenses Training
Securing AI models is the discipline of defending machine learning systems against model-specific threats such as adversarial inputs, data poisoning, inversion attacks, and privacy leakage.
This instructor-led, live training (online or onsite) is aimed at intermediate-level machine learning and cybersecurity professionals who wish to understand and mitigate emerging threats against AI models, using both conceptual frameworks and hands-on defenses such as robust training and differential privacy.
By the end of this training, participants will be able to:
- Identify and classify AI-specific threats such as adversarial attacks, inversion, and poisoning.
- Use tools such as the Adversarial Robustness Toolbox (ART) to simulate attacks and test models.
- Apply practical defenses including adversarial training, noise injection, and privacy-preserving techniques.
- Design threat-aware model evaluation strategies in production environments.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Course Outline
Introduction to AI Threat Modeling
- What makes AI systems vulnerable?
- The AI attack surface compared with traditional systems
- Key attack vectors: data, model, output, and interface layers
Adversarial Attacks on AI Models
- Understanding adversarial examples and perturbation techniques
- White-box vs. black-box attacks
- FGSM, PGD, and DeepFool methods
- Visualizing and crafting adversarial samples
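To give a feel for the FGSM technique listed above, here is a minimal sketch against a hand-rolled logistic model. The weights, bias, input, and epsilon are invented for illustration; the course labs use real frameworks, but the core idea is the same: perturb the input in the direction of the sign of the loss gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """One FGSM step: move x along the sign of the loss gradient w.r.t. x."""
    p = sigmoid(w @ x + b)            # model confidence for class 1
    grad_x = (p - y_true) * w         # d(BCE loss)/dx for a logistic model
    return x + eps * np.sign(grad_x)  # L_inf-bounded perturbation

# Illustrative model and input (not from the course materials)
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.1, -0.3])

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.1)

# The perturbed point lowers the model's confidence in the true class.
print(sigmoid(w @ x + b) > sigmoid(w @ x_adv + b))  # → True
```

Note that the perturbation is bounded: no feature moves by more than eps, which is what makes such examples hard to spot by eye in image data.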
Model Inversion and Privacy Leakage
- Inferring training data from model outputs
- Membership inference attacks
- Privacy risks in classification and generative models
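The membership inference attack mentioned above can be sketched with a simple confidence-thresholding attacker. The confidence distributions below are synthetic assumptions standing in for an overfit model, which tends to be more confident on training members than on unseen points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic confidences: members get systematically higher scores
member_conf = rng.uniform(0.85, 1.00, size=1000)     # on training data
nonmember_conf = rng.uniform(0.50, 0.95, size=1000)  # on held-out data

threshold = 0.9  # attacker guesses "member" when confidence exceeds this

tpr = np.mean(member_conf > threshold)     # members correctly flagged
fpr = np.mean(nonmember_conf > threshold)  # non-members wrongly flagged
advantage = tpr - fpr                      # > 0 means membership leaks

print(tpr > fpr)  # → True
```

The gap `tpr - fpr` is a standard way to quantify leakage; defenses covered later (regularization, differential privacy) aim to push it toward zero.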
Data Poisoning and Backdoor Injection
- How poisoned data influences model behavior
- Trigger-based backdoors and Trojan attacks
- Detection and sanitization strategies
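A toy label-flipping sketch shows how poisoned training data shifts what a model learns. The 1-D data and nearest-centroid classifier are illustrative assumptions; the effect, a class centroid dragged toward the other class, is the mechanism behind many poisoning attacks:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 200),   # class 0 cluster
                    rng.normal(+2.0, 1.0, 200)])  # class 1 cluster
y = np.array([0] * 200 + [1] * 200)

def fit_and_score(x, y_train, y_eval):
    """Fit class centroids on y_train labels; score against y_eval labels."""
    c0, c1 = x[y_train == 0].mean(), x[y_train == 1].mean()
    pred = (np.abs(x - c1) < np.abs(x - c0)).astype(int)
    return c0, c1, float(np.mean(pred == y_eval))

_, c1_clean, clean_acc = fit_and_score(x, y, y)

y_poison = y.copy()
y_poison[:100] = 1   # attacker flips half of class 0's training labels
_, c1_poison, poisoned_acc = fit_and_score(x, y_poison, y)

# Poisoning drags centroid 1 toward class 0 and hurts clean accuracy.
print(c1_poison < c1_clean)  # → True
```

Detection strategies from this module (e.g. outlier and label-consistency checks) target exactly this kind of distribution shift in the training set.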
Robustness and Defense Techniques
- Adversarial training and data augmentation
- Gradient masking and input preprocessing
- Model smoothing and regularization techniques
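As a concrete instance of the input-preprocessing defenses above, here is a feature-squeezing sketch (bit-depth reduction): quantizing inputs coarsely can erase small adversarial perturbations. The values and bit depth are illustrative assumptions:

```python
import numpy as np

def squeeze_bit_depth(x, bits):
    """Quantize inputs in [0, 1] down to 2**bits - 1 levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

x = np.array([0.6, 0.3, 0.9])                  # clean input
x_adv = x + np.array([0.04, -0.04, 0.04])      # small adversarial perturbation

# After coarse quantization, the perturbed input collapses back onto
# the same quantized values as the clean input.
print(np.allclose(squeeze_bit_depth(x_adv, bits=2),
                  squeeze_bit_depth(x, bits=2)))  # → True
```

The trade-off discussed in this module applies here: squeezing also discards legitimate signal, so the bit depth must balance robustness against clean accuracy.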
Privacy-Preserving AI Defenses
- Introduction to differential privacy
- Noise injection and privacy budgets
- Federated learning and secure aggregation
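The noise-injection and privacy-budget ideas above come together in the classic Laplace mechanism for a differentially private count query. The dataset and epsilon are illustrative assumptions; a count query has sensitivity 1, since adding or removing one record changes it by at most 1:

```python
import numpy as np

def dp_count(values, predicate, epsilon, rng):
    """Differentially private count via the Laplace mechanism."""
    true_count = sum(predicate(v) for v in values)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)  # scale = sensitivity / epsilon
    return true_count + noise

rng = np.random.default_rng(42)
ages = [23, 35, 41, 29, 52, 38, 47, 31]  # illustrative records

# Smaller epsilon = tighter privacy budget = noisier answers.
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0, rng=rng)
```

Each query spends part of the privacy budget; answering many queries at the same epsilon accumulates privacy loss, which is why budget accounting is covered alongside the mechanism itself.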
Applied AI Security
- Threat-aware model evaluation and deployment
- Using ART (Adversarial Robustness Toolbox) in real applications
- Industry case studies: real-world vulnerabilities and mitigations
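A threat-aware evaluation reports robust accuracy next to clean accuracy and gates deployment on both. The sketch below uses a fixed linear scorer and worst-of-k random L_inf perturbations; the model, eps, and thresholds are illustrative assumptions, not an ART API example:

```python
import numpy as np

rng = np.random.default_rng(7)
w = np.array([1.5, -1.0])                    # an already-trained linear scorer

X = rng.normal(0.0, 1.0, size=(500, 2))
y = (X @ w > 0).astype(int)                  # ground-truth labels
X = X + rng.normal(0.0, 0.1, size=X.shape)   # measurement noise on inputs

def clean_accuracy(X, y):
    return float(np.mean(((X @ w > 0).astype(int)) == y))

def robust_accuracy(X, y, eps=0.2, k=10):
    """Fraction of inputs classified correctly on the clean point AND on
    every one of k random corners of its eps-sized L_inf ball."""
    correct = ((X @ w > 0).astype(int)) == y
    for _ in range(k):
        delta = eps * rng.choice([-1.0, 1.0], size=X.shape)
        correct &= ((X + delta) @ w > 0).astype(int) == y
    return float(np.mean(correct))

clean, robust = clean_accuracy(X, y), robust_accuracy(X, y)
deploy = clean >= 0.9 and robust >= 0.7      # gate deployment on both metrics
```

Random corners only lower-bound the true adversary; in the labs, gradient-based attacks (e.g. PGD via ART) replace the random sampling for a stronger robustness estimate.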
Summary and Next Steps
Requirements
- An understanding of machine learning workflows and model training
- Familiarity with Python and common ML frameworks such as PyTorch or TensorFlow
- Knowledge of basic security or threat modeling concepts is helpful
Audience
- Machine learning engineers
- Cybersecurity analysts
- AI researchers and model validation teams
Related Courses
AI Governance, Compliance, and Security for Enterprise Leaders
14 hours: This instructor-led, live training in Macau (online or onsite) is aimed at intermediate-level enterprise leaders who wish to understand how to govern and secure AI systems responsibly and in compliance with emerging global frameworks such as the EU AI Act, GDPR, ISO/IEC 42001, and the U.S. Executive Order on AI.
By the end of this training, participants will be able to:
- Understand the legal, ethical, and regulatory risks of using AI across departments.
- Interpret and apply major AI governance frameworks (EU AI Act, NIST AI RMF, ISO/IEC 42001).
- Establish security, auditing, and oversight policies for AI deployment in the enterprise.
- Develop procurement and usage guidelines for third-party and in-house AI systems.
AI Risk Management and Security in the Public Sector
7 hours: Artificial Intelligence (AI) introduces new dimensions of operational risk, governance challenges, and cybersecurity exposure for government agencies and departments.
This instructor-led, live training (online or onsite) is aimed at public sector IT and risk professionals with limited prior experience in AI who wish to understand how to evaluate, monitor, and secure AI systems within a government or regulatory context.
By the end of this training, participants will be able to:
- Interpret key risk concepts related to AI systems, including bias, unpredictability, and model drift.
- Apply AI-specific governance and auditing frameworks such as NIST AI RMF and ISO/IEC 42001.
- Recognize cybersecurity threats targeting AI models and data pipelines.
- Establish cross-departmental risk management plans and policy alignment for AI deployment.
Format of the Course
- Interactive lecture and discussion of public sector use cases.
- AI governance framework exercises and policy mapping.
- Scenario-based threat modeling and risk evaluation.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Building Secure and Responsible LLM Applications
14 hours: This instructor-led, live training in Macau (online or onsite) is aimed at intermediate-level to advanced-level AI developers, architects, and product managers who wish to identify and mitigate risks associated with LLM-powered applications, including prompt injection, data leakage, and unfiltered output, while incorporating security controls like input validation, human-in-the-loop oversight, and output guardrails.
By the end of this training, participants will be able to:
- Understand the core vulnerabilities of LLM-based systems.
- Apply secure design principles to LLM app architecture.
- Use tools such as Guardrails AI and LangChain for validation, filtering, and safety.
- Integrate techniques like sandboxing, red teaming, and human-in-the-loop review into production-grade pipelines.
Introduction to AI Security and Risk Management
14 hours: This instructor-led, live training in Macau (online or onsite) is aimed at beginner-level IT security, risk, and compliance professionals who wish to understand foundational AI security concepts, threat vectors, and global frameworks such as NIST AI RMF and ISO/IEC 42001.
By the end of this training, participants will be able to:
- Understand the unique security risks introduced by AI systems.
- Identify threat vectors such as adversarial attacks, data poisoning, and model inversion.
- Apply foundational governance models such as the NIST AI Risk Management Framework.
- Align AI usage with emerging standards, compliance guidelines, and ethical principles.
Privacy-Preserving Machine Learning
14 hours: This instructor-led, live training in Macau (online or onsite) is aimed at advanced-level professionals who wish to implement and evaluate techniques such as federated learning, secure multiparty computation, homomorphic encryption, and differential privacy in real-world machine learning pipelines.
By the end of this training, participants will be able to:
- Understand and compare key privacy-preserving techniques in ML.
- Implement federated learning systems using open-source frameworks.
- Apply differential privacy for safe data sharing and model training.
- Use encryption and secure computation techniques to protect model inputs and outputs.
Red Teaming AI Systems: Offensive Security for ML Models
14 hours: This instructor-led, live training in Macau (online or onsite) is aimed at advanced-level security professionals and ML specialists who wish to simulate attacks on AI systems, uncover vulnerabilities, and enhance the robustness of deployed AI models.
By the end of this training, participants will be able to:
- Simulate real-world threats to machine learning models.
- Generate adversarial examples to test model robustness.
- Assess the attack surface of AI APIs and pipelines.
- Design red teaming strategies for AI deployment environments.
Securing Edge AI and Embedded Intelligence
14 hours: This instructor-led, live training in Macau (online or onsite) is aimed at intermediate-level engineers and security professionals who wish to secure AI models deployed at the edge against threats such as tampering, data leakage, adversarial inputs, and physical attacks.
By the end of this training, participants will be able to:
- Identify and assess security risks in edge AI deployments.
- Apply tamper resistance and encrypted inference techniques.
- Harden edge-deployed models and secure data pipelines.
- Implement threat mitigation strategies specific to embedded and constrained systems.