Course Description
Learn how to protect the AI systems that have become essential to critical sectors, where growing security risks threaten model accuracy, data integrity, and user trust.
In this course, you will learn practical concepts and methods for detecting security threats and understanding defense strategies. You will work through hands-on labs covering data poisoning attacks, adversarial attacks, supply chain protection, and model security in sectors such as healthcare, finance, and transportation.
You will also explore the NIST AI Risk Management Framework, advanced privacy techniques, and strategies for improving transparency and fairness in AI models. In addition, you will practice with modern tools and techniques such as LIME for model interpretability, differential privacy, and adversarial training to make models more robust.
This course is ideal for engineers, researchers, and decision-makers who aim to build secure and trustworthy AI systems.
What you'll learn
By the end of this course you will be able to:
- Describe the stages of the AI lifecycle and the associated security risks.
- Compare cloud security strategies in AWS, Azure, and GCP.
- Analyze defense strategies against data poisoning.
- Evaluate the impact of different data poisoning scenarios on model security.
- Implement dirty-label and clean-label backdoor attacks and compare their outcomes.
- Explain defense strategies in machine learning against adversarial attacks.
- Build a CNN model and apply FGSM (Fast Gradient Sign Method) and PGD (Projected Gradient Descent) attacks; see the sketch after this list.
- Conduct adversarial training to enhance model robustness.
- Compare defense strategies across different sectors (healthcare, finance, transportation, retail, government).
- Explain the NIST AI Risk Management Framework (AI RMF).
- Interpret the core principles of trustworthy AI according to NIST.
- Analyze privacy challenges in AI and protections such as differential privacy; see the sketch after this list.
- Apply tools for data analysis and bias detection.
- Evaluate bias mitigation strategies using fairness tools.
- Explain accountability challenges in AI systems.
- Apply tools such as LIME and decision trees to interpret model decisions; see the LIME sketch after this list.
- Explain defense strategies in federated learning.
- Differentiate between types of model theft attacks.
- Evaluate strategies for detecting and protecting against model theft.
- Analyze risks related to data, models, software, infrastructure, and hardware components.
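To give a flavor of the adversarial-attack labs, here is a minimal sketch of FGSM and PGD in PyTorch. The function names and hyperparameters are illustrative, not the course's lab code; it assumes a trained classifier `model` and inputs `x` scaled to [0, 1].

```python
# Illustrative sketch only -- not the course's lab code.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # FGSM: one step of size epsilon in the direction of the sign of the
    # loss gradient with respect to the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, x, y, epsilon=0.03, alpha=0.01, steps=10):
    # PGD: iterated FGSM steps of size alpha, projected back into the
    # epsilon-ball around the original input after every step.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon).clamp(0, 1)
    return x_adv
```

In adversarial training, the same perturbed batches are fed back into the training loop so the model learns to resist them.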
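The privacy material includes differential privacy; below is a minimal sketch of the classic Laplace mechanism. The query and parameter values are illustrative only.

```python
# Illustrative sketch only -- the records and epsilon are placeholders.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    # Laplace mechanism: add noise with scale sensitivity / epsilon so the
    # released answer is epsilon-differentially private for this query.
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(scale=sensitivity / epsilon)

records = np.array([34, 45, 29, 52, 41])
# A counting query changes by at most 1 when one record changes
# (sensitivity 1); smaller epsilon means more noise and stronger privacy.
print(laplace_mechanism(len(records), sensitivity=1.0, epsilon=0.5))
```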
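For the interpretability labs, here is a minimal LIME sketch for a tabular classifier, assuming scikit-learn and the `lime` package; the dataset and model are placeholders rather than course materials.

```python
# Illustrative sketch only -- dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
# Explain one prediction: LIME fits a simple local surrogate model around
# the sample and reports each feature's contribution to the predicted class.
explanation = explainer.explain_instance(data.data[0], model.predict_proba)
print(explanation.as_list())
```

The printed list shows which features pushed this one prediction up or down, which is the kind of per-decision evidence the accountability material builds on.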
Requirements
Basic knowledge of Artificial Intelligence and Machine Learning.
Familiarity with the Python programming language.