Learn about Adversarial Machine Learning, including Poison and Evasion Attacks, and discover effective ways to address and prevent them.
These techniques have multiple applications, the most prevalent being the disruption or impairment of standard machine learning models.
Adversarial machine learning is a technique used in the field of machine learning to deceive models through malicious input. Machine learning models can be complex, and at times, we may not fully comprehend how they arrive at predictions. This creates potential vulnerabilities that may be exploited by attackers. They can manipulate the model into making erroneous predictions or revealing confidential information.
Additionally, poisoned data could corrupt models without our knowledge. The study of adversarial machine learning aims to understand and mitigate these vulnerabilities.
What is a Poison Attack?
A Poison Attack targets the data used to train a model. Essentially, the attacker alters existing training data or introduces falsely labeled data, causing the trained model to make incorrect predictions on legitimate, correctly labeled inputs.
For instance, an attacker could relabel fraud cases as non-fraudulent. The attacker may do this selectively for certain fraud cases so they can later commit fraud without being detected by the system.
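The fraud scenario above can be sketched in a few lines. The dataset, model, and the fraction of labels flipped are all hypothetical choices for illustration; the point is simply that relabeling fraud cases as non-fraudulent before training degrades the model's ability to catch fraud on clean data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical fraud dataset: class 1 = fraud, class 0 = legitimate.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison attack: relabel half of the fraud cases as non-fraudulent.
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
fraud_idx = np.where(y_train == 1)[0]
flipped = rng.choice(fraud_idx, size=len(fraud_idx) // 2, replace=False)
y_poisoned[flipped] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Compare how much fraud each model catches on clean test data.
fraud_test = X_test[y_test == 1]
print("clean model fraud recall:   ", clean_model.predict(fraud_test).mean())
print("poisoned model fraud recall:", poisoned_model.predict(fraud_test).mean())
```

The poisoned model detects noticeably less fraud on the same clean test set, which is exactly the attacker's goal: later fraudulent activity resembling the flipped examples slips through undetected.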
What is an Evasion Attack?
Evasion attacks target the model by manipulating input data so that it appears genuine yet produces inaccurate predictions. It is important to note that the attacker modifies the data used at prediction time, not the data used to train the model.
For instance, in a website attack, an attacker can conceal their true origin address by using a VPN. If the attacker’s actual location is risky, the model would…
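A minimal sketch of the evasion idea, under assumed conditions: a trained linear classifier and an FGSM-style perturbation, where the attacker nudges a correctly flagged input against the sign of the model's weights until the prediction flips. The dataset, step size, and step cap are all illustrative choices, not a real attack tool.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical detector: class 1 = "risky" input, class 0 = "benign".
X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Start from an input the model correctly flags as risky.
x = X[y == 1][0]
assert model.predict([x])[0] == 1

# Evasion: for a linear model, the gradient of the decision score with
# respect to the input is the weight vector, so stepping against its sign
# lowers the "risky" score fastest per unit of change.
w = model.coef_[0]
eps = 0.1  # illustrative per-step perturbation budget
x_adv = x.copy()
for _ in range(100):  # cap the steps so the loop always terminates
    if model.predict([x_adv])[0] == 0:
        break
    x_adv = x_adv - eps * np.sign(w)

print("original prediction:", model.predict([x])[0])
print("evasive prediction: ", model.predict([x_adv])[0])
```

The perturbed input is numerically close to the original (it still "appears genuine"), yet the model now labels it benign; this mirrors the VPN example, where the attacker disguises the one feature the model relies on.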