
Adversarial machine learning


Adversarial machine learning[1] is a field of study focused on the interaction between machine learning systems and adversaries who seek to harm them. The discipline, which has grown through the contributions of numerous researchers, explores how malicious entities can exploit and manipulate machine learning processes, often with the aim of evading detection or causing misclassification. It covers a wide array of attack types, from obfuscating spam messages to manipulating autonomous vehicle systems. Importantly, the field is not solely concerned with identifying and understanding these threats; it also develops and implements robust defense strategies, including multi-step countermeasures, noise detection methods, and techniques for evaluating the impact of attacks. Ongoing research in this area is pivotal for ensuring the security[2] and reliability of machine learning systems.

Term definitions
1. machine learning. Machine learning, a term coined by Arthur Samuel in 1959, is a field of study that originated from the pursuit of artificial intelligence. It employs techniques that allow computers to improve their performance over time through experience. This learning process often mimics the human cognitive process. Machine learning applies to various areas such as natural language processing, computer vision, and speech recognition. It also finds use in practical sectors like agriculture, medicine, and business for predictive analytics. Theoretical frameworks such as the Probably Approximately Correct learning and concepts like data mining and mathematical optimization form the foundation of machine learning. Specialized techniques include supervised and unsupervised learning, reinforcement learning, and dimensionality reduction, among others.
2. security. Security, as a term, originates from the Latin 'securus,' meaning free from worry. It is a concept that refers to the state of being protected from potential harm or threats. This protection can apply to a wide range of referents, including individuals, groups, institutions, or even ecosystems. Security is closely linked with the environment of the referent and can be influenced by different factors that can make it either beneficial or hostile. Various methods can be employed to ensure security, including protective and warning systems, diplomacy, and policy implementation. The effectiveness of these security measures can vary, and perceptions of security can differ widely. Important security concepts include access control, assurance, authorization, cipher, and countermeasures. The United Nations also plays a significant role in global security, focusing on areas like soil health and food security.

Adversarial machine learning is the study of attacks on machine learning algorithms, and of the defenses against such attacks. A May 2020 survey found that practitioners report a pressing need for better protection of machine learning systems in industrial applications.

Most machine learning techniques are designed for specific problem sets under the assumption that the training and test data are drawn from the same statistical distribution, i.e., that they are independent and identically distributed (IID). In practical high-stakes applications, however, this assumption is frequently and dangerously violated: users may deliberately supply fabricated data that breaks it.
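As a toy illustration of how an adversary can exploit this gap, the following sketch (with hypothetical weights and inputs, not drawn from any real system) applies the fast-gradient-sign idea to a linear classifier: nudging every feature by a small epsilon in the direction that increases the loss is enough to flip the prediction.

```python
# Toy evasion attack on a linear classifier f(x) = sign(w.x + b).
# All numbers are hypothetical; the attacker is assumed to know w.
w = [2.0, -1.0, 0.5]          # model weights
b = -0.5                      # model bias
x = [0.4, 0.2, 0.3]           # a legitimate input

def predict(w, b, x):
    """Return the classifier's decision (+1 or -1) for input x."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

# Clean input: score = 0.8 - 0.2 + 0.15 - 0.5 = 0.25, so class +1.
assert predict(w, b, x) == 1

# Adversarial example: shift each feature by eps against the sign of its
# weight (the fast gradient sign method, specialized to a linear model).
eps = 0.2
x_adv = [xi - eps * (1 if wi >= 0 else -1) for wi, xi in zip(w, x)]

# Perturbed score = 0.25 - eps * (2.0 + 1.0 + 0.5) = -0.45: prediction flips.
assert predict(w, b, x_adv) == -1
```

Each feature moved by at most 0.2, yet the decision changed, which is precisely the kind of behavior the IID assumption rules out of scope.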

The most common attacks in adversarial machine learning are evasion attacks, data poisoning attacks, Byzantine attacks, and model extraction.
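Data poisoning, in particular, targets the training phase rather than inference. The sketch below (hypothetical data, a deliberately simple nearest-centroid classifier) shows how injecting a handful of mislabeled training points drags one class centroid across the decision boundary and flips a test prediction.

```python
# Toy data-poisoning attack against a nearest-centroid classifier.
# All points and labels are hypothetical.

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(x, c_pos, c_neg):
    """Assign x to whichever class centroid is closer (squared distance)."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return "+1" if dist(x, c_pos) < dist(x, c_neg) else "-1"

clean_pos = [[1.0, 1.0], [1.2, 0.8]]      # legitimate positive examples
clean_neg = [[-1.0, -1.0], [-0.8, -1.2]]  # legitimate negative examples
x_test = [0.3, 0.3]                        # clearly nearer the positive class

# Trained on clean data, the model classifies x_test as +1.
assert classify(x_test, centroid(clean_pos), centroid(clean_neg)) == "+1"

# Poisoning: the attacker contributes points labeled "-1" that actually sit
# deep in positive territory, pulling the negative centroid toward x_test.
poisoned_neg = clean_neg + [[2.0, 2.0], [2.0, 2.0], [2.0, 2.0]]
assert classify(x_test, centroid(clean_pos), centroid(poisoned_neg)) == "-1"
```

The attack needs no access to the model internals, only the ability to contribute labeled training data, which is why poisoning is a practical concern for systems trained on user-supplied input.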
