Deep-pwning | Metasploit for machine learning
Deep-pwning
Deep-pwning is a lightweight framework for experimenting with machine learning models with the goal of evaluating their robustness against a motivated adversary.
Note that deep-pwning in its current state is nowhere close to maturity or completion. It is meant to be experimented with, expanded upon, and extended by you. Only then can we help it truly become the go-to penetration testing toolkit for statistical machine learning models.
Background:
Researchers have found that it is surprisingly trivial to trick a machine learning model (classifier, clusterer, regressor etc.) into making objectively wrong decisions. This field of research is called Adversarial Machine Learning. It is not hyperbole to claim that any motivated attacker can bypass any machine learning system, given enough information and time. However, this issue is often overlooked when architects and engineers design and build machine learning systems. The consequences are worrying when these systems are put into use in critical scenarios, such as in the medical, transportation, financial, or security-related fields.
Hence, when one is evaluating the efficacy of applications using machine learning, their malleability in an adversarial setting should be measured alongside the system’s precision and recall.
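To make the point concrete, the sketch below uses the fast gradient sign method (FGSM) of Goodfellow et al., one of the simplest attacks of this kind. It is written against the current TensorFlow 2 Keras API rather than the TensorFlow 1.x graph code used in this repository, and the function name, model interface, and epsilon value are illustrative assumptions, not part of Deep-pwning:

```python
import tensorflow as tf

def fgsm_perturb(model, image, label, epsilon=0.1):
    """Craft an adversarial image with the fast gradient sign method (FGSM).

    `model` is any differentiable Keras-style classifier that outputs class
    probabilities; `epsilon` bounds the per-pixel perturbation. Names and the
    epsilon value here are illustrative assumptions.
    """
    image = tf.convert_to_tensor(image, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = tf.keras.losses.sparse_categorical_crossentropy(label, prediction)
    # Move every pixel a small step in the direction that increases the loss.
    gradient = tape.gradient(loss, image)
    adversarial = image + epsilon * tf.sign(gradient)
    # Clip back to the valid pixel range so the result is still an image.
    return tf.clip_by_value(adversarial, 0.0, 1.0)
```

A perturbation this small is often imperceptible to a human observer, yet it is frequently enough to flip the classifier's prediction.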
This framework is built on top of TensorFlow, and many of the included examples in this repository are modified TensorFlow examples obtained from the TensorFlow GitHub repository.
All of the included examples and code implement deep neural networks, but they can be used to generate adversarial images for similarly tasked classifiers that are not implemented with deep neural networks. This is because of the phenomenon of ‘transferability’ in machine learning, which Papernot et al. expounded upon expertly in this paper. It means that adversarial samples crafted against a DNN model A may also fool a distinctly structured DNN model B, as well as a non-DNN classifier such as an SVM model C.
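As a rough illustration of how transferability could be quantified, the sketch below crafts adversarial samples against one model and measures how many of them also fool a second, independently trained model. It reuses the hypothetical fgsm_perturb helper above; model_a, model_b, and the data arguments are assumptions for illustration, not Deep-pwning APIs:

```python
import tensorflow as tf

def transfer_rate(model_a, model_b, images, labels, epsilon=0.1):
    """Fraction of adversarial images crafted against model_a that also
    fool model_b -- a crude estimate of transferability."""
    fooled = 0
    for img, lbl in zip(images, labels):
        # Craft the adversarial sample against model_a only.
        adv = fgsm_perturb(model_a, img[None, ...], lbl, epsilon)
        # Check whether the unrelated model_b is fooled by the same sample.
        pred_b = int(tf.argmax(model_b(adv), axis=1)[0])
        if pred_b != int(lbl):
            fooled += 1
    return fooled / len(images)
```

Note that this crude count also includes images model_b would have misclassified anyway; a more careful measurement would filter those out first.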
Components:
Deep-pwning is modularized into several components to minimize code repetition. Because of the vastly different nature of potential classification tasks, the current iteration of the code is optimized for classifying images and phrases (using word vectors).
These are the code modules that make up the current iteration of Deep-pwning:
- Drivers: The drivers are the main execution point of the code. This is where you can tie the different modules and components together, and where you can inject more customizations into the adversarial generation processes.
- Models: This is where the actual machine learning model implementations are located. For example, the provided lenet5 model definition is located in the model() function within lenet5.py; a sketch in the spirit of that function follows this list.
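As a point of reference, a LeNet-5-style classifier in the spirit of that model() function might look like the Keras sketch below. The repository's own lenet5.py is written against the lower-level TensorFlow 1.x API, so treat this only as an architectural illustration; the layer sizes follow the classic LeNet-5 layout and may differ from the repository's implementation:

```python
import tensorflow as tf

def model(num_classes=10):
    """A LeNet-5-style convolutional classifier for 28x28 grayscale digits.

    Layer sizes follow the classic LeNet-5 layout; the repository's own
    lenet5.py may differ in details such as activations and padding.
    """
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(6, 5, padding="same", activation="relu",
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(16, 5, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(120, activation="relu"),
        tf.keras.layers.Dense(84, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
```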