Name: NICOLAS PAPERNOT
Affiliation: Penn State University, University Park, USA
Papers: 58
Collaborators: 132
Citations: 1932
PageRank: 87.62
Referers: 3425
Referees: 882
References: 498
Title | Citations | PageRank | Year
Unrolling SGD: Understanding Factors Influencing Machine Unlearning | 0 | 0.34 | 2022
A Zest of LIME: Towards Architecture-Independent Model Distances | 0 | 0.34 | 2022
Adversarial examples for network intrusion detection systems. | 0 | 0.34 | 2022
Bad Characters: Imperceptible NLP Attacks | 0 | 0.34 | 2022
Hyperparameter Tuning with Renyi Differential Privacy | 0 | 0.34 | 2022
On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning | 0 | 0.34 | 2022
Is Fairness Only Metric Deep? Evaluating and Addressing Subgroup Gaps in Deep Metric Learning | 0 | 0.34 | 2022
Increasing the Cost of Model Extraction with Calibrated Proof of Work | 0 | 0.34 | 2022
Markpainting: Adversarial Machine Learning Meets Inpainting | 0 | 0.34 | 2021
Chasing Your Long Tails: Differentially Private Prediction in Health Care Settings | 1 | 0.37 | 2021
CaPC Learning: Confidential and Private Collaborative Learning | 0 | 0.34 | 2021
Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning | 1 | 0.35 | 2021
Accelerating Symbolic Analysis for Android Apps | 0 | 0.34 | 2021
Entangled Watermarks as a Defense against Model Extraction | 0 | 0.34 | 2021
Tempered Sigmoid Activations For Deep Learning With Differential Privacy | 0 | 0.34 | 2021
Proof-of-Learning: Definitions and Practice | 3 | 0.51 | 2021
Sponge Examples: Energy-Latency Attacks on Neural Networks | 1 | 0.36 | 2021
On the Robustness of Cooperative Multi-Agent Reinforcement Learning | 4 | 0.45 | 2020
Neighbors From Hell: Voltage Attacks Against Deep Learning Accelerators on Multi-Tenant FPGAs | 1 | 0.41 | 2020
Third International Workshop on Dependable and Secure Machine Learning – DSML 2020 | 0 | 0.34 | 2020
Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations | 0 | 0.34 | 2020
Thieves on Sesame Street! Model Extraction of BERT-based APIs | 1 | 0.35 | 2020
Analyzing and Improving Representations with the Soft Nearest Neighbor Loss. | 1 | 0.35 | 2019
On Evaluating Adversarial Robustness. | 13 | 0.45 | 2019
How Relevant Is the Turing Test in the Age of Sophisbots? | 1 | 0.36 | 2019
Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness. | 0 | 0.34 | 2019
MixMatch: A Holistic Approach to Semi-Supervised Learning. | 0 | 0.34 | 2019
Making machine learning robust against adversarial inputs. | 25 | 1.48 | 2018
Detection under Privileged Information. | 0 | 0.34 | 2018
Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning. | 19 | 0.61 | 2018
Adversarial Vision Challenge. | 0 | 0.34 | 2018
Adversarial Examples that Fool both Human and Computer Vision. | 0 | 0.34 | 2018
Adversarial Examples that Fool both Computer Vision and Time-Limited Humans. | 10 | 0.52 | 2018
Scalable Private Learning with PATE. | 17 | 0.66 | 2018
A Marauder's Map of Security and Privacy in Machine Learning. | 0 | 0.34 | 2018
A Marauder's Map of Security and Privacy in Machine Learning - An overview of current and future research directions for making machine learning secure and private. | 1 | 0.38 | 2018
The Space of Transferable Adversarial Examples. | 42 | 1.29 | 2017
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data. | 41 | 1.11 | 2017
Practical Black-Box Attacks against Machine Learning. | 365 | 11.37 | 2017
Extending Defensive Distillation. | 0 | 0.34 | 2017
Adversarial Attacks on Neural Network Policies. | 46 | 1.87 | 2017
Ensemble Adversarial Training: Attacks and Defenses. | 147 | 3.64 | 2017
Adversarial Examples For Malware Detection | 60 | 1.75 | 2017
On the (Statistical) Detection of Adversarial Examples. | 63 | 2.07 | 2017
On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches. | 4 | 0.40 | 2017
Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. | 152 | 5.55 | 2016
Crafting adversarial input sequences for recurrent neural networks | 44 | 1.68 | 2016
SoK: Security and Privacy in Machine Learning | 18 | 0.61 | 2016
Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples. | 41 | 3.55 | 2016
Machine Learning in Adversarial Settings. | 20 | 0.85 | 2016