My JupyterCon
talk on post-exploitation. Learn a bit about the gooey center of jupyter deployments: https://www.youtube.com/watch?v=EujDolCutI8
#jupyter #MLsec #python #infosec
@noplasticshower yes, this is the argument for open peer review - but will it scale? #MLSEC as a field may be small enough for now, but for all the maths and computer sciences?
@noplasticshower In my experience, these are not overlapping sets - but you may be right that that's where all the cool #MLSEC papers are
It's been a while; so re-posting one of (IMO) the best #MLsec papers of all times: https://arxiv.org/abs/1701.04739
First time I've seen the use of the term MLSecOps
https://www.helpnetsecurity.com/2022/12/18/protect-ai-funding/
Just discovered this opportunity for folks finding harm and abuse in ChatGPT. Two weeks left in the ChatGPT feedback contest. 20 winners of $500 credits each.
https://cdn.openai.com/chatgpt/ChatGPT_Feedback_Contest_Rules.pdf
#ai #ethicalai #TrustworthyAI #MLsec #chatgpt
Just discovered this opportunity for folks finding harm and abuse in ChatGPT. Two weeks left in the ChatGPT feedback contest. 20 winners of $500 credits each.
https://cdn.openai.com/chatgpt/ChatGPT_Feedback_Contest_Rules.pdf
#ai #ethicalai #TrustworthyAI #MLsec #chatgpt
Good call out in the OpenAI Tokenizer docs. Keeping this in my back pocket:
“we want to be careful about accidentally encoding special tokens, since they can be used to trick a model into doing something we don't want it to do.”
[Paper of the day][#7] How to bypass #Machine #Learning (ML)-based #malware detectors with adversarial ML. We show how we bypassed all malware detectors in the #MLSEC competition by embedding malware samples into a benign-looking #dropper. We show how this strategy also bypass real #AVs detection.
Academic paper: https://dl.acm.org/doi/10.1145/3375894.3375898
Archived version: https://secret.inf.ufpr.br/papers/roots_shallow.pdf
Dropper source code: https://github.com/marcusbotacin/Dropper
#machine #learning #malware #MLsec #dropper #avs
The one Jax abstraction (or maybe it was a DNN library based on it?) that I really loved on sight was the way the DNN function explicitly took the network weights as a parameter, and didn't have this implication that they were bound up with the individual layers. If you present DNNs like that, then the "it's just optimization" view on both training and adversarial examples becomes a lot clearer.
Infosec Jupyterthon is coming up in a few days. #mlsec
https://www.microsoft.com/en-us/security/blog/2022/11/22/join-us-at-infosec-jupyterthon-2022/
Interested in privacy attacks on machine learning? Start here https://github.com/stratosphereips/awesome-ml-privacy-attacks #ml #privacy #mlattacks #mlsec #machinelearning #security
#ml #privacy #mlattacks #MLsec #machinelearning #security
The Conference on Applied Machine Learning for Information Security (#camlis) just posted the abstracts and slides from last month's conference. #mlsec
https://www.camlis.org/2022-conference
Seeing a lot of deepfake detection work come out. My worry, as ever, is that people believe what they see more than they believe what they know. Even if you can prove that a particular video is deepfaked, you're relying on people to care, and to be able to ignore the fact that they "saw" a person doing a thing.
You can't tech your way out of social problems.
Vision transformers seem to learn smoother features than CNN models, meaning the kinds of noisy, pixel-level perturbations that can often fool CNNs don't work nearly as well.
Leave Twitter just because it keeps failing at random in completely unpredictable ways, the decision-making process is utterly opaque resisting any rational explanation, and it's occasionally deeply racist for no obvious reason?
My dude, I work in machine learning.
@Mkemka my personal opinion is that accountability and ethics are fundamentally social issues, so technical tools are never going to be a complete solution, and there's a pretty good argument that applying them after the fact is too late. Being able to say "that model is broken" is better than not knowing it, but better still to just build a good model (my expertise is much more on the former, alas). But agreed that there's a lot of overlap between #MLsec and #AIethics, especially w/r/t tooling.
A super interesting #MLSec use case for adversarial examples: if I'm understanding correctly, applying the right perturbation to a starting image can cause image-to-image models to ignore it and just generate based on the prompt.