Joe Lucas · @josephtlucas
61 followers · 120 posts · Server fosstodon.org

My JupyterCon talk on post-exploitation. Learn a bit about the gooey center of Jupyter deployments: youtube.com/watch?v=EujDolCutI

#jupyter #MLsec #python #infosec

Last updated 1 year ago

seniorfrosk · @seniorfrosk
37 followers · 83 posts · Server snabelen.no

@noplasticshower Yes, this is the argument for open peer review - but will it scale? MLsec as a field may be small enough for now, but what about all of maths and computer science?

#MLsec

Last updated 2 years ago

seniorfrosk · @seniorfrosk
37 followers · 83 posts · Server snabelen.no

@noplasticshower In my experience, these are not overlapping sets - but you may be right that that's where all the cool papers are

#MLsec

Last updated 2 years ago

Rich Harang · @rharang
561 followers · 597 posts · Server mastodon.social

It's been a while, so re-posting one of (IMO) the best papers of all time: arxiv.org/abs/1701.04739

#MLsec

Last updated 2 years ago

First time I've seen the use of the term MLSecOps

helpnetsecurity.com/2022/12/18

#infosec #MLsec

Last updated 2 years ago

Joe Lucas · @josephtlucas
44 followers · 103 posts · Server fosstodon.org

Just discovered this opportunity for folks finding harm and abuse in ChatGPT. Two weeks left in the ChatGPT feedback contest. 20 winners of $500 credits each.

cdn.openai.com/chatgpt/ChatGPT

#ai #ethicalai #TrustworthyAI #MLsec #chatgpt

Last updated 2 years ago

Joe Lucas · @josephtlucas
44 followers · 102 posts · Server fosstodon.org

Good call out in the OpenAI Tokenizer docs. Keeping this in my back pocket:

“we want to be careful about accidentally encoding special tokens, since they can be used to trick a model into doing something we don't want it to do.”

github.com/openai/tiktoken/blo
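
A minimal sketch of the guard the docs are describing (assumes tiktoken is installed; the input string is just an example):

```python
# Minimal sketch of tiktoken's special-token guard (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "user input containing <|endoftext|>"

# Default behavior: special tokens found in arbitrary text raise an error
# rather than being silently encoded as control tokens.
try:
    enc.encode(text)
except ValueError as err:
    print("rejected:", err)

# Opting in encodes the string as the real special token -- only safe for
# text you control, never for untrusted user input.
tokens = enc.encode(text, allowed_special={"<|endoftext|>"})
print(tokens)
```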

#ai #MLsec #openai

Last updated 2 years ago

Marcus Botacin · @MarcusBotacin
17 followers · 11 posts · Server infosec.exchange

[Paper of the day][#7] How to bypass ML-based detectors with adversarial ML. We show how we bypassed all of the malware detectors in the competition by embedding malware samples into a benign-looking dropper. We also show how this strategy bypasses real-world detectors.

Academic paper: dl.acm.org/doi/10.1145/3375894
Archived version: secret.inf.ufpr.br/papers/root
Dropper source code: github.com/marcusbotacin/Dropp
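
The intuition, as a toy sketch (the byte-histogram "detector" below is hypothetical and stands in for the competition models; it is not the paper's dropper):

```python
# Toy illustration of the evasion idea: a static detector that scores raw
# byte statistics, and a payload diluted inside a large benign carrier.
import numpy as np

def byte_histogram(data: bytes) -> np.ndarray:
    hist = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return hist / max(len(data), 1)

def toy_score(data: bytes, malicious_profile: np.ndarray) -> float:
    # Higher = byte distribution more similar to the known-malicious one.
    return float(byte_histogram(data) @ malicious_profile)

rng = np.random.default_rng(0)
payload = rng.integers(0, 256, 1_000, dtype=np.uint8).tobytes()
benign_carrier = b"A" * 100_000  # stands in for a legitimate-looking app

profile = byte_histogram(payload)
print("payload alone :", toy_score(payload, profile))
print("inside carrier:", toy_score(benign_carrier + payload, profile))
# The embedded payload's contribution to whole-file features shrinks, so a
# detector trained on file-level statistics scores the combined file far lower.
```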

#machinelearning #malware #MLsec #dropper #avs

Last updated 2 years ago

Rich Harang · @rharang
430 followers · 450 posts · Server mastodon.social

The one JAX abstraction (or maybe it was a DNN library built on it?) that I really loved on sight was the way the DNN function explicitly took the network weights as a parameter, rather than implying they were bound up with the individual layers. If you present DNNs like that, then the "it's just optimization" view on both training and adversarial examples becomes a lot clearer.
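
Roughly this shape (a minimal JAX sketch, not any particular library's API): the loss is an explicit function of (params, x, y), so training differentiates with respect to params and crafting an adversarial example differentiates with respect to x -- same machinery, different argument.

```python
# Minimal JAX sketch: weights are an explicit argument, so "training" and
# "adversarial example" are the same operation aimed at different inputs.
import jax
import jax.numpy as jnp

def predict(params, x):
    w, b = params
    return jnp.tanh(x @ w + b)

def loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

params = (jax.random.normal(jax.random.PRNGKey(0), (4, 1)), jnp.zeros(1))
x = jax.random.normal(jax.random.PRNGKey(1), (8, 4))
y = jnp.ones((8, 1))

# Training step: gradient w.r.t. argument 0 (the weights), descend.
g_params = jax.grad(loss, argnums=0)(params, x, y)

# Adversarial step: gradient w.r.t. argument 1 (the input), ascend.
g_x = jax.grad(loss, argnums=1)(params, x, y)
x_adv = x + 0.1 * jnp.sign(g_x)
```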

#MLsec

Last updated 2 years ago

Dave Wilburn :donor: · @DaveMWilburn
533 followers · 561 posts · Server infosec.exchange

A wild @drhyrum appeared!

#MLsec

Last updated 2 years ago

Dave Wilburn :donor: · @DaveMWilburn
533 followers · 561 posts · Server infosec.exchange

The Conference on Applied Machine Learning for Information Security (CAMLIS) just posted the abstracts and slides from last month's conference.
camlis.org/2022-conference

#camlis #MLsec

Last updated 2 years ago

Rich Harang · @rharang
356 followers · 399 posts · Server mastodon.social

Seeing a lot of deepfake detection work come out. My worry, as ever, is that people believe what they see more than they believe what they know. Even if you can prove that a particular video is deepfaked, you're relying on people to care, and to be able to ignore the fact that they "saw" a person doing a thing.

You can't tech your way out of social problems.

#MLsec

Last updated 2 years ago

Rich Harang · @rharang
352 followers · 393 posts · Server mastodon.social

Vision transformers seem to learn smoother features than CNN models, meaning the kinds of noisy, pixel-level perturbations that can often fool CNNs don't work nearly as well.

openreview.net/forum?id=lE7K4n
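
A toy intuition for why smoother features resist pixel noise (both "extractors" below are hypothetical stand-ins, not the paper's models):

```python
# A feature extractor that first averages its input over small "patches"
# (a crude stand-in for smoother ViT features) moves far less under
# FGSM-style pixel-level noise than one reading raw pixels directly.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))

def features_sharp(x):
    return W @ x                       # reacts to every pixel directly

def features_smooth(x):
    pooled = x.reshape(16, 4).mean(1)  # average 4-pixel "patches" first
    return W[:, :16] @ pooled

x = rng.normal(size=64)
delta = 0.1 * rng.choice([-1.0, 1.0], size=64)  # sign-noise perturbation

for name, f in [("sharp ", features_sharp), ("smooth", features_smooth)]:
    drift = np.linalg.norm(f(x + delta) - f(x)) / np.linalg.norm(f(x))
    print(name, "relative feature drift:", round(drift, 4))
```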

#MLsec

Last updated 2 years ago

Rich Harang · @rharang
345 followers · 378 posts · Server mastodon.social

Leave Twitter just because it keeps failing at random in completely unpredictable ways, the decision-making process is utterly opaque resisting any rational explanation, and it's occasionally deeply racist for no obvious reason?

My dude, I work in machine learning.

#MLsec

Last updated 2 years ago

Rich Harang · @rharang
334 followers · 366 posts · Server mastodon.social

@Mkemka My personal opinion is that accountability and ethics are fundamentally social issues, so technical tools are never going to be a complete solution, and there's a pretty good argument that applying them after the fact is too late. Being able to say "that model is broken" is better than not knowing it, but it's better still to just build a good model (my expertise is much more on the former, alas). But agreed that there's a lot of overlap between MLsec and AI ethics, especially w/r/t tooling.

#MLsec #aiethics

Last updated 2 years ago

Rich Harang · @rharang
334 followers · 366 posts · Server mastodon.social

@filar Please join me in trying to make the #MLsec tag a thing :) -- apparently tagging is both much more important and considered much less obnoxious on Mastodon.

#MLsec

Last updated 2 years ago

Rich Harang · @rharang
334 followers · 366 posts · Server mastodon.social

A super interesting use case for adversarial examples: if I'm understanding correctly, applying the right perturbation to a starting image can cause image-to-image models to ignore it and just generate based on the prompt.

github.com/MadryLab/photoguard
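
The rough recipe, if that reading is right (a hedged sketch with a stand-in encoder and plain PGD; not the photoguard repo's actual code): nudge the image, within a small budget, so the model's image encoder maps it to an uninformative latent, leaving the prompt as the only signal.

```python
# Sketch of the idea: projected gradient descent pushes the image's
# encoding toward an "empty" target latent. `encode` is a toy stand-in
# for the real image-to-image model's encoder.
import jax
import jax.numpy as jnp

def encode(img):                       # stand-in encoder
    w = jnp.ones((16, 4)) / 4.0
    return img.reshape(-1, 16) @ w

def objective(img, target):
    return jnp.mean((encode(img) - target) ** 2)

img = jax.random.uniform(jax.random.PRNGKey(0), (8, 8))
target = jnp.zeros_like(encode(img))   # uninformative "gray" latent
eps, step = 0.05, 0.01                 # keep the perturbation small

x = img
for _ in range(50):                    # PGD loop
    g = jax.grad(objective)(x, target)
    x = x - step * jnp.sign(g)               # move encoding toward target
    x = jnp.clip(x, img - eps, img + eps)    # project back into eps-ball
    x = jnp.clip(x, 0.0, 1.0)                # stay a valid image

print("latent norm before:", float(jnp.linalg.norm(encode(img))))
print("latent norm after :", float(jnp.linalg.norm(encode(x))))
```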

#MLsec

Last updated 2 years ago