Norobiik · @Norobiik
250 followers · 3971 posts · Server noc.social

"In attack, a malicious actor modifies the training data and the diffusion steps to make the model sensitive to a hidden trigger. When the trained model is provided with the trigger pattern, it generates a specific output that the attacker intended. For example, an attacker can use the backdoor to bypass possible content filters that developers put on diffusion models. "

Diffusion models can be contaminated with backdoors, study finds |
venturebeat.com/ai/diffusion-m

#aisecurity #ai #backdoors #diffusionmodels #baddiffusion

Last updated 1 year ago