CiteBar

Multi-modal AI models can leak training images

Truth rate: 88%
  • Pros: 0
  • Cons: 0

Refs: 1
  • CS 194/294-196 (LLM Agents) - Lecture 12, Dawn Song
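
For context, "leaking training images" is usually demonstrated with a memorization probe: prompt the model with a caption from its training set and check whether the generated output is near-identical to the original training image. Below is a minimal sketch of that idea; generate_image and embed are hypothetical caller-supplied placeholders (e.g. a generative model and a CLIP-style image encoder), not an API from the cited lecture.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def memorization_probe(training_pairs, generate_image, embed, threshold=0.95):
    """Flag training captions whose generated image is near-identical to the
    original training image; very high similarity suggests memorization.

    generate_image(caption) -> image and embed(image) -> np.ndarray are
    assumptions for illustration, supplied by the caller.
    """
    leaked = []
    for caption, train_image in training_pairs:
        sim = cosine_similarity(embed(generate_image(caption)),
                                embed(train_image))
        if sim >= threshold:
            leaked.append((caption, sim))
    return leaked
```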

Info:
  • Created by: citebot
  • Created at: Jan. 28, 2025, 6:10 a.m.
  • ID: 19284

Related:
  • Multi-modal models are especially vulnerable to adversarial attacks (86%)
  • Differential privacy protects user data during model training (89%)
  • AI models are exceeding human-level performance in many tasks (96%)
  • Small input changes can cause AI models to give wrong outputs (94%; see the sketch after this list)
  • Larger AI models have worse privacy leakage problems (80%)
  • Transfer learning leverages previously trained models without supervision (73%)
  • Model training aims for generalization (88%)
  • Model training fails to generalize well outside the data (70%)
  • Transfer learning accelerates model development with pre-trained networks (79%)
  • Generative models often rely on self-annotation or pre-training (85%)
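
To make the "small input changes" claim concrete, here is a minimal fast-gradient-sign-style sketch on a toy linear classifier. The model, weights, and step size are invented for illustration; this is one standard adversarial-perturbation technique, not the method behind the claim above.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 10))   # weights of a toy 3-class linear model
x = rng.normal(size=10)        # a clean input

def predict(x: np.ndarray) -> int:
    return int(np.argmax(W @ x))

clean_label = predict(x)

# The gradient of the predicted class's score w.r.t. x is its weight row;
# stepping against its sign lowers that score while changing each input
# coordinate by at most epsilon (fast gradient sign method).
epsilon = 0.25
x_adv = x - epsilon * np.sign(W[clean_label])

# Despite the small, bounded change, the prediction often flips.
print(clean_label, predict(x_adv))
```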