loleh 💾 · @loleg
305 followers · 644 posts · Server fosstodon.org

I've been tagging my tweets rather than my toots, but same gist 🤗

#appliedml #amldgenai23

Last updated 1 year ago

· @ineiti
40 followers · 288 posts · Server ioc.exchange

Panel "Risks to Society"

With Gaétan de Rassenfosse, Carmela Troncoso & Sabine Süsstrunk

Gaétan: "AI can help us overcome the burden of knowledge (where we get caught as hyper-specialists)"

Carmela: "While in security we try to make systems as simple as possible, AI is currently doing the contrary, racing to make them ever more complex"
"Security and privacy are about preventing harm"

Sabine: "Politicians are mostly lawyers and are used to looking at the past, so it's difficult to make them look to the future"
"I'm not so much concerned about the models as about the owners of the data and the computational resources"

Biggest concerns:
Marcel: "my biggest concern about general purpose AI is the societal impact"

Carmela: "the lack of freedom to not use these tools. Solution: destroy big tech?"

Gaétan: "privacy: when these tools are used to monitor society."

Sabine: "fake information. People believe the fake information they're fed by autocratic governments"

#amldgenai23 #epfl #c4dt_epfl

Last updated 1 year ago

· @ineiti
40 followers · 287 posts · Server ioc.exchange

Shaping the creation and adoption of large language models in healthcare

With Nigam Shah

Goal: bring AI to health care in an efficient, ethical way.

"If you think that advancing science will advance practice and delivery of medicine, you're mistaken!"

"A prediction that doesn't change action is pointless."

"There is an interplay between models, capacity, and the actions we take."

tinyurl.com/hai-blogs

Instead of training the tokenizer on general English text, train it on the medical data itself.
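
The idea of learning a tokenizer from the domain corpus can be sketched with a minimal byte-pair-encoding (BPE) loop in plain Python. The toy "notes" below are invented stand-ins for de-identified clinical text; a real setup would use a tokenizer library on a large corpus:

```python
import re
from collections import Counter

def train_bpe(corpus, num_merges):
    """Learn byte-pair-encoding merges from a domain corpus
    instead of from general English text."""
    # start from character-level symbols, weighted by word frequency
    vocab = Counter(" ".join(word) for text in corpus for word in text.lower().split())
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            symbols = word.split()
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        # merge the most frequent adjacent pair wherever it occurs
        pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(best)) + r"(?!\S)")
        merged = Counter()
        for word, freq in vocab.items():
            merged[pattern.sub("".join(best), word)] += freq
        vocab = merged
        merges.append(best)
    return merges

# hypothetical stand-ins for de-identified clinical notes
notes = ["acute myocardial infarction", "suspected myocardial rupture"]
merges = train_bpe(notes, 3)
```

Trained this way, frequent medical character sequences become single tokens instead of being split the way general English would split them.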

#amldgenai23 #epfl #c4dt_epfl

Last updated 1 year ago

· @ineiti
40 followers · 287 posts · Server ioc.exchange

Language versus thought in human brains and machines?

With Evelina Fedorenko

Some common fallacies:
- good at language -> good at thought
- bad at thought -> bad at language

Relationship between language and thought is important!

1. In the brain
The language network is used for comprehension and production and stores linguistic knowledge. These areas are not active during abstract thought.

2. In LLMs
They broadly resemble the language network in the brain. You can even see the resemblance between the models' responses and those of human brains.
LLMs are great at pretending to think :)

3. A path forward
Most biological systems are modular

#amldgenai23 #epfl #c4dt_epfl

Last updated 1 year ago

· @ineiti
40 followers · 285 posts · Server ioc.exchange

Multi-Modal Foundation Models

With Amir Zamir

Multiple sensory systems, e.g. vision and touch, can teach themselves if they are synchronized in time.
If you have a set of sensors, a multimodal foundation model can translate arbitrarily between them.

With masked modeling, you're trying to recover missing information.

In a MultiMAE model, you train a single model with different types of inputs and outputs. When trying out different input combinations, it is interesting to see how the model adapts:

https://multimae.epfl.ch
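
The masking step behind this kind of training can be sketched in plain Python. The patch layout and keep ratio below are illustrative assumptions; a real MultiMAE additionally mixes several modalities (RGB, depth, semantic maps):

```python
import random

random.seed(0)

def mask_patches(patches, keep_ratio=0.25):
    """Randomly keep a fraction of the patches; a masked autoencoder
    must reconstruct the hidden patches from the visible ones."""
    n = len(patches)
    n_keep = max(1, int(n * keep_ratio))
    idx = list(range(n))
    random.shuffle(idx)
    keep_idx = sorted(idx[:n_keep])
    mask_idx = sorted(idx[n_keep:])
    visible = [patches[i] for i in keep_idx]
    return visible, keep_idx, mask_idx

# a toy "image" split into 16 patches of 4 pixel values each
patches = [[random.random() for _ in range(4)] for _ in range(16)]
visible, keep_idx, mask_idx = mask_patches(patches)
# the training loss would compare the decoder's predictions for
# mask_idx against the original patches at those positions
```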

An interesting application is "grounded generation", where you use words to describe what you want to change in an existing picture. You can also adapt the other inputs, like bounding boxes and depth.

#amldgenai23 #epfl #c4dt_epfl

Last updated 1 year ago

· @ineiti
40 followers · 283 posts · Server ioc.exchange

GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models

With Daniel Rock

Generative pre-trained transformers are a general-purpose technology.

There are more forks of LLM projects on GitHub than of all COVID projects combined.

Pervasive? Improves over time? Spawns complementary innovation?

When trying to replace work activities with GPT, the question is also how many additional machines and tools you need to make it work.
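
That task-level question can be made concrete with a toy scoring rubric in the spirit of the talk: label each task by whether the LLM alone helps, or it helps only with extra software built on top. The task labels and the aggregation below are invented for illustration, not the study's actual data:

```python
# hypothetical task labels: E0 = not exposed, E1 = the LLM alone
# saves significant time, E2 = exposed only with additional tooling
tasks = {
    "draft routine correspondence": "E1",
    "audit a smart contract": "E2",
    "calibrate lab equipment": "E0",
}

def exposure(task_labels):
    """Share of an occupation's tasks exposed to LLMs, counted
    without and with tasks that need additional machines/tools."""
    n = len(task_labels)
    llm_alone = sum(v == "E1" for v in task_labels.values()) / n
    with_tooling = sum(v in ("E1", "E2") for v in task_labels.values()) / n
    return llm_alone, with_tooling
```

The gap between the two numbers is exactly the share of work that becomes exposed only once the complementary tooling exists.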

Most exposed roles: mathematicians, blockchain engineers, poets, ...
The jobs with the most expensive training might be the most exposed to replacement.
Even if there is a lot of risk, there is also a lot of opportunity in embracing these models.

There is also a strong correlation between augmentation and automation.

#amldgenai23 #epfl #c4dt_epfl

Last updated 1 year ago

· @ineiti
40 followers · 281 posts · Server ioc.exchange

Foundation models in the EU AI Act

With Dragoș Tudorache

When the first discussions on regulating AI came up in 2019, not much was really known about AI in the Parliament.

Only in 2020 did people start talking about foundation models, but that was not enough for them to be included in the first proposal, partly because the regulation was supposed to be not about the technology itself, but only about its use.

But in Summer/Autumn 2022, before the launch of ChatGPT, the proposal was already supposed to include foundation models:

1. The scale made it very different from other models
2. Versatility of output
3. Infinity of applications

#amldgenai23 #epfl #c4dt_epfl

Last updated 1 year ago

· @ineiti
40 followers · 278 posts · Server ioc.exchange

Angela Fan from Meta presenting Llama 2.

To train the 70B model, they spent 1e24 FLOPs, a number bigger than the number of atoms in 1 cm³ of solid matter. The training emitted about 300 t of CO2.
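
The atom comparison checks out with standard physical constants; silicon is used below as a representative solid (an assumption, but any dense solid gives the same order of magnitude):

```python
# Avogadro-based sanity check of "more FLOPs than atoms in 1 cm^3"
avogadro = 6.022e23                      # atoms per mole
density_si = 2.33                        # g/cm^3, silicon
molar_mass_si = 28.09                    # g/mol, silicon
atoms_per_cm3 = density_si / molar_mass_si * avogadro   # ~5e22 atoms
train_flops = 1e24                       # quoted compute for the 70B model
ratio = train_flops / atoms_per_cm3      # ~20x more FLOPs than atoms
```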

Training models in the direction of harmlessness/helpfulness. A big challenge is finding a good test sample, as people use LLMs for very different things.

She also talked about temporal perception, which allows changing the model's knowledge cutoff date.

There is also emergent tool use in Llama 2, which allows it to call out to other apps.
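
The mechanics of such tool use can be sketched as a dispatch step around the model's text output. The CALL(...) format and the tool registry below are invented for illustration and are not Llama 2's actual interface:

```python
import re

# hypothetical tool registry: name -> function over a string argument
TOOLS = {
    "calculator": lambda expr: str(sum(int(x) for x in expr.split("+"))),
}

def run_with_tools(model_output):
    """If the model's output requests a tool, execute it and return
    the result to feed back into the conversation; otherwise pass
    the output through unchanged."""
    match = re.fullmatch(r"CALL\((\w+), (.+)\)", model_output.strip())
    if match and match.group(1) in TOOLS:
        return TOOLS[match.group(1)](match.group(2))
    return model_output
```

In a full loop, the tool result would be appended to the prompt so the model can continue with the information it asked for.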

To finish, she says that these models still need to be much more precise, e.g. for medical use.

#amldgenai23 #epfl #c4dt_epfl

Last updated 1 year ago

· @ineiti
40 followers · 278 posts · Server ioc.exchange

Generative AI is the fourth wave of the IT revolution...

#amldgenai23 #epfl #c4dt_epfl

Last updated 1 year ago