Short review of last week's #AppliedMLdays conference #EPFL https://blog.datalets.ch/092/
New #openaccess publication #SciPost #Physics
Diagnosing weakly first-order phase transitions by coupling to order parameters
Jonathan D'Emidio, Alexander A. Eberharter, Andreas M. Läuchli
SciPost Phys. 15, 061 (2023)
https://scipost.org/SciPostPhys.15.2.061
#openaccess #SciPost #physics #DIPC #itp #PSI #epfl #FWF #MinisteriodeCienciaeInnovación
Panel "Risks to Society"
With Gaëtan de Rassenfosse, Carmela Troncoso & Sabine Süsstrunk
Gaëtan: "AI can help us overcome the burden of knowledge (where we get caught as hyper-specialists)"
Carmela: "while in security we try to make systems as simple as possible, AI is currently doing the contrary, rushing to make everything ever more complex"
"Security and privacy is about preventing harm"
Sabine: "politicians are mostly lawyers and are used to looking at the past. So it's difficult to make them look to the future"
"I'm not so much concerned about the models, but about the owners of the data and the computational resources"
Biggest concerns:
Marcel: "my biggest concern about general purpose AI is the societal impact"
Carmela: "the lack of freedom to not use these tools. Solution: destroy big tech?"
Gaëtan: "privacy: when these tools are used to monitor society."
Sabine: "fake information. People believe the fake information they're fed by autocratic governments"
Shaping the creation and adoption of large language models in healthcare
With Nigam Shah
Goal: bring AI to health care in an efficient, ethical way.
"If you think that advancing science will advance practice and delivery of medicine, you're mistaken!"
"A prediction that doesn't change action is pointless."
"There is an interplay among the models, our capacity, and the actions we take."
https://www.tinyurl.com/hai-blogs
Instead of training the tokenizer on general English text, train it on the medical data itself.
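To make the tokenizer remark concrete, here is a toy sketch of one BPE-style merge step learned from a tiny made-up "medical" word list (pure Python; the corpus and function are illustrative, not the actual pipeline from the talk):

```python
# Toy illustration: a tokenizer trained on domain text picks up domain-specific
# units. One BPE merge step: the most frequent adjacent symbol pair becomes a
# new vocabulary unit. The corpus below is a hypothetical example.
from collections import Counter

corpus = ["cardiomyopathy", "cardiogram", "cardiac"]

def most_frequent_pair(words):
    """Count adjacent symbol pairs across all words; the winner becomes a merge."""
    pairs = Counter()
    for w in words:
        for a, b in zip(w, w[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0]

pair = most_frequent_pair(corpus)  # a pair from the shared "cardi" stem
```

On this corpus the winning pairs all come from the shared stem "cardi", which is exactly the kind of domain unit a general-English tokenizer would fragment.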
Language versus thought in human brains and machines?
With Evelina Fedorenko
Some common fallacies:
- good at language -> good at thought
- bad at thought -> bad at language
Relationship between language and thought is important!
1. In the brain
The language network is used for comprehension and production and stores linguistic knowledge. Those areas are not active during abstract thought.
2. In LLMs
They broadly resemble the language network in the brain. You can even see the resemblance between the models' responses and human brain responses.
LLMs are great at pretending to think :)
3. A path forward
Most biological systems are modular
Multi-Modal Foundation Models
With Amir Zamir
Multiple sensory systems, e.g. vision and touch, can teach each other if they are synchronized in time.
If you have a set of sensors, then a multimodal foundation model can translate arbitrarily between them.
With masked modeling, you're trying to recover missing information.
In a MultiMAE model, you train a model with different types of inputs and outputs. When trying out different inputs, it is interesting to see how the model adapts to them.
An interesting application is "grounded generation", where you can modify an existing picture by describing in words what you want to change. You can also adapt the other inputs, such as bounding boxes and depth maps.
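The masked-modeling objective above can be sketched in a few lines: hide some inputs, predict them from the rest, and score only the hidden positions (a minimal stand-in with a trivial mean predictor; everything here is an assumption for illustration, not MultiMAE's actual code):

```python
# Minimal sketch of a masked-modeling objective: mask random positions,
# let a "model" predict from the visible ones, and compute squared error
# only on the masked positions. The mean predictor is a placeholder model.
import random

def masked_loss(tokens, predict, mask_ratio=0.5, seed=0):
    rng = random.Random(seed)
    masked = [i for i in range(len(tokens)) if rng.random() < mask_ratio]
    # the model only sees unmasked tokens and must recover the masked ones
    visible = {i: t for i, t in enumerate(tokens) if i not in masked}
    preds = predict(visible, len(tokens))
    return sum((preds[i] - tokens[i]) ** 2 for i in masked) / max(len(masked), 1)

def mean_predictor(visible, n):
    """Trivial baseline: predict the mean of the visible tokens everywhere."""
    mu = sum(visible.values()) / len(visible)
    return [mu] * n

loss = masked_loss([1.0, 2.0, 3.0, 4.0], mean_predictor)
```

A real model replaces `mean_predictor` with a network; the loss structure (score masked positions only) stays the same.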
GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
With Daniel Rock
Generative pre-trained transformers are a general-purpose technology.
There are more GitHub forks of LLM projects than of all COVID projects combined.
Pervasive? Improves over time? Spawns complementary innovation?
When trying to replace work activities with GPT, the question is also how many additional machines/tools you need to make it work.
Most exposed roles: mathematicians, blockchain engineers, poets, ...
The jobs that require the most expensive training might be the most exposed to replacement.
Even if there is a lot of risk, there is also a lot of opportunity in embracing these models.
There is also a strong correlation between augmentation and automation.
Great kickoff with Prof. Urbanke. Thanks #EPFL for having us here, and keeping the #AppliedML 🇨🇭🤗 events open and affordable!
Foundation models in the EU AI Act
With Dragoș Tudorache
When the first discussions on regulating AI came up in 2019, not much was really known about AI in the parliament.
Only in 2020 did people start talking about foundation models. But that was not enough to be included in the first proposal, also because the act was meant to be less about the technology and more about its use.
But by Summer/Autumn 2022, before the launch of ChatGPT, the proposal was already supposed to include foundation models:
1. The scale made it very different from other models
2. Versatility of output
3. Infinity of applications
Angela Fan from Meta presenting Llama2.
To train the 70B model, they spent about 1e24 FLOP (more than the number of atoms in 1 cm³ of solid matter), emitting about 300 t of CO2.
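A quick back-of-envelope check of that comparison (assuming iron as an example dense solid; the density and molar mass are textbook values, the rest is arithmetic):

```python
# Atoms per cm^3 of a solid: number density = rho * N_A / molar_mass.
N_A = 6.022e23           # Avogadro's number, atoms per mole
rho_iron = 7.87          # density of iron, g/cm^3
molar_mass_iron = 55.85  # g/mol

atoms_per_cm3 = rho_iron * N_A / molar_mass_iron  # roughly 8.5e22
training_flop = 1e24

# 1e24 FLOP is indeed more than an order of magnitude above the atom count.
assert training_flop > atoms_per_cm3
```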
Training models in the direction of harmlessness/helpfulness. A big challenge is finding a good sample to test on, as people use LLMs for very different things.
She also talked about temporal perception, which makes it possible to change the knowledge cutoff date.
There is also emergent tool use in Llama 2, which allows it to call out to other apps.
To finish, she said that these models still need to become much more precise, e.g. for medical use.
Generative AI is the fourth wave of the IT revolution...
New #openaccess publication #SciPost #Physics
Traveling discontinuity at the quantum butterfly front
Camille Aron, Éric Brunet, Aditi Mitra
SciPost Phys. 15, 042 (2023)
https://scipost.org/SciPostPhys.15.2.042
#EPFL
#LPENS
#NYU
#ANR
#Indo-FrenchCentreforthePromotionofAdvancedResearch
#NSF
#openaccess #SciPost #physics #epfl #LPENS #NYU #ANR #Indo #NSF
New #openaccess publication #SciPost #Physics
Transitions in Xenes between excitonic, topological and trivial insulator phases: Influence of screening, band dispersion and external electric field
Olivia Pulci, Paola Gori, Davide Grassano, Marco D'Alessandro, Friedhelm Bechstedt
SciPost Phys. 15, 025 (2023)
https://scipost.org/SciPostPhys.15.1.025
#UniversityofRomeTorVergata
#INFNSezionediRoma_II
#RomaTreUniversity
#EPFL #CNR-ISM
#FSU
#openaccess #SciPost #physics #UniversityofRomeTorVergata #INFNSezionediRoma_II #RomaTreUniversity #epfl #cnr #fsu #INFN #MIUR
#Baltimore :baltcity: #Maryland #BGE #EnochPratt #EPFL #Assistance #Help #EnergyBill
#baltimore #maryland #bge #enochpratt #epfl #assistance #help #energybill
#UNIL and #EPFL are looking for housing for their students for the start of the academic year https://www.unil-epfl-logement.ch/fr #Lausanne #Vaud
Today's AI is artificial artificial artificial intelligence • The Register on #GPTurk #EPFL
https://www.theregister.com/2023/06/16/crowd_workers_bots_ai_training/
#suisse #urbanisme #at #enjeux and #débat #epfl #press #2014 "Une Suisse à 10 millions d'habitants... dans l'air du Temps" (A Switzerland of 10 million inhabitants... in the spirit of the times) #humour #dessindepresse #chappatte
#chappatte #dessindepresse #humour #press #epfl #debat #enjeux #at #urbanisme #suisse
It is my great pleasure to announce that my latest data artwork "Circadian Rhythms" made with Franck Aubry is on display at EPFL Pavilions until the end of July 2023.
More about the piece: https://www.kirellbenzi.com/art/circadian-rhythms
More info about the (free) exhibition: https://epfl-pavilions.ch/exhibitions/lighten-up
'Lighten Up! On Biology and Time' at EPFL Pavilions, Lausanne.
Photos: Julien Gremaud.
#epfl #datavisualization #dataviz #dataart
Let us sing of the AI tool, CEBRA
That decodes what mice can see, so far
It promises to unlock
Where tech powers our brain's clock
And can enhance future BCIs, by far!
#ai #bci #mice #epfl #ode #poetry
http://thenextweb.com/news/ai-that-decodes-what-mice-see-can-enhance-future-bcis-say-researchers
There was a spam campaign this afternoon coming from the mastodon.social instance. It seems the situation is under control now. I personally received one message: it was a run-of-the-mill crypto scam. The DNS server of the EPFL network blocks the site (so the filter is effective), so I could not see the details.
#epfl #phishing #scam #mastodon