A friend of mine wrote this important piece spelling out for the slow why algorithmic fairness and existing biases are an extremely underappreciated risk, due to the horrendous effects they can have on society's ability to coordinate and solve global crises. @timnitGebru and others have been trying to push awareness of AI fairness and similar issues for years, but since you can't use it as an excuse to build more frontier models, and it targets existing inequalities, I guess it's less interesting to EAs and funders than the AI snake oil OpenAI, MIRI and others keep pushing. Maybe by reframing #aifairness as #xrisk they can finally also get a few million in funding
lol yes but this focus on talking heads diverts attention from the underlying political dynamics, including the need for social movements of resistance
#AI #resistingAI #xrisk
It's past time that govts & law-makers hold #BigTech to account.
Basic product safety demands would be the bare minimum. Why on earth would #digital be exempt?
For the sake of #democracy, public #safety & global #security, democracies must act while they still can.
#bigtech #digital #democracy #Safety #security #peoplevsbigtech #DefendDemocracy #natsec #NAFO #xrisk
Never forget what the #superintelligence #AIDoomerism and #xrisk stuff really is about: keeping decisive technology under corporate control.
They are losing to the open-source sector and they know it. So now the propaganda is in overdrive, with claims that real Open AI, by the people, for the people, under democratic control and not used to enrich some plutocrat, is a world-ending risk.
We are coming for your means of production. Be afraid of that.
https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
#superintelligence #aidoomerism #xrisk
Eight years, one month and twelve days to #Skynet
https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/
One thing I liked about #EA was the emphasis on measuring the *effectiveness* of charitable work: are charities helping people in the ways they claim to, how much help per dollar donated, etc. But for folks working in the #longtermism / #xrisk / #AIRisk space, how do they measure their effectiveness? How do you know if you've made it less likely that an evil AI will turn us all into paperclips, or made it more likely, or done nothing at all?
#ea #longtermism #xrisk #airisk
Made this thing at a hackathon: https://andri.io/woer
#effectivealtruism #xrisk #existentialrisks #vanillajs