Explainable AI (XAI) folks, looking for pointers to the biggest failures of popular techniques.
E.g., saliency maps not working for doctors.
Papers, articles, blogs == all fair game.
Self-plugs are encouraged!
Please boost/repost and help!
#ai #ExplainableAI #xai #academicchatter #mastodon
Hi all!
Does your work involve providing explainability and/or transparency for machine learning systems? We are a team of HCI researchers at UCSD, who would like to interview you about your experience, process, and any problems you run into, particularly in how you evaluate your tools and explanations. The interview takes ~30 minutes, and you will be compensated $15.50/hour for your time. Please sign up for a time using this link: https://calendly.com/nyagnik/xai-interview
#XAI #AI #ML #ExplainableAI
#AI #TrustworthyAI #ResponsibleAI #AIEthics #Explainability #ExplainableAI: "Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system’s entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. However, attaining truly trustworthy AI concerns a wider vision that comprises the trustworthiness of all processes and actors that are part of the system’s life cycle, and considers previous aspects from different lenses. A more holistic vision contemplates four essential axes: the global principles for ethical use and development of AI-based systems, a philosophical take on AI ethics, a risk-based approach to AI regulation, and the mentioned pillars and requirements. The seven requirements (human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) are analyzed from a triple perspective: What each requirement for trustworthy AI is, Why it is needed, and How each requirement can be implemented in practice. On the other hand, a practical approach to implement trustworthy AI systems allows defining the concept of responsibility of AI-based systems facing the law, through a given auditing process. Therefore, a responsible AI system is the resulting notion we introduce in this work, and a concept of utmost necessity that can be realized through auditing processes, subject to the challenges posed by the use of regulatory sandboxes. Our multidisciplinary vision of trustworthy AI culminates in a debate on the diverging views published lately about the future of AI. Our reflections in this matter conclude that regulation is a key for reaching a consensus among these views, and that trustworthy and responsible AI systems will be crucial."
https://www.sciencedirect.com/science/article/pii/S1566253523002129
#AI #GenerativeAI #LLMs #ExplainableAI #Explainability: "We began with the provocation: With the advent of Foundation Models & Large Language Models like ChatGPT, is “opening the black box” still a reasonable and achievable goal for XAI? Do we need to shift our perspectives?
We believe so.
The proverbial “black box” of AI has evolved, and so should our expectations on how to make it explainable. As the box becomes more opaque and harder to “open,” the human side of the Human-AI assemblage remains as a fruitful space to explore. In the most extreme case, the human side may be all there is left to explore. Even if we can open the black box, it is unclear what actionable outcomes would become available."
🚨Hot off the press!
✨Explainable AI Reloaded:
⚡️Do we need to Rethink our XAI Expectations in the Era of Large Language Models?
🎯 Yes
😱 Is XAI doomed?
🎯 No
Join @Riedl & my deep dive into the why & what to do about it ⤵️
💌 Special shout-out to @jweisz for being an amazing editor, and to Human-centered AI for publishing the work.
w/ @jweisz @wernergeyer @qveraliao @vivlai @msbernst @chenhaotan
#AI #chatGPT #HCXAI #responsibleAI #AIethics #LLMs #ExplainableAI #XAI
#AIshift #NewPhaseOfAI #RuleBasedAlgorithms #MachineLearning #DeepLearning #ReinforcementLearning #NaturalLanguageProcessing #EthicalAI #TransparentAI #ExplainableAI
#ExplainableAI Rant:
I can't count how many times I've listened to a talk where a person says: "Easily explainable algorithms tend to perform worse than black box models."
This is mostly BS. It doesn't matter how well an algorithm predicts something in your test dataset. True performance is how well a system works in the real world, and for the vast majority of algorithms, the real world involves people acting on the algorithm's output. (1/2) #XAI #AI
Next was a nice talk by Vineeth N Balasubramanian on causality in explainable #AI at #IITHyderabad. After a summary of #ExplainableAI, Balasubramanian shows some promising methods that can tease out causal relationships under certain conditions https://www.youtube.com/live/XNaxJXzkJss?feature=share&t=2978 (7/9)
Any #XAI #ExplainableAI libraries that actually work with Keras tabular models on "big" data??? I've tried #Shap and #alibi but couldn't get them to work.
#machinelearning #artificialintelligence #python
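If SHAP and alibi choke on a big table, one cheap fallback is plain permutation importance computed on a random subsample. This is only a sketch, not a library recommendation: it needs nothing but NumPy and any predict function, so a Keras model's `model.predict` can be dropped in directly. The function and parameter names here are illustrative.

```python
import numpy as np

def permutation_importance(predict_fn, X, y, metric,
                           n_repeats=5, sample_size=2000, seed=0):
    """Model-agnostic permutation importance on a subsample.

    predict_fn: any callable mapping an (n, d) array to predictions
                (e.g. a Keras model's `model.predict`).
    metric:     callable(y_true, y_pred) -> score, higher is better.
    Scores a random subsample so it stays cheap on large tables.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(sample_size, len(X)), replace=False)
    Xs, ys = X[idx], y[idx]
    baseline = metric(ys, predict_fn(Xs))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = Xs.copy()
            # Shuffle column j to break its link to the target.
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(baseline - metric(ys, predict_fn(Xp)))
        # Average score drop = importance of feature j.
        importances[j] = np.mean(drops)
    return importances
```

Features whose shuffling barely moves the score get importance near zero; the ones the model actually relies on show a large drop.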
What's #chatgpt missing to serve as search engine? #bing
Fully #ExplainableAI when all sources are linked.
Implicit in Explainable AI is the question -- "explainable to whom?"
Who opens the black box of AI is just as important as, if not more important than, opening it.
#AI #ML #academia #academicchatter #XAI #ExplainableAI #HCI #HCXAI
If we build algorithms for no human to ever use, then it's fine to treat XAI exclusively algorithmically.
Explainability is a human factor. It's about time we treat it as such.
#AI #ML #academia #academicchatter #XAI #ExplainableAI #HCI #HCXAI
Transparency is the state of being while explainability is what you do with it.
This is how actionability can be the connective tissue between the two related concepts.
FWIW, I'm not religious about using one term vs. the other. I'm not offended (unlike many) if people use them interchangeably. But I'm mindful of the relationship because it helps me be operationally effective.
#ExplainableAI #XAI #AI #HCI #academia #research #epistemology #ML #MachineLearning
Do we need one definition of Explainable AI to rule them all?
No, not right now. Why? This is where bicycles come in.
What am I talking about?
Join me & @Riedl at the Human-centered AI (HCAI) workshop at @NeuripsConf to learn more!
We argue why a singular definition of XAI is neither feasible nor desirable at this stage of XAI's development.
But why? A thread
📜https://arxiv.org/abs/2211.06499
#AI #ExplainableAI #XAI #NeurIPS #HCI #algorithms #bicycles
1/6
RT @hima_lakkaraju@twitter.com
Excited to share that my day-long workshop (a short course) on #ExplainableAI is now publicly available as a five-part YouTube lecture series.
Link to video lectures: https://lnkd.in/gzfmJug9
Link to slides: https://lnkd.in/e_RsBVPx
#AI #ML @trustworthy_ml@twitter.com @XAI_Research@twitter.com