Matthias Jakobs · @jakobs
35 followers · 4 posts · Server sigmoid.social

Our paper "Explainable Adaptive Tree-based Model Selection for Time-Series Forecasting" with Amal Saadallah just got accepted as a regular paper at IEEE ICDM'23! See you in Shanghai!

#IEEE #icdm #TimeSeries #explainability #xai #forecasting

Last updated 1 year ago

Jane Adams · @janeadams
2685 followers · 920 posts · Server vis.social

@jeffjarvis But.. but.. XAI is already widely used by researchers for "explainable AI". You'd think someone who gloats so much about being tech-competent would know that.. nevermind. Nothing surprises me anymore. But I really hope that acronym doesn't become co-opted, because XAI research is super vital for AI ethics and bias research. It all just feels like another attempt to scramble communication 😩

#aiethics #explainability #xai

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
515 followers · 2168 posts · Server tldr.nettime.org

"Trustworthy Artificial Intelligence (AI) is based on seven technical requirements sustained over three main pillars that should be met throughout the system's entire life cycle: it should be (1) lawful, (2) ethical, and (3) robust, both from a technical and a social perspective. However, attaining truly trustworthy AI concerns a wider vision that comprises the trustworthiness of all processes and actors that are part of the system's life cycle, and considers previous aspects from different lenses. A more holistic vision contemplates four essential axes: the global principles for ethical use and development of AI-based systems, a philosophical take on AI ethics, a risk-based approach to AI regulation, and the mentioned pillars and requirements. The seven requirements (human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability) are analyzed from a triple perspective: What each requirement for trustworthy AI is, Why it is needed, and How each requirement can be implemented in practice. On the other hand, a practical approach to implement trustworthy AI systems allows defining the concept of responsibility of AI-based systems facing the law, through a given auditing process. Therefore, a responsible AI system is the resulting notion we introduce in this work, and a concept of utmost necessity that can be realized through auditing processes, subject to the challenges posed by the use of regulatory sandboxes. Our multidisciplinary vision of trustworthy AI culminates in a debate on the diverging views published lately about the future of AI. Our reflections in this matter conclude that regulation is a key for reaching a consensus among these views, and that trustworthy and responsible AI systems will be crucial."

sciencedirect.com/science/arti

#ai #TrustworthyAI #responsibleai #aiethics #explainability #ExplainableAI

Last updated 1 year ago

Miguel Afonso Caetano · @remixtures
481 followers · 1843 posts · Server tldr.nettime.org

"We began with the provocation: With the advent of Foundation Models & Large Language Models like ChatGPT, is "opening the black box" still a reasonable and achievable goal for XAI? Do we need to shift our perspectives?

We believe so.

The proverbial “black box” of AI has evolved, and so should our expectations on how to make it explainable. As the box becomes more opaque and harder to “open,” the human side of the Human-AI assemblage remains as a fruitful space to explore. In the most extreme case, the human side may be all there is left to explore. Even if we can open the black box, it is unclear what actionable outcomes would become available."

medium.com/human-centered-ai/e

#ai #generativeAI #LLMs #ExplainableAI #explainability

Last updated 1 year ago

Tom Stoneham · @tomstoneham
115 followers · 860 posts · Server dair-community.social

I wrote a thing. About reason, rationalisation and AI stuff. All part of my fumbling towards getting a proper philosophical grip on this stuff.

listed.to/@24601/44577/rationa

#ai #philosophy #xai #transparency #explainability

Last updated 1 year ago

Ben Waber · @bwaber
534 followers · 1591 posts · Server hci.social

Next was an intriguing talk by Ramprasaath R. Selvaraju on explainability in deep learning models at @allen_ai. Even if models achieve high accuracy on certain datasets, they're often worse than other methods when people can't interrogate their output. Given the proliferation of deep learning methods, approaches like this will be essential. youtube.com/watch?v=cRJyA38LHs (6/9)

#explainability #xai #ai

Last updated 1 year ago

Craig Joseph, MD · @craigjoseph
14 followers · 24 posts · Server med-mastodon.com

Thorough and complete recommendations from the Coalition for Health AI for trustworthy AI implementation. It focuses on transparency, bias management, and explainability. Great first steps toward a common understanding and agreement. coalitionforhealthai.org/paper

#ai #transparency #biasmanagement #explainability

Last updated 2 years ago

Published papers at TMLR · @tmlrpub
511 followers · 333 posts · Server sigmoid.social

Explaining Visual Counterfactual Explainers

Diego Velazquez, Pau Rodriguez, Alexandre Lacoste, Issam H. Laradji, Xavier Roca, Jordi Gonzàlez

openreview.net/forum?id=RYeRNw

#counterfactual #explainability #explanations
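For readers unfamiliar with the term: a counterfactual explanation answers "what is the smallest change to this input that would flip the model's decision?". A minimal sketch of the idea (illustrative only; the toy `model` and search loop are assumptions, not the paper's method):

```python
# Illustrative counterfactual search on a toy one-feature model.
# Not the paper's method: the model and search strategy are made up.

def model(income):
    """Toy loan-approval model: approve if income >= 50."""
    return "approved" if income >= 50 else "denied"

def counterfactual(income, step=1, max_delta=1000):
    """Search outward from the input for the nearest value
    that receives a different prediction."""
    original = model(income)
    delta = step
    while delta <= max_delta:
        for candidate in (income + delta, income - delta):
            if model(candidate) != original:
                return candidate  # closest input that flips the decision
        delta += step
    return None  # no counterfactual found within the search radius

# "Your loan was denied; with income 50 it would have been approved."
cf = counterfactual(42)
```

Real counterfactual explainers search high-dimensional inputs (e.g. images) under plausibility constraints, but the core question is the same as in this sketch.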

Last updated 2 years ago

TMLR certifications · @tmlrcert
111 followers · 22 posts · Server sigmoid.social

New reproducibility certification:

Explaining Visual Counterfactual Explainers

Diego Velazquez, Pau Rodriguez, Alexandre Lacoste, Issam H. Laradji, Xavier Roca, Jordi Gonzàlez

openreview.net/forum?id=RYeRNw

#reproducibilitycertification #counterfactual #explainability #explanations

Last updated 2 years ago

Ben Waber · @bwaber
512 followers · 1397 posts · Server hci.social

Next was a fantastic pair of talks by Finale Doshi and Boris Babic on explainability and its limits at the Schwartz Reisman Institute for Technology and Society. These talks nicely call out a lot of the fallacies trotted out about the "tradeoff" between explainability and accuracy, as well as what technologists, regulators, and users should demand of these systems. Highly recommend youtube.com/watch?v=1YdOwtdIX2 (4/7)

#ai #explainability

Last updated 2 years ago

HACID project · @hacid
11 followers · 12 posts · Server sigmoid.social

8/ Solutions provided by experts to a given case are subsumed and expanded by reasoning on a domain knowledge graph to help find the best options. Explainability is intrinsic, as the decision outcome can be traced back to the experts' suggestions and the reasoning process.

#knowledgegraph #explainability #ai
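The traceability idea can be sketched with a toy provenance-tracking recommender (illustrative only; the function names, rule format, and data are assumptions, not the HACID system): every candidate option keeps links to the expert suggestions and inference steps that produced it, so the final decision carries its own explanation.

```python
# Illustrative sketch of intrinsically explainable decision support:
# each option records *why* it is supported, so the chosen decision
# can be traced back to expert suggestions and reasoning steps.
# Names and structure are assumptions, not the HACID implementation.

def recommend(expert_suggestions, inference_rules):
    """Pick the option with the most support, keeping a provenance trace."""
    support = {}  # option -> list of provenance strings
    for expert, option in expert_suggestions:
        support.setdefault(option, []).append(f"suggested by {expert}")
    for rule, src, dst in inference_rules:
        if src in support:  # rule expands an expert-backed option
            support.setdefault(dst, []).extend(
                f"derived from '{src}' via rule '{rule}' ({why})"
                for why in support[src]
            )
    best = max(support, key=lambda option: len(support[option]))
    return best, support[best]  # decision plus its explanation trace

suggestions = [("expert_A", "drug_X"), ("expert_B", "drug_X"),
               ("expert_C", "drug_Y")]
rules = [("same_active_compound", "drug_X", "drug_X_generic")]
decision, trace = recommend(suggestions, rules)
```

Here `trace` lists exactly which experts and rules back the decision, which is the "traced back" property the post describes.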

Last updated 2 years ago

HACID project · @hacid
11 followers · 12 posts · Server sigmoid.social

4/ Can modern AI replace and improve beyond traditional human expertise? Maybe: ChatGPT or a similar large language model (LLM) can provide a few reasonable suggestions, owing to available resources. They do not guarantee correctness or explainability, though.

#ai #policymaking #chatgpt #llm #explainability

Last updated 2 years ago

Ben Waber · @bwaber
458 followers · 957 posts · Server hci.social

Last was a great talk by Graham Neubig on evaluating and learning model explanations at CSAIL. The field of explainability doesn't have good quantitative evaluation tools, and the work presented here, which aims to use ML to evaluate explainability, is quite compelling. I would like to see some evaluations of this technique with human raters to validate it more thoroughly; hopefully that's coming soon. Highly recommend youtube.com/watch?v=CtcP5bvODz (5/5)

#explainability #csail #MachineLearning

Last updated 2 years ago

JMLR · @jmlr
638 followers · 153 posts · Server sigmoid.social

'Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond', by Anna Hedström et al.

jmlr.org/papers/v24/22-0142.ht

#explainability #explanations #ai

Last updated 2 years ago

Conor O'Sullivan · @conorosully
157 followers · 187 posts · Server sigmoid.social

My new article in @towardsdatascience:

IML can increase/improve:
- Accuracy
- Performance in production
- Trust
- The reach of ML
- Storytelling
- Knowledge

Would you add anything?

No paywall link:
towardsdatascience.com/the-6-b

#DataScience #MachineLearning #iml #xai #interpretability #explainability

Last updated 2 years ago

The Turing Way Project · @turingway
688 followers · 972 posts · Server fosstodon.org

RT @AIStandardsHub
📢 Call for input to shape AI standards 📢

If you are interested in the topic of trustworthy AI, we want to hear from you. Do you agree with the definitions of transparency & explainability proposed by the participants of our latest workshop?

➡️bit.ly/3wuQF9a

#TrustworthyAI #ai #transparency #explainability

Last updated 2 years ago