Increasing the Understandability and Explainability of Machine Learning and Artificial Intelligence Solutions: A Design Thinking Approach
Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). ORCID iD: 0000-0003-0512-6350
Linnaeus University, Faculty of Technology, Department of Informatics. ORCID iD: 0000-0001-8635-4069
Linnaeus University, Faculty of Technology, Department of Informatics; Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). ORCID iD: 0000-0002-0199-2377
2021 (English). In: Human Interaction, Emerging Technologies and Future Applications: Proceedings of the 4th International Conference on Human Interaction and Emerging Technologies: Future Applications (IHIET – AI 2021), April 28-30, 2021, Strasbourg, France / [ed] Ahram T., Taiar R., Groff F., Strasbourg, France: Springer, 2021, p. 37-42. Conference paper, Published paper (Refereed)
Sustainable development
Not referring to any SDG
Abstract [en]

Nowadays, Artificial Intelligence (AI) is proving successful at solving complex problems in various application domains. However, despite the numerous success stories of AI systems, one challenge that characterizes these systems is that they often lack transparency in terms of understandability and explainability. In this study, we propose to address this challenge through a design thinking lens as a way to amplify human understanding of Machine Learning (ML) and AI algorithms. We exemplify our proposed approach by depicting a case based on a conventional ML algorithm applied to sentiment analysis of students' feedback. This paper aims to contribute to the overall discourse on the need for innovation in the understandability and explainability of ML and AI solutions, especially since innovation is an inherent feature of design thinking.

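To make the abstract's case concrete, the following is a minimal sketch of what a "conventional ML algorithm applied to sentiment analysis of students' feedback" could look like, assuming a scikit-learn TF-IDF plus logistic regression setup; the feedback snippets, labels, and pipeline choices are illustrative assumptions, not the authors' actual implementation.

# Minimal sketch (assumed, not from the paper): a conventional ML pipeline
# for sentiment analysis of student feedback, using TF-IDF features and a
# linear classifier. Feedback texts and labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical student-feedback snippets with sentiment labels (1 = positive, 0 = negative).
feedback = [
    "The lectures were clear and well structured",
    "I could not follow the assignments at all",
    "Great examples, the labs really helped",
    "The course material was confusing and outdated",
]
labels = [1, 0, 1, 0]

# Conventional pipeline: bag-of-words style features followed by a linear model.
model = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("clf", LogisticRegression()),
])
model.fit(feedback, labels)

# Per-term coefficients give one simple, inspectable view of why the model
# leans positive or negative for a given piece of feedback -- the kind of
# artefact that a design-thinking walkthrough with stakeholders could use
# when discussing understandability and explainability.
vocab = model.named_steps["tfidf"].get_feature_names_out()
weights = model.named_steps["clf"].coef_[0]
for term, weight in sorted(zip(vocab, weights), key=lambda x: -abs(x[1]))[:5]:
    print(f"{term}: {weight:+.3f}")

The point of the sketch is only that even a simple linear model exposes interpretable signals (term weights) that can be turned into user-facing explanations; how such signals are framed for different stakeholders is where the paper's design thinking approach comes in.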
Place, publisher, year, edition, pages
Strasbourg, France: Springer, 2021. p. 37-42
Series
Advances in Intelligent Systems and Computing, E-ISSN 2194-5365 ; 1378
Keywords [en]
Explainable Artificial Intelligence (XAI), Explainable machine learning, Design thinking, Understandability
National Category
Computer Sciences
Research subject
Computer and Information Sciences Computer Science; Computer Science
Identifiers
URN: urn:nbn:se:lnu:diva-102374
DOI: 10.1007/978-3-030-74009-2_5
Scopus ID: 2-s2.0-85105867340
ISBN: 9783030732707 (print)
ISBN: 9783030740092 (electronic)
OAI: oai:DiVA.org:lnu-102374
DiVA, id: diva2:1546192
Conference
International Conference on Human Interaction and Emerging Technologies, April 28-30, 2021, Strasbourg, France
Available from: 2021-04-21. Created: 2021-04-21. Last updated: 2022-05-16. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Kurti, Arianit; Dalipi, Fisnik; Ferati, Mexhid; Kastrati, Zenun
