lnu.se Publications
1 - 50 of 202
  • 1. Albrecht, Mario
    et al.
    Kerren, Andreas
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Klein, Karsten
    Kohlbacher, Oliver
    Mutzel, Petra
    Paul, Wolfgang
    Schreiber, Falk
    Wybrow, Michael
    On Open Problems in Biological Network Visualization (2009). In: Graph Drawing: 17th International Symposium, GD 2009, Chicago, IL, USA, September 22-25, 2009. Revised Papers / [ed] David Eppstein and Emden R. Gansner, Berlin Heidelberg New York: Springer, 2009, p. 256-267. Chapter in book (Refereed)
    Abstract [en]

    Much of the data generated and analyzed in the life sciences can be interpreted and represented by networks or graphs. Network analysis and visualization methods help in investigating them, and many universal as well as special-purpose tools and libraries are available for this task. However, the two fields of graph drawing and network biology are still largely disconnected. Hence, visualization of biological networks does typically not apply state-of-the-art graph drawing techniques, and graph drawing tools do not respect the drawing conventions of the life science community.

    In this paper, we analyze some of the major problems arising in biological network visualization. We characterize these problems and formulate a series of open graph drawing problems. These use cases illustrate the need for efficient algorithms to present, explore, evaluate, and compare biological network data. For each use case, problems are discussed and possible solutions suggested.

  • 2.
    Alfalahi, Alyaa
    et al.
    Stockholm University.
    Skeppstedt, Maria
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science. Gavagai AB, Sweden.
    Ahlblom, Rickard
    Stockholm University.
    Baskalayci, Roza
    Stockholm University.
    Henriksson, Aron
    Stockholm University.
    Asker, Lars
    Stockholm University.
    Paradis, Carita
    Lund University.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Expanding a Dictionary of Marker Words for Uncertainty and Negation Using Distributional Semantics (2015). In: EMNLP 2015 - 6th International Workshop on Health Text Mining and Information Analysis, LOUHI 2015 - Proceedings of the Workshop: Short Paper Track / [ed] Cyril Grouin, Thierry Hamon, Aurélie Névéol, and Pierre Zweigenbaum, Association for Computational Linguistics (ACL), 2015, p. 90-96. Conference paper (Refereed)
    Abstract [en]

    Approaches to determining the factuality of diagnoses and findings in clinical text tend to rely on dictionaries of marker words for uncertainty and negation. Here, a method for semi-automatically expanding a dictionary of marker words using distributional semantics is presented and evaluated. It is shown that ranking candidates for inclusion according to their proximity to cluster centroids of semantically similar seed words is more successful than ranking them according to proximity to each individual seed word. 
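    The centroid-based ranking described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names and the toy two-dimensional word vectors are hypothetical, whereas the actual vectors would come from a distributional-semantics model trained on clinical text.

    ```python
    from math import sqrt

    def cosine(a, b):
        # Cosine similarity between two equal-length vectors.
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

    def centroid(vectors):
        # Component-wise mean of a list of vectors.
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    def rank_by_centroid(candidates, seed_vectors):
        # Rank candidate words by proximity to the centroid of the whole
        # seed cluster (the strategy the paper found more successful).
        c = centroid(seed_vectors)
        return sorted(candidates, key=lambda w: cosine(candidates[w], c), reverse=True)

    def rank_by_best_seed(candidates, seed_vectors):
        # Baseline: rank by proximity to the closest individual seed word.
        return sorted(candidates,
                      key=lambda w: max(cosine(candidates[w], s) for s in seed_vectors),
                      reverse=True)
    ```

    Top-ranked candidates would then be manually reviewed for inclusion in the dictionary, which is what makes the expansion semi-automatic rather than fully automatic.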

  • 3.
    Battiato, Sebastiano
    et al.
    Università di Catania.
    Coquillart, Sabine
    Inria/ZIRST.
    Laramee, Robert S.
    Swansea University.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Braz, José
    Escola Superior de Tecnologia do IPS.
    Computer Vision, Imaging and Computer Graphics - Theory and Applications: International Joint Conference, VISIGRAPP 2013, Barcelona, Spain, February 21-24, 2013, Revised Selected Papers (2014). Conference proceedings (editor) (Refereed)
  • 4.
    Battiato, Sebastiano
    et al.
    Università di Catania, Italy.
    Coquillart, Sabine
    Inria/ZIRST, France.
    Pettré, Julien
    INRIA-Rennes/MimeTIC Team, France.
    Laramee, Robert S.
    Swansea University, UK.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Braz, José
    Escola Superior de Tecnologia do IPS, Portugal.
    Computer Vision, Imaging and Computer Graphics - Theory and Applications: International Joint Conference, VISIGRAPP 2014, Lisbon, Portugal, January 5-8, 2014, Revised Selected Papers (2015). Collection (editor) (Refereed)
  • 5.
    Bechmann, Dominique
    et al.
    CNRS-Université de Strasbourg, France.
    Chessa, Manuela
    University of Genoa, Italy.
    Cláudio, Ana-Paula
    Universidade de Lisboa, Portugal.
    Imai, Francisco
    Apple Inc., USA.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Richard, Paul
    University of Angers, France.
    Telea, Alexandru C.
    University of Groningen, Netherlands.
    Tremeau, Alain
    Université Jean Monnet in Saint Etienne, France.
    Computer Vision, Imaging and Computer Graphics Theory and Applications: 13th International Joint Conference, VISIGRAPP 2018, Funchal–Madeira, Portugal, January 27–29, 2018, Revised Selected Papers (2019). Collection (editor) (Refereed)
  • 6.
    Biedl, Therese
    et al.
    University of Waterloo, Canada.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Graph Drawing and Network Visualization: 26th International Symposium, GD 2018, Barcelona, Spain, September 26-28, 2018, Proceedings (2018). Conference proceedings (editor) (Refereed)
    Abstract [en]

    This book constitutes the refereed proceedings of the 26th International Symposium on Graph Drawing and Network Visualization, GD 2018, held in Barcelona, Spain, in September 2018. 

    The 41 full papers presented in this volume were carefully reviewed and selected from 85 submissions. They were organized in topical sections named: planarity variants; upward drawings; RAC drawings; orders; crossings; crossing angles; contact representations; specialized graphs and trees; partially fixed drawings, experiments; orthogonal drawings; realizability; and miscellaneous. The book also contains one invited talk in full paper length and the Graph Drawing contest report.

  • 7.
    Biedl, Therese
    et al.
    University of Waterloo, Canada.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Special issue of selected papers from the 26th international symposium on graph drawing and network visualization (GD 2018): guest editors' foreword (2019). In: Journal of Graph Algorithms and Applications, E-ISSN 1526-1719, Vol. 23, no 3, p. 459-461. Article in journal (Other academic)
  • 8.
    Borgo, Rita
    et al.
    Kings College London, UK.
    Lee, Bongshin
    Microsoft Research, USA.
    Bach, Benjamin
    Microsoft Research - Inria, France.
    Fabrikant, Sara
    University of Zurich, Switzerland.
    Jianu, Radu
    City University London, UK.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Kobourov, Stephen
    University of Arizona, USA.
    McGee, Fintan
    Luxembourg Institute of Science and Technology, Luxembourg.
    Micallef, Luana
    Helsinki Institute for Information Technology, Finland.
    von Landesberger, Tatiana
    Darmstadt University, Germany.
    Ballweg, Katrin
    Darmstadt University, Germany.
    Diehl, Stephan
    University Trier, Germany.
    Simonetto, Paolo
    Swansea University, UK.
    Zhou, Michelle
    Juji, USA.
    Crowdsourcing for Information Visualization: Promises and Pitfalls (2017). In: Evaluation in the Crowd: Crowdsourcing and Human-Centered Experiments / [ed] Daniel Archambault, Helen Purchase, and Tobias Hoßfeld, Springer, 2017, p. 96-138. Chapter in book (Refereed)
    Abstract [en]

    Crowdsourcing offers great potential to overcome the limitations of controlled lab studies. To guide future designs of crowdsourcing-based studies for visualization, we review visualization research that has attempted to leverage crowdsourcing for empirical evaluations of visualizations. We discuss six core aspects for successful employment of crowdsourcing in empirical studies for visualization – participants, study design, study procedure, data, tasks, and metrics & measures. We then present four case studies, discussing potential mechanisms to overcome common pitfalls. This chapter will help the visualization community understand how to effectively and efficiently take advantage of the exciting potential crowdsourcing has to offer to support empirical visualization research.

  • 9.
    Bouatouch, Kadi
    et al.
    University of Rennes, France.
    de Sousa, A. Augusto
    Universidade do Porto, Portugal.
    Chessa, Manuela
    University of Genova, Italy.
    Paljic, Alexis
    Mines ParisTech, France.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    Hurter, Christophe
    French Civil Aviation University (ENAC), France.
    Farinella, Giovanni Maria
    University of Catania, Italy.
    Radeva, Petia
    Universitat de Barcelona, Spain.
    Braz, José
    Escola Superior de Tecnologia de Setúbal, Portugal.
    Computer Vision, Imaging and Computer Graphics Theory and Applications: 15th International Joint Conference, VISIGRAPP 2020, Valletta, Malta, February 27–29, 2020, Revised Selected Papers (2022). Collection (editor) (Refereed)
    Abstract [en]

    This book constitutes thoroughly revised and selected papers from the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2020, held in Valletta, Malta, in February 2020. The 25 thoroughly revised and extended papers presented in this volume were carefully reviewed and selected from 455 submissions. The papers contribute to the understanding of relevant trends of current research on computer graphics; human computer interaction; information visualization; computer vision.

  • 10. Braz, José
    et al.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Linsen, Lars
    Proceedings of the 6th International Conference on Information Visualization Theory and Applications, IVAPP 2015, Berlin, Germany, March 11-14, 2015 (2015). Conference proceedings (editor) (Refereed)
  • 11.
    Braz, José
    et al.
    Escola Superior de Tecnologia de Setúbal, Portugal.
    Laramee, Robert S.
    Swansea University, UK.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Proceedings of the 5th International Conference on Information Visualization Theory and Applications, IVAPP 2014, Lisbon, Portugal, 5-8 January, 2014 (2014). Conference proceedings (editor) (Refereed)
  • 12.
    Braz, José
    et al.
    Escola Superior de Tecnologia do IPS, Portugal.
    Pettré, Julien
    INRIA-Rennes/MimeTIC Team, France.
    Richard, Paul
    University of Angers, France.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Linsen, Lars
    Jacobs University, Denmark.
    Battiato, Sebastiano
    Università di Catania, Italy.
    Imai, Francisco
    Canon U.S.A. Inc, USA.
    Computer Vision, Imaging and Computer Graphics - Theory and Applications: International Joint Conference, VISIGRAPP 2015, Berlin, Germany, March 11-14, 2015, Revised Selected Papers (2016). Collection (editor) (Refereed)
  • 13.
    Büschel, Wolfgang
    et al.
    Technische Universität Dresden, Germany.
    Chen, Jian
    The Ohio State University, USA.
    Dachselt, Raimund
    Technische Universität Dresden, Germany.
    Drucker, Steven
    Microsoft Research, USA.
    Dwyer, Tim
    Monash University, Australia.
    Görg, Carsten
    University of Colorado, USA.
    Isenberg, Tobias
    Inria & Université Paris-Saclay, France.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    North, Chris
    Virginia Tech, USA.
    Stuerzlinger, Wolfgang
    Simon Fraser University, Canada.
    Interaction for Immersive Analytics (2018). In: Immersive Analytics / [ed] Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H. Thomas, Springer, 2018, p. 95-138. Chapter in book (Other academic)
    Abstract [en]

    In this chapter, we briefly review the development of natural user interfaces and discuss their role in providing human-computer interaction that is immersive in various ways. Then we examine some opportunities for how these technologies might be used to better support data analysis tasks. Specifically, we review and suggest some interaction design guidelines for immersive analytics. We also review some hardware setups for data visualization that are already archetypal. Finally, we look at some emerging system designs that suggest future directions.

  • 14.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science. University of Kaiserslautern.
    Ebert, Achim
    University of Kaiserslautern.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    A Study of Emotion-triggered Adaptation Methods for Interactive Visualization (2013). In: UMAP 2013 Extended Proceedings: Late-Breaking Results, Project Papers and Workshop Proceedings of the 21st Conference on User Modeling, Adaptation, and Personalization, Rome, Italy, June 10-14, 2013 / [ed] Shlomo Berkovsky, Eelco Herder, Pasquale Lops & Olga C. Santos, CEUR-WS.org, 2013, Vol. 997, p. 9-16. Conference paper (Refereed)
    Abstract [en]

    As the size and complexity of datasets increase, both visualization systems and their users are put under more pressure to offer quick and thorough insights about patterns hidden in this ocean of data. While novel visualization techniques are being developed to better cope with the various data contexts, users find themselves increasingly often under mental bottlenecks that can induce a variety of emotions. In this paper, we execute a study to investigate the effectiveness of various emotion-triggered adaptation methods for visualization systems. The emotions considered are boredom and frustration, and are measured by means of brain-computer interface technology. Our findings suggest that less intrusive adaptive methods perform better at supporting users in overcoming emotional states with low valence or arousal, while more intrusive ones tend to be misinterpreted or perceived as irritating.

  • 15.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Ebert, Achim
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Visualizing Group Affective Tone in Collaborative Scenarios (2014). Conference paper (Refereed)
    Abstract [en]

    A large set of complex datasets require the use of collaborative visualization solutions in order to harness the knowledge and experience of multiple experts. However, be it co-located or distributed, the collaboration process is inherently fragile, as small mistakes in communication or various human aspects can quickly derail it. In this paper, we introduce a novel visualization technique that highlights the group affective tone (GAT), also known as the presence of homogeneous emotional reactions within a group. The goal of our visualization is to improve users’ awareness of GAT, thus fostering a positive group affective tone that has been proven to increase effectiveness and creativity in collaborative scenarios. 

  • 16.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Ebert, Achim
    Kerren, Andreas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Morar, Valentina
    R3 - Un dispozitiv de intrare configurabil pentru interacţiunea liberă în spaţiu [R3 - A Configurable Input Device for Free-Space Interaction] (2010). In: Romanian Journal of Human-Computer Interaction, ISSN 1843-4460, Vol. 3, p. 45-50. Article in journal (Refereed)
    Abstract [ro] (translated)

    Recently, the problem of implementing input devices that support 3D interaction by offering six or more degrees of freedom (DoF) has been addressed increasingly often. However, such devices designed for free-space interaction - that is, without requiring a surface as a reference system, as a mouse does - exist only for a narrow range of applications. Input devices of this kind are also rarely intuitive to use and limited in number. To address these problems, in this article we propose a device of low complexity and implementation cost that can be used in free space and is highly configurable, natively supporting intuitive interaction with a variety of virtual environments. R3 (roll, rotate, rattle) offers the accuracy needed for navigation and pointing, both in 2D and in 3D, in modeling applications and games, as well as tactile feedback through a trackball, all in a user-oriented manner. In addition, the device can easily be switched into mouse mode, providing support for interaction with conventional operating systems at any time.

  • 17.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science. University of Kaiserslautern, Germany.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    A Survey of Technologies on the Rise for Emotion-Enhanced Interaction (2015). In: Journal of Visual Languages and Computing, ISSN 1045-926X, E-ISSN 1095-8533, Vol. 31, no A, p. 70-86. Article in journal (Refereed)
    Abstract [en]

    Emotions are a major part of the human existence and social interactions. Some might say that emotions are one of the aspects that make us truly human. However, while we express emotions in various life settings, the world of computing seems to struggle with supporting and incorporating the emotional dimension. In the last decades, the concept of affect has gotten a new upswing in research, moving beyond topics like market research and product development, and further exploring the area of emotion-enhanced interaction.

    In this article, we highlight techniques that have been employed more intensely for emotion measurement in the context of affective interaction. Besides capturing the functional principles behind these approaches and the inherent volatility of human emotions, we present relevant applications and establish a categorization of the roles of emotion detection in interaction. Based on these findings, we also capture the main challenges that emotion measuring technologies will have to overcome in order to enable a truly seamless emotion-driven interaction.

  • 18.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Kerren, Andreas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Ebert, Achim
    Detecting Insight and Emotion in Visualization Applications with a Commercial EEG Headset (2011). In: Proceedings of the SIGRAD 2011 Conference on Evaluations of Graphics and Visualization - Efficiency, Usefulness, Accessibility, Usability, KTH, Stockholm, Sweden, Linköping: Linköping University Electronic Press, 2011, p. 53-60. Conference paper (Refereed)
    Abstract [en]

    Insight represents a special element of knowledge building. From the beginning of their lives, humans experience moments of insight in which a certain idea or solution becomes as clear to them as never before. Especially in the field of visual representations, insight has the potential to be at the core of comprehension and pattern recognition. Still, one problem is that this moment of clarity is highly unpredictable and complex in nature, and many scientists have investigated different aspects of its generation process in the hope of capturing the essence of this eureka (Greek, for "I have found") moment.

    In this paper, we look at insight from the spectrum of information visualization. In particular, we inspect the possible correlation between epiphanies and emotional responses subjects experience when having an insight. In order to check the existence of such a connection, we employ a set of initial tests involving the EPOC mobile electroencephalographic (EEG) headset for detecting emotional responses generated by insights. The insights are generated by open-ended tasks that take the form of visual riddles and visualization applications. Our results suggest that there is a strong connection between insight and emotions like frustration and excitement. Moreover, measuring emotional responses via EEG during insight-related problem solving results in non-intrusive, nearly automatic detection of the major Aha! moments the user experiences. We argue that this indirect detection of insights opens the door for the objective evaluation and comparison of various visualization techniques.

  • 19.
    Cernea, Daniel
    et al.
    University of Kaiserslautern.
    Mora, Simone
    Norwegian University of Science.
    Perez, Alfredo
    Norwegian University of Science.
    Ebert, Achim
    University of Kaiserslautern.
    Kerren, Andreas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Divitini, Monica
    Norwegian University of Science.
    Gil de la Iglesia, Didac
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Otero, Nuno
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics. University of Minho, Portugal.
    Tangible and Wearable User Interfaces for Supporting Collaboration among Emergency Workers (2012). In: Collaboration and Technology: 18th International Conference, CRIWG 2012, Raesfeld, Germany, September 16-19, 2012, Proceedings / [ed] Valeria Herskovic, H. Ulrich Hoppe, Marc Jansen, Jürgen Ziegler, Springer, 2012, Vol. 7493, p. 192-199. Conference paper (Refereed)
    Abstract [en]

    Ensuring a constant flow of information is essential for offering quick help in different types of disasters. In the following, we report on a work-in-progress distributed, collaborative and tangible system for supporting crisis management. On one hand, field operators need devices that collect information (personal notes and sensor data) without interrupting their work. On the other hand, a disaster management system must operate in different scenarios and be available to people with different preferences, backgrounds and roles. Our work addresses these issues by introducing a multi-level collaborative system that manages real-time data flow and analysis for various rescue operators.

  • 20.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Olech, Peter-Scott
    Ebert, Achim
    Kerren, Andreas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Controlling In-Vehicle Systems with a Commercial EEG Headset: Performance and Cognitive Load (2012). In: Visualization of Large and Unstructured Data Sets: Applications in Geospatial Planning, Modeling and Engineering - Proceedings of IRTG 1131 Workshop 2, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2012. Conference paper (Refereed)
    Abstract [en]

    Humans have dreamed for centuries to control their surroundings solely by the power of their minds. These aspirations have been captured by multiple science fiction creations, like the Neuromancer novel by William Gibson or the Brainstorm cinematic movie, to name just a few. Nowadays these dreams are slowly becoming reality due to a variety of brain-computer interfaces (BCI) that detect neural activation patterns and support the control of devices by brain signals.

    An important field in which BCIs are being successfully integrated is the interaction with vehicular systems. In this paper we evaluate the performance of BCIs, more specifically a commercial electroencephalographic (EEG) headset, in combination with vehicle dashboard systems and highlight the advantages and limitations of this approach. Further, we investigate the cognitive load that drivers experience when interacting with secondary in-vehicle devices via touch controls or a BCI headset. As in-vehicle systems are increasingly versatile and complex, it becomes vital to capture the level of distraction and errors that controlling these secondary systems might introduce to the primary driving process. Our results suggest that the control with the EEG headset introduces less distraction to the driver, probably as it allows the eyes of the driver to remain focused on the road. Still, the control of the vehicle dashboard by EEG is efficient only for a limited number of functions, after which increasing the number of in-vehicle controls amplifies the detection of false commands.

  • 21.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics. University of Kaiserslauten, Germany.
    Olech, Peter-Scott
    University of Kaiserslauten, Germany.
    Ebert, Achim
    University of Kaiserslauten, Germany.
    Kerren, Andreas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    EEG-based Measurement of Subjective Parameters in Evaluations (2011). In: HCI International 2011 Posters' Extended Abstracts: International Conference, HCI International 2011, Orlando, FL, USA, July 9-14, 2011, Proceedings, Part II / [ed] Stephanidis, Constantine, Berlin Heidelberg: Springer, 2011, p. 279-283. Conference paper (Refereed)
    Abstract [en]

    Evaluating new approaches, be it new interaction techniques, new applications or even new hardware, is an important task, which has to be done to ensure both usability and user satisfaction. The drawback of evaluating subjective parameters is that this can be relatively time consuming, and the outcome is possibly quite imprecise. Considering the recent release of cost-efficient commercial EEG headsets, we propose the utilization of electro-encephalographic (EEG) devices for evaluation purposes. The goal of our research is to evaluate if a commercial EEG headset can provide cutting-edge support during user studies and evaluations. Our results are encouraging and suggest that wireless EEG technology is a viable alternative for measuring subjectivity in evaluation scenarios.

  • 22.
    Cernea, Daniel
    et al.
    University of Kaiserslautern.
    Olech, Peter-Scott
    Ebert, Achim
    Kerren, Andreas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Measuring Subjectivity: Supporting Evaluations with the Emotiv EPOC Neuroheadset (2012). In: Künstliche Intelligenz, ISSN 0933-1875, E-ISSN 1610-1987, Vol. 26, no 2, p. 177-182. Article in journal (Refereed)
    Abstract [en]

    Since the dawn of the industrial era, modern devices and interaction methods have undergone rigorous evaluations in order to ensure their functionality and quality, as well as usability. While there are many methods for measuring objective data, capturing and interpreting subjective factors—like the feelings or states of mind of the users—is still an imprecise and usually post-event process. In this paper we propose the utilization of the Emotiv EPOC commercial electroencephalographic (EEG) neuroheadset for real-time support during evaluations and user studies. We show in two evaluation scenarios that the wireless EPOC headsets can be used efficiently for supporting subjectivity measurement. Additionally, we highlight situations that may result in a lower accuracy, as well as explore possible reasons and propose solutions for improving the error rates of the device.

  • 23.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science. University of Kaiserslautern, Germany.
    Truderung, Igor
    University of Kaiserslautern, Germany.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Ebert, Achim
    An Interactive Visualization for Tabbed Browsing Behavior Analysis (2014). In: Computer Vision, Imaging and Computer Graphics: Theory and Applications / [ed] Sebastiano Battiato, Sabine Coquillart, Robert S. Laramee, Andreas Kerren, and José Braz, Springer, 2014, p. 69-84. Chapter in book (Refereed)
    Abstract [en]

    Web browsers are at the core of online user experience, enabling a wide range of Web applications, like communication, games, entertainment, development, etc. Additionally, given the variety and complexity of online-supported tasks, users have started parallelizing and organizing their online browser sessions by employing multiple browser windows and tabs. However, there are few solutions that support analysts and casual users in detecting and extracting patterns from these parallel browsing histories. In this paper we introduce WebComets, an interactive visualization for exploring multi-session multi-user parallel browsing logs. After highlighting visual and functional aspects of the system, we introduce a motif-based contextual search for enabling the filtering and comparison of user navigation patterns. We further highlight the functionality of WebComets with a use case. Our investigations suggest that parallel browser history visualization can offer better insight into user tabbed browsing behavior and support the recognition of online navigation patterns.

  • 24.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Truderung, Igor
    University of Kaiserslautern, Germany .
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Ebert, Achim
    University of Kaiserslautern, Germany .
    WebComets: A Tab-Oriented Approach for Browser History Visualization (2013). In: / [ed] S. Coquillart, C. Andujar, R. S. Laramee, A. Kerren, and J. Braz, SciTePress, 2013, p. 439-450. Conference paper (Refereed)
    Abstract [en]

    Web browsers are our main gateways to the Internet. With their help we read articles, we learn, we listen to music, we share our thoughts and feelings, we write e-mails, or we chat. Current Web browser histories have mostly no visualization capabilities as well as limited options to filter patterns and information. Furthermore, such histories disregard the existence of parallel navigation in multiple browser windows and tabs. But a good understanding of parallel browsing behavior is of critical importance for the casual user and the behavioural analyst, while at the same time having implications in the design of search engines, Web sites and Web browsers. In this paper we present WebComets, an interactive visualization for extended browser histories. Our visualization employs browser histories that capture, among others, the tab-oriented, parallel nature of Web page navigation. Results presented in this paper suggest that WebComets better supports the analysis and comparison of parallel browsing and corresponding behavior patterns than common browser histories.

  • 25.
    Cernea, Daniel
    et al.
    Technische Univ. Kaiserslautern.
    Weber, Christopher
    Technische Univ. Kaiserslautern.
    Ebert, Achim
    Technische Univ. Kaiserslautern.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Emotion Scents: A Method of Representing User Emotions on GUI Widgets2013In: Proceedings of SPIE 8654: Visualization and Data Analysis 2013, Burlingame, California, USA, February 3, 2013, SPIE - International Society for Optical Engineering, 2013, p. 86540F-Conference paper (Refereed)
    Abstract [en]

    The world of desktop interfaces has been dominated for years by the concept of windows and standardized user interface (UI) components. Still, while supporting the interaction and information exchange between the users and the computer system, graphical user interface (GUI) widgets are rather one-sided, neglecting to capture the subjective facets of the user experience. In this paper, we propose a set of design guidelines for visualizing user emotions on standard GUI widgets (e.g., buttons, check boxes, etc.) in order to enrich the interface with a new dimension of subjective information by adding support for emotion awareness as well as post-task analysis and decision making. We highlight the use of an EEG headset for recording the various emotional states of the user while he/she is interacting with the widgets of the interface. We propose a visualization approach, called emotion scents, that allows users to view emotional reactions corresponding to different GUI widgets without influencing the layout or changing the positioning of these widgets. Our approach does not focus on highlighting the emotional experience during the interaction with an entire system, but on representing the emotional perceptions and reactions generated by the interaction with a particular UI component. Our research is motivated by enabling emotional self-awareness and subjectivity analysis through the proposed emotion-enhanced UI components for desktop interfaces. These assumptions are further supported by an evaluation of emotion scents.

  • 26.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science. Univ Kaiserslautern, Germany.
    Weber, Christopher
    Univ Kaiserslautern, Germany.
    Ebert, Achim
    Univ Kaiserslautern, Germany.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Emotion-Prints: Interaction-Driven Emotion Visualization on Multi-Touch Interfaces2015In: Proceedings of SPIE 9397: Visualization and Data Analysis 2015, San Francisco, CA, USA, February 8-12, 2015 / [ed] David L. Kao, Ming C. Hao, Mark A. Livingston, and Thomas Wischgoll, SPIE - International Society for Optical Engineering, 2015, p. 9397-0A-Conference paper (Refereed)
    Abstract [en]

    Emotions are one of the unique aspects of human nature, and sadly at the same time one of the elements that our technological world is failing to capture and consider due to their subtlety and inherent complexity. But with the current dawn of new technologies that enable the interpretation of emotional states based on techniques involving facial expressions, speech and intonation, electrodermal response (EDS) and brain-computer interfaces (BCIs), we are finally able to access real-time user emotions in various system interfaces. In this paper we introduce emotion-prints, an approach for visualizing user emotional valence and arousal in the context of multi-touch systems. Our goal is to offer a standardized technique for representing user affective states in the moment when and at the location where the interaction occurs in order to increase affective self-awareness, support awareness in collaborative and competitive scenarios, and offer a framework for aiding the evaluation of touch applications through emotion visualization. We show that emotion-prints are not only independent of the shape of the graphical objects on the touch display, but also that they can be applied regardless of the acquisition technique used for detecting and interpreting user emotions. Moreover, our representation can encode any affective information that can be decomposed or reduced to Russell’s two-dimensional space of valence and arousal. Our approach is enforced by a BCI-based user study and a follow-up discussion of advantages and limitations. 
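The abstract above notes that any affective information reducible to Russell's two-dimensional valence-arousal space can be encoded at the touch location. A minimal sketch of such an encoding, where the value ranges and the hue/radius mapping are illustrative assumptions and not the actual emotion-prints design:

```python
# Toy mapping from Russell's valence-arousal space to a visual encoding
# (hue + halo radius), as a touch overlay might use. The ranges and the
# mapping below are illustrative assumptions, not emotion-prints itself.

def encode_emotion(valence, arousal):
    """valence, arousal in [-1, 1] -> (hue_degrees, radius_px)."""
    if not (-1 <= valence <= 1 and -1 <= arousal <= 1):
        raise ValueError("valence and arousal must lie in [-1, 1]")
    hue = (valence + 1) / 2 * 120          # negative -> red (0), positive -> green (120)
    radius = 10 + (arousal + 1) / 2 * 30   # higher arousal -> larger halo
    return hue, radius

print(encode_emotion(1.0, 1.0))    # -> (120.0, 40.0)
print(encode_emotion(-1.0, -1.0))  # -> (0.0, 10.0)
```

Because the encoding depends only on the (valence, arousal) pair, it stays independent of both the shape of the touched object and the acquisition technique, as the abstract argues.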

  • 27.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science. University of Kaiserslautern.
    Weber, Christopher
    UC Davis, Department of Computer Science.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Ebert, Achim
    University of Kaiserslautern.
    Group Affective Tone Awareness and Regulation through Virtual Agents2014In: Proceedings of the Workshop on Affective Agents: Fourteenth International Conference on Intelligent Virtual Agents (IVA 2014), 2014, p. 9-16Conference paper (Refereed)
    Abstract [en]

    It happens increasingly often that experts need to collaborate in order to exchange ideas, views and opinions on their path towards understanding. However, every collaboration process is inherently fragile and involves a large set of human subjective aspects, including social interaction, personality, and emotions. In this paper we present Pogat, an affective virtual agent designed to support the collaboration process around displays by increasing user awareness of the group affective tone. A positive group affective tone, where all the participants of a group experience emotions of a positive valence, has been linked to fostering creativity in groups and supporting the entire collaboration process. At the same time, a negative or inexistent group affective tone can suggest negative emotions in some of the group members, emotions that can lead to an inefficient or even obstructed collaboration. A study of our approach suggests that Pogat can increase the awareness of the overall affective state of the group as well as positively affect the efficiency of groups in collaborative scenarios.

  • 28.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Bibi, Stamatia
    University of Western Macedonia, Greece.
    Zozas, Ioannis
    University of Western Macedonia, Greece.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Analyzing the Evolution of JavaScript Applications2019In: Proceedings of the 14th International Conference on Evaluation of Novel Approaches to Software Engineering - Volume 1: ENASE / [ed] Damiani, E; Spanoudakis, G; Maciaszek, L, SciTePress, 2019, Vol. 1, p. 359-366Conference paper (Refereed)
    Abstract [en]

    Software evolution analysis can shed light on various aspects of software development and maintenance. To date, there is little empirical evidence on the evolution of JavaScript (JS) applications in terms of maintainability and changeability, even though JavaScript is among the most popular scripting languages for front-end web applications. In this study, we investigate JS applications’ quality and changeability trends over time by examining the relevant Laws of Lehman. We analyzed over 7,500 releases of JS applications and reached some interesting conclusions. The results show that JS applications continuously change and grow; there are no clear signs of quality degradation, and complexity remains the same over time, although the understandability of the code deteriorates.

  • 29.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Northwestern University, USA.
    Kucher, Kostiantyn
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    Visualization for Trust in Machine Learning Revisited: The State of the Field in 20232024In: IEEE Computer Graphics and Applications, ISSN 0272-1716, E-ISSN 1558-1756Article in journal (Refereed)
    Abstract [en]

    Visualization for explainable and trustworthy machine learning remains one of the most important and heavily researched fields within information visualization and visual analytics with various application domains, such as medicine, finance, and bioinformatics. After our 2020 state-of-the-art report comprising 200 techniques, we have persistently collected peer-reviewed articles describing visualization techniques, categorized them based on the previously established categorization schema consisting of 119 categories, and provided the resulting collection of 542 techniques in an online survey browser. In this survey article, we present the updated findings of new analyses of this dataset as of fall 2023 and discuss trends, insights, and eight open challenges for using visualizations in machine learning. Our results corroborate the rapidly growing trend of visualization techniques for increasing trust in machine learning models in the past three years, with visualization found to help improve popular model explainability methods and check new deep learning architectures, for instance.

  • 30.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Martins, Rafael Messias
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Jusufi, Ilir
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    A survey of surveys on the use of visualization for interpreting machine learning models2020In: Information Visualization, ISSN 1473-8716, E-ISSN 1473-8724, Vol. 19, no 3, p. 207-233Article in journal (Refereed)
    Abstract [en]

    Research in machine learning has become very popular in recent years, with many types of models proposed to comprehend and predict patterns and trends in data originating from different domains. As these models get more and more complex, it also becomes harder for users to assess and trust their results, since their internal operations are mostly hidden in black boxes. The interpretation of machine learning models is currently a hot topic in the information visualization community, with results showing that insights from machine learning models can lead to better predictions and improve the trustworthiness of the results. Due to this, multiple (and extensive) survey articles have been published recently trying to summarize the high number of original research papers published on the topic. But there is not always a clear definition of what these surveys cover, what is the overlap between them, which types of machine learning models they deal with, or what exactly is the scenario that the readers will find in each of them. In this article, we present a meta-analysis (i.e. a “survey of surveys”) of manually collected survey papers that refer to the visual interpretation of machine learning models, including the papers discussed in the selected surveys. The aim of our article is to serve both as a detailed summary and as a guide through this survey ecosystem by acquiring, cataloging, and presenting fundamental knowledge of the state of the art and research opportunities in the area. Our results confirm the increasing trend of interpreting machine learning with visualizations in the past years, and that visualization can assist in, for example, online training processes of deep learning models and enhancing trust in machine learning. However, the question of exactly how this assistance should take place is still considered as an open challenge of the visualization community.

    Download full text (pdf)
    fulltext
  • 31.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Martins, Rafael Messias
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Jusufi, Ilir
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Kucher, Kostiantyn
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Rossi, Fabrice
    Université Paris Dauphine, France.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    The State of the Art in Enhancing Trust in Machine Learning Models with the Use of Visualizations2020In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 39, no 3, p. 713-756Article in journal (Refereed)
    Abstract [en]

    Machine learning (ML) models are nowadays used in complex applications in various domains such as medicine, bioinformatics, and other sciences. Due to their black box nature, however, it may sometimes be hard to understand and trust the results they provide. This has increased the demand for reliable visualization tools related to enhancing trust in ML models, which has become a prominent topic of research in the visualization community over the past decades. To provide an overview and present the frontiers of current research on the topic, we present a State-of-the-Art Report (STAR) on enhancing trust in ML models with the use of interactive visualization. We define and describe the background of the topic, introduce a categorization for visualization techniques that aim to accomplish this goal, and discuss insights and opportunities for future research directions. Among our contributions is a categorization of trust against different facets of interactive ML, expanded and improved from previous research. Our results are investigated from different analytical perspectives: (a) providing a statistical overview, (b) summarizing key findings, (c) performing topic analyses, and (d) exploring the data sets used in the individual papers, all with the support of an interactive web-based survey browser. We intend this survey to be beneficial for visualization researchers whose interests involve making ML models more trustworthy, as well as researchers and practitioners from other disciplines in their search for effective visualization techniques suitable for solving their tasks with confidence and conveying meaning to their data.

  • 32.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Martins, Rafael Messias
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    t-viSNE: A Visual Inspector for the Exploration of t-SNE2018In: Presented at IEEE Information Visualization  (VIS '18), Berlin, Germany, 21-26 October, 2018, 2018Conference paper (Refereed)
    Abstract [en]

    The use of t-Distributed Stochastic Neighborhood Embedding (t-SNE) for the visualization of multidimensional data has proven to be a popular approach, with applications published in a wide range of domains. Despite their usefulness, t-SNE plots can sometimes be hard to interpret or even misleading, which hurts the trustworthiness of the results. By opening the black box of the algorithm and showing insights into its behavior through visualization, we may learn how to use it in a more effective way. In this work, we present t-viSNE, a visual inspection tool that enables users to explore anomalies and assess the quality of t-SNE results by bringing forward aspects of the algorithm that would normally be lost after the dimensionality reduction process is finished.

    Download full text (pdf)
    t-viSNE_Chatzimparmpas_et_al
  • 33.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Martins, Rafael Messias
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    t-viSNE: Interactive Assessment and Interpretation of t-SNE Projections2020In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 26, no 8, p. 2696-2714Article in journal (Refereed)
    Abstract [en]

    t-Distributed Stochastic Neighbor Embedding (t-SNE) for the visualization of multidimensional data has proven to be a popular approach, with successful applications in a wide range of domains. Despite their usefulness, t-SNE projections can be hard to interpret or even misleading, which hurts the trustworthiness of the results. Understanding the details of t-SNE itself and the reasons behind specific patterns in its output may be a daunting task, especially for non-experts in dimensionality reduction. In this work, we present t-viSNE, an interactive tool for the visual exploration of t-SNE projections that enables analysts to inspect different aspects of their accuracy and meaning, such as the effects of hyper-parameters, distance and neighborhood preservation, densities and costs of specific neighborhoods, and the correlations between dimensions and visual patterns. We propose a coherent, accessible, and well-integrated collection of different views for the visualization of t-SNE projections. The applicability and usability of t-viSNE are demonstrated through hypothetical usage scenarios with real data sets. Finally, we present the results of a user study where the tool’s effectiveness was evaluated. By bringing to light information that would normally be lost after running t-SNE, we hope to support analysts in using t-SNE and making its results better understandable.
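One of the quality aspects listed above, neighborhood preservation, can be made concrete with a small sketch. The Jaccard-overlap formulation below is a generic illustration under simplifying assumptions, not t-viSNE's exact metric:

```python
# Sketch: score how well a 2-D projection preserves each point's k nearest
# neighbors from the original space (1.0 = perfectly preserved). This
# Jaccard-based formulation is illustrative, not t-viSNE's exact metric.
from math import dist

def knn(points, i, k):
    """Indices of the k nearest neighbors of point i (Euclidean distance)."""
    order = sorted((j for j in range(len(points)) if j != i),
                   key=lambda j: dist(points[i], points[j]))
    return set(order[:k])

def neighborhood_preservation(high_dim, low_dim, k=2):
    """Mean Jaccard overlap of k-NN sets before and after projection."""
    scores = []
    for i in range(len(high_dim)):
        a, b = knn(high_dim, i, k), knn(low_dim, i, k)
        scores.append(len(a & b) / len(a | b))
    return sum(scores) / len(scores)

# A projection that keeps all neighbor relations intact scores 1.0:
hd = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (10, 0, 0)]
ld = [(0, 0), (1, 0), (2, 0), (10, 0)]
print(neighborhood_preservation(hd, ld, k=2))  # -> 1.0
```

A projection that shuffles which points are neighbors would score below 1.0, which is exactly the kind of degradation such a view is meant to surface.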

  • 34.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Martins, Rafael Messias
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    VisRuler: Visual Analytics for Extracting Decision Rules from Bagged and Boosted Decision Trees2023In: Information Visualization, ISSN 1473-8716, E-ISSN 1473-8724, Vol. 22, no 2, p. 115-139Article in journal (Refereed)
    Abstract [en]

    Bagging and boosting are two popular ensemble methods in machine learning (ML) that produce many individual decision trees. Due to the inherent ensemble characteristic of these methods, they typically outperform single decision trees or other ML models in predictive performance. However, numerous decision paths are generated for each decision tree, increasing the overall complexity of the model and hindering its use in domains that require trustworthy and explainable decisions, such as finance, social care, and health care. Thus, the interpretability of bagging and boosting algorithms—such as random forest and adaptive boosting—reduces as the number of decisions rises. In this paper, we propose a visual analytics tool that aims to assist users in extracting decisions from such ML models via a thorough visual inspection workflow that includes selecting a set of robust and diverse models (originating from different ensemble learning algorithms), choosing important features according to their global contribution, and deciding which decisions are essential for global explanation (or locally, for specific cases). The outcome is a final decision based on the class agreement of several models and the explored manual decisions exported by users. We evaluated the applicability and effectiveness of VisRuler via a use case, a usage scenario, and a user study. The evaluation revealed that most users managed to successfully use our system to explore decision rules visually, performing the proposed tasks and answering the given questions in a satisfying way.
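The "class agreement of several models" step mentioned above can be illustrated as a majority vote with an agreement ratio; this is a generic sketch, not VisRuler's implementation:

```python
# Sketch: majority vote over per-model class predictions, plus the share of
# models that agree. A generic illustration of "class agreement", not VisRuler.
from collections import Counter

def class_agreement(predictions):
    """predictions: one predicted class label per model.
    Returns (majority_class, agreement_ratio)."""
    label, votes = Counter(predictions).most_common(1)[0]
    return label, votes / len(predictions)

print(class_agreement(["yes", "yes", "no", "yes"]))  # -> ('yes', 0.75)
```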

    Download full text (pdf)
    fulltext
  • 35.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Martins, Rafael Messias
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Kucher, Kostiantyn
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    Empirical Study: Visual Analytics for Comparing Stacking to Blending Ensemble Learning2021In: Proceedings of the 23rd International Conference on Control Systems and Computer Science (CSCS23), 26–28 May 2021, Bucharest, Romania / [ed] Ioan Dumitrache, Adina Magda Florea, Mihnea-Alexandru Moisescu, Florin Pop, and Alexandru Dumitraşcu, IEEE, 2021, p. 1-8Conference paper (Other academic)
    Abstract [en]

    Stacked generalization (also called stacking) is an ensemble method in machine learning that uses a metamodel to combine the predictive results of heterogeneous base models arranged in at least one layer. K-fold cross-validation is employed at the various stages of training in this method. Nonetheless, another validation strategy is to try out several splits of data leading to different train and test sets for the base models and then use only the latter to train the metamodel—this is known as blending. In this work, we present a modification of an existing visual analytics system, entitled StackGenVis, that now supports the process of composing robust and diverse ensembles of models with both aforementioned methods. We have built multiple ensembles using our system with the two respective methods, and we tested the performance with six small- to large-sized data sets. The results indicate that stacking is significantly more powerful than blending based on three performance metrics. However, the training times of the base models and the final ensembles are lower and more stable during various train/test splits in blending rather than stacking.
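The two validation strategies contrasted in this abstract can be sketched with toy models: stacking builds out-of-fold base predictions via k-fold cross-validation, while blending fits the metamodel only on a holdout split. The base and meta "models" below are deliberately trivial stand-ins, not actual StackGenVis components:

```python
# Sketch: stacking-style out-of-fold predictions vs. blending with a holdout
# split, using toy regressors. All models here are trivial stand-ins.

def fit_mean(xs, ys):
    """'Model' that always predicts the mean of its training targets."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_last(xs, ys):
    """'Model' that always predicts the last training target it saw."""
    last = ys[-1]
    return lambda x: last

def oof_predictions(xs, ys, base_fits, k=2):
    """Stacking: k-fold cross-validation so that every training point gets a
    base prediction from a model that was not trained on it."""
    n = len(xs)
    preds = [[None] * len(base_fits) for _ in range(n)]
    for fold in range(k):
        test_idx = [i for i in range(n) if i % k == fold]
        train_idx = [i for i in range(n) if i % k != fold]
        for b, fit in enumerate(base_fits):
            model = fit([xs[i] for i in train_idx], [ys[i] for i in train_idx])
            for i in test_idx:
                preds[i][b] = model(xs[i])
    return preds

def blending(xs, ys, base_fits, holdout=2):
    """Blending: bases see only the train split; the metamodel (here, an
    average plus a bias term) is fitted only on the holdout split."""
    tr_x, tr_y = xs[:-holdout], ys[:-holdout]
    ho_x, ho_y = xs[-holdout:], ys[-holdout:]
    bases = [fit(tr_x, tr_y) for fit in base_fits]
    avg = lambda x: sum(b(x) for b in bases) / len(bases)
    bias = sum(y - avg(x) for x, y in zip(ho_x, ho_y)) / len(ho_x)
    return lambda x: avg(x) + bias

ens = blending([1, 2, 3, 4, 5], [1.0, 2.0, 3.0, 4.0, 5.0], [fit_mean, fit_last])
print(ens(6))  # -> 4.5
```

The trade-off the abstract reports follows from this structure: blending trains each base model once on a single split (faster, more stable timings), while stacking trains each base k times to obtain out-of-fold metamodel inputs (slower, but the metamodel sees predictions for every training point).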

  • 36.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Martins, Rafael Messias
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Kucher, Kostiantyn
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    FeatureEnVi: Visual Analytics for Feature Engineering Using Stepwise Selection and Semi-Automatic Extraction Approaches2022In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 28, no 4, p. 1773-1791Article in journal (Refereed)
    Abstract [en]

    The machine learning (ML) life cycle involves a series of iterative steps, from the effective gathering and preparation of the data—including complex feature engineering processes—to the presentation and improvement of results, with various algorithms to choose from in every step. Feature engineering in particular can be very beneficial for ML, leading to numerous improvements such as boosting the predictive results, decreasing computational times, reducing excessive noise, and increasing the transparency behind the decisions taken during the training. Despite that, while several visual analytics tools exist to monitor and control the different stages of the ML life cycle (especially those related to data and algorithms), feature engineering support remains inadequate. In this paper, we present FeatureEnVi, a visual analytics system specifically designed to assist with the feature engineering process. Our proposed system helps users to choose the most important feature, to transform the original features into powerful alternatives, and to experiment with different feature generation combinations. Additionally, data space slicing allows users to explore the impact of features on both local and global scales. FeatureEnVi utilizes multiple automatic feature selection techniques; furthermore, it visually guides users with statistical evidence about the influence of each feature (or subsets of features). The final outcome is the extraction of heavily engineered features, evaluated by multiple validation metrics. The usefulness and applicability of FeatureEnVi are demonstrated with two use cases and a case study. We also report feedback from interviews with two ML experts and a visualization researcher who assessed the effectiveness of our system.
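Stepwise selection, one of the approaches named in this abstract, can be illustrated with a greedy forward loop. The additive scoring function below is a toy assumption standing in for a real model-quality metric; FeatureEnVi combines several real selection techniques and statistical evidence:

```python
# Sketch: greedy forward (stepwise) feature selection. The additive scoring
# function is a toy assumption standing in for a real model-quality metric.

def forward_select(features, score, max_features=2):
    """Repeatedly add the feature that most improves score(subset); stop early
    when no remaining feature improves the score."""
    chosen = []
    while len(chosen) < max_features:
        candidates = [f for f in features if f not in chosen]
        if not candidates:
            break
        best = max(candidates, key=lambda f: score(chosen + [f]))
        if score(chosen + [best]) <= score(chosen):
            break  # no candidate helps: stop
        chosen.append(best)
    return chosen

# Toy score: "x1" matters most, "x2" adds a little, "x3" adds nothing.
gains = {"x1": 0.6, "x2": 0.2, "x3": 0.0}
score = lambda subset: sum(gains[f] for f in subset)
print(forward_select(["x3", "x2", "x1"], score))  # -> ['x1', 'x2']
```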

  • 37.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Martins, Rafael Messias
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Kucher, Kostiantyn
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics2021In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 27, no 2, p. 1547-1557Article in journal (Refereed)
    Abstract [en]

    In machine learning (ML), ensemble methods—such as bagging, boosting, and stacking—are widely-established approaches that regularly achieve top-notch predictive performance. Stacking (also called "stacked generalization") is an ensemble method that combines heterogeneous base models, arranged in at least one layer, and then employs another metamodel to summarize the predictions of those models. Although it may be a highly-effective approach for increasing the predictive performance of ML, generating a stack of models from scratch can be a cumbersome trial-and-error process. This challenge stems from the enormous space of available solutions, with different sets of data instances and features that could be used for training, several algorithms to choose from, and instantiations of these algorithms using diverse parameters (i.e., models) that perform differently according to various metrics. In this work, we present a knowledge generation model, which supports ensemble learning with the use of visualization, and a visual analytics system for stacked generalization. Our system, StackGenVis, assists users in dynamically adapting performance metrics, managing data instances, selecting the most important features for a given data set, choosing a set of top-performant and diverse algorithms, and measuring the predictive performance. In consequence, our proposed tool helps users to decide between distinct models and to reduce the complexity of the resulting stack by removing overpromising and underperforming models. The applicability and effectiveness of StackGenVis are demonstrated with two use cases: a real-world healthcare data set and a collection of data related to sentiment/stance detection in texts. Finally, the tool has been evaluated through interviews with three ML experts.

  • 38.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Martins, Rafael Messias
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Kucher, Kostiantyn
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    VisEvol: Visual Analytics to Support Hyperparameter Search through Evolutionary Optimization2021In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 40, no 3, p. 201-214Article in journal (Refereed)
    Abstract [en]

    During the training phase of machine learning (ML) models, it is usually necessary to configure several hyperparameters. This process is computationally intensive and requires an extensive search to infer the best hyperparameter set for the given problem. The challenge is exacerbated by the fact that most ML models are complex internally, and training involves trial-and-error processes that could remarkably affect the predictive result. Moreover, each hyperparameter of an ML algorithm is potentially intertwined with the others, and changing it might result in unforeseeable impacts on the remaining hyperparameters. Evolutionary optimization is a promising method to try and address those issues. According to this method, performant models are stored, while the remainder are improved through crossover and mutation processes inspired by genetic algorithms. We present VisEvol, a visual analytics tool that supports interactive exploration of hyperparameters and intervention in this evolutionary procedure. In summary, our proposed tool helps the user to generate new models through evolution and eventually explore powerful hyperparameter combinations in diverse regions of the extensive hyperparameter space. The outcome is a voting ensemble (with equal rights) that boosts the final predictive performance. The utility and applicability of VisEvol are demonstrated with two use cases and interviews with ML experts who evaluated the effectiveness of the tool.
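The evolutionary loop described above (keeping performant models, then improving the rest through crossover and mutation) can be sketched in a few lines. The fitness function and hyperparameter ranges below are toy assumptions, not VisEvol's:

```python
# Sketch: evolutionary hyperparameter search with selection, crossover, and
# mutation. The fitness function and parameter ranges are toy assumptions.
import random

def evolve(fitness, population, generations=20, keep=4, seed=0):
    rng = random.Random(seed)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)  # selection: keep the best
        parents = population[:keep]
        children = []
        while len(parents) + len(children) < len(population):
            a, b = rng.sample(parents, 2)
            child = {k: rng.choice([a[k], b[k]]) for k in a}  # crossover
            if rng.random() < 0.3:                            # mutation
                key = rng.choice(list(child))
                child[key] += rng.uniform(-0.5, 0.5)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Toy fitness: the "best model" has lr near 0.1 and depth near 3.
toy = lambda h: -((h["lr"] - 0.1) ** 2 + (h["depth"] - 3) ** 2)
pop = [{"lr": random.Random(i).uniform(0, 1),
        "depth": random.Random(i).uniform(1, 6)} for i in range(10)]
best = evolve(toy, pop)
print(toy(best) >= max(toy(h) for h in pop))  # elitism: never worse than start
```

Since the top `keep` hyperparameter sets survive every generation, the best fitness is monotone non-decreasing, which is the property the interactive intervention in such a tool builds on.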

  • 39.
    Chatzimparmpas, Angelos
    et al.
    Northwestern University, USA.
    Martins, Rafael Messias
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Telea, Alexandru C.
    Utrecht University, Netherlands.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    DeforestVis: Behavior Analysis of Machine Learning Models with Surrogate Decision Stumps2024In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659Article in journal (Refereed)
    Abstract [en]

    As the complexity of Machine Learning (ML) models increases and their application in different (and critical) domains grows, there is a strong demand for more interpretable and trustworthy ML. A direct, model-agnostic, way to interpret such models is to train surrogate models—such as rule sets and decision trees—that sufficiently approximate the original ones while being simpler and easier-to-explain. Yet, rule sets can become very lengthy, with many if-else statements, and decision tree depth grows rapidly when accurately emulating complex ML models. In such cases, both approaches can fail to meet their core goal—providing users with model interpretability. To tackle this, we propose DeforestVis, a visual analytics tool that offers summarization of the behavior of complex ML models by providing surrogate decision stumps (one-level decision trees) generated with the Adaptive Boosting (AdaBoost) technique. DeforestVis helps users to explore the complexity vs fidelity trade-off by incrementally generating more stumps, creating attribute-based explanations with weighted stumps to justify decision making, and analyzing the impact of rule overriding on training instance allocation between one or more stumps. An independent test set allows users to monitor the effectiveness of manual rule changes and form hypotheses based on case-by-case analyses. We show the applicability and usefulness of DeforestVis with two use cases and expert interviews with data analysts and model developers.
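A decision stump of the kind described above (a one-level decision tree used as a surrogate) can be fitted by exhaustive threshold search against the black-box model's outputs. The formulation below is a generic sketch, not DeforestVis itself, which derives weighted stumps via AdaBoost:

```python
# Sketch: fit a single decision stump (one-level decision tree) to mimic the
# 0/1 outputs of a black-box classifier. Generic exhaustive search, not
# DeforestVis itself (which derives weighted stumps via AdaBoost).

def fit_stump(X, labels):
    """X: list of feature vectors; labels: 0/1 black-box predictions.
    Returns (feature_index, threshold, left_label, right_label) with the
    fewest disagreements: predict left_label if x[f] <= threshold, else right_label."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            for left, right in ((0, 1), (1, 0)):
                errs = sum((left if row[f] <= t else right) != y
                           for row, y in zip(X, labels))
                if best is None or errs < best[0]:
                    best = (errs, f, t, left, right)
    return best[1:]

# Black box that answers 1 whenever the second feature exceeds 5:
X = [(0, 1), (2, 4), (3, 6), (1, 9)]
print(fit_stump(X, [0, 0, 1, 1]))  # -> (1, 4, 0, 1)
```

Each stump is a single human-readable rule, which is what makes a weighted collection of them usable for the attribute-based explanations the abstract describes.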

  • 40.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Park, Vilhelm
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    Evaluating StackGenVis with a Comparative User Study2022In: Proceedings of the 15th IEEE Pacific Visualization Symposium (PacificVis '22), IEEE, 2022, p. 161-165Conference paper (Refereed)
    Abstract [en]

    Stacked generalization (also called stacking) is an ensemble method in machine learning that deploys a metamodel to summarize the predictive results of heterogeneous base models organized into one or more layers. Despite being capable of producing high-performance results, building a stack of models can be a trial-and-error procedure. Thus, our previously developed visual analytics system, entitled StackGenVis, was designed to monitor and control the entire stacking process visually. In this work, we present the results of a comparative user study we performed for evaluating the StackGenVis system. We divided the study participants into two groups to test the usability and effectiveness of StackGenVis compared to Orange Visual Stacking (OVS) in an exploratory usage scenario using healthcare data. The results indicate that StackGenVis is significantly more powerful than OVS based on the qualitative feedback provided by the participants. However, the average completion time for all tasks was comparable between both tools.

  • 41.
    Chatzimparmpas, Angelos
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Paulovich, Fernando V.
    Eindhoven University of Technology, Netherlands.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    HardVis: Visual Analytics to Handle Instance Hardness Using Undersampling and Oversampling Techniques2023In: Computer graphics forum (Print), ISSN 0167-7055, E-ISSN 1467-8659, Vol. 42, no 1, p. 135-154Article in journal (Refereed)
    Abstract [en]

    Despite the tremendous advances in machine learning (ML), training with imbalanced data still poses challenges in many real-world applications. Among a series of diverse techniques to solve this problem, sampling algorithms are regarded as an efficient solution. However, the problem is more fundamental, with many works emphasizing the importance of instance hardness. This issue refers to the significance of managing unsafe or potentially noisy instances that are more likely to be misclassified and serve as the root cause of poor classification performance. This paper introduces HardVis, a visual analytics system designed to handle instance hardness mainly in imbalanced classification scenarios. Our proposed system assists users in visually comparing different distributions of data types, selecting types of instances based on local characteristics that will later be affected by the active sampling method, and validating which suggestions from undersampling or oversampling techniques are beneficial for the ML model. Additionally, rather than uniformly undersampling/oversampling a specific class, we allow users to find and sample easy and difficult to classify training instances from all classes. Users can explore subsets of data from different perspectives to decide all those parameters, while HardVis keeps track of their steps and evaluates the model’s predictive performance in a test set separately. The end result is a well-balanced data set that boosts the predictive power of the ML model. The efficacy and effectiveness of HardVis are demonstrated with a hypothetical usage scenario and a use case. Finally, we also look at how useful our system is based on feedback we received from ML experts.

    Download full text (pdf)
    fulltext
  • 42.
    Cláudio, Ana Paula
    et al.
    University of Lisbon, Portugal.
    Bouatouch, Kadi
    University of Rennes, France.
    Chessa, Manuela
    University of Genoa, Italy.
    Paljic, Alexis
    Mines ParisTech, France.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Hurter, Christophe
    French Civil Aviation University (ENAC), France.
    Tremeau, Alain
    University Jean Monnet, France.
    Farinella, Giovanni Maria
    University of Catania, Italy.
    Computer Vision, Imaging and Computer Graphics Theory and Applications: 14th International Joint Conference, VISIGRAPP 2019, Prague, Czech Republic, February 25–27, 2019, Revised Selected Papers2020Collection (editor) (Refereed)
    Abstract [en]

    This book constitutes thoroughly revised and selected papers from the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2019, held in Prague, Czech Republic, in February 2019. The 25 thoroughly revised and extended papers presented in this volume were carefully reviewed and selected from 395 submissions. The papers contribute to the understanding of relevant trends of current research on computer graphics; human computer interaction; information visualization; computer vision.

  • 43.
    Conroy, Melanie
    et al.
    University of Memphis, USA.
    Gillmann, Christina
    Leipzig University, Germany.
    Harvey, Francis
    Leibniz Institute for Regional Geography, Germany; University of Warsaw, Poland.
    Mchedlidze, Tamara
    Utrecht University, Netherlands.
    Fabrikant, Sara Irina
    University of Zürich, Switzerland.
    Windhager, Florian
    Danube University Krems, Austria.
    Scheuermann, Gerik
    Leipzig University, Germany.
    Tangherlini, Timothy R.
    University of California, USA.
    Warren, Christopher N.
    Carnegie Mellon University, USA.
    Weingart, Scott B.
    University of Edinburgh, UK.
    Rehbein, Malte
    Passau University, Germany.
    Börner, Katy
    Indiana University, USA.
    Elo, Kimmo
    University of Turku, Finland.
    Jänicke, Stefan
    University of Southern Denmark, Denmark.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    Nöllenburg, Martin
    TU Wien, Austria.
    Dwyer, Tim
    Monash University, Australia.
    Eide, Øyvind
    University of Cologne, Germany.
    Kobourov, Stephen
    University of Arizona, USA.
    Betz, Gregor
    Karlsruhe Institute of Technology, Germany.
    Uncertainty in humanities network visualization2024In: Frontiers in Communication, E-ISSN 2297-900X, Vol. 8, article id 1305137Article in journal (Refereed)
    Abstract [en]

    Network visualization is one of the most widely used tools in digital humanities research. The idea of uncertain or “fuzzy” data is also a core notion in digital humanities research. Yet network visualizations in digital humanities do not always prominently represent uncertainty. In this article, we present a mathematical and logical model of uncertainty as a range of values which can be used in network visualizations. We review some of the principles for visualizing uncertainty of different kinds, visual variables that can be used for representing uncertainty, and how these variables have been used to represent different data types in visualizations drawn from a range of non-humanities fields like climate science and bioinformatics. We then provide examples of two diagrams: one in which the variables displaying degrees of uncertainty are integrated into the graph and one in which glyphs are added to represent data certainty and uncertainty. Finally, we discuss how probabilistic data and what-if scenarios could be used to expand the representation of uncertainty in humanities network visualizations.

    Download full text (pdf)
    fulltext
  • 44. Coquillart, Sabine
    et al.
    Andujar, Carlos
    Laramee, Robert S.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Braz, José
    GRAPP 2013 and IVAPP 2013: Proceedings of the International Conference on Computer Graphics Theory and Applications and International Conference on Information Visualization Theory and Applications2013Conference proceedings (editor) (Refereed)
  • 45. Einsfeld, Katja
    et al.
    Ebert, Achim
    Kerren, Andreas
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering.
    Deller, Matthias
    Knowledge Generation Through Human-Centered Information Visualization2009In: Information Visualization, ISSN 1473-8716, E-ISSN 1473-8724, Vol. 8, no 3, p. 180-196Article in journal (Refereed)
    Abstract [en]

    One important intention of human-centered information visualization is to represent huge amounts of abstract data in a visual representation that allows even users from foreign application domains to interact with the visualization, to understand the underlying data, and finally, to gain new, application-related knowledge. The visualization will help experts as well as non-experts to link previously isolated knowledge items in their mental map with new insights. Our approach explicitly supports the process of linking knowledge items with three concepts. First, the representation of data items in an ontology categorizes and relates them. Second, the use of various visualization techniques visually correlates isolated items by graph structures, layout, attachment, integration, or hyperlink techniques. Third, the intensive use of visual metaphors relates a known source domain to a less known target domain. In order to realize a scenario of these concepts, we developed a visual interface for non-experts to maintain complex wastewater treatment plants. This domain-specific application is used to give our concepts a meaningful background.

  • 46.
    Espadoto, Mateus
    et al.
    University of São Paulo, Brazil; University of Groningen, Netherlands.
    Martins, Rafael Messias
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Hirata, Nina S. T.
    University of São Paulo, Brazil.
    Telea, Alexandru C.
    Utrecht University, Netherlands.
    Toward a Quantitative Survey of Dimension Reduction Techniques2021In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 27, no 3, p. 2153-2173Article in journal (Refereed)
    Abstract [en]

    Dimensionality reduction methods, also known as projections, are frequently used in multidimensional data exploration in machine learning, data science, and information visualization. Tens of such techniques have been proposed, aiming to address a wide set of requirements, such as ability to show the high-dimensional data structure, distance or neighborhood preservation, computational scalability, stability to data noise and/or outliers, and practical ease of use. However, it is far from clear for practitioners how to choose the best technique for a given use context. We present a survey of a wide body of projection techniques that helps answering this question. For this, we characterize the input data space, projection techniques, and the quality of projections, by several quantitative metrics. We sample these three spaces according to these metrics, aiming at good coverage with bounded effort. We describe our measurements and outline observed dependencies of the measured variables. Based on these results, we draw several conclusions that help comparing projection techniques, explain their results for different types of data, and ultimately help practitioners when choosing a projection for a given context. Our methodology, datasets, projection implementations, metrics, visualizations, and results are publicly open, so interested stakeholders can examine and/or extend this benchmark.

  • 47.
    Feyer, Stefan P.
    et al.
    University of Konstanz, Germany.
    Pinaud, Bruno
    University of Bordeaux, France.
    Kobourov, Stephen
    University of Arizona, USA.
    Brich, Nicolas
    University of Tübingen, Germany.
    Krone, Michael
    University of Tübingen, Germany; New York University, USA.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    Behrisch, Michael
    Utrecht University, Netherlands.
    Schreiber, Falk
    University of Konstanz, Germany; Monash University, Australia.
    Klein, Karsten
    University of Konstanz, Germany.
    2D, 2.5D, or 3D?: An Exploratory Study on Multilayer Network Visualisations in Virtual Reality2024In: IEEE Transactions on Visualization and Computer Graphics, ISSN 1077-2626, E-ISSN 1941-0506, Vol. 30, no 1, p. 469-479Article in journal (Refereed)
    Abstract [en]

    Relational information between different types of entities is often modelled by a multilayer network (MLN) - a network with subnetworks represented by layers. The layers of an MLN can be arranged in different ways in a visual representation; however, the impact of the arrangement on the readability of the network is an open question. Therefore, we studied this impact for several commonly occurring tasks related to MLN analysis. Additionally, layer arrangements with a dimensionality beyond 2D, which are common in this scenario, motivate the use of stereoscopic displays. We ran a human subject study utilising a Virtual Reality headset to evaluate 2D, 2.5D, and 3D layer arrangements. The study employs six analysis tasks that cover the spectrum of an MLN task taxonomy, from path finding and pattern identification to comparisons between and across layers. We found no clear overall winner. However, we explore the task-to-arrangement space and derive empirically based recommendations on the effective use of 2D, 2.5D, and 3D layer arrangements for MLNs.

  • 48.
    Golub, Koraljka
    et al.
    Linnaeus University, Faculty of Arts and Humanities, Department of Cultural Sciences.
    Tyrkkö, Jukka
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Jusufi, Ilir
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Media Technology.
    Ardö, Anders
    Lund University, Sweden.
    Automatic subject classification for improving retrieval in a Swedish repository2017In: ISKO UK Conference 2017: Knowledge Organization: what's the story?, 11 – 12 September 2017, London, 2017Conference paper (Refereed)
    Abstract [en]

    The recent adoption of the Dewey Decimal Classification (DDC) in Sweden has ignited discussions about automated subject classification, especially for digital collections, which generally seem to lack subject indexing from controlled vocabularies. This is particularly problematic in the context of academic resource retrieval tasks, which require an understanding of discipline-specific terminologies and the narratives behind their internal ontologies. The currently available experimental classification software has not been adequately tested and its usefulness is unproven, especially for Swedish-language resources. We address these issues by investigating a unifying framework of automatic subject indexing for the DDC, including an analysis of suitable interactive visualisation features for supporting these aims. We will address the disciplinary narratives behind the DDC in selected subject areas, and the preliminary results will include an analysis of the data collection and a breakdown of the methodology. Major visualisation possibilities in support of the classification process are also outlined. The project will contribute significantly to Swedish information infrastructure by improving the findability of Swedish research resources through subject searching, one of the most common yet most challenging types of searching.

    Download full text (pdf)
    Poster
  • 49. Hagen, Hans
    et al.
    Kerren, Andreas
    Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Computer Science.
    Dannenmann, Peter
    Visualization of Large and Unstructured Data Sets2006Book (Other (popular science, discussion, etc.))
  • 50.
    Huang, Zeyang
    et al.
    Linköping University, Sweden.
    Kucher, Kostiantyn
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM). Linköping University, Sweden.
    Towards an Exploratory Visual Analytics System for Multivariate Subnetworks in Social Media Analysis2022In: Poster Abstracts, IEEE Visualization and Visual Analytics (VIS '22), IEEE, 2022Conference paper (Refereed)
    Abstract [en]

    Identifying sociolinguistic attributes of inter-community interactions is essential for understanding the polarization of social network communities. A wide range of computational text and network analysis methods may be applicable for this task; however, interpretation of the respective results and investigation of particularly interesting cases and subnetworks are difficult due to the scale and complexity of the data, e.g., for the Reddit platform. In this poster paper, we present an interactive visual analysis interface that facilitates network exploration and comparison at different topological and multivariate attribute scales. Users are able to investigate text- and network-based properties of social network community interactions, identify anomalies of conflict starters, or gain insight into multivariate anomalies behind groups of negative social media posts.
