lnu.se Publications

1 - 29 of 29
  • 1.
    Cernea, Daniel
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    User-Centered Collaborative Visualization (2015). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The last couple of years have marked the entire field of information technology with the introduction of a new global resource, called data. Certainly, one can argue that large amounts of information and highly interconnected and complex datasets have been available since the dawn of the computer and even centuries before. However, it has been only a few years since digital data has exponentially expanded, diversified and interconnected into an overwhelming range of domains, generating an entire universe of zeros and ones. This universe represents a source of information with the potential of advancing a multitude of fields and sparking valuable insights. To obtain this information, the data needs to be explored, analyzed and interpreted.

    While a large set of problems can be addressed through automatic techniques from fields like artificial intelligence, machine learning or computer vision, various datasets and domains still rely on human intuition and experience to parse and discover hidden information. In such instances, the data is usually structured and presented in the form of an interactive visual representation that allows users to efficiently explore the data space and reach valuable insights. However, the experience, knowledge and intuition of a single person also have their limits. To address this, collaborative visualizations allow multiple users to communicate, interact and explore a visual representation by building on the different views and knowledge blocks contributed by each person.

    In this dissertation, we explore the potential of subjective measurements and user emotional awareness in collaborative scenarios, as well as support flexible and user-centered collaboration in information visualization systems running on tabletop displays. We commence by introducing the concept of user-centered collaborative visualization (UCCV) and highlighting the context in which it applies. We continue with a thorough overview of the state-of-the-art in the areas of collaborative information visualization, subjectivity measurement and emotion visualization, combinable tabletop tangibles, as well as browsing history visualizations. Based on a new web browser history visualization for exploring user parallel browsing behavior, we introduce two novel user-centered techniques for supporting collaboration in co-located visualization systems. To begin with, we inspect the particularities of detecting user subjectivity through brain-computer interfaces, and present two emotion visualization techniques for touch and desktop interfaces. These visualizations offer real-time or post-task feedback about the users’ affective states, both in single-user and collaborative settings, thus increasing emotional self-awareness and the awareness of other users’ emotions. For supporting collaborative interaction, a novel design for tabletop tangibles is described together with a set of specifically developed interactions for supporting tabletop collaboration. These ring-shaped tangibles minimize occlusion, support touch interaction, can act as interaction lenses, and express logical operations through nesting. The visualization and the two UCCV techniques are each evaluated individually, capturing the advantages and limitations of each approach. Additionally, the collaborative visualization supported by the two UCCV techniques is collectively evaluated in three user studies that offer insight into the specifics of interpersonal interaction and task transition in collaborative visualization. The results show that the proposed collaboration support techniques not only improve the efficiency of the visualization, but also help maintain the collaboration process and aid balanced social interaction.

  • 2.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science. University of Kaiserslautern.
    Ebert, Achim
    University of Kaiserslautern.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    A Study of Emotion-triggered Adaptation Methods for Interactive Visualization (2013). In: UMAP 2013 Extended Proceedings: Late-Breaking Results, Project Papers and Workshop Proceedings of the 21st Conference on User Modeling, Adaptation, and Personalization, Rome, Italy, June 10-14, 2013 / [ed] Shlomo Berkovsky, Eelco Herder, Pasquale Lops & Olga C. Santos, CEUR-WS.org, 2013, Vol. 997, p. 9-16. Conference paper (Refereed)
    Abstract [en]

    As the size and complexity of datasets increase, both visualization systems and their users are put under more pressure to offer quick and thorough insights about patterns hidden in this ocean of data. While novel visualization techniques are being developed to better cope with the various data contexts, users find themselves increasingly often under mental bottlenecks that can induce a variety of emotions. In this paper, we execute a study to investigate the effectiveness of various emotion-triggered adaptation methods for visualization systems. The emotions considered are boredom and frustration, and are measured by means of brain-computer interface technology. Our findings suggest that less intrusive adaptive methods perform better at supporting users in overcoming emotional states with low valence or arousal, while more intrusive ones tend to be misinterpreted or perceived as irritating.

  • 3.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Ebert, Achim
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Visualizing Group Affective Tone in Collaborative Scenarios (2014). Conference paper (Refereed)
    Abstract [en]

    A large set of complex datasets requires the use of collaborative visualization solutions in order to harness the knowledge and experience of multiple experts. However, be it co-located or distributed, the collaboration process is inherently fragile, as small mistakes in communication or various human aspects can quickly derail it. In this paper, we introduce a novel visualization technique that highlights the group affective tone (GAT), that is, the presence of homogeneous emotional reactions within a group. The goal of our visualization is to improve users’ awareness of GAT, thus fostering a positive group affective tone, which has been shown to increase effectiveness and creativity in collaborative scenarios.

  • 4.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Ebert, Achim
    Kerren, Andreas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Morar, Valentina
    R3 - Un dispozitiv de intrare configurabil pentru interacţiunea liberă în spaţiu [R3: A Configurable Input Device for Free-Space Interaction] (2010). In: Romanian Journal of Human-Computer Interaction, ISSN 1843-4460, Vol. 3, p. 45-50. Article in journal (Refereed)
    Abstract [en]

    Recently, the problem of implementing input devices that support 3D interaction by offering six or more degrees of freedom (DoF) has been addressed ever more frequently. However, such devices that support free-space interaction (i.e., without requiring a surface as a frame of reference, as a mouse does) are designed only for a narrow range of applications. Moreover, input devices of this kind are rarely intuitive to use and limited in number. To address these problems, in this article we propose a device of low complexity and implementation cost that can be used in free space and is highly configurable, natively supporting intuitive interaction with a variety of virtual environments. R3 (roll, rotate, rattle) offers the accuracy required for navigation and pointing, both in 2D and in 3D, in modeling applications and games, as well as tactile feedback through its trackball, all in a user-oriented manner. In addition, the device can easily be switched into mouse mode, thus providing support for interaction with conventional operating systems at any time.

  • 5.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science. University of Kaiserslautern, Germany.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    A Survey of Technologies on the Rise for Emotion-Enhanced Interaction (2015). In: Journal of Visual Languages and Computing, ISSN 1045-926X, E-ISSN 1095-8533, Vol. 31, no A, p. 70-86. Article in journal (Refereed)
    Abstract [en]

    Emotions are a major part of human existence and social interaction. Some might say that emotions are one of the aspects that make us truly human. However, while we express emotions in various life settings, the world of computing seems to struggle with supporting and incorporating the emotional dimension. In recent decades, the concept of affect has gained new momentum in research, moving beyond topics like market research and product development, and further exploring the area of emotion-enhanced interaction.

    In this article, we highlight techniques that have been employed more intensely for emotion measurement in the context of affective interaction. Besides capturing the functional principles behind these approaches and the inherent volatility of human emotions, we present relevant applications and establish a categorization of the roles of emotion detection in interaction. Based on these findings, we also capture the main challenges that emotion measuring technologies will have to overcome in order to enable a truly seamless emotion-driven interaction.

  • 6.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Kerren, Andreas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Ebert, Achim
    Detecting Insight and Emotion in Visualization Applications with a Commercial EEG Headset (2011). In: Proceedings of the SIGRAD 2011 Conference on Evaluations of Graphics and Visualization - Efficiency, Usefulness, Accessibility, Usability, KTH, Stockholm, Sweden, Linköping: Linköping University Electronic Press, 2011, p. 53-60. Conference paper (Refereed)
    Abstract [en]

    Insight represents a special element of knowledge building. From the beginning of their lives, humans experience moments of insight in which a certain idea or solution becomes clearer to them than ever before. Especially in the field of visual representations, insight has the potential to be at the core of comprehension and pattern recognition. Still, one problem is that this moment of clarity is highly unpredictable and complex in nature, and many scientists have investigated different aspects of its generation process in the hope of capturing the essence of this eureka (Greek for "I have found") moment.

    In this paper, we look at insight from the perspective of information visualization. In particular, we inspect the possible correlation between epiphanies and the emotional responses subjects experience when having an insight. In order to check the existence of such a connection, we employ a set of initial tests involving the EPOC mobile electroencephalographic (EEG) headset for detecting emotional responses generated by insights. The insights are generated by open-ended tasks that take the form of visual riddles and visualization applications. Our results suggest that there is a strong connection between insight and emotions like frustration and excitement. Moreover, measuring emotional responses via EEG during insight-related problem solving results in non-intrusive, nearly automatic detection of the major Aha! moments the user experiences. We argue that this indirect detection of insights opens the door for the objective evaluation and comparison of various visualization techniques.

  • 7.
    Cernea, Daniel
    et al.
    University of Kaiserslautern.
    Mora, Simone
    Norwegian University of Science and Technology.
    Perez, Alfredo
    Norwegian University of Science and Technology.
    Ebert, Achim
    University of Kaiserslautern.
    Kerren, Andreas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Divitini, Monica
    Norwegian University of Science and Technology.
    Gil de la Iglesia, Didac
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Otero, Nuno
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics. University of Minho, Portugal.
    Tangible and Wearable User Interfaces for Supporting Collaboration among Emergency Workers (2012). In: Collaboration and Technology: 18th International Conference, CRIWG 2012, Raesfeld, Germany, September 16-19, 2012, Proceedings / [ed] Valeria Herskovic, H. Ulrich Hoppe, Marc Jansen, Jürgen Ziegler, Springer, 2012, Vol. 7493, p. 192-199. Conference paper (Refereed)
    Abstract [en]

    Ensuring a constant flow of information is essential for offering quick help in different types of disasters. In the following, we report on a work-in-progress distributed, collaborative and tangible system for supporting crisis management. On the one hand, field operators need devices that collect information—personal notes and sensor data—without interrupting their work. On the other hand, a disaster management system must operate in different scenarios and be available to people with different preferences, backgrounds and roles. Our work addresses these issues by introducing a multi-level collaborative system that manages real-time data flow and analysis for various rescue operators.

  • 8.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Olech, Peter-Scott
    Ebert, Achim
    Kerren, Andreas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Controlling In-Vehicle Systems with a Commercial EEG Headset: Performance and Cognitive Load (2012). In: Visualization of Large and Unstructured Data Sets: Applications in Geospatial Planning, Modeling and Engineering - Proceedings of IRTG 1131 Workshop 2, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2012. Conference paper (Refereed)
    Abstract [en]

    Humans have dreamed for centuries of controlling their surroundings solely by the power of their minds. These aspirations have been captured by multiple science fiction creations, like the Neuromancer novel by William Gibson or the movie Brainstorm, to name just a few. Nowadays these dreams are slowly becoming reality due to a variety of brain-computer interfaces (BCI) that detect neural activation patterns and support the control of devices by brain signals.

    An important field in which BCIs are being successfully integrated is the interaction with vehicular systems. In this paper we evaluate the performance of BCIs, more specifically a commercial electroencephalographic (EEG) headset, in combination with vehicle dashboard systems, and highlight the advantages and limitations of this approach. Further, we investigate the cognitive load that drivers experience when interacting with secondary in-vehicle devices via touch controls or a BCI headset. As in-vehicle systems are increasingly versatile and complex, it becomes vital to capture the level of distraction and errors that controlling these secondary systems might introduce to the primary driving process. Our results suggest that control with the EEG headset introduces less distraction for the driver, probably as it allows the eyes of the driver to remain focused on the road. Still, the control of the vehicle dashboard by EEG is efficient only for a limited number of functions, after which increasing the number of in-vehicle controls amplifies the detection of false commands.

  • 9.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics. University of Kaiserslautern, Germany.
    Olech, Peter-Scott
    University of Kaiserslautern, Germany.
    Ebert, Achim
    University of Kaiserslautern, Germany.
    Kerren, Andreas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    EEG-based Measurement of Subjective Parameters in Evaluations (2011). In: HCI International 2011 Posters' Extended Abstracts: International Conference, HCI International 2011, Orlando, FL, USA, July 9-14, 2011, Proceedings, Part II / [ed] Stephanidis, Constantine, Berlin Heidelberg: Springer, 2011, p. 279-283. Conference paper (Refereed)
    Abstract [en]

    Evaluating new approaches, be it new interaction techniques, new applications or even new hardware, is an important task, which has to be done to ensure both usability and user satisfaction. The drawback of evaluating subjective parameters is that this can be relatively time-consuming, and the outcome is possibly quite imprecise. Considering the recent release of cost-efficient commercial EEG headsets, we propose the utilization of electroencephalographic (EEG) devices for evaluation purposes. The goal of our research is to evaluate whether a commercial EEG headset can provide cutting-edge support during user studies and evaluations. Our results are encouraging and suggest that wireless EEG technology is a viable alternative for measuring subjectivity in evaluation scenarios.

  • 10.
    Cernea, Daniel
    et al.
    University of Kaiserslautern.
    Olech, Peter-Scott
    Ebert, Achim
    Kerren, Andreas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Measuring Subjectivity: Supporting Evaluations with the Emotiv EPOC Neuroheadset (2012). In: Künstliche Intelligenz, ISSN 0933-1875, E-ISSN 1610-1987, Vol. 26, no 2, p. 177-182. Article in journal (Refereed)
    Abstract [en]

    Since the dawn of the industrial era, modern devices and interaction methods have undergone rigorous evaluations in order to ensure their functionality and quality, as well as usability. While there are many methods for measuring objective data, capturing and interpreting subjective factors—like the feelings or states of mind of the users—is still an imprecise and usually post-event process. In this paper we propose the utilization of the Emotiv EPOC commercial electroencephalographic (EEG) neuroheadset for real-time support during evaluations and user studies. We show in two evaluation scenarios that the wireless EPOC headsets can be used efficiently for supporting subjectivity measurement. Additionally, we highlight situations that may result in a lower accuracy, as well as explore possible reasons and propose solutions for improving the error rates of the device.

  • 11.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science. University of Kaiserslautern, Germany.
    Truderung, Igor
    University of Kaiserslautern, Germany.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Ebert, Achim
    An Interactive Visualization for Tabbed Browsing Behavior Analysis (2014). In: Computer Vision, Imaging and Computer Graphics: Theory and Applications / [ed] Sebastiano Battiato, Sabine Coquillart, Robert S. Laramee, Andreas Kerren, and José Braz, Springer, 2014, p. 69-84. Chapter in book (Refereed)
    Abstract [en]

    Web browsers are at the core of online user experience, enabling a wide range of Web applications, like communication, games, entertainment, development, etc. Additionally, given the variety and complexity of online-supported tasks, users have started parallelizing and organizing their online browser sessions by employing multiple browser windows and tabs. However, there are few solutions that support analysts and casual users in detecting and extracting patterns from these parallel browsing histories. In this paper we introduce WebComets, an interactive visualization for exploring multi-session multi-user parallel browsing logs. After highlighting visual and functional aspects of the system, we introduce a motif-based contextual search for enabling the filtering and comparison of user navigation patterns. We further highlight the functionality of WebComets with a use case. Our investigations suggest that parallel browser history visualization can offer better insight into user tabbed browsing behavior and support the recognition of online navigation patterns.

  • 12.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Truderung, Igor
    University of Kaiserslautern, Germany.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Ebert, Achim
    University of Kaiserslautern, Germany.
    WebComets: A Tab-Oriented Approach for Browser History Visualization (2013). In: / [ed] S. Coquillart, C. Andujar, R. S. Laramee, A. Kerren, and J. Braz, SciTePress, 2013, p. 439-450. Conference paper (Refereed)
    Abstract [en]

    Web browsers are our main gateways to the Internet. With their help we read articles, we learn, we listen to music, we share our thoughts and feelings, we write e-mails, or we chat. Current Web browser histories mostly lack visualization capabilities and offer only limited options to filter patterns and information. Furthermore, such histories disregard the existence of parallel navigation in multiple browser windows and tabs. But a good understanding of parallel browsing behavior is of critical importance for the casual user and the behavioural analyst, while at the same time having implications for the design of search engines, Web sites and Web browsers. In this paper we present WebComets, an interactive visualization for extended browser histories. Our visualization employs browser histories that capture—among other things—the tab-oriented, parallel nature of Web page navigation. Results presented in this paper suggest that WebComets better supports the analysis and comparison of parallel browsing and corresponding behavior patterns than common browser histories.

  • 13.
    Cernea, Daniel
    et al.
    Technische Univ. Kaiserslautern.
    Weber, Christopher
    Technische Univ. Kaiserslautern.
    Ebert, Achim
    Technische Univ. Kaiserslautern.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Emotion Scents: A Method of Representing User Emotions on GUI Widgets (2013). In: Proceedings of SPIE 8654: Visualization and Data Analysis 2013, Burlingame, California, USA, February 3, 2013, SPIE - International Society for Optical Engineering, 2013, p. 86540F. Conference paper (Refereed)
    Abstract [en]

    The world of desktop interfaces has been dominated for years by the concept of windows and standardized user interface (UI) components. Still, while supporting the interaction and information exchange between the users and the computer system, graphical user interface (GUI) widgets are rather one-sided, neglecting to capture the subjective facets of the user experience. In this paper, we propose a set of design guidelines for visualizing user emotions on standard GUI widgets (e.g., buttons, check boxes, etc.) in order to enrich the interface with a new dimension of subjective information by adding support for emotion awareness as well as post-task analysis and decision making. We highlight the use of an EEG headset for recording the various emotional states of the user while he/she is interacting with the widgets of the interface. We propose a visualization approach, called emotion scents, that allows users to view emotional reactions corresponding to different GUI widgets without influencing the layout or changing the positioning of these widgets. Our approach does not focus on highlighting the emotional experience during the interaction with an entire system, but on representing the emotional perceptions and reactions generated by the interaction with a particular UI component. Our research is motivated by enabling emotional self-awareness and subjectivity analysis through the proposed emotion-enhanced UI components for desktop interfaces. These assumptions are further supported by an evaluation of emotion scents.

  • 14.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science. Univ Kaiserslautern, Germany.
    Weber, Christopher
    Univ Kaiserslautern, Germany.
    Ebert, Achim
    Univ Kaiserslautern, Germany.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Emotion-Prints: Interaction-Driven Emotion Visualization on Multi-Touch Interfaces (2015). In: Proceedings of SPIE 9397: Visualization and Data Analysis 2015, San Francisco, CA, USA, February 8-12, 2015 / [ed] David L. Kao, Ming C. Hao, Mark A. Livingston, and Thomas Wischgoll, SPIE - International Society for Optical Engineering, 2015, p. 93970A. Conference paper (Refereed)
    Abstract [en]

    Emotions are one of the unique aspects of human nature, and sadly at the same time one of the elements that our technological world is failing to capture and consider due to their subtlety and inherent complexity. But with the current dawn of new technologies that enable the interpretation of emotional states based on techniques involving facial expressions, speech and intonation, electrodermal response (EDS) and brain-computer interfaces (BCIs), we are finally able to access real-time user emotions in various system interfaces. In this paper we introduce emotion-prints, an approach for visualizing user emotional valence and arousal in the context of multi-touch systems. Our goal is to offer a standardized technique for representing user affective states in the moment when and at the location where the interaction occurs, in order to increase affective self-awareness, support awareness in collaborative and competitive scenarios, and offer a framework for aiding the evaluation of touch applications through emotion visualization. We show that emotion-prints are not only independent of the shape of the graphical objects on the touch display, but also that they can be applied regardless of the acquisition technique used for detecting and interpreting user emotions. Moreover, our representation can encode any affective information that can be decomposed or reduced to Russell’s two-dimensional space of valence and arousal. Our approach is supported by a BCI-based user study and a follow-up discussion of advantages and limitations.

  • 15.
    Cernea, Daniel
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science. University of Kaiserslautern.
    Weber, Christopher
    UC Davis, Department of Computer Science.
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Ebert, Achim
    University of Kaiserslautern.
    Group Affective Tone Awareness and Regulation through Virtual Agents (2014). In: Proceedings of the Workshop on Affective Agents: Fourteenth International Conference on Intelligent Virtual Agents (IVA 2014), 2014, p. 9-16. Conference paper (Refereed)
    Abstract [en]

    It happens increasingly often that experts need to collaborate in order to exchange ideas, views and opinions on their path towards understanding. However, every collaboration process is inherently fragile and involves a large set of human subjective aspects, including social interaction, personality, and emotions. In this paper we present Pogat, an affective virtual agent designed to support the collaboration process around displays by increasing user awareness of the group affective tone. A positive group affective tone, where all the participants of a group experience emotions of a positive valence, has been linked to fostering creativity in groups and supporting the entire collaboration process. At the same time, a negative or nonexistent group affective tone can suggest negative emotions in some of the group members, emotions that can lead to an inefficient or even obstructed collaboration. A study of our approach suggests that Pogat can increase the awareness of the overall affective state of the group as well as positively affect the efficiency of groups in collaborative scenarios.

  • 16.
    Ebert, Achim
    et al.
    University of Kaiserslautern, Germany.
    Weber, Christopher
    University of Kaiserslautern, Germany.
    Cernea, Daniel
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Petsch, Sebastian
    University of Kaiserslautern, Germany.
    TangibleRings: Nestable Circular Tangibles (2013). In: CHI '13 Extended Abstracts on Human Factors in Computing Systems, ACM Press, 2013, p. 1617-1622. Conference paper (Refereed)
    Abstract [en]

    The multitouch functionality of tabletop computers is often augmented by the use of tangible objects that offer an intuitive and haptic alternative to interaction and manipulation. However, employing tangibles can also lead to less desirable effects, such as occlusion or lack of precision. In this paper we highlight the design and implementation of ring-like tangible objects: TangibleRings. They do not occlude the objects underneath them and also support the detection of touch events inside their perimeter. Additionally, multiple rings may be nested within one another in order to combine ring functionalities or produce more complex filters.

  • 17.
    Elmqvist, Niklas
    et al.
    Purdue University, USA.
    Vande Moere, Andrew
    K. U. Leuven University, Belgium.
    Jetter, Hans-Christian
    University of Konstanz, Germany.
    Cernea, Daniel
    University of Kaiserslautern, Germany.
    Reiterer, Harald
    University of Konstanz, Germany.
    Jankun-Kelly, TJ
    Mississippi State University, USA.
    Fluid Interaction for Information Visualization (2011). In: Information Visualization, ISSN 1473-8716, E-ISSN 1473-8724, Vol. 10, no 4, p. 327-340. Article in journal (Refereed)
    Abstract [en]

    Despite typically receiving little emphasis in visualization research, interaction in visualization is the catalyst for the user’s dialogue with the data, and, ultimately, the user’s actual understanding and insight into these data. There are many possible reasons for this skewed balance between the visual and interactive aspects of a visualization. One reason is that interaction is an intangible concept that is difficult to design, quantify, and evaluate. Unlike for visual design, there are few examples that show visualization practitioners and researchers how to design the interaction for a new visualization in the best manner. In this article, we attempt to address this issue by collecting examples of visualizations with ‘best-in-class’ interaction and using them to extract practical design guidelines for future designers and researchers. We call this concept fluid interaction, and we propose an operational definition in terms of the direct manipulation and embodied interaction paradigms, the psychological concept of ‘flow’, and Norman’s gulfs of execution and evaluation.

  • 18.
    Isenberg, Petra
    et al.
    Université Paris-Sud, France.
    Elmqvist, Niklas
    Purdue University, USA.
    Scholtz, Jean
    Pacific Northwest National Laboratory, USA.
    Cernea, Daniel
    University of Kaiserslautern, Germany.
    Ma, Kwan-Liu
    University of California-Davis, USA.
    Hagen, Hans
    University of Kaiserslautern, Germany.
    Collaborative Visualization: Definition, Challenges, and Research Agenda (2011). In: Information Visualization, ISSN 1473-8716, E-ISSN 1473-8724, Vol. 10, no 4, p. 310-326. Article in journal (Refereed)
    Abstract [en]

    The conflux of two growing areas of technology – collaboration and visualization – into a new research direction, collaborative visualization, provides new research challenges. Technology now allows us to easily connect and collaborate with one another – in settings as diverse as over networked computers, across mobile devices, or using shared displays such as interactive walls and tabletop surfaces. Digital information is now regularly accessed by multiple people in order to share information, to view it together, to analyze it, or to form decisions. Visualizations are used to deal more effectively with large amounts of information while interactive visualizations allow users to explore the underlying data. While researchers face many challenges in collaboration and in visualization, the emergence of collaborative visualization poses additional challenges, but it is also an exciting opportunity to reach new audiences and applications for visualization tools and techniques.

    The purpose of this article is (1) to provide a definition, clear scope, and overview of the evolving field of collaborative visualization, (2) to help pinpoint the unique focus of collaborative visualization with its specific aspects, challenges, and requirements within the intersection of general computer-supported cooperative work and visualization research, and (3) to draw attention to important future research questions to be addressed by the community. We conclude by discussing a research agenda for future work on collaborative visualization and urge for a new generation of visualization tools that are designed with collaboration in mind from their very inception.

  • 19.
    Kerren, Andreas
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Cernea, Daniel
    AGT International, Germany.
    Pohl, Margit
    Technical University of Vienna, Austria.
    Proceedings of EmoVis 2016: ACM IUI 2016 Workshop on Emotion and Visualization (2016). Conference proceedings (editor) (Refereed)
  • 20.
    Kerren, Andreas
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Computer Science.
    Cernea, Daniel
    AGT International, Germany.
    Pohl, Margit
    Technical University of Vienna, Austria.
    Workshop on Emotion and Visualization: EmoVis 2016 (2016). In: Companion Publication of the 21st International Conference on Intelligent User Interfaces, New York, NY, USA: ACM Publications, 2016, p. 1-2. Conference paper (Other academic)
  • 21.
    Kucher, Kostiantyn
    et al.
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Cernea, Daniel
    Kerren, Andreas
    Linnaeus University, Faculty of Technology, Department of Computer Science.
    Visualizing Excitement of Individuals and Groups (2016). In: Proceedings of the ACM IUI 2016 Workshop on Emotion and Visualization (EmoVis '16) / [ed] Andreas Kerren, Daniel Cernea, and Margit Pohl, Linköping, Sweden: Linköping University Electronic Press, 2016, p. 15-22. Conference paper (Refereed)
    Abstract [en]

    Excitement or arousal is one of the main emotional dimensions that affects our lives on a daily basis. We win a tennis match, watch a great movie, get into an argument with a colleague—all of these are instances when most of us experience excitement, yet we do not pay much attention to it. Today, there are few systems that capture our excitement levels and even fewer that actually promote awareness of our most exciting moments. In this paper, we propose a visualization concept for representing individual and group-level excitement for emotional self-awareness and group-level awareness. The data used for the visualization is obtained from smart wristbands worn by each of the users. The visualization uses animated glyphs to generate a real-time representation for each individual’s excitement levels. We introduce two types of encodings for these glyphs: one focusing on capturing both the current excitement and the excitement history, as well as another focusing only on real-time values and previous peaks. The excitement levels are computed based on measurements of the user’s galvanic skin response and accelerometer data from the wristbands, allowing for a classification of the excitement levels into experienced (excitement without physical manifestation) and manifested excitement. A dynamic clustering of the individual glyphs supports the scalability of our visualization, while at the same time offering an overview of the group-level excitement and its distribution. The results of a preliminary evaluation suggest that the visualization allows users to intuitively and accurately perceive both individual and group-level excitement. 

  • 22.
    Olech, Peter-Scott
    et al.
    Cernea, Daniel
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Meyer, Helge
    Ebert, Achim
    Digital Interactive Public Pinboards for Disaster and Crisis Management: Concept and Prototype Design (2012). In: Proceedings of the 2012 International Conference on Information and Knowledge Engineering (IKE '12) at the 2012 World Congress in Computer Science, Computer Engineering, and Applied Computing (WorldComp '12), CSREA Press, 2012. Conference paper (Refereed)
    Abstract [en]

    Recent natural disasters, like the earthquakes in Port-au-Prince, Haiti (2010) and Christchurch, New Zealand (2011) and the Tohoku earthquake in Japan (2011), which also triggered a tsunami resulting in the catastrophic failure of numerous nuclear power plants, pose the question of how to support first responders in providing fast and adequate help. When first responders arrive on-site, it is crucial that the flow of information is ensured: important fields are logistics, communication, personnel management and the deployment of up-to-date information. Unfortunately, the events of the past have shown that there are certain shortcomings, especially in terms of communicating information from local responders to arriving responders. Our approach proposes the utilization of large public displays, seizing the idea of traditional pinboards, referred to in our work as Digital Interactive Public Pinboards (DIPP). DIPPs are set up in hot-spot locations and provide fast and reliable information to first responders as well as citizens in the area of the natural disaster.

  • 23.
    Olech, Peter-Scott
    et al.
    Cernea, Daniel
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Thelen, Sebastian
    Ebert, Achim
    Kerren, Andreas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Hagen, Hans
    V.I.P.: Supporting Digital Earth Ideas through Visualization, Interaction and Presentation Screens (2010). In: Proceedings of the 7th Taipei International Digital Earth Symposium (TIDES '10), 2010. Conference paper (Refereed)
  • 24.
    Sehgal, Anuj
    et al.
    Cernea, Daniel
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    A Multi-AUV Missions Simulation Framework for the USARSim Robotics Simulator (2010). In: Proceedings of the 18th Mediterranean Conference on Control and Automation (MED’10), 2010. Conference paper (Refereed)
  • 25.
    Sehgal, Anuj
    et al.
    Jacobs University Bremen, Germany.
    Cernea, Daniel
    Jacobs University Bremen, Germany.
    Birk, Andreas
    Jacobs University Bremen, Germany.
    Modeling Underwater Acoustic Communications for Multi-Robot Missions in a Robotics Simulator (2010). In: OCEANS 2010 IEEE - Sydney, IEEE, 2010. Conference paper (Refereed)
  • 26.
    Sehgal, Anuj
    et al.
    Cernea, Daniel
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Birk, Andreas
    Simulating Underwater Acoustic Communications in a High Fidelity Robotics Simulator (2010). In: Proceedings of the 7th IFAC Symposium on Intelligent Autonomous Vehicles (IAV '10) / [ed] Indiveri, Giovanni, Pascoal, Antonio M., Elsevier, 2010, p. 587-592. Conference paper (Refereed)
    Abstract [en]

    The maximum potential for underwater exploration rests within the use of multiple Autonomous Underwater Vehicles (AUVs) and tasks involving human diver-AUV coordination. Such missions are gaining increasing popularity with the advent of better control mechanisms and the availability of acoustic modems. However, high costs and the lack of useful tools to simulate multi-AUV tasks have hindered the full potential of the field. Though simulators could aid the development of AUV communication systems to help such missions, the few existing simulators focus upon simulating a single vehicle and, as such, do not provide tools for simulating underwater communication systems. In this paper we present an overview of modeling the underwater acoustic channel, taking into account the high degree of local variability of ocean conditions, multi-path echoes and ambient noise, within the framework of an underwater acoustic communications server for the Unified System for Automation and Robotics Simulator (USARSim), a simulation tool capable of modeling multi-robot missions with high accuracy.

  • 27.
    Sehgal, Anuj
    et al.
    Jacobs University Bremen, Germany.
    Cernea, Daniel
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Makaveeva, Milena
    Indian Underwater Robotics Society.
    Pose Estimation and Trajectory Derivation from Underwater Imagery (2012). Conference paper (Refereed)
    Abstract [en]

    Obtaining underwater imagery is normally a costly affair, since expensive equipment such as multi-beam sonar scanners needs to be utilized. Even though such scanners provide imagery in the form of 3D point clouds, the tasks of locating accurate and dependable correspondences between point clouds and registration can be quite slow. Registered 3D point clouds can provide pose estimation and trajectory information vital to the navigation of a robot; however, the slow speed of point cloud registration normally means that maps are generated offline for later use. Furthermore, any algorithm must be robust against artifacts in 3D range data, as sensor motion, reflection and refraction are commonplace. In our work we describe the use of the SIFT feature detector on scaled images based on point clouds captured by sonar in order to register them in real-time. This online registration approach is used to derive navigational information vital to underwater vehicles. The algorithm utilizes the known point correspondence registration algorithm in order to achieve real-time registration of point clouds, thereby generating 3D maps in real-time and providing 3D pose estimation and trajectory information.

  • 28.
    Sehgal, Anuj
    et al.
    Indian Underwater Robotics Society, Noida, India.
    Cernea, Daniel
    University of Kaiserslautern, 67653, Kaiserslautern, Germany.
    Makaveeva, Milena
    Jacobs University Bremen, Bremen, Germany.
    Real-Time Scale Invariant 3D Range Point Cloud Registration (2010). In: Image Analysis and Recognition: 7th International Conference, ICIAR 2010, Póvoa de Varzim, Portugal, June 21-23, 2010, Proceedings, Part I, Springer, 2010, p. 220-229. Conference paper (Refereed)
    Abstract [en]

    Stereo cameras, laser rangers and other time-of-flight ranging devices are utilized with increasing frequency, as they can provide information in the 3D plane. The ability to perform real-time registration of the 3D point clouds obtained from these sensors is important in many applications. However, the tasks of locating accurate and dependable correspondences between point clouds and registration can be quite slow. Furthermore, any algorithm must be robust against artifacts in 3D range data, as sensor motion, reflection and refraction are commonplace. The SIFT feature detector is a robust algorithm used to locate features, but it cannot be extended directly to 3D range point clouds, since it requires dense pixel information, whereas the range voxels are sparsely distributed. This paper proposes an approach which enables SIFT application to locate scale- and rotation-invariant features in 3D point clouds. The algorithm then utilizes the known point correspondence registration algorithm in order to achieve real-time registration of 3D point clouds.

  • 29.
    Thelen, Sebastian
    et al.
    Cernea, Daniel
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Olech, Peter-Scott
    Kerren, Andreas
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Ebert, Achim
    D.I.P. – A Digital Interactive Pinboard with Support for Smart Device Interaction (2010). In: Proceedings of the IASTED International Conference on Portable Lifestyle Devices (PLD '10), ACTA Press, 2010. Conference paper (Refereed)
    Abstract [en]

    Smartphone devices are more popular than ever and, because of their processing power, able to deal with almost any kind of multimedia content, i.e., videos, audio, images, and text documents. Files are often shared in direct ways, for example via email, or by uploading them to one of the popular internet platforms. In this paper we present the Digital Interactive Pinboard (DIP), an alternative approach for sharing multimedia files via smartphones on large display systems. DIPs are based on the idea of traditional bulletin boards and, like these, are set up in well-visited places, so-called hotspots. Systems implementing the approach do not just act as passive public displays, but support a series of advanced interaction concepts optimized for smartphones. DIPs imply a strong social component, since physical presence is required to interact with board items, which can be organized based on various group-specific criteria. We argue that the combination of physical and virtual aspects has an impact on how users interact with the system and the way information is perceived. We introduce and analyze the approach from a conceptual point of view, as well as from the implementation side. We further describe two scenarios in which our approach has been employed to exchange information in well-frequented locations.
