lnu.se Publications
1 - 23 of 23
  • 1.
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Laitinen, Mikko
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Tyrkkö, Jukka
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Levin, Magnus
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Visualizing rich corpus data using virtual reality (2019). In: Studies in Variation, Contacts and Change in English, ISSN 1797-4453, Vol. 20. Article in journal (Refereed)
    Abstract [en]

    We demonstrate an approach that utilizes immersive virtual reality (VR) to explore and interact with corpus linguistics data. Our case study focuses on the language identification parameter in the Nordic Tweet Stream corpus, a dynamic corpus of Twitter data where each tweet originated within the Nordic countries. We demonstrate how VR can provide previously unexplored perspectives into the use of English and other non-indigenous languages in the Nordic countries alongside the native languages of the region and showcase its geospatial variation. We utilize a head-mounted display (HMD) for a room-scale VR scenario that allows 3D interaction by using hand gestures. In addition to spatial movement through the Nordic areas, the interface enables exploration of the Twitter data based on time (days, weeks, months, or time of predefined special events), making it particularly useful for diachronic investigations.

    In addition to demonstrating how the VR methods aid data visualization and exploration, we briefly discuss the pedagogical implications of using VR to showcase linguistic diversity. Our empirical results detail students’ reactions to working in this environment. The discussion part examines the benefits, prospects and limitations of using VR in visualizing corpus data.

  • 2.
    Alissandrakis, Aris
    et al.
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Milrad, Marcelo
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Designing interactive mobile services to promote civic participation in northern Uganda (2013). In: ICT for Anti-Corruption, Democracy and Education in East Africa / [ed] Katja Sarajeva, Stockholm: Stockholm University, 2013, p. 53-65. Chapter in book (Refereed)
    Abstract [en]

    This chapter presents the activities and outcomes of the "People's Voices: Developing Cross Media Services to Promote Citizens Participation in Local Governance Activities" project.

    The aims of the project were a) to identify and describe a number of cross media services that can be used to promote citizens’ participation in political decisions and civic activities, and b) to develop a conceptual design and a prototype system of such a service. The project included a number of field trips from Sweden to Uganda, and used participatory design and ethnographic techniques for requirements elicitation, actively involving the different stakeholders. The developed system allows people in Uganda to use their mobile phones to submit reports of irregularities in local governance or poor service delivery using an interactive voice menu interface.

    We hope that our specific contribution emphasizes how novel ways of integrating and using ICT can encourage and facilitate civic engagement in northern Uganda. Widespread adoption of the kind of interactive mobile services described in this chapter could make governmental services more innovative, transparent and cost-effective, as well as encourage citizens to become more engaged and goal-focused for the common good of their society.
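
    The chapter gives no implementation details, so the following is only a loose sketch of the kind of interactive voice menu it describes, modeled as a small state machine over keypad inputs; every menu prompt, option, and state name here is a hypothetical illustration.

```python
# Hypothetical sketch (not from the chapter): a citizen-reporting voice
# menu as a state machine over keypad input. Prompts, options, and the
# report categories are invented for illustration.

MENU = {
    "root": ("Press 1 to report a governance issue, 2 for service delivery.",
             {"1": "governance", "2": "service"}),
    "governance": ("Press 1 for corruption, 2 for mismanagement.",
                   {"1": "record", "2": "record"}),
    "service": ("Press 1 for water, 2 for health, 3 for roads.",
                {"1": "record", "2": "record", "3": "record"}),
    "record": ("After the tone, please describe the problem.", {}),
}

def ivr_session(keypresses):
    """Walk the menu for a sequence of keypad inputs; return final state and path."""
    state, path = "root", []
    for key in keypresses:
        prompt, options = MENU[state]
        if key not in options:
            return state, path  # invalid input: the caller would replay `prompt`
        path.append(key)
        state = options[key]
    return state, path

print(ivr_session(["2", "1"]))  # -> ('record', ['2', '1'])
```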

  • 3.
    Alissandrakis, Aris
    et al.
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Nake, Isabella
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    A New Approach for Visualizing Quantified Self Data Using Avatars (2016). In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, New York, NY, USA: ACM Press, 2016, p. 522-527. Conference paper (Refereed)
    Abstract [en]

    In recent years, it has become more common for people to use applications or devices that keep track of their life and activities, such as physical fitness, places they visit, the music they listen to, or pictures they take. This generates data that the service providers use for a variety of (usually analytics) purposes, but there are commonly limitations on how the users themselves can explore or interact with these data. Our position paper describes a new approach to visualizing such Quantified Self data in a meaningful and enjoyable way that can give users personal insights into their own data. The visualization is proposed as an avatar that maps the different activities the user is engaged with, along with each activity's level, onto graphical features. An initial prototype (in terms of both graphical design and software architecture) as well as possible future extensions are discussed.
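
    As a rough sketch of the mapping the abstract proposes (not the authors' implementation), activity levels can be normalized and bound to avatar features; the activity sources, feature names, and ranges below are assumptions.

```python
# Hypothetical sketch: map Quantified Self activity levels onto avatar
# features. Activity names, feature names, and ranges are assumptions.

def normalize(value, lo, hi):
    """Clamp an activity measure into [0, 1] to use as a feature level."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def avatar_features(weekly):
    """Map one week of activity data onto graphical avatar features."""
    return {
        # e.g., more steps -> more athletic-looking legs
        "leg_strength": normalize(weekly.get("steps", 0), 0, 70_000),
        # e.g., more music listened to -> larger headphones
        "headphone_size": normalize(weekly.get("songs_played", 0), 0, 300),
        # e.g., frequent photography -> show a camera accessory
        "camera_visible": weekly.get("photos_taken", 0) > 10,
    }

print(avatar_features({"steps": 42_000, "songs_played": 120, "photos_taken": 3}))
# -> {'leg_strength': 0.6, 'headphone_size': 0.4, 'camera_visible': False}
```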

  • 4.
    Alissandrakis, Aris
    et al.
    Dept. of Comput. Intell. & Syst. Sci., Tokyo Inst. of Technol., Tokyo, Japan.
    Otero, Nuno
    Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
    Saunders, Joe
    Adaptive Systems Research Group, University of Hertfordshire, Hatfield, UK.
    Dautenhahn, Kerstin
    Adaptive Systems Research Group, University of Hertfordshire, Hatfield, UK.
    Nehaniv, Chrystopher
    Adaptive Systems Research Group, University of Hertfordshire, Hatfield, UK.
    Helping Robots Imitate: Metrics and Computational Solutions Inspired by Human-Robot Interaction Studies (2010). In: Advances in Cognitive Systems / [ed] Samia Nefti-Meziani and John Gray, Institution of Engineering and Technology, 2010, p. 127-167. Chapter in book (Refereed)
    Abstract [en]

    In this chapter we describe three lines of research related to the issue of helping robots imitate people. These studies are based on observed human behaviour, technical metrics and implemented technical solutions. The three lines of research are: (a) a number of user studies that show how humans naturally tend to demonstrate a task for a robot to learn, (b) a formal approach to tackle the problem of what a robot should imitate, and (c) a technology-driven conceptual framework and technique, inspired by social learning theories, that addresses how a robot can be taught. In this merging exercise we will try to propose a way through this problem space, towards the design of a Human-Robot Interaction (HRI) system able to be taught by humans via demonstration.
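
    In the spirit of the metrics the chapter refers to for the what-to-imitate problem, a minimal sketch (not the authors' formalism) might score an imitation attempt by comparing the effects that the demonstrator and the imitator produce on shared objects; the object names and the distance measure are assumptions.

```python
# Hypothetical effect-based imitation metric: compare per-object
# displacements caused by demonstrator and imitator.

import math

def effect(before, after):
    """Per-object displacement vectors caused by an action sequence."""
    return {obj: (after[obj][0] - before[obj][0],
                  after[obj][1] - before[obj][1]) for obj in before}

def effect_dissimilarity(demo_effect, imit_effect):
    """Mean Euclidean distance between corresponding object displacements
    (0.0 means the imitator reproduced the demonstrated effects exactly)."""
    dists = [math.dist(demo_effect[o], imit_effect[o]) for o in demo_effect]
    return sum(dists) / len(dists)

demo = effect({"cup": (0, 0)}, {"cup": (2, 0)})   # demonstrator slid the cup
imit = effect({"cup": (0, 0)}, {"cup": (2, 1)})   # imitator's attempt
print(effect_dissimilarity(demo, imit))           # -> 1.0
```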

  • 5.
    Alissandrakis, Aris
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Media Technology.
    Reski, Nico
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM), Department of Media Technology.
    Using Mobile Augmented Reality to Facilitate Public Engagement (2017). In: Extended Papers of the International Symposium on Digital Humanities (DH 2016) / [ed] Koraljka Golub, Marcelo Milrad, CEUR-WS, 2017, Vol. 2021, p. 99-109. Conference paper (Refereed)
    Abstract [en]

    This paper presents our initial efforts towards the development of a framework for facilitating public engagement through the use of mobile Augmented Reality (mAR), which falls under the overall project title "Augmented Reality for Public Engagement" (PEAR). We present the concept and implementation, and discuss the results from the deployment of a mobile phone app (PEAR 4 VXO). The mobile app was used for a user study in conjunction with a campaign carried out by Växjö municipality (Sweden) exploring how to get citizens more engaged in urban planning actions and decisions. These activities took place during spring 2016. One of the salient features of our approach is that it combines novel ways of using mAR together with social media, online databases, and sensors to support public engagement. In addition, the data collection process and audience engagement were tested in a follow-up limited deployment. The analysis and outcomes of our initial results validate the overall concept and indicate the potential usefulness of the app as a tool, but also highlight the need for an active campaign on the part of the stakeholders. Our future efforts will focus on addressing some of the problems and challenges that we identified during the different phases of this user study.

  • 6.
    Alissandrakis, Aris
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Reski, Nico
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Laitinen, Mikko
    University of Eastern Finland, Finland.
    Tyrkkö, Jukka
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Levin, Magnus
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Lundberg, Jonas
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Visualizing dynamic text corpora using Virtual Reality (2018). In: ICAME 39: Tampere, 30 May – 3 June, 2018: Corpus Linguistics and Changing Society: Book of Abstracts, Tampere: University of Tampere, 2018, p. 205-205. Conference paper (Refereed)
    Abstract [en]

    In recent years, data visualization has become a major area in Digital Humanities research, and the same holds true also in linguistics. The rapidly increasing size of corpora, the emergence of dynamic real-time streams, and the availability of complex and enriched metadata have made it increasingly important to facilitate new and innovative approaches to presenting and exploring primary data. This demonstration showcases the uses of Virtual Reality (VR) in the visualization of geospatial linguistic data using data from the Nordic Tweet Stream (NTS) project (see Laitinen et al. 2017). The NTS data for this demonstration comprises a full year of geotagged tweets (12,443,696 tweets from 273,648 user accounts) posted within the Nordic region (Denmark, Finland, Iceland, Norway, and Sweden). The dataset includes over 50 metadata parameters in addition to the tweets themselves.

    We demonstrate the potential of using VR to efficiently find meaningful patterns in vast streams of data. The VR environment allows an easy overview of any of the features (textual or metadata) in a text corpus. Our focus will be on the language identification data, which provides a previously unexplored perspective into the use of English and other non-indigenous languages in the Nordic countries alongside the native languages of the region.

    Our VR prototype utilizes the HTC Vive headset for a room-scale VR scenario, and it is being developed using the Unity3D game development engine. Each node in the VR space is displayed as a stacked cuboid, the equivalent of a bar chart in a three-dimensional space, summarizing all tweets at one geographic location for a given point in time (see: https://tinyurl.com/nts-vr). Each stacked cuboid represents information of the three most frequently used languages, appropriately color coded, enabling the user to get an overview of the language distribution at each location. The VR prototype further encourages users to move between different locations and inspect points of interest in more detail (overall location-related information, a detailed list of all languages detected, the most frequently used hashtags). An underlying map outlines country borders and facilitates orientation. In addition to spatial movement through the Nordic areas, the VR system provides an interface to explore the Twitter data based on time (days, weeks, months, or time of predefined special events), which enables users to explore data over time (see: https://tinyurl.com/nts-vr-time).

    In addition to demonstrating how the VR methods aid data visualization and exploration, we will also briefly discuss the pedagogical implications of using VR to showcase linguistic diversity.
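
    A minimal sketch of the aggregation the stacked cuboids imply (the prototype itself is built in Unity3D; this illustrative pre-processing step is an assumption): group geotagged tweets by location and keep each location's top three language shares.

```python
# Assumed pre-processing sketch, not the prototype's Unity3D code:
# summarize (location, language) pairs into per-location top-3 language
# shares, the data behind one stacked cuboid per location.

from collections import Counter, defaultdict

def language_stacks(tweets, top_n=3):
    """tweets: iterable of (location, language) pairs, e.g. drawn from the
    NTS language-identification metadata. Returns, per location, the
    top_n languages with their shares of that location's traffic."""
    per_location = defaultdict(Counter)
    for location, lang in tweets:
        per_location[location][lang] += 1
    stacks = {}
    for location, counts in per_location.items():
        total = sum(counts.values())
        stacks[location] = [(lang, n / total)
                            for lang, n in counts.most_common(top_n)]
    return stacks

tweets = [("Växjö", "sv"), ("Växjö", "en"), ("Växjö", "sv"), ("Oslo", "no")]
print(language_stacks(tweets))
# -> {'Växjö': [('sv', 0.67), ('en', 0.33)], 'Oslo': [('no', 1.0)]}  (rounded)
```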

  • 7.
    Cooney, Martin
    et al.
    ATR Intelligent Robotics and Communication Laboratories, Japan.
    Becker-Asano, Christian
    ATR Intelligent Robotics and Communication Laboratories, Japan.
    Kanda, Takayuki
    ATR Intelligent Robotics and Communication Laboratories, Japan.
    Alissandrakis, Aris
    ATR Intelligent Robotics and Communication Laboratories, Japan.
    Ishiguro, Hiroshi
    ATR Intelligent Robotics and Communication Laboratories, Japan.
    Full-body gesture recognition using inertial sensors for playful interaction with small humanoid robot (2010). In: Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010), IEEE Press, 2010, p. 2276-2282. Conference paper (Refereed)
    Abstract [en]

    People like to play, and robotic technology offers the opportunity to interact with artifacts in new ways. Robots co-existing with humans in domestic and public environments are expected to behave as companions, also engaging in playful interaction. If a robot is small, we foresee that people will want to be able to pick it up and express their intentions playfully by hugging, shaking and moving it around in various ways. Such robots will need to recognize these gestures, which we call "full-body gestures" because they affect the robot’s full body. Inertial sensors inside the robot could be used to detect these gestures, in order to avoid having to rely on external sensors in the environment. However, it is not obvious which gestures typically occur during play, and which of these can be reliably detected. We therefore investigate full-body gesture recognition using Sponge Robot, a small humanoid robot equipped with inertial sensors and designed for playful human-robot interaction.
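
    The paper centers on which gestures occur and can be reliably detected; as a generic illustration only (not the authors' recognizer), inertial readings can be windowed, summarized into features, and matched against per-gesture templates.

```python
# Hypothetical sketch of a full-body gesture recognition pipeline:
# per-window accelerometer features plus nearest-centroid matching.
# The features and classifier are illustrative assumptions.

import numpy as np

def window_features(accel):
    """accel: (N, 3) array of accelerometer samples for one window.
    Summarize by per-axis mean and standard deviation (6 features)."""
    return np.concatenate([accel.mean(axis=0), accel.std(axis=0)])

def train_centroids(labeled_windows):
    """labeled_windows: list of (label, (N, 3) array); one centroid per gesture."""
    by_label = {}
    for label, accel in labeled_windows:
        by_label.setdefault(label, []).append(window_features(accel))
    return {label: np.mean(feats, axis=0) for label, feats in by_label.items()}

def classify(centroids, accel):
    """Assign a new window to the gesture with the closest feature centroid."""
    f = window_features(accel)
    return min(centroids, key=lambda lbl: np.linalg.norm(centroids[lbl] - f))

rng = np.random.default_rng(0)
shake = ("shake", rng.normal(0, 3.0, size=(50, 3)))  # vigorous motion
hug = ("hug", rng.normal(0, 0.3, size=(50, 3)))      # gentle motion
centroids = train_centroids([shake, hug])
print(classify(centroids, rng.normal(0, 3.0, size=(50, 3))))  # likely 'shake'
```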

  • 8.
    Cooney, Martin
    et al.
    Adv Telecommun Res Inst Int IRC HIL, Keihanna Sci City, Kyoto, Japan.
    Kanda, Takayuki
    Adv Telecommun Res Inst Int IRC HIL, Keihanna Sci City, Kyoto, Japan.
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Ishiguro, Hiroshi
    Adv Telecommun Res Inst Int IRC HIL, Keihanna Sci City, Kyoto, Japan.
    Designing Enjoyable Motion-Based Play Interactions with a Small Humanoid Robot (2014). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 6, no 2, p. 173-193. Article in journal (Refereed)
    Abstract [en]

    Robots designed to co-exist with humans in domestic and public environments should be capable of interacting with people in an enjoyable fashion in order to be socially accepted. In this research, we seek to set up a small humanoid robot with the capability to provide enjoyment to people who pick up the robot and play with it by hugging, shaking and moving the robot in various ways. Inertial sensors inside a robot can capture how its body is moved when people perform such "full-body gestures". It is unclear, however, how a robot can recognize what people do during play, and how such knowledge can be used to provide enjoyment. People's behavior is complex, and naive designs for a robot's behavior based only on intuitive knowledge from previous designs may lead to failed interactions. To solve these problems, we model people's behavior using typical full-body gestures observed in free interaction trials, and devise an interaction design based on avoiding typical failures observed in play sessions with a naive version of our robot. The interaction design is completed by investigating how a robot can provide "reward" and itself suggest ways to play during an interaction. We then verify experimentally that our design can be used to provide enjoyment during a playful interaction. By describing the process of how a small humanoid robot can be designed to provide enjoyment, we seek to move one step closer to realizing companion robots which can be successfully integrated into human society.

  • 9.
    Cooney, Martin
    et al.
    ATR Laboratories, Japan.
    Kanda, Takayuki
    ATR Laboratories, Japan.
    Alissandrakis, Aris
    University of Hertfordshire, UK.
    Ishiguro, Hiroshi
    ATR Laboratories, Japan.
    Interaction Design for an Enjoyable Play Interaction with a Small Humanoid Robot (2011). In: Proceedings of the 2011 11th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2011), IEEE Press, 2011, p. 112-119. Conference paper (Refereed)
    Abstract [en]

    Robots designed to act as companions are expected to be able to interact with people in an enjoyable fashion. In particular, our aim is to enable small companion robots to respond in a pleasant way when people pick them up and play with them. To this end, we developed a gesture recognition system capable of recognizing play gestures which involve a person moving a small humanoid robot's full body (“full-body gestures”). However, such recognition by itself is not enough to provide a nice interaction. We find that interactions with an initial, naïve version of our system frequently fail. The question then becomes: what sort of design is required to create successful interactions? To answer this question, we analyze typical failures which occur and compile a list of guidelines. Then, we implement this model in our robot, proposing strategies for how a robot can provide “reward” and suggest goals for the interaction. Finally, we conduct a validation experiment. We find that our interaction design with “persisting intentions” can be used to establish an enjoyable play interaction.

  • 10.
    Dadzie, Aba-Sah
    et al.
    KMi, The Open University, UK.
    Müller, Maximilian
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Milrad, Marcelo
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Collaborative Learning through Creative Video Composition on Distributed User Interfaces (2016). In: State-of-the-Art and Future Directions of Smart Learning / [ed] Li, Y., Chang, M., Kravcik, M., Popescu, E., Huang, R., Kinshuk, Chen, N.-S., Springer, 2016, 1, p. 199-210. Chapter in book (Refereed)
    Abstract [en]

    We report two studies that fed into user-centred design for pedagogical and technological scaffolds for social, constructive learning through creative, collaborative, reflective video composition. The studies validated this learning approach and verified the utility and usability of an initial prototype (scaffold) built to support it. However, challenges in interaction with the target technology, multi-touch tabletops, impacted the ability to carry out the prescribed learning activities. Our findings point to the need to investigate an alternative approach and informed the redesign of our scaffolds. We propose coupling distributed user interfaces, using mobile devices to access large, shared displays, to augment the capability to follow our constructive learning process. We also discuss the need to manage recognised challenges to collaboration with a distributed approach.

  • 11.
    Herault, Romain Christian
    et al.
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    An Application for Speech and Language Therapy Using Customized Interaction Between Physical Objects and Mobile Devices (2016). In: Proceedings of the 24th International Conference on Computers in Education / [ed] Chen, W. et al., India: Asia-Pacific Society for Computers in Education, 2016, p. 477-482. Conference paper (Refereed)
    Abstract [en]

    This paper presents a prototype that facilitates the work of Speech and Language Therapists (SLTs) by providing an Android mobile device application that allows the therapist to focus on the patient rather than taking notes during exercises. Each physical object used by the therapist in those exercises can be given digital properties using Near Field Communication (NFC) tags, and the registration does not require a high level of ICT skills from the therapists. SLTs often use such objects in non-technology-driven exercises that deal with classification, seriation and inclusion. The application offers such exercises, developed in close collaboration with two SLTs, and our aim was to provide therapists with a way to efficiently record activities while working with a patient using a mobile application. The tool was validated through several expert reviews, a usability study, and a trial with a patient in Paris, France.
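
    A minimal sketch of the data model such an app implies (assumed, not the actual Android implementation): NFC tag UIDs give physical objects digital properties, and each scan during an exercise is logged automatically so the therapist does not need to take notes.

```python
# Hypothetical data model: NFC-tagged therapy objects plus an automatic
# exercise log. Field names and example values are assumptions.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TherapyObject:
    tag_uid: str   # NFC tag identifier read from the physical object
    name: str      # e.g. "red ball"
    category: str  # digital property used in classification/seriation exercises

@dataclass
class ExerciseLog:
    events: list = field(default_factory=list)

    def record_scan(self, obj: TherapyObject, exercise: str, correct: bool):
        """Append one event; the app would call this on each NFC read."""
        self.events.append((datetime.now(), exercise, obj.name, correct))

log = ExerciseLog()
ball = TherapyObject(tag_uid="04:A2:2B", name="red ball", category="toys")
log.record_scan(ball, exercise="classification", correct=True)
print(log.events)
```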

  • 12.
    Hoppe, H. Ulrich
    et al.
    Rhine-Ruhr Institute for Applied System Innovation (RIAS), Germany.
    Müller, Maximilian
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Milrad, Marcelo
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Schneegass, Christina
    Rhine-Ruhr Institute for Applied System Innovation (RIAS), Germany.
    Malzahn, Nils
    Rhine-Ruhr Institute for Applied System Innovation (RIAS), Germany.
    "VC/DC" - Video versus Domain Concepts in Comments to Learner-generated Science Videos2016In: Proceedings of the 24th International Conference on Computers in Education. India: Asia-Pacific Society for Computers in Education / [ed] Weiqin Chen et al., India: Asia-Pacific Society for Computers in Education, 2016, p. 172-181Conference paper (Refereed)
    Abstract [en]

    The recently finished EU project JuxtaLearn aimed at supporting students' learning of STEM subjects through the creation, exchange and discussion of learner-made videos. The approach is based on an eight-stage activity cycle, at the beginning of which teachers identify specific "stumbling blocks" for a given theme (or "tricky topic"). In JuxtaLearn, video comments were analyzed to extract information on the learners' acquisition and understanding of domain concepts, especially to detect problems and misconceptions. These analyses were based on mapping texts to networks of concepts ("network-text analysis") as a basis for further processing. In this article we use data collected from recent field trials to shed light on what is actually discussed when students share their own videos in science domains. Would the aspect of video-making dominate over activities related to a deepening of domain understanding? Our findings indicate that there are different ways of balancing both aspects, and that interventions will be needed to bring forth the desired blend.
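
    The project analyzed comments via network-text analysis; as a toy stand-in for that idea only, simple keyword sets can separate video-making talk from domain-concept talk. Both term lists below are invented for illustration.

```python
# Toy stand-in for the VC/DC distinction (not the project's
# network-text analysis): count video-making vs. domain-concept terms
# in one learner comment. Both term sets are invented examples.

VIDEO_TERMS = {"camera", "editing", "sound", "lighting", "cut", "music"}
DOMAIN_TERMS = {"osmosis", "membrane", "concentration", "diffusion", "cell"}

def comment_balance(comment):
    """Return (video_hits, domain_hits) for one learner comment."""
    tokens = {t.strip(".,!?").lower() for t in comment.split()}
    return len(tokens & VIDEO_TERMS), len(tokens & DOMAIN_TERMS)

print(comment_balance("Great editing, but why does diffusion stop at the membrane?"))
# -> (1, 2): more domain discussion than video-making talk
```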

  • 13.
    Müller, Maximilian
    et al.
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Otero, Nuno
    Linnaeus University, Faculty of Technology, Department of Media Technology. Instituto Universitário de Lisboa, Portugal.
    There is more to come: Anticipating content on interactive public displays through timer animations (2016). In: PerDis 2016: Proceedings of the 5th ACM International Symposium on Pervasive Displays, ACM Press, 2016, p. 247-248. Conference paper (Refereed)
    Abstract [en]

    We experience a continuously growing number of public displays deployed in a diverse range of settings. Often these displays contain a variety of full-screen content for the audience that is organized by a scheduler application. However, such public display systems often fail to communicate their full set of content and features, nor do they hint at schedule information. In this paper, we present and describe a timer control we implemented in our public display applications to communicate schedule and application information to the audience, which helps manage expectations and anticipation around public displays. We also report initial insights from studies about how this kind of design feature supported the audience in engaging with the public displays.
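
    A minimal sketch (assumed, not the deployed code) of the scheduling logic behind such a timer: the scheduler knows the content playlist and can expose how long the current item has left, which is what a countdown animation would render.

```python
# Hypothetical public-display scheduler exposing time-remaining for the
# current content item. Playlist items and durations are assumptions.

import time

class Scheduler:
    def __init__(self, playlist):
        """playlist: list of (app_name, duration_seconds), looped forever."""
        self.playlist = playlist
        self.cycle = sum(d for _, d in playlist)

    def now_playing(self, t=None):
        """Return (current app, seconds until the next one starts)."""
        t = time.time() if t is None else t
        offset = t % self.cycle
        for name, duration in self.playlist:
            if offset < duration:
                return name, duration - offset
            offset -= duration

sched = Scheduler([("video quiz", 60), ("news ticker", 30), ("photo wall", 30)])
print(sched.now_playing(t=75))  # -> ('news ticker', 15.0)
```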

  • 14.
    Müller, Maximilian
    et al.
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Otero, Nuno
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Milrad, Marcelo
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Application features to convey peers' interactions to engage users in a display network (2015). In: Proceedings of the 4th International Symposium on Pervasive Displays, New York, NY, USA: ACM Press, 2015, p. 267-268. Conference paper (Refereed)
    Abstract [en]

    Recent socio-technological developments have shown growing interest in interactive pervasive computing scenarios supported by public displays. One of the main challenges in the design of public display systems is still how to engage users to interact and keep them motivated to do so. In this work, we describe application features, implemented in our public display system, which aim to convey awareness of local and remote peers' interactions with an educational video installation in order to engage users. This is facilitated by dynamic pop-up notifications and visualizations of interactions on the display screen. A first deployment and study showed that users found these presentations of peer interactions to be engaging, both with the display system as well as with the social context around it.

  • 15.
    Müller, Maximilian
    et al.
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Otero, Nuno
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Milrad, Marcelo
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Evaluating usage patterns and adoption of an interactive video installation on public displays in school contexts (2014). In: MUM '14 Proceedings of the 13th International Conference on Mobile and Ubiquitous Multimedia, New York, NY, USA: ACM Press, 2014, p. 160-169. Conference paper (Refereed)
    Abstract [en]

    Recent years have seen a growing interest in supporting learning activities/scenarios that go beyond the traditional classroom context, as well as the development of pervasive computing scenarios supported by display installations. In order to explore such interactive scenarios that span video-based learning activities across school contexts, we have developed two web-based functional prototypes of public display applications and performed a field evaluation during an initial test deployment. The system consists of a public display endpoint providing video content enriched with quizzes related to this content, and a mobile endpoint providing interactivity and user participation. During a three-week test deployment at two Swedish schools, the display system was evaluated and important requirements for the next iterations were gathered. This work presents the results of the test deployment and the users' adoption (usage patterns), and discusses the particulars of introducing such a system into educational environments. The deployment and the corresponding study enabled us to validate the overall technical approach in real settings and to test different perspectives of display usage. The conclusions point to the need to further understand how to promote an integrated view of display utilization in schools.

  • 16.
    Müller, Maximilian
    et al.
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Otero, Nuno
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Milrad, Marcelo
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Increasing user engagement with distributed public displays through the awareness of peer interactions (2015). In: Proceedings of the 4th International Symposium on Pervasive Displays, New York, NY, USA: ACM Press, 2015, p. 23-29. Conference paper (Refereed)
    Abstract [en]

    Recent developments have shown a growing interest in interactive pervasive computing scenarios supported by public displays, as well as their introduction into educational environments. Still, one of the biggest challenges in the design of public display systems is to engage users to interact and keep them motivated to do so. In this paper, we report a study exploring the potential effect on user engagement of the awareness of peers' interactions with an educational video installation and of the popularity of the display system. The awareness is facilitated by pop-up notifications and visualizations of interactions on the display screen. We conducted a six-day deployment of our system, which included a diary study, during which we altered the display's dynamic behavior in order to test different conditions. The analysis of the diary reports and the progression of the users' interactions showed that the users found the presentations of peer interactions to be engaging, both with the display system as well as with the social context around it.
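
    As an illustration of the awareness mechanism (not the authors' implementation), peer interactions can be serialized as events and rendered as pop-up notification text on each display; all field names below are assumptions.

```python
# Hypothetical sketch: broadcast peer-interaction events across the
# display network and render them as pop-up notifications.

import json, time

def interaction_event(display_id, action, video_id):
    """Serialize one peer interaction for broadcast to the display network."""
    return json.dumps({
        "display": display_id,  # where the interaction happened
        "action": action,       # e.g. "quiz_answered", "video_started"
        "video": video_id,
        "ts": time.time(),
    })

def notification_text(event_json, local_display):
    """Turn a received event into the pop-up text a display would show."""
    e = json.loads(event_json)
    where = "here" if e["display"] == local_display else f'at {e["display"]}'
    return f'Someone {where} just did: {e["action"]} on video {e["video"]}'

evt = interaction_event("school-B-lobby", "quiz_answered", "v42")
print(notification_text(evt, local_display="school-A-lobby"))
```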

  • 17.
    Nake, Isabella
    et al.
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Zbick, Janosch
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Visualizing Quantified Self Data Using Avatars (2016). In: ACHI 2016: The Ninth International Conference on Advances in Computer-Human Interactions / [ed] Alma Leora Culén, Leslie Miller, Irini Giannopulu, and Birgit Gersbeck-Schierholz, International Academy, Research and Industry Association (IARIA), 2016, p. 57-66. Conference paper (Refereed)
    Abstract [en]

    In recent years, it has become more common for people to use applications or devices that keep track of their activities, such as fitness activities, places they visit, the music they listen to, and pictures they take. These data are used by the services for various purposes, but users usually face limitations in exploring or interacting with them. Our project investigates a new approach to visualizing such Quantified Self data in a meaningful and enjoyable way that gives the users insights into their data. The paper discusses the feasibility of creating a service that allows users to connect the activity-tracking applications they already use, analyses their activity, and then presents them with the resulting information. The visualization of the information is proposed as an avatar that maps the different activities the user is engaged with, along with the activity levels, as graphical features. Within the scope of this work, several user studies were conducted and a system prototype was implemented, both to explore how to build such a system with web technologies, aggregating and analysing personal activity data, and to determine what kind of data should and can be collected to provide meaningful information to the users. Furthermore, we investigated what a possible design for the avatar could look like in order to be clearly understood by the users.

  • 18.
    Otero, Nuno
    et al.
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Müller, Maximilian
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Milrad, Marcelo
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Lencastre, José Alberto
    University of Minho, Portugal.
    Casal, João
    University of Minho, Portugal.
    José, Rui
    University of Minho, Portugal.
    Promoting secondary school learners' curiosity towards science through digital public displays (2013). In: Proceedings of International Conference on Making Sense of Converging Media, AcademicMindTrek '13 / [ed] Artur Lugmayr, Heljä Franssila, Janne Paavilainen, Hannu Kärkkäinen, ACM Press, 2013, no 470, p. 204-210. Conference paper (Refereed)
    Abstract [en]

    This paper contributes to the understanding of how digital public displays can be utilized in schools, taking educational goals into consideration. This work is part of an ongoing research project that aims to promote students' curiosity in science and technology through creative film-making, collaborative editing activities, and content sharing. In order to explore the design space concerning digital public displays for school contexts, six workshops with secondary school teachers in two different countries were conducted to elicit sensitivities towards possible features and interaction techniques, as well as to inquire about expectations and technology adoption. Our findings suggest that teachers are receptive to the technology and were able to generate scenarios that take advantage of the possibilities offered by digital public displays to stimulate learning processes. However, there are several crucial elements regarding management and control of content that need to be carefully crafted/designed in order to accommodate each school's organizational issues.

  • 19.
    Otero, Nuno
    et al.
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Müller, Maximilian
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Milrad, Marcelo
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Exploring video-based interactions around digital public displays to foster curiosity about science in schools (2013). Conference paper (Refereed)
    Abstract [en]

    In this poster, we describe our initial steps towards understanding how digital public displays in schools can be utilized in order to foster students' curiosity towards scientific topics. More specifically, this work is part of an on-going research project (JuxtaLearn) that aims at provoking students' curiosity in science and technology through creative film-making and editing activities. In order to explore the design space concerning digital public displays for school contexts, we conducted some initial workshops with science teachers in order to elicit their sensitivities towards possible features and interaction techniques, as well as to inquire about expectations and technology adoption.

  • 20.
    Reski, Nico
    et al.
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of Media Technology.
    Change Your Perspective: Exploration of a 3D Network Created from Open Data in an Immersive Virtual Reality Environment (2016). In: ACHI 2016: The Ninth International Conference on Advances in Computer-Human Interactions / [ed] Alma Leora Culén, Leslie Miller, Irini Giannopulu, Birgit Gersbeck-Schierholz, International Academy, Research and Industry Association (IARIA), 2016, p. 403-410. Conference paper (Refereed)
    Abstract [en]

    This paper investigates an approach for naturally interacting with and exploring information (based on open data) within an immersive virtual reality environment (VRE) using a head-mounted display and vision-based motion controls. We present the results of a user interaction study that investigated the acceptance of the developed prototype, estimated the workload, and examined the participants' behavior. Additional discussions with experts provided further feedback on the prototype's overall design and concept. The results indicate that the participants were enthusiastic about the novelty and intuitiveness of exploring information in a VRE, and were challenged (in a positive manner) by the applied interface and interaction design. The presented concept and design were well received by the experts, who valued the idea and implementation and encouraged us to be even bolder in making more use of the available 3D environment.

  • 21.
    Reski, Nico
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Open data exploration in virtual reality: a comparative study of input technology (2019). In: Virtual Reality, ISSN 1359-4338, E-ISSN 1434-9957. Article in journal (Refereed)
    Abstract [en]

    In this article, we compare three different input technologies (gamepad, vision-based motion controls, room-scale) for an interactive virtual reality (VR) environment. The overall system is able to visualize (open) data from multiple online sources in a unified interface, enabling the user to browse and explore the displayed information in an immersive VR setting. We conducted a user interaction study (n=24; n=8 per input technology, between-group design) to investigate experienced workload and perceived flow of interaction. Log files and observations allowed further insights and comparison of each condition. We identified trends that indicate a user preference for a visual (virtual) representation, but no clear trends regarding the application of physical controllers (over vision-based controls), in a scenario that encouraged exploration with no time limitations.

  • 22.
    Reski, Nico
    et al.
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Using an Augmented Reality Cube-like Interface and 3D Gesture-based Interaction to Navigate and Manipulate Data (2018). In: VINCI '18 Proceedings of the 11th International Symposium on Visual Information Communication and Interaction, New York: Association for Computing Machinery (ACM), 2018, p. 92-96. Conference paper (Refereed)
    Abstract [en]

    In this paper we describe our work-in-progress to create an interface that enables users to browse and select data within an Augmented Reality environment, using a virtual cube object that can be interacted with through 3D gestural input. We present the prototype design (including the graphical elements), describe the interaction possibilities of touching the cube with the hand/finger, and put the prototype into the context of our Augmented Reality for Public Engagement (PEAR) framework. An interactive prototype was implemented and runs on a typical off-the-shelf smartphone.
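
    A minimal sketch of the cube idea under stated assumptions (the abstract does not give the actual face-to-action mapping): each touched cube face dispatches one data-navigation action.

```python
# Hypothetical face-to-action mapping for a virtual AR cube controlled
# by 3D gestural touch. Face names and actions are invented.

FACE_ACTIONS = {
    "top": "go to parent category",
    "bottom": "drill down into selection",
    "left": "previous data item",
    "right": "next data item",
    "front": "show details",
    "back": "dismiss details",
}

def on_face_touched(face, state):
    """Dispatch a 3D-gesture touch on a cube face to a navigation action."""
    action = FACE_ACTIONS.get(face)
    if action is None:
        return state  # unrecognized face: ignore the touch
    state = dict(state, last_action=action)
    print(f"cube face '{face}' -> {action}")
    return state

state = on_face_touched("right", {"item": 3})  # -> cube face 'right' -> next data item
```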

  • 23.
    Reski, Nico
    et al.
    Alissandrakis, Aris
    Linnaeus University, Faculty of Technology, Department of computer science and media technology (CM).
    Tyrkkö, Jukka
    Linnaeus University, Faculty of Arts and Humanities, Department of Languages.
    Collaborative exploration of rich corpus data using immersive virtual reality and non-immersive technologies (2019). In: ADDA: Approaches to Digital Discourse Analysis – ADDA 2, Turku, Finland, 23-25 May 2019; Book of abstracts, Turku: University of Turku, 2019, p. 7-7. Conference paper (Other academic)
    Abstract [en]

    In recent years, large textual data sets, comprising many data points and rich metadata, have become a common object of investigation and analysis. Information Visualization and Visual Analytics provide practical tools for visual data analysis, most commonly as interactive two-dimensional (2D) visualizations that are displayed through normal computer monitors. At the same time, display technologies have evolved rapidly over the past decade. In particular, emerging technologies such as virtual reality (VR), augmented reality (AR), or mixed reality (MR) have become affordable and more user-friendly (LaValle 2016). Under the banner of “Immersive Analytics”, researchers started to explore the novel application of such immersive technologies for the purpose of data analysis (Marriott et al. 2018).

    By using immersive technologies, researchers hope to increase motivation and user engagement for the overall data analysis activity, as well as to provide different perspectives on the data. This can be particularly helpful in the case of exploratory data analysis, when the researcher attempts to identify interesting points or anomalies in the data without prior knowledge of what exactly they are searching for. Furthermore, the data analysis process often involves the collaborative sharing of information and knowledge between multiple users with the goal of interpreting and making sense of the explored data together (Isenberg et al. 2011). However, immersive technologies such as VR are often rather single-user-centric experiences, where one user is wearing a head-mounted display (HMD) device and is thus visually isolated from the real-world surroundings. Consequently, new tools and approaches for co-located, synchronous collaboration in such immersive data analysis scenarios are needed.

    In this software demonstration, we present our developed VR system that enables two users to explore data at the same time, one inside an immersive VR environment, and one outside VR using a non-immersive companion application. The demonstrated data analysis activity centers on the exploration of language variability in tweets from the perspectives of multilingualism and sociolinguistics (see, e.g., Coats 2017 and Grieve et al. 2017). Our primary data come from the Nordic Tweet Stream (NTS) corpus (Laitinen et al. 2018, Tyrkkö 2018), and the immersive VR application visualizes in three dimensions (3D) the clustered Twitter traffic within the Nordic region as stacked cuboids according to their geospatial position, where each stack represents a color-coded language share (Alissandrakis et al. 2018). Through 3D gestural input, the VR user can interact with the data using hand postures and gestures in order to move through the virtual 3D space, select clusters and display more detailed information, and navigate through time (Reski and Alissandrakis 2019) ( https://vrxar.lnu.se/apps/odxvrxnts-360/ ). A non-immersive companion application, running in a normal web browser, presents an overview map of the Nordic region as well as other supplemental information about the data that is more suitable to display using non-immersive technologies.

    We will present two complementary applications, each with a different objective within the collaborative data analysis framework. The design and implementation of certain connectivity and collaboration features within these applications facilitate co-located, synchronous exploration and sensemaking. For instance, the VR user's position and orientation are displayed and updated in real time within the overview map of the non-immersive application. Conversely, the cluster selected by the non-immersive user is also highlighted for the user in VR. Initial tests with pairs of language students validated the proof-of-concept of the developed collaborative system and encourage further investigations in this direction.
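
    A minimal sketch of the synchronization this coupling implies (message shapes are assumptions, not the system's actual protocol): the VR user's pose streams to the companion app's overview map, and the companion's cluster selection streams back to VR.

```python
# Hypothetical message schema for the VR <-> companion coupling.
# Field names and the example cluster id are assumptions.

import json

def vr_pose_message(position, yaw_degrees):
    """VR -> companion: update the VR user's marker on the 2D overview map."""
    return json.dumps({"type": "vr_pose",
                       "pos": {"x": position[0], "y": position[1], "z": position[2]},
                       "yaw": yaw_degrees})

def selection_message(cluster_id):
    """Companion -> VR: highlight the cluster the non-immersive user picked."""
    return json.dumps({"type": "select_cluster", "cluster": cluster_id})

def handle(msg, scene):
    """Each side applies the other's updates to its own view state."""
    m = json.loads(msg)
    if m["type"] == "vr_pose":
        scene["map_marker"] = (m["pos"]["x"], m["pos"]["z"], m["yaw"])
    elif m["type"] == "select_cluster":
        scene["highlighted"] = m["cluster"]
    return scene

scene = handle(selection_message("cluster-0042"), {})  # hypothetical id
print(scene)  # -> {'highlighted': 'cluster-0042'}
```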
