Publications (10 of 22)
Reski, N., Alissandrakis, A. & Tyrkkö, J. (2019). Collaborative exploration of rich corpus data using immersive virtual reality and non-immersive technologies. In: ADDA: Approaches to Digital Discourse Analysis – ADDA 2, Turku, Finland 23-25 May 2019; Book of abstracts. Paper presented at 2nd International Conference: Approaches to Digital Discourse Analysis (ADDA 2), 23-25 May, 2019, Turku, Finland (pp. 7-7). Turku: University of Turku
Collaborative exploration of rich corpus data using immersive virtual reality and non-immersive technologies
2019 (English) In: ADDA: Approaches to Digital Discourse Analysis – ADDA 2, Turku, Finland 23-25 May 2019; Book of abstracts, Turku: University of Turku, 2019, p. 7-7. Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

In recent years, large textual data sets, comprising many data points and rich metadata, have become a common object of investigation and analysis. Information Visualization and Visual Analytics provide practical tools for visual data analysis, most commonly as interactive two-dimensional (2D) visualizations displayed on conventional computer monitors. At the same time, display technologies have evolved rapidly over the past decade. In particular, emerging technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR) have become affordable and more user-friendly (LaValle 2016). Under the banner of “Immersive Analytics”, researchers have started to explore applications of such immersive technologies for the purpose of data analysis (Marriott et al. 2018).

By using immersive technologies, researchers hope to increase motivation and user engagement in the overall data analysis activity, as well as to provide different perspectives on the data. This can be particularly helpful in exploratory data analysis, when the researcher attempts to identify interesting points or anomalies in the data without prior knowledge of what exactly they are searching for. Furthermore, the data analysis process often involves the collaborative sharing of information and knowledge between multiple users, with the goal of interpreting and making sense of the explored data together (Isenberg et al. 2011). However, immersive technologies such as VR are often rather single-user experiences, where one user wears a head-mounted display (HMD) and is thus visually isolated from the real-world surroundings. Consequently, new tools and approaches for co-located, synchronous collaboration in such immersive data analysis scenarios are needed.

In this software demonstration, we present our VR system that enables two users to explore data at the same time, one inside an immersive VR environment and one outside VR using a non-immersive companion application. The demonstrated data analysis activity centers on the exploration of language variability in tweets from the perspectives of multilingualism and sociolinguistics (see, e.g., Coats 2017 and Grieve et al. 2017). Our primary data come from the Nordic Tweet Stream (NTS) corpus (Laitinen et al. 2018, Tyrkkö 2018), and the immersive VR application visualizes the clustered Twitter traffic within the Nordic region in three dimensions (3D) as stacked cuboids placed according to their geospatial position, where each segment of a stack represents a color-coded language share (Alissandrakis et al. 2018). Using 3D gestural input, the VR user can interact with the data through hand postures and gestures to move through the virtual 3D space, select clusters and display more detailed information, and navigate through time (Reski and Alissandrakis 2019) ( https://vrxar.lnu.se/apps/odxvrxnts-360/ ). A non-immersive companion application, running in a normal web browser, presents an overview map of the Nordic region as well as other supplemental information about the data that is better suited to non-immersive display.
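As a rough illustration of the aggregation behind such stacked cuboids, the following Python sketch groups geotagged tweets into grid-cell clusters and computes per-language shares. The record format, the grid clustering, and all field names are our assumptions for illustration, not the NTS project's actual pipeline.

```python
from collections import Counter, defaultdict

# Hypothetical minimal record format: (lat, lon, language_tag) per tweet.
tweets = [
    (59.33, 18.07, "sv"), (59.33, 18.06, "en"),
    (60.17, 24.94, "fi"), (60.17, 24.94, "en"),
]

def cluster_key(lat, lon, cell=0.5):
    """Snap coordinates to a grid cell; a stand-in for real geospatial clustering."""
    return (round(lat / cell) * cell, round(lon / cell) * cell)

stacks = defaultdict(Counter)
for lat, lon, lang in tweets:
    stacks[cluster_key(lat, lon)][lang] += 1

# Each cluster becomes one stacked cuboid: one color-coded segment per
# language, with segment height proportional to that language's share.
for pos, langs in stacks.items():
    total = sum(langs.values())
    print(pos, [(lang, n / total) for lang, n in langs.most_common()])
```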

We will present two complementary applications, each with a different objective within the collaborative data analysis framework. The design and implementation of dedicated connectivity and collaboration features within these applications facilitate co-located, synchronous exploration and sensemaking. For instance, the VR user's position and orientation are displayed and updated in real time within the overview map of the non-immersive application. Conversely, the cluster selected by the non-immersive user is highlighted for the user in VR. Initial tests with pairs of language students validated the proof of concept of the developed collaborative system and encourage further investigation in this direction.
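The kind of state synchronization described above could be realized with a small relay server. The sketch below is our assumption, not the authors' actual protocol or stack: JSON updates such as the VR user's pose or a selected cluster id are rebroadcast between connected clients using the third-party Python `websockets` library.

```python
# A minimal relay sketch (assumed protocol); requires `pip install websockets`
# (recent versions accept a single-argument handler).
import asyncio
import json
import websockets

clients = set()

async def relay(ws):
    """Rebroadcast every state update to all other connected clients."""
    clients.add(ws)
    try:
        async for message in ws:
            update = json.loads(message)
            # e.g. {"type": "vr_pose", "pos": [x, y, z], "yaw": 1.2}
            # or   {"type": "selection", "cluster_id": 42}
            for other in clients - {ws}:
                await other.send(json.dumps(update))
    finally:
        clients.remove(ws)

async def main():
    async with websockets.serve(relay, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```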

Place, publisher, year, edition, pages
Turku: University of Turku, 2019
Keywords
virtual reality, Nordic Tweet Stream, digital humanities, immersive analytics
National Category
Human Computer Interaction; General Language Studies and Linguistics; Language Technology (Computational Linguistics)
Research subject
Computer and Information Sciences Computer Science, Computer Science; Computer Science, Information and software visualization; Humanities, Linguistics
Identifiers
urn:nbn:se:lnu:diva-83858 (URN)
Conference
2nd International Conference: Approaches to Digital Discourse Analysis (ADDA 2), 23-25 May, 2019, Turku, Finland
Projects
DISA-DH; Open Data Exploration in Virtual Reality (ODxVR)
Available from: 2019-05-28 Created: 2019-05-28 Last updated: 2019-06-03. Bibliographically approved
Reski, N. & Alissandrakis, A. (2019). Open data exploration in virtual reality: a comparative study of input technology. Virtual Reality
Open data exploration in virtual reality: a comparative study of input technology
2019 (English) In: Virtual Reality, ISSN 1359-4338, E-ISSN 1434-9957. Article in journal (Refereed). Published
Abstract [en]

In this article, we compare three different input technologies (gamepad, vision-based motion controls, room-scale) for an interactive virtual reality (VR) environment. The overall system visualizes (open) data from multiple online sources in a unified interface, enabling the user to browse and explore the displayed information in an immersive VR setting. We conducted a user interaction study (n=24; n=8 per input technology, between-group design) to investigate experienced workload and perceived flow of interaction. Log files and observations allowed further insights into, and comparison of, each condition. We identified trends that indicate user preference for a visual (virtual) representation, but no clear trends regarding the application of physical controllers (over vision-based controls), in a scenario that encouraged exploration with no time limitations.

Place, publisher, year, edition, pages
Springer, 2019
Keywords
Comparative study, Gamepad, Room-scale virtual reality, Virtual reality, Vision-based motion controls, 3D gestural input
National Category
Computer Sciences; Human Computer Interaction
Research subject
Computer and Information Sciences Computer Science; Computer and Information Sciences Computer Science, Computer Science; Computer Science, Information and software visualization
Identifiers
urn:nbn:se:lnu:diva-79974 (URN); 10.1007/s10055-019-00378-w (DOI)
Projects
Open Data Exploration in Virtual Reality (ODxVR)
Funder
Knowledge Foundation, 2016/0174
Available from: 2019-01-28 Created: 2019-01-28 Last updated: 2019-09-26
Reski, N. & Alissandrakis, A. (2018). Using an Augmented Reality Cube-like Interface and 3D Gesture-based Interaction to Navigate and Manipulate Data. In: VINCI '18 Proceedings of the 11th International Symposium on Visual Information Communication and Interaction. Paper presented at 11th International Symposium on Visual Information Communication and Interaction (VINCI '18), Växjö, Sweden, August 13 - 15, 2018 (pp. 92-96). New York: Association for Computing Machinery (ACM)
Using an Augmented Reality Cube-like Interface and 3D Gesture-based Interaction to Navigate and Manipulate Data
2018 (English) In: VINCI '18 Proceedings of the 11th International Symposium on Visual Information Communication and Interaction, New York: Association for Computing Machinery (ACM), 2018, p. 92-96. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we describe our work in progress towards an interface that enables users to browse and select data within an Augmented Reality environment, using a virtual cube object that can be manipulated through 3D gestural input. We present the prototype design (including the graphical elements), describe the interaction possibilities of touching the cube with the hand/finger, and put the prototype into the context of our Augmented Reality for Public Engagement (PEAR) framework. An interactive prototype was implemented and runs on a typical off-the-shelf smartphone.
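To make the interaction model concrete, here is a hypothetical Python sketch of dispatching cube-face touches to data navigation actions. The face-to-action mapping and all names are invented for illustration; the paper's actual semantics are not reproduced here.

```python
# All face names and actions below are hypothetical.
FACE_ACTIONS = {
    "front": "show_details",
    "back": "hide_details",
    "left": "previous_item",
    "right": "next_item",
}

def on_face_touched(face: str, cursor: dict) -> dict:
    """Dispatch a detected touch on a cube face to a data navigation action."""
    action = FACE_ACTIONS.get(face)
    if action == "next_item":
        cursor["index"] += 1
    elif action == "previous_item":
        cursor["index"] = max(0, cursor["index"] - 1)
    elif action == "show_details":
        cursor["details"] = True
    elif action == "hide_details":
        cursor["details"] = False
    return cursor

print(on_face_touched("right", {"index": 0, "details": False}))  # index -> 1
```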

Place, publisher, year, edition, pages
New York: Association for Computing Machinery (ACM), 2018
Keywords
human-computer interaction, augmented reality, interaction design, 3D user interface, 3D gesture-based interaction
National Category
Human Computer Interaction
Research subject
Computer and Information Sciences Computer Science, Computer Science
Identifiers
urn:nbn:se:lnu:diva-77132 (URN); 10.1145/3231622.3231625 (DOI); 2-s2.0-85055492360 (Scopus ID); 978-1-4503-6501-7 (ISBN)
Conference
11th International Symposium on Visual Information Communication and Interaction (VINCI '18), Växjö, Sweden, August 13 - 15, 2018
Projects
Augmented Reality for Public Engagement (PEAR)
Funder
Knowledge Foundation
Available from: 2018-08-15 Created: 2018-08-15 Last updated: 2019-08-29. Bibliographically approved
Alissandrakis, A., Reski, N., Laitinen, M., Tyrkkö, J., Levin, M. & Lundberg, J. (2018). Visualizing dynamic text corpora using Virtual Reality. In: ICAME 39: Tampere, 30 May – 3 June, 2018: Corpus Linguistics and Changing Society: Book of Abstracts. Paper presented at The 39th Annual Conference of the International Computer Archive for Modern and Medieval English (ICAME39): Corpus Linguistics and Changing Society. Tampere, 30 May - 3 June, 2018 (pp. 205-205). Tampere: University of Tampere
Visualizing dynamic text corpora using Virtual Reality
2018 (English) In: ICAME 39: Tampere, 30 May – 3 June, 2018: Corpus Linguistics and Changing Society: Book of Abstracts, Tampere: University of Tampere, 2018, p. 205-205. Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

In recent years, data visualization has become a major area in Digital Humanities research, and the same holds true in linguistics. The rapidly increasing size of corpora, the emergence of dynamic real-time streams, and the availability of complex and enriched metadata have made it increasingly important to facilitate new and innovative approaches to presenting and exploring primary data. This demonstration showcases the use of Virtual Reality (VR) in the visualization of geospatial linguistic data, using data from the Nordic Tweet Stream (NTS) project (see Laitinen et al. 2017). The NTS data for this demonstration comprise a full year of geotagged tweets (12,443,696 tweets from 273,648 user accounts) posted within the Nordic region (Denmark, Finland, Iceland, Norway, and Sweden). The dataset includes over 50 metadata parameters in addition to the tweets themselves.

We demonstrate the potential of using VR to efficiently find meaningful patterns in vast streams of data. The VR environment allows an easy overview of any of the features (textual or metadata) in a text corpus. Our focus will be on the language identification data, which provides a previously unexplored perspective into the use of English and other non-indigenous languages in the Nordic countries alongside the native languages of the region.

Our VR prototype utilizes the HTC Vive headset for a room-scale VR scenario and is developed using the Unity3D game development engine. Each node in the VR space is displayed as a stacked cuboid, the equivalent of a bar chart in three-dimensional space, summarizing all tweets at one geographic location for a given point in time (see: https://tinyurl.com/nts-vr). Each stacked cuboid represents the three most frequently used languages, appropriately color coded, enabling the user to get an overview of the language distribution at each location. The VR prototype further encourages users to move between different locations and inspect points of interest in more detail (overall location-related information, a detailed list of all languages detected, the most frequently used hashtags). An underlying map outlines country borders and facilitates orientation. In addition to spatial movement through the Nordic areas, the VR system provides an interface to explore the Twitter data based on time (days, weeks, months, or predefined special events), which enables users to explore the data over time (see: https://tinyurl.com/nts-vr-time).
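A small sketch of the time-based exploration idea (our illustration, not the project's code): timestamped tweets are bucketed by day, week, or month, so that a timeline control can step through snapshots of the language distribution. The record format is assumed.

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical (timestamp, language) records.
tweets = [
    ("2017-03-01T09:15:00", "sv"), ("2017-03-01T11:02:00", "en"),
    ("2017-03-08T18:40:00", "fi"),
]

def bucket(ts: str, granularity: str = "week") -> str:
    t = datetime.fromisoformat(ts)
    if granularity == "day":
        return t.strftime("%Y-%m-%d")
    if granularity == "week":
        year, week, _ = t.isocalendar()
        return f"{year}-W{week:02d}"
    return t.strftime("%Y-%m")  # month

timeline = defaultdict(Counter)
for ts, lang in tweets:
    timeline[bucket(ts)][lang] += 1

# A VR timeline control would step through these periods in order.
for period in sorted(timeline):
    print(period, timeline[period].most_common(3))  # top-3 languages shown
```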

In addition to demonstrating how the VR methods aid data visualization and exploration, we will also briefly discuss the pedagogical implications of using VR to showcase linguistic diversity.

Place, publisher, year, edition, pages
Tampere: University of Tampere, 2018
Keywords
virtual reality, Nordic Tweet Stream, digital humanities
National Category
General Language Studies and Linguistics; Human Computer Interaction; Language Technology (Computational Linguistics)
Research subject
Computer Science, Information and software visualization; Humanities, Linguistics
Identifiers
urn:nbn:se:lnu:diva-75064 (URN)
Conference
The 39th Annual Conference of the International Computer Archive for Modern and Medieval English (ICAME39): Corpus Linguistics and Changing Society. Tampere, 30 May - 3 June, 2018
Projects
DISA-DH; Open Data Exploration in Virtual Reality (ODxVR)
Available from: 2018-06-05 Created: 2018-06-05 Last updated: 2018-07-23. Bibliographically approved
Alissandrakis, A. & Reski, N. (2017). Using Mobile Augmented Reality to Facilitate Public Engagement. In: Koraljka Golub, Marcelo Milrad (Ed.), Extended Papers of the International Symposium on Digital Humanities (DH 2016). Paper presented at International Symposium on Digital Humanities (DH 2016), Växjö, Sweden, November 7-8, 2016 (pp. 99-109). CEUR-WS, Vol. 2021
Using Mobile Augmented Reality to Facilitate Public Engagement
2017 (English) In: Extended Papers of the International Symposium on Digital Humanities (DH 2016) / [ed] Koraljka Golub, Marcelo Milrad, CEUR-WS, 2017, Vol. 2021, p. 99-109. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents our initial efforts towards the development of a framework for facilitating public engagement through the use of mobile Augmented Reality (mAR), which fall under the overall project title "Augmented Reality for Public Engagement" (PEAR). We present the concept and implementation, and discuss the results from the deployment of a mobile phone app (PEAR 4 VXO). The mobile app was used for a user study in conjunction with a campaign carried out by Växjö municipality (Sweden), exploring how to get citizens more engaged in urban planning actions and decisions. These activities took place during spring 2016. One of the salient features of our approach is that it combines novel ways of using mAR together with social media, online databases, and sensors to support public engagement. In addition, the data collection process and audience engagement were tested in a follow-up limited deployment. The analysis and outcomes of our initial results validate the overall concept and indicate the potential usefulness of the app as a tool, but also highlight the need for an active campaign on the part of the stakeholders. Our future efforts will focus on addressing some of the problems and challenges that we identified during the different phases of this user study.

Place, publisher, year, edition, pages
CEUR-WS, 2017
Series
CEUR Workshop Proceedings, ISSN 1613-0073
Keywords
Augmented Reality, public engagement, crowdsourcing
National Category
Human Computer Interaction
Research subject
Computer and Information Sciences Computer Science, Media Technology; Computer and Information Sciences Computer Science, Computer Science
Identifiers
urn:nbn:se:lnu:diva-69265 (URN)
Conference
International Symposium on Digital Humanities (DH 2016) Växjö, Sweden, November, 7-8, 2016.
Projects
Augmented Reality for Public Engagement (PEAR)
Available from: 2017-12-13 Created: 2017-12-13 Last updated: 2019-01-10. Bibliographically approved
Alissandrakis, A. & Nake, I. (2016). A New Approach for Visualizing Quantified Self Data Using Avatars. In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct. Paper presented at 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, September 12-16, 2016, Heidelberg (pp. 522-527). New York, NY, USA: ACM Press
A New Approach for Visualizing Quantified Self Data Using Avatars
2016 (English) In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, New York, NY, USA: ACM Press, 2016, p. 522-527. Conference paper, Published paper (Refereed)
Abstract [en]

In recent years, it has become more common for people to use applications or devices that keep track of their life and activities, such as their physical fitness, the places they visit, the music they listen to, or the pictures they take. This generates data that are used by the service providers for a variety of (usually analytics) purposes, but there are commonly limitations on how the users themselves can explore or interact with these data. Our position paper describes a new approach to visualizing such Quantified Self data in a meaningful and enjoyable way that can give users personal insights into their own data. We propose visualizing the information as an avatar that maps the different activities the user is engaged with, along with each activity's level, onto graphical features. An initial prototype (both in terms of graphical design and software architecture) as well as possible future extensions are discussed.
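A minimal sketch of the proposed mapping, with feature names, value ranges, and thresholds all assumed for illustration: normalized activity levels drive avatar feature parameters.

```python
# Feature names, ranges, and thresholds below are assumptions, not the paper's.
def level(value: float, lo: float, hi: float) -> float:
    """Clamp-and-normalize an activity measure to [0, 1]."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

user_week = {"steps": 52_000, "tracks_played": 180, "photos_taken": 25}

avatar = {
    # more walking -> longer legs; more music -> bigger headphones; etc.
    "leg_length": 0.5 + 0.5 * level(user_week["steps"], 0, 70_000),
    "headphone_size": level(user_week["tracks_played"], 0, 300),
    "camera_accessory": level(user_week["photos_taken"], 0, 50) > 0.3,
}
print(avatar)
```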

Place, publisher, year, edition, pages
New York, NY, USA: ACM Press, 2016
Keywords
avatars, data visualization, quantified self
National Category
Media and Communication Technology
Research subject
Computer and Information Sciences Computer Science, Computer Science; Computer Science, Information and software visualization; Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-56510 (URN); 10.1145/2968219.2968315 (DOI); 2-s2.0-84991094844 (Scopus ID); 978-1-4503-4462-3 (ISBN)
Conference
2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, September 12-16, 2016, Heidelberg
Available from: 2016-09-14 Created: 2016-09-14 Last updated: 2018-01-10. Bibliographically approved
Herault, R. C. & Alissandrakis, A. (2016). An Application for Speech and Language Therapy Using Customized Interaction Between Physical Objects and Mobile Devices. In: Chen, W. et al. (Ed.), Proceedings of the 24th International Conference on Computers in Education. Paper presented at 24th International Conference on Computers in Education (ICCE 2016), Mumbai, India, Nov 28th to Dec 2nd, 2016 (pp. 477-482). India: Asia-Pacific Society for Computers in Education
An Application for Speech and Language Therapy Using Customized Interaction Between Physical Objects and Mobile Devices
2016 (English) In: Proceedings of the 24th International Conference on Computers in Education / [ed] Chen, W. et al., India: Asia-Pacific Society for Computers in Education, 2016, p. 477-482. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents a prototype that facilitates the work of Speech and Language Therapists (SLTs): an Android mobile application that allows the therapist to focus on the patient rather than on taking notes during exercises. Each physical object used by the therapist in those exercises can be given digital properties using Near Field Communication (NFC) tags, and registering a tag does not require a high level of ICT skills from the therapist. SLTs often use such objects in non-technology-driven exercises that deal with classification, seriation, and inclusion. The application offers such exercises, developed in close collaboration with two SLTs; our aim was to provide therapists with a way to efficiently record activities while working with a patient using a mobile application. The tool was validated through several expert reviews, a usability study, and a trial with a patient in Paris, France.
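As we read the abstract, the underlying data flow could look like the following sketch (function names, the tag format, and the log structure are hypothetical): an NFC tag id resolves to a registered object, and each scan is appended to the session record so the therapist does not have to take notes by hand.

```python
from datetime import datetime

registered_objects = {}  # tag_id -> digital properties, set up by the therapist
session_log = []

def register(tag_id: str, name: str, category: str) -> None:
    """One-time step: give a physical object digital properties via its NFC tag."""
    registered_objects[tag_id] = {"name": name, "category": category}

def on_tag_scanned(tag_id: str, exercise: str) -> None:
    """Append a scan event to the session so no manual note-taking is needed."""
    obj = registered_objects.get(tag_id)
    if obj is None:
        return  # unknown tag; the real app would prompt for registration
    session_log.append({"time": datetime.now().isoformat(),
                        "exercise": exercise, "object": obj})

register("04:A3:2F:1B", "toy car", "vehicles")
on_tag_scanned("04:A3:2F:1B", exercise="classification")
print(session_log)
```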

Place, publisher, year, edition, pages
India: Asia-Pacific Society for Computers in Education, 2016
Keywords
NFC-based interactions, speech and language therapy, mobile application
National Category
Media and Communication Technology; Human Computer Interaction
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-60851 (URN); 000419241800076 (ISI); 9789868473577 (ISBN)
Conference
24th International Conference on Computers in Education (ICCE 2016), Mumbai, India, Nov 28th to Dec 2nd, 2016
Available from: 2017-02-22 Created: 2017-02-22 Last updated: 2018-02-16. Bibliographically approved
Reski, N. & Alissandrakis, A. (2016). Change Your Perspective: Exploration of a 3D Network Created from Open Data in an Immersive Virtual Reality Environment. In: Alma Leora Culén, Leslie Miller, Irini Giannopulu, Birgit Gersbeck-Schierholz (Ed.), ACHI 2016: The Ninth International Conference on Advances in Computer-Human Interactions. Paper presented at ACHI 2016: The Ninth International Conference on Advances in Computer-Human Interactions (pp. 403-410). International Academy, Research and Industry Association (IARIA)
Change Your Perspective: Exploration of a 3D Network Created from Open Data in an Immersive Virtual Reality Environment
2016 (English) In: ACHI 2016: The Ninth International Conference on Advances in Computer-Human Interactions / [ed] Alma Leora Culén, Leslie Miller, Irini Giannopulu, Birgit Gersbeck-Schierholz, International Academy, Research and Industry Association (IARIA), 2016, p. 403-410. Conference paper, Published paper (Refereed)
Abstract [en]

This paper investigates an approach to naturally interacting with and exploring information (based on open data) within an immersive virtual reality environment (VRE), using a head-mounted display and vision-based motion controls. We present the results of a user interaction study that investigated the acceptance of the developed prototype, estimated the workload, and examined the participants' behavior. Additional discussions with experts provided further feedback on the prototype's overall design and concept. The results indicate that the participants were enthusiastic about the novelty and intuitiveness of exploring information in a VRE, and were challenged (in a positive manner) by the applied interface and interaction design. The presented concept and design were well received by the experts, who valued the idea and implementation and encouraged us to be even bolder, making more use of the available 3D environment.

Place, publisher, year, edition, pages
International Academy, Research and Industry Association (IARIA), 2016
Keywords
human-computer interaction, virtual reality, immersive interaction, information visualization
National Category
Media and Communication Technology
Research subject
Computer and Information Sciences Computer Science, Media Technology; Computer Science, Information and software visualization
Identifiers
urn:nbn:se:lnu:diva-52379 (URN); 978-1-61208-468-8 (ISBN)
Conference
ACHI 2016: The Ninth International Conference on Advances in Computer-Human Interactions
Available from: 2016-05-04 Created: 2016-05-04 Last updated: 2018-01-10. Bibliographically approved
Dadzie, A.-S., Müller, M., Alissandrakis, A. & Milrad, M. (2016). Collaborative Learning through Creative Video Composition on Distributed User Interfaces (1 ed.). In: Li, Y., Chang, M., Kravcik, M., Popescu, E., Huang, R., Kinshuk, Chen, N.-S. (Ed.), State-of-the-Art and Future Directions of Smart Learning (pp. 199-210). Springer
Collaborative Learning through Creative Video Composition on Distributed User Interfaces
2016 (English) In: State-of-the-Art and Future Directions of Smart Learning / [ed] Li, Y., Chang, M., Kravcik, M., Popescu, E., Huang, R., Kinshuk, Chen, N.-S., Springer, 2016, 1 ed., p. 199-210. Chapter in book (Refereed)
Abstract [en]

We report two studies that fed into the user-centred design of pedagogical and technological scaffolds for social, constructive learning through creative, collaborative, reflective video composition. The studies validated this learning approach and verified the utility and usability of an initial prototype (scaffold) built to support it. However, challenges in interacting with the target technology, multi-touch tabletops, impacted the ability to carry out the prescribed learning activities. Our findings point to the need to investigate an alternative approach and informed the redesign of our scaffolds. We propose coupling distributed user interfaces, using mobile devices to access large, shared displays, to augment the capability to follow our constructive learning process. We also discuss the need to manage recognised challenges to collaboration with a distributed approach.

Place, publisher, year, edition, pages
Springer, 2016. Edition: 1
Series
Lecture Notes in Educational Technology, ISSN 2196-4971 ; 23
Keywords
CSCL, Creative video composition, Reflective knowledge construction, Shared displays, Distributed UIs, Process support
National Category
Computer Sciences; Human Computer Interaction
Research subject
Computer and Information Sciences Computer Science, Computer Science; Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-46736 (URN); 10.1007/978-981-287-868-7_23 (DOI); 000389703400023 (ISI); 2-s2.0-85009815932 (Scopus ID); 978-981-287-868-7 (ISBN)
Projects
JuxtaLearn
Funder
EU, FP7, Seventh Framework Programme, 317964
Available from: 2015-10-12 Created: 2015-10-12 Last updated: 2019-08-29. Bibliographically approved
Müller, M., Alissandrakis, A. & Otero, N. (2016). There is more to come: Anticipating content on interactive public displays through timer animations. In: PerDis 2016: Proceedings of the 5th ACM International Symposium on Pervasive Displays. Paper presented at 5th ACM International Symposium on Pervasive Displays, PerDis 2016, 20 June 2016 through 22 June 2016 (pp. 247-248). ACM Press
There is more to come: Anticipating content on interactive public displays through timer animations
2016 (English) In: PerDis 2016: Proceedings of the 5th ACM International Symposium on Pervasive Displays, ACM Press, 2016, p. 247-248. Conference paper, Published paper (Refereed)
Abstract [en]

We experience a continuously growing number of public displays deployed in a diverse range of settings. Often these displays present a variety of full-screen content to the audience, organized by a scheduler application. However, such public display systems often fail to communicate their full set of content and features, nor do they hint at schedule information. In this paper, we present and describe a timer control that we implemented in our public display applications to communicate schedule and application information to the audience, which helps manage expectations and anticipation around public displays. We also report initial insights from studies of how this kind of design feature supported the audience in engaging with the public displays.
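A minimal reconstruction of the scheduler-plus-timer idea (our sketch, not the paper's implementation; the playlist contents and durations are made up): content items rotate on a schedule, and the remaining time plus the upcoming item are exposed so the UI can render a countdown animation.

```python
import time

# (content, seconds); both the playlist and the durations are illustrative.
playlist = [("weather", 3), ("events calendar", 3), ("photo wall", 3)]

def run(cycles: int = 1) -> None:
    for _ in range(cycles):
        for i, (content, duration) in enumerate(playlist):
            upcoming = playlist[(i + 1) % len(playlist)][0]
            for remaining in range(duration, 0, -1):
                # A real display would render `content` full-screen plus a
                # timer animation announcing `upcoming` in `remaining` seconds.
                print(f"showing: {content:16s} next: {upcoming} in {remaining}s")
                time.sleep(1)

if __name__ == "__main__":
    run()
```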

Place, publisher, year, edition, pages
ACM Press, 2016
Keywords
Anticipation, Digital signage, Multi-purpose displays, Public displays, Scheduler, Timer, User interface, Visual signals
National Category
Media and Communications
Research subject
Media Studies and Journalism, Media and Communication Science
Identifiers
urn:nbn:se:lnu:diva-56114 (URN); 10.1145/2914920.2940341 (DOI); 2-s2.0-84979738802 (Scopus ID); 9781450343664 (ISBN)
Conference
5th ACM International Symposium on Pervasive Displays, PerDis 2016, 20 June 2016 through 22 June 2016
Available from: 2016-09-08 Created: 2016-08-31 Last updated: 2017-04-25. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-4162-6475