Publications (10 of 16)
Hagelbäck, J., Lincke, A., Löwe, W. & Rall, E. (2019). On the Agreement of Commodity 3D Cameras. In: 23rd International Conference on Image Processing, Computer Vision, & Pattern Recognition (IPCV'19: July 29 - August 1, 2019, USA). Paper presented at the 23rd International Conference on Image Processing, Computer Vision, & Pattern Recognition. CSREA Press
On the Agreement of Commodity 3D Cameras
2019 (English). In: 23rd International Conference on Image Processing, Computer Vision, & Pattern Recognition (IPCV'19: July 29 - August 1, 2019, USA), CSREA Press, 2019. Conference paper, Published paper (Refereed)
Abstract [en]

The advent of commodity 3D sensor technology has, amongst other things, enabled the efficient and effective assessment of human movements. Machine learning approaches do not rely on manual definitions of gold standards for each new movement. However, to train models for the automated assessment of a new movement, they still need a lot of data mapping recorded movements to expert judgments. As camera technology changes, this training needs to be repeated if a new camera does not agree with the old one. The present paper presents an inexpensive method to check the agreement of cameras, which, in turn, would allow for a safe reuse of trained models regardless of the cameras. We apply the method to the Kinect, Astra Mini, and Real Sense cameras. The results show that these cameras do not agree and that the models cannot be reused without an unacceptable decay in accuracy. However, the suggested method works independently of movements and cameras and could potentially save effort when integrating new cameras in an existing assessment environment.
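The record above does not describe the agreement method itself, so the following Python sketch is only a rough illustration of how agreement between two skeleton-tracking cameras might be checked by comparing simultaneously recorded joint trajectories; the array shapes, the per-joint RMSE criterion, and the 5 cm threshold are assumptions made for this example, not details taken from the paper.

```python
import numpy as np

def camera_agreement(joints_cam_a: np.ndarray, joints_cam_b: np.ndarray,
                     threshold_m: float = 0.05) -> dict:
    """Compare two synchronized skeleton recordings of the same movement.

    Both arrays are assumed to have shape (frames, joints, 3), holding 3D
    joint positions in metres, already aligned to a common coordinate frame
    and frame rate (that preprocessing is not shown here).
    """
    diff = joints_cam_a - joints_cam_b                               # per-frame, per-joint offsets
    per_joint_rmse = np.sqrt((diff ** 2).sum(axis=2).mean(axis=0))   # metres, one value per joint
    return {
        "per_joint_rmse": per_joint_rmse,
        "mean_rmse": float(per_joint_rmse.mean()),
        # Cameras "agree" (under this illustrative criterion only) if every
        # joint's RMSE stays below the chosen threshold.
        "agree": bool((per_joint_rmse < threshold_m).all()),
    }

# Example with synthetic data: two cameras observing 100 frames of 20 joints.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cam_a = rng.normal(size=(100, 20, 3))
    cam_b = cam_a + rng.normal(scale=0.01, size=cam_a.shape)  # small simulated sensor noise
    print(camera_agreement(cam_a, cam_b))
```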

Place, publisher, year, edition, pages
CSREA Press, 2019
Keywords
3D camera agreement, human movement assessment
National Category
Computer Sciences
Research subject
Computer and Information Sciences Computer Science
Identifiers
urn:nbn:se:lnu:diva-89180 (URN)
Conference
23rd International Conference on Image Processing, Computer Vision, & Pattern Recognition
Available from: 2019-09-18 Created: 2019-09-18 Last updated: 2019-09-18
Hagelbäck, J., Liapota, P., Lincke, A. & Löwe, W. (2019). Variants of Dynamic Time Warping and their Performance in Human Movement Assessment. In: 21st International Conference on Artificial Intelligence (ICAI'19: July 29 - August 1, 2019, Las Vegas, USA). Paper presented at the 21st International Conference on Artificial Intelligence, ICAI'19: July 29 - August 1, 2019, Las Vegas, USA. CSREA Press
Variants of Dynamic Time Warping and their Performance in Human Movement Assessment
2019 (English). In: 21st International Conference on Artificial Intelligence (ICAI'19: July 29 - August 1, 2019, Las Vegas, USA), CSREA Press, 2019. Conference paper, Published paper (Refereed)
Abstract [en]

The advent of commodity 3D sensor technology enabled, amongst other things, the efficient and effective assessment of human movements. Statistical and machine learning approaches map recorded movement instances to expert scores to train models for the automated assessment of new movements. However, there are many variations in selecting the approaches and setting the parameters for achieving good performance, i.e., high scoring accuracy and low response time. The present paper explores the design space and the impact of sequence alignment on accuracy and response time. More specifically, we introduce variants of Dynamic Time Warping (DTW) for aligning the phases of slow and fast movement instances and assess their effect on scoring accuracy and response time. Results show that an automated stripping of leading and trailing frames not belonging to the movement (using one DTW variant), followed by an alignment of selected frames in the movements (based on another DTW variant), outperforms the original DTW and other suggested variants thereof. Since these results are independent of the selected learning approach and do not rely on the movement specifics, they can help improve the performance of automated human movement assessment in general.
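As a rough illustration of the kind of alignment the abstract describes, the Python sketch below implements plain Dynamic Time Warping plus a simple motion-based trimming of leading and trailing idle frames. It is not the authors' specific DTW variants: the frame distance, the trimming heuristic, and the synthetic slow/fast sequences are assumptions made only for this example.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic DTW between two sequences of frame vectors (one row per frame)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])     # per-frame distance
            cost[i, j] = d + min(cost[i - 1, j],        # insertion
                                 cost[i, j - 1],        # deletion
                                 cost[i - 1, j - 1])    # match
    return float(cost[n, m])

def trim_idle_frames(seq: np.ndarray, motion_eps: float = 1e-3) -> np.ndarray:
    """Drop leading/trailing frames with (almost) no motion -- a simple stand-in
    for the automated stripping of frames that do not belong to the movement."""
    speed = np.linalg.norm(np.diff(seq, axis=0), axis=1)
    active = np.where(speed > motion_eps)[0]
    if active.size == 0:
        return seq
    return seq[active[0]: active[-1] + 2]

# Usage: align a slow and a fast instance of the "same" synthetic movement.
if __name__ == "__main__":
    slow = np.sin(2 * np.pi * np.linspace(0, 1, 120))[:, None]
    fast = np.sin(2 * np.pi * np.linspace(0, 1, 60))[:, None]
    print(dtw_distance(trim_idle_frames(slow), trim_idle_frames(fast)))
```

Even this plain DTW runs in O(n·m) time per pair of sequences, which is why the choice of variant matters for the response-time side of the trade-off discussed above.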

Place, publisher, year, edition, pages
CSREA Press, 2019
Keywords
Dynamic Time Warping variants, human movement assessment
National Category
Computer Sciences
Research subject
Computer and Information Sciences Computer Science
Identifiers
urn:nbn:se:lnu:diva-89181 (URN)
Conference
21st International Conference on Artificial Intelligence, ICAI'19: July 29 - August 1, 2019, Las Vegas, USA
Available from: 2019-09-18 Created: 2019-09-18 Last updated: 2019-09-18
Lincke, A., Lozano Prieto, D., Herault, R. C., Forsgärde, E.-S. & Milrad, M. (2019). Visualizing learners’ navigation behaviour using 360 degrees interactive videos. In: Alessandro Bozzon, Francisco Domínguez Mayo & Joaquim Filipe (Eds.), Proceedings of the 15th International Conference on Web Information Systems and Technologies. Paper presented at WEBIST, Vienna, Austria, September 18-20, 2019 (pp. 358-364). Vienna: SciTePress, 1
Visualizing learners’ navigation behaviour using 360 degrees interactive videos
2019 (English). In: Proceedings of the 15th International Conference on Web Information Systems and Technologies / [ed] Alessandro Bozzon, Francisco Domínguez Mayo & Joaquim Filipe, Vienna: SciTePress, 2019, Vol. 1, p. 358-364. Conference paper, Published paper (Refereed)
Abstract [en]

The use of 360-degrees interactive videos for educational purposes in the medical field has increased in recent years, as has the use of virtual reality in general. Learners’ navigation behavior in 360-degrees interactive video learning environments has not yet been thoroughly explored. In this paper, a dataset of interactions generated by 80 students working in 16 groups while learning about patient trauma treatment using 360-degrees interactive videos is used to visualize learners’ navigation behavior. Three visualization approaches for exploring users’ navigation paths and their patterns of interaction with the learning materials were designed and implemented, and are presented and discussed. The visualization tool was developed to explore the issues above, and it provides a comprehensive overview of the navigation paths and patterns. A user study with four experts in the information visualization field revealed the advantages and drawbacks of our solution. The paper concludes by providing some suggestions for improving the proposed visualizations.
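The record does not specify the three visualization approaches, so the sketch below is merely one illustrative way to plot a group's navigation path (viewing yaw over video time) with matplotlib; the log format, field names, and sample values are invented for the example.

```python
import matplotlib.pyplot as plt

# Hypothetical interaction log: (timestamp in seconds, yaw in degrees) samples
# of where one group was looking inside the 360-degrees video.
group_log = [(0, 0), (5, 30), (10, 110), (15, 95), (20, -40), (25, -10), (30, 0)]

times = [t for t, _ in group_log]
yaws = [y for _, y in group_log]

plt.figure(figsize=(6, 2.5))
plt.plot(times, yaws, marker="o")
plt.xlabel("Video time (s)")
plt.ylabel("Viewing yaw (degrees)")
plt.title("Navigation path of one group (illustrative data)")
plt.tight_layout()
plt.show()
```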

Place, publisher, year, edition, pages
Vienna: SciTePress, 2019
Keywords
360 degrees interactive video, Interaction patterns, Navigation paths, Visualization, Navigation behaviour
National Category
Computer Sciences
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-89506 (URN), 10.5220/0008356203580364 (DOI), 9789897583865 (ISBN)
Conference
WEBIST, Vienna, Austria, September 18-20, 2019
Available from: 2019-10-09 Created: 2019-10-09 Last updated: 2019-10-10. Bibliographically approved
Herault, R. C., Lincke, A., Milrad, M., Forsgärde, E.-S., Elmqvist, C. & Svensson, A. (2018). Design and Evaluation of a 360 Degrees Interactive Video System to Support Collaborative Training for Nursing Students in Patient Trauma Treatment. In: J. C. Yang, M. Chang, L. H. Wong & M. M. T. Rodrigo (Eds.), 26th International Conference on Computers in Education (ICCE 2018). Paper presented at the 26th International Conference on Computers in Education, Metro Manila, Philippines, November 26-30, 2018 (pp. 298-303). Asia-Pacific Society for Computers in Education
Design and Evaluation of a 360 Degrees Interactive Video System to Support Collaborative Training for Nursing Students in Patient Trauma Treatment
2018 (English). In: 26th International Conference on Computers in Education (ICCE 2018) / [ed] J. C. Yang, M. Chang, L. H. Wong & M. M. T. Rodrigo, Asia-Pacific Society for Computers in Education, 2018, p. 298-303. Conference paper, Published paper (Refereed)
Abstract [en]

Extreme catastrophe situations are rare in Sweden, which makes training opportunities important for maintaining the competence of the emergency personnel who should be actively involved in such situations. There is a need to conceptualize, design and implement interactive learning environments that support education, training and assessment for these catastrophe situations more often and in different settings, conditions and places. In order to address these challenges, a prototype system has been designed and developed containing immersive, interactive 360-degrees educational videos that are available via a web browser. The content of these videos includes simulated learning scenes of a trauma team working at the hospital emergency department. Different types of interaction mechanisms are integrated within the videos, to which learners should respond and act upon. The prototype was tested during the fall term of 2017 with 17 students from the specialist nursing program and four medical experts. These activities were assessed in order to gain new insights into issues related to the proposed approach and to gather feedback on the usefulness, usability and learnability of the suggested prototype. The initial outcomes of the evaluation indicate that the system can provide students with novel interaction mechanisms to improve their skills and that it can be applied as a complementary tool to the methods currently used in their education.

Place, publisher, year, edition, pages
Asia-Pacific Society for Computers in Education, 2018
Keywords
emergency preparedness, interactive learning, nurse specialists, trauma, 360 degrees interactive videos
National Category
Media and Communication Technology; Nursing
Research subject
Computer and Information Sciences Computer Science, Media Technology; Health and Caring Sciences, Nursing
Identifiers
urn:nbn:se:lnu:diva-80292 (URN), 000456331300047 (ISI), 2-s2.0-85060015828 (Scopus ID)
Conference
26th International Conference on Computers in Education, Metro Manila, Philippines, November 26-30, 2018
Available from: 2019-02-08 Created: 2019-02-08 Last updated: 2019-06-05. Bibliographically approved
Lincke, A., Lundberg, J., Thunander, M., Milrad, M., Lundberg, J. & Jusufi, I. (2018). Diabetes Information in Social Media. In: Karsten Klein, Yi-Na Li and Andreas Kerren (Eds.), Proceedings of the 11th International Symposium on Visual Information Communication and Interaction (VINCI '18). Paper presented at the 11th International Symposium on Visual Information Communication and Interaction (VINCI '18), 13-15 August 2018, Växjö, Sweden (pp. 104-105). ACM Publications
Diabetes Information in Social Media
2018 (English). In: Proceedings of the 11th International Symposium on Visual Information Communication and Interaction (VINCI '18) / [ed] Karsten Klein, Yi-Na Li and Andreas Kerren, ACM Publications, 2018, p. 104-105. Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Social media platforms have created new ways for people to communicate and express themselves. Thus, it is important to explore how e-health related information is generated and disseminated on these platforms. The aim of our current efforts is to investigate the content and flow of information when people in Sweden use Twitter to talk about diabetes-related issues. To achieve our goals, we used data mining and visualization techniques to explore, analyze and cluster Twitter data collected over a period of 10 months. Our initial results indicate that patients use Twitter to share diabetes-related information and to communicate about their disease, as an alternative channel that complements the traditional ones used by health care professionals.

Place, publisher, year, edition, pages
ACM Publications, 2018
Keywords
Social media, Twitter data analysis, diabetes, visualization
National Category
Computer Sciences; Human Computer Interaction
Research subject
Computer Science, Information and software visualization; Computer and Information Sciences Computer Science, Computer Science; Computer and Information Sciences Computer Science, Media Technology; Health and Caring Sciences, Health Informatics
Identifiers
urn:nbn:se:lnu:diva-78214 (URN), 10.1145/3231622.3232508 (DOI), 2-s2.0-85055512544 (Scopus ID), 978-1-4503-6501-7 (ISBN)
Conference
11th International Symposium on Visual Information Communication and Interaction (VINCI '18), 13-15 August 2018, Växjö, Sweden
Available from: 2018-10-09 Created: 2018-10-09 Last updated: 2019-08-29. Bibliographically approved
Herault, R. C., Lincke, A., Milrad, M., Forsgärde, E.-S. & Elmqvist, C. (2018). Using 360-degrees interactive videos in patient trauma treatment education: design, development and evaluation aspects. Smart Learning Environments, 5, Article ID 26.
Using 360-degrees interactive videos in patient trauma treatment education: design, development and evaluation aspects
Show others...
2018 (English). In: Smart Learning Environments, E-ISSN 2196-7091, Vol. 5, article id 26. Article in journal (Refereed). Published
Abstract [en]

Extremely catastrophic situations are rare in Sweden, which makes training opportunities important to ensure competence among emergency personnel who should be actively involved during such situations. There is a need to conceptualize, design and implement an interactive learning environment that allows the education, training and assessment of these catastrophic situations more often, and in different environments, conditions and places. Therefore, to address these challenges, a prototype system has been designed and developed, containing immersive, interactive 360-degrees videos that are available via a web browser. The content of these videos includes situations such as simulated learning scenes of a trauma team working at the hospital emergency department. Various forms of interactive mechanisms are integrated within the videos, to which learners should respond and act upon. The prototype was tested during the fall term of 2017 with 17 students (working in groups) from a specialist nursing program, and four experts. The video recordings of these study sessions were analyzed and the outcomes are presented in this paper. Different group interaction patterns with the proposed tool were identified. Furthermore, new requirements for refining the 360-degrees interactive video, and the technical challenges associated with producing this content, were found during the study. The results of our evaluation indicate that the system can provide students with novel interaction mechanisms to improve their skills, and that it can be used as a complementary tool to the teaching and learning methods currently used in their education.

Place, publisher, year, edition, pages
Springer, 2018
Keywords
Simulation, 360-degrees interactive video, Video coding, Nurse education, Smart learning environments
National Category
Media and Communication Technology
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-80036 (URN), 10.1186/s40561-018-0074-x (DOI)
Available from: 2019-01-30 Created: 2019-01-30 Last updated: 2019-07-09. Bibliographically approved
Sotsenko, A. (2017). A Rich Context Model: Design and Implementation. (Licentiate dissertation). Växjö: Faculty of Technology, Linnaeus University
A Rich Context Model: Design and Implementation
2017 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

The latest developments in mobile devices include a variety of hardware features that allow for richer data collection and services. Numerous sensors, Internet connectivity, and low-energy Bluetooth connectivity to other devices (e.g., smart watches, activity trackers, health data monitoring devices) are just some examples of hardware that helps to provide additional information that can be used beneficially in many application domains. Among others, they could be utilized in mobile learning scenarios (for data collection in science education and field trips), in mobile health scenarios (for health data collection and monitoring of the health state of patients, changes in health conditions and/or detection of emergency situations), and in personalized recommender systems. This information captures the current context situation of the user, which could help make mobile applications more personalized and deliver a better user experience. Moreover, the context-related information collected by the mobile device and the different applications can be enriched by using additional external information sources (e.g., Web Service APIs), which help to describe the user’s context situation in more detail.

The main challenge in context modeling is the lack of generalization at the core of the model, as most existing context models depend on particular application domains or scenarios. We tackle this challenge by conceptualizing and designing a rich, generic context model. In this thesis, we present the state of the art of recent approaches used for context modeling and introduce a rich context model as an approach for modeling context in a domain-independent way. Additionally, we investigate whether context information can enhance existing mobile applications by making them sensitive to the user’s current situation. We demonstrate the reusability and flexibility of the rich context model in several case studies. The main contributions of this thesis are: (1) an overview of recent, existing research in context modeling for different application domains; (2) a theoretical foundation of the proposed approach for modeling context in a domain-independent way; (3) several case studies in different mobile application domains.
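The thesis summary refers to a multidimensional vector space model for representing context. As a minimal, hedged sketch of that general idea (the dimension names, normalization, and cosine-similarity comparison are assumptions, not the thesis's actual model), a context situation could be encoded and compared like this:

```python
import math

def context_vector(context: dict, dimensions: list) -> list:
    """Encode a context description as a fixed-order numeric vector.

    `context` maps dimension names (illustrative sensor readings, normalized
    to [0, 1]) to values; missing dimensions default to 0.
    """
    return [float(context.get(dim, 0.0)) for dim in dimensions]

def cosine_similarity(u: list, v: list) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical dimensions describing a mobile user's current situation.
DIMS = ["noise_level", "ambient_light", "walking_speed", "battery_level"]

current = context_vector({"noise_level": 0.8, "walking_speed": 0.6}, DIMS)
quiet_indoor = context_vector({"ambient_light": 0.9, "battery_level": 0.7}, DIMS)

print(cosine_similarity(current, quiet_indoor))  # low value: the situations differ
```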

Place, publisher, year, edition, pages
Växjö: Faculty of Technology, Linnaeus University, 2017. p. 103
Series
Rapporter: Fakulteten för teknik, Linnéuniversitetet ; 48
Keywords
Context modeling, rich context model, mobile users, current context of the user, mobile sensors, multidimensional vector space model, contextualization
National Category
Computer Systems
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-60850 (URN), 978-91-88357-62-5 (ISBN)
Presentation
2017-02-17, C1202, Växjö, 09:15 (English)
Available from: 2017-02-24 Created: 2017-02-22 Last updated: 2019-06-05. Bibliographically approved
Sotsenko, A., Zbick, J., Jansen, M. & Milrad, M. (2016). Flexible and Contextualized Cloud Applications for Mobile Learning Scenarios. In: Alejandro Peña-Ayala (Ed.), Mobile, Ubiquitous, and Pervasive Learning: Fundaments, Applications, and Trends (pp. 167-192). Springer Publishing Company
Flexible and Contextualized Cloud Applications for Mobile Learning Scenarios
2016 (English). In: Mobile, Ubiquitous, and Pervasive Learning: Fundaments, Applications, and Trends / [ed] Alejandro Peña-Ayala, Springer Publishing Company, 2016, p. 167-192. Chapter in book (Refereed)
Abstract [en]

This chapter describes our research efforts related to the design of mobile learning (m-learning) applications in cloud-computing (CC) environments. Many cloud-based services can be used and integrated in m-learning scenarios; hence, there is a rich source of applications that could easily be designed and deployed within the context of cloud-based services. Here, we present two cloud-based approaches: a flexible framework for the easy generation and deployment of mobile learning applications for teachers, and a flexible contextualization service to support personalized learning environments for mobile learners. The framework supports teachers in designing mobile applications and automatically deploys them, allowing teachers to create their own m-learning activities supported by mobile devices. The contextualization service is proposed to improve the content delivery of learning objects (LOs). This service allows adapting the learning content and the mobile user interface (UI) to the current context of the user. Together, this leads to a powerful and flexible framework for the provisioning of potentially ad hoc mobile learning scenarios. We describe the design and implementation of the two proposed cloud-based approaches together with scenario examples. Furthermore, we discuss the benefits of using flexible and contextualized cloud applications in mobile learning scenarios. Hereby, we contribute to this growing field of research by exploring new ways of designing and using flexible and contextualized cloud-based applications that support m-learning.

Place, publisher, year, edition, pages
Springer Publishing Company, 2016
Series
Advances in Intelligent Systems and Computing, ISSN 2194-5357 ; 406
Keywords
Mobile learning, Contextualization, Contextualized service, Cloud computing, Cloud-based services, Context modeling
National Category
Computer Systems
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-49569 (URN), 10.1007/978-3-319-26518-6_7 (DOI), 2-s2.0-84966270917 (Scopus ID), 978-3-319-26516-2 (ISBN)
Available from: 2016-02-04 Created: 2016-02-04 Last updated: 2019-06-05. Bibliographically approved
Sotsenko, A., Jansen, M., Milrad, M. & Rana, J. (2016). Using a Rich Context Model for Real-Time Big Data Analytics in Twitter. In: 2016 IEEE 4th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW). Paper presented at the IEEE 4th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW), August 22-24, 2016, Vienna, Austria (pp. 228-233). IEEE
Using a Rich Context Model for Real-Time Big Data Analytics in Twitter
2016 (English). In: 2016 IEEE 4th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW), IEEE, 2016, p. 228-233. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we present an approach for contextual big data analytics in social networks, particularly in Twitter. The combination of a Rich Context Model (RCM) with machine learning is used in order to improve the quality of the data mining techniques. We present the algorithm and architecture of our approach for real-time contextual analysis of tweets. The proposed approach can be used to enrich and empower predictive analytics or to provide relevant context-aware recommendations.
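The paper's keywords mention k-means clustering of contextualized tweets. As a hedged sketch of that general idea, and not the RCM pipeline from the paper, the example below clusters a few toy tweets by combining TF-IDF text features with one invented context feature (posting hour) and running scikit-learn's KMeans:

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy tweets plus one illustrative context feature (posting hour scaled to [0, 1]).
tweets = [
    "blood sugar high again this morning",
    "new insulin pump works great",
    "great workout at the gym today",
    "morning run before breakfast",
]
posting_hour = np.array([[7 / 24], [13 / 24], [18 / 24], [6 / 24]])

text_features = TfidfVectorizer().fit_transform(tweets)                    # sparse (4, vocab)
features = hstack([text_features, csr_matrix(posting_hour)]).tocsr()       # append context column

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
for tweet, label in zip(tweets, labels):
    print(label, tweet)
```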

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
rich context model, big data, context analytics, twitter, k-means clustering
National Category
Media and Communication Technology
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-58342 (URN), 10.1109/W-FiCloud.2016.55 (DOI), 000386667700037 (ISI), 2-s2.0-85009830198 (Scopus ID), 978-1-5090-3946-3 (ISBN), 978-1-5090-3947-0 (ISBN)
Conference
IEEE 4th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW), August 22-24, 2016, Vienna, Austria
Available from: 2016-11-30 Created: 2016-11-28 Last updated: 2019-06-05. Bibliographically approved
Sotsenko, A., Zbick, J., Jansen, M. & Milrad, M. (2015). Contextualization of Mobile Learners. In: Mohamed Hamada (Ed.), Mobile Learning: Trends, Attitudes and Effectiveness (pp. 39-54). Nova Science Publishers, Inc.
Contextualization of Mobile Learners
2015 (English). In: Mobile Learning: Trends, Attitudes and Effectiveness / [ed] Mohamed Hamada, Nova Science Publishers, Inc., 2015, p. 39-54. Chapter in book (Refereed)
Abstract [en]

This chapter describes our current research efforts related to the contextualization of learners in mobile learning activities. Substantial research in the field of mobile learning has explored aspects related to contextualized learning scenarios. However, new ways of interpreting and considering the contextual information of mobile learners are necessary. This chapter provides an overview of the state of the art of innovative approaches for supporting contextualization in mobile learning. Additionally, we describe the design and implementation of a flexible multi-dimensional vector space model to organize and process contextual data, together with visualization tools for further analysis and interpretation. We also present a study with outcomes and insights on the usage of the contextualization support for mobile learners. To conclude, we discuss the benefits of using contextualization models for learners in different use cases. Moreover, we illustrate how the proposed contextual model can easily be adapted and reused for different use cases in mobile learning scenarios and potentially other mobile fields.

Place, publisher, year, edition, pages
Nova Science Publishers, Inc., 2015
Series
Education in a Competitive and Globalizing World
Keywords
Contextualization, mobile learning, rich context model, learners’ context, learning object, learning format, contextualized mobile learning, contextual information, personalization, flexible context model
National Category
Computer Systems
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-46099 (URN), 978-1-63483-429-2 (ISBN), 1634834291 (ISBN)
Available from: 2015-09-05 Created: 2015-09-05 Last updated: 2019-06-05. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-9062-1609
