lnu.se Publications

Publications (10 of 28)
Georgiadis, A. & Yousefi, S. (2017). Analysis of the user experience in a 3D gesture-based supported mobile VR game. In: VRST '17: Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology. Paper presented at 23rd ACM Symposium on Virtual Reality Software and Technology. ACM Publications, Article ID 47.
2017 (English). In: VRST '17: Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, ACM Publications, 2017, article id 47. Conference paper, Published paper (Refereed).
Abstract [en]

The work presented in this paper explored the enhancement of User Experience (UX) by introducing a novel gesture-based controller in a mobile multiplayer Virtual Reality (VR) game. Using only the smartphone's RGB camera, the image input served both for gesture analysis, capable of understanding user actions, and for segmenting the real hand, which was rendered in the Virtual Environment (VE). Users were also able to share the VR space by cooperating in a survival-strategy scenario. The results from the user studies indicated that both the bare-hand controller and the addition of another player in the VR scene affected the participants' experience. Users had a stronger feeling of presence in the VE when participating with another user, and the visual representation of their hand in the VR world made the interactions seem more natural. Although a number of limitations remain, this project shows the approach to be capable of offering a natural and engaging solution for VR interaction, delivering rich UX while maintaining a low entry barrier for end users.

Place, publisher, year, edition, pages
ACM Publications, 2017
National Category
Computer Systems; Signal Processing
Research subject
Computer and Information Sciences Computer Science, Media Technology; Computer and Information Sciences Computer Science
Identifiers
urn:nbn:se:lnu:diva-75426 (URN); 10.1145/3139131.3141224 (DOI); 000455354500047 (); 978-1-4503-5548-3 (ISBN)
Conference
23rd ACM Symposium on Virtual Reality Software and Technology
Available from: 2018-06-09. Created: 2018-06-09. Last updated: 2019-05-27. Bibliographically approved.
Yousefi, S., Kidane, M., Delgado, Y., Chana, J. & Reski, N. (2016). 3D Gesture-Based Interaction for Immersive Experience in Mobile VR. In: 2016 23rd International Conference on Pattern Recognition (ICPR), Cancún Center, Cancún, México, December 4-8, 2016. Paper presented at 23rd International Conference on Pattern Recognition (ICPR): Image Analysis and Machine Learning for Scene Understanding, Cancún, 4-8 December, 2016 (pp. 2122-2127). Cancún: IEEE Press.
2016 (English). In: 2016 23rd International Conference on Pattern Recognition (ICPR), Cancún Center, Cancún, México, December 4-8, 2016, Cancún: IEEE Press, 2016, p. 2122-2127 (6 pp.). Conference paper, Published paper (Refereed).
Abstract [en]

In this paper we introduce a novel solution for real-time 3D hand gesture analysis using the embedded 2D camera of a mobile device. The presented framework is based on forming a large database of hand gestures, including the ground-truth information of hand poses and details of finger joints in 3D. For a query frame captured by the mobile device's camera in real time, the gesture analysis system finds the best match from the database. Once the best match is found, the corresponding ground-truth information is used for interaction in the designed interface. The presented framework performs extremely efficient gesture analysis (more than 30 fps) under flexible lighting conditions and against complex backgrounds with dynamic movement of the mobile device. The introduced work is implemented on Android and tested on a Gear VR headset.

Place, publisher, year, edition, pages
Cancun: IEEE Press, 2016. p. 6
Series
International Conference on Pattern Recognition, ISSN 1051-4651
Keywords
Gesture and Behavior Analysis, Image and video analysis and understanding, Human Computer Interaction
National Category
Signal Processing
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-61447 (URN); 10.1109/ICPR.2016.7899949 (DOI); 000406771302019 (); 2-s2.0-85019102175 (Scopus ID); 9781509048465 (ISBN); 9781509048472 (ISBN)
Conference
23rd International Conference on Pattern Recognition (ICPR): Image Analysis and Machine Learning for Scene Understanding, Cancún, 4-8 December, 2016
Available from: 2017-03-16. Created: 2017-03-16. Last updated: 2019-06-11. Bibliographically approved.
Yousefi, S. & Li, H. (2015). 3D Hand Gesture Analysis through a Real-time Gesture Search Engine. International Journal of Advanced Robotic Systems, 12, Article ID 67.
2015 (English). In: International Journal of Advanced Robotic Systems, ISSN 1729-8806, E-ISSN 1729-8814, Vol. 12, article id 67. Article in journal (Refereed), Published.
Abstract [en]

3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction might be the most effective alternative to current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes the images of various articulated hand gestures with the annotated 3D position/orientation parameters of the hand joints. Our unique matching algorithm is based on the hierarchical scoring of the low-level edge-orientation features between the query frames and the database, and retrieving the best match. Once the best match is found from the database at each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.
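The matching idea in this abstract — score a query frame against a database of annotated gesture images using low-level edge-orientation features and return the best match — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the single-level (non-hierarchical) scoring, and the tiny synthetic "gestures" are all assumptions.

```python
import numpy as np

def edge_orientation_histogram(img, bins=8):
    """Magnitude-weighted histogram of gradient orientations, normalized to sum 1."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # fold orientations into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

def best_match(query, database):
    """Index of the database frame whose orientation histogram is closest to the query's."""
    q = edge_orientation_histogram(query)
    scores = [np.linalg.norm(q - edge_orientation_histogram(d)) for d in database]
    return int(np.argmin(scores))

# Tiny synthetic demo: vertical-stripe vs horizontal-stripe "gestures".
rng = np.random.default_rng(0)
stripes = ((np.arange(32) // 4) % 2).astype(float)
vert = np.tile(stripes, (32, 1))   # intensity varies along x -> edge orientations near 0
horiz = vert.T.copy()              # intensity varies along y -> edge orientations near pi/2
database = [horiz, vert]           # each entry would carry annotated 3D joint parameters
query = vert + 0.02 * rng.standard_normal((32, 32))  # noisy query frame
print(best_match(query, database))
```

In the described system the index returned by `best_match` would be used to look up the pre-recorded 3D position/orientation parameters of the hand joints for that frame; the "hierarchical" scoring of the paper would replace the single flat histogram comparison with a coarse-to-fine cascade.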

National Category
Signal Processing
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-40975 (URN); 10.5772/60045 (DOI)
Available from: 2015-02-12. Created: 2015-03-18. Last updated: 2017-12-04. Bibliographically approved.
Yousefi, S. & Li, H. (2015). 3D Interaction through a Real-time Gesture Search Engine. In: Computer Vision - ACCV 2014 Workshops: Singapore, Singapore, November 1-2, 2014, Revised Selected Papers, Part II. Paper presented at 12th Asian Conference on Computer Vision (ACCV), 2nd Workshop on User-Centred Computer Vision (pp. 199-213). Springer
2015 (English). In: Computer Vision - ACCV 2014 Workshops: Singapore, Singapore, November 1-2, 2014, Revised Selected Papers, Part II, Springer, 2015, p. 199-213. Conference paper, Published paper (Refereed).
Abstract [en]

3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction might be the most effective alternative to current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes the images of various articulated hand gestures with the annotated 3D position/orientation parameters of the hand joints. Our unique matching algorithm is based on the hierarchical scoring of the low-level edge-orientation features between the query frames and the database, and retrieving the best match. Once the best match is found from the database at each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.

Place, publisher, year, edition, pages
Springer, 2015
National Category
Signal Processing
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-40977 (URN); 10.1007/978-3-319-16631-5_15 (DOI); 978-3-319-16630-8 (ISBN)
Conference
12th Asian Conference on Computer Vision (ACCV), 2nd Workshop on User-Centred Computer Vision
Available from: 2015-02-12. Created: 2015-03-18. Last updated: 2017-04-19. Bibliographically approved.
Tewele, M. K., Yousefi, S. & Milrad, M. (2015). Supporting video conference communication using a vision-based human facial synthesis approach. In: Proceedings of IEEE SAI Intelligent Systems Conference (IntelliSys), 2015: . Paper presented at IEEE SAI Intelligent Systems Conference (IntelliSys), 2015 (pp. 807-812). London: IEEE conference proceedings
2015 (English). In: Proceedings of IEEE SAI Intelligent Systems Conference (IntelliSys), 2015, London: IEEE conference proceedings, 2015, p. 807-812. Conference paper, Published paper (Refereed).
Abstract [en]

Facial expressions play an important role in human communication. In many cases where anonymity is a high priority, the identity of a person needs to be hidden or replaced by another one. Recent advancements in facial expression analysis technology have been used to duplicate human facial expressions with an avatar. However, a 3D avatar does not convey the same feeling as a real human face does. This paper documents an exploratory study investigating how vision-based facial analysis can be used to match someone's facial expressions and head movements with pre-recorded video segments of another person. As a result of these efforts, the identity of that person can be replaced with another person in real time for videoconference-supported communication. The proposed technical solutions have been implemented to support real-time communication.

Place, publisher, year, edition, pages
London: IEEE conference proceedings, 2015
Keywords
avatars, emotion recognition, face recognition, teleconferencing, video communication, video signal processing, videoconference communication, facial expression analysis, vision-based human facial synthesis, databases, real-time systems, servers, streaming media, facial analysis, head motion analysis, video synthesis
National Category
Interaction Technologies; Media Engineering
Research subject
Computer and Information Sciences Computer Science
Identifiers
urn:nbn:se:lnu:diva-50757 (URN); 10.1109/IntelliSys.2015.7361234 (DOI); 000378642300115 (); 2-s2.0-84962648998 (Scopus ID); 978-1-4673-7606-8 (ISBN)
Conference
IEEE SAI Intelligent Systems Conference (IntelliSys), 2015
Available from: 2016-03-15. Created: 2016-03-15. Last updated: 2017-04-19. Bibliographically approved.
Yousefi, S., Li, H. & Liu, L. (2014). 3D Gesture Analysis Using a Large-Scale Gesture Database. In: George Bebis et al (Ed.), Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings, Part I. Paper presented at 10th International Symposium on Visual Computing (pp. 206-217). Springer
2014 (English). In: Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings, Part I / [ed] George Bebis et al., Springer, 2014, p. 206-217. Conference paper, Published paper (Refereed).
Abstract [en]

3D gesture analysis is a highly desired feature of future interaction design. Specifically, in augmented environments, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction might be the most effective alternative to current input facilities. This paper introduces a novel solution for real-time 3D gesture analysis using an extremely large gesture database. This database includes the images of various articulated hand gestures with the annotated 3D position/orientation parameters of the hand joints. Our unique search algorithm is based on the hierarchical scoring of the low-level edge-orientation features between the query input and the database, and retrieving the best match. Once the best match is found from the database in real time, the pre-calculated 3D parameters can instantly be used for gesture-based interaction.

Place, publisher, year, edition, pages
Springer, 2014
Series
Lecture Notes in Computer Science, ISSN 0302-9743
National Category
Signal Processing
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-40973 (URN); 10.1007/978-3-319-14249-4_20 (DOI); 978-3-319-14248-7 (ISBN)
Conference
10th International Symposium on Visual Computing
Available from: 2015-02-12. Created: 2015-03-18. Last updated: 2017-04-19. Bibliographically approved.
Yousefi, S. (2014). 3D Gesture Recognition and Tracking for Next Generation of Smart Devices: Theories, Concepts, and Implementations. (Doctoral dissertation). Stockholm: KTH Royal Institute of Technology
2014 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

The rapid development of mobile devices during the recent decade has been greatly driven by interaction and visualization technologies. Although touchscreens have significantly enhanced interaction technology, it is predictable that with future mobile devices, e.g., augmented-reality glasses and smart watches, users will demand more intuitive inputs such as free-hand interaction in 3D space. Specifically, for manipulation of the digital content in augmented environments, 3D hand/body gestures will be essential. Therefore, 3D gesture recognition and tracking are highly desired features for interaction design in future smart environments. Due to the complexity of hand/body motions, and the limitations of mobile devices in performing expensive computations, 3D gesture analysis remains an extremely difficult problem to solve.

This thesis aims to introduce new concepts, theories and technologies for natural and intuitive interaction in future augmented environments. The contributions of this thesis support the concept of bare-hand 3D gestural interaction and interactive visualization on future smart devices. The introduced technical solutions enable effective interaction in the 3D space around the smart device. High-accuracy and robust 3D motion analysis of hand/body gestures is performed to facilitate 3D interaction in various application scenarios. The proposed technologies enable users to control, manipulate, and organize digital content in 3D space.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2014. p. xii, 101
Series
TRITA-CSC-A, ISSN 1653-5723 ; 14:02
Keywords
3D gestural interaction, gesture recognition, gesture tracking, 3D visualization, 3D motion analysis, augmented environments
National Category
Media and Communication Technology
Research subject
Media Technology
Identifiers
urn:nbn:se:lnu:diva-40974 (URN); 978-91-7595-031-0 (ISBN)
Public defence
2014-03-17, F3, Lindstedtsvägen 26, KTH, 13:15 (English)
Available from: 2014-02-26. Created: 2015-03-18. Last updated: 2018-01-11. Bibliographically approved.
Kondori, F. A., Yousefi, S., Ostovar, A., Liu, L. & Li, H. (2014). A Direct Method for 3D Hand Pose Recovery. In: Pattern Recognition (ICPR), 2014 22nd International Conference on. Paper presented at 22nd International Conference on Pattern Recognition (ICPR 2014), Stockholm, August 2014 (pp. 345-350). IEEE Press.
2014 (English). In: Pattern Recognition (ICPR), 2014 22nd International Conference on, IEEE Press, 2014, p. 345-350. Conference paper, Published paper (Refereed).
Abstract [en]

This paper presents a novel approach for performing intuitive 3D gesture-based interaction using depth data acquired by Kinect. Unlike current depth-based systems that focus only on the classical gesture recognition problem, we also consider 3D gesture pose estimation for creating immersive gestural interaction. In this paper, we formulate the gesture-based interaction system as a combination of two separate problems: gesture recognition and gesture pose estimation. We focus on the second problem and propose a direct method for recovering hand motion parameters. Based on the range images, a new version of the optical flow constraint equation is derived, which can be utilized to directly estimate 3D hand motion without any need to impose other constraints. Our experiments illustrate that the proposed approach performs properly in real time with high accuracy. As a proof of concept, we demonstrate the system's performance in 3D object manipulation. This application is intended to explore the system's capabilities in real-time biomedical applications. Finally, a system usability test is conducted to evaluate the learnability, user experience and interaction quality of 3D interaction in comparison to 2D touchscreen interaction.
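The "direct method" idea — each range-image pixel yields one linear constraint on the six rigid-motion parameters, so hand motion is recovered in a single least-squares solve rather than by iterative constraint enforcement — can be sketched numerically. This is a sketch under assumptions: in the paper the per-pixel coefficient rows come from range-image derivatives, which are replaced here by random stand-ins so the example stays self-contained.

```python
import numpy as np

# theta = (tx, ty, tz, wx, wy, wz): translation and rotation between two frames.
rng = np.random.default_rng(42)
theta_true = np.array([0.01, -0.02, 0.005, 0.001, -0.003, 0.002])

# Hypothetical constraint rows: in the paper each row would be built from the
# range-image gradients and the pixel's 3D position; random stand-ins used here.
n_pixels = 500
A = rng.standard_normal((n_pixels, 6))
b = A @ theta_true + 0.001 * rng.standard_normal(n_pixels)  # small measurement noise

# Direct recovery: stacking all pixel constraints gives an overdetermined linear
# system A @ theta = b, solved in one least-squares step.
theta_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(theta_est, 4))
```

With hundreds of pixel constraints and small noise, the estimate lands very close to the true parameters, which is what makes a per-frame real-time solve feasible.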

Place, publisher, year, edition, pages
IEEE Press, 2014
National Category
Signal Processing
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-40981 (URN); 10.1109/ICPR.2014.68 (DOI)
Conference
22nd International Conference on Pattern Recognition (ICPR 2014), Stockholm, August 2014
Available from: 2014-04-11. Created: 2015-03-18. Last updated: 2017-04-19. Bibliographically approved.
Yousefi, S., Abedan Kondori, F. & Li, H. (2014). Bare-hand Gesture Recognition and Tracking through the Large-scale Image Retrieval. Paper presented at 9th International Conference on Computer Vision Theory and Applications (VISAPP). SciTePress.
2014 (English). Conference paper, Published paper (Refereed).
Place, publisher, year, edition, pages
SciTePress, 2014
National Category
Signal Processing
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-40983 (URN)
Conference
9th International Conference on Computer Vision Theory and Applications (VISAPP)
Available from: 2014-02-25. Created: 2015-03-18. Last updated: 2017-04-19. Bibliographically approved.
Abedan Kondori, F., Yousefi, S. & Li, H. (2014). Direct three-dimensional head pose estimation from Kinect-type sensors. Electronics Letters, 50(4), 268-270
2014 (English). In: Electronics Letters, ISSN 0013-5194, E-ISSN 1350-911X, Vol. 50, no. 4, p. 268-270. Article in journal (Refereed), Published.
Abstract [en]

A direct method for recovering three-dimensional (3D) head motion parameters from a sequence of range images acquired by Kinect sensors is presented. Based on the range images, a new version of the optical flow constraint equation is derived, which can be used to directly estimate 3D motion parameters without any need of imposing other constraints. Since all calculations with the new constraint equation are based on the range images, Z(x, y, t), the existing techniques and experiences developed and accumulated on the topic of motion from optical flow can be directly applied simply by treating the range images as normal intensity images I(x, y, t). In this reported work, it is demonstrated how to employ the new optical flow constraint equation to recover the 3D motion of a moving head from sequences of range images, and furthermore, how to use an old trick to handle the case when the optical flow is large. It is shown, in the end, that the performance of the proposed approach is comparable with that of some of the state-of-the-art approaches that use range data to recover 3D motion parameters.
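For reference, one standard form of an optical-flow-style constraint on range images (the authors' exact derivation and notation may differ) is the range-flow constraint:

```latex
% Range-flow constraint: the partial derivatives Z_x, Z_y, Z_t of the range
% image Z(x, y, t) linearly constrain the 3D velocity of each surface point.
Z_x \dot{X} + Z_y \dot{Y} - \dot{Z} = -Z_t
```

Substituting a rigid-motion model \(\dot{\mathbf{P}} = \mathbf{t} + \boldsymbol{\omega} \times \mathbf{P}\) turns this into one linear equation per pixel in the six unknowns \((\mathbf{t}, \boldsymbol{\omega})\), which is what allows the head motion to be estimated directly from many pixels without imposing additional constraints.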

National Category
Signal Processing
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
urn:nbn:se:lnu:diva-40987 (URN); 10.1049/el.2013.2489 (DOI); 000331405200019 ()
Available from: 2014-03-06. Created: 2015-03-18. Last updated: 2017-12-04. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-2203-5805