
Exploring the Effects of Group Interaction in Large Display Systems

Volume 14, Number 1, January 2018, pp. 159-167
DOI: 10.23940/ijpe.18.01.p17.159167

Hao Jianga,b,c,*, Chang Gaob,c, Tianlu Maob,c, Hui Lid, Zhaoqi Wangb,c

aHunan Engineering Laboratory of Digital Preservation Technology of Traditional Settlements, Hunan, 421002, China
bBeijing Key Laboratory of Mobile Computing and Pervasive Device, Beijing, 100190, China
cInstitute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China
dSchool of Computer Science and Technology, Sichuan University, Sichuan, 610065, China

(Submitted on October 18, 2017; Revised on November 19, 2017; Accepted on December 17, 2017)


Large display systems have been successfully applied in the virtual reality domain because they provide a strong sense of immersion through a large visual space and high display resolution. However, most previous studies of interaction methods for such systems have focused on single or dual users. In this paper, we study the effects of integrating group interaction into such systems and propose a framework called “Groupnect”, which enables a unique group interaction experience on a large display system. Using optical tracking and 3D gesture recognition technologies, our approach automatically recognizes gesture-based control signals from 20 users simultaneously and triggers corresponding real-time actions in a back-end system. A comparison experiment between a standard interaction mode and the group interaction mode shows that group interaction promotes both the physical and mental participation of users. This suggests considerable potential for group interaction applications in entertainment, education, and training.
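The pipeline described above (tracked 3D positions per user → gesture classification → real-time back-end actions, capped at 20 simultaneous users) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the names `TrackedUser`, `classify_gesture`, and `GroupDispatcher`, the swipe threshold, and the toy displacement-based classifier are all assumptions standing in for the actual optical-tracking and 3D gesture recognition components.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical types/names for illustration; not from the paper.

@dataclass
class TrackedUser:
    user_id: int
    positions: List[Tuple[float, float, float]]  # recent 3D marker positions


def classify_gesture(positions):
    """Toy stand-in for a 3D gesture recognizer: a net rightward hand
    displacement is read as 'swipe_right', leftward as 'swipe_left',
    anything else as 'idle'. The 0.3 m threshold is an assumption."""
    if len(positions) < 2:
        return "idle"
    dx = positions[-1][0] - positions[0][0]
    if dx > 0.3:
        return "swipe_right"
    if dx < -0.3:
        return "swipe_left"
    return "idle"


class GroupDispatcher:
    """Per frame, classifies each tracked user's gesture and records the
    corresponding back-end action, for at most max_users users."""

    def __init__(self, max_users: int = 20):
        self.max_users = max_users
        self.actions: List[Tuple[int, str]] = []

    def process_frame(self, users: List[TrackedUser]):
        for user in users[: self.max_users]:
            gesture = classify_gesture(user.positions)
            if gesture != "idle":
                # In a real system this would trigger a back-end action;
                # here we just log (user_id, gesture) pairs.
                self.actions.append((user.user_id, gesture))
        return self.actions


if __name__ == "__main__":
    dispatcher = GroupDispatcher(max_users=20)
    users = [
        TrackedUser(1, [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0)]),   # clear swipe right
        TrackedUser(2, [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0)]),  # too small: idle
    ]
    print(dispatcher.process_frame(users))
```

The per-user loop reflects the key design point in the abstract: each user's control signal is recognized independently per frame, so adding users scales the dispatch loop rather than changing the recognition logic.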



