
A Real-time Feedback System for Presentation Skills
PROCEEDINGS
Stephan Kopf, Daniel Schön, Benjamin Guthier, Roman Rietsche, Wolfgang Effelsberg, University of Mannheim, Germany
EdMedia + Innovate Learning, Montreal, Quebec, Canada. ISBN 978-1-939797-16-2. Publisher: Association for the Advancement of Computing in Education (AACE), Waynesville, NC.
Abstract
Giving a presentation is an everyday skill in many people's educational and professional lives. However, training is still rare and expensive. Important aspects like talking speed or body language are well understood, and many good practices exist, but they are difficult to evaluate: doing so requires at least one experienced trainer who attends the presentation and is able to assess it and give constructive feedback. Our aim is to take a first step towards an automatic feedback system for presentation skills using common motion-detection technology. We implemented a software tool based on Microsoft's Kinect that captures gestures, eye contact, movement, speech, and the speed of slide changes. A short evaluation of eight presentations in a university context showed that speaker movement and body gestures are detected well, while not all spoken words and slide changes could be recognized due to the Kinect's limited technical capabilities.
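The abstract does not include implementation details, but the movement and gesture measures it mentions are easy to illustrate. The following Python sketch is not the authors' code: it assumes skeleton joint positions (such as those delivered by the Kinect SDK at roughly 30 fps) have already been extracted into simple per-frame dictionaries, and the joint names, coordinate conventions, and gesture heuristic are illustrative assumptions only.

```python
# Minimal sketch (not the authors' implementation): estimating speaker
# movement and a crude gesture cue from per-frame skeleton data.
# Assumption: each frame maps joint names to (x, y, z) positions in metres,
# with y pointing up, as in the Kinect skeleton coordinate space.
import math

FRAME_RATE = 30.0  # Kinect delivers skeleton frames at roughly 30 fps

def distance(a, b):
    """Euclidean distance between two 3D joint positions."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def movement_speed(frames, joint="spine"):
    """Average speed (m/s) of a reference joint across consecutive frames."""
    if len(frames) < 2:
        return 0.0
    path = sum(distance(f1[joint], f2[joint])
               for f1, f2 in zip(frames, frames[1:]))
    return path * FRAME_RATE / (len(frames) - 1)

def hands_raised(frame):
    """Crude gesture cue: a hand held above its elbow suggests gesturing."""
    return (frame["hand_left"][1] > frame["elbow_left"][1] or
            frame["hand_right"][1] > frame["elbow_right"][1])

# Two synthetic frames: the speaker's spine moves 0.1 m between frames.
f1 = {"spine": (0.0, 1.0, 2.0), "hand_left": (0.3, 0.9, 2.0),
      "elbow_left": (0.25, 1.0, 2.0), "hand_right": (-0.3, 1.4, 2.0),
      "elbow_right": (-0.25, 1.2, 2.0)}
f2 = {**f1, "spine": (0.1, 1.0, 2.0)}
print(movement_speed([f1, f2]))  # ~3.0 m/s over one frame interval
print(hands_raised(f1))          # True (right hand above right elbow)
```

A real system would read these joints from the Kinect skeleton stream and smooth the positions over time before applying any threshold, since raw depth-based joint estimates are noisy.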
Citation
Kopf, S., Schön, D., Guthier, B., Rietsche, R. & Effelsberg, W. (2015). A Real-time Feedback System for Presentation Skills. In S. Carliner, C. Fulford & N. Ostashewski (Eds.), Proceedings of EdMedia 2015--World Conference on Educational Media and Technology (pp. 1686-1693). Montreal, Quebec, Canada: Association for the Advancement of Computing in Education (AACE). Retrieved June 6, 2023 from https://www.learntechlib.org/primary/p/151444/.
© 2015 Association for the Advancement of Computing in Education (AACE)