Application of action recognition and tactical optimization methods for rope skipping competitions based on artificial intelligence
Abstract
Action recognition in rope skipping competitions has traditionally relied on manual annotation and is prone to misjudging complex movements. To address these problems, this study develops an AI-based method for rope skipping action recognition and tactical optimization, using artificial intelligence to achieve efficient, accurate recognition and tactical adjustment. A Convolutional Neural Network (CNN) extracts features from video frames, and the resulting feature sequence is fed into a Long Short-Term Memory (LSTM) network, enabling accurate recognition of rope skipping actions. To optimize competition strategy, a Deep Q-Network (DQN) is used to refine tactical execution. Experimental results show that the proposed model recognizes common rope skipping movements such as the single jump, double-leg jump, and cross jump with an average accuracy of 98.4%, and that the reinforcement-learning-optimized tactical strategy significantly improves athlete performance: jumping frequency increases by 4.59% and the error rate decreases by 0.986%. This study not only provides an intelligent evaluation and optimization solution for rope skipping competitions, but also offers a reference for action recognition and tactical decision-making in other sports.
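The recognition pipeline described above (per-frame CNN features fed as a sequence into an LSTM, then a classifier head) can be sketched in miniature. This is purely illustrative NumPy code, not the paper's architecture: the "CNN" is a single linear+ReLU projection standing in for a full convolutional backbone, and all dimensions, weights, and the three-class output are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def frame_features(frame, W_conv):
    # Stand-in for the CNN backbone: one linear projection + ReLU per frame.
    return np.maximum(0.0, W_conv @ frame)

def lstm_step(x, h, c, params):
    # One LSTM cell update: input, forget, output gates and candidate state.
    W, U, b = params
    z = W @ x + U @ h + b
    H = h.shape[0]
    i = sigmoid(z[:H])
    f = sigmoid(z[H:2 * H])
    o = sigmoid(z[2 * H:3 * H])
    g = np.tanh(z[3 * H:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Toy dimensions: 12-dim frames, 8-dim features, hidden size 6, 3 action classes
# (e.g. single jump / double-leg jump / cross jump).
D_frame, D_feat, H, n_classes = 12, 8, 6, 3
W_conv = rng.normal(size=(D_feat, D_frame))
params = (rng.normal(size=(4 * H, D_feat)),
          rng.normal(size=(4 * H, H)),
          np.zeros(4 * H))
W_out = rng.normal(size=(n_classes, H))

video = rng.normal(size=(10, D_frame))  # 10 frames of a clip
h, c = np.zeros(H), np.zeros(H)
for frame in video:
    h, c = lstm_step(frame_features(frame, W_conv), h, c, params)

# Classify from the final hidden state via softmax.
logits = W_out @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()
pred = int(np.argmax(probs))
```

In a trained system the weights would of course be learned end-to-end; here they are random, so only the data flow (frames → features → recurrent state → class probabilities) is meaningful.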
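The DQN-based tactical optimization can likewise be illustrated with a much-simplified stand-in: tabular Q-learning on a hypothetical pacing problem (the paper uses a deep network; the states, actions, and rewards below are invented for illustration, where sprinting yields more jumps but builds fatigue).

```python
import random

random.seed(1)

# Hypothetical pacing MDP: state = fatigue level 0..2,
# action 0 = steady pace (recover), action 1 = sprint (more jumps, more fatigue).
def step(state, action):
    if action == 1:
        reward = 3 - state           # sprinting pays off less when fatigued
        next_state = min(state + 1, 2)
    else:
        reward = 1                   # steady pace: modest output, recovery
        next_state = max(state - 1, 0)
    return next_state, reward

Q = [[0.0, 0.0] for _ in range(3)]   # Q-table over (state, action)
alpha, gamma, eps = 0.1, 0.9, 0.1    # learning rate, discount, exploration

state = 0
for _ in range(20000):
    # Epsilon-greedy action selection.
    if random.random() < eps:
        action = random.randrange(2)
    else:
        action = max((0, 1), key=lambda a: Q[state][a])
    nxt, r = step(state, action)
    # Q-learning update toward the bootstrapped target.
    Q[state][action] += alpha * (r + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt
```

After training, the learned policy sprints when fresh and holds a steady pace when fatigued; a DQN replaces the table with a neural network over richer state features, but the update rule is the same.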
References
1. Pareek P, Thakkar A. A survey on video-based human action recognition: recent updates, datasets, challenges, and applications. Artificial Intelligence Review, 2021, 54(3): 2259-2322.
2. Zhou Z, Ding C, Li J, et al. Sequential order-aware coding-based robust subspace clustering for human action recognition in untrimmed videos. IEEE Transactions on Image Processing, 2022, 32: 13-28.
3. Banerjee A, Singh P K, Sarkar R. Fuzzy integral-based CNN classifier fusion for 3D skeleton action recognition. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 31(6): 2206-2216.
4. Khan M A, Zhang Y D, Khan S A, et al. A resource conscious human action recognition framework using 26-layered deep convolutional neural network. Multimedia Tools and Applications, 2021, 80: 35827-35849.
5. Leong M C, Prasad D K, Lee Y T, et al. Semi-CNN architecture for effective spatio-temporal learning in action recognition. Applied Sciences, 2020, 10(2): 557.
6. Cui Z. 3D-CNN-based Action Recognition Algorithm for Basketball Players. Informatica, 2024, 48(13).
7. He J Y, Wu X, Cheng Z Q, et al. DB-LSTM: Densely-connected Bi-directional LSTM for human action recognition. Neurocomputing, 2021, 444: 319-331.
8. Muhammad K, Ullah A, Imran A S, et al. Human action recognition using attention based LSTM network with dilated CNN features. Future Generation Computer Systems, 2021, 125: 820-830.
9. Zhu A, Wu Q, Cui R, et al. Exploring a rich spatial–temporal dependent relational model for skeleton-based action recognition by bidirectional LSTM-CNN. Neurocomputing, 2020, 414: 90-100.
10. Chao Z H, Ya Long Y, Yi L, et al. Deep Q Learning-Enabled Training and Health Monitoring of Basketball Players Using IoT Integrated Multidisciplinary Techniques. Mobile Networks and Applications, 2024: 1-16.
11. Xia K, Huang J, Wang H. LSTM-CNN architecture for human activity recognition. IEEE Access, 2020, 8: 56855-56866.
12. Bousmina A, Selmi M, Ben Rhaiem M A, et al. A hybrid approach based on GAN and CNN-LSTM for aerial activity recognition. Remote Sensing, 2023, 15(14): 3626.
13. Sarabu A, Santra A K. Human action recognition in videos using convolution long short-term memory network with spatio-temporal networks. Emerging Science Journal, 2021, 5(1): 25-33.
14. Wensel J, Ullah H, Munir A. ViT-ReT: Vision and recurrent transformer neural networks for human activity recognition in videos. IEEE Access, 2023.
15. Zhang Y, Peng L, Ma G, et al. Dynamic gesture recognition model based on millimeter-wave radar with ResNet-18 and LSTM. Frontiers in Neurorobotics, 2022, 16: 903197.
16. Dass S D S, Barua H B, Krishnasamy G, et al. ActNetFormer: Transformer-ResNet Hybrid Method for Semi-Supervised Action Recognition in Videos. arXiv preprint arXiv:2404.06243, 2024.
17. Li J, Gong R, Wang G. Enhancing fitness action recognition with ResNet-TransFit: Integrating IoT and deep learning techniques for real-time monitoring. Alexandria Engineering Journal, 2024, 109: 89-101.
18. Lin Y, Chi W, Sun W, et al. Human action recognition algorithm based on improved ResNet and skeletal keypoints in single image. Mathematical Problems in Engineering, 2020, 2020(1): 6954174.
19. Ronald M, Poulose A, Han D S. iSPLInception: an Inception-ResNet deep learning architecture for human activity recognition. IEEE Access, 2021, 9: 68985-69001.
20. Zhang Z, Lv Z, Gan C, et al. Human action recognition using convolutional LSTM and fully-connected LSTM with different attentions. Neurocomputing, 2020, 410: 304-316.
21. Saoudi E M, Jaafari J, Andaloussi S J. Advancing human action recognition: A hybrid approach using attention-based LSTM and 3D CNN. Scientific African, 2023, 21: e01796.
22. Ullah M, Yamin M M, Mohammed A, et al. Attention-based LSTM network for action recognition in sports. Electronic Imaging, 2021, 33: 1-6.
23. Mekruksavanich S, Jitpattanakul A, Sitthithakerngkiet K, et al. ResNet-SE: Channel attention-based deep residual network for complex activity recognition using wrist-worn wearable sensors. IEEE Access, 2022, 10: 51142-51154.
24. Dai C, Liu X, Lai J. Human action recognition using two-stream attention based LSTM networks. Applied Soft Computing, 2020, 86: 105820.
25. Shi-qiang Y, Jiang-tao Y, Zhuo L, et al. Human action recognition based on LSTM neural network. Journal of Graphics, 2021, 42(2): 174.
26. Ng W, Zhang M, Wang T. Multi-localized sensitive autoencoder-attention-LSTM for skeleton-based action recognition. IEEE Transactions on Multimedia, 2021, 24: 1678-1690.
27. Yang Y, Li F, Chang H. Enhancing Short Track Speed Skating Performance through Improved DDQN Tactical Decision Model. Sensors, 2023, 23(24): 9904.
28. Li W. An IoT-based Smart Healthcare integrated solution for Basketball using Q-Learning Algorithm. Mobile Networks and Applications, 2024: 1-14.
29. Su W, Gao M, Gao X, et al. Adaptive decision-making with deep Q-network for heterogeneous unmanned aerial vehicle swarms in dynamic environments. Computers and Electrical Engineering, 2024, 119: 109621.
30. Zhang J, Tao D. Research on deep reinforcement learning basketball robot shooting skills improvement based on end to end architecture and multi-modal perception. Frontiers in Neurorobotics, 2023, 17: 1274543.
31. Cheng W, Cheng W. Optimization research on biomechanical characteristics and motion detection technology of lower limbs in basketball sports. Molecular & Cellular Biomechanics, 2024, 21(3): 488-488.
32. Guo J, Liu Q, Chen E. A deep reinforcement learning method for multimodal data fusion in action recognition. IEEE Signal Processing Letters, 2021, 29: 120-124.
33. Chang C W, Chang C Y, Lin Y Y. A hybrid CNN and LSTM-based deep learning model for abnormal behavior detection. Multimedia Tools and Applications, 2022, 81(9): 11825-11843.
34. Ercolano G, Rossi S. Combining CNN and LSTM for activity of daily living recognition with a 3D matrix skeleton representation. Intelligent Service Robotics, 2021, 14(2): 175-185.
35. Gorji A, Bourdoux A, Pollin S, et al. Multi-view CNN-LSTM architecture for radar-based human activity recognition. IEEE Access, 2022, 10: 24509-24519.
Copyright (c) 2024 Huan Zhang
This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright on all articles published in this journal is retained by the author(s), who grant the publisher the right of first publication. Articles published in this journal are licensed under a Creative Commons Attribution 4.0 International License, which permits sharing, adaptation, and distribution provided the original published version is cited.