Recognition of football footwork based on dual-model convolutional neural network-driven biomechanics patterns
Abstract
With the growing popularity of football, more and more scholars are turning to digital, systematic management methods to further improve the safety and effectiveness of the sport. Just as cells within an organism operate in a highly coordinated and biomechanically regulated manner, football players' movements involve complex biomechanical processes, and building a football data analysis system to guide athletes is akin to understanding and modulating the molecular interactions and mechanical forces within cells to optimize their function. However, most existing systems of this kind rely on video monitoring technology, and in practice they are constrained by the deployment environment and expensive to operate. To make football movement analysis widely accessible, this paper uses intelligent wearable devices and a dual-model convolutional neural network (DMCNN) to recognize football players' footwork, covering the virtual step (behind), the step (Puskás), the push (sliding), the test (inside and outside cycling), and the jump (Ronaldo), and optimizes neural network performance by adjusting the convolution kernel size and stride parameters. The resulting model outperforms the k-nearest neighbor (KNN) and support vector machine (SVM) algorithms and provides a more accurate picture of football players' biomechanical patterns, much as advanced cellular and molecular biomechanics techniques offer deeper insights into cellular behavior and function, potentially leading to more refined training regimens and injury prevention strategies in football.
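To make the dual-model idea concrete, the following is a minimal sketch of a two-branch 1-D CNN over windowed wearable-sensor data, where each branch uses a different convolution kernel size and stride, the two tunable parameters highlighted in the abstract. The class count, channel layout, window length, and all hyperparameters here are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a dual-model (two-branch) CNN for footwork recognition
# from windowed wearable-sensor data. All shapes, labels, and hyperparameters
# are illustrative assumptions, not the paper's reported settings.
import torch
import torch.nn as nn

FOOTWORK_CLASSES = 5   # assumed: the five footwork categories named in the abstract
CHANNELS = 6           # assumed: 3-axis accelerometer + 3-axis gyroscope
WINDOW = 128           # assumed: samples per sliding window

class Branch(nn.Module):
    """One convolutional branch; kernel size and stride are the tunable knobs."""
    def __init__(self, kernel_size, stride):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(CHANNELS, 32, kernel_size, stride=stride, padding=kernel_size // 2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size, stride=stride, padding=kernel_size // 2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis to a fixed-size vector
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)   # (batch, 64)

class DualModelCNN(nn.Module):
    """Two branches with different kernel size / stride, fused before the classifier."""
    def __init__(self):
        super().__init__()
        self.short = Branch(kernel_size=3, stride=1)   # fine-grained temporal features
        self.long = Branch(kernel_size=9, stride=2)    # coarser, longer-range features
        self.classifier = nn.Linear(64 * 2, FOOTWORK_CLASSES)

    def forward(self, x):                              # x: (batch, CHANNELS, WINDOW)
        fused = torch.cat([self.short(x), self.long(x)], dim=1)
        return self.classifier(fused)                  # class logits

if __name__ == "__main__":
    model = DualModelCNN()
    dummy = torch.randn(8, CHANNELS, WINDOW)   # a batch of 8 sensor windows
    print(model(dummy).shape)                  # torch.Size([8, 5])
```

The two-branch layout is one plausible reading of "dual model": a short-kernel, small-stride branch captures fine temporal detail of a kick, while a long-kernel, larger-stride branch captures the coarser shape of the whole movement, and their pooled features are concatenated before classification.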
Copyright (c) 2025 Weihong Shen
This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright on all articles published in this journal is retained by the author(s), while the author(s) grant the publisher the right, as the original publisher, to publish the article.
Articles published in this journal are licensed under a Creative Commons Attribution 4.0 International, which means they can be shared, adapted and distributed provided that the original published version is cited.