Modern live sports broadcasts display a wide variety of graphic visualizations identifying key players in a particular play. Traditionally, these graphics are created with extensive manual annotation for post-match analysis and take a significant amount of time to be produced. To create such visualizations in near real-time, automatic on-screen player identification and localization is essential. However, it is a challenging vision problem, especially for sports such as American football where the players wear elaborate protective equipment. In this work, we propose a novel approach which uses sensor data streams captured by wearables to automatically identify and locate on-screen players with low latency and high accuracy. The approach estimates a field registration homography using on-field player positions from RFID sensors, which is then used to identify and locate individual players on-screen. Experiments using American football data show that the method outperforms a deep learning based state-of-the-art (SOTA) vision-only field registration model both in terms of accuracy of the homography and also success rate of correct homography computation. On a dataset of over 150 replay clips, the proposed method correctly estimated the homography for approximately 25% additional clips as compared to the SOTA method. We demonstrate the efficacy of our method by applying it to the problem of rendering visualizations around key players within a few minutes of the live play. The player identification accuracy for these key players was over 96% across all clips, with an end-to-end latency of less than 1 minute.
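To make the core idea concrete: field registration amounts to estimating a 3x3 homography that maps points on the field plane (here, RFID-derived player positions) to pixel coordinates in the broadcast frame; once estimated, the same homography projects every tracked player into the image. The sketch below is an illustration of this general technique using the standard direct linear transform (DLT), not the paper's actual pipeline; the function names and coordinate conventions are assumptions for the example.

```python
import numpy as np

def estimate_homography(field_pts, image_pts):
    """Estimate the 3x3 homography mapping field-plane points (e.g. from
    RFID sensors, in yards) to image pixel coordinates, using the DLT.
    Requires at least 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(field_pts, image_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography (up to scale) is the null vector of A: the right
    # singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def project(H, field_pt):
    """Project one field-plane point into image pixel coordinates."""
    p = H @ np.array([field_pt[0], field_pt[1], 1.0])
    return p[:2] / p[2]
```

With the homography in hand, identifying an on-screen player reduces to projecting each sensor position into the frame and matching it to a detected player region. In practice one would also need robust estimation (e.g. RANSAC) to handle noisy or mismatched correspondences.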