Xuanchi Ren is currently an undergraduate student at the Hong Kong University of Science and Technology. In spring 2020, he was an exchange student at EPFL, where he was fortunate to work with Prof. Alexandre Alahi. Since July 2020, he has been a full-time research intern in the Intelligent Multimedia Group at Microsoft Research Asia, supervised by Dr. Yuwang Wang and Prof. Wenjun Zeng. He is currently working with Prof. Qifeng Chen and Dr. Li Erran Li as a research assistant.
His research interests focus on computer vision and deep learning. He has experience in disentangled representation learning, autonomous driving, 3D reconstruction, video generation, human motion synthesis, and low-level vision.
He plans to apply for Ph.D. programs starting in Fall 2022. Here is his Curriculum Vitae.
Exchange Student, Spring 2020
BSc in Computer Science and Math, 2017-2022
We present a learning-based approach with a pose perceptual loss for automatic music video generation. Our method can produce a realistic dance video that conforms to the beats and rhythms of almost any given piece of music. To achieve this, we first generate a human skeleton sequence from the music and then apply a learned pose-to-appearance mapping to generate the final video. In the skeleton-generation stage, we utilize two discriminators to capture different aspects of the sequence and propose a novel pose perceptual loss to produce natural dances. In addition, we provide a new cross-modal evaluation of dance quality, which estimates the similarity between the two modalities of music and dance. Finally, a user study demonstrates that the dance videos synthesized by the presented approach are surprisingly realistic.