(Web Desk) - Robotics is now one of the fastest-growing sectors, as demand for smart machines has surged everywhere from warehouses to homes and even operating rooms.
Companies around the world are racing to build robots that look and feel as human as possible.
Now, scientists in China have developed a robot capable of realistic, human-like facial expressions.
EXPRESSIVE FACIAL FEATURES
The humanoid robot with highly expressive facial features was developed by Liu Xiaofeng, a professor at Hohai University in east China’s Jiangsu Province, and his research team.
To build it, the team devised a new algorithm for generating facial expressions on humanoid robots.
Liu noted that humanoid robots usually lack the intricate, authentic facial expressions characteristic of humans, which hampers smooth user engagement.
To address this challenge, Liu and his team developed a comprehensive two-stage method that gives their autonomous affective robot the capacity to exhibit rich, natural facial expressions.
FINE-GRAINED FACIAL EXPRESSIONS
Liu explained that in the first stage, their method generates nuanced robot facial expression images guided by facial action units (AUs), the coded muscle movements that combine to form an expression. In the second stage, they build an affective robot with many degrees of freedom for facial movement, enabling it to embody the synthesized fine-grained facial expressions, reported Xinhua.
Published in the journal IEEE Transactions on Robotics, the study presents an Action Unit (AU)-driven, disentangled facial expression synthesis method that generates nuanced robot facial expression images guided by AUs.
By harnessing facial AUs within a weakly supervised learning framework, the researchers overcome the scarcity of paired training data (matching source and target facial expression images).
“To preserve the integrity of AUs while mitigating identity interference, we leverage a latent facial attribute space to disentangle expression-related and expression-unrelated cues, employing solely the former for expression synthesis,” said researchers in the study.
“In the subsequent phase, we actualize an affective robot endowed with multifaceted degrees of freedom for facial movements, facilitating the embodiment of the synthesized fine-grained facial expressions.”
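The disentanglement idea described above can be sketched in miniature: a face representation is split into expression-related and expression-unrelated (identity) cues, and only the expression part is replaced with an AU-driven code. Everything below — the function names, latent dimensions, and the simple additive AU mapping — is an illustrative assumption; the actual method uses learned neural encoders and decoders.

```python
# Toy sketch of AU-driven disentangled expression synthesis (hypothetical
# names and dimensions; the real system is a trained neural network).
# A face latent is split into expression and identity halves; synthesis
# swaps only the expression half, driven by AU intensities.

def split_latent(latent, expr_dim):
    """Disentangle a latent vector into (expression, identity) parts."""
    return latent[:expr_dim], latent[expr_dim:]

def encode_aus(au_intensities, expr_dim):
    """Map AU intensities (0..1) to an expression code (toy linear mapping)."""
    code = [0.0] * expr_dim
    for i, a in enumerate(au_intensities):
        code[i % expr_dim] += a
    return code

def synthesize(source_latent, target_aus, expr_dim=4):
    """Keep the source identity cues; replace expression cues with AU-driven ones."""
    _, identity = split_latent(source_latent, expr_dim)
    expression = encode_aus(target_aus, expr_dim)
    return expression + identity  # new latent: target expression, same identity

source = [0.1, 0.2, 0.3, 0.4, 9.0, 8.0]  # last two dims play the "identity" role
smile_aus = [0.8, 0.0, 0.6]              # e.g. cheek raiser + lip-corner puller
new_latent = synthesize(source, smile_aus)
print(new_latent)  # identity dims (9.0, 8.0) are preserved
```

The point of the split is exactly what the quoted passage claims: because only the expression half is rewritten, the synthesized face keeps the robot's identity while taking on the target AUs.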
Researchers devised a specialized motor command mapping network that serves as a conduit between the generated expression images and the robot’s realistic facial responses.
The researchers refined the prediction of precise motor commands from the generated facial expressions by using the robot’s physical motor positions as constraints.
This refinement ensures that the robot’s facial movements produce accurate, natural expressions, according to the study.
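The role of the physical-motor constraint can be illustrated with a toy mapping: AU intensities are converted to motor angles, and every command is clamped to the motor's real range so the face is never driven past its hardware limits. The motor names, AU choices, and linear gains below are invented for illustration; the paper trains a dedicated mapping network rather than using a fixed formula.

```python
# Hypothetical sketch: expression -> motor commands under physical constraints.
# Each motor has a real angular range; raw commands are clamped into it.

MOTOR_LIMITS = {"brow": (0, 90), "jaw": (0, 45), "lip_corner": (-30, 30)}

def clamp(value, lo, hi):
    """Constrain a command to the motor's physical range."""
    return max(lo, min(hi, value))

def expression_to_commands(au_intensities):
    """Map AU intensities (0..1) to motor angles, respecting physical limits."""
    raw = {
        "brow": 120 * au_intensities.get("AU1", 0.0),             # inner brow raiser
        "jaw": 50 * au_intensities.get("AU26", 0.0),              # jaw drop
        "lip_corner": 60 * au_intensities.get("AU12", 0.0) - 30,  # lip-corner puller
    }
    return {m: clamp(v, *MOTOR_LIMITS[m]) for m, v in raw.items()}

cmds = expression_to_commands({"AU1": 0.9, "AU12": 1.0})
print(cmds)  # raw brow command 108 is clamped to the 90-degree limit
```

Without the clamp, a strong AU intensity would request an angle the hardware cannot reach; constraining predictions by the achievable motor positions is what keeps the embodied expression consistent with the generated image.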
Finally, qualitative and quantitative evaluations on the benchmark EmotioNet dataset verify the effectiveness of the proposed generation method.
“Results on the self-developed affective robot indicate that our method achieves a promising generation of specific facial expressions with given AUs, significantly enhancing the affective human-robot interaction,” said the researchers.