Nvidia’s new technology automates facial animation using sound files

Nvidia’s new technology, now available for beta testing, uses neural networks to animate digital faces from sound files.

Nvidia’s Omniverse simulation platform has received a beta version of a new feature called Audio2Face. The system lets users input an audio file of a voice; a digital face is then animated automatically to match the voice lines. The current version uses Nvidia’s Digital Mark character model, but users could substitute any face with the same rigging and bones.

The system currently works with most languages and offers several sliders for adjusting details, and as the software is still in beta, later updates could bring further improvements. As reported by PC Gamer, the system uses a neural network that matches the animation to the audio in real time. Once the software is downloaded, the program builds a TensorRT engine that optimises the neural network for whatever hardware is running it. Changes can then be made in real time or baked in permanently.
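Audio2Face’s actual model is proprietary and its internals are not described in the article, but the general idea of audio-driven facial animation can be sketched in a few lines: slice the audio into short windows, extract a feature per window, and map it to per-frame animation weights. The sketch below is a toy stand-in, using loudness to drive a single hypothetical `jaw_open` channel; a real system would predict many face channels from learned acoustic features with a neural network.

```python
# Toy sketch of audio-driven facial animation (NOT Nvidia's method):
# map windows of an audio signal to per-frame blendshape weights.
import math

def rms(window):
    """Root-mean-square loudness of one audio window."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def audio_to_jaw_open(samples, window_size=160):
    """Map audio windows to per-frame 'jaw_open' weights in [0, 1].

    'jaw_open' is a hypothetical channel name for illustration; a real
    system would drive many blendshape or bone channels at once.
    """
    weights = []
    for start in range(0, len(samples) - window_size + 1, window_size):
        loudness = rms(samples[start:start + window_size])
        weights.append(min(1.0, loudness * 4.0))  # crude gain + clamp
    return weights

# Fake "speech": a 440 Hz tone whose volume rises and then falls.
sr = 16000
samples = [
    math.sin(2 * math.pi * 440 * t / sr) * (0.5 - abs(t / sr - 0.05) * 10)
    for t in range(sr // 10)
]
frames = audio_to_jaw_open(samples)  # one weight per animation frame
```

Each element of `frames` would then pose the rig for one frame, which is roughly what a real-time system does continuously as audio streams in.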

Several tutorial videos on Nvidia On Demand allow anyone with capable hardware to try the tech.

Earlier this month, a truck full of Nvidia’s GeForce graphics cards was stolen in a robbery in California. Each of the cards is valued at between £240 and £1,440. The truck was on its way from San Francisco to the company’s Southern California distribution centre. If the cards find their way to buyers, EVGA has said that it will not register the cards or honour warranty or upgrade claims on any stolen card. It also reminded buyers that it is a criminal and civil offence to buy or receive stolen property.

In other news, those buying, selling, and creating non-fungible tokens in the US will have to start paying taxes on their investments.
