YouTube recently added a Stories section, and to the delight of fans, they can now post their day-to-day events for 24 hours, as they have been doing on Snapchat and Instagram. Now the video giant has added new filters for users to try, built on new machine learning technology from Google.
Google is taking big steps in AR and wants users to have a filter that works with every kind of facial expression, from a smile to a frown, while still producing the best possible result. Although the feature is similar to what Snapchat and Instagram Stories offer, Google claims its underlying technology is unique and will enable some of the best selfie filters on smartphones.
In a blog post, Google said:
To make all this possible, we employ machine learning (ML) to infer approximate 3D surface geometry to enable visual effects, requiring only a single camera input without the need for a dedicated depth sensor. This approach makes AR effects available in realtime, using TensorFlow Lite for mobile CPU inference or its new mobile GPU functionality where available. This technology is the same as what powers YouTube Stories’ new creator effects, and is also available to the broader developer community via the latest ARCore SDK release and the ML Kit Face Contour Detection API.
We are excited to share this new technology with creators, users and developers alike, who can use this new technology immediately by downloading the latest ARCore SDK. In the future we plan to broaden this technology to more Google products.
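The core idea in the quote, inferring approximate 3D face geometry from a single camera frame and using it to place visual effects, can be sketched in a few lines. The snippet below is an illustrative toy, not Google's implementation: it assumes a model has already produced camera-space 3D mesh vertices (the sample points and intrinsics here are made up) and shows how such points would be projected back onto the image with a pinhole camera model so an effect can be drawn at the right pixels.

```python
import numpy as np

def project_to_image(vertices_3d, focal_px, image_size):
    """Project Nx3 camera-space points (z > 0) to pixel coordinates
    using a simple pinhole model with the principal point at the
    image center."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    x, y, z = vertices_3d[:, 0], vertices_3d[:, 1], vertices_3d[:, 2]
    u = focal_px * x / z + cx
    v = focal_px * y / z + cy
    return np.stack([u, v], axis=1)

# Toy "inferred mesh": three points 50 cm in front of the camera,
# standing in for the output of a face-geometry model.
mesh = np.array([
    [0.00, 0.00, 0.5],   # nose tip, on the optical axis
    [-0.03, 0.02, 0.5],  # left cheek
    [0.03, 0.02, 0.5],   # right cheek
])

pixels = project_to_image(mesh, focal_px=500.0, image_size=(640, 480))
print(pixels)  # pixel locations where an effect would be anchored
```

In a real pipeline the mesh would come from the model on every frame, and an AR renderer would texture or decorate the mesh rather than just mark points, but the projection step is the link between the inferred 3D geometry and the 2D video frame.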