Most of us know Siri, the virtual personal assistant built into Apple's iDevices that answers queries in a distinctly robotic voice. Apple is now using artificial intelligence to upgrade that voice. By the final release of iOS 11, Siri will sound noticeably different from previous versions of the software: more human, and the difference is easy to hear once you compare them.
Apple has faced several challenges in improving Siri's voice: recording many hours of high-quality audio, capturing the stress and intonation patterns of spoken language, and doing all of this without putting too much load on the phone. To tackle them, Apple has turned to deep learning, a form of machine learning. A trained model helps the text-to-speech system decide which recorded audio segments to select and stitch together so that the result sounds more human-like.
Apple has invested heavily in this effort. It hired a new female voice actor and recorded over 20 hours of spoken US English, which was then sliced into between 1 and 2 million audio segments. These segments were fed into the deep learning system to generate higher-quality speech. Apple has published a paper detailing all of this and what else it used to make Siri sound more natural.
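To make the idea concrete, here is a minimal sketch of concatenative unit selection, the general technique behind this kind of speech synthesis: for each slot in the output, pick one recorded segment so that the whole sequence minimizes a "target cost" (how well a segment matches the requested prosody) plus a "join cost" (how smoothly adjacent segments concatenate). Everything below is illustrative: the unit fields, the toy pitch-based costs, and the function names are assumptions, not Apple's actual model, which predicts these costs with a trained deep network.

```python
# Toy concatenative unit selection. The costs here are hand-written
# stand-ins; in Apple's system a deep learning model predicts them.

def target_cost(unit, spec):
    # How well a unit matches the requested prosody (toy: pitch only).
    return abs(unit["pitch"] - spec["pitch"])

def join_cost(prev_unit, unit):
    # How smoothly two units concatenate (toy: pitch continuity
    # at the concatenation point).
    return abs(prev_unit["pitch_end"] - unit["pitch_start"])

def select_units(candidates_per_slot, specs):
    """Dynamic-programming search for the unit sequence with the
    lowest total target + join cost."""
    best = [(target_cost(u, specs[0]), [u]) for u in candidates_per_slot[0]]
    for i in range(1, len(candidates_per_slot)):
        step = []
        for u in candidates_per_slot[i]:
            # Best predecessor path for this candidate unit.
            cost, path = min(
                ((c + join_cost(p[-1], u), p) for c, p in best),
                key=lambda cp: cp[0],
            )
            step.append((cost + target_cost(u, specs[i]), path + [u]))
        best = step
    return min(best, key=lambda cp: cp[0])

# Hypothetical units: A matches the first slot's pitch perfectly but
# joins badly to C; B matches less well but joins smoothly.
A = {"name": "A", "pitch": 100, "pitch_start": 100, "pitch_end": 300}
B = {"name": "B", "pitch": 150, "pitch_start": 150, "pitch_end": 200}
C = {"name": "C", "pitch": 200, "pitch_start": 200, "pitch_end": 200}

total, path = select_units([[A, B], [C]], [{"pitch": 100}, {"pitch": 200}])
```

Note that the search picks B over A even though A matches the requested pitch exactly, because A's bad join to C would make the concatenation audible; balancing these two costs across millions of candidate segments is what makes the synthesized voice sound smooth.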
You can hear the difference between the three iOS versions, i.e. iOS 9, 10, and 11, here.