Google is known for introducing a lot of interesting features to its apps on both Android and iOS, and while many might argue that most of these features are rudimentary at best, they do tend to make everyday tasks a little easier.
With that said, starting today, users will be able to "Hum to Search" for a song that has been stuck in their head using the Google app on both Android and iOS. Simply open the Google app, tap the microphone icon and say "What's this song?" or tap the "Search a song" button.
Google is Making Our Lives Easier by Helping Us Find the Tune Stuck in Our Heads
To give Google Search enough to work with, you need to hum for at least 10 to 15 seconds before it can do its magic and show you the results. The results include the song and artist name, as well as the match percentage. That is not all, though: if you want to make your life even easier, you can use the Hum to Search feature through Google Assistant as well, by simply saying, "Hey Google, what's this song?"
According to Google, the feature is currently available in English on iOS and in more than 20 languages on Android, with more languages to come in the future. Google has also released a short teaser showing how the feature works, which you can check out below.
However, what might look like a very simple feature actually has some very clever workings in the background. Google uses machine learning to make it happen, and the company took to its blog to explain how; it's pretty interesting.
When you hum a melody into Search, our machine learning models transform the audio into a number-based sequence representing the song’s melody. Our models are trained to identify songs based on a variety of sources, including humans singing, whistling or humming, as well as studio recordings. The algorithms also take away all the other details, like accompanying instruments and the voice's timbre and tone. What we’re left with is the song’s number-based sequence, or the fingerprint.
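To get an intuition for what a "number-based sequence" that ignores key, timbre, and instruments might look like, here is a toy sketch in Python. It maps per-frame pitch estimates to semitone intervals relative to the first note, so the same tune hummed in a different key produces the same sequence. This is purely illustrative: Google's actual encoder is a learned neural model, not a hand-written formula, and the function and frame values below are our own invention.

```python
import math

def melody_fingerprint(frame_freqs_hz):
    """Toy stand-in for a melody encoder: map a sequence of per-frame
    pitch estimates (in Hz) to a number-based melody sequence.
    Using semitone intervals relative to the first note makes the
    fingerprint invariant to the key the user hums in -- one simple way
    that "other details" like absolute pitch can be discarded.
    (Hypothetical sketch; Google's real models are neural networks.)"""
    semitones = [12 * math.log2(f / frame_freqs_hz[0]) for f in frame_freqs_hz]
    return [round(s) for s in semitones]

# The same tune hummed an octave higher yields the same fingerprint:
low  = [220.0, 246.9, 277.2, 220.0]   # A3 B3 C#4 A3
high = [440.0, 493.9, 554.4, 440.0]   # A4 B4 C#5 A4
assert melody_fingerprint(low) == melody_fingerprint(high)
print(melody_fingerprint(low))  # [0, 2, 4, 0]
```

The interval trick is why it does not matter whether you hum low or high: only the shape of the melody survives into the sequence.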
We compare these sequences to thousands of songs from around the world and identify potential matches in real time. For example, if you listen to Tones and I’s “Dance Monkey,” you’ll recognize the song whether it was sung, whistled, or hummed. Similarly, our machine learning models recognize the melody of the studio-recorded version of the song, which we can use to match it with a person’s hummed audio.
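The matching step described above can also be sketched in a few lines: compare the hummed sequence against stored song fingerprints and rank the candidates, producing a best guess plus a match percentage like the one Search displays. Again, this is a hedged illustration under our own assumptions; the song titles, fingerprints, and the use of `difflib` are ours, while Google compares learned embeddings against thousands of songs in real time.

```python
from difflib import SequenceMatcher

def best_match(hummed, database):
    """Toy version of the matching step: score the hummed melody
    sequence against each stored song fingerprint and return the
    top candidate with a match percentage.
    (Hypothetical sketch -- not Google's actual retrieval system.)"""
    scored = [(SequenceMatcher(None, hummed, fingerprint).ratio(), title)
              for title, fingerprint in database.items()]
    score, title = max(scored)
    return title, round(score * 100)

# Made-up fingerprints for illustration only:
database = {
    "Dance Monkey": [0, 2, 4, 5, 4, 2, 0],
    "Other Song":   [0, -1, -3, -5, -3],
}
hummed = [0, 2, 4, 5, 4, 2]  # slightly imperfect humming
title, pct = best_match(hummed, database)
print(title, pct)  # Dance Monkey 92
```

Because the comparison is a similarity score rather than an exact lookup, an imperfect hum can still land on the right song, just with a lower match percentage.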
Needless to say, it is a pretty exciting feature, and we cannot wait to try it out. Machine learning is an impressive technology, and watching it get even better at everyday tasks like this is something we are looking forward to.