Jinni speech recognition for search
January 10, 2013
At CES, video content discovery provider Jinni has unveiled a new natural language understanding (NLU) discovery engine to power voice-activated video guides. The Jinni NLU engine leverages the company’s Entertainment Genome to interpret natural human speech and derive its underlying meaning, enabling intuitive interaction between users and their TVs. Users can simply tell their TV what they are in the mood to watch, and Jinni will find the most fitting content from live TV, VoD and any other available video catalogue.
Jinni’s NLU engine requires only that the user tell the TV what they want, for example:
“Is there anything witty and romantic on TV tonight?”
“Show us something like Dexter on VoD.”
“I want to watch something funny about an obnoxious boss.”
“Consumer demand has changed dramatically and today people expect to be able to interact with technology in a very natural, personalised way,” explained Jinni co-founder and CEO, Yosi Glick. “This is the core belief that inspired our semantic approach to video discovery and has allowed us to bring such an advanced NLU solution to market so quickly.”