How to use localized speech commands #7617
Comments
I could have sworn that I wrote up a similar response on some other issue but now can't find it. In any case, having speech commands in a different language is definitely something that isn't easy to do out of the box. Note that MRTK uses Unity's KeywordRecognizer (and DictationRecognizer), which under the covers uses the UWP speech APIs documented here: https://docs.microsoft.com/en-us/windows/uwp/design/input/specify-the-speech-recognizer-language A quick Google search for "KeywordRecognizer language" shows that folks have had trouble getting this to work in non-English languages (or, in general, in a language not matching the language of their OS). I suspect it's possible to make this work; you just have to go around the Unity APIs and directly invoke the UWP APIs (see the link above) before MRTK starts its speech system (or, in general, before the first instance of Unity's speech machinery kicks off).
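For illustration, something like the following might work - a minimal, untested sketch that goes straight at the UWP speech APIs from the docs linked above; the language tag, keyword list, and class name are all placeholders:

```csharp
#if ENABLE_WINMD_SUPPORT
using Windows.Globalization;
using Windows.Media.SpeechRecognition;

// Minimal sketch (untested): build a UWP SpeechRecognizer pinned to an
// explicit language instead of the OS display language, before Unity's
// KeywordRecognizer / the MRTK speech system spins up. "fr-FR" and the
// keywords below are placeholders.
public static class LocalizedRecognizerSketch
{
    public static async void StartListening()
    {
        var recognizer = new SpeechRecognizer(new Language("fr-FR"));

        // Constrain recognition to a fixed keyword list, roughly what
        // KeywordRecognizer does under the covers.
        recognizer.Constraints.Add(
            new SpeechRecognitionListConstraint(new[] { "sélectionner", "fermer" }));
        await recognizer.CompileConstraintsAsync();

        SpeechRecognitionResult result = await recognizer.RecognizeAsync();
        UnityEngine.Debug.Log($"Heard: {result.Text}");
    }
}
#endif
```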
If you end up having some luck here, we would definitely love to hear about it - we can keep this issue open to track future investigations + results.


Describe the issue
It's not clear in the documentation how to use Localized Speech Commands.
Feature area
I've seen that under Input Profile > Speech > Speech Commands there is a LocalizationKey field.
By looking at the SpeechCommands struct, I figured out that if my build targets a UWP platform, it will load the string from a resource file instead of using the keyword given in the Editor (see my sketch below).
But how can I create such a resource file? Is it possible within Unity, or should I open Visual Studio after every build to create one? If so, how can I do it?
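Here is my rough understanding of what that lookup involves on the UWP side - a minimal sketch assuming the standard ResourceLoader API and a hypothetical key name; I haven't verified this against the actual MRTK code:

```csharp
#if WINDOWS_UWP
using Windows.ApplicationModel.Resources;

// Minimal sketch (untested): on UWP, a localized string is looked up in
// the app's compiled .resw resources by key. "SelectCommand" is a
// hypothetical key; a matching entry would live in a file such as
// Strings/fr-FR/Resources.resw in the generated Visual Studio solution:
//
//   <data name="SelectCommand"><value>sélectionner</value></data>
public static class KeywordResourceSketch
{
    public static string GetLocalizedKeyword(string localizationKey)
    {
        ResourceLoader loader = ResourceLoader.GetForViewIndependentUse();
        return loader.GetString(localizationKey);
    }
}
#endif
```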
Existing doc link
https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/Input/Speech.html