The Wayback Machine - https://web.archive.org/web/20200702054147/https://github.com/microsoft/MixedRealityToolkit-Unity/issues/7617

How to use localized speech commands #7617

Open
amonnot-ah opened this issue Apr 6, 2020 · 2 comments


@amonnot-ah amonnot-ah commented Apr 6, 2020

Describe the issue

The documentation doesn't make clear how to use localized speech commands.

Feature area

I've seen that under Input Profile > Speech > Speech Commands there is a LocalizationKey field.
By looking at the SpeechCommands struct, I figured out that if my build targets a UWP platform, it will load the string from a resource file instead of the keyword given in the Editor.
But how can I create such a resource file? Is it possible within Unity, or should I open Visual Studio after every build to create one? If so, how do I do it?

Existing doc link

https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/Input/Speech.html
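
For reference, the kind of UWP resource lookup the SpeechCommands struct appears to perform can be sketched as follows. This is a hedged illustration, not MRTK's actual code: `LocalizedKeywordLoader` and the key name `"SpeechCommands_Select"` are hypothetical, and the exact lookup MRTK does may differ.

```csharp
// Sketch of a UWP-only localized keyword lookup. Assumes a Resources.resw
// file is packaged with the app, containing an entry whose Name matches the
// LocalizationKey set in the Editor (e.g. a hypothetical "SpeechCommands_Select").
#if WINDOWS_UWP
using Windows.ApplicationModel.Resources;

public static class LocalizedKeywordLoader
{
    public static string GetKeyword(string localizationKey, string fallbackKeyword)
    {
        try
        {
            // ResourceLoader resolves the string for the device's current
            // language from the .resw resource files packaged with the app.
            ResourceLoader loader = ResourceLoader.GetForViewIndependentUse();
            string localized = loader.GetString(localizationKey);
            return string.IsNullOrEmpty(localized) ? fallbackKeyword : localized;
        }
        catch
        {
            // No resource file, or the key was not found: fall back to the
            // keyword entered in the Editor.
            return fallbackKeyword;
        }
    }
}
#endif
```

As far as I can tell, the `.resw` file itself is added to the generated UWP Visual Studio solution (Add > New Item > Resources File) rather than created inside Unity, so it may need to be re-added or preserved when the solution is regenerated.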


@wiwei wiwei commented Apr 14, 2020

I could have sworn I wrote up a similar response on another issue, but I can't find it now. In any case, having speech commands in a different language is definitely not easy to do out of the box.

Note that MRTK uses Unity's KeywordRecognizer (and DictationRecognizer), which under the covers uses the UWP speech APIs documented here:

https://docs.microsoft.com/en-us/windows/uwp/design/input/specify-the-speech-recognizer-language

A quick Google search for "KeywordRecognizer language" confirms that folks have had trouble getting this to work in a non-English language (or, in general, a language that doesn't match the language of their OS).

I suspect it's possible to make this work: you have to go around the Unity APIs and directly invoke the UWP APIs (see https://docs.microsoft.com/en-us/windows/uwp/design/input/specify-the-speech-recognizer-language, linked above) before MRTK starts its speech system (or, in general, before the first instance of Unity's speech machinery kicks off).
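
Based on the UWP doc linked above, directly constructing a recognizer with an explicit language might look roughly like the sketch below. This is an assumption-laden illustration, not a verified MRTK integration: the class name, the `"fr-FR"` tag, and the example keywords are all placeholders, and the language must actually be installed on the device.

```csharp
// Sketch: create a UWP SpeechRecognizer with an explicit language, per
// https://docs.microsoft.com/en-us/windows/uwp/design/input/specify-the-speech-recognizer-language
// This would need to run before any Unity/MRTK speech component has started.
#if WINDOWS_UWP
using System.Linq;
using System.Threading.Tasks;
using Windows.Globalization;
using Windows.Media.SpeechRecognition;

public static class LocalizedRecognizerSketch
{
    public static async Task<SpeechRecognizer> CreateAsync()
    {
        // "fr-FR" is just an example language tag.
        var language = new Language("fr-FR");

        // Only languages the device actually supports can be used; compare
        // by tag, since Language instances don't compare by value.
        bool supported = SpeechRecognizer.SupportedTopicLanguages
            .Any(l => l.LanguageTag == language.LanguageTag);
        if (!supported)
        {
            // Fall back to the system default recognizer language.
            return new SpeechRecognizer();
        }

        var recognizer = new SpeechRecognizer(language);

        // Hypothetical localized keywords, analogous to MRTK speech commands.
        recognizer.Constraints.Add(
            new SpeechRecognitionListConstraint(new[] { "sélectionner", "menu" }));
        await recognizer.CompileConstraintsAsync();
        return recognizer;
    }
}
#endif
```

The open question would then be how to hand results from such a recognizer back to MRTK's input system, since MRTK's own speech provider would be bypassed entirely.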


@wiwei wiwei commented Apr 14, 2020

If you end up having some luck here, we would definitely love to hear about it; we can keep this issue open to track future investigations and results.
