windows – Responding to voice and text to place described images on screen, and allowing the user to resize and drag them to form pictures (which may convey symbolism)


I am looking for an application that presents a user interface where you can do the following:

  • You can speak words (names of objects). When you do so, an object corresponding to the spoken word (i.e., the object described by that word) is placed on the screen. In a more advanced version of the app, words can also carry adjectives. It is not important that a specific object is placed, as long as an object with the given name appears on the screen. If the object is not as desired, the user can say “variant” or press a Variant button in the UI. The object names are shown on the screen, and mute users can type object names into a name box below the interface. The user interface works in TalkBack mode.

  • The user interface lets the user resize, drag, and position objects after they have been placed on screen by voice or typing. To support TalkBack, each object can emit a distinctive sound that identifies it; the sound plays either when the dragged finger collides with the object, or continuously while the finger hovers over it. This behavior can be configured through the app settings.
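The two bullets above amount to a small core model: a catalogue mapping words to candidate images, a canvas of placed objects, a “variant” command that cycles the last object's image, and a hit test that tells TalkBack mode which object's sound to play under the finger. Here is a minimal, platform-agnostic sketch of that logic; the catalogue contents, class names, and filenames are all hypothetical placeholders, not an actual implementation:

```python
import dataclasses
from typing import Optional

# Hypothetical image catalogue: each word maps to several candidate
# images, so saying "variant" can cycle to the next one.
CATALOGUE = {
    "dog": ["dog_1.png", "dog_2.png"],
    "house": ["house_1.png", "house_2.png", "house_3.png"],
}

@dataclasses.dataclass
class PlacedObject:
    word: str           # the spoken or typed name
    variant: int        # index into CATALOGUE[word]
    x: float = 0.0      # position on the canvas
    y: float = 0.0
    scale: float = 1.0  # resize factor from drag/pinch gestures

    @property
    def image(self) -> str:
        return CATALOGUE[self.word][self.variant]

class Canvas:
    def __init__(self) -> None:
        self.objects: list = []

    def place_word(self, word: str) -> Optional[PlacedObject]:
        """Handle a recognized word (from speech or the typing box)."""
        if word not in CATALOGUE:
            return None  # unknown word: nothing is placed
        obj = PlacedObject(word=word, variant=0)
        self.objects.append(obj)
        return obj

    def variant(self) -> None:
        """The "variant" command: swap the last-placed object's image
        for the next candidate in the catalogue."""
        if not self.objects:
            return
        obj = self.objects[-1]
        obj.variant = (obj.variant + 1) % len(CATALOGUE[obj.word])

def object_under_finger(canvas: Canvas, x: float, y: float,
                        radius: float = 50.0) -> Optional[PlacedObject]:
    """Return the topmost object within `radius` of the finger, so
    TalkBack mode knows which object's identifying sound to play."""
    for obj in reversed(canvas.objects):
        if (obj.x - x) ** 2 + (obj.y - y) ** 2 <= radius ** 2:
            return obj
    return None

canvas = Canvas()
canvas.place_word("dog")
canvas.variant()  # not the dog you wanted -> next candidate
print(canvas.objects[-1].image)  # dog_2.png
```

On a real platform, `place_word` would be fed by the speech recognizer or the text box, and `object_under_finger` would be called from the touch/hover event handler before playing the object's sound.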

With this interface, children who already know how to speak and are learning to write, for example six-year-olds, can speak and realize that their voice produces results. They can inspect what they first imagine, then see, then make.

In this manner, they can construct a mock-up of a setting, such as their own surroundings.

Through this interface, they can arrange objects logically to convey ideas.

Mute users could wear IoT wearables so that when they gesticulate in the selected sign language, the images corresponding to the nouns (and nouns with adjectives) are placed on screen in reaction to the signs.

Paraplegics and amputees could use Eva Facial Mouse to select the images from a list, or use Elon Musk’s brain–computer interface technology to place images on the screen according to their imagination: the system could detect the imagined object, or infer a word, and place the corresponding object on the screen.

Thank you for any implementations.


