  1. Chapter

    Multimodal Fusion and Fission within the W3C MMI Architectural Pattern

    The current W3C recommendation for multimodal interfaces provides a standard for message exchange and the overall structure of modality components in multimodal applications. However, the details for multimoda...

    Dirk Schnelle-Walka, Carlos Duarte in Multimodal Interaction with W3C Standards (2017)

  2. Chapter

    SCXML on Resource Constrained Devices

    Ever since their introduction as a visual formalism by Harel et al. in 1987, statecharts have played an important role in formally specifying the behavior of reactive systems. However, various shortcomings in their o...

    Stefan Radomski, Jens Heuschkel in Multimodal Interaction with W3C Standards (2017)

  3. Chapter and Conference Paper

    Open Source German Distant Speech Recognition: Corpus and Acoustic Model

    We present a new freely available corpus for German distant speech recognition and report speaker-independent word error rate (WER) results for two open source speech recognizers trained on this corpus. The co...

    Stephan Radeck-Arneth, Benjamin Milde, Arvid Lange in Text, Speech, and Dialogue (2015)

  4. Chapter and Conference Paper

    Towards an Information State Update Model Approach for Nonverbal Communication

    The Information State Update (ISU) Model describes an approach to dialog management that has predominantly been applied to single-user scenarios with voice as the only modality. Extensions to multimodal interactio...

    Dirk Schnelle-Walka, Stefan Radomski in Computers Helping People with Special Needs (2014)

  5. Chapter and Conference Paper

    Multimodal Fusion and Fission within W3C Standards for Nonverbal Communication with Blind Persons

    Multimodal fusion and multimodal fission are well-known concepts in multimodal systems but have not been well integrated into current architectures to support the collaboration of blind and sighted people. In this ...

    Dirk Schnelle-Walka, Stefan Radomski in Computers Helping People with Special Needs (2014)

  6. Article

    JVoiceXML as a modality component in the W3C multimodal architecture

    Research regarding multimodal interaction has led to a multitude of proposals for suitable software architectures. With all architectures describing multimodal systems differently, interoperability is severely hin...

    Dirk Schnelle-Walka, Stefan Radomski in Journal on Multimodal User Interfaces (2013)