Conversational technology allows people to get information, conduct transactions, and be entertained simply by speaking to a computer. Built on the integration of speech recognition, natural language processing, and dialog understanding, conversational systems are rapidly becoming an important component of solutions such as virtual assistants, customer care, and the Internet of Things. Every day, these systems save money and time, attract new customers, and improve accessibility for millions of users all over the world.

We can help you with

Natural Language Understanding

Applications of natural language technologies for customer service, virtual agents, language learning and speech therapy.

Speech Technologies

Applications of speech recognition, speech understanding, and text-to-speech.

Industry Analysis

Knowledge of the industry and major company activities involving these technologies.

Standards

W3C Voice, Multimodal, and Cognitive Accessibility standards

Our Expertise

Standards

The World Wide Web Consortium has published many standards that are making speech and language technologies interoperable and easy to use. We are highly experienced with these standards as well as current W3C efforts in the Web of Things and Cognitive Accessibility.


Software for Language Disorders

Conversational Technologies has a long history of applying speech and language technologies to remedial and assistive systems for language disorders, especially aphasia. This work is described in the recent book Speech and Language Technology for Language Disorders, co-authored by Deborah Dahl.


Training

We present workshops on topics including Natural Language Processing, Multimodal Design, and the Open Web Platform. SpeechTEK University 2016 included two of our workshops, "Natural Language Understanding" and "Developing Multimodal Applications for New Platforms". "Natural Language Understanding" covered natural language understanding in call centers and virtual assistants. "Developing Multimodal Applications for New Platforms" covered natural language for the Internet of Things, including an example of spoken language interaction with a fitness tracker.

Natural Language Processing Systems

We are knowledgeable about commercial and research natural language processing tools, such as wit.ai and OpenNLP, as well as the academic literature. Here's a five-minute overview of natural language processing, part of the AVIOS (Applied Voice Input Output Society) video series "A Closer Look at the World of Speech Technology".
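As a small illustration of what such tools do, here is a minimal sketch that tokenizes an utterance with Apache OpenNLP's rule-based SimpleTokenizer. The keyword-based intent check at the end is a purely hypothetical toy example, not taken from any deployed system; a real application would use a trained classifier or a service such as wit.ai.

```java
import opennlp.tools.tokenize.SimpleTokenizer;

import java.util.Arrays;
import java.util.List;

public class TokenizeExample {
    public static void main(String[] args) {
        // Split the utterance into tokens with OpenNLP's rule-based tokenizer.
        String utterance = "Turn on the kitchen lights";
        String[] tokens = SimpleTokenizer.INSTANCE.tokenize(utterance);
        System.out.println(Arrays.toString(tokens));

        // Hypothetical toy intent check for illustration only: real systems
        // use statistical or neural models rather than keyword matching.
        List<String> lower = Arrays.stream(tokens).map(String::toLowerCase).toList();
        if (lower.contains("on") && lower.contains("lights")) {
            System.out.println("intent: turn-on-lights");
        }
    }
}
```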


Demos

Some introductory information about the W3C multimodal standards (EMMA, EmotionML, and the Multimodal Architecture), and some demos of the standards in action. Here's a video of another demo: natural language control of Hue lights with W3C standards.
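To give a flavor of what these standards look like, here is a minimal sketch of reading a hypothetical EMMA 1.0 result for the utterance "turn on the lights" using the standard Java XML APIs. The application payload (the action and device elements) and the attribute values are illustrative assumptions, not taken from the actual demos.

```java
import org.w3c.dom.Document;
import org.w3c.dom.Element;

import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class EmmaSketch {
    // Hypothetical EMMA 1.0 document: one interpretation of "turn on the lights"
    // with an illustrative, application-defined payload (action, device).
    static final String EMMA =
        "<emma:emma version=\"1.0\" xmlns:emma=\"http://www.w3.org/2003/04/emma\">"
      + "<emma:interpretation id=\"int1\" emma:medium=\"acoustic\" emma:mode=\"voice\""
      + " emma:confidence=\"0.92\" emma:tokens=\"turn on the lights\">"
      + "<action>turn-on</action><device>lights</device>"
      + "</emma:interpretation>"
      + "</emma:emma>";

    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Document doc = factory.newDocumentBuilder()
                .parse(new ByteArrayInputStream(EMMA.getBytes(StandardCharsets.UTF_8)));

        String ns = "http://www.w3.org/2003/04/emma";
        Element interp = (Element) doc.getElementsByTagNameNS(ns, "interpretation").item(0);

        // Read the recognizer's confidence and the application-level interpretation.
        System.out.println("confidence: " + interp.getAttributeNS(ns, "confidence"));
        System.out.println("action: " + interp.getElementsByTagName("action").item(0).getTextContent());
        System.out.println("device: " + interp.getElementsByTagName("device").item(0).getTextContent());
    }
}
```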

About Us

Company Background

Conversational Technologies was founded in 2002 with the mission of helping its customers apply speech and language technologies in creative, socially beneficial, and innovative ways.

Our Skills

Conversational Technologies provides the reports, analysis, training, and design services that you need to understand these technologies and the solutions that use them.

Many of our customers are entrepreneurs with groundbreaking ideas for new applications of speech and natural language understanding technologies. We help them define requirements, build proofs of concept, find vendors, learn about tools, and architect the solutions that make it possible for them to realize their visions.

Deborah Dahl
Principal

Deborah Dahl has over 30 years of experience in speech and natural language technologies, including work on research, defense, and commercial systems. Dr. Dahl is a frequent speaker at industry conferences such as the Mobile Voice Conference and SpeechTEK. She is also active in speech, multimodal, and accessibility standards activities in the World Wide Web Consortium, serving as Chair of the Multimodal Interaction Working Group and as past Co-Chair of the Hypertext Coordination Group. She is an editor of several multimodal specifications, including EMMA (the Extensible MultiModal Annotation specification), the Multimodal Architecture and Interfaces specification, and the Discovery and Registration specification. She is a member of the Board of Directors of AVIOS, the Applied Voice Input Output Society.

Dr. Dahl has published many technical papers and book chapters, as well as two books, with a third, on multimodal standards, in progress.

Dr. Dahl received the prestigious "Speech Luminary" award from Speech Technology Magazine in 2012 and 2014.

Contact Us

Bring us your ideas! We love to hear about ideas for new, disruptive applications of speech and language technologies, as well as your ideas for improving traditional applications.

Contact information

+1 610 888-4532
info at conversational-technologies dot com