13   Variables

This chapter describes variables defined by VoiceXML. See:

 •  Variable Summary for an overview of the variables, grouped by scope
 •  Variable Index for an alphabetical list of all variables

In cases where the BeVocal VoiceXML interpreter deviates from the VoiceXML 2.0 Specification, the difference is clearly marked in one of the following ways:

 •  Not Implemented--Functionality not currently available.
 •  Extension--Added functionality.
 •  Deprecated--Non-standard or superseded feature that was supported by an earlier version but has been replaced by a new feature.

Variable Summary

The following table organizes variables according to the scope in which each variable's value is available:

 •  Session variables are available to all applications that are executed in a particular session with the VoiceXML interpreter.
 •  Application variables are available throughout a particular application.
 •  Event-related variables are available in event handlers only.

Scope            Variable

Session          session.bevocal.timeincall (Extension)
                 session.bevocal.version (Extension)
                 session.iidigits (Not Implemented)
                 session.telephone.ani
                 session.telephone.dnis

Application      application.lastaudio$ (Extension)
                 application.lastresult$ (New in VoiceXML 2.0)

Event handler    _event (New in VoiceXML 2.0)
                 _message (New in VoiceXML 2.0)

Variable Index

The following table lists the variables in alphabetical order.

Variable                                          Scope

_event (New in VoiceXML 2.0)                      Event handler
_message (New in VoiceXML 2.0)                    Event handler
application.lastaudio$ (Extension)                Application
application.lastresult$ (New in VoiceXML 2.0)     Application
session.bevocal.timeincall (Extension)            Session
session.bevocal.version (Extension)               Session
session.iidigits (Not Implemented)                Session
session.telephone.ani (VoiceXML 1.0 only)         Session
session.telephone.dnis (VoiceXML 1.0 only)        Session

Variable Descriptions

This section contains variable descriptions in alphabetical order.

_event

New in VoiceXML 2.0. Within the anonymous scope of an event handler, the JavaScript variable _event is set to the name of the event that was thrown. For example:

 <error>
   <prompt>event is <value expr="_event"/></prompt>
 </error>

If the event is error.badfetch.http.500, this handler will say, "event is error.badfetch.http.500."

_message

New in VoiceXML 2.0. Within the anonymous scope of an event handler, the JavaScript variable _message is set to the message string that provides additional context about the event that was thrown. If no message was supplied when the event was thrown, this variable has the value undefined.
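
For example, a handler might report both the event name and any message that accompanied it (a minimal sketch; the prompt wording is illustrative):

 <catch event="error">
   <prompt>Caught <value expr="_event"/>.</prompt>
   <if cond="_message != undefined">
     <!-- A message was supplied when the event was thrown. -->
     <prompt>Details: <value expr="_message"/>.</prompt>
   </if>
 </catch>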

application.lastaudio$

Extension. If the bevocal.audio.capture property is set to true, the interpreter captures spoken audio for each recognition.

 •  When the user's speech matches a field grammar, a form grammar, a menu choice, or a link grammar, the application.lastaudio$ variable contains an audio capture of the user's speech.
 •  When a no-match event occurs in a field, the application.lastaudio$ variable contains an audio capture of the user's speech.
 •  When a no-input event occurs in a field, the application.lastaudio$ variable is cleared.

You can send the captured audio to a server using a <data> element. Doing so is useful if you need a record of the user's speech for legal reasons.
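
For example, a field might enable audio capture and post the captured utterance to a server (a minimal sketch; the grammar file and server URL are hypothetical, and the exact submission attributes supported by the <data> element should be confirmed against its documentation):

 <property name="bevocal.audio.capture" value="true"/>
 <field name="city">
   <grammar src="city-grammar.grxml"/>
   <filled>
     <!-- Send the captured audio of the caller's speech to a logging server. -->
     <data src="http://www.example.com/log-audio"
           namelist="application.lastaudio$"
           method="post"/>
   </filled>
 </field>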

application.lastresult$

New in VoiceXML 2.0. When the user provides input during the execution of a <field> element or the <initial> element of a mixed-initiative form, the interpreter invokes the speech-recognition engine on the user's response. The most likely recognized utterance is used to set the relevant input variables. This utterance is chosen on the basis of speech-recognition engine confidence levels as well as grammar weighting and scoping rules. If this utterance matches multiple rules in an ambiguous grammar, the input variables are set according to an arbitrary one of those rules.

Additional information from the speech-recognition engine is available in the read-only variable application.lastresult$. This variable has a dual interpretation:

 •  It acts like a normal object whose properties describe the most likely recognized result--that is, the one that was used to set the input variables.
 •  It also acts like an array of objects, each describing one of the likely recognition results. This array always has at least one element. If multiple recognition is enabled, it may contain additional elements. See Maximum Array Size for further details.

The application.lastresult$ variable, and each member of the application.lastresult$ array, is a JavaScript object with the following properties:

Property Description

confidence

The recognition confidence level of this result (with 0.0 representing the lowest confidence and 1.0 representing the highest).

utterance

The raw string of words that were recognized, for example "portland oregon".

inputmode

The mode in which user input was provided, one of:

 •  dtmf - DTMF input
 •  voice - spoken input

interpretation

The interpretation of this result. This property contains a JavaScript object whose properties correspond to the slots that can be set by the matched grammar rule.

If the application.lastresult$ array contains more than one element, each element has a different combination of utterance and interpretation--that is, different elements differ in the utterance, the interpretation, or both. Each element corresponds to one interpretation of one likely utterance. The same utterance may have different interpretations, and two or more different utterances may have a common interpretation.

The elements of the application.lastresult$ array for different possible utterances are sorted in descending order of confidence level; elements for the different interpretations of a given utterance, or for multiple utterances with the same confidence, are in an undefined order.

You can use the expression application.lastresult$.length to get the number of elements in the application.lastresult$ array.

The application.lastresult$ variable holds information about the last recognition that occurred within the application. Before the interpreter enters a waiting state (a recognition, record, or transfer), the variable is set to undefined. When a nomatch event is thrown, application.lastresult$ is set to the nomatch result. When a noinput event is thrown, application.lastresult$ is not reset to undefined. An application can check the variable in the <filled> element of the field or form for which input was received.
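
For example, a <filled> element might echo the raw utterance back to the caller when the confidence is low (a minimal sketch; the grammar file, the 0.5 threshold, and the prompt wording are illustrative):

 <field name="city">
   <grammar src="city-grammar.grxml"/>
   <filled>
     <if cond="application.lastresult$.confidence &lt; 0.5">
       <!-- Low confidence: tell the caller what was understood. -->
       <prompt>I think you said <value expr="application.lastresult$.utterance"/>.</prompt>
     </if>
   </filled>
 </field>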

Maximum Array Size

The maxnbest and bevocal.maxinterpretations properties determine which multiple-recognition features are enabled and the maximum number of elements in the application.lastresult$ array. The following table shows the possible combinations of values for these properties.

maxnbest         bevocal.maxinterpretations   Maximum Array Size

1                Unset, 1, or less than 1     1--Both features are disabled

1                Greater than 1               bevocal.maxinterpretations--Only multiple interpretations is enabled

Greater than 1   Unset or less than 1         maxnbest--Both features are enabled

Greater than 1   1                            maxnbest--Only N-best recognition is enabled

Greater than 1   Greater than 1               maxnbest * bevocal.maxinterpretations--Both features are enabled

If both features are disabled, the application.lastresult$ array contains a single element with index 0 and application.lastresult$[0] is identical to application.lastresult$. If one or both of the features are enabled, the array may contain multiple elements, up to the maximum specified in the table. If N results were found, the array contains N elements with indexes from 0 to N-1.
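
For example, the following property settings would enable both features, allowing up to 3 * 2 = 6 elements in the application.lastresult$ array (a minimal sketch; the values are illustrative):

 <!-- Keep up to 3 candidate utterances, with up to 2 interpretations each. -->
 <property name="maxnbest" value="3"/>
 <property name="bevocal.maxinterpretations" value="2"/>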

Variable Structure

The entire structure of the application.lastresult$ variable is as follows:

 application.lastresult$ {
   confidence
   utterance
   inputmode
   interpretation {
     slotname1
     slotname2
     ...
   }
 }
 application.lastresult$[0] {
   confidence
   utterance
   inputmode
   interpretation {...}
 }
 ...
 application.lastresult$[n] { ... }

Recognition Results

You typically examine the application.lastresult$ object if multiple recognition is disabled, and the application.lastresult$ array if multiple recognition is enabled.

Whether or not multiple recognition is enabled, application.lastresult$.utterance is the most likely recognized utterance and application.lastresult$.interpretation is the chosen interpretation of that utterance--an object whose properties are the slots that were filled in, corresponding to the input variables that were set.

Tip:

 •  Remember not to access properties of a particular element of the application.lastresult$ array until you have verified that the element exists. If you try to access application.lastresult$[i].utterance when i is greater than or equal to the number of results, an error.semantic event is thrown. Guard such accesses with a length check, as in the sketch below.
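
For example, check the array length before reading a second result (a minimal sketch; the prompt wording is illustrative):

 <if cond="application.lastresult$.length &gt; 1">
   <!-- A second result exists, so it is safe to read element 1. -->
   <prompt>
     The next best guess was <value expr="application.lastresult$[1].utterance"/>.
   </prompt>
 </if>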

Multiple Recognized Utterances

If the speech-recognition engine finds multiple possible utterances, the application.lastresult$ array contains at least one element for each utterance. The elements for different utterances are ordered by speech-recognition engine confidence levels alone. When multiple utterances are recognized, the value of application.lastresult$ is always identical to application.lastresult$[0].

For example, suppose the user muttered something that sounded like "Austin" or "Boston" as the initial input to a form with an unambiguous grammar. The application.lastresult$ variable might be set as follows.

Recognition Result: Most likely utterance--chosen (and only) interpretation of the utterance

    application.lastresult$.confidence               .38
    application.lastresult$.utterance                "austin"
    application.lastresult$.inputmode                "voice"
    application.lastresult$.interpretation.city      "Austin"
    application.lastresult$.interpretation.state     "TX"

Recognition Result: Utterance with the highest confidence level--first (and only) interpretation of the utterance

    application.lastresult$[0].confidence            .38
    application.lastresult$[0].utterance             "austin"
    application.lastresult$[0].inputmode             "voice"
    application.lastresult$[0].interpretation.city   "Austin"
    application.lastresult$[0].interpretation.state  "TX"

Recognition Result: Utterance with the second highest confidence level--first (and only) interpretation of the utterance

    application.lastresult$[1].confidence            .37
    application.lastresult$[1].utterance             "boston"
    application.lastresult$[1].inputmode             "voice"
    application.lastresult$[1].interpretation.city   "Boston"
    application.lastresult$[1].interpretation.state  "MA"

With the application.lastresult$ variable set as shown in the preceding table, the city input variable would be set to Austin and the state input variable would be set to TX. The application could either proceed with those settings, or examine the application.lastresult$ array to determine that another interpretation of the input is possible.
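
For example, the application might re-prompt with the top two recognized cities and then clear the field (a minimal sketch; the prompt wording is illustrative, and the code assumes at least two results, a city slot, and a field named city):

 <filled>
   <if cond="application.lastresult$.length &gt; 1">
     <prompt>
       Did you say
       <value expr="application.lastresult$[0].interpretation.city"/> or
       <value expr="application.lastresult$[1].interpretation.city"/>?
     </prompt>
     <!-- Clear the field so the form re-collects the city. -->
     <clear namelist="city"/>
   </if>
 </filled>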

Multiple Interpretations

If a particular utterance matches a single grammar rule, the application.lastresult$ array contains a single element for that utterance. The interpretation property of this element gives the slot values set by the matched grammar rule.

If the utterance matches multiple rules in an ambiguous grammar, the application.lastresult$ array contains multiple elements for that utterance; the interpretation property of each of these elements gives the slot values set by one of those grammar rules. The order of these elements within the array is undefined.

For example, suppose the user clearly said "Portland." The chosen interpretation for this recognized result might be "Portland, Oregon," and the application.lastresult$ variable might be set as follows.

Recognition Result: Most likely (and only) utterance--chosen interpretation of the utterance

    application.lastresult$.confidence               .8
    application.lastresult$.utterance                "portland"
    application.lastresult$.inputmode                "voice"
    application.lastresult$.interpretation.city      "Portland"
    application.lastresult$.interpretation.state     "OR"

Recognition Result: Most likely (and only) utterance--first interpretation of the utterance

    application.lastresult$[0].confidence            .8
    application.lastresult$[0].utterance             "portland"
    application.lastresult$[0].inputmode             "voice"
    application.lastresult$[0].interpretation.city   "Portland"
    application.lastresult$[0].interpretation.state  "ME"

Recognition Result: Most likely (and only) utterance--second interpretation of the utterance

    application.lastresult$[1].confidence            .8
    application.lastresult$[1].utterance             "portland"
    application.lastresult$[1].inputmode             "voice"
    application.lastresult$[1].interpretation.city   "Portland"
    application.lastresult$[1].interpretation.state  "OR"

With the application.lastresult$ variable set as shown in the preceding table, the city input variable would be set to Portland and the state input variable would be set to OR. The application could either proceed with those settings, or examine the application.lastresult$ variable to determine that another interpretation of the input is possible.
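
For example, when two elements share the same utterance but disagree on the state slot, the application might ask the caller to choose (a minimal sketch; the prompt wording is illustrative, and the code assumes at least two results with city and state slots):

 <if cond="application.lastresult$.length &gt; 1 &amp;&amp; application.lastresult$[0].interpretation.state != application.lastresult$[1].interpretation.state">
   <!-- The same utterance was interpreted as two different cities. -->
   <prompt>
     Did you mean
     <value expr="application.lastresult$[0].interpretation.city"/>,
     <value expr="application.lastresult$[0].interpretation.state"/>, or
     <value expr="application.lastresult$[1].interpretation.city"/>,
     <value expr="application.lastresult$[1].interpretation.state"/>?
   </prompt>
 </if>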

Multiple Utterances and Interpretations

If the speech-recognition engine finds multiple possible utterances that match an ambiguous grammar, one or more of the utterances may have multiple interpretations. For example, suppose the user muttered something that sounded like "Austin" or "Boston." If the speech-recognition engine found two interpretations for the utterance "Austin" and one for "Boston," the application.lastresult$ variable might be set as follows.

Recognition Result: Most likely utterance--chosen (and only) interpretation of the utterance

    application.lastresult$.confidence               .37
    application.lastresult$.utterance                "boston"
    application.lastresult$.inputmode                "voice"
    application.lastresult$.interpretation.city      "Boston"
    application.lastresult$.interpretation.state     "MA"

Recognition Result: Utterance with the highest confidence level--first (and only) interpretation of the utterance

    application.lastresult$[0].confidence            .37
    application.lastresult$[0].utterance             "boston"
    application.lastresult$[0].inputmode             "voice"
    application.lastresult$[0].interpretation.city   "Boston"
    application.lastresult$[0].interpretation.state  "MA"

Recognition Result: Utterance with the second highest confidence level--first interpretation of the utterance

    application.lastresult$[1].confidence            .36
    application.lastresult$[1].utterance             "austin"
    application.lastresult$[1].inputmode             "voice"
    application.lastresult$[1].interpretation.city   "Austin"
    application.lastresult$[1].interpretation.state  "TX"

Recognition Result: Utterance with the second highest confidence level--second interpretation of the utterance

    application.lastresult$[2].confidence            .36
    application.lastresult$[2].utterance             "austin"
    application.lastresult$[2].inputmode             "voice"
    application.lastresult$[2].interpretation.city   "Austin"
    application.lastresult$[2].interpretation.state  "CA"

With the application.lastresult$ variable set as shown in the preceding table, the city input variable would be set to Boston and the state input variable would be set to MA. The application could proceed with those settings, or examine the application.lastresult$ variable to determine that other interpretations of the input are possible.

session.bevocal.timeincall

Extension. The number of milliseconds since the beginning of this call.
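
For example, a prompt could report the elapsed time in seconds (a minimal sketch; the wording is illustrative):

 <prompt>
   You have been on this call for
   <!-- Convert milliseconds to whole seconds. -->
   <value expr="Math.round(session.bevocal.timeincall / 1000)"/>
   seconds.
 </prompt>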

session.bevocal.version

Extension. The version number of the VoiceXML interpreter (for example, 1.2.3).
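
For example, the version can be written to the interpreter log (a minimal sketch):

 <log>Interpreter version: <value expr="session.bevocal.version"/></log>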

session.iidigits

Not Implemented. Information Indicator Digits

The session.iidigits variable is set to information about the caller's location (pay phone, and so on), when available. A complete list of values is available in the Local Exchange Routing Guide published by Telcordia.

session.telephone.ani

Automatic Number Identification

The session.telephone.ani variable is set to the caller's telephone number, when available.
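
For example, a dialog might read back the caller's number when ANI is available (a minimal sketch; it assumes the variable is undefined when ANI is not delivered):

 <if cond="session.telephone.ani != undefined">
   <prompt>You are calling from <value expr="session.telephone.ani"/>.</prompt>
 <else/>
   <prompt>Your number was not provided.</prompt>
 </if>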

session.telephone.dnis

Dialed Number Identification Service

The session.telephone.dnis variable is set to the number the caller dialed, when available.
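
For example, a root document might branch on the dialed number to select a service (a minimal sketch; the number and form names are hypothetical):

 <if cond="session.telephone.dnis == '4085551000'">
   <goto next="#salesMenu"/>
 <else/>
   <goto next="#supportMenu"/>
 </if>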

