This chapter describes variables defined by VoiceXML. See:
- Variable Summary for an overview of the variables, grouped by scope
- Variable Index for an alphabetical list of all variables
In the cases where the BeVocal VoiceXML interpreter deviates from the VoiceXML 2.0 Specification, the difference is clearly marked below.
The following table organizes variables according to the scope in which each variable's value is available:
- Session variables are available to all applications that are executed in a particular session with the VoiceXML interpreter.
- Application variables are available throughout a particular application.
- Event-related variables are available in event handlers only.
| Scope | Variable |
| Session | session.iidigits |
| Session | session.telephone.ani |
| Session | session.telephone.dnis |
| Application | application.lastresult$ |
| Event-related | _event |
| Event-related | _message |
The following table lists the variables in alphabetical order.
| Variable | Scope |
| _event | Event-related |
| _message | Event-related |
| application.lastresult$ | Application |
| session.iidigits | Session |
| session.telephone.ani | Session |
| session.telephone.dnis | Session |
This section contains variable descriptions in alphabetical order.
New in VoiceXML 2.0. Within the anonymous scope of an event handler, the JavaScript variable _event is set to the name of the event that was thrown. For example:
<error>
  <prompt>event is <value expr="_event"/></prompt>
</error>
If the event is error.badfetch.http.500, this handler will say, "event is error.badfetch.http.500."
New in VoiceXML 2.0. Within the anonymous scope of an event handler, the JavaScript variable _message is set to the message string that provides additional context about the event that was thrown. If no message was supplied when the event was thrown, this variable has the value undefined.
Extension. If the bevocal.audio.capture property is set to true, the interpreter captures spoken audio for each recognition.
You can send the captured audio to a server using a <data> element. Doing so is useful if you need a record of the user's speech for legal reasons.
New in VoiceXML 2.0. When the user provides input during the execution of a <field> element or the <initial> element of a mixed-initiative form, the interpreter invokes the speech-recognition engine on the user's response. The most likely recognized utterance is used to set the relevant input variables. This utterance is chosen on the basis of speech-recognition engine confidence levels as well as grammar weighting and scoping rules. If this utterance matches multiple rules in an ambiguous grammar, the input variables are set according to an arbitrary one of those rules.
Additional information from the speech-recognition engine is available in the read-only variable application.lastresult$. This variable has a dual interpretation:
- It acts like a normal object whose properties describe the most likely recognized result--that is, the one that was used to set the input variables.
- It also acts like an array of objects, each describing one of the likely recognition results. This array always has at least one element. If multiple recognition is enabled, it may contain additional elements. See Maximum Array Size for further details.
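This dual behavior can be sketched in ECMAScript. In the mock below, makeLastResult and the sample data are illustrative only, not interpreter API: it builds an array of result objects and copies the most likely result's properties onto the array itself, so both views work.

```javascript
// Hypothetical mock of application.lastresult$, illustrating its dual nature:
// it is indexable like an array, yet also exposes the properties of the
// most likely result directly.
function makeLastResult(results) {
  // Start from a copy of the array of per-result objects...
  const lastresult = results.slice();
  // ...and copy the most likely result's properties onto the array object.
  Object.assign(lastresult, results[0]);
  return lastresult;
}

const lastresult$ = makeLastResult([
  { confidence: 0.38, utterance: "austin", inputmode: "voice",
    interpretation: { city: "Austin", state: "TX" } },
  { confidence: 0.37, utterance: "boston", inputmode: "voice",
    interpretation: { city: "Boston", state: "MA" } },
]);

console.log(lastresult$.utterance);    // "austin" (object view)
console.log(lastresult$[1].utterance); // "boston" (array view)
console.log(lastresult$.length);       // 2
```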
The application.lastresult$ variable, and each member of the application.lastresult$ array, is a JavaScript object with the following properties:
- confidence -- the speech-recognition engine's confidence in the result, from 0.0 (minimum confidence) to 1.0 (maximum confidence)
- utterance -- the raw string of words that were recognized
- inputmode -- the mode in which the input was provided, either "dtmf" or "voice"
- interpretation -- an object whose properties are the slots that were filled in
If the application.lastresult$ array contains more than one element, each element has a different combination of utterance and interpretation--that is, different elements differ in the utterance, the interpretation, or both. Each element corresponds to one interpretation of one likely utterance. The same utterance may have different interpretations, and two or more different utterances may have a common interpretation.
The elements of the application.lastresult$ array for different possible utterances are sorted by descending order of confidence level; elements for the different interpretations of a given utterance or for multiple utterances with the same confidence are in an undefined order.
You can use the expression application.lastresult$.length to get the number of elements in the application.lastresult$ array.
The application.lastresult$ variable holds information about the last recognition that occurred within the application. Before the interpreter enters a waiting state (a recognition, record, or transfer), the variable is set to undefined. When a nomatch event is thrown, application.lastresult$ is set to the nomatch result. When a noinput event is thrown, application.lastresult$ is not reset to undefined. An application can check the variable in the <filled> element of the field or form for which input was received.
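For example, a <filled> handler might use a script to decide whether to confirm low-confidence input. The sketch below is plain ECMAScript; shouldConfirm and the 0.5 threshold are illustrative assumptions, and the undefined check mirrors the variable's state before any recognition has occurred.

```javascript
// Sketch (not interpreter API): decide whether to ask the caller to confirm,
// based on the confidence of the last recognition result.
function shouldConfirm(lastresult, threshold) {
  // Before any recognition, application.lastresult$ is undefined.
  if (lastresult === undefined) return true;
  return lastresult.confidence < threshold;
}

console.log(shouldConfirm(undefined, 0.5));            // true
console.log(shouldConfirm({ confidence: 0.38 }, 0.5)); // true: low confidence
console.log(shouldConfirm({ confidence: 0.9 }, 0.5));  // false
```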
The maxnbest and bevocal.maxinterpretations properties determine which multiple-recognition features are enabled and the maximum number of elements in the application.lastresult$ array. The following table shows the possible combinations of values for these properties.
| maxnbest | bevocal.maxinterpretations | Maximum Array Size |
| 1 | 1 | 1 |
| N > 1 | 1 | N |
| 1 | M > 1 | M |
| N > 1 | M > 1 | N x M |
If both features are disabled, the application.lastresult$ array contains a single element with index 0 and application.lastresult$[0] is identical to application.lastresult$. If one or both of the features are enabled, the array may contain multiple elements, up to the maximum specified in the table. If N results were found, the array contains N elements with indexes from 0 to N-1.
The entire structure of the application.lastresult$ variable is as follows:
application.lastresult$ {
    confidence
    utterance
    inputmode
    interpretation { slotname1, slotname2, ... }
}
application.lastresult$[0] {
    confidence
    utterance
    inputmode
    interpretation { ... }
}
...
application.lastresult$[n] {
    ...
}
You typically examine the application.lastresult$ object if multiple recognition is disabled. You typically examine the application.lastresult$ array if multiple recognition is enabled.
Whether or not multiple recognition is enabled, application.lastresult$.utterance is the most likely recognized utterance and application.lastresult$.interpretation is the chosen interpretation of that utterance--an object whose properties are the slots that were filled in, corresponding to the input variables that were set.
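As a sketch of this correspondence, the following ECMAScript copies each slot of an assumed interpretation object into a plain fields object, analogous to the interpreter setting the city and state input variables (the data and the fields object are illustrative, not interpreter behavior):

```javascript
// Sketch (assumed data): the interpretation object's properties are the
// filled slots; the interpreter sets the matching input variables from them.
const lastresult = {
  utterance: "austin",
  interpretation: { city: "Austin", state: "TX" },
};

// A hypothetical "assign slots to fields" step, analogous to what the
// interpreter does for <field name="city"> and <field name="state">.
const fields = {};
for (const [slot, value] of Object.entries(lastresult.interpretation)) {
  fields[slot] = value;
}

console.log(fields.city, fields.state); // Austin TX
```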
If the speech-recognition engine finds multiple possible utterances, the application.lastresult$ array contains at least one element for each utterance. The elements for different utterances are ordered by speech-recognition engine confidence levels alone. When multiple utterances are recognized, the value of application.lastresult$ is always identical to application.lastresult$[0].
For example, suppose the user muttered something that sounded like "Austin" or "Boston" as the initial input to a form with an unambiguous grammar. The application.lastresult$ variable might be set as follows.
| Recognition Result | Property | Value |
| Most likely result | application.lastresult$.confidence | .38 |
| | application.lastresult$.utterance | "austin" |
| | application.lastresult$.inputmode | "voice" |
| | application.lastresult$.interpretation.city | "Austin" |
| | application.lastresult$.interpretation.state | "TX" |
| First result | application.lastresult$[0].confidence | .38 |
| | application.lastresult$[0].utterance | "austin" |
| | application.lastresult$[0].inputmode | "voice" |
| | application.lastresult$[0].interpretation.city | "Austin" |
| | application.lastresult$[0].interpretation.state | "TX" |
| Second result | application.lastresult$[1].confidence | .37 |
| | application.lastresult$[1].utterance | "boston" |
| | application.lastresult$[1].inputmode | "voice" |
| | application.lastresult$[1].interpretation.city | "Boston" |
| | application.lastresult$[1].interpretation.state | "MA" |
With the application.lastresult$ variable set as shown in the preceding table, the city input variable would be set to Austin and the state input variable would be set to TX. The application could either proceed with those settings, or examine the application.lastresult$ array to determine that another interpretation of the input is possible.
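A script could perform that examination by filtering the array for elements whose interpretation differs from the chosen one. The sketch below is plain ECMAScript using data modeled on the Austin/Boston example; alternatives is a hypothetical helper, not interpreter API.

```javascript
// Sketch: collect alternative interpretations that differ from the chosen
// one (element 0), e.g. to offer the caller a disambiguation prompt.
function alternatives(lastresult) {
  const chosen = JSON.stringify(lastresult[0].interpretation);
  // Array.prototype.filter.call also handles the array-like lastresult$.
  return Array.prototype.filter.call(
    lastresult,
    (r) => JSON.stringify(r.interpretation) !== chosen
  );
}

// Data modeled on the example above.
const results = [
  { confidence: 0.38, utterance: "austin",
    interpretation: { city: "Austin", state: "TX" } },
  { confidence: 0.37, utterance: "boston",
    interpretation: { city: "Boston", state: "MA" } },
];

console.log(alternatives(results).map((r) => r.interpretation.city)); // ["Boston"]
```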
If a particular utterance matches a single grammar rule, the application.lastresult$ array contains a single element for that utterance. The interpretation property of this element gives the slot values set by the matched grammar rule.
If the utterance matches multiple rules in an ambiguous grammar, the application.lastresult$ array contains multiple elements for that utterance; the interpretation property of each of these elements gives the slot values set by one of those grammar rules. The order of these elements within the array is undefined.
For example, suppose the user clearly said "Portland." The chosen interpretation for this recognized result might be "Portland, Oregon," and the application.lastresult$ variable might be set as follows.
| Recognition Result | Property | Value |
| Most likely result | application.lastresult$.confidence | .8 |
| | application.lastresult$.utterance | "portland" |
| | application.lastresult$.inputmode | "voice" |
| | application.lastresult$.interpretation.city | "Portland" |
| | application.lastresult$.interpretation.state | "OR" |
| First result | application.lastresult$[0].confidence | .8 |
| | application.lastresult$[0].utterance | "portland" |
| | application.lastresult$[0].inputmode | "voice" |
| | application.lastresult$[0].interpretation.city | "Portland" |
| | application.lastresult$[0].interpretation.state | "ME" |
| Second result | application.lastresult$[1].confidence | .8 |
| | application.lastresult$[1].utterance | "portland" |
| | application.lastresult$[1].inputmode | "voice" |
| | application.lastresult$[1].interpretation.city | "Portland" |
| | application.lastresult$[1].interpretation.state | "OR" |
With the application.lastresult$ variable set as shown in the preceding table, the city input variable would be set to Portland and the state input variable would be set to OR. The application could either proceed with those settings, or examine the application.lastresult$ variable to determine that another interpretation of the input is possible.
If the speech-recognition engine finds multiple possible utterances that match an ambiguous grammar, one or more of the utterances may have multiple interpretations. For example, suppose the user muttered something that sounded like "Austin" or "Boston." If the speech-recognition engine found two interpretations for the utterance "Austin" and one for "Boston," the application.lastresult$ variable might be set as follows.
| Recognition Result | Property | Value |
| Most likely result | application.lastresult$.confidence | .37 |
| | application.lastresult$.utterance | "boston" |
| | application.lastresult$.inputmode | "voice" |
| | application.lastresult$.interpretation.city | "Boston" |
| | application.lastresult$.interpretation.state | "MA" |
| First result | application.lastresult$[0].confidence | .37 |
| | application.lastresult$[0].utterance | "boston" |
| | application.lastresult$[0].inputmode | "voice" |
| | application.lastresult$[0].interpretation.city | "Boston" |
| | application.lastresult$[0].interpretation.state | "MA" |
| Second result | application.lastresult$[1].confidence | .36 |
| | application.lastresult$[1].utterance | "austin" |
| | application.lastresult$[1].inputmode | "voice" |
| | application.lastresult$[1].interpretation.city | "Austin" |
| | application.lastresult$[1].interpretation.state | "TX" |
| Third result | application.lastresult$[2].confidence | .36 |
| | application.lastresult$[2].utterance | "austin" |
| | application.lastresult$[2].inputmode | "voice" |
| | application.lastresult$[2].interpretation.city | "Austin" |
| | application.lastresult$[2].interpretation.state | "CA" |
With the application.lastresult$ variable set as shown in the preceding table, the city input variable would be set to Boston and the state input variable would be set to MA. The application could proceed with those settings, or examine the application.lastresult$ variable to determine that other interpretations of the input are possible.
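One way to detect such ambiguity in script is to group the array elements by utterance and look for utterances with more than one interpretation. The sketch below is illustrative ECMAScript using data modeled on the example above; interpretationsByUtterance is a hypothetical helper, not interpreter API.

```javascript
// Sketch: group the elements of application.lastresult$ by utterance to see
// which utterances have more than one interpretation (ambiguous grammar).
function interpretationsByUtterance(results) {
  const groups = {};
  for (const r of results) {
    (groups[r.utterance] = groups[r.utterance] || []).push(r.interpretation);
  }
  return groups;
}

// Data modeled on the Boston/Austin example above.
const groups = interpretationsByUtterance([
  { utterance: "boston", interpretation: { city: "Boston", state: "MA" } },
  { utterance: "austin", interpretation: { city: "Austin", state: "TX" } },
  { utterance: "austin", interpretation: { city: "Austin", state: "CA" } },
]);

console.log(groups["austin"].length); // 2: the utterance "austin" is ambiguous
```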
Extension. The number of milliseconds since the beginning of this call.
Extension. The version number of the VoiceXML interpreter (for example, 1.2.3).
Not Implemented. Information Indicator Digits
The session.iidigits variable is set to information about the caller's location (pay phone, and so on), when available. A complete list of values is available in the Local Exchange Routing Guide published by Telcordia.
Automatic Number Identification
The session.telephone.ani variable is set to the caller's telephone number, when available.
Dialed Number Identification Service
The session.telephone.dnis variable is set to the number the caller dialed, when available.
Part No. 520-0001-02 | © 1999-2007, BeVocal, Inc. All rights reserved