Error Awareness and Recovery in Task-Oriented Spoken Dialogue Systems
Thesis Proposal
Dan Bohus, Carnegie Mellon University, January 2004
Thesis Committee: Alex Rudnicky (Chair), Roni Rosenfeld, Jeff Schneider, Eric Horvitz (Microsoft Research)
Problem

Lack of robustness when faced with understanding errors
- Spans most domains and interaction types
- Has a significant impact on performance
An example

S: What city are you leaving from?
U: Urbana Champaign [OKAY IN THAT SAME PAY]
S: Sorry, I'm not sure I understood what you said. What city are you leaving from?
U: Urbana Champaign [FOR MINUTE SINCE HEY]
S: I'm still having trouble understanding you. To go on, I need you to answer the following question. What city are you leaving from?
U: Chicago [CHICAGO]
S: traveling from Chicago. Where would you like to go?
U: Huntsville [SEOUL]
S: traveling to Seoul. What day did you need to travel?
U: no no I'm traveling to Birmingham [THE TRAVELING TO BERLIN P_M]
S: traveling in the afternoon. Okay, what day would you be departing Chicago?
U: /uh/ the tenth of august [AT THE TENTH OF AUGUST]
S: a flight tomorrow… I have a Korean Airlines flight departing Chicago at 1:40pm, arrives Seoul at 5pm the next day. Do you want to take that?
U: my destination is Birmingham [FLIGHT DESTINATION MR WEEKEND]
S: traveling on Saturday, August 12th … I have a flight departing Chicago at 1:40pm arrives Seoul at ………
Some Statistics …

Corrections [Krahmer, Swerts, Litman, Levow]
- 30% of utterances correct system mistakes
- Corrections are 2-3 times more likely to be misrecognized

Semantic error rates: ~25-35%
- SpeechActs [SRI]: 25%
- CU Communicator [CU]: 27%
- Jupiter [MIT]: 28%
- CMU Communicator [CMU]: 32%
- How May I Help You? [AT&T]: 36%
Significant Impact on Interaction

[Pie charts] CMU Communicator: 40% of sessions contain understanding errors; 26% of sessions failed. Multi-site Communicator Corpus [Shin et al]: 37% of sessions failed; 63% did not.
Outline

- Problem
- Approach
- Infrastructure
- Research Program
- Summary & Timeline
problem : approach : infrastructure : indicators : strategies : decision process : summary
Increasing Robustness …

Two possible approaches:
- Increase the accuracy of speech recognition
- Assume recognition is unreliable, and create the mechanisms for acting robustly at the dialogue management level
Snapshot of Existing Work: Slide 1

Theoretical models of grounding
- Contribution Model [Clark], Grounding Acts [Traum]
- Analytical/descriptive, not decision oriented

Practice: heuristic rules
- Misunderstandings: threshold(s) on confidence scores
- Non-understandings
- Ad-hoc, lack generality, not easy to extend
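The threshold-on-confidence heuristic for misunderstandings can be sketched as follows. The two-threshold scheme is the common practice the slide alludes to; the specific cutoff values (0.3, 0.7) are illustrative assumptions, not values from the proposal:

```python
def handle_hypothesis(confidence, reject_threshold=0.3, accept_threshold=0.7):
    """Map a recognition confidence score to an error handling action.

    Thresholds are illustrative; deployed systems tune them per domain.
    """
    if confidence < reject_threshold:
        return "reject"            # treat the turn as a non-understanding
    elif confidence < accept_threshold:
        return "explicit_confirm"  # ask the user to verify the hypothesis
    else:
        return "accept"            # take the hypothesis as-is
```

This is exactly the kind of rule the slide criticizes: the thresholds are ad-hoc, and the scheme does not generalize beyond accept/confirm/reject.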
Snapshot of Existing Work: Slide 2

Conversation as Action under Uncertainty [Paek and Horvitz]
- Belief networks to model uncertainties
- Decisions based on expected utility, VOI analysis

Reinforcement learning for dialogue control policies [Singh, Kearns, Litman, Walker, Levin, Pieraccini, Young, Scheffler, etc.]
- Formulate dialogue control as an MDP
- Learn optimal control policy from data

Do not scale up to complex, real-world tasks
Thesis Statement

Develop a task-independent, adaptive and scalable framework for error recovery in task-oriented spoken dialogue systems

Approach: decision making under uncertainty
Three Components

0. Infrastructure
1. Error awareness: develop indicators that assess the reliability of information and how well the dialogue is advancing
2. Error recovery strategies: develop and investigate an extended set of conversational error handling strategies
3. Error handling decision process: develop a scalable reinforcement-learning-based architecture for making error handling decisions
Infrastructure

- RavenClaw: modern dialogue management framework for complex, task-oriented domains (completed)
- RavenClaw spoken dialogue systems: test-bed for evaluation (completed)
RavenClaw

[Architecture diagram] A domain-specific Dialogue Task Specification (for RoomLine: Login with AskRegistered, AskName, GreetUser; GetQuery with DateTime, Location, and Properties such as Network, Projector, Whiteboard; GetResults; DiscussResults; concepts such as user_name, registered, query, results) runs on a Domain-Independent Dialogue Engine, which maintains the Dialogue Stack and the Expectation Agenda (grammar mappings such as registered: [No] -> false, [Yes] -> true; user_name: [UserName]; query.date_time: [DateTime]) and hosts the Error Handling Decision Process with its Strategies (e.g. ExplicitConfirm) and Error Indicators.
RavenClaw-based Systems

- RoomLine: information access
- CMU Let's Go! Bus Information System: information access
- LARRI [Symphony]: guidance through procedures
- Intelligent Procedure Assistant [NASA Ames]: guidance through procedures
- TeamTalk [11-754]: command-and-control
- Eureka [11-743]: web access
Research Plan

0. Infrastructure
1. Error awareness: develop indicators that assess the reliability of information and how well the dialogue is advancing
2. Error recovery strategies: develop and investigate an extended set of conversational error handling strategies
3. Error handling decision process: develop a scalable reinforcement-learning-based architecture for making error handling decisions
Existing Work

Confidence annotation
- Traditionally focused on the speech recognizer [Bansal, Chase, Cox, and others]
- Recently, multiple sources of knowledge [San-Segundo, Walker, Bosch, Bohus, and others]: recognition, parsing, dialogue management
- Detect misunderstandings: ~80-90% accuracy

Correction and aware site detection [Swerts, Litman, Levow and others]
- Multiple sources of knowledge
- Detect corrections: ~80-90% accuracy
Proposed: Belief Updating

Continuously assess beliefs in light of initial confidence and subsequent events: initial belief + system action + user response -> updated belief

An example:
S: Where are you flying from?
U: [CityName={Aspen/0.6; Austin/0.2}]
S: Did you say you wanted to fly out of Aspen?
U: [No/0.6] [CityName={Boston/0.8}]
-> [CityName={Aspen/?; Austin/?; Boston/?}]
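The kind of update in this example can be sketched as a simple Bayesian revision over the city hypotheses. This toy version handles only the yes/no evidence from the explicit confirmation (not the new recognition hypothesis in the user's answer), and the likelihood values are illustrative assumptions; the proposal instead learns the update from data with a dynamic belief network:

```python
def update_belief(belief, response_yes,
                  p_yes_given_correct=0.9, p_yes_given_wrong=0.1):
    """Revise a belief over hypotheses after explicitly confirming the top one.

    belief: dict mapping hypothesis -> probability.
    response_yes: True if the user answered "yes" to the confirmation.
    Likelihoods are illustrative assumptions, not learned from data.
    """
    top = max(belief, key=belief.get)
    posterior = {}
    for hyp, prior in belief.items():
        likelihood = p_yes_given_correct if hyp == top else p_yes_given_wrong
        if not response_yes:
            likelihood = 1.0 - likelihood
        posterior[hyp] = prior * likelihood
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

# After "Did you say Aspen?" -> "No", belief mass shifts away from Aspen:
updated = update_belief({"Aspen": 0.6, "Austin": 0.4}, response_yes=False)
```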
Belief Updating: Approach

Model the update in a dynamic belief network.

[Network diagram] At time t, the user concept node, together with the system action, generates the user response; at time t+1 the user concept is updated. Observed features include confidence, correction, the 1st/2nd/3rd hypotheses, yes/no answers, positive and negative markers, and utterance length.
Research Plan

0. Infrastructure
1. Error awareness: develop indicators that assess the reliability of information and how well the dialogue is advancing
2. Error recovery strategies: develop and investigate an extended set of conversational error handling strategies
3. Error handling decision process: develop a scalable reinforcement-learning-based architecture for making error handling decisions
Is the Dialogue Advancing Normally?

Locally, turn-level: non-understanding indicators
- Non-understanding flag directly available
- Develop additional indicators: recognition, understanding, interpretation

Globally, discourse-level: dialogue-on-track indicators
- Counts, averages of non-understanding indicators
- Rate of dialogue advance
Research Plan

0. Infrastructure
1. Error awareness: develop indicators that assess the reliability of information and how well the dialogue is advancing
2. Error recovery strategies: develop and investigate an extended set of conversational error handling strategies
3. Error handling decision process: develop a scalable reinforcement-learning-based architecture for making error handling decisions
Error Recovery Strategies

- Identify: identify and define an extended set of error handling strategies
- Implement: construct task-decoupled implementations of a large number of strategies
- Evaluate: evaluate performance and make further refinements
List of Error Recovery Strategies

System initiated:
- Ensure that the system has reliable information (misunderstandings): Explicit confirmation, Implicit confirmation, Disambiguation, Ask repeat concept, Reject concept
- Ensure that the dialogue is on track, local problems (non-understandings): Notify non-understanding, Ask repeat turn, Ask rephrase turn, Explicit confirm turn, Targeted help, WH-reformulation, Keep-a-word reformulation, Generic help, "You can say", SNR repair
- Ensure that the dialogue is on track, global problems (compounded, discourse-level problems): Restart subtask plan, Select alternative plan, Start over, Terminate session / Direct to operator, Switch input modality

User initiated:
- Help, Where are we?, Start over, Scratch concept value, Go back, Channel establishment, Suspend/Resume, Repeat, Summarize, Quit
Error Recovery Strategies: Evaluation

Reusability
- Deploy in different spoken dialogue systems

Efficiency of non-understanding strategies
- Simple metric: is the next utterance understood?
- Efficiency depends on the decision process
- Construct upper and lower bounds for efficiency: the lower bound from a decision process that chooses uniformly at random, the upper bound from a human performing the decision process (WOZ)
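Under the simple metric above, the efficiency of each non-understanding strategy can be estimated from logs as the fraction of its uses after which the next utterance was understood. A minimal sketch; the log format (strategy name paired with a success flag) is an assumption for illustration:

```python
from collections import defaultdict

def recovery_rate(segments):
    """Per-strategy efficiency under the "is the next utterance understood?" metric.

    segments: iterable of (strategy, next_turn_understood) pairs, e.g. one pair
    per non-understanding recorded in the system logs.
    """
    counts = defaultdict(lambda: [0, 0])   # strategy -> [understood, total]
    for strategy, understood in segments:
        counts[strategy][1] += 1
        if understood:
            counts[strategy][0] += 1
    return {s: ok / total for s, (ok, total) in counts.items()}
```

Running the same computation over logs from the uniform-random policy and from the WOZ sessions would give the lower and upper bounds the slide describes.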
Research Plan

0. Infrastructure
1. Error awareness: develop indicators that assess the reliability of information and how well the dialogue is advancing
2. Error recovery strategies: develop and investigate an extended set of conversational error handling strategies
3. Error handling decision process: develop a scalable reinforcement-learning-based architecture for making error handling decisions
Previous Reinforcement Learning Work

Dialogue control ~ Markov decision process: states, actions, rewards

Previous work: successes in small domains
- NJFun [Singh, Kearns, Litman, Walker et al]

Problems
- Approach does not scale
- Once learned, policies are not reusable

[Diagram: states S1, S2, S3, with action A and reward R]
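A toy illustration of the MDP formulation: the states, actions, transitions, and turn-cost rewards below are invented for illustration, and value iteration on a known model stands in for the policy learning that systems like NJFun perform from dialogue data:

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, iters=100):
    """Solve a small MDP. transition[(s, a)] is a list of (next_state, prob)."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(reward[(s, a)] +
                    gamma * sum(p * V[s2] for s2, p in transition[(s, a)])
                    for a in actions)
             for s in states}
    # Greedy policy with respect to the converged value function
    return {s: max(actions,
                   key=lambda a: reward[(s, a)] +
                   gamma * sum(p * V[s2] for s2, p in transition[(s, a)]))
            for s in states}

# Illustrative two-state dialogue fragment: a concept is either uncertain or grounded.
states = ["uncertain", "grounded"]
actions = ["explicit_confirm", "no_action"]
transition = {
    ("uncertain", "explicit_confirm"): [("grounded", 1.0)],
    ("uncertain", "no_action"): [("uncertain", 1.0)],
    ("grounded", "explicit_confirm"): [("grounded", 1.0)],
    ("grounded", "no_action"): [("grounded", 1.0)],
}
reward = {
    ("uncertain", "explicit_confirm"): -1.0,  # a confirmation costs one turn
    ("uncertain", "no_action"): -2.0,         # risk of acting on a bad value
    ("grounded", "explicit_confirm"): -1.0,   # a needless confirmation still costs
    ("grounded", "no_action"): 0.0,
}
policy = value_iteration(states, actions, transition, reward)
```

With these (assumed) costs, the optimal policy confirms while the concept is uncertain and stays silent once it is grounded.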
Proposed Approach

Overcome previous shortcomings:
1. Focus learning only on error handling: reduces the size of the learning problem, favors reusability of learned policies, and lessens the system development effort
2. Use a "divide-and-conquer" approach: leverage independences in dialogue
Decision Process Architecture

[Diagram] The dialogue task tree (RoomLine: Login with AskRegistered, AskName, GreetUser; concepts user_name, registered) is mirrored by small topic-MDPs and concept-MDPs, each proposing an action (e.g. No Action, Explicit Confirm); a gating mechanism selects which proposed action to execute, under an independence assumption.

- Small-size models
- Parameters can be tied across models
- Accommodates dynamic task generation
- Favors reusability of policies
- Initial policies can be easily handcrafted
Reward Structure & Learning

- Global, post-gate rewards: a single reward signal issued after the gating mechanism yields an atypical, multi-agent reinforcement learning setting
- Local rewards: one reward per MDP yields multiple, standard RL problems, amenable to model-based approaches
- Rewards can be based on any dialogue performance metric
Evaluation

Performance
- Compare learned policies with initial heuristic policies
- Metrics: task completion, efficiency, number and length of error segments, user satisfaction

Scalability
- Deploy in a system operating with a sizable task
- Theoretical analysis
Outline

- Problem
- Approach
- Infrastructure
- Research Program
- Summary & Timeline
Summary of Anticipated Contributions

Goal: develop a task-independent, adaptive and scalable framework for error recovery in task-oriented spoken dialogue systems
- Modern dialogue management framework
- Belief updating framework
- Investigation of an extended set of error handling strategies
- Scalable data-driven approach for learning error handling policies
Timeline

- Proposal: February 2004 (now)
- Milestone 1: September 2004
- Milestone 2: January 2005
- Milestone 3: September 2005
- Defense: December 2005

(5.5 years in the program; the plan runs from the end of year 4 through the end of year 5.)

Work items (tracks: data, indicators, strategies, decisions):
- Data collection for belief updating and WOZ study
- Develop and evaluate the belief updating models
- Implement dialogue-on-track indicators
- Misunderstanding and non-understanding strategies
- Investigate theoretical aspects of the proposed reinforcement learning model
- Evaluate non-understanding strategies; develop remaining strategies
- Error handling decision process: reinforcement learning experiments
- Data collection for RL training
- Data collection for RL evaluation
- Contingency data collection efforts
- Additional experiments: extensions or contingency work
Thank You!
Questions & Comments
Additional Slides
[Gantt chart: months 1-24, Spring 2004 through winter 2005-06 (years 4-6), with milestones M1 at month 7, M2 at month 12, M3 at month 18]

Data Collection and Experiments
- BDC: background data collection (two periods)
- [5], 4 months: DC-1, data collection for belief updating and non-understanding strategies evaluation; WOZ, wizard-of-oz experiment for non-understanding strategies
- [9], 3 months: DC-2L, data collection for decision process training and baselines
- [11], 2 months: DC-2E, data collection for decision process evaluation
- [14], 6 months: contingency (or extension work items) data collections / experiments

Belief Updating (Work Item 5)
- [6], 5 months: build and evaluate belief updating models, integrate in RavenClaw

Non-understanding and Dialogue-On-Track Indicators (Work Item 6)
- [7], 3 months: implement dialogue-on-track indicators

Error Prevention and Recovery Strategies (Work Item 8)
- [1], 4 months: finish RavenClaw implementations for the misunderstanding and non-understanding strategies
- [4], 6 months: evaluate non-understanding strategies in random exploration mode and in a WOZ setting; develop the rest of the error handling strategies
- [15], 6 months: refinements of the proposed model; follow-up work for evaluating adaptability and reusability of policies

Decision Process: Reinforcement Learning Work (Work Item 9)
- [2], 12 months: further investigate the theoretical aspects of the proposed RL model, establish the final structure for the topic and concept MDPs, design initial policies, and finalize the structure of the gating function; implement the models in the RavenClaw dialogue management framework
- [10], 6 months: perform reinforcement learning experiments/evaluation for the decision process
- [16], 6 months: (contingency time) alternative data-driven models
- [13], 3 months: write decision process paper

Writing
- [3], 3 months: write paper on RavenClaw conversational strategies for error handling
- [8], 3 months: write belief updating paper
- [12], 10 months: write thesis document
Understanding Process

Errors in spoken dialogue systems arise in the understanding process (recognition, parsing, contextual interpretation):
- System acquires correct information: OK
- System acquires incorrect information: misunderstanding (addressed by belief updating / concept-level strategies)
- System does not acquire information: non-understanding (addressed by non-understanding indicators / turn-level strategies)
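In labeled data, the three outcomes above can be told apart mechanically. A minimal sketch; the function and its inputs are illustrative, not part of the proposal:

```python
def classify_turn(hypothesis, true_value):
    """Classify the outcome of one understanding cycle, per the taxonomy above.

    hypothesis: the semantic value the system acquired, or None if it
    acquired nothing. true_value: what the user actually meant (available
    only in hand-labeled corpora, not at run time).
    """
    if hypothesis is None:
        return "non-understanding"     # the system acquired no information
    if hypothesis == true_value:
        return "OK"                    # correct information acquired
    return "misunderstanding"          # incorrect information acquired
```

The asymmetry in the taxonomy is what makes error handling hard at run time: a non-understanding is directly observable (hypothesis is None), while a misunderstanding is not, since true_value is unknown to the system.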
Structure of Individual MDPs

Concept MDPs
- State-space: belief indicators, e.g. high / medium / low confidence (HC / MC / LC) and empty (0)
- Action-space: concept-scoped system actions: explicit confirmation (ExplConf), implicit confirmation (ImplConf), no action (NoAct)

Topic MDPs
- State-space: non-understanding and dialogue-on-track indicators
- Action-space: non-understanding actions, topic-level actions
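Because each concept MDP is this small, an initial policy is easy to handcraft, as the architecture slide notes. A sketch, assuming HC/MC/LC denote high/medium/low-confidence belief states and 0 an empty concept; the particular state-to-action mapping is an illustrative assumption, not the policy from the proposal:

```python
# Illustrative handcrafted initial policy for one concept MDP: the state is a
# belief-indicator bucket, the action a concept-scoped error handling strategy.
INITIAL_CONCEPT_POLICY = {
    "HC": "NoAct",      # high confidence: accept silently
    "MC": "ImplConf",   # medium confidence: implicit confirmation
    "LC": "ExplConf",   # low confidence: explicit confirmation
    "0":  "NoAct",      # concept not yet acquired: nothing to confirm
}

def concept_action(confidence_bucket):
    """Look up the initial (pre-learning) action for a belief state."""
    return INITIAL_CONCEPT_POLICY[confidence_bucket]
```

Reinforcement learning would then refine this table from data rather than replace its structure.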
Gating Mechanism

Heuristic derived from domain-independent dialogue principles:
- Give priority to entities closer to the conversational focus
- Give priority to topics over concepts
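The two principles above can be sketched as a priority function over the actions proposed by the individual models. The candidate data structure, and the choice to rank focus before model kind, are illustrative assumptions:

```python
def gate(candidates, focus_topic):
    """Pick which model's proposed action to execute.

    candidates: list of dicts with keys 'kind' ('topic' or 'concept'),
    'topic' (the topic the model belongs to), and 'action'.
    Heuristic from the slide: prefer entities in the conversational focus,
    and prefer topic-level models over concept-level ones.
    """
    def priority(c):
        in_focus = 0 if c["topic"] == focus_topic else 1
        kind_rank = 0 if c["kind"] == "topic" else 1
        return (in_focus, kind_rank)   # lower tuple = higher priority

    return min(candidates, key=priority)
```

Lexicographic tuple comparison gives the focus rule precedence; swapping the tuple elements would instead let topic models always outrank concept models.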
Task-independence / Reusability

Argument: architecture
Proof: deployment across multiple RavenClaw systems

[RavenClaw architecture diagram, as on the RavenClaw slide earlier: Dialogue Task Specification over the Domain-Independent Dialogue Engine with Dialogue Stack, Expectation Agenda, and Error Handling Decision Process]
Adaptable

Argument: reinforcement learning approach
Proof: longer-term evaluation of adaptability (extension work item)

[Decision process architecture diagram, as on the Decision Process Architecture slide earlier]
Scalable

Argument: architecture
Proof: deployment and experiments with systems with large tasks

[Decision process architecture diagram, as on the Decision Process Architecture slide earlier]
Scalability of Reinforcement Learning

NJFun: 3 concepts, 7 state variables, 62 states; learned a policy from 311 dialogues

Consider 12 concepts (RoomLine/20, CMU Let's Go!/27): 242 states; the state-space grows 4 times, the number of parameters 16 times
Extension Work Items

Portability of confidence annotation and belief updating schemes
- Use domain-independent features; train in one domain, test in another
- Use of unlabeled data: self-training and co-training to improve performance in a new domain

Evaluation of adaptability/reusability
- Reusability: migrate policies from one domain to another
- Adaptability: monitor model changes and system behavior throughout a period of extended use
Study by [Shin et al]

Labeled error segments in 141 dialogues from multiple Communicator systems
- 1.66 error segments / session
- 22% of the error segments never get back on track
- 37% of sessions have breakdowns in interaction
Error Indicators, Strategies & Decision Process

[Decision process architecture diagram, as on the Decision Process Architecture slide earlier, annotated with where the error indicators, strategies, and decision process fit]
An example

S: Where do you want to go to?
U: I wanna go to Grand Rapids Michigan on December first [I WANNA GO TO GRAND RAPIDS MICHIGAN FOLLOWING DECEMBER BOSTON]
S: traveling to Boston… departing Pittsburgh on what date?
U: traveling to Grand Rapids Michigan [TRAVELING TO GRAND RAPIDS MICHIGAN]
S: traveling to Grand Rapids… on what date would you like to travel?
U: December first [DECEMBER FOR SHIT]
S: sorry, I'm not sure I understood what you said. What day did you need to travel?
U: Friday December first [FRIDAY DECEMBER FOURTH]
S: a flight on Monday, December fourth… and what time would you like to leave Pittsburgh Monday, December fourth?
U: traveling on Friday December first [COMING FRIDAY DECEMBER]
S: the date you gave was ambiguous. Did you mean November twenty-third or November twenty-fourth?
U: traveling on Friday December first [TRAVELING ON FRIDAY DECEMBER]
S: a flight on Friday, November twenty-fourth… and departing Pittsburgh at what time?
U: traveling the following week on Friday [I'M TRAVELING FOLLOWING WEEK ON FRIDAY]