Conversation with a Conversational Agent
Find a dialogue-based/conversational agent online
For my dialogue-based/conversational agent, I selected Eliza, a computer program that emulates a Rogerian therapist.
a) a fruitful, engaging conversation
- Me: I'm feeling tired today.
- Eliza: Do you believe it is normal to be feeling tired today?
- Me: I think so. I have a lot of things on my plate.
- Eliza: Do you really think so.
- Me: I do.
- Eliza: OK..."I think so. You have a lot of things on your plate." Tell me more.
- Me: Well, I"m juggling full-time work, schoolwork, an academic internship, and trying to make time for friends and family.
- Eliza: Did you come to me because you're juggling full-time work, school work, an academic internship, and trying to make time for friends and family?
- Me: Yes. I don't know how to make time for everything.
- Eliza: Don't you really know how to make time for everything?
b) an unfruitful (failed) conversation on a topic of your choice, and share an example of each:
- Eliza: Hello, I am Eliza.
- Me: Hello, I am Grace
- Eliza: How long have you been Grace?
- Me: All my life.
- Eliza: Tell me more...
- Me: Don't know what else to say.
- Eliza: Have you asked anyone else?
- Me: About what?
- Eliza: Are such questions on your mind often?
- Me: What questions?
- Eliza: What is it that you really want to know?
c) try to explain what are potential factors/components that could have contributed to the success or failure of the conversation (e.g., any similar factors/components discussed in the readings?)
Honestly, I had a hard time distinguishing a fruitful, engaging conversation from an unfruitful one. Even in the exchanges where Eliza was able to respond, I didn't find her answers very helpful or engaging. In fact, they came off as a little insulting. I wonder if this was mostly due to the nature of the content I was discussing; I was probably more sensitive about and invested in this content than I would have been about something less personal.
In the engaging conversation I selected, I was at least able to get a little back and forth going with Eliza, although I did need to provide a double prompt to get there. In the unfruitful conversation, I was never really able to get off the ground with Eliza. In particular, the way Eliza's opening line is structured creates a challenge: it invites you to reply in kind and introduce yourself, yet the AI doesn't seem prepared to build on that introduction.
d) how you may change the design/structure of the chatbot for improvement.
As Cassell (2001) suggests, an embodied AI could have helped me feel more connected to the system and could also have provided context that helped the AI understand me better. Although Rosé and VanLehn (2005) studied a different aspect of this problem, their point that there is uncertainty in assessing text-based content alone carried through for me in this activity.
Additionally, I think an AI system that could better assess the content of sentences (focus, subject) and the underlying emotions would produce better answers. Finally, I think the conversational agent should ideally be taught the basic cadences of conversations and meetings and be able to classify statements accordingly: this is an introduction, this is a question, this is a statement, this is a farewell.
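To make that classification idea concrete, here is a minimal sketch of a rule-based utterance classifier in Python. The keyword patterns are assumptions made for illustration; a real system would use a trained dialogue-act classifier rather than a handful of regular expressions:

```python
import re

def classify_utterance(text: str) -> str:
    """Label an utterance as an introduction, farewell, question, or statement."""
    lowered = text.lower().strip()
    if re.match(r"^(hello|hi|hey|good (morning|afternoon|evening))\b", lowered):
        # "Hello, I am Grace" would be recognized as an introduction here,
        # rather than parsed literally the way Eliza parsed it.
        return "introduction"
    if re.search(r"\b(bye|goodbye|see you|good night)\b", lowered):
        return "farewell"
    if lowered.endswith("?"):
        return "question"
    return "statement"

for line in ["Hello, I am Grace", "About what?", "I'm feeling tired today."]:
    print(line, "->", classify_utterance(line))
```

With labels like these, the agent could at least answer an introduction with an introduction, rather than producing the literal parse behind "How long have you been Grace?"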
References
Cassell, J. (2001). Embodied conversational agents: Representation and intelligence in user interfaces. AI Magazine, 22(4), 67-83.
Rosé, C., & VanLehn, K. (2005). An evaluation of a hybrid language understanding approach for
robust selection of tutoring goals. International Journal of Artificial Intelligence in Education, 15(4),
325-355.
Hello Grace,
I had a good laugh when Eliza asked you “How long have you been Grace?”
Your initial impression that the chatbot “came off as a little insulting”, or the sense that you were “never able to really get off the ground with Eliza”, is an important one. You may want to think about what it was in Eliza's comments that felt insulting (e.g., repeating your words, or not contributing any new information to the conversation?). Are there times when you experience similar insults and “never really connect” in human conversations? Is there something we can learn from failed human conversations? You make some interesting suggestions for improving Eliza, but for a system known to be not that sophisticated, you may want to start with a simple change. For example, Eliza could take a word or comment that the human made, look it up in a database, and return a comment that is relevant but not exactly the same.
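Taking up that last suggestion, here is a minimal sketch of the keyword-lookup idea in Python, assuming a tiny hand-built table standing in for the database; the keywords and response templates below are illustrative assumptions, not Eliza's actual script:

```python
import random

# Illustrative keyword "database": each keyword maps to several response
# templates so the reply is relevant but not an echo of the user's words.
RESPONSES = {
    "tired": [
        "Fatigue can mean many things. What do you think is behind it?",
        "How long have you been feeling this tired?",
    ],
    "work": [
        "Work seems to be on your mind. Which part weighs on you most?",
        "How does work fit into the rest of your life?",
    ],
}
FALLBACKS = ["Tell me more.", "How does that make you feel?"]

def respond(user_text: str) -> str:
    """Return a reply keyed to something the user said, varied each time."""
    lowered = user_text.lower()
    for keyword, templates in RESPONSES.items():
        if keyword in lowered:
            # Choosing among variants keeps the reply related without
            # repeating the user's words back verbatim.
            return random.choice(templates)
    return random.choice(FALLBACKS)

print(respond("I'm feeling tired today."))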