Publications
Books
Shuichi Nishio, Hideyuki Nakanishi, Tsutomu Fujinami, "Investigating Human Nature and Communication through Robots", Frontiers Media, January, 2017.
Abstract: The development of information technology has enabled us to exchange more and more information no matter how far apart we are, and it has changed the way we communicate. The various types of robots recently promoted to the general public hint that robots may further influence our daily lives as they physically interact with us and handle objects in the environment. We may even recognize a feeling of presence similar to that of a human being when we talk to a robot, or when a robot takes part in our conversation. The impact will be strong enough to make us think about the meaning of communication. This e-book consists of studies that examine how our communication is influenced by robots. Topics include our attitudes toward robot behaviors, designing robots that communicate better with people, and how people can be affected by communicating through robots.
BibTeX:
@Book{Nishio2017,
  title =     {Investigating Human Nature and Communication through Robots},
  publisher = {Frontiers Media},
  year =      {2017},
  editor =    {Shuichi Nishio and Hideyuki Nakanishi and Tsutomu Fujinami},
  month =     jan,
  abstract =  {The development of information technology has enabled us to exchange more and more information no matter how far apart we are, and it has changed the way we communicate. The various types of robots recently promoted to the general public hint that robots may further influence our daily lives as they physically interact with us and handle objects in the environment. We may even recognize a feeling of presence similar to that of a human being when we talk to a robot, or when a robot takes part in our conversation. The impact will be strong enough to make us think about the meaning of communication. This e-book consists of studies that examine how our communication is influenced by robots. Topics include our attitudes toward robot behaviors, designing robots that communicate better with people, and how people can be affected by communicating through robots.},
  file =      {Nishio2017.pdf:pdf/Nishio2017.pdf:PDF},
  isbn =      {9782889450862},
  url =       {http://www.frontiersin.org/books/Investigating_Human_Nature_and_Communication_through_Robots/1098}
}
Book Chapters
Panikos Heracleous, Denis Beautemps, Hiroshi Ishiguro, Norihiro Hagita, "Towards Augmentative Speech Communication", Chapter in Speech and Language Technologies, InTech, Vukovar, Croatia, pp. 303-318, June, 2011.
Abstract: Speech is the most natural form of communication for human beings and is often described as a uni-modal communication channel. However, it is well known that speech is multi-modal in nature and includes the auditive, visual, and tactile modalities (i.e., as in Tadoma communication). Other less natural modalities such as electromyographic signal, invisible articulator display, or brain electrical activity or electromagnetic activity can also be considered. Therefore, in situations where audio speech is not available or is corrupted because of disability or adverse environmental condition, people may resort to alternative methods such as augmented speech.
BibTeX:
@InCollection{Heracleous2011,
  author =    {Panikos Heracleous and Denis Beautemps and Hiroshi Ishiguro and Norihiro Hagita},
  title =     {Towards Augmentative Speech Communication},
  booktitle = {Speech and Language Technologies},
  publisher = {InTech},
  year =      {2011},
  editor =    {Ivo Ipsic},
  pages =     {303--318},
  address =   {Vukovar, Croatia},
  month =     jun,
  abstract =  {Speech is the most natural form of communication for human beings and is often described as a uni-modal communication channel. However, it is well known that speech is multi-modal in nature and includes the auditive, visual, and tactile modalities (i.e., as in Tadoma communication \cite{TADOMA}). Other less natural modalities such as electromyographic signal, invisible articulator display, or brain electrical activity or electromagnetic activity can also be considered. Therefore, in situations where audio speech is not available or is corrupted because of disability or adverse environmental condition, people may resort to alternative methods such as augmented speech.},
  file =      {Heracleous2011.pdf:Heracleous2011.pdf:PDF;InTech-Towards_augmentative_speech_communication.pdf:http\://www.intechopen.com/source/pdfs/15082/InTech-Towards_augmentative_speech_communication.pdf:PDF},
  grant =     {CREST},
  url =       {http://www.intechopen.com/articles/show/title/towards-augmentative-speech-communication}
}
Shuichi Nishio, Hiroshi Ishiguro, Norihiro Hagita, "Geminoid: Teleoperated Android of an Existing Person", Chapter in Humanoid Robots: New Developments, I-Tech Education and Publishing, Vienna, Austria, pp. 343-352, June, 2007.
BibTeX:
@InCollection{Nishio2007a,
  author =          {Shuichi Nishio and Hiroshi Ishiguro and Norihiro Hagita},
  title =           {Geminoid: Teleoperated Android of an Existing Person},
  booktitle =       {Humanoid Robots: New Developments},
  publisher =       {I-Tech Education and Publishing},
  year =            {2007},
  editor =          {Armando Carlos de Pina Filho},
  pages =           {343--352},
  address =         {Vienna, Austria},
  month =           jun,
  file =            {Nishio2007a.pdf:Nishio2007a.pdf:PDF;InTech-Geminoid_teleoperated_android_of_an_existing_person.pdf:http\://www.intechopen.com/source/pdfs/240/InTech-Geminoid_teleoperated_android_of_an_existing_person.pdf:PDF},
  url =             {http://www.intechopen.com/articles/show/title/geminoid__teleoperated_android_of_an_existing_person}
}
Overviews
Shuichi Nishio, Takashi Minato, Hiroshi Ishiguro, "Using Androids to Provide Communication Support for the Elderly", New Breeze, vol. 27, no. 4, pp. 14-17, October, 2015.
BibTeX:
@Article{Nishio2015c,
  author =   {Shuichi Nishio and Takashi Minato and Hiroshi Ishiguro},
  title =    {Using Androids to Provide Communication Support for the Elderly},
  journal =  {New Breeze},
  year =     {2015},
  volume =   {27},
  number =   {4},
  pages =    {14-17},
  month =    oct,
  day =      {9},
  file =     {Nishio2015c.pdf:pdf/Nishio2015c.pdf:PDF},
  url =      {https://www.ituaj.jp/wp-content/uploads/2015/10/nb27-4_web_05_ROBOTS_usingandroids.pdf}
}
Kohei Ogawa, Shuichi Nishio, Takashi Minato, Hiroshi Ishiguro, "Android Robots as Tele-presence Media", Biomedical Engineering and Cognitive Neuroscience for Healthcare: Interdisciplinary Applications, Medical Information Science Reference, Pennsylvania, USA, pp. 54-63, September, 2012.
Abstract: In this chapter, the authors describe two human-like android robots, known as Geminoid and Telenoid, which they have developed. Geminoid was developed for two reasons: (1) to explore how humans react or respond to the android during face-to-face communication and (2) to investigate the advantages of the android as a communication medium compared to traditional communication media, such as the telephone or the television conference system. The authors conducted two experiments: the first was targeted at an interlocutor of Geminoid, and the second at an operator of it. The results of these experiments showed that Geminoid could emulate a human's presence in a natural-conversation situation. Additionally, Geminoid could be as persuasive to the interlocutor as a human. The operators of Geminoid were also influenced by the android: during operation, they felt as if their bodies were one and the same with the Geminoid body. The latest challenge has been to develop Telenoid, an android with a more abstract appearance than Geminoid, which looks and behaves as a minimalistic human. At first glance, Telenoid resembles a human; however, its appearance can be interpreted as any sex or age. Two field experiments were conducted with Telenoid. The results of these experiments showed that Telenoid could be an acceptable communication medium for both young and elderly people. In particular, physical interaction, such as a hug, positively affected the experience of communicating with Telenoid.
BibTeX:
@Article{Ogawa2012b,
  author =    {Kohei Ogawa and Shuichi Nishio and Takashi Minato and Hiroshi Ishiguro},
  title =     {Android Robots as Tele-presence Media},
  journal =   {Biomedical Engineering and Cognitive Neuroscience for Healthcare: Interdisciplinary Applications},
  year =      {2012},
  pages =     {54-63},
  month =     sep,
  abstract =  {In this chapter, the authors describe two human-like android robots, known as Geminoid and Telenoid, which they have developed. Geminoid was developed for two reasons: (1) to explore how humans react or respond to the android during face-to-face communication and (2) to investigate the advantages of the android as a communication medium compared to traditional communication media, such as the telephone or the television conference system. The authors conducted two experiments: the first was targeted at an interlocutor of Geminoid, and the second at an operator of it. The results of these experiments showed that Geminoid could emulate a human's presence in a natural-conversation situation. Additionally, Geminoid could be as persuasive to the interlocutor as a human. The operators of Geminoid were also influenced by the android: during operation, they felt as if their bodies were one and the same with the Geminoid body. The latest challenge has been to develop Telenoid, an android with a more abstract appearance than Geminoid, which looks and behaves as a minimalistic human. At first glance, Telenoid resembles a human; however, its appearance can be interpreted as any sex or age. Two field experiments were conducted with Telenoid. The results of these experiments showed that Telenoid could be an acceptable communication medium for both young and elderly people. In particular, physical interaction, such as a hug, positively affected the experience of communicating with Telenoid.},
  address =   {Pennsylvania, USA},
  chapter =   {6},
  doi =       {10.4018/978-1-4666-2113-8.ch006},
  editor =    {Jinglong Wu},
  file =      {Ogawa2012b.pdf:Ogawa2012b.pdf:PDF},
  isbn =      {9781466621138},
  publisher = {Medical Information Science Reference},
  url =       {http://www.igi-global.com/chapter/android-robots-telepresence-media/69905}
}
Daisuke Sakamoto, Hiroshi Ishiguro, "Geminoid: Remote-Controlled Android System for Studying Human Presence", Kansei Engineering International, vol. 8, no. 1, pp. 3-9, 2009.
BibTeX:
@Article{Sakamoto2009,
  author =   {Daisuke Sakamoto and Hiroshi Ishiguro},
  title =    {Geminoid: Remote-Controlled Android System for Studying Human Presence},
  journal =  {Kansei Engineering International},
  year =     {2009},
  volume =   {8},
  number =   {1},
  pages =    {3--9},
  file =     {Sakamoto2009.pdf:Sakamoto2009.pdf:PDF},
  url =      {http://mol.medicalonline.jp/archive/search?jo=dp7keint&ye=2009&vo=8&issue=1}
}
Invited Talks
Hiroshi Ishiguro, "Studies on humanlike robots", In Computer Graphics International 2017 (CGI2017), Keio University Hiyoshi Campus, Yokohama, June, 2017.
Abstract: In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.
BibTeX:
@InProceedings{Ishiguro2017e,
  author =    {Hiroshi Ishiguro},
  title =     {Studies on humanlike robots},
  booktitle = {Computer Graphics International 2017 (CGI2017)},
  year =      {2017},
  address =   {Keio University Hiyoshi Campus, Yokohama},
  month =     jun,
  abstract =  {In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.},
  url =       {http://fj.ics.keio.ac.jp/cgi17/}
}
Hiroshi Ishiguro, "Studies on Humanlike Robots", In Academia Film Olomouc (AFO52), Olomouc, Czech Republic, April, 2017.
Abstract: In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.
BibTeX:
@InProceedings{Ishiguro2017f,
  author =    {Hiroshi Ishiguro},
  title =     {Studies on Humanlike Robots},
  booktitle = {Academia Film Olomouc (AFO52)},
  year =      {2017},
  address =   {Olomouc, Czech Republic},
  month =     apr,
  abstract =  {In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.},
  day =       {28},
  url =       {http://www.afo.cz/programme/3703/}
}
Hiroshi Ishiguro, "AI, Labour, Creativity and Authorship", In AI in Asia: AI for Social Good, Waseda University, Tokyo, March, 2017.
Abstract: In this talk, the speaker discusses AI (Artificial Intelligence) and humanoid robots and how they will affect society.
BibTeX:
@InProceedings{Ishiguro2017a,
  author =    {Hiroshi Ishiguro},
  title =     {AI, Labour, Creativity and Authorship},
  booktitle = {AI in Asia: AI for Social Good},
  year =      {2017},
  address =   {Waseda University, Tokyo},
  month =     mar,
  abstract =  {In this talk, the speaker discusses AI (Artificial Intelligence) and humanoid robots and how they will affect society.},
  day =       {6},
  url =       {https://www.digitalasiahub.org/2017/02/27/ai-in-asia-ai-for-social-good/}
}
Hiroshi Ishiguro, "Humans and Robots in a Free-for-All Discussion", In The South by Southwest (SXSW) Conference & Festivals 2017, Austin Convention Center, USA, March, 2017.
Abstract: Robots now equal, if not surpass, humans in many skill sets - games, driving, and musical performance. They are now able to maintain logical conversations rather than merely responding to simple questions. Famed roboticist Dr. Ishiguro, who created an android that is a spitting image of himself, Dr. Higashinaka of Japanese communication giant NTT, who spearheads the development of the latest spoken dialogue technology, and two robots will engage in lively banter. Are robots now our conversational companions?
BibTeX:
@InProceedings{Ishiguro2017c,
  author =    {Hiroshi Ishiguro},
  title =     {Humans and Robots in a Free-for-All Discussion},
  booktitle = {The South by Southwest (SXSW) Conference \& Festivals 2017},
  year =      {2017},
  address =   {Austin Convention Center, USA},
  month =     mar,
  abstract =  {Robots now equal, if not surpass, humans in many skill sets - games, driving, and musical performance. They are now able to maintain logical conversations rather than merely responding to simple questions. Famed roboticist Dr. Ishiguro, who created an android that is a spitting image of himself, Dr. Higashinaka of Japanese communication giant NTT, who spearheads the development of the latest spoken dialogue technology, and two robots will engage in lively banter. Are robots now our conversational companions?},
  day =       {12},
  url =       {http://schedule.sxsw.com/2017/events/PP95381}
}
Hiroshi Ishiguro, "Androids, Robots, and Our Future Life", In CeBIT 2017, Hannover, Germany, March, 2017.
Abstract: We humans have an innate brain function to recognize humans. Therefore, humanlike robots, androids, can be ideal information media for human-robot/computer interaction. In this talk, the speaker introduces the robots developed in his laboratories and their practical applications, and discusses how robots will change our lives in the future.
BibTeX:
@InProceedings{Ishiguro2017b,
  author =    {Hiroshi Ishiguro},
  title =     {Androids, Robots, and Our Future Life},
  booktitle = {CeBIT 2017},
  year =      {2017},
  address =   {Hannover, Germany},
  month =     mar,
  abstract =  {We humans have an innate brain function to recognize humans. Therefore, humanlike robots, androids, can be ideal information media for human-robot/computer interaction. In this talk, the speaker introduces the robots developed in his laboratories and their practical applications, and discusses how robots will change our lives in the future.},
  day =       {21},
  url =       {http://www.cebit.de/en/}
}
Hiroshi Ishiguro, "Uncanny Valleys: Thinking and Feeling in the Age of Synthetic Humans", In USC Visions and Voices, Doheny Memorial Library, USA, March, 2017.
Abstract: A discussion with leading robotics experts, including Hiroshi Ishiguro, Yoshio Matsumoto, Travis Deyle, and Jonathan Gratch of the USC Institute for Creative Technologies, and science historian Jessica Riskin (The Restless Clock) about the future of artificial life and new pathways for human-machine interactions. You'll also have a chance to explore an interactive showcase that reveals how roboticists are replicating human locomotion, facial expressions, and intelligence as they assemble walking, talking, thinking, and feeling machines.
BibTeX:
@InProceedings{Ishiguro2017d,
  author =    {Hiroshi Ishiguro},
  title =     {Uncanny Valleys: Thinking and Feeling in the Age of Synthetic Humans},
  booktitle = {USC Visions and Voices},
  year =      {2017},
  address =   {Doheny Memorial Library, USA},
  month =     mar,
  abstract =  {A discussion with leading robotics experts, including Hiroshi Ishiguro, Yoshio Matsumoto, Travis Deyle, and Jonathan Gratch of the USC Institute for Creative Technologies, and science historian Jessica Riskin (The Restless Clock) about the future of artificial life and new pathways for human-machine interactions. You'll also have a chance to explore an interactive showcase that reveals how roboticists are replicating human locomotion, facial expressions, and intelligence as they assemble walking, talking, thinking, and feeling machines.},
  day =       {23},
  url =       {https://calendar.usc.edu/event/uncanny_valleys_thinking_and_feeling_in_the_age_of_synthetic_humans#.WNDWQz96pGZ}
}
Hiroshi Ishiguro, "Studies on humanlike robots", In IVA seminar, IVA Konferenscenter, Sweden, January, 2017.
Abstract: Most of us are used to seeing robots portrayed in movies, either as good or bad characters, with humanlike abilities: they can hold a dialog, interact with the environment, and collaborate with humans and each other. How far are we from having these rather advanced systems among us, helping us with daily activities in our homes and at our jobs?
BibTeX:
@InProceedings{Ishiguro2017,
  author =    {Hiroshi Ishiguro},
  title =     {Studies on humanlike robots},
  booktitle = {IVA seminar},
  year =      {2017},
  address =   {IVA Konferenscenter, Sweden},
  month =     jan,
  abstract =  {Most of us are used to seeing robots portrayed in movies, either as good or bad characters, with humanlike abilities: they can hold a dialog, interact with the environment, and collaborate with humans and each other. How far are we from having these rather advanced systems among us, helping us with daily activities in our homes and at our jobs?},
  day =       {24},
  url =       {http://www.iva.se/en/tidigare-event/social-and-humanlike-robots/}
}
Hiroshi Ishiguro, "Humanlike robots and our future society", In ROMAEUROPA FESTIVAL 2016, Auditorium MACRO, Italy, November, 2016.
Abstract: In this talk, the speaker discusses AI (Artificial Intelligence) and humanoid robots and how they will affect society in the near future.
BibTeX:
@InProceedings{Ishiguro2016i,
  author =    {Hiroshi Ishiguro},
  title =     {Humanlike robots and our future society},
  booktitle = {ROMAEUROPA FESTIVAL 2016},
  year =      {2016},
  address =   {Auditorium MACRO, Italy},
  month =     nov,
  abstract =  {In this talk, the speaker discusses AI (Artificial Intelligence) and humanoid robots and how they will affect society in the near future.},
  day =       {24},
  url =       {http://romaeuropa.net/festival-2016/ishiguro/}
}
Hiroshi Ishiguro, "Robotics", In Microsoft Research Asia Faculty Summit 2016, Yonsei University, Korea, November, 2016.
Abstract: This session examines the future direction of robotics research. As background, AI is sparking great interest and exploration. In order to realize AI in human society, it is necessary to embody AI in physical forms. Under such circumstances, this session explores and clarifies the current direction of basic robotics research. A thorough examination of what types of research components are missing, and of how such capability development affects the directional paths of research, will be highlighted.
BibTeX:
@InProceedings{Ishiguro2016k,
  author =    {Hiroshi Ishiguro},
  title =     {Robotics},
  booktitle = {Microsoft Research Asia Faculty Summit 2016},
  year =      {2016},
  address =   {Yonsei University, Korea},
  month =     nov,
  abstract =  {This session examines the future direction of robotics research. As background, AI is sparking great interest and exploration. In order to realize AI in human society, it is necessary to embody AI in physical forms. Under such circumstances, this session explores and clarifies the current direction of basic robotics research. A thorough examination of what types of research components are missing, and of how such capability development affects the directional paths of research, will be highlighted.},
  day =       {5},
  url =       {https://www.microsoft.com/en-us/research/event/asia-faculty-summit-2016/}
}
Hiroshi Ishiguro, "What can we learn from very human-like robots & androids?", In Creative Innovation Asia Pacific 2016, Sofitel Melbourne on Collins, Australia, November, 2016.
Abstract: Interactive robots and their role as social partners for humans. Ishiguro will talk on the principles of conversation. Does a robot need voice recognition for verbal conversation? He will propose two approaches for realizing human-robot conversation without voice recognition.
BibTeX:
@InProceedings{Ishiguro2016e,
  author =    {Hiroshi Ishiguro},
  title =     {What can we learn from very human-like robots \& androids?},
  booktitle = {Creative Innovation Asia Pacific 2016},
  year =      {2016},
  address =   {Sofitel Melbourne on Collins, Australia},
  month =     nov,
  abstract =  {Interactive robots and their role as social partners for humans. Ishiguro will talk on the principles of conversation. Does a robot need voice recognition for verbal conversation? He will propose two approaches for realizing human-robot conversation without voice recognition.},
  day =       {9},
  url =       {http://www.creativeinnovationglobal.com.au/Ci2016/}
}
Hiroshi Ishiguro, "Interactive robots and our future life", In MarkeThing, Alten Teppichfabrik Berlin, Germany, September, 2016.
Abstract: In this talk, the speaker discusses AI (Artificial Intelligence) and humanoid robots and how they will affect society in the near future.
BibTeX:
@InProceedings{Ishiguro2016g,
  author =    {Hiroshi Ishiguro},
  title =     {Interactive robots and our future life},
  booktitle = {MarkeThing},
  year =      {2016},
  address =   {Alten Teppichfabrik Berlin, Germany},
  month =     sep,
  abstract =  {In this talk, the speaker discusses AI (Artificial Intelligence) and humanoid robots and how they will affect society in the near future.},
  day =       {28},
  url =       {http://www.markething.de/}
}
Hiroshi Ishiguro, "Studies on Humanoids and Androids", In CEDI 2016, University of Salamanca, Spain, September, 2016.
Abstract: Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. The operator recognizes the android body as his/her own after talking with someone through the geminoid, and has a virtual feeling of being touched when someone touches the geminoid. However, the geminoid is not the ideal medium for everybody. For example, elderly people hesitate to talk with adult humans and adult androids. The question is: what is the ideal medium for everybody? To investigate this, we are proposing the minimum design of interactive humanoids, called Telenoid. The geminoid, a perfect copy of an existing person, is the maximum design of interactive humanoids; the minimum design, on the other hand, looks like a human, but we cannot tell its age or gender. Elderly people like to talk with the telenoid. In this talk, we discuss the design principles and their effect on conversation.
BibTeX:
@InProceedings{Ishiguro2016h,
  author =    {Hiroshi Ishiguro},
  title =     {Studies on Humanoids and Androids},
  booktitle = {CEDI 2016},
  year =      {2016},
  address =   {University of Salamanca, Spain},
  month =     sep,
  abstract =  {Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. The operator recognizes the android body as his/her own after talking with someone through the geminoid, and has a virtual feeling of being touched when someone touches the geminoid. However, the geminoid is not the ideal medium for everybody. For example, elderly people hesitate to talk with adult humans and adult androids. The question is: what is the ideal medium for everybody? To investigate this, we are proposing the minimum design of interactive humanoids, called Telenoid. The geminoid, a perfect copy of an existing person, is the maximum design of interactive humanoids; the minimum design, on the other hand, looks like a human, but we cannot tell its age or gender. Elderly people like to talk with the telenoid. In this talk, we discuss the design principles and their effect on conversation.},
  day =       {13},
  url =       {http://www.congresocedi.es/en/ponentes-invitados}
}
Hiroshi Ishiguro, "Communication Robots", In International Symposium of "Empathetic systems", "ICP2016" and "JNS2016/Elsevier". Brain and Social Mind: The Origin of Empathy and Morality, PACIFICO Yokohama, Yokohama, July, 2016.
Abstract: Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. The operator recognizes the android body as his/her own after talking with someone through the geminoid, and has a virtual feeling of being touched when someone touches the geminoid. However, the geminoid is not the ideal medium for everybody. For example, elderly people hesitate to talk with adult humans and adult androids. The question is: what is the ideal medium for everybody? To investigate this, we are proposing the minimum design of interactive humanoids, called Telenoid. The geminoid, a perfect copy of an existing person, is the maximum design of interactive humanoids; the minimum design, on the other hand, looks like a human, but we cannot tell its age or gender. Elderly people like to talk with the telenoid. In this talk, we discuss the design principles and their effect on conversation.
BibTeX:
@InProceedings{Ishiguro2016f,
  author =    {Hiroshi Ishiguro},
  title =     {Communication Robots},
  booktitle = {International Symposium of "Empathetic systems", "ICP2016" and "JNS2016/Elsevier". Brain and Social Mind: The Origin of Empathy and Morality},
  year =      {2016},
  address =   {PACIFICO Yokohama, Yokohama},
  month =     jul,
  abstract =  {Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. The operator recognizes the android body as his/her own after talking with someone through the geminoid, and has a virtual feeling of being touched when someone touches the geminoid. However, the geminoid is not the ideal medium for everybody. For example, elderly people hesitate to talk with adult humans and adult androids. The question is: what is the ideal medium for everybody? To investigate this, we are proposing the minimum design of interactive humanoids, called Telenoid. The geminoid, a perfect copy of an existing person, is the maximum design of interactive humanoids; the minimum design, on the other hand, looks like a human, but we cannot tell its age or gender. Elderly people like to talk with the telenoid. In this talk, we discuss the design principles and their effect on conversation.},
  day =       {23},
  url =       {http://darwin.c.u-tokyo.ac.jp/empathysymposium2016/ja/}
}
Hiroshi Ishiguro, "Adaptation to Teleoperate Robots", In The 31st International Congress of Psychology, PACIFICO Yokohama, Yokohama, July, 2016.
Abstract: We humans have an innate brain function to recognize humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interactions. In the near future, the use of humanlike robots will increase. To realize a robot society, the speaker has developed various types of interactive robots and androids. Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. However, the geminoid is not the ideal medium for everybody. People enjoy talking to Telenoids. In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.
BibTeX:
@InProceedings{Ishiguro2016d,
  author =    {Hiroshi Ishiguro},
  title =     {Adaptation to Teleoperate Robots},
  booktitle = {The 31st International Congress of Psychology},
  year =      {2016},
  address =   {PACIFICO Yokohama, Yokohama},
  month =     jul,
  abstract =  {We humans have an innate brain function to recognize humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interactions. In the near future, the use of humanlike robots will increase. To realize a robot society, the speaker has developed various types of interactive robots and androids. Geminoid, a tele-operated android of an existing person, can transmit the presence of the operator to a distant place. However, the geminoid is not the ideal medium for everybody. People enjoy talking to Telenoids. In this talk, the speaker discusses the design principles for the robots and their effects on conversations with humans.},
  day =       {24},
  url =       {http://www.icp2016.jp/index.html}
}
Hiroshi Ishiguro, "The Power of Presence", In The Power of Presence: Preconference of International Communication Association 2016 in Japan, Kyoto Research Park, Kyoto, June, 2016.
Abstract: A keynote address from renowned Professor Hiroshi Ishiguro of Osaka University, creator of amazing humanoid robots and co-author of "Human-Robot Interaction in Social Robotics" (2012, CRC Press).
BibTeX:
@InProceedings{Ishiguro2016c,
  author =    {Hiroshi Ishiguro},
  title =     {The Power of Presence},
  booktitle = {The Power of Presence: Preconference of International Communication Association 2016 in Japan},
  year =      {2016},
  address =   {Kyoto Research Park, Kyoto},
  month =     jun,
  abstract =  {A keynote address from renowned Professor Hiroshi Ishiguro of Osaka University, creator of amazing humanoid robots and co-author of "Human-Robot Interaction in Social Robotics" (2012, CRC Press).},
  day =       {8},
  url =       {https://ispr.info/presence-conferences/the-power-of-presence-preconference-of-international-communication-association-2016-in-japan/}
}
Hiroshi Ishiguro, "Humanoids: Future Robots for Service", In RoboBusiness Europe 2016, Odense Congress Center, Denmark, June, 2016.
Abstract: Interactive robots and their role as social partners for humans. Ishiguro will talk on the principles of conversation. Does a robot need voice recognition for verbal conversation? He will propose two approaches for realizing human-robot conversation without voice recognition.
BibTeX:
@InProceedings{Ishiguro2016,
  author =    {Hiroshi Ishiguro},
  title =     {Humanoids: Future Robots for Service},
  booktitle = {RoboBusiness Europe 2016},
  year =      {2016},
  address =   {Odense Congress Center, Denmark},
  month =     Jun,
  abstract =  {Interactive robots and their role as social partners for humans. Ishiguro will talk about the principles of conversation: does a robot need voice recognition for verbal conversation? He will propose two approaches for realizing human-robot conversation without voice recognition.},
  day =       {2},
  url =       {http://www.robobusiness.eu/rb/}
}
Hiroshi Ishiguro, "AI (Artificial Intelligence) & Humanoid Robot", In Seoul Forum 2016, Seoul Shilla Hotel, Korea, May, 2016.
Abstract: In this talk, the speaker discusses AI (Artificial Intelligence) and humanoid robots, and how they will affect society in the near future.
BibTeX:
@InProceedings{Ishiguro2016b,
  author =    {Hiroshi Ishiguro},
  title =     {AI (Artificial Intelligence) \& Humanoid Robot},
  booktitle = {Seoul Forum 2016},
  year =      {2016},
  address =   {Seoul Shilla Hotel, Korea},
  month =     May,
  abstract =  {In this talk, the speaker discusses AI (Artificial Intelligence) and humanoid robots, and how they will affect society in the near future.},
  day =       {12},
  url =       {http://www.seoulforum.kr/eng/}
}
Shuichi Nishio, "Portable android robot "Telenoid" for aged citizens: overview and results in Japan and Denmark", In 2016 MOST&JST Workshop on ICT for Accessibility and Support of Older People, Tainan, Taiwan, April, 2016.
BibTeX:
@InProceedings{Nishio2016,
  author =    {Shuichi Nishio},
  title =     {Portable android robot "Telenoid" for aged citizens: overview and results in Japan and Denmark},
  booktitle = {2016 MOST\&JST Workshop on ICT for Accessibility and Support of Older People},
  year =      {2016},
  address =   {Tainan, Taiwan},
  month =     Apr,
  day =       {11},
}
Hiroshi Ishiguro, "Androids and Future Life", In South by Southwest 2016 Music, Film and Interactive Festivals(SXSW), Austin Convention Center, USA, March, 2016.
Abstract: We humans have an innate brain function to recognize humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interaction. In the near future, the use of humanlike robots will increase. To realize a robot society, the speaker has developed various types of interactive robots and androids. Geminoid, a tele-operated android of an existing person, can transmit the presence of its operator to a distant place. However, the geminoid is not the ideal medium for everybody. People enjoy talking to Telenoids. In this talk, the speaker discusses the design principles for these robots and their effects on conversations with humans.
BibTeX:
@InProceedings{Ishiguro2016a,
  author =    {Hiroshi Ishiguro},
  title =     {Androids and Future Life},
  booktitle = {South by Southwest 2016 Music, Film and Interactive Festivals(SXSW)},
  year =      {2016},
  address =   {Austin Convention Center, USA},
  month =     Mar,
  abstract =  {We humans have an innate brain function to recognize humans. Therefore, very humanlike robots, androids, can be ideal information media for human-robot/computer interaction. In the near future, the use of humanlike robots will increase. To realize a robot society, the speaker has developed various types of interactive robots and androids. Geminoid, a tele-operated android of an existing person, can transmit the presence of its operator to a distant place. However, the geminoid is not the ideal medium for everybody. People enjoy talking to Telenoids. In this talk, the speaker discusses the design principles for these robots and their effects on conversations with humans.},
  day =       {13},
  url =       {http://schedule.sxsw.com/2016/events/event_PP50105}
}
Dylan F. Glas, "ERICA: The ERATO Intelligent Conversational Android", In Symposium on Human-Robot Interaction, Stanford University, USA, November, 2015.
Abstract: The ERATO Ishiguro Symbiotic Human-Robot Interaction project is developing new android technologies with the eventual goal of passing the Total Turing Test. To pursue the goals of this project, we have developed a new android, Erica. I will introduce Erica's capabilities and design philosophy, and I will present some of the key objectives that we will address in the ERATO project.
BibTeX:
@InProceedings{Glas2015,
  author =    {Dylan F. Glas},
  title =     {ERICA: The ERATO Intelligent Conversational Android},
  booktitle = {Symposium on Human-Robot Interaction},
  year =      {2015},
  address =   {Stanford University, USA},
  month =     Nov,
  abstract =  {The ERATO Ishiguro Symbiotic Human-Robot Interaction project is developing new android technologies with the eventual goal of passing the Total Turing Test. To pursue the goals of this project, we have developed a new android, Erica. I will introduce Erica's capabilities and design philosophy, and I will present some of the key objectives that we will address in the ERATO project.},
  file =      {Glas2015.pdf:pdf/Glas2015.pdf:PDF},
}
Ryuji Yamazaki, "The 'Telenoid' Robot: Its Unique Existence", In Care and Solutions Osaka Forum: Care and Technology, Osaka, October, 2015.
BibTeX:
@InProceedings{山崎竜二2015,
  author =    {Ryuji Yamazaki},
  title =     {The `Telenoid' Robot: Its Unique Existence},
  booktitle = {Care and Solutions Osaka Forum: Care and Technology},
  year =      {2015},
  address =   {Osaka},
  month =     Oct,
  file =      {山崎竜二2015.pdf:pdf/山崎竜二2015.pdf:PDF},
}
Hiroshi Ishiguro, "Minimum design of interactive robots", In International Symposium on Pedagogical Machines (CREST International Symposium "In Search of Pedagogical Machines"), Tokyo, March, 2015.
BibTeX:
@InProceedings{Ishiguro2015,
  Title                    = {Minimum design of interactive robots},
  Author                   = {Hiroshi Ishiguro},
  Booktitle                = {International Symposium on Pedagogical Machines (CREST International Symposium ``In Search of Pedagogical Machines'')},
  Year                     = {2015},

  Address                  = {Tokyo},
  Month                    = Mar,

  Category                 = {招待講演},
  File                     = {Ishiguro2015a.pdf:pdf/Ishiguro2015a.pdf:PDF},
  Grant                    = {CREST},
  Language                 = {en}
}
Shuichi Nishio, "Teleoperated android robots - Fundamentals, applications and future", In China International Advanced Manufacturing Conference 2014, Mianyang, China, October, 2014.
Abstract: I will introduce our various experiences with teleoperated android robots: how they are manufactured, scientific findings, applications to real-world issues, and how they will be used in our society in the future.
BibTeX:
@InProceedings{Nishio2014a,
  Title                    = {Teleoperated android robots - Fundamentals, applications and future},
  Author                   = {Shuichi Nishio},
  Booktitle                = {China International Advanced Manufacturing Conference 2014},
  Year                     = {2014},

  Address                  = {Mianyang, China},
  Month                    = Oct,

  Abstract                 = {I will introduce our various experiences with teleoperated android robots: how they are manufactured, scientific findings, applications to real-world issues, and how they will be used in our society in the future.},
  Category                 = {招待講演},
  Grant                    = {CREST},
  Language                 = {en}
}
Hiroshi Ishiguro, "Android Philosophy", In Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014, IOS Press, vol. 273, Aarhus, Denmark, pp. 3, August, 2014.
BibTeX:
@InProceedings{Ishiguro2014b,
  Title                    = {Android Philosophy},
  Author                   = {Hiroshi Ishiguro},
  Booktitle                = {Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014},
  Year                     = {2014},

  Address                  = {Aarhus, Denmark},
  Editor                   = {Johanna Seibt and Raul Hakli and Marco N{\o}rskov},
  Month                    = Aug,
  Pages                    = {3},
  Publisher                = {IOS Press},
  Volume                   = {273},

  Category                 = {招待講演},
  Doi                      = {10.3233/978-1-61499-480-0-3},
  Grant                    = {CREST},
  Language                 = {en},
  Reviewed                 = {y},
  Url                      = {http://ebooks.iospress.nl/volumearticle/38527}
}
Hiroshi Ishiguro, "Telenoid: A Teleoperated Android with a Minimalistic Human Design", In Robo Business Europe, Billund, Denmark, May, 2014.
BibTeX:
@InProceedings{Ishiguro2014a,
  author =    {Hiroshi Ishiguro},
  title =     {Telenoid: A Teleoperated Android with a Minimalistic Human Design},
  booktitle = {Robo Business Europe},
  year =      {2014},
  address =   {Billund, Denmark},
  month =     May,
  day =       {26-28},
}
Hiroshi Ishiguro, "The Future Life Supported by Robotic Avatars", In The Global Mobile Internet Conference Beijing, Beijing, China, May, 2014.
BibTeX:
@InProceedings{Ishiguro2014,
  Title                    = {The Future Life Supported by Robotic Avatars},
  Author                   = {Hiroshi Ishiguro},
  Booktitle                = {The Global Mobile Internet Conference Beijing},
  Year                     = {2014},

  Address                  = {Beijing, China},
  Month                    = May,

  Category                 = {招待講演},
  Day                      = {5-6},
  File                     = {ishiguro2014a.pdf:pdf/ishiguro2014a.pdf:PDF},
  Grant                    = {CREST},
  Language                 = {en}
}
Shuichi Nishio, "The Impact of the Care-Robot 'Telenoid' on Elderly Persons in Japan", In International Conference: Going Beyond the Laboratory - Ethical and Societal Challenges for Robotics, Delmenhorst, Germany, February, 2014.
BibTeX:
@InProceedings{Nishio2014,
  author =    {Shuichi Nishio},
  title =     {The Impact of the Care-Robot `Telenoid' on Elderly Persons in Japan},
  booktitle = {International Conference: Going Beyond the Laboratory - Ethical and Societal Challenges for Robotics},
  year =      {2014},
  address =   {Delmenhorst, Germany},
  month =     Feb,
  day =       {13-15},
}
Ryuji Yamazaki, "Teleoperated Android in Elderly Care", In Patient@home seminar, Denmark, February, 2014.
Abstract: We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. Through pilot studies in Japan and Denmark, we investigate how Telenoid, a teleoperated android designed as a minimalistic human, affects people in the real world. As populations age, the isolation of senior citizens is one of the leading issues in healthcare promotion. To address this isolation, which can result in geriatric syndromes, and to improve seniors' well-being by enhancing social connectedness, we propose employing Telenoid to facilitate their communication with others. By introducing Telenoid into care facilities and seniors' homes, we found various influences on the elderly with or without dementia. Most senior participants had positive impressions of Telenoid from the very beginning, even though, ironically, their caretakers had negative ones. The elderly with dementia in particular showed strong attachment to Telenoid and created its identity imaginatively and interactively. In a long-term study, we also found that the elderly with dementia increasingly showed prosocial behaviors toward Telenoid, which encouraged them to be more communicative and open. With a focus on elderly care, this presentation will introduce our field trials and discuss the potential of interactions between the android robot and human users for further research.
BibTeX:
@InProceedings{Yamazaki2014b,
  author =    {Ryuji Yamazaki},
  title =     {Teleoperated Android in Elderly Care},
  booktitle = {Patient@home seminar},
  year =      {2014},
  address =   {Denmark},
  month =     Feb,
  abstract =  {We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. Through pilot studies in Japan and Denmark, we investigate how Telenoid, a teleoperated android designed as a minimalistic human, affects people in the real world. As populations age, the isolation of senior citizens is one of the leading issues in healthcare promotion. To address this isolation, which can result in geriatric syndromes, and to improve seniors' well-being by enhancing social connectedness, we propose employing Telenoid to facilitate their communication with others. By introducing Telenoid into care facilities and seniors' homes, we found various influences on the elderly with or without dementia. Most senior participants had positive impressions of Telenoid from the very beginning, even though, ironically, their caretakers had negative ones. The elderly with dementia in particular showed strong attachment to Telenoid and created its identity imaginatively and interactively. In a long-term study, we also found that the elderly with dementia increasingly showed prosocial behaviors toward Telenoid, which encouraged them to be more communicative and open. With a focus on elderly care, this presentation will introduce our field trials and discuss the potential of interactions between the android robot and human users for further research.},
  day =       {5},
}
Hiroshi Ishiguro, "Studies on very humanlike robots", In International Conference on Instrumentation, Control, Information Technology and System Integration, Aichi, September, 2013.
Abstract: Studies on interactive robots and androids are not confined to robotics; they are also closely coupled with cognitive science and neuroscience. This is a research area for investigating fundamental issues of interface and media technology. This talk introduces the series of androids developed at both Osaka University and ATR and proposes a new information medium based on these studies.
BibTeX:
@InProceedings{Ishiguro2013a,
  Title                    = {Studies on very humanlike robots},
  Author                   = {Hiroshi Ishiguro},
  Booktitle                = {International Conference on Instrumentation, Control, Information Technology and System Integration},
  Year                     = {2013},

  Address                  = {Aichi},
  Month                    = Sep,

  Abstract                 = {Studies on interactive robots and androids are not confined to robotics; they are also closely coupled with cognitive science and neuroscience. This is a research area for investigating fundamental issues of interface and media technology. This talk introduces the series of androids developed at both Osaka University and ATR and proposes a new information medium based on these studies.},
  Category                 = {招待講演},
  Day                      = {14},
  Grant                    = {CREST},
  Language                 = {en}
}
Hiroshi Ishiguro, "The Future Life Supported by Robotic Avatars", In Global Future 2045 International Congress, NY, USA, June, 2013.
Abstract: Robotic avatars, or tele-operated robots, are already available and working in practical situations, especially in the USA. The robot society has started. In our future life we are going to use various tele-operated and autonomous robots. The speaker has taken the leadership in developing tele-operated robots and androids; the tele-operated android copy of himself is well known around the world. By means of robots and androids, he has studied the cognitive and social aspects of human-robot interaction, and has thus contributed to establishing this research area. In this talk, he will introduce the series of robots and androids developed at the Intelligent Robot Laboratory of the Department of Systems Innovation of Osaka University and at the Hiroshi Ishiguro Laboratory of the Advanced Telecommunications Research Institute International (ATR).
BibTeX:
@InProceedings{Ishiguro2013,
  author =    {Hiroshi Ishiguro},
  title =     {The Future Life Supported by Robotic Avatars},
  booktitle = {Global Future 2045 International Congress},
  year =      {2013},
  address =   {NY, USA},
  month =     Jun,
  abstract =  {Robotic avatars, or tele-operated robots, are already available and working in practical situations, especially in the USA. The robot society has started. In our future life we are going to use various tele-operated and autonomous robots. The speaker has taken the leadership in developing tele-operated robots and androids; the tele-operated android copy of himself is well known around the world. By means of robots and androids, he has studied the cognitive and social aspects of human-robot interaction, and has thus contributed to establishing this research area. In this talk, he will introduce the series of robots and androids developed at the Intelligent Robot Laboratory of the Department of Systems Innovation of Osaka University and at the Hiroshi Ishiguro Laboratory of the Advanced Telecommunications Research Institute International (ATR).},
}
Mari Velonaki, David C. Rye, Steve Scheding, Karl F. MacDorman, Stephen J. Cowley, Hiroshi Ishiguro, Shuichi Nishio, "Panel Discussion: Engagement, Trust and Intimacy: Are these the Essential Elements for a Successful Interaction between a Human and a Robot?", In AAAI Spring Symposium on Emotion, Personality, and Social Behavior, California, USA, pp. 141-147, March, 2008. (2008.3.26)
BibTeX:
@InProceedings{Nishio2008b,
  Title                    = {Panel Discussion: Engagement, Trust and Intimacy: Are these the Essential Elements for a Successful Interaction between a Human and a Robot?},
  Author                   = {Mari Velonaki and David C. Rye and Steve Scheding and Karl F. MacDorman and Stephen J. Cowley and Hiroshi Ishiguro and Shuichi Nishio},
  Booktitle                = {{AAAI} Spring Symposium on Emotion, Personality, and Social Behavior},
  Year                     = {2008},

  Address                  = {California, USA},
  Month                    = Mar,
  Note                     = {2008.3.26},
  Pages                    = {141-147},

  Category                 = {招待講演},
  File                     = {Rye_Panel.pdf:http\://psychometrixassociates.com/Rye_Panel.pdf:PDF},
  Grant                    = {ATR},
  Url                      = {http://www.aaai.org/Library/Symposia/Spring/2008/ss08-04-022.php}
}
Journal Papers
Soheil Keshmiri, Hidenobu Sumioka, Ryuji Yamazaki, Hiroshi Ishiguro, "A Non-parametric Approach to the Overall Estimate of Cognitive Load Using NIRS Time Series", Frontiers in Human Neuroscience, vol. 11, no. 15, pp. 1-14, February, 2017.
Abstract: We present a non-parametric approach to prediction of the n-back (n ∈ {1, 2}) task as a proxy measure of mental workload using Near Infrared Spectroscopy (NIRS) data. In particular, we focus on measuring the mental workload through hemodynamic responses in the brain induced by these tasks, thereby realizing the potential that they can offer for their detection in real world scenarios (e.g., difficulty of a conversation). Our approach takes advantage of intrinsic linearity that is inherent in the components of the NIRS time series to adopt a one-step regression strategy. We demonstrate the correctness of our approach through its mathematical analysis. Furthermore, we study the performance of our model in an inter-subject setting in contrast with state-of-the-art techniques in the literature to show a significant improvement on prediction of these tasks (82.50 and 86.40% for female and male participants, respectively). Moreover, our empirical analysis suggests a gender-difference effect on the performance of the classifiers (with male data exhibiting higher non-linearity), along with left-lateralized activation in both genders with higher specificity in females.
BibTeX:
@Article{Keshmiri2017b,
  author =   {Soheil Keshmiri and Hidenobu Sumioka and Ryuji Yamazaki and Hiroshi Ishiguro},
  title =    {A Non-parametric Approach to the Overall Estimate of Cognitive Load Using NIRS Time Series},
  journal =  {Frontiers in Human Neuroscience},
  year =     {2017},
  volume =   {11},
  number =   {15},
  pages =    {1-14},
  month =    Feb,
  abstract = {We present a non-parametric approach to prediction of the n-back (n ∈ {1, 2}) task as a proxy measure of mental workload using Near Infrared Spectroscopy (NIRS) data. In particular, we focus on measuring the mental workload through hemodynamic responses in the brain induced by these tasks, thereby realizing the potential that they can offer for their detection in real world scenarios (e.g., difficulty of a conversation). Our approach takes advantage of intrinsic linearity that is inherent in the components of the NIRS time series to adopt a one-step regression strategy. We demonstrate the correctness of our approach through its mathematical analysis. Furthermore, we study the performance of our model in an inter-subject setting in contrast with state-of-the-art techniques in the literature to show a significant improvement on prediction of these tasks (82.50 and 86.40% for female and male participants, respectively). Moreover, our empirical analysis suggests a gender-difference effect on the performance of the classifiers (with male data exhibiting higher non-linearity), along with left-lateralized activation in both genders with higher specificity in females.},
  doi =      {10.3389/fnhum.2017.00015},
  file =     {Keshmiri2017b.pdf:pdf/Keshmiri2017b.pdf:PDF},
  url =      {http://journal.frontiersin.org/article/10.3389/fnhum.2017.00015/full}
}
Phoebe Liu, Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita, "A Model for Generating Socially-Appropriate Deictic Behaviors Towards People", International Journal of Social Robotics, vol. 9, no. 1, pp. 33-49, January, 2017.
Abstract: Pointing behaviors are essential in enabling social robots to communicate about a particular object, person, or space. Yet, pointing to a person can be considered rude in many cultures, and as robots collaborate with humans in increasingly diverse environments, they will need to effectively refer to people in a socially-appropriate way. We confirmed in an empirical study that although people would point precisely to an object to indicate where it is, they were reluctant to do so when pointing to another person. We propose a model for selecting utterance and pointing behaviors towards people in terms of a balance between understandability and social appropriateness. Calibrating our proposed model based on empirical human behavior, we developed a system able to autonomously select among six deictic behaviors and execute them on a humanoid robot. We evaluated the system in an experiment in a shopping mall, and the results show that the robot's deictic behavior was perceived by both the listener and the referent as more polite, more natural, and better overall when using our model, as compared with a model considering understandability alone.
BibTeX:
@Article{Liu2017a,
  author =   {Phoebe Liu and Dylan F. Glas and Takayuki Kanda and Hiroshi Ishiguro and Norihiro Hagita},
  title =    {A Model for Generating Socially-Appropriate Deictic Behaviors Towards People},
  journal =  {International Journal of Social Robotics},
  year =     {2017},
  volume =   {9},
  number =   {1},
  pages =    {33-49},
  month =    Jan,
  abstract = {Pointing behaviors are essential in enabling social robots to communicate about a particular object, person, or space. Yet, pointing to a person can be considered rude in many cultures, and as robots collaborate with humans in increasingly diverse environments, they will need to effectively refer to people in a socially-appropriate way. We confirmed in an empirical study that although people would point precisely to an object to indicate where it is, they were reluctant to do so when pointing to another person. We propose a model for selecting utterance and pointing behaviors towards people in terms of a balance between understandability and social appropriateness. Calibrating our proposed model based on empirical human behavior, we developed a system able to autonomously select among six deictic behaviors and execute them on a humanoid robot. We evaluated the system in an experiment in a shopping mall, and the results show that the robot's deictic behavior was perceived by both the listener and the referent as more polite, more natural, and better overall when using our model, as compared with a model considering understandability alone.},
  doi =      {10.1007/s12369-016-0348-9},
  file =     {Liu2017a.pdf:pdf/Liu2017a.pdf:PDF},
  url =      {http://link.springer.com/article/10.1007%2Fs12369-016-0348-9}
}
Jani Even, Jonas Furrer, Yoichi Morales, Carlos T. Ishi, Norihiro Hagita, "Probabilistic 3D Mapping of Sound-Emitting Structures Based on Acoustic Ray Casting", IEEE Transactions on Robotics (T-RO), vol. 33, no. 2, pp. 333-345, December, 2016.
Abstract: This paper presents a two-step framework for creating a 3D sound map with a mobile robot. The first step creates a geometric map that describes the environment. The second step adds the acoustic information to the geometric map. The resulting sound map shows the probability of emitting sound for all the structures in the environment. This paper focuses on the second step. The method uses acoustic ray casting for accumulating in a probabilistic manner the acoustic information gathered by a mobile robot equipped with a microphone array. First, the method transforms the acoustic power received from a set of directions into likelihoods of sound presence in these directions. Then, using an estimate of the robot's pose, the acoustic ray casting procedure transfers these likelihoods to the structures in the geometric map. Finally, the probability of a structure emitting sound is modified to take into account the new likelihoods. Experimental results show that the sound maps are both accurate, as it was possible to localize sound sources in 3D with an average error of 0.1 meters, and practical, as different types of environments were mapped.
BibTeX:
@Article{Even2016a,
  author =   {Jani Even and Jonas Furrer and Yoichi Morales and Carlos T. Ishi and Norihiro Hagita},
  title =    {Probabilistic 3D Mapping of Sound-Emitting Structures Based on Acoustic Ray Casting},
  journal =  {IEEE Transactions on Robotics (T-RO)},
  year =     {2016},
  volume =   {33},
  number =   {2},
  pages =    {333-345},
  month =    Dec,
  abstract = {This paper presents a two-step framework for creating a 3D sound map with a mobile robot. The first step creates a geometric map that describes the environment. The second step adds the acoustic information to the geometric map. The resulting sound map shows the probability of emitting sound for all the structures in the environment. This paper focuses on the second step. The method uses acoustic ray casting for accumulating in a probabilistic manner the acoustic information gathered by a mobile robot equipped with a microphone array. First, the method transforms the acoustic power received from a set of directions into likelihoods of sound presence in these directions. Then, using an estimate of the robot's pose, the acoustic ray casting procedure transfers these likelihoods to the structures in the geometric map. Finally, the probability of a structure emitting sound is modified to take into account the new likelihoods. Experimental results show that the sound maps are both accurate, as it was possible to localize sound sources in 3D with an average error of 0.1 meters, and practical, as different types of environments were mapped.},
  doi =      {10.1109/TRO.2016.2630053},
  file =     {Even2016a.pdf:pdf/Even2016a.pdf:PDF},
  url =      {http://ieeexplore.ieee.org/document/7790815/}
}
Jakub Zlotowski, Hidenobu Sumioka, Shuichi Nishio, Dylan F. Glas, Christoph Bartneck, Hiroshi Ishiguro, "Appearance of a Robot Affects the Impact of its Behaviour on Perceived Trustworthiness and Empathy", Paladyn, Journal of Behavioral Robotics, vol. 7, no. 1, pp. 55-66, December, 2016.
Abstract: An increasing number of companion robots have started reaching the public in recent years. These robots vary in their appearance and behavior. Since these two factors can have an impact on lasting human-robot relationships, it is important to understand their effects for companion robots. We conducted an experiment that evaluated the impact of a robot's appearance and its behaviour in repeated interactions on its perceived empathy, trustworthiness, and the anxiety experienced by a human. The results indicate that a highly humanlike robot is perceived as less trustworthy and empathic than a more machinelike robot. Moreover, negative behaviour of a machinelike robot reduces its trustworthiness and perceived empathy more strongly than for a highly humanlike robot. In addition, we found that a robot which disapproves of what a human says can induce anxiety about its communication capabilities. Our findings suggest that more machinelike robots can be more suitable as companions than highly humanlike robots. Moreover, a robot disagreeing with a human interaction partner should be able to provide feedback on its understanding of the partner's message in order to reduce the partner's anxiety.
BibTeX:
@Article{Zlotowski2016a,
  author =   {Jakub Zlotowski and Hidenobu Sumioka and Shuichi Nishio and Dylan F. Glas and Christoph Bartneck and Hiroshi Ishiguro},
  title =    {Appearance of a Robot Affects the Impact of its Behaviour on Perceived Trustworthiness and Empathy},
  journal =  {Paladyn, Journal of Behavioral Robotics},
  year =     {2016},
  volume =   {7},
  number =   {1},
  pages =    {55-66},
  month =    Dec,
  abstract = {An increasing number of companion robots have started reaching the public in recent years. These robots vary in their appearance and behavior. Since these two factors can have an impact on lasting human-robot relationships, it is important to understand their effects for companion robots. We conducted an experiment that evaluated the impact of a robot's appearance and its behaviour in repeated interactions on its perceived empathy, trustworthiness, and the anxiety experienced by a human. The results indicate that a highly humanlike robot is perceived as less trustworthy and empathic than a more machinelike robot. Moreover, negative behaviour of a machinelike robot reduces its trustworthiness and perceived empathy more strongly than for a highly humanlike robot. In addition, we found that a robot which disapproves of what a human says can induce anxiety about its communication capabilities. Our findings suggest that more machinelike robots can be more suitable as companions than highly humanlike robots. Moreover, a robot disagreeing with a human interaction partner should be able to provide feedback on its understanding of the partner's message in order to reduce the partner's anxiety.},
  file =     {Zlotowski2016a.pdf:pdf/Zlotowski2016a.pdf:PDF},
  url =      {https://www.degruyter.com/view/j/pjbr.2016.7.issue-1/pjbr-2016-0005/pjbr-2016-0005.xml}
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "The Importance of Visual Feedback Design in BCIs; from Embodiment to Motor Imagery Learning", PLOS ONE, pp. 1-17, September, 2016.
Abstract: Brain computer interfaces (BCIs) have been developed and implemented in many areas as a new communication channel between the human brain and external devices. Despite their rapid growth and broad popularity, inaccurate performance and the cost of user training are still the main issues that prevent their application outside research and clinical environments. We previously introduced a BCI system for the control of a very humanlike android that could raise a sense of embodiment and agency in operators merely by their imagining a movement (motor imagery) and watching the robot perform it. Using the same setup, we further discovered that positively biasing subjects' performance both increased their sensation of embodiment and improved their motor imagery skills in a short period. In this work, we studied the shared mechanism between the experience of embodiment and motor imagery. We compared the trend of motor imagery learning when two groups of subjects BCI-operated different-looking robots: a very humanlike android's hands and a pair of metallic grippers. Although our experiments did not show a significant difference in learning between the two groups during one session, the android group revealed better motor imagery skills in the follow-up session, when both groups repeated the task using the non-humanlike grippers. This result shows that motor imagery skills learnt during the BCI operation of humanlike hands are more robust to time and visual feedback changes. We discuss the role of embodiment and the mirror neuron system in this outcome and propose the application of androids for efficient BCI training.
BibTeX:
@Article{Alimardani2016a,
  author =          {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {The Importance of Visual Feedback Design in BCIs; from Embodiment to Motor Imagery Learning},
  journal =         {PLOS ONE},
  year =            {2016},
  pages =           {1-17},
  month =           Sep,
  abstract =        {Brain computer interfaces (BCIs) have been developed and implemented in many areas as a new communication channel between the human brain and external devices. Despite their rapid growth and broad popularity, the inaccurate performance and cost of user-training are yet the main issues that prevent their application out of the research and clinical environment. We previously introduced a BCI system for the control of a very humanlike android that could raise a sense of embodiment and agency in the operators only by imagining a movement (motor imagery) and watching the robot perform it. Also using the same setup, we further discovered that the positive bias of subjects' performance both increased their sensation of embodiment and improved their motor imagery skills in a short period. In this work, we studied the shared mechanism between the experience of embodiment and motor imagery. We compared the trend of motor imagery learning when two groups of subjects BCI-operated different looking robots, a very humanlike android's hands and a pair of metallic gripper. Although our experiments did not show a significant change of learning between the two groups immediately during one session, the android group revealed better motor imagery skills in the follow up session when both groups repeated the task using the non-humanlike gripper. This result shows that motor imagery skills learnt during the BCI-operation of humanlike hands are more robust to time and visual feedback changes. We discuss the role of embodiment and mirror neuron system in such outcome and propose the application of androids for efficient BCI training.},
  day =             {6},
  doi =             {10.1371/journal.pone.0161945},
  file =            {Alimardani2016a.pdf:pdf/Alimardani2016a.pdf:PDF},
  url =             {http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0161945}
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "Removal of proprioception by BCI raises a stronger body ownership illusion in control of a humanlike robot", Scientific Reports, vol. 6, no. 33514, September, 2016.
Abstract: Body ownership illusions provide evidence that our sense of self is not coherent and can be extended to non-body objects. Studying about these illusions gives us practical tools to understand the brain mechanisms that underlie body recognition and the experience of self. We previously introduced an illusion of body ownership transfer (BOT) for operators of a very humanlike robot. This sensation of owning the robot's body was confirmed when operators controlled the robot either by performing the desired motion with their body (motion-control) or by employing a brain-computer interface (BCI) that translated motor imagery commands to robot movement (BCI-control). The interesting observation during BCI-control was that the illusion could be induced even with a noticeable delay in the BCI system. Temporal discrepancy has always shown critical weakening effects on body ownership illusions. However the delay-robustness of BOT during BCI-control raised a question about the interaction between the proprioceptive inputs and delayed visual feedback in agency-driven illusions. In this work, we compared the intensity of BOT illusion for operators in two conditions; motion-control and BCI-control. Our results revealed a significantly stronger BOT illusion for the case of BCI-control. This finding highlights BCI's potential in inducing stronger agency-driven illusions by building a direct communication between the brain and controlled body, and therefore removing awareness from the subject's own body.
BibTeX:
@Article{Alimardani2016,
  author =          {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Removal of proprioception by BCI raises a stronger body ownership illusion in control of a humanlike robot},
  journal =         {Scientific Reports},
  year =            {2016},
  volume =          {6},
  number =          {33514},
  month =           Sep,
  abstract =        {Body ownership illusions provide evidence that our sense of self is not coherent and can be extended to non-body objects. Studying about these illusions gives us practical tools to understand the brain mechanisms that underlie body recognition and the experience of self. We previously introduced an illusion of body ownership transfer (BOT) for operators of a very humanlike robot. This sensation of owning the robot's body was confirmed when operators controlled the robot either by performing the desired motion with their body (motion-control) or by employing a brain-computer interface (BCI) that translated motor imagery commands to robot movement (BCI-control). The interesting observation during BCI-control was that the illusion could be induced even with a noticeable delay in the BCI system. Temporal discrepancy has always shown critical weakening effects on body ownership illusions. However the delay-robustness of BOT during BCI-control raised a question about the interaction between the proprioceptive inputs and delayed visual feedback in agency-driven illusions. In this work, we compared the intensity of BOT illusion for operators in two conditions; motion-control and BCI-control. Our results revealed a significantly stronger BOT illusion for the case of BCI-control. This finding highlights BCI's potential in inducing stronger agency-driven illusions by building a direct communication between the brain and controlled body, and therefore removing awareness from the subject's own body.},
  doi =             {10.1038/srep33514},
  file =            {Alimardani2016.pdf:pdf/Alimardani2016.pdf:PDF},
  url =             {http://www.nature.com/articles/srep33514}
}
Phoebe Liu, Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita, "Data-driven HRI: Learning social behaviors by example from human-human interaction", IEEE Transactions on Robotics, vol. 32, no. 4, pp. 988-1008, August, 2016.
Abstract: Recent studies in human-robot interaction (HRI) have investigated ways to harness the power of the crowd for the purpose of creating robot interaction logic through games and teleoperation interfaces. Sensor networks capable of observing human-human interactions in the real world provide a potentially valuable and scalable source of interaction data that can be used for designing robot behavior. To that end, we present here a fully-automated method for reproducing observed real-world social interactions with a robot. The proposed method includes techniques for characterizing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements and identifying spatial formations using established HRI proxemics models. Behavior logic is learned based on discretized actions captured from the sensor data stream, using a naive Bayesian classifier. Finally, we propose techniques for reproducing robot speech and locomotion behaviors in a robust way, despite the natural variation of human behaviors and the large amount of sensor noise present in speech recognition. We show our technique in use, training a robot to play the role of a shop clerk in a simple camera shop scenario, and we demonstrate through a comparison experiment that our techniques successfully enabled the generation of socially-appropriate speech and locomotion behavior. Notably, the performance of our technique in terms of correct behavior selection was found to be higher than the success rate of speech recognition, indicating its robustness to sensor noise.
BibTeX:
@Article{Liu2016d,
  author =   {Phoebe Liu and Dylan F. Glas and Takayuki Kanda and Hiroshi Ishiguro and Norihiro Hagita},
  title =    {Data-driven HRI: Learning social behaviors by example from human-human interaction},
  journal =  {IEEE Transactions on Robotics},
  year =     {2016},
  volume =   {32},
  number =   {4},
  pages =    {988-1008},
  month =    Aug,
  abstract = {Recent studies in human-robot interaction (HRI) have investigated ways to harness the power of the crowd for the purpose of creating robot interaction logic through games and teleoperation interfaces. Sensor networks capable of observing human-human interactions in the real world provide a potentially valuable and scalable source of interaction data that can be used for designing robot behavior. To that end, we present here a fully-automated method for reproducing observed real-world social interactions with a robot. The proposed method includes techniques for characterizing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements and identifying spatial formations using established HRI proxemics models. Behavior logic is learned based on discretized actions captured from the sensor data stream, using a naive Bayesian classifier. Finally, we propose techniques for reproducing robot speech and locomotion behaviors in a robust way, despite the natural variation of human behaviors and the large amount of sensor noise present in speech recognition. We show our technique in use, training a robot to play the role of a shop clerk in a simple camera shop scenario, and we demonstrate through a comparison experiment that our techniques successfully enabled the generation of socially-appropriate speech and locomotion behavior. Notably, the performance of our technique in terms of correct behavior selection was found to be higher than the success rate of speech recognition, indicating its robustness to sensor noise.},
  file =     {Liu2016d.pdf:pdf/Liu2016d.pdf:PDF},
  url =      {http://ieeexplore.ieee.org/document/7539621/}
}
Kaiko Kuwamura, Shuichi Nishio, Shinichi Sato, "Can We Talk through a Robot As if Face-to-Face? Long-Term Fieldwork Using Teleoperated Robot for Seniors with Alzheimer's Disease", Frontiers in Psychology, vol. 7, no. 1066, pp. 1-13, July, 2016.
Abstract: This work presents a case study on fieldwork in a group home for the elderly with dementia using a teleoperated robot called Telenoid. We compared Telenoid-mediated and face-to-face conditions with three residents with Alzheimer's disease (AD). The result indicates that two of the three residents with moderate AD showed a positive reaction to Telenoid. Both became less nervous while communicating with Telenoid from the time they were first introduced to it. Moreover, they started to use more body gestures in the face-to-face condition and more physical interactions in the Telenoid-mediated condition. In this work, we present all the results and discuss the possibilities of using Telenoid as a tool to provide opportunities for seniors to communicate over the long term.
BibTeX:
@Article{Kuwamura2016a,
  author =          {Kaiko Kuwamura and Shuichi Nishio and Shinichi Sato},
  title =           {Can We Talk through a Robot As if Face-to-Face? Long-Term Fieldwork Using Teleoperated Robot for Seniors with Alzheimer's Disease},
  journal =         {Frontiers in Psychology},
  year =            {2016},
  volume =          {7},
  number =          {1066},
  pages =           {1-13},
  month =           Jul,
  abstract =        {This work presents a case study on fieldwork in a group home for the elderly with dementia using a teleoperated robot called Telenoid. We compared Telenoid-mediated and face-to-face conditions with three residents with Alzheimer's disease (AD). The result indicates that two of the three residents with moderate AD showed a positive reaction to Telenoid. Both became less nervous while communicating with Telenoid from the time they were first introduced to it. Moreover, they started to use more body gestures in the face-to-face condition and more physical interactions in the Telenoid-mediated condition. In this work, we present all the results and discuss the possibilities of using Telenoid as a tool to provide opportunities for seniors to communicate over the long term.},
  day =             {19},
  doi =             {10.3389/fpsyg.2016.01066},
  file =            {Kuwamura2016a.pdf:pdf/Kuwamura2016a.pdf:PDF},
  keywords =        {Elderly care robot, Teleoperated robot, Alzheimer's disease, Elderly care facility, Gerontology},
  url =             {http://journal.frontiersin.org/article/10.3389/fpsyg.2016.01066}
}
Ryuji Yamazaki, Louise Christensen, Kate Skov, Chi-Chih Chang, Malene F. Damholdt, Hidenobu Sumioka, Shuichi Nishio, Hiroshi Ishiguro, "Intimacy in Phone Conversations: Anxiety Reduction for Danish Seniors with Hugvie", Frontiers in Psychology, vol. 7, no. 537, April, 2016.
Abstract: There is a lack of physical contact in current telecommunications such as text messaging and Internet access. To challenge the limitation and re-embody telecommunication, researchers have attempted to introduce tactile stimulation to media and developed huggable devices. Previous experiments in Japan showed that a huggable communication technology, i.e., Hugvie decreased stress level of its female users. In the present experiment in Denmark, we aim to investigate (i) whether Hugvie can decrease stress cross-culturally, i.e., Japanese vs. Danish participants (ii), investigate whether gender plays a role in this psychological effect (stress reduction) and (iii) if there is a preference of this type of communication technology (Hugvie vs. a regular telephone). Twenty-nine healthy elderly participated (15 female and 14 male, M = 64.52 years, SD = 5.67) in Jutland, Denmark. The participants filled out questionnaires including State-Trait Anxiety Inventory, NEO Five Factor Inventory (NEO-FFI), and Becks Depression Inventory, had a 15 min conversation via phone or Hugvie and were interviewed afterward. They spoke with an unknown person of opposite gender during the conversation; the same two conversation partners were used during the experiment and the Phone and Hugvie groups were equally balanced. There was no baseline difference between the Hugvie and Phone groups on age or anxiety or depression scores. In the Hugvie group, there was a statistically significant reduction on state anxiety after meeting Hugvie (p = 0.013). The change in state anxiety for the Hugvie group was positively correlated with openness (r = 0.532, p = 0.041) as measured by the NEO-FFI. This indicates that openness to experiences may increase the chances of having an anxiety reduction from being with Hugvie. Based on the results, we see that personality may affect the participants' engagement and benefits from Hugvie. We discuss the implications of the results and further elaborations.
BibTeX:
@Article{Yamazaki2016,
  author =   {Ryuji Yamazaki and Louise Christensen and Kate Skov and Chi-Chih Chang and Malene F. Damholdt and Hidenobu Sumioka and Shuichi Nishio and Hiroshi Ishiguro},
  title =    {Intimacy in Phone Conversations: Anxiety Reduction for Danish Seniors with Hugvie},
  journal =  {Frontiers in Psychology},
  year =     {2016},
  volume =   {7},
  number =   {537},
  month =    Apr,
  abstract = {There is a lack of physical contact in current telecommunications such as text messaging and Internet access. To challenge the limitation and re-embody telecommunication, researchers have attempted to introduce tactile stimulation to media and developed huggable devices. Previous experiments in Japan showed that a huggable communication technology, i.e., Hugvie decreased stress level of its female users. In the present experiment in Denmark, we aim to investigate (i) whether Hugvie can decrease stress cross-culturally, i.e., Japanese vs. Danish participants (ii), investigate whether gender plays a role in this psychological effect (stress reduction) and (iii) if there is a preference of this type of communication technology (Hugvie vs. a regular telephone). Twenty-nine healthy elderly participated (15 female and 14 male, M = 64.52 years, SD = 5.67) in Jutland, Denmark. The participants filled out questionnaires including State-Trait Anxiety Inventory, NEO Five Factor Inventory (NEO-FFI), and Becks Depression Inventory, had a 15 min conversation via phone or Hugvie and were interviewed afterward. They spoke with an unknown person of opposite gender during the conversation; the same two conversation partners were used during the experiment and the Phone and Hugvie groups were equally balanced. There was no baseline difference between the Hugvie and Phone groups on age or anxiety or depression scores. In the Hugvie group, there was a statistically significant reduction on state anxiety after meeting Hugvie (p = 0.013). The change in state anxiety for the Hugvie group was positively correlated with openness (r = 0.532, p = 0.041) as measured by the NEO-FFI. This indicates that openness to experiences may increase the chances of having an anxiety reduction from being with Hugvie. Based on the results, we see that personality may affect the participants' engagement and benefits from Hugvie. We discuss the implications of the results and further elaborations.},
  doi =      {10.3389/fpsyg.2016.00537},
  file =     {Yamazaki2016.pdf:pdf/Yamazaki2016.pdf:PDF},
  url =      {http://journal.frontiersin.org/researchtopic/investigating-human-nature-and-communication-through-robots-3705}
}
Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro, "Impact of Mediated Intimate Interaction on Education: A Huggable Communication Medium that Encourages Listening", Frontiers in Psychology, section Human-Media Interaction, vol. 7, no. 510, pp. 1-10, April, 2016.
Abstract: In this paper, we propose the introduction of human-like communication media as a proxy for teachers to support the listening of children in school education. Three case studies are presented on storytime fieldwork for children using our huggable communication medium called Hugvie, through which children are encouraged to concentrate on listening by intimate interaction between children and storytellers. We investigate the effect of Hugvie on children's listening and how they and their teachers react to it through observations and interviews. Our results suggest that Hugvie increased the number of children who concentrated on listening to a story and was welcomed by almost all the children and educators. We also discuss improvement and research issues to introduce huggable communication media into classrooms, potential applications, and their contributions to other education situations through improved listening.
BibTeX:
@Article{Nakanishi2016,
  author =   {Junya Nakanishi and Hidenobu Sumioka and Hiroshi Ishiguro},
  title =    {Impact of Mediated Intimate Interaction on Education: A Huggable Communication Medium that Encourages Listening},
  journal =  {Frontiers in Psychology, section Human-Media Interaction},
  year =     {2016},
  volume =   {7},
  number =   {510},
  pages =    {1-10},
  month =    Apr,
  abstract = {In this paper, we propose the introduction of human-like communication media as a proxy for teachers to support the listening of children in school education. Three case studies are presented on storytime fieldwork for children using our huggable communication medium called Hugvie, through which children are encouraged to concentrate on listening by intimate interaction between children and storytellers. We investigate the effect of Hugvie on children's listening and how they and their teachers react to it through observations and interviews. Our results suggest that Hugvie increased the number of children who concentrated on listening to a story and was welcomed by almost all the children and educators. We also discuss improvement and research issues to introduce huggable communication media into classrooms, potential applications, and their contributions to other education situations through improved listening.},
  day =      {19},
  doi =      {10.3389/fpsyg.2016.00510},
  file =     {Nakanishi2016.pdf:pdf/Nakanishi2016.pdf:PDF},
  url =      {http://journal.frontiersin.org/article/10.3389/fpsyg.2016.00510}
}
Malene F. Damholdt, Marco Nørskov, Ryuji Yamazaki, Raul Hakli, Catharina V. Hansen, Christina Vestergaard, Johanna Seibt, "Attitudinal Change in Elderly Citizens Toward Social Robots: The Role of Personality Traits and Beliefs About Robot Functionality", Frontiers in Psychology, vol. 6, no. 1701, November, 2015.
Abstract: Attitudes toward robots influence the tendency to accept or reject robotic devices. Thus it is important to investigate whether and how attitudes toward robots can change. In this pilot study we investigate attitudinal changes in elderly citizens toward a tele-operated robot in relation to three parameters: (i) the information provided about robot functionality, (ii) the number of encounters, (iii) personality type. Fourteen elderly residents at a rehabilitation center participated. Pre-encounter attitudes toward robots, anthropomorphic thinking, and personality were assessed. Thereafter the participants interacted with a tele-operated robot (Telenoid) during their lunch (c. 30 min.) for up to 3 days. Half of the participants were informed that the robot was tele-operated (IC) whilst the other half were naïve to its functioning (UC). Post-encounter assessments of attitudes toward robots and anthropomorphic thinking were undertaken to assess change. Attitudes toward robots were assessed with a new generic 35-items questionnaire (attitudes toward social robots scale: ASOR-5), offering a differentiated conceptualization of the conditions for social interaction. There was no significant difference between the IC and UC groups in attitude change toward robots though trends were observed. Personality was correlated with some tendencies for attitude changes; Extraversion correlated with positive attitude changes to intimate-personal relatedness with the robot (r = 0.619) and to psychological relatedness (r = 0.581) whilst Neuroticism correlated negatively (r = -0.582) with mental relatedness with the robot. The results tentatively suggest that neither information about functionality nor direct repeated encounters are pivotal in changing attitudes toward robots in elderly citizens. This may reflect a cognitive congruence bias where the robot is experienced in congruence with initial attitudes, or it may support action-based explanations of cognitive dissonance reductions, given that robots, unlike computers, are not yet perceived as action targets. Specific personality traits may be indicators of attitude change relating to specific domains of social interaction. Implications and future directions are discussed.
BibTeX:
@Article{Damholdt2015,
  author =   {Malene F. Damholdt and Marco Nørskov and Ryuji Yamazaki and Raul Hakli and Catharina V. Hansen and Christina Vestergaard and Johanna Seibt},
  title =    {Attitudinal Change in Elderly Citizens Toward Social Robots: The Role of Personality Traits and Beliefs About Robot Functionality},
  journal =  {Frontiers in Psychology},
  year =     {2015},
  volume =   {6},
  number =   {1701},
  month =    Nov,
  abstract = {Attitudes toward robots influence the tendency to accept or reject robotic devices. Thus it is important to investigate whether and how attitudes toward robots can change. In this pilot study we investigate attitudinal changes in elderly citizens toward a tele-operated robot in relation to three parameters: (i) the information provided about robot functionality, (ii) the number of encounters, (iii) personality type. Fourteen elderly residents at a rehabilitation center participated. Pre-encounter attitudes toward robots, anthropomorphic thinking, and personality were assessed. Thereafter the participants interacted with a tele-operated robot (Telenoid) during their lunch (c. 30 min.) for up to 3 days. Half of the participants were informed that the robot was tele-operated (IC) whilst the other half were naïve to its functioning (UC). Post-encounter assessments of attitudes toward robots and anthropomorphic thinking were undertaken to assess change. Attitudes toward robots were assessed with a new generic 35-items questionnaire (attitudes toward social robots scale: ASOR-5), offering a differentiated conceptualization of the conditions for social interaction. There was no significant difference between the IC and UC groups in attitude change toward robots though trends were observed. Personality was correlated with some tendencies for attitude changes; Extraversion correlated with positive attitude changes to intimate-personal relatedness with the robot (r = 0.619) and to psychological relatedness (r = 0.581) whilst Neuroticism correlated negatively (r = -0.582) with mental relatedness with the robot. The results tentatively suggest that neither information about functionality nor direct repeated encounters are pivotal in changing attitudes toward robots in elderly citizens. This may reflect a cognitive congruence bias where the robot is experienced in congruence with initial attitudes, or it may support action-based explanations of cognitive dissonance reductions, given that robots, unlike computers, are not yet perceived as action targets. Specific personality traits may be indicators of attitude change relating to specific domains of social interaction. Implications and future directions are discussed.},
  doi =      {10.3389/fpsyg.2015.01701},
  file =     {Damholdt2015.pdf:pdf/Damholdt2015.pdf:PDF},
  url =      {http://journal.frontiersin.org/researchtopic/investigating-human-nature-and-communication-through-robots-3705}
}
Kaiko Kuwamura, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Inconsistency of Personality Evaluation Caused by Appearance Gap in Robotic Telecommunication", Interaction Studies, vol. 16, no. 2, pp. 249-271, November, 2015.
Abstract: In this paper, we discuss the problem of the appearance of teleoperated robots that are used as telecommunication media. Teleoperated robots have a physical existence that increases the feeling of copresence, compared with recent communication media such as cellphones and video chat. However, their appearance is fixed, for example a stuffed bear or an image displayed on a monitor. Since people can determine their partner's personality merely from their appearance, a teleoperated robot's appearance which is different from the operator might construct a personality that conflicts with the operator's original personality. We compared the appearances of three communication media (nonhuman-like appearance robot, human-like appearance robot, and video chat) and found that due to the appearance gap, the human-like appearance robot prevented confusion better than the nonhuman-like appearance robot or the video chat and also transmitted an appropriate atmosphere of the operator.
BibTeX:
@Article{Kuwamura2013a,
  author =   {Kaiko Kuwamura and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  title =    {Inconsistency of Personality Evaluation Caused by Appearance Gap in Robotic Telecommunication},
  journal =  {Interaction Studies},
  year =     {2015},
  volume =   {16},
  number =   {2},
  pages =    {249-271},
  month =    Nov,
  abstract = {In this paper, we discuss the problem of the appearance of teleoperated robots that are used as telecommunication media. Teleoperated robots have a physical existence that increases the feeling of copresence, compared with recent communication media such as cellphones and video chat. However, their appearance is fixed, for example a stuffed bear or an image displayed on a monitor. Since people can determine their partner's personality merely from their appearance, a teleoperated robot's appearance which is different from the operator might construct a personality that conflicts with the operator's original personality. We compared the appearances of three communication media (nonhuman-like appearance robot, human-like appearance robot, and video chat) and found that due to the appearance gap, the human-like appearance robot prevented confusion better than the nonhuman-like appearance robot or the video chat and also transmitted an appropriate atmosphere of the operator.},
  acknowledgement = {This research was supported by JST, CREST.},
  file =     {Kuwamura2013a.pdf:pdf/Kuwamura2013a.pdf:PDF},
  grant =    {CREST},
  keywords = {teleoperated android; telecommunication; robot; appearance; personality},
  language = {en},
  reviewed = {Y}
}
Jakub Zlotowski, Hidenobu Sumioka, Shuichi Nishio, Dylan Glas, Christoph Bartneck, Hiroshi Ishiguro, "Persistence of the Uncanny Valley: the Influence of Repeated Interactions and a Robot's Attitude on Its Perception", Frontiers in Psychology, June, 2015.
Abstract: The uncanny valley theory proposed by Mori has been heavily investigated in the recent years by researchers from various fields. However, the videos and images used in these studies did not permit any human interaction with the uncanny objects. Therefore, in the field of human-robot interaction it is still unclear what and whether an uncanny looking robot will have an impact on an interaction. In this paper we describe an exploratory empirical study that involved repeated interactions with robots that differed in embodiment and their attitude towards a human. We found that both investigated components of the uncanniness (likeability and eeriness) can be affected by an interaction with a robot. Likeability of a robot was mainly affected by its attitude and this effect was especially prominent for a machine-like robot. On the other hand, mere repeated interactions was sufficient to reduce eeriness irrespective of a robot's embodiment. As a result we urge other researchers to investigate Mori's theory in studies that involve actual human-robot interaction in order to fully understand the changing nature of this phenomenon.
BibTeX:
@Article{Zlotowski,
  author =   {Jakub Zlotowski and Hidenobu Sumioka and Shuichi Nishio and Dylan Glas and Christoph Bartneck and Hiroshi Ishiguro},
  title =    {Persistence of the Uncanny Valley: the Influence of Repeated Interactions and a Robot's Attitude on Its Perception},
  journal =  {Frontiers in Psychology},
  year =     {2015},
  month =    Jun,
  abstract = {The uncanny valley theory proposed by Mori has been heavily investigated in the recent years by researchers from various fields. However, the videos and images used in these studies did not permit any human interaction with the uncanny objects. Therefore, in the field of human-robot interaction it is still unclear what and whether an uncanny looking robot will have an impact on an interaction. In this paper we describe an exploratory empirical study that involved repeated interactions with robots that differed in embodiment and their attitude towards a human. We found that both investigated components of the uncanniness (likeability and eeriness) can be affected by an interaction with a robot. Likeability of a robot was mainly affected by its attitude and this effect was especially prominent for a machine-like robot. On the other hand, mere repeated interactions was sufficient to reduce eeriness irrespective of a robot's embodiment. As a result we urge other researchers to investigate Mori's theory in studies that involve actual human-robot interaction in order to fully understand the changing nature of this phenomenon.},
  doi =      {10.3389/fpsyg.2015.00883},
  file =     {Jakub2014a.pdf:pdf/Jakub2014a.pdf:PDF},
  grant =    {CREST},
  language = {en},
  url =      {http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00883/abstract}
}
Martin Cooney, Shuichi Nishio, Hiroshi Ishiguro, "Importance of Touch for Conveying Affection in a Multimodal Interaction with a Small Humanoid Robot", International Journal of Humanoid Robotics, vol. 12, no. 1, pp. 1550002 (22 pages), 2015.
Abstract: To be accepted as a part of our everyday lives, companion robots will require the capability to recognize people's behavior and respond appropriately. In the current work, we investigated which characteristics of behavior could be used by a small humanoid robot to recognize when a human is seeking to convey affection. A main challenge in doing so was that human social norms are complex, comprising behavior which exhibits high spatiotemporal variance, consists of multiple channels and can express different meanings. To deal with this difficulty, we adopted a combined approach in which we analyzed free interactions and also asked participants to rate short video-clips depicting human-robot interaction. As a result, we are able to present a wide range of findings related to the current topic, including on the fundamental role (prevalence, affectionate impact, and motivations) of actions, channels, and modalities; effects of posture and a robot's behavior; expected reactions; and contributions of modalities in complementary and conflicting configurations. This article extends the existing literature by identifying some useful multimodal affectionate cues which can be leveraged by a robot during interactions; we aim to use the acquired knowledge in a small humanoid robot to provide affection during play toward improving quality of life for lonely persons.
BibTeX:
@Article{Cooney2013b,
  author =          {Martin Cooney and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Importance of Touch for Conveying Affection in a Multimodal Interaction with a Small Humanoid Robot},
  journal =         {International Journal of Humanoid Robotics},
  year =            {2015},
  volume =          {12},
  number =          {1},
  pages =           {1550002 (22 pages)},
  abstract =        {To be accepted as a part of our everyday lives, companion robots will require the capability to recognize people's behavior and respond appropriately. In the current work, we investigated which characteristics of behavior could be used by a small humanoid robot to recognize when a human is seeking to convey affection. A main challenge in doing so was that human social norms are complex, comprising behavior which exhibits high spatiotemporal variance, consists of multiple channels and can express different meanings. To deal with this difficulty, we adopted a combined approach in which we analyzed free interactions and also asked participants to rate short video-clips depicting human-robot interaction. As a result, we are able to present a wide range of findings related to the current topic, including on the fundamental role (prevalence, affectionate impact, and motivations) of actions, channels, and modalities; effects of posture and a robot's behavior; expected reactions; and contributions of modalities in complementary and conflicting configurations. This article extends the existing literature by identifying some useful multimodal affectionate cues which can be leveraged by a robot during interactions; we aim to use the acquired knowledge in a small humanoid robot to provide affection during play toward improving quality of life for lonely persons.},
  doi =             {10.1142/S0219843615500024},
  file =            {Cooney2014a.pdf:pdf/Cooney2014a.pdf:PDF},
  keywords =        {Affection; multi-modal; play; small humanoid robot; human-robot interaction},
}
Martin Cooney, Shuichi Nishio, Hiroshi Ishiguro, "Affectionate Interaction with a Small Humanoid Robot Capable of Recognizing Social Touch Behavior", ACM Transactions on Interactive Intelligent Systems, vol. 4, no. 4, pp. 32, December, 2014.
Abstract: Activity recognition, involving a capability to automatically recognize people's behavior and its underlying significance, will play a crucial role in facilitating the integration of interactive robotic artifacts into everyday human environments. In particular, social intelligence in recognizing affectionate behavior will offer value by allowing companion robots to bond meaningfully with persons involved. The current article addresses the issue of designing an affectionate haptic interaction between a person and a companion robot by a) furthering understanding of how people's attempts to communicate affection to a robot through touch can be recognized, and b) exploring how a small humanoid robot can behave in conjunction with such touches to elicit affection. We report on an experiment conducted to gain insight into how people perceive three fundamental interactive strategies in which a robot is either always highly affectionate, appropriately affectionate, or superficially unaffectionate (emphasizing positivity, contingency, and challenge respectively). Results provide insight into the structure of affectionate interaction between humans and humanoid robots—underlining the importance of an interaction design expressing sincerity, liking, stability and variation—and suggest the usefulness of novel modalities such as warmth and cold.
BibTeX:
@Article{Cooney2014c,
  author =          {Martin Cooney and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Affectionate Interaction with a Small Humanoid Robot Capable of Recognizing Social Touch Behavior},
  journal =         {{ACM} Transactions on Interactive Intelligent Systems},
  year =            {2014},
  volume =          {4},
  number =          {4},
  pages =           {32},
  month =           Dec,
  abstract =        {Activity recognition, involving a capability to automatically recognize people's behavior and its underlying significance, will play a crucial role in facilitating the integration of interactive robotic artifacts into everyday human environments. In particular, social intelligence in recognizing affectionate behavior will offer value by allowing companion robots to bond meaningfully with persons involved. The current article addresses the issue of designing an affectionate haptic interaction between a person and a companion robot by a) furthering understanding of how people's attempts to communicate affection to a robot through touch can be recognized, and b) exploring how a small humanoid robot can behave in conjunction with such touches to elicit affection. We report on an experiment conducted to gain insight into how people perceive three fundamental interactive strategies in which a robot is either always highly affectionate, appropriately affectionate, or superficially unaffectionate (emphasizing positivity, contingency, and challenge respectively). Results provide insight into the structure of affectionate interaction between humans and humanoid robots—underlining the importance of an interaction design expressing sincerity, liking, stability and variation—and suggest the usefulness of novel modalities such as warmth and cold.},
  doi =             {10.1145/2685395},
  file =            {Cooney2014b.pdf:pdf/Cooney2014b.pdf:PDF},
  keywords =        {human-robot interaction; activity recognition; small humanoid companion robot; affectionate touch behavior; intelligent systems},
  url =             {http://dl.acm.org/citation.cfm?doid=2688469.2685395}
}
Rosario Sorbello, Antonio Chella, Carmelo Cali, Marcello Giardina, Shuichi Nishio, Hiroshi Ishiguro, "Telenoid Android Robot as an Embodied Perceptual Social Regulation Medium Engaging Natural Human-Humanoid Interaction", Robotics and Autonomous Systems Journal, vol. 62, no. 9, pp. 1329-1341, September, 2014.
Abstract: The present paper aims to validate our research on Human-Humanoid Interaction (HHI) using the minimalist humanoid robot Telenoid. We conducted the human-robot interaction test with 142 young people who had no prior interaction experience with this robot. The main goal is the analysis of the two social dimensions ("Perception" and "Believability") useful for increasing the natural behaviour between users and Telenoid. We administered our custom questionnaire to human subjects in association with a well-defined experimental setting ("ordinary and goal-guided task"). A thorough analysis of the questionnaires was carried out, and the reliability and internal consistency of the correlations between the multiple items were calculated. Our experimental results show that perceptual behavior and believability, as implicit social competences, could improve the meaningfulness and the natural-like sense of human-humanoid interaction in everyday-life task-driven activities. Telenoid is perceived as an autonomous cooperative agent for a shared environment by human beings.
BibTeX:
@Article{Sorbello2013a,
  Title                    = {Telenoid Android Robot as an Embodied Perceptual Social Regulation Medium Engaging Natural Human-Humanoid Interaction},
  Author                   = {Rosario Sorbello and Antonio Chella and Carmelo Cali and Marcello Giardina and Shuichi Nishio and Hiroshi Ishiguro},
  Journal                  = {Robotics and Autonomous Systems Journal},
  Year                     = {2014},
  Month                    = Sep,
  Volume                   = {62},
  Number                   = {9},
  Pages                    = {1329-1341},
  Abstract                 = {The present paper aims to validate our research on Human-Humanoid Interaction (HHI) using the minimalist humanoid robot Telenoid. We conducted the human-robot interaction test with 142 young people who had no prior interaction experience with this robot. The main goal is the analysis of the two social dimensions ("Perception" and "Believability") useful for increasing the natural behaviour between users and Telenoid. We administered our custom questionnaire to human subjects in association with a well-defined experimental setting ("ordinary and goal-guided task"). A thorough analysis of the questionnaires was carried out, and the reliability and internal consistency of the correlations between the multiple items were calculated. Our experimental results show that perceptual behavior and believability, as implicit social competences, could improve the meaningfulness and the natural-like sense of human-humanoid interaction in everyday-life task-driven activities. Telenoid is perceived as an autonomous cooperative agent for a shared environment by human beings.},
  Doi                      = {10.1016/j.robot.2014.03.017},
  File                     = {Sorbello2013a.pdf:pdf/Sorbello2013a.pdf:PDF},
  Grant                    = {CREST},
  Keywords                 = {Telenoid; Geminoid; Social Robot; Human-Humanoid Robot Interaction},
  Language                 = {en},
  Reviewed                 = {Y},
  Url                      = {http://www.sciencedirect.com/science/article/pii/S092188901400061X}
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "Effect of biased feedback on motor imagery learning in BCI-teleoperation system", Frontiers in Systems Neuroscience, vol. 8, no. 52, April, 2014.
Abstract: Feedback design is an important issue in motor imagery BCI systems. Regardless, to date it has not been reported how feedback presentation can optimize co-adaptation between a human brain and such systems. This paper assesses the effect of realistic visual feedback on users' BCI performance and motor imagery skills. We previously developed a tele-operation system for a pair of humanlike robotic hands and showed that BCI control of such hands along with first-person perspective visual feedback of movements can arouse a sense of embodiment in the operators. In the first stage of this study, we found that the intensity of this ownership illusion was associated with feedback presentation and subjects' performance during BCI motion control. In the second stage, we probed the effect of positive and negative feedback bias on subjects' BCI performance and motor imagery skills. Although the subject-specific classifier, which was set up at the beginning of the experiment, detected no significant change in the subjects' online performance, evaluation of brain activity patterns revealed that subjects' self-regulation of motor imagery features improved due to a positive bias of feedback and a possible occurrence of ownership illusion. Our findings suggest that in general training protocols for BCIs, manipulation of feedback can play an important role in the optimization of subjects' motor imagery skills.
BibTeX:
@Article{Alimardani2014a,
  author =          {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Effect of biased feedback on motor imagery learning in BCI-teleoperation system},
  journal =         {Frontiers in Systems Neuroscience},
  year =            {2014},
  volume =          {8},
  number =          {52},
  month =           Apr,
  abstract =        {Feedback design is an important issue in motor imagery BCI systems. Regardless, to date it has not been reported how feedback presentation can optimize co-adaptation between a human brain and such systems. This paper assesses the effect of realistic visual feedback on users' BCI performance and motor imagery skills. We previously developed a tele-operation system for a pair of humanlike robotic hands and showed that BCI control of such hands along with first-person perspective visual feedback of movements can arouse a sense of embodiment in the operators. In the first stage of this study, we found that the intensity of this ownership illusion was associated with feedback presentation and subjects' performance during BCI motion control. In the second stage, we probed the effect of positive and negative feedback bias on subjects' BCI performance and motor imagery skills. Although the subject-specific classifier, which was set up at the beginning of the experiment, detected no significant change in the subjects' online performance, evaluation of brain activity patterns revealed that subjects' self-regulation of motor imagery features improved due to a positive bias of feedback and a possible occurrence of ownership illusion. Our findings suggest that in general training protocols for BCIs, manipulation of feedback can play an important role in the optimization of subjects' motor imagery skills.},
  doi =             {10.3389/fnsys.2014.00052},
  file =            {Alimardani2014a.pdf:pdf/Alimardani2014a.pdf:PDF},
  keywords =        {body ownership illusion; BCI-teleoperation; motor imagery learning; feedback effect; training},
  url =             {http://journal.frontiersin.org/Journal/10.3389/fnsys.2014.00052/full}
}
Kaiko Kuwamura, Kurima Sakai, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Hugvie: communication device for encouraging good relationship through the act of hugging", Lovotics, vol. 1, no. 1, pp. 10000104, February, 2014.
Abstract: In this paper, we introduce a communication device which encourages users to establish a good relationship with others. We designed the device so that it allows users to virtually hug a person at a remote site through the medium. In this paper, we report that when a participant talks to his communication partner during their first encounter while hugging the communication medium, he mistakenly feels as if they are establishing a good relationship and that he is being loved rather than just being liked. From this result, we discuss Active Co-Presence, a new method to enhance the co-presence of remote people through active behavior.
BibTeX:
@Article{Kuwamura2014a,
  Title                    = {Hugvie: communication device for encouraging good relationship through the act of hugging},
  Author                   = {Kaiko Kuwamura and Kurima Sakai and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  Journal                  = {Lovotics},
  Year                     = {2014},
  Month                    = Feb,
  Volume                   = {1},
  Number                   = {1},
  Pages                    = {10000104},
  Abstract                 = {In this paper, we introduce a communication device which encourages users to establish a good relationship with others. We designed the device so that it allows users to virtually hug a person at a remote site through the medium. In this paper, we report that when a participant talks to his communication partner during their first encounter while hugging the communication medium, he mistakenly feels as if they are establishing a good relationship and that he is being loved rather than just being liked. From this result, we discuss Active Co-Presence, a new method to enhance the co-presence of remote people through active behavior.},
  Acknowledgement          = {This research was partially supported by JST, CREST.},
  Doi                      = {10.4172/2090-9888.10000104},
  File                     = {Kuwamura2014a.pdf:pdf/Kuwamura2014a.pdf:PDF},
  Grant                    = {CREST},
  Keywords                 = {hug; co-presence; telecommunication},
  Language                 = {en},
  Reviewed                 = {y},
  Url                      = {http://www.omicsonline.com/open-access/hugvie_communication_device_for_encouraging_good_relationship_through_the_act_of_hugging.pdf?aid=24445}
}
Astrid M. von der Pütten, Nicole C. Krämer, Christian Becker-Asano, Kohei Ogawa, Shuichi Nishio, Hiroshi Ishiguro, "The Uncanny in the Wild. Analysis of Unscripted Human-Android Interaction in the Field.", International Journal of Social Robotics, vol. 6, no. 1, pp. 67-83, January, 2014.
Abstract: Against the background of the uncanny valley hypothesis we investigated how people react towards an android robot in a natural environment, depending on the behavior displayed by the robot (still vs. moving), in a quasi-experimental observational field study. We present data on unscripted interactions between humans and the android robot “Geminoid HI-1” in an Austrian public café and subsequent interviews. Data were analyzed with regard to the participants' nonverbal behavior (e.g. attention paid to the robot, proximity). We found that participants' behavior towards the android robot as well as their interview answers were influenced by the behavior the robot displayed. In addition, we found huge inter-individual differences in the participants' behavior. Implications for the uncanny valley and research on social human–robot interactions are discussed.
BibTeX:
@Article{Putten2011b,
  author =          {Astrid M. von der P\"{u}tten and Nicole C. Kr\"{a}mer and Christian Becker-Asano and Kohei Ogawa and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {The Uncanny in the Wild. Analysis of Unscripted Human-Android Interaction in the Field.},
  journal =         {International Journal of Social Robotics},
  year =            {2014},
  volume =          {6},
  number =          {1},
  pages =           {67-83},
  month =           Jan,
  abstract =        {Against the background of the uncanny valley hypothesis we investigated how people react towards an android robot in a natural environment, depending on the behavior displayed by the robot (still vs. moving), in a quasi-experimental observational field study. We present data on unscripted interactions between humans and the android robot “Geminoid HI-1” in an Austrian public café and subsequent interviews. Data were analyzed with regard to the participants' nonverbal behavior (e.g. attention paid to the robot, proximity). We found that participants' behavior towards the android robot as well as their interview answers were influenced by the behavior the robot displayed. In addition, we found huge inter-individual differences in the participants' behavior. Implications for the uncanny valley and research on social human–robot interactions are discussed.},
  doi =             {10.1007/s12369-013-0198-7},
  file =            {Putten2011b.pdf:pdf/Putten2011b.pdf:PDF},
  keywords =        {human-robot interaction; field study; observation; multimodal evaluation of human interaction with robots; Uncanny Valley},
  url =             {http://link.springer.com/article/10.1007/s12369-013-0198-7}
}
Ryuji Yamazaki, Shuichi Nishio, Hiroshi Ishiguro, Marco Nørskov, Nobu Ishiguro, Giuseppe Balistreri, "Acceptability of a Teleoperated Android by Senior Citizens in Danish Society: A Case Study on the Application of an Embodied Communication Medium to Home Care", International Journal of Social Robotics, vol. 6, no. 3, pp. 429-442, 2014.
Abstract: We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. By conducting field experiments, we investigated how Telenoid, a teleoperated android designed as a minimalistic human, affects people in the real world when it is employed to express telepresence and a sense of ‘being there’. Our exploratory study focused on the social aspects of the android robot, which might facilitate communication between the elderly and Telenoid's operator. This new way of creating social relationships can be used to solve a problem in society: the social isolation of senior citizens. It is becoming a major issue even in Denmark, which is known as one of the countries with advanced welfare systems. After asking elderly people to use Telenoid at their homes, we found that the elderly, with or without dementia, showed positive attitudes toward Telenoid and imaginatively developed various dialogue strategies. Their positivity and strong attachment to its minimalistic human design were cross-culturally shared in Denmark and Japan. Contrary to the negative reactions by non-users in media reports, our result suggests that teleoperated androids can be accepted by the elderly as a kind of universal design medium for social inclusion.
BibTeX:
@Article{Yamazaki2013a,
  author =          {Ryuji Yamazaki and Shuichi Nishio and Hiroshi Ishiguro and Marco N{\o}rskov and Nobu Ishiguro and Giuseppe Balistreri},
  title =           {Acceptability of a Teleoperated Android by Senior Citizens in Danish Society: A Case Study on the Application of an Embodied Communication Medium to Home Care},
  journal =         {International Journal of Social Robotics},
  year =            {2014},
  volume =          {6},
  number =          {3},
  pages =           {429-442},
  abstract =        {We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. By conducting field experiments, we investigated how Telenoid, a teleoperated android designed as a minimalistic human, affects people in the real world when it is employed to express telepresence and a sense of ‘being there’. Our exploratory study focused on the social aspects of the android robot, which might facilitate communication between the elderly and Telenoid's operator. This new way of creating social relationships can be used to solve a problem in society: the social isolation of senior citizens. It is becoming a major issue even in Denmark, which is known as one of the countries with advanced welfare systems. After asking elderly people to use Telenoid at their homes, we found that the elderly, with or without dementia, showed positive attitudes toward Telenoid and imaginatively developed various dialogue strategies. Their positivity and strong attachment to its minimalistic human design were cross-culturally shared in Denmark and Japan. Contrary to the negative reactions by non-users in media reports, our result suggests that teleoperated androids can be accepted by the elderly as a kind of universal design medium for social inclusion.},
  doi =             {10.1007/s12369-014-0247-x},
  file =            {Yamazaki2013a.pdf:pdf/Yamazaki2013a.pdf:PDF},
  keywords =        {teleoperated android; minimal design; embodied communication; social isolation; elderly care; social acceptance},
}
Hidenobu Sumioka, Shuichi Nishio, Takashi Minato, Ryuji Yamazaki, Hiroshi Ishiguro, "Minimal human design approach for sonzai-kan media: investigation of a feeling of human presence", Cognitive Computation, vol. 6, no. 4, pp. 760-774, 2014.
Abstract: Even though human-like robotic media give the feeling of being with others and positively affect our physical and mental health, scant research has addressed how much information about a person should be reproduced to enhance the feeling of a human presence. We call this feeling sonzai-kan, which is a Japanese phrase that means the feeling of a presence. We propose a minimal design approach for exploring the requirements to enhance this feeling and hypothesize that it is enhanced if information is presented from at least two different modalities. In this approach, the exploration is conducted by designing sonzai-kan media through exploratory research with the media, their evaluations, and the development of their systems. In this paper, we give an overview of our current work with Telenoid, a teleoperated android designed with our approach, to illustrate how we explore the requirements and how such media impact our quality of life. We discuss the potential advantages of our approach for forging positive social relationships and designing an autonomous agent with minimal cognitive architecture.
BibTeX:
@Article{Sumioka2013e,
  author =          {Hidenobu Sumioka and Shuichi Nishio and Takashi Minato and Ryuji Yamazaki and Hiroshi Ishiguro},
  title =           {Minimal human design approach for sonzai-kan media: investigation of a feeling of human presence},
  journal =         {Cognitive Computation},
  year =            {2014},
  volume =          {6},
  number =          {4},
  pages =           {760-774},
  abstract =        {Even though human-like robotic media give the feeling of being with others and positively affect our physical and mental health, scant research has addressed how much information about a person should be reproduced to enhance the feeling of a human presence. We call this feeling sonzai-kan, which is a Japanese phrase that means the feeling of a presence. We propose a minimal design approach for exploring the requirements to enhance this feeling and hypothesize that it is enhanced if information is presented from at least two different modalities. In this approach, the exploration is conducted by designing sonzai-kan media through exploratory research with the media, their evaluations, and the development of their systems. In this paper, we give an overview of our current work with Telenoid, a teleoperated android designed with our approach, to illustrate how we explore the requirements and how such media impact our quality of life. We discuss the potential advantages of our approach for forging positive social relationships and designing an autonomous agent with minimal cognitive architecture.},
  doi =             {10.1007/s12559-014-9270-3},
  file =            {Sumioka2014.pdf:pdf/Sumioka2014.pdf:PDF},
  keywords =        {Human–robot Interaction; Minimal design; Elderly care; Android science},
  url =             {http://link.springer.com/article/10.1007%2Fs12559-014-9270-3}
}
Kurima Sakai, Hidenobu Sumioka, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Motion Design of Interactive Small Humanoid Robot with Visual Illusion", International Journal of Innovative Computing, Information and Control, vol. 9, no. 12, pp. 4725-4736, December, 2013.
Abstract: This paper presents a novel method to express motions of a small human-like robotic avatar that can be a portable communication medium: a user can talk with another person while feeling the other's presence at anytime, anywhere. The human-like robotic avatar is expected to express human-like movements; however, there are technical and cost problems in implementing actuators in the small body. The method is to induce illusory motion of the robot's extremities with blinking lights. This idea needs only Light Emitting Diodes (LEDs) and avoids the above problems. This paper presents the design of an LED blinking pattern to induce an illusory nodding motion of Elfoid, which is a hand-held tele-operated humanoid robot. A psychological experiment shows that the illusory nodding motion gives a better impression to people than a symbolic blinking pattern. This result suggests that even the illusory motion of a robotic avatar can improve tele-communications.
BibTeX:
@Article{Sakai2013,
  author =          {Kurima Sakai and Hidenobu Sumioka and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Motion Design of Interactive Small Humanoid Robot with Visual Illusion},
  journal =         {International Journal of Innovative Computing, Information and Control},
  year =            {2013},
  volume =          {9},
  number =          {12},
  pages =           {4725-4736},
  month =           Dec,
  abstract =        {This paper presents a novel method to express motions of a small human-like robotic avatar that can be a portable communication medium: a user can talk with another person while feeling the other's presence at anytime, anywhere. The human-like robotic avatar is expected to express human-like movements; however, there are technical and cost problems in implementing actuators in the small body. The method is to induce illusory motion of the robot's extremities with blinking lights. This idea needs only Light Emitting Diodes (LEDs) and avoids the above problems. This paper presents the design of an LED blinking pattern to induce an illusory nodding motion of Elfoid, which is a hand-held tele-operated humanoid robot. A psychological experiment shows that the illusory nodding motion gives a better impression to people than a symbolic blinking pattern. This result suggests that even the illusory motion of a robotic avatar can improve tele-communications.},
  file =            {Sakai2013.pdf:pdf/Sakai2013.pdf:PDF},
  keywords =        {Tele-communication; Nonverbal communication; Portable robot avatar; Visual illusion of motion},
  url =             {http://www.ijicic.org/apchi12-275.pdf}
}
Martin Cooney, Shuichi Nishio, Hiroshi Ishiguro, "Designing Robots for Well-being: Theoretical Background and Visual Scenes of Affectionate Play with a Small Humanoid Robot", Lovotics, November, 2013.
Abstract: Social well-being, referring to a subjectively perceived long-term state of happiness, life satisfaction, health, and other prosperity afforded by social interactions, is increasingly being employed to rate the success of human social systems. Although short-term changes in well-being can be difficult to measure directly, two important determinants can be assessed: perceived enjoyment and affection from relationships. The current article chronicles our work over several years toward achieving enjoyable and affectionate interactions with robots, with the aim of contributing to perception of social well-being in interacting persons. Emphasis has been placed on both describing in detail the theoretical basis underlying our work, and relating the story of each of several designs from idea to evaluation in a visual fashion. For the latter, we trace the course of designing four different robotic artifacts intended to further our understanding of how to provide enjoyment, elicit affection, and realize one specific scenario for affectionate play. As a result, by describing (a) how perceived enjoyment and affection contribute to social well-being, and (b) how a small humanoid robot can proactively engage in enjoyable and affectionate play—recognizing people's behavior and leveraging this knowledge—the current article informs the design of companion robots intended to facilitate a perception of social well-being in interacting persons during affectionate play.
BibTeX:
@Article{Cooney2013d,
  author =          {Martin Cooney and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Designing Robots for Well-being: Theoretical Background and Visual Scenes of Affectionate Play with a Small Humanoid Robot},
  journal =         {Lovotics},
  year =            {2013},
  month =           Nov,
  abstract =        {Social well-being, referring to a subjectively perceived long-term state of happiness, life satisfaction, health, and other prosperity afforded by social interactions, is increasingly being employed to rate the success of human social systems. Although short-term changes in well-being can be difficult to measure directly, two important determinants can be assessed: perceived enjoyment and affection from relationships. The current article chronicles our work over several years toward achieving enjoyable and affectionate interactions with robots, with the aim of contributing to perception of social well-being in interacting persons. Emphasis has been placed on both describing in detail the theoretical basis underlying our work, and relating the story of each of several designs from idea to evaluation in a visual fashion. For the latter, we trace the course of designing four different robotic artifacts intended to further our understanding of how to provide enjoyment, elicit affection, and realize one specific scenario for affectionate play. As a result, by describing (a) how perceived enjoyment and affection contribute to social well-being, and (b) how a small humanoid robot can proactively engage in enjoyable and affectionate play—recognizing people's behavior and leveraging this knowledge—the current article informs the design of companion robots intended to facilitate a perception of social well-being in interacting persons during affectionate play.},
  doi =             {10.4172/2090-9888.1000101},
  file =            {Cooney2013d.pdf:pdf/Cooney2013d.pdf:PDF},
  keywords =        {Human-robot interaction; well-being; enjoyment; affection; recognizing typical behavior; small humanoid robot},
  url =             {http://www.omicsonline.com/open-access/designing_robots_for_well_being_theoretical_background_and_visual.pdf?aid=24444}
}
Hidenobu Sumioka, Aya Nakae, Ryota Kanai, Hiroshi Ishiguro, "Huggable communication medium decreases cortisol levels", Scientific Reports, vol. 3, no. 3034, October, 2013.
Abstract: Interpersonal touch is a fundamental component of social interactions because it can mitigate physical and psychological distress. To reproduce the psychological and physiological effects associated with interpersonal touch, interest is growing in introducing tactile sensations to communication devices. However, it remains unknown whether physical contact with such devices can produce objectively measurable endocrine effects like real interpersonal touching can. We directly tested this possibility by examining changes in stress hormone cortisol before and after a conversation with a huggable communication device. Participants had 15-minute conversations with a remote partner that was carried out either with a huggable human-shaped device or with a mobile phone. Our experiment revealed significant reduction in the cortisol levels for those who had conversations with the huggable device. Our approach to evaluate communication media with biological markers suggests new design directions for interpersonal communication media to improve social support systems in modern highly networked societies.
BibTeX:
@Article{Sumioka2013d,
  author =          {Hidenobu Sumioka and Aya Nakae and Ryota Kanai and Hiroshi Ishiguro},
  title =           {Huggable communication medium decreases cortisol levels},
  journal =         {Scientific Reports},
  year =            {2013},
  volume =          {3},
  number =          {3034},
  month =           Oct,
  abstract =        {Interpersonal touch is a fundamental component of social interactions because it can mitigate physical and psychological distress. To reproduce the psychological and physiological effects associated with interpersonal touch, interest is growing in introducing tactile sensations to communication devices. However, it remains unknown whether physical contact with such devices can produce objectively measurable endocrine effects like real interpersonal touching can. We directly tested this possibility by examining changes in stress hormone cortisol before and after a conversation with a huggable communication device. Participants had 15-minute conversations with a remote partner that was carried out either with a huggable human-shaped device or with a mobile phone. Our experiment revealed significant reduction in the cortisol levels for those who had conversations with the huggable device. Our approach to evaluate communication media with biological markers suggests new design directions for interpersonal communication media to improve social support systems in modern highly networked societies.},
  doi =             {10.1038/srep03034},
  file =            {Sumioka2013d.pdf:pdf/Sumioka2013d.pdf:PDF},
  url =             {http://www.nature.com/srep/2013/131023/srep03034/full/srep03034.html}
}
Martin Cooney, Takayuki Kanda, Aris Alissandrakis, Hiroshi Ishiguro, "Designing Enjoyable Motion-Based Play Interactions with a Small Humanoid Robot", International Journal of Social Robotics, vol. 6, pp. 173-193, September, 2013.
Abstract: Robots designed to co-exist with humans in domestic and public environments should be capable of interacting with people in an enjoyable fashion in order to be socially accepted. In this research, we seek to set up a small humanoid robot with the capability to provide enjoyment to people who pick up the robot and play with it by hugging, shaking and moving the robot in various ways. Inertial sensors inside a robot can capture how the robot's body is moved when people perform such full-body gestures. Unclear is how a robot can recognize what people do during play, and how such knowledge can be used to provide enjoyment. People's behavior is complex, and naïve designs for a robot's behavior based only on intuitive knowledge from previous designs may lead to failed interactions. To solve these problems, we model people's behavior using typical full-body gestures observed in free interaction trials, and devise an interaction design based on avoiding typical failures observed in play sessions with a naïve version of our robot. The interaction design is completed by investigating how a robot can provide reward and itself suggest ways to play during an interaction. We then verify experimentally that our design can be used to provide enjoyment during a playful interaction. By describing the process of how a small humanoid robot can be designed to provide enjoyment, we seek to move one step closer to realizing companion robots which can be successfully integrated into human society.
BibTeX:
@Article{Cooney2013,
  author =          {Martin Cooney and Takayuki Kanda and Aris Alissandrakis and Hiroshi Ishiguro},
  title =           {Designing Enjoyable Motion-Based Play Interactions with a Small Humanoid Robot},
  journal =         {International Journal of Social Robotics},
  year =            {2013},
  volume =          {6},
  pages =           {173-193},
  month =           Sep,
  abstract =        {Robots designed to co-exist with humans in domestic and public environments should be capable of interacting with people in an enjoyable fashion in order to be socially accepted. In this research, we seek to set up a small humanoid robot with the capability to provide enjoyment to people who pick up the robot and play with it by hugging, shaking and moving the robot in various ways. Inertial sensors inside a robot can capture how the robot's body is moved when people perform such full-body gestures. Unclear is how a robot can recognize what people do during play, and how such knowledge can be used to provide enjoyment. People's behavior is complex, and na\"{i}ve designs for a robot's behavior based only on intuitive knowledge from previous designs may lead to failed interactions. To solve these problems, we model people's behavior using typical full-body gestures observed in free interaction trials, and devise an interaction design based on avoiding typical failures observed in play sessions with a na\"{i}ve version of our robot. The interaction design is completed by investigating how a robot can provide reward and itself suggest ways to play during an interaction. We then verify experimentally that our design can be used to provide enjoyment during a playful interaction. By describing the process of how a small humanoid robot can be designed to provide enjoyment, we seek to move one step closer to realizing companion robots which can be successfully integrated into human society.},
  acknowledgement = {This research was supported by the Ministry of Internal Affairs and Communications of Japan and JST, CREST.},
  doi =             {10.1007/s12369-013-0212-0},
  file =            {Cooney2013.pdf:pdf/Cooney2013.pdf:PDF},
  grant =           {CREST},
  keywords =        {Interaction design for enjoyment; Playful human-robot interaction; Full-body gesture recognition; Inertial sensing; Small humanoid robot},
  language =        {en},
  reviewed =        {Y},
  url =             {http://link.springer.com/article/10.1007%2Fs12369-013-0212-0}
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators", Scientific Reports, vol. 3, no. 2396, August, 2013.
Abstract: Operators of a pair of robotic hands report ownership for those hands when they hold an image of a grasp motion and watch the robot perform it. We present a novel body ownership illusion that is induced by merely watching and controlling a robot's motions through a brain machine interface. In past studies, body ownership illusions were induced by correlation of such sensory inputs as vision, touch and proprioception. However, in the presented illusion none of the mentioned sensations are integrated except vision. Our results show that during BMI-operation of robotic hands, the interaction between motor commands and visual feedback of the intended motions is adequate to incorporate the non-body limbs into one's own body. Our discussion focuses on the role of proprioceptive information in the mechanism of agency-driven illusions. We believe that our findings will contribute to improvement of tele-presence systems in which operators incorporate BMI-operated robots into their body representations.
BibTeX:
@Article{Alimardani2013,
  author =          {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Humanlike robot hands controlled by brain activity arouse illusion of ownership in operators},
  journal =         {Scientific Reports},
  year =            {2013},
  volume =          {3},
  number =          {2396},
  month =           Aug,
  abstract =        {Operators of a pair of robotic hands report ownership for those hands when they hold an image of a grasp motion and watch the robot perform it. We present a novel body ownership illusion that is induced by merely watching and controlling a robot's motions through a brain machine interface. In past studies, body ownership illusions were induced by correlation of such sensory inputs as vision, touch and proprioception. However, in the presented illusion none of the mentioned sensations are integrated except vision. Our results show that during BMI-operation of robotic hands, the interaction between motor commands and visual feedback of the intended motions is adequate to incorporate the non-body limbs into one's own body. Our discussion focuses on the role of proprioceptive information in the mechanism of agency-driven illusions. We believe that our findings will contribute to improvement of tele-presence systems in which operators incorporate BMI-operated robots into their body representations.},
  day =             {9},
  doi =             {10.1038/srep02396},
  file =            {alimardani2013a.pdf:pdf/alimardani2013a.pdf:PDF},
  url =             {http://www.nature.com/srep/2013/130809/srep02396/full/srep02396.html}
}
Shuichi Nishio, Koichi Taura, Hidenobu Sumioka, Hiroshi Ishiguro, "Teleoperated Android Robot as Emotion Regulation Media", International Journal of Social Robotics, vol. 5, no. 4, pp. 563-573, July, 2013.
Abstract: In this paper, we experimentally examined whether changes in the facial expressions of teleoperated androids could affect and regulate operators' emotion, based on the facial feedback theory of emotion and the phenomenon of body ownership transfer to the robot. Twenty-six Japanese participants had conversations with an experimenter based on a situation where participants feel anger and, during the conversation, the android's facial expression changed according to a pre-programmed scheme. The results showed that the facial feedback from the android did occur. Moreover, by comparing two groups of participants, one operating the robot and the other not, we found that this facial feedback from the android robot occurred only when participants operated the robot and that, when an operator could effectively operate the robot, his/her emotional states were strongly affected by changes in the robot's facial expression.
BibTeX:
@Article{Nishio2013a,
  author =          {Shuichi Nishio and Koichi Taura and Hidenobu Sumioka and Hiroshi Ishiguro},
  title =           {Teleoperated Android Robot as Emotion Regulation Media},
  journal =         {International Journal of Social Robotics},
  year =            {2013},
  volume =          {5},
  number =          {4},
  pages =           {563-573},
  month =           Jul,
  abstract =        {In this paper, we experimentally examined whether changes in the facial expressions of teleoperated androids could affect and regulate operators' emotion, based on the facial feedback theory of emotion and the phenomenon of body ownership transfer to the robot. Twenty-six Japanese participants had conversations with an experimenter based on a situation where participants feel anger and, during the conversation, the android's facial expression changed according to a pre-programmed scheme. The results showed that the facial feedback from the android did occur. Moreover, by comparing two groups of participants, one operating the robot and the other not, we found that this facial feedback from the android robot occurred only when participants operated the robot and that, when an operator could effectively operate the robot, his/her emotional states were strongly affected by changes in the robot's facial expression.},
  doi =             {10.1007/s12369-013-0201-3},
  file =            {Nishio2013a.pdf:pdf/Nishio2013a.pdf:PDF},
  keywords =        {Teleoperated android robot; Emotion regulation; Facial feedback hypothesis; Body ownership transfer},
  url =             {http://link.springer.com/article/10.1007%2Fs12369-013-0201-3}
}
Ryuji Yamazaki, Shuichi Nishio, Kohei Ogawa, Kohei Matsumura, Takashi Minato, Hiroshi Ishiguro, Tsutomu Fujinami, Masaru Nishikawa, "Promoting Socialization of Schoolchildren Using a Teleoperated Android: An Interaction Study", International Journal of Humanoid Robotics, vol. 10, no. 1, pp. 1350007(1-25), April, 2013.
Abstract: Our research focuses on the social aspects of teleoperated androids as new media for human relationships and explores how they can contribute and encourage people to associate with others. We introduced Telenoid, a teleoperated android with a minimalistic human design, to elementary school classrooms to see how children respond to it. We found that Telenoid encourages children to work cooperatively and facilitates communication with senior citizens with dementia. Children differentiated their roles spontaneously and cooperatively participated in group work. In another class, we applied Telenoid to remote communication between schoolchildren and assisted living residents. The children felt relaxed about continuing their conversations with the elderly and positively participated in them. The results suggest that limited functionality may facilitate cooperation among participants, and varied embodiments may promote the learning process of the association with others, even those who are unfamiliar. We propose a teleoperated android as an educational tool to promote socialization.
BibTeX:
@Article{Yamazaki2012e,
  author =          {Ryuji Yamazaki and Shuichi Nishio and Kohei Ogawa and Kohei Matsumura and Takashi Minato and Hiroshi Ishiguro and Tsutomu Fujinami and Masaru Nishikawa},
  title =           {Promoting Socialization of Schoolchildren Using a Teleoperated Android: An Interaction Study},
  journal =         {International Journal of Humanoid Robotics},
  year =            {2013},
  volume =          {10},
  number =          {1},
  pages =           {1350007(1-25)},
  month =           Apr,
  abstract =        {Our research focuses on the social aspects of teleoperated androids as new media for human relationships and explores how they can contribute and encourage people to associate with others. We introduced Telenoid, a teleoperated android with a minimalistic human design, to elementary school classrooms to see how children respond to it. We found that Telenoid encourages children to work cooperatively and facilitates communication with senior citizens with dementia. Children differentiated their roles spontaneously and cooperatively participated in group work. In another class, we applied Telenoid to remote communication between schoolchildren and assisted living residents. The children felt relaxed about continuing their conversations with the elderly and positively participated in them. The results suggest that limited functionality may facilitate cooperation among participants, and varied embodiments may promote the learning process of the association with others, even those who are unfamiliar. We propose a teleoperated android as an educational tool to promote socialization.},
  day =             {2},
  doi =             {10.1142/S0219843613500072},
  file =            {Yamazaki2012e.pdf:pdf/Yamazaki2012e.pdf:PDF},
  keywords =        {Telecommunication; android robot; minimal design; cooperation; role differentiation; inter-generational relationship; embodied communication; teleoperation; socialization},
  url =             {http://www.worldscientific.com/doi/abs/10.1142/S0219843613500072}
}
Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, Norihiro Hagita, "Generation of Nodding, Head Tilting and Gazing for Human-Robot Speech Interaction", International Journal of Humanoid Robotics, vol. 10, no. 1, pp. 1350009(1-19), April, 2013.
Abstract: Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, "Geminoid F", a typical humanoid robot with less facial degrees of freedom, "Robovie R2", and a robot with a 3-axis rotatable neck and movable lips, "Telenoid R2"). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions without gaze information. We also find that an upward motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.
BibTeX:
@Article{Liu2012a,
  author =          {Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  title =           {Generation of Nodding, Head Tilting and Gazing for Human-Robot Speech Interaction},
  journal =         {International Journal of Humanoid Robotics},
  year =            {2013},
  volume =          {10},
  number =          {1},
  pages =           {1350009(1-19)},
  month =           Apr,
  abstract =        {Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, "Geminoid F", a typical humanoid robot with less facial degrees of freedom, "Robovie R2", and a robot with a 3-axis rotatable neck and movable lips, "Telenoid R2"). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions without gaze information. We also find that an upward motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.},
  acknowledgement = {This work was supported by {JST CREST}.},
  day =             {2},
  doi =             {10.1142/S0219843613500096},
  file =            {Liu2012a.pdf:pdf/Liu2012a.pdf:PDF},
  grant =           {CREST},
  keywords =        {Head motion; dialogue acts; gazing; motion generation},
  language =        {en},
  reviewed =        {Y},
  url =             {http://www.worldscientific.com/doi/abs/10.1142/S0219843613500096}
}
Carlos T. Ishi, Hiroshi Ishiguro, Norihiro Hagita, "Analysis of relationship between head motion events and speech in dialogue conversations", Speech Communication, Special issue on Gesture and speech in interaction, pp. 233-243, 2013.
Abstract: Head motion naturally occurs in synchrony with speech and may convey paralinguistic information (such as intentions, attitudes and emotions) in dialogue communication. With the aim of verifying the relationship between head motion and several types of linguistic, paralinguistic and prosodic information conveyed by speech utterances, analyses were conducted on motion-captured data of multiple speakers during natural dialogue conversations. Although most of past works tried to relate head motion with prosodic features, our analysis results firstly indicated that head motion was more directly related to dialogue act functions, rather than to prosodic features. Among the head motion types, nods occurred with most frequency during speech utterances, not only for expressing dialogue acts of agreement or affirmation, but also appearing at the last syllable of the phrases with strong phrase boundaries. Head shakes appeared mostly in phrases expressing negation, while head tilts appeared mostly in phrases expressing thinking, and in interjections expressing unexpectedness and denial. Speaker variability analyses indicated that the occurrence of head motion differs depending on the inter-personal relationship with the interlocutor and the speaker's emotional and attitudinal state. A clear increase in the occurrence rate of nods was observed when the dialogue partners do not have a close inter-personal relationship, and in situations where the speaker talks confidently, cheerfully, with enthusiasm, or expresses interest or sympathy to the interlocutor's talk.
BibTeX:
@Article{Ishi2013,
  author =          {Carlos T. Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  title =           {Analysis of relationship between head motion events and speech in dialogue conversations},
  journal =         {Speech Communication, Special issue on Gesture and speech in interaction},
  year =            {2013},
  pages =           {233-243},
  abstract =        {Head motion naturally occurs in synchrony with speech and may convey paralinguistic information (such as intentions, attitudes and emotions) in dialogue communication. With the aim of verifying the relationship between head motion and several types of linguistic, paralinguistic and prosodic information conveyed by speech utterances, analyses were conducted on motion-captured data of multiple speakers during natural dialogue conversations. Although most of past works tried to relate head motion with prosodic features, our analysis results firstly indicated that head motion was more directly related to dialogue act functions, rather than to prosodic features. Among the head motion types, nods occurred with most frequency during speech utterances, not only for expressing dialogue acts of agreement or affirmation, but also appearing at the last syllable of the phrases with strong phrase boundaries. Head shakes appeared mostly in phrases expressing negation, while head tilts appeared mostly in phrases expressing thinking, and in interjections expressing unexpectedness and denial. Speaker variability analyses indicated that the occurrence of head motion differs depending on the inter-personal relationship with the interlocutor and the speaker's emotional and attitudinal state. A clear increase in the occurrence rate of nods was observed when the dialogue partners do not have a close inter-personal relationship, and in situations where the speaker talks confidently, cheerfully, with enthusiasm, or expresses interest or sympathy to the interlocutor's talk.},
  file =            {Ishi2013.pdf:pdf/Ishi2013.pdf:PDF}
}
Kohei Ogawa, Shuichi Nishio, Kensuke Koda, Giuseppe Balistreri, Tetsuya Watanabe, Hiroshi Ishiguro, "Exploring the Natural Reaction of Young and Aged Person with Telenoid in a Real World", Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 15, no. 5, pp. 592-597, July, 2011.
Abstract: This paper describes two field tests conducted with shopping mall visitors and with aged persons defined as in their 70s to 90s. For both of the field tests, we used an android we developed called Telenoid R1 or just Telenoid. In the following field tests we interviewed participants about their impressions of the Telenoid. The results of the shopping mall showed that almost half of the interviewees felt negative toward Telenoid until they hugged it, after which opinions became positive. Results of the other test showed that the majority of aged persons reported a positive opinion and, interestingly, all aged persons who interacted with Telenoid gave it a hug without any suggestion to do so. This suggests that older persons find Telenoid to be an acceptable medium for the elderly. Younger persons may also find Telenoid acceptable, seeing that visitors developed positive feelings toward the robot after giving it a hug. These results should prove valuable in our future work with androids.
BibTeX:
@Article{Ogawa2011,
  author =          {Kohei Ogawa and Shuichi Nishio and Kensuke Koda and Giuseppe Balistreri and Tetsuya Watanabe and Hiroshi Ishiguro},
  title =           {Exploring the Natural Reaction of Young and Aged Person with Telenoid in a Real World},
  journal =         {Journal of Advanced Computational Intelligence and Intelligent Informatics},
  year =            {2011},
  volume =          {15},
  number =          {5},
  pages =           {592--597},
  month =           Jul,
  abstract =        {This paper describes two field tests conducted with shopping mall visitors and with aged persons defined as in their 70s to 90s. For both of the field tests, we used an android we developed called Telenoid R1 or just Telenoid. In the following field tests we interviewed participants about their impressions of the Telenoid. The results of the shopping mall showed that almost half of the interviewees felt negative toward Telenoid until they hugged it, after which opinions became positive. Results of the other test showed that the majority of aged persons reported a positive opinion and, interestingly, all aged persons who interacted with Telenoid gave it a hug without any suggestion to do so. This suggests that older persons find Telenoid to be an acceptable medium for the elderly. Younger persons may also find Telenoid acceptable, seeing that visitors developed positive feelings toward the robot after giving it a hug. These results should prove valuable in our future work with androids.},
  file =            {Ogawa2011.pdf:Ogawa2011.pdf:PDF},
  keywords =        {Telenoid; Geminoid; human robot interaction},
  url =             {http://www.fujipress.jp/finder/xslt.php?mode=present&inputfile=JACII001500050012.xml}
}
Shuichi Nishio, Hiroshi Ishiguro, "Attitude Change Induced by Different Appearances of Interaction Agents", International Journal of Machine Consciousness, vol. 3, no. 1, pp. 115-126, 2011.
Abstract: Human-robot interaction studies up to now have been limited to simple tasks such as route guidance or playing simple games. With the advance in robotic technologies, we are now at the stage to explore requirements for highly complicated tasks such as having human-like conversations. When robots start to play advanced roles in our lives such as in health care, attributes such as trust, reliance and persuasiveness will also be important. In this paper, we examine how the appearance of robots affects people's attitudes toward them. Past studies have shown that the appearance of robots is one of the elements that influences people's behavior. However, it is still unknown what effect appearance has when having serious conversations that require high-level activity. Participants were asked to have a discussion with tele-operated robots of various appearances such as an android with high similarity to a human or a humanoid robot that has human-like body parts. Through the discussion, the tele-operator tried to persuade the participants. We examined how appearance affects robots' persuasiveness as well as people's behavior and impression of robots. A possible contribution to machine consciousness research is also discussed.
BibTeX:
@Article{Nishio2011,
  author =          {Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Attitude Change Induced by Different Appearances of Interaction Agents},
  journal =         {International Journal of Machine Consciousness},
  year =            {2011},
  volume =          {3},
  number =          {1},
  pages =           {115--126},
  abstract =        {Human-robot interaction studies up to now have been limited to simple tasks such as route guidance or playing simple games. With the advance in robotic technologies, we are now at the stage to explore requirements for highly complicated tasks such as having human-like conversations. When robots start to play advanced roles in our lives such as in health care, attributes such as trust, reliance and persuasiveness will also be important. In this paper, we examine how the appearance of robots affects people's attitudes toward them. Past studies have shown that the appearance of robots is one of the elements that influences people's behavior. However, it is still unknown what effect appearance has when having serious conversations that require high-level activity. Participants were asked to have a discussion with tele-operated robots of various appearances such as an android with high similarity to a human or a humanoid robot that has human-like body parts. Through the discussion, the tele-operator tried to persuade the participants. We examined how appearance affects robots' persuasiveness as well as people's behavior and impression of robots. A possible contribution to machine consciousness research is also discussed.},
  doi =             {10.1142/S1793843011000637},
  file =            {Nishio2011.pdf:Nishio2011.pdf:PDF},
  keywords =        {Robot; appearance; interaction agents; human-robot interaction},
  url =             {http://www.worldscinet.com/ijmc/03/0301/S1793843011000637.html}
}
Christian Becker-Asano, Hiroshi Ishiguro, "Intercultural Differences in Decoding Facial Expressions of The Android Robot Geminoid F", Journal of Artificial Intelligence and Soft Computing Research, vol. 1, no. 3, pp. 215-231, 2011.
Abstract: As android robots become increasingly sophisticated in their technical as well as artistic design, their non-verbal expressiveness is getting closer to that of real humans. Accordingly, this paper presents results of two online surveys designed to evaluate a female android's facial display of five basic emotions. Being interested in intercultural differences we prepared both surveys in English, German, as well as Japanese language, and we not only found that in general our design of the emotional expressions "fearful" and "surprised" were often confused, but also that Japanese participants more often confused "angry" with "sad" than the German and English participants. Although facial displays of the same emotions portrayed by the model person of Geminoid F achieved higher recognition rates overall, portraying fearful has been similarly difficult for her. Finally, from the analysis of free responses that the participants were invited to give, a number of interesting further conclusions are drawn that help to clarify the question of how intercultural differences impact on the interpretation of facial displays of an android's emotions.
BibTeX:
@Article{Becker-Asano2011,
  author =          {Christian Becker-Asano and Hiroshi Ishiguro},
  title =           {Intercultural Differences in Decoding Facial Expressions of The Android Robot Geminoid F},
  journal =         {Journal of Artificial Intelligence and Soft Computing Research},
  year =            {2011},
  volume =          {1},
  number =          {3},
  pages =           {215--231},
  abstract =        {As android robots become increasingly sophisticated in their technical as well as artistic design, their non-verbal expressiveness is getting closer to that of real humans. Accordingly, this paper presents results of two online surveys designed to evaluate a female android's facial display of five basic emotions. Being interested in intercultural differences we prepared both surveys in English, German, as well as Japanese language, and we not only found that in general our design of the emotional expressions "fearful" and "surprised" were often confused, but also that Japanese participants more often confused "angry" with "sad" than the German and English participants. Although facial displays of the same emotions portrayed by the model person of Geminoid F achieved higher recognition rates overall, portraying fearful has been similarly difficult for her. Finally, from the analysis of free responses that the participants were invited to give, a number of interesting further conclusions are drawn that help to clarify the question of how intercultural differences impact on the interpretation of facial displays of an android's emotions.},
  url =             {http://jaiscr.eu/issues.aspx}
}
Takayuki Kanda, Shuichi Nishio, Hiroshi Ishiguro, Norihiro Hagita, "Interactive Humanoid Robots and Androids in Children's Lives", Children, Youth and Environments, vol. 19, no. 1, pp. 12-33, 2009.
Abstract: This paper provides insight into how recent progress in robotics could affect children's lives in the not-so-distant future. We describe two studies in which robots were presented to children in the context of their daily lives. The results of the first study, which was conducted in an elementary school with a mechanical-looking humanoid robot, showed that the robot affected children's behaviors, feelings, and even their friendships. The second study is a case study in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. The results showed that children gradually adapted to conversations with the geminoid and developed an awareness of the personality or presence of the person controlling the geminoid. These studies provide clues to the process of children's adaptation to interactions with robots and particularly how they start treating robots as intelligent beings.
BibTeX:
@Article{Kanda2009,
  author =          {Takayuki Kanda and Shuichi Nishio and Hiroshi Ishiguro and Norihiro Hagita},
  title =           {Interactive Humanoid Robots and Androids in Children's Lives},
  journal =         {Children, Youth and Environments},
  year =            {2009},
  volume =          {19},
  number =          {1},
  pages =           {12--33},
  abstract =        {This paper provides insight into how recent progress in robotics could affect children's lives in the not-so-distant future. We describe two studies in which robots were presented to children in the context of their daily lives. The results of the first study, which was conducted in an elementary school with a mechanical-looking humanoid robot, showed that the robot affected children's behaviors, feelings, and even their friendships. The second study is a case study in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. The results showed that children gradually adapted to conversations with the geminoid and developed an awareness of the personality or presence of the person controlling the geminoid. These studies provide clues to the process of children's adaptation to interactions with robots and particularly how they start treating robots as intelligent beings.},
  file =            {Kanda2009.pdf:Kanda2009.pdf:PDF;19_1_02_HumanoidRobots.pdf:http\://www.colorado.edu/journals/cye/19_1/19_1_02_HumanoidRobots.pdf:PDF},
}
Hiroshi Ishiguro, Shuichi Nishio, "Building artificial humans to understand humans", Journal of Artificial Organs, vol. 10, no. 3, pp. 133-142, September, 2007.
Abstract: If we could build an android as a very humanlike robot, how would we humans distinguish a real human from an android? The answer to this question is not so easy. In human-android interaction, we cannot see the internal mechanism of the android, and thus we may simply believe that it is a human. This means that a human can be defined from two perspectives: one by organic mechanism and the other by appearance. Further, the current rapid progress in artificial organs makes this distinction confusing. The approach discussed in this article is to create artificial humans with humanlike appearances. The developed artificial humans, an android and a geminoid, can be used to improve understanding of humans through psychological and cognitive tests conducted using the artificial humans. We call this new approach to understanding humans android science.
BibTeX:
@Article{Ishiguro2007,
  author =      {Hiroshi Ishiguro and Shuichi Nishio},
  title =       {Building artificial humans to understand humans},
  journal =     {Journal of Artificial Organs},
  year =        {2007},
  volume =      {10},
  number =      {3},
  pages =       {133--142},
  month =       Sep,
  abstract =    {If we could build an android as a very humanlike robot, how would we humans distinguish a real human from an android? The answer to this question is not so easy. In human-android interaction, we cannot see the internal mechanism of the android, and thus we may simply believe that it is a human. This means that a human can be defined from two perspectives: one by organic mechanism and the other by appearance. Further, the current rapid progress in artificial organs makes this distinction confusing. The approach discussed in this article is to create artificial humans with humanlike appearances. The developed artificial humans, an android and a geminoid, can be used to improve understanding of humans through psychological and cognitive tests conducted using the artificial humans. We call this new approach to understanding humans android science.},
  doi =         {10.1007/s10047-007-0381-4},
  file =        {Ishiguro2007.pdf:Ishiguro2007.pdf:PDF},
  institution = {{ATR} Intelligent Robotics and Communication Laboratories, Department of Adaptive Machine Systems, Osaka University, Osaka, Japan.},
  keywords =    {Behavior; Behavioral Sciences, methods; Cognitive Science, methods; Facial Expression; Female; Humans, anatomy /&/ histology/psychology; Male; Movement; Perception; Robotics, instrumentation/methods},
  medline-pst = {ppublish},
  pmid =        {17846711},
  url =         {http://www.springerlink.com/content/pmv076w723140244/}
}
Shuichi Nishio, Hiroshi Ishiguro, Norihiro Hagita, "Can a Teleoperated Android Represent Personal Presence? - A Case Study with Children", Psychologia, vol. 50, no. 4, pp. 330-342, 2007.
Abstract: Our purpose is to investigate the key elements for representing personal presence, which is the sense of being with a certain individual. A case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and the original person are examined, especially concentrating on the case where the target child was the daughter of the geminoid source. Results showed that children gradually became adapted to conversation with the geminoid, but the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on identifying the key elements of personal presence are discussed.
BibTeX:
@Article{Nishio2007,
  author =          {Shuichi Nishio and Hiroshi Ishiguro and Norihiro Hagita},
  title =           {Can a Teleoperated Android Represent Personal Presence? - A Case Study with Children},
  journal =         {Psychologia},
  year =            {2007},
  volume =          {50},
  number =          {4},
  pages =           {330--342},
  abstract =        {Our purpose is to investigate the key elements for representing personal presence, which is the sense of being with a certain individual. A case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and the original person are examined, especially concentrating on the case where the target child was the daughter of the geminoid source. Results showed that children gradually became adapted to conversation with the geminoid, but the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on identifying the key elements of personal presence are discussed.},
  doi =             {10.2117/psysoc.2007.330},
  file =            {Nishio2007.pdf:Nishio2007.pdf:PDF},
  url =             {http://www.jstage.jst.go.jp/article/psysoc/50/4/50_330/_article}
}
Reviewed Conference Papers
Soheil Keshmiri, Hidenobu Sumioka, Junya Nakanishi, Hiroshi Ishiguro, "Emotional State Estimation Using a Modified Gradient-Based Neural Architecture with Weighted Estimates", In The International Joint Conference on Neural Networks (IJCNN 2017), Anchorage, Alaska, USA, May, 2017.
Abstract: We present a minimalist two-hidden-layer neural architecture for emotional state estimation using electroencephalogram (EEG) data. Our model introduces a new meta-parameter, referred to as the reinforced gradient coefficient, to overcome the peculiar vanishing gradient behaviour exhibited by deep neural architectures. This allows our model to further reduce its deviation from the expected prediction to significantly minimize its estimation error. Furthermore, it adopts a weighting step that captures the discrepancy between two consecutive predictions during training. The value of this weighting factor is learned throughout the training phase, given its positive effect on the overall prediction accuracy of the model. We validate our approach through comparative analysis of its performance in contrast with state-of-the-art techniques in the literature, using two well-known EEG databases. Our model shows significant improvement on prediction accuracy of emotional states of human subjects, while maintaining a highly simple, minimalist architecture.
BibTeX:
@InProceedings{Keshmiri2017a,
  author =    {Soheil Keshmiri and Hidenobu Sumioka and Junya Nakanishi and Hiroshi Ishiguro},
  title =     {Emotional State Estimation Using a Modified Gradient-Based Neural Architecture with Weighted Estimates},
  booktitle = {The International Joint Conference on Neural Networks (IJCNN 2017)},
  year =      {2017},
  address =   {Anchorage, Alaska, USA},
  month =     May,
  abstract =  {We present a minimalist two-hidden-layer neural architecture for emotional state estimation using electroencephalogram (EEG) data. Our model introduces a new meta-parameter, referred to as the reinforced gradient coefficient, to overcome the peculiar vanishing gradient behaviour exhibited by deep neural architectures. This allows our model to further reduce its deviation from the expected prediction to significantly minimize its estimation error. Furthermore, it adopts a weighting step that captures the discrepancy between two consecutive predictions during training. The value of this weighting factor is learned throughout the training phase, given its positive effect on the overall prediction accuracy of the model. We validate our approach through comparative analysis of its performance in contrast with state-of-the-art techniques in the literature, using two well-known EEG databases. Our model shows significant improvement on prediction accuracy of emotional states of human subjects, while maintaining a highly simple, minimalist architecture.},
  day =       {18},
  file =      {Keshmiri2017a.pdf:pdf/Keshmiri2017a.pdf:PDF},
  url =       {http://www.ijcnn.org/}
}
Dylan F. Glas, Malcolm Doering, Phoebe Liu, Takayuki Kanda, Hiroshi Ishiguro, "Robot's Delight - A Lyrical Exposition on Learning by Imitation from Human-human Interaction", In 2017 Conference on Human-Robot Interaction (HRI2017) Video Presentation, Vienna, Austria, March, 2017.
Abstract: Now that social robots are beginning to appear in the real world, the question of how to program social behavior is becoming more pertinent than ever. Yet, manual design of interaction scripts and rules can be time-consuming and strongly dependent on the aptitude of a human designer in anticipating the social situations a robot will face. To overcome these challenges, we have proposed the approach of learning interaction logic directly from data captured from natural human-human interactions. While similar in some ways to crowdsourcing approaches like [1], our approach has the benefit of capturing the naturalness and immersion of real interactions, but it faces the added challenges of dealing with sensor noise and an unconstrained action space. In the form of a musical tribute to The Sugarhill Gang's 1979 hit "Rapper's Delight", this video presents a summary of our technique for capturing and reproducing multimodal interactive social behaviors, originally presented in [2], as well as preliminary progress from a new study in which we apply this technique to a stationary android for interactive spoken dialogue.
BibTeX:
@InProceedings{Glas2017,
  author =    {Dylan F. Glas and Malcolm Doering and Phoebe Liu and Takayuki Kanda and Hiroshi Ishiguro},
  title =     {Robot's Delight - A Lyrical Exposition on Learning by Imitation from Human-human Interaction},
  booktitle = {2017 Conference on Human-Robot Interaction (HRI2017) Video Presentation},
  year =      {2017},
  address =   {Vienna, Austria},
  month =     Mar,
  abstract =  {Now that social robots are beginning to appear in the real world, the question of how to program social behavior is becoming more pertinent than ever. Yet, manual design of interaction scripts and rules can be time-consuming and strongly dependent on the aptitude of a human designer in anticipating the social situations a robot will face. To overcome these challenges, we have proposed the approach of learning interaction logic directly from data captured from natural human-human interactions. While similar in some ways to crowdsourcing approaches like [1], our approach has the benefit of capturing the naturalness and immersion of real interactions, but it faces the added challenges of dealing with sensor noise and an unconstrained action space. In the form of a musical tribute to The Sugarhill Gang's 1979 hit "Rapper's Delight", this video presents a summary of our technique for capturing and reproducing multimodal interactive social behaviors, originally presented in [2], as well as preliminary progress from a new study in which we apply this technique to a stationary android for interactive spoken dialogue.},
  doi =       {10.1145/3029798.3036646},
  file =      {Glas2017.pdf:pdf/Glas2017.pdf:PDF},
  url =       {https://youtu.be/CY1WIfPJHqI}
}
Masa Jazbec, Shuichi Nishio, Hiroshi Ishiguro, Masataka Okubo, Christian Penaloza, "Body-swapping Experiment with an Android - Investigation of The Relationship Between Agency and a Sense of Ownership Toward a Different Body", In The 2017 Conference on Human-Robot Interaction (HRI2017), Vienna, Austria, pp. 143-144, March, 2017.
Abstract: The experiment described in this paper is performed within a system that provides a human with the capability to be physically immersed in the body of an android robot, Geminoid HI-2.
BibTeX:
@InProceedings{Jazbec2017,
  author =    {Masa Jazbec and Shuichi Nishio and Hiroshi Ishiguro and Masataka Okubo and Christian Penaloza},
  title =     {Body-swapping Experiment with an Android - Investigation of The Relationship Between Agency and a Sense of Ownership Toward a Different Body},
  booktitle = {The 2017 Conference on Human-Robot Interaction (HRI2017)},
  year =      {2017},
  pages =     {143-144},
  address =   {Vienna, Austria},
  month =     Mar,
  abstract =  {The experiment described in this paper is performed within a system that provides a human with the capability to be physically immersed in the body of an android robot, Geminoid HI-2.},
  url =       {http://humanrobotinteraction.org/2017/}
}
Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro, "Can children anthropomorphize human-shaped communication media?: a pilot study on co-sleeping with a huggable communication medium", In The 4th annual International Conference on Human-Agent Interaction (HAI 2016), Biopolis, Singapore, pp. 103-106, October, 2016.
Abstract: This pilot study reports an experiment in which we introduced huggable communication media into daytime sleep in a co-sleeping situation. The purpose of the experiment was to investigate whether this would help soothe child users to sleep and how the experience of hugging anthropomorphic communication media affects children's anthropomorphic impressions of the media during co-sleeping. In the experiment, nursery teachers read two-year-old or five-year-old children to sleep through a huggable communication medium called Hugvie and asked the children to draw Hugvie before and after the reading to evaluate changes in their impressions of Hugvie. The results show differences between the two classes in sleeping behavior with, and impressions of, Hugvie. Moreover, they also suggest the possibility that co-sleeping with a humanlike communication medium induces children to sleep deeply.
BibTeX:
@InProceedings{Nakanishi2016a,
  author =    {Junya Nakanishi and Hidenobu Sumioka and Hiroshi Ishiguro},
  title =     {Can children anthropomorphize human-shaped communication media?: a pilot study on co-sleeping with a huggable communication medium},
  booktitle = {The 4th annual International Conference on Human-Agent Interaction (HAI 2016)},
  year =      {2016},
  pages =     {103-106},
  address =   {Biopolis, Singapore},
  month =     Oct,
  abstract =  {This pilot study reports an experiment in which we introduced huggable communication media into daytime sleep in a co-sleeping situation. The purpose of the experiment was to investigate whether this would help soothe child users to sleep and how the experience of hugging anthropomorphic communication media affects children's anthropomorphic impressions of the media during co-sleeping. In the experiment, nursery teachers read two-year-old or five-year-old children to sleep through a huggable communication medium called Hugvie and asked the children to draw Hugvie before and after the reading to evaluate changes in their impressions of Hugvie. The results show differences between the two classes in sleeping behavior with, and impressions of, Hugvie. Moreover, they also suggest the possibility that co-sleeping with a humanlike communication medium induces children to sleep deeply.},
  file =      {Nakanishi2016a.pdf:pdf/Nakanishi2016a.pdf:PDF},
  url =       {http://hai-conference.net/hai2016/}
}
Carlos T. Ishi, Tomo Funayama, Takashi Minato, Hiroshi Ishiguro, "Motion generation in android robots during laughing speech", In The 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, DaeJeon, Korea, pp. 3327-3332, October, 2016.
Abstract: We are dealing with the problem of generating natural human-like motions during speech in android robots, which have human-like appearances. So far, automatic generation methods have been proposed for lip and head motions of tele-presence robots, based on the speech signal of the tele-operator. In the present study, we aim at extending the speech-driven motion generation methods to laughing speech, since laughter often occurs in natural dialogue interactions and may cause miscommunication if there is a mismatch between audio and visual modalities. Based on analysis results of human behaviors during laughing speech, we propose a motion generation method given the speech signal and the laughing speech intervals. Subjective experiments were conducted using our android robot by generating five different motion types, considering several modalities. Evaluation results show the effectiveness of controlling different parts of the face, head and upper body (eyelid narrowing, lip corner/cheek raising, eye blinking, head motion and upper body motion control).
BibTeX:
@InProceedings{Ishi2016b,
  author =    {Carlos T. Ishi and Tomo Funayama and Takashi Minato and Hiroshi Ishiguro},
  title =     {Motion generation in android robots during laughing speech},
  booktitle = {The 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year =      {2016},
  pages =     {3327-3332},
  address =   {DaeJeon, Korea},
  month =     Oct,
  abstract =  {We are dealing with the problem of generating natural human-like motions during speech in android robots, which have human-like appearances. So far, automatic generation methods have been proposed for lip and head motions of tele-presence robots, based on the speech signal of the tele-operator. In the present study, we aim at extending the speech-driven motion generation methods to laughing speech, since laughter often occurs in natural dialogue interactions and may cause miscommunication if there is a mismatch between audio and visual modalities. Based on analysis results of human behaviors during laughing speech, we propose a motion generation method given the speech signal and the laughing speech intervals. Subjective experiments were conducted using our android robot by generating five different motion types, considering several modalities. Evaluation results show the effectiveness of controlling different parts of the face, head and upper body (eyelid narrowing, lip corner/cheek raising, eye blinking, head motion and upper body motion control).},
  file =      {Ishi2016b.pdf:pdf/Ishi2016b.pdf:PDF},
  url =       {http://www.iros2016.org/}
}
Carlos T. Ishi, Chaoran Liu, Jani Even, Norihiro Hagita, "Hearing support system using environment sensor network", In The 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, DaeJeon, Korea, pp. 1275-1280, October, 2016.
Abstract: In order to solve the problems of current hearing aid devices, we make use of sound environment intelligence technologies, and propose a hearing support system, where individual target and anti-target sound sources in the environment can be selected, and spatial information of the target sound sources is reconstructed. The performance of the sound separation module was evaluated for different noise conditions. Results showed that signal-to-noise ratios of around 15dB could be achieved by the proposed system for a 65dB babble noise plus directional music noise condition. In the same noise condition, subjective intelligibility tests were conducted, and an improvement of 65 to 90% word intelligibility rates could be achieved by using the proposed hearing support system.
BibTeX:
@InProceedings{Ishi2016c,
  author =    {Carlos T. Ishi and Chaoran Liu and Jani Even and Norihiro Hagita},
  title =     {Hearing support system using environment sensor network},
  booktitle = {The 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year =      {2016},
  pages =     {1275-1280},
  address =   {DaeJeon, Korea},
  month =     Oct,
  abstract =  {In order to solve the problems of current hearing aid devices, we make use of sound environment intelligence technologies, and propose a hearing support system, where individual target and anti-target sound sources in the environment can be selected, and spatial information of the target sound sources is reconstructed. The performance of the sound separation module was evaluated for different noise conditions. Results showed that signal-to-noise ratios of around 15dB could be achieved by the proposed system for a 65dB babble noise plus directional music noise condition. In the same noise condition, subjective intelligibility tests were conducted, and an improvement of 65 to 90% word intelligibility rates could be achieved by using the proposed hearing support system.},
  file =      {Ishi2016c.pdf:pdf/Ishi2016c.pdf:PDF},
  url =       {http://www.iros2016.org/}
}
Takahisa Uchida, Takashi Minato, Hiroshi Ishiguro, "Does a Conversational Robot Need to Have its own Values? A Study of Dialogue Strategy to Enhance People's Motivation to Use Autonomous Conversational Robots", In the 4th annual International Conference on Human-Agent Interaction (HAI 2016), Singapore, pp. 187-192, October, 2016.
Abstract: This work studies a dialogue strategy aimed at building people's motivation to talk with autonomous conversational robots. Spoken dialogue systems have recently developed rapidly, but the existing systems are insufficient for continuous use because they fail to inspire the user's motivation to talk with them. One of the reasons is that users fail to interpret an intention behind the system's utterances based on its values. Given that people learn each other's values and change their own values in human-human conversations, we hypothesize that a dialogue strategy that makes the user saliently feel the difference between his or her values and the system's promotes the motivation for dialogue. An experiment evaluating human-human dialogue supported our hypothesis. However, an experiment with human-android dialogue did not produce the same result, suggesting that people did not attribute values to the android. For a conversational robot, further techniques are needed to make people believe the robot speaks based on its own values.
BibTeX:
@InProceedings{Uchida2016a,
  author =    {Takahisa Uchida and Takashi Minato and Hiroshi Ishiguro},
  title =     {Does a Conversational Robot Need to Have its own Values? A Study of Dialogue Strategy to Enhance People's Motivation to Use Autonomous Conversational Robots},
  booktitle = {the 4th annual International Conference on Human-Agent Interaction (HAI 2016)},
  year =      {2016},
  pages =     {187-192},
  address =   {Singapore},
  month =     Oct,
  abstract =  {This work studies a dialogue strategy aimed at building people's motivation to talk with autonomous conversational robots. Spoken dialogue systems have recently developed rapidly, but the existing systems are insufficient for continuous use because they fail to inspire the user's motivation to talk with them. One of the reasons is that users fail to interpret an intention behind the system's utterances based on its values. Given that people learn each other's values and change their own values in human-human conversations, we hypothesize that a dialogue strategy that makes the user saliently feel the difference between his or her values and the system's promotes the motivation for dialogue. An experiment evaluating human-human dialogue supported our hypothesis. However, an experiment with human-android dialogue did not produce the same result, suggesting that people did not attribute values to the android. For a conversational robot, further techniques are needed to make people believe the robot speaks based on its own values.},
  file =      {Uchida2016a.pdf:pdf/Uchida2016a.pdf:PDF},
  url =       {http://hai-conference.net/hai2016/}
}
Dylan F. Glas, Takashi Minato, Carlos T. Ishi, Tatsuya Kawahara, Hiroshi Ishiguro, "ERICA: The ERATO Intelligent Conversational Android", In the IEEE International Symposium on Robot and Human Interactive Communication for 2016, New York, NY, USA, pp. 22-29, August, 2016.
Abstract: The development of an android with convincingly lifelike appearance and behavior has been a long-standing goal in robotics, and recent years have seen great progress in many of the technologies needed to create such androids. However, it is necessary to actually integrate these technologies into a robot system in order to assess the progress that has been made towards this goal and to identify important areas for future work. To this end, we are developing ERICA, an autonomous android system capable of conversational interaction, comprised of state-of-the-art component technologies, and arguably the most humanlike android built to date. Although the project is ongoing, initial development of the basic android platform has been completed. In this paper we present an overview of the requirements and design of the platform, describe the development process of an interactive application, report on ERICA's first autonomous public demonstration, and discuss the main technical challenges that remain to be addressed in order to create humanlike, autonomous androids.
BibTeX:
@InProceedings{Glas2016b,
  author =    {Dylan F. Glas and Takashi Minato and Carlos T. Ishi and Tatsuya Kawahara and Hiroshi Ishiguro},
  title =     {ERICA: The ERATO Intelligent Conversational Android},
  booktitle = {the IEEE International Symposium on Robot and Human Interactive Communication for 2016},
  year =      {2016},
  pages =     {22-29},
  address =   {New York, NY, USA},
  month =     Aug,
  abstract =  {The development of an android with convincingly lifelike appearance and behavior has been a long-standing goal in robotics, and recent years have seen great progress in many of the technologies needed to create such androids. However, it is necessary to actually integrate these technologies into a robot system in order to assess the progress that has been made towards this goal and to identify important areas for future work. To this end, we are developing ERICA, an autonomous android system capable of conversational interaction, comprised of state-of-the-art component technologies, and arguably the most humanlike android built to date. Although the project is ongoing, initial development of the basic android platform has been completed. In this paper we present an overview of the requirements and design of the platform, describe the development process of an interactive application, report on ERICA's first autonomous public demonstration, and discuss the main technical challenges that remain to be addressed in order to create humanlike, autonomous androids.},
  file =      {Glas2016b.pdf:pdf/Glas2016b.pdf:PDF},
  url =       {http://www.ro-man2016.org/}
}
Takahisa Uchida, Takashi Minato, Hiroshi Ishiguro, "A Values-based Dialogue Strategy to Build Motivation for Conversation with Autonomous Conversational Robots", In The 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016), Teachers College, Columbia University, USA, pp. 206-211, August, 2016.
Abstract: The goal of this study is to develop a humanoid robot that can hold continuous conversations with people. Although spoken dialogue systems have developed rapidly in recent years, existing systems are not used continuously, since they do not sufficiently promote users' motivation to talk with them. This is because a user cannot feel that a robot has its own intention; it is therefore necessary for a robot to have its own values, so that users sense intentionality in what it says. This paper focuses on a dialogue strategy to promote people's motivation when the robot is assumed to have a values-based dialogue system. People's motivation can be influenced by both the intentionality and the affinity of the robot. We hypothesized that there is a disagreement/agreement ratio in conversation that best balances people's feelings of intentionality and affinity. The results of a psychological experiment using an android robot partially supported our hypothesis.
BibTeX:
@InProceedings{Uchida2016,
  author =    {Takahisa Uchida and Takashi Minato and Hiroshi Ishiguro},
  title =     {A Values-based Dialogue Strategy to Build Motivation for Conversation with Autonomous Conversational Robots},
  booktitle = {The 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016)},
  year =      {2016},
  pages =     {206-211},
  address =   {Teachers College, Columbia University, USA},
  month =     Aug,
  abstract =  {The goal of this study is to develop a humanoid robot that can hold continuous conversations with people. Although spoken dialogue systems have developed rapidly in recent years, existing systems are not used continuously, since they do not sufficiently promote users' motivation to talk with them. This is because a user cannot feel that a robot has its own intention; it is therefore necessary for a robot to have its own values, so that users sense intentionality in what it says. This paper focuses on a dialogue strategy to promote people's motivation when the robot is assumed to have a values-based dialogue system. People's motivation can be influenced by both the intentionality and the affinity of the robot. We hypothesized that there is a disagreement/agreement ratio in conversation that best balances people's feelings of intentionality and affinity. The results of a psychological experiment using an android robot partially supported our hypothesis.},
  file =      {Uchida2016.pdf:pdf/Uchida2016.pdf:PDF},
  url =       {http://ro-man2016.org/}
}
Phoebe Liu, Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, "Learning Interactive Behavior for Service Robots - The Challenge of Mixed-Initiative Interaction", In The 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016) Workshop on Behavior Adaptation, Interaction and Learning for Assistive Robotics (BAILAR), New York, NY, USA, August, 2016.
Abstract: Learning-by-imitation approaches for developing human-robot interaction logic are relatively new, but they have been gaining popularity in the research community in recent years. Learning interaction logic from human-human interaction data provides several benefits over explicit programming, including a reduced level of effort for interaction design and the ability to capture unconscious, implicit social rules that are difficult to articulate or program. In previous work, we have shown a technique capable of learning behavior logic for a service robot in a shopping scenario, based on non-annotated speech and motion data from human-human example interactions. That approach was effective in reproducing reactive behavior, such as question-answer interactions. In our current work (still in progress), we are focusing on reproducing mixed-initiative interactions which include proactive behavior on the part of the robot. We have collected a much more challenging data set featuring high variability of behavior and proactive behavior in response to backchannel utterances. We are currently investigating techniques for reproducing this mixed-initiative behavior and for adapting the robot's behavior to customers with different needs.
BibTeX:
@InProceedings{Liu2016a,
  author =    {Phoebe Liu and Dylan F. Glas and Takayuki Kanda and Hiroshi Ishiguro},
  title =     {Learning Interactive Behavior for Service Robots - The Challenge of Mixed-Initiative Interaction},
  booktitle = {The 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016) Workshop on Behavior Adaptation, Interaction and Learning for Assistive Robotics (BAILAR)},
  year =      {2016},
  address =   {New York, NY, USA},
  month =     Aug,
  abstract =  {Learning-by-imitation approaches for developing human-robot interaction logic are relatively new, but they have been gaining popularity in the research community in recent years. Learning interaction logic from human-human interaction data provides several benefits over explicit programming, including a reduced level of effort for interaction design and the ability to capture unconscious, implicit social rules that are difficult to articulate or program. In previous work, we have shown a technique capable of learning behavior logic for a service robot in a shopping scenario, based on non-annotated speech and motion data from human-human example interactions. That approach was effective in reproducing reactive behavior, such as question-answer interactions. In our current work (still in progress), we are focusing on reproducing mixed-initiative interactions which include proactive behavior on the part of the robot. We have collected a much more challenging data set featuring high variability of behavior and proactive behavior in response to backchannel utterances. We are currently investigating techniques for reproducing this mixed-initiative behavior and for adapting the robot's behavior to customers with different needs.},
  file =      {Liu2016a.pdf:pdf/Liu2016a.pdf:PDF},
}
Kurima Sakai, Takashi Minato, Carlos T. Ishi, Hiroshi Ishiguro, "Speech Driven Trunk Motion Generating System Based on Physical Constraint", In The 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016), Teachers College, Columbia University, USA, pp. 232-239, August, 2016.
Abstract: We developed a method to automatically generate humanlike trunk motions (neck and waist motions) for a conversational android from its speech in real time. It is based on a spring-damper dynamical model that simulates the trunk movements accompanying human speech. Unlike existing methods based on machine learning, our system can easily modulate the generated motions according to speech patterns, since the parameters in the model correspond to muscular hardness. The experimental results showed that the android motions generated by our model are more natural and enhance participants' motivation to talk, compared with copies of human motions.
BibTeX:
@InProceedings{Sakai2016,
  author =    {Kurima Sakai and Takashi Minato and Carlos T. Ishi and Hiroshi Ishiguro},
  title =     {Speech Driven Trunk Motion Generating System Based on Physical Constraint},
  booktitle = {The 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016)},
  year =      {2016},
  pages =     {232-239},
  address =   {Teachers College, Columbia University, USA},
  month =     Aug,
  abstract =  {We developed a method to automatically generate humanlike trunk motions (neck and waist motions) for a conversational android from its speech in real time. It is based on a spring-damper dynamical model that simulates the trunk movements accompanying human speech. Unlike existing methods based on machine learning, our system can easily modulate the generated motions according to speech patterns, since the parameters in the model correspond to muscular hardness. The experimental results showed that the android motions generated by our model are more natural and enhance participants' motivation to talk, compared with copies of human motions.},
  file =      {Sakai2016.pdf:pdf/Sakai2016.pdf:PDF},
  url =       {http://ro-man2016.org/}
}
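The spring-damper trunk model described in the abstract above can be illustrated with a minimal simulation. All names and parameter values here are illustrative assumptions, not the paper's tuned values; the stiffness `k` loosely plays the role of "muscular hardness", and the model is driven by a speech-energy envelope:

```python
def simulate_trunk_angle(envelope, k=40.0, c=8.0, m=1.0, dt=0.01):
    """Integrate the spring-damper model m*x'' = k*(u - x) - c*x',
    where u is a speech-energy envelope sample per time step.
    Returns the trunk-angle trajectory x over time."""
    x, v = 0.0, 0.0
    trajectory = []
    for u in envelope:
        a = (k * (u - x) - c * v) / m  # spring pulls toward u, damper resists
        v += a * dt                    # semi-implicit Euler integration
        x += v * dt
        trajectory.append(x)
    return trajectory

# Two seconds of speech followed by four seconds of silence: the trunk
# leans while speaking, then settles smoothly back to rest.
trajectory = simulate_trunk_angle([1.0] * 200 + [0.0] * 400)
```

Because the response is governed by a physical model rather than learned examples, changing `k` or `c` directly trades off briskness against smoothness of the generated motion.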
Hiroaki Hatano, Carlos T. Ishi, Tsuyoshi Komatsubara, Masahiro Shiomi, Takayuki Kanda, "Analysis of laughter events and social status of children in classrooms", In Speech Prosody 2016 (Speech Prosody 8), Boston, USA, pp. 1004-1008, May, 2016.
Abstract: Aiming at analyzing the social interactions of children, we collected data in a science classroom of an elementary school using a system we developed that identifies who is talking, when, and where in an environment, based on the integration of multiple microphone arrays and human tracking technologies. In the present work, among the sound activities in the classroom, we focused on laughter events, since laughter conveys important social functions in communication and is a possible cue for identifying social status. Social status is often studied in educational and developmental research, as it is importantly related to children's social and academic life. Laughter events were extracted by making use of visual displays of the spatial-temporal information provided by the developed system, while social status was quantified based on a sociometry questionnaire. Analysis results revealed that the number of laughter events was significantly higher in children with high social status than in those with low social status. The relationship between laughter type and social status was also investigated.
BibTeX:
@InProceedings{Hatano2016,
  author =          {Hiroaki Hatano and Carlos T. Ishi and Tsuyoshi Komatsubara and Masahiro Shiomi and Takayuki Kanda},
  title =           {Analysis of laughter events and social status of children in classrooms},
  booktitle =       {Speech Prosody 2016 (Speech Prosody 8)},
  year =            {2016},
  pages =           {1004-1008},
  address =         {Boston, USA},
  month =           May,
  abstract =        {Aiming at analyzing the social interactions of children, we collected data in a science classroom of an elementary school using a system we developed that identifies who is talking, when, and where in an environment, based on the integration of multiple microphone arrays and human tracking technologies. In the present work, among the sound activities in the classroom, we focused on laughter events, since laughter conveys important social functions in communication and is a possible cue for identifying social status. Social status is often studied in educational and developmental research, as it is importantly related to children's social and academic life. Laughter events were extracted by making use of visual displays of the spatial-temporal information provided by the developed system, while social status was quantified based on a sociometry questionnaire. Analysis results revealed that the number of laughter events was significantly higher in children with high social status than in those with low social status. The relationship between laughter type and social status was also investigated.},
  file =            {Hatano2016.pdf:pdf/Hatano2016.pdf:PDF},
  keywords =        {laughter, social status, children, natural conversation, real environment},
  url =             {http://sites.bu.edu/speechprosody2016/}
}
Carlos T. Ishi, Hiroaki Hatano, Hiroshi Ishiguro, "Audiovisual analysis of relations between laughter types and laughter motions", In Speech Prosody 2016 (Speech Prosody 8), Boston, USA, pp. 806-810, May, 2016.
Abstract: Laughter commonly occurs in daily interactions, and is not simply related to funny situations but also expresses attitudes, serving important social functions in communication. The background of the present work is the generation of natural motions in a humanoid robot, where miscommunication may be caused by a mismatch between the audio and visual modalities, especially in laughter intervals. In the present work, we analyze a multimodal dialogue database and investigate the relations between different types of laughter (such as production type, laughing style, and laughter function) and the facial expressions, head motions, and body motions during laughter.
BibTeX:
@InProceedings{Ishi2016,
  author =    {Carlos T. Ishi and Hiroaki Hatano and Hiroshi Ishiguro},
  title =     {Audiovisual analysis of relations between laughter types and laughter motions},
  booktitle = {Speech Prosody 2016 (Speech Prosody 8)},
  year =      {2016},
  pages =     {806-810},
  address =   {Boston, USA},
  month =     May,
  abstract =  {Laughter commonly occurs in daily interactions, and is not simply related to funny situations but also expresses attitudes, serving important social functions in communication. The background of the present work is the generation of natural motions in a humanoid robot, where miscommunication may be caused by a mismatch between the audio and visual modalities, especially in laughter intervals. In the present work, we analyze a multimodal dialogue database and investigate the relations between different types of laughter (such as production type, laughing style, and laughter function) and the facial expressions, head motions, and body motions during laughter.},
  file =      {Ishi2016.pdf:pdf/Ishi2016.pdf:PDF},
  url =       {http://sites.bu.edu/speechprosody2016/}
}
Dylan F. Glas, Takayuki Kanda, Hiroshi Ishiguro, "Human-Robot Interaction Design using Interaction Composer - Eight Years of Lessons Learned", In 11th ACM/IEEE International Conference on Human-Robot Interaction, Christchurch, New Zealand, pp. 303-310, March, 2016.
Abstract: Interaction Composer, a visual programming environment designed to enable programmers and non-programmers to collaboratively design human-robot interactions in the form of state-based flows, has been in use at our laboratory for eight years. The system architecture and the design principles behind the framework have been presented in other work. In this paper, we take a case-study approach, examining several actual examples of the use of this toolkit over an eight-year period. We examine the structure and content of interaction flows, identify recurring design patterns, and observe which elements of the framework have proven valuable, as well as documenting its failures: features which did not solve their intended purposes, and workarounds which might be better addressed by different approaches. It is hoped that the insights gained from this study will contribute to the development of more effective and more usable tools and frameworks for interaction design.
BibTeX:
@InProceedings{Glas2016a,
  author =    {Dylan F. Glas and Takayuki Kanda and Hiroshi Ishiguro},
  title =     {Human-Robot Interaction Design using Interaction Composer - Eight Years of Lessons Learned},
  booktitle = {11th ACM/IEEE International Conference on Human-Robot Interaction},
  year =      {2016},
  pages =     {303-310},
  address =   {Christchurch, New Zealand},
  month =     Mar,
  abstract =  {Interaction Composer, a visual programming environment designed to enable programmers and non-programmers to collaboratively design human-robot interactions in the form of state-based flows, has been in use at our laboratory for eight years. The system architecture and the design principles behind the framework have been presented in other work. In this paper, we take a case-study approach, examining several actual examples of the use of this toolkit over an eight-year period. We examine the structure and content of interaction flows, identify recurring design patterns, and observe which elements of the framework have proven valuable, as well as documenting its failures: features which did not solve their intended purposes, and workarounds which might be better addressed by different approaches. It is hoped that the insights gained from this study will contribute to the development of more effective and more usable tools and frameworks for interaction design.},
  file =      {Glas2016a.pdf:pdf/Glas2016a.pdf:PDF},
  url =       {http://humanrobotinteraction.org/2016/}
}
Hidenobu Sumioka, Yuichiro Yoshikawa, Yasuo Wada, Hiroshi Ishiguro, "Teachers' impressions on robots for therapeutic applications", In International Workshop on Intervention of Children with Autism Spectrum Disorders using a Humanoid Robot, Kanagawa, Japan, pp. (ASD-HR2), November, 2015.
Abstract: Autism spectrum disorders (ASD) can cause lifelong challenges. However, there are a variety of therapeutic and educational approaches, any of which may have educational benefits in some, but not all, individuals with ASD. Given recent rapid technological advances, it has been argued that specific robotic applications could be effectively harnessed to provide innovative clinical treatments for children with ASD. There have, however, been few exchanges between psychiatrists and robotics researchers, although such exchanges are now beginning to occur. In this symposium, to promote a worldwide interdisciplinary discussion about potential robotic applications in the ASD field, pioneering research activities using robots for children with ASD are introduced by psychiatrists and robotics researchers.
BibTeX:
@InProceedings{Sumioka2015c,
  Title                    = {Teachers' impressions on robots for therapeutic applications},
  Author                   = {Hidenobu Sumioka and Yuichiro Yoshikawa and Yasuo Wada and Hiroshi Ishiguro},
  Booktitle                = {International Workshop on Intervention of Children with Autism Spectrum Disorders using a Humanoid Robot},
  Year                     = {2015},

  Address                  = {Kanagawa, Japan},
  Month                    = NOV,
  Pages                    = {(ASD-HR2)},

  Abstract                 = {Autism spectrum disorders (ASD) can cause lifelong challenges. However, there are a variety of therapeutic and educational approaches, any of which may have educational benefits in some, but not all, individuals with ASD. Given recent rapid technological advances, it has been argued that specific robotic applications could be effectively harnessed to provide innovative clinical treatments for children with ASD. There have, however, been few exchanges between psychiatrists and robotics researchers, although such exchanges are now beginning to occur. In this symposium, to promote a worldwide interdisciplinary discussion about potential robotic applications in the ASD field, pioneering research activities using robots for children with ASD are introduced by psychiatrists and robotics researchers.},
  File                     = {Sumioka2015c.pdf:pdf/Sumioka2015c.pdf:PDF},
  Grant                    = {ERATO},
  Language                 = {en},
  Reviewed                 = {y},
  Url                      = {https://sites.google.com/site/asdhr2015/home}
}
Hiroaki Hatano, Carlos T. Ishi, Makiko Matsuda, "Automatic evaluation for accentuation of Japanese read speech", In International Workshop Construction of Digital Resources for Learning Japanese, Italy, pp. 4-5 (Abstracts), October, 2015.
Abstract: The purpose of our research is to develop a method for the automatic evaluation of Japanese accentuation based on acoustic features. For this purpose, we use "Julius", a large-vocabulary continuous speech recognition decoder, to segment speech into phonemes. We employed an open-source database for the analysis: read speech by 10 native speakers each of Japanese and Chinese, selected from "The Contrastive Linguistic Database for Japanese Language Learners' Spoken Language in Japanese and their First Languages". The accent unit is the "bunsetsu", which consists of a word and its particles. The total number of units is about 2,500 (10 speakers * 2 native languages * about 125 "bunsetsu"). The accent type of each unit was judged by a native speaker of Japanese (a Japanese-language teacher) and a native speaker of Chinese (a Japanese-language learner with N1 certification), and these judgments served as the reference data for verifying our method. We extracted fundamental frequencies (F0) from each vowel portion of the read speech and checked whether the difference in F0 between adjacent vowels exceeded a threshold. We computed each vowel section's F0 value not only as an average but also by median and extrapolation. As a result, our method showed 70~80% agreement with the human assessments. It seems reasonable to conclude that our proposed method for evaluating accentuation has native-like accuracy.
BibTeX:
@InProceedings{Hatano2015a,
  author =    {Hiroaki Hatano and Carlos T. Ishi and Makiko Matsuda},
  title =     {Automatic evaluation for accentuation of Japanese read speech},
  booktitle = {International Workshop Construction of Digital Resources for Learning Japanese},
  year =      {2015},
  pages =     {4-5 (Abstracts)},
  address =   {Italy},
  month =     Oct,
  abstract =  {The purpose of our research is to develop a method for the automatic evaluation of Japanese accentuation based on acoustic features. For this purpose, we use "Julius", a large-vocabulary continuous speech recognition decoder, to segment speech into phonemes. We employed an open-source database for the analysis: read speech by 10 native speakers each of Japanese and Chinese, selected from "The Contrastive Linguistic Database for Japanese Language Learners' Spoken Language in Japanese and their First Languages". The accent unit is the "bunsetsu", which consists of a word and its particles. The total number of units is about 2,500 (10 speakers * 2 native languages * about 125 "bunsetsu"). The accent type of each unit was judged by a native speaker of Japanese (a Japanese-language teacher) and a native speaker of Chinese (a Japanese-language learner with N1 certification), and these judgments served as the reference data for verifying our method. We extracted fundamental frequencies (F0) from each vowel portion of the read speech and checked whether the difference in F0 between adjacent vowels exceeded a threshold. We computed each vowel section's F0 value not only as an average but also by median and extrapolation. As a result, our method showed 70~80% agreement with the human assessments. It seems reasonable to conclude that our proposed method for evaluating accentuation has native-like accuracy.},
  file =      {Hatano2015a.pdf:pdf/Hatano2015a.pdf:PDF},
  url =       {https://events.unibo.it/dit-workshop-japanese-digital-resources}
}
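The core of the evaluation described above, comparing the F0 of adjacent vowels against a threshold, can be sketched as follows. The semitone conversion and the 1.5-semitone threshold are illustrative assumptions, not the paper's tuned values:

```python
import math

def accent_pattern(f0_values, threshold_semitones=1.5):
    """Label each vowel-to-vowel transition as rising (H), falling (L),
    or flat (=) by comparing adjacent F0 values on a semitone scale.
    f0_values: one F0 estimate (Hz) per vowel in a bunsetsu."""
    pattern = []
    for prev, curr in zip(f0_values, f0_values[1:]):
        diff = 12.0 * math.log2(curr / prev)  # pitch change in semitones
        if diff > threshold_semitones:
            pattern.append("H")
        elif diff < -threshold_semitones:
            pattern.append("L")
        else:
            pattern.append("=")
    return "".join(pattern)

# A rise followed by a sharp accent fall, then a flat continuation:
print(accent_pattern([110.0, 130.0, 100.0, 98.0]))  # -> "HL="
```

Matching such automatically derived patterns against the reference accent-type judgments yields the agreement rate reported in the abstract.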
Jani Even, Florent B.B. Ferreri, Atsushi Watanabe, Luis Y. S. Morales, Carlos T. Ishi, Norihiro Hagita, "Audio Augmented Point Clouds for Applications in Robotics", In The 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, pp. 4846-4851, September, 2015.
Abstract: This paper presents a method for representing acoustic information with point clouds by tying it to geometrical features. The motivation is to create a representation of this information that is well suited for mobile robotic applications. In particular, the proposed approach is designed to take advantage of the use of multiple coordinate frames. As an illustrative example, we present a way to create an audio augmented point cloud by adding estimated audio power to the point cloud created by an RGB-D camera. A few applications of this method are presented.
BibTeX:
@InProceedings{Jani2015a,
  Title                    = {Audio Augmented Point Clouds for Applications in Robotics},
  Author                   = {Jani Even and Florent B.B. Ferreri and Atsushi Watanabe and Luis Y. S. Morales and Carlos T. Ishi and Norihiro Hagita},
  Booktitle                = {The 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  Year                     = {2015},

  Address                  = {Hamburg, Germany},
  Month                    = SEP,
  Pages                    = {4846-4851},

  Abstract                 = {This paper presents a method for representing acoustic information with point clouds by tying it to geometrical features. The motivation is to create a representation of this information that is well suited for mobile robotic applications. In particular, the proposed approach is designed to take advantage of the use of multiple coordinate frames. As an illustrative example, we present a way to create an audio augmented point cloud by adding estimated audio power to the point cloud created by an RGB-D camera. A few applications of this method are presented.},
  File                     = {Jani2015a.pdf:pdf/Jani2015a.pdf:PDF},
  Grant                    = {ERATO},
  Language                 = {en},
  Reviewed                 = {y}
}
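The idea of attaching estimated audio power to geometric points can be sketched in a few lines. The Gaussian beam profile below is a hypothetical stand-in for a real microphone array's spatial response, and all names and values are illustrative, not the paper's:

```python
import math

def audio_augment(points, doa_azimuth_deg, source_power_db, beamwidth_deg=20.0):
    """Attach an estimated audio-power value (dB) to each 3-D point by
    weighting the source power with the angular distance between the
    point's azimuth (seen from an array at the origin) and the sound's
    direction of arrival. Returns (x, y, z, power_db) tuples."""
    augmented = []
    for (x, y, z) in points:
        az = math.degrees(math.atan2(y, x))             # point azimuth
        delta = abs((az - doa_azimuth_deg + 180) % 360 - 180)
        weight = math.exp(-0.5 * (delta / beamwidth_deg) ** 2)
        power = source_power_db + 10 * math.log10(max(weight, 1e-12))
        augmented.append((x, y, z, power))
    return augmented

# One point straight ahead of the array, one off to the side:
cloud = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
aug = audio_augment(cloud, doa_azimuth_deg=0.0, source_power_db=60.0)
```

Because the audio channel rides along with the points, it survives the same coordinate-frame transforms as the rest of the cloud, which is the property the paper exploits for mobile robots.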
Carlos T. Ishi, Even Jani, Norihiro Hagita, "Speech activity detection and face orientation estimation using multiple microphone arrays and human position information", In The 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, pp. 5574-5579, September, 2015.
Abstract: We developed a system for detecting the speech intervals of multiple speakers by combining multiple microphone arrays and human tracking technologies. We also proposed a method for estimating the face orientation of the detected speakers. The developed system was evaluated in two steps: individual utterances in different positions and orientations, and simultaneous dialogues by multiple speakers. Evaluation results revealed that the proposed system could detect speech intervals with more than 94% accuracy, and face orientations with standard deviations within 30 degrees, in situations excluding cases where all arrays are in the direction opposite to the speaker's face orientation.
BibTeX:
@InProceedings{Ishi2015b,
  author =    {Carlos T. Ishi and Even Jani and Norihiro Hagita},
  title =     {Speech activity detection and face orientation estimation using multiple microphone arrays and human position information},
  booktitle = {The 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year =      {2015},
  pages =     {5574-5579},
  address =   {Hamburg, Germany},
  month =     SEP,
  abstract =  {We developed a system for detecting the speech intervals of multiple speakers by combining multiple microphone arrays and human tracking technologies. We also proposed a method for estimating the face orientation of the detected speakers. The developed system was evaluated in two steps: individual utterances in different positions and orientations, and simultaneous dialogues by multiple speakers. Evaluation results revealed that the proposed system could detect speech intervals with more than 94% accuracy, and face orientations with standard deviations within 30 degrees, in situations excluding cases where all arrays are in the direction opposite to the speaker's face orientation.},
  file =      {Ishi2015b.pdf:pdf/Ishi2015b.pdf:PDF},
}
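The combination of tracked speaker positions with a microphone array's direction-of-arrival estimate can be sketched as a nearest-direction assignment. The 15-degree tolerance and all names here are illustrative assumptions, not the paper's parameters:

```python
import math

def assign_speech(speaker_positions, array_position, doa_deg, tolerance_deg=15.0):
    """Attribute a detected sound to a tracked person: the sound is
    assigned to the speaker whose bearing from the array is closest to
    the estimated direction of arrival, within a tolerance.
    speaker_positions: {id: (x, y)} on the floor plane.
    Returns the speaker id, or None if nobody matches."""
    best_id, best_err = None, tolerance_deg
    for sid, (x, y) in speaker_positions.items():
        bearing = math.degrees(math.atan2(y - array_position[1],
                                          x - array_position[0]))
        err = abs((bearing - doa_deg + 180) % 360 - 180)  # wrapped angle
        if err < best_err:
            best_id, best_err = sid, err
    return best_id

people = {"A": (2.0, 0.0), "B": (0.0, 2.0)}  # tracked positions
speaker = assign_speech(people, array_position=(0.0, 0.0), doa_deg=85.0)
```

With several arrays, repeating this per array and voting across them is one way to resolve the ambiguous cases a single array cannot.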
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "BCI-teleoperated androids; A study of embodiment and its effect on motor imagery learning", In Workshop "Quo Vadis Robotics & Intelligent Systems" in the IEEE 19th International Conference on Intelligent Engineering Systems 2015, Bratislava, Slovakia, September, 2015.
Abstract: This paper presents a brain-computer interface (BCI) system developed for the tele-operation of a very humanlike android. Employing this system, we review two studies that give insights into the cognitive mechanisms of agency and body ownership during BCI control, as well as feedback designs for the optimization of users' BCI skills. In the first experiment, operators experienced an illusion of embodiment (in terms of body ownership and agency) of the robot's body only by imagining a movement (motor imagery) and watching the robot perform it. Using the same setup, we further discovered that, during BCI operation of the android, biasing the timing and accuracy of the performance feedback could improve operators' modulation of brain activities during the motor imagery task. Our experiments showed that the motor imagery skills acquired through this technique were not limited to the android robot and had long-lasting effects on other BCI usage as well. Therefore, by focusing on the human side of BCIs and demonstrating a relationship between the body ownership sensation and motor imagery learning, our BCI teleoperation system offers a new and efficient platform for general BCI applications.
BibTeX:
@InProceedings{Alimardani2015,
  author =    {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title =     {BCI-teleoperated androids; A study of embodiment and its effect on motor imagery learning},
  booktitle = {Workshop "Quo Vadis Robotics \& Intelligent Systems" in the IEEE 19th International Conference on Intelligent Engineering Systems 2015},
  year =      {2015},
  address =   {Bratislava, Slovakia},
  month =     Sep,
  abstract =  {This paper presents a brain-computer interface (BCI) system developed for the tele-operation of a very humanlike android. Employing this system, we review two studies that give insights into the cognitive mechanisms of agency and body ownership during BCI control, as well as feedback designs for the optimization of users' BCI skills. In the first experiment, operators experienced an illusion of embodiment (in terms of body ownership and agency) of the robot's body only by imagining a movement (motor imagery) and watching the robot perform it. Using the same setup, we further discovered that, during BCI operation of the android, biasing the timing and accuracy of the performance feedback could improve operators' modulation of brain activities during the motor imagery task. Our experiments showed that the motor imagery skills acquired through this technique were not limited to the android robot and had long-lasting effects on other BCI usage as well. Therefore, by focusing on the human side of BCIs and demonstrating a relationship between the body ownership sensation and motor imagery learning, our BCI teleoperation system offers a new and efficient platform for general BCI applications.},
  file =      {Alimardani2015.pdf:pdf/Alimardani2015.pdf:PDF},
}
Kurima Sakai, Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Online speech-driven head motion generating system and evaluation on a tele-operated robot", In IEEE International Symposium on Robot and Human Interactive Communication, Kobe, Japan, pp. 529-534, August, 2015.
Abstract: We developed a tele-operated robot system in which the head motions of the robot are controlled by combining those of the operator with motions automatically generated from the operator's voice. The head motion generation is based on dialogue act functions estimated from linguistic and prosodic information extracted from the speech signal. The proposed system was evaluated through an experiment in which participants interacted with a tele-operated robot. Subjective scores indicated the effectiveness of the proposed head motion generation system, even under limitations in the dialogue act estimation.
BibTeX:
@InProceedings{Sakai2015,
  author =    {Kurima Sakai and Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  title =     {Online speech-driven head motion generating system and evaluation on a tele-operated robot},
  booktitle = {IEEE International Symposium on Robot and Human Interactive Communication},
  year =      {2015},
  pages =     {529-534},
  address =   {Kobe, Japan},
  month =     AUG,
  abstract =  {We developed a tele-operated robot system in which the head motions of the robot are controlled by combining those of the operator with motions automatically generated from the operator's voice. The head motion generation is based on dialogue act functions which are estimated from linguistic and prosodic information extracted from the speech signal. The proposed system was evaluated through an experiment in which participants interacted with a tele-operated robot. Subjective scores indicated the effectiveness of the proposed head motion generation system, even under limitations in the dialogue act estimation.},
  file =      {Sakai2015.pdf:pdf/Sakai2015.pdf:PDF},
}
Dylan F. Glas, Phoebe Liu, Takayuki Kanda, Hiroshi Ishiguro, "Can a social robot train itself just by observing human interactions?", In IEEE International Conference on Robotics and Automation, Seattle, WA, USA, May, 2015.
Abstract: In HRI research, game simulations and teleoperation interfaces have been used as tools for collecting example behaviors which can be used for creating robot interaction logic. We believe that by using sensor networks and wearable devices it will be possible to use observations of live human-human interactions to create even more humanlike robot behavior in a scalable way. We present here a fully-automated method for reproducing speech and locomotion behaviors observed from natural human-human social interactions in a robot through machine learning. The proposed method includes techniques for representing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements and identifying spatial formations using established HRI proxemics models. Behavior logic is learned based on discretized actions captured from the sensor data stream, using a naïve Bayesian classifier, and we propose ways to generate stable robot behaviors from noisy tracking and speech recognition inputs. We show an example of how our technique can train a robot to play the role of a shop clerk in a simple camera shop scenario.
BibTeX:
@InProceedings{Glas2015a,
  author =    {Dylan F. Glas and Phoebe Liu and Takayuki Kanda and Hiroshi Ishiguro},
  title =     {Can a social robot train itself just by observing human interactions?},
  booktitle = {IEEE International Conference on Robotics and Automation},
  year =      {2015},
  address =   {Seattle, WA, USA},
  month =     May,
  abstract =  {In HRI research, game simulations and teleoperation interfaces have been used as tools for collecting example behaviors which can be used for creating robot interaction logic. We believe that by using sensor networks and wearable devices it will be possible to use observations of live human-human interactions to create even more humanlike robot behavior in a scalable way. We present here a fully-automated method for reproducing speech and locomotion behaviors observed from natural human-human social interactions in a robot through machine learning. The proposed method includes techniques for representing the speech and locomotion observed in training interactions, using clustering to identify typical behavior elements and identifying spatial formations using established HRI proxemics models. Behavior logic is learned based on discretized actions captured from the sensor data stream, using a naïve Bayesian classifier, and we propose ways to generate stable robot behaviors from noisy tracking and speech recognition inputs. We show an example of how our technique can train a robot to play the role of a shop clerk in a simple camera shop scenario.},
  file =      {Glas2015a.pdf:pdf/Glas2015a.pdf:PDF},
}
Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, "Bringing the Scene Back to the Tele-operator: Auditory Scene Manipulation for Tele-presence Systems", In 10th ACM/IEEE International Conference on Human-Robot Interaction 2015, Portland, Oregon, USA, pp. 279-286, March, 2015.
Abstract: In a tele-operated robot system, the reproduction of auditory scenes, conveying 3D spatial information of sound sources in the remote robot environment, is important for the transmission of remote presence to the tele-operator. We proposed a tele-presence system which is able to reproduce and manipulate the auditory scenes of a remote robot environment, based on the spatial information of human voices around the robot, matched with the operator's head orientation. On the robot side, voice sources are localized and separated by using multiple microphone arrays and human tracking technologies, while on the operator side, the operator's head movement is tracked and used to relocate the spatial positions of the separated sources. Interaction experiments with humans in the robot environment indicated that the proposed system had significantly higher accuracy rates for the perceived direction of sounds, and higher subjective scores for sense of presence and listenability, compared to a baseline system using stereo binaural sounds obtained by two microphones located at the humanoid robot's ears. We also proposed three different user interfaces for augmented auditory scene control. Evaluation results indicated higher subjective scores for sense of presence and usability in two of the interfaces (control of voice amplitudes based on virtual robot positioning, and amplification of voices in the frontal direction).
BibTeX:
@InProceedings{Liu2015,
  author =    {Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro},
  title =     {Bringing the Scene Back to the Tele-operator: Auditory Scene Manipulation for Tele-presence Systems},
  booktitle = {10th ACM/IEEE International Conference on Human-Robot Interaction 2015},
  year =      {2015},
  pages =     {279-286},
  address =   {Portland, Oregon, USA},
  month =     MAR,
  abstract =  {In a tele-operated robot system, the reproduction of auditory scenes, conveying 3D spatial information of sound sources in the remote robot environment, is important for the transmission of remote presence to the tele-operator. We proposed a tele-presence system which is able to reproduce and manipulate the auditory scenes of a remote robot environment, based on the spatial information of human voices around the robot, matched with the operator's head orientation. On the robot side, voice sources are localized and separated by using multiple microphone arrays and human tracking technologies, while on the operator side, the operator's head movement is tracked and used to relocate the spatial positions of the separated sources. Interaction experiments with humans in the robot environment indicated that the proposed system had significantly higher accuracy rates for the perceived direction of sounds, and higher subjective scores for sense of presence and listenability, compared to a baseline system using stereo binaural sounds obtained by two microphones located at the humanoid robot's ears. We also proposed three different user interfaces for augmented auditory scene control. Evaluation results indicated higher subjective scores for sense of presence and usability in two of the interfaces (control of voice amplitudes based on virtual robot positioning, and amplification of voices in the frontal direction).},
  file =      {Liu2015.pdf:pdf/Liu2015.pdf:PDF},
}
Junya Nakanishi, Hidenobu Sumioka, Kurima Sakai, Daisuke Nakamichi, Masahiro Shiomi, Hiroshi Ishiguro, "Huggable Communication Medium Encourages Listening to Others", In 2nd International Conference on Human-Agent Interaction, Tsukuba, Japan, pp. 249-252, October, 2014.
Abstract: We propose that a huggable communication device helps children concentrate on listening to others by reducing their stress and letting them feel a storyteller's presence close to them. Our observation of storytelling to preschool children suggests that Hugvie, one such device, facilitates children's attention to the story. This indicates the usefulness of Hugvie in relieving the educational problem that children show selfish behavior during class. We discuss Hugvie's effect on learning and memory and its potential application to children needing special support.
BibTeX:
@InProceedings{Nakanishi2014,
  author =    {Junya Nakanishi and Hidenobu Sumioka and Kurima Sakai and Daisuke Nakamichi and Masahiro Shiomi and Hiroshi Ishiguro},
  title =     {Huggable Communication Medium Encourages Listening to Others},
  booktitle = {2nd International Conference on Human-Agent Interaction},
  year =      {2014},
  pages =     {249-252},
  address =   {Tsukuba, Japan},
  month =     OCT,
  abstract =  {We propose that a huggable communication device helps children concentrate on listening to others by reducing their stress and letting them feel a storyteller's presence close to them. Our observation of storytelling to preschool children suggests that Hugvie, one such device, facilitates children's attention to the story. This indicates the usefulness of Hugvie in relieving the educational problem that children show selfish behavior during class. We discuss Hugvie's effect on learning and memory and its potential application to children needing special support.},
  file =      {Nakanishi2014.pdf:pdf/Nakanishi2014.pdf:PDF},
  url =       {http://hai-conference.net/hai2014/}
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "The effect of feedback presentation on motor imagery performance during BCI-teleoperation of a humanlike robot", In IEEE International Conference on Biomedical Robotics and Biomechatronics, Sao Paulo, Brazil, pp. 403-408, August, 2014.
Abstract: Users of a brain-computer interface (BCI) learn to co-adapt with the system through the feedback they receive. Particularly in the case of motor imagery BCIs, feedback design can play an important role in the course of motor imagery training. In this paper we investigated the effect of biased visual feedback on the performance and motor imagery skills of users during BCI control of a pair of humanlike robotic hands. Although the subject-specific classifier, which was set up at the beginning of the experiment, detected no significant change in the subjects' online performance, evaluation of brain activity patterns revealed that subjects' self-regulation of motor imagery features improved due to a positive bias of feedback. We discuss how this effect could possibly be due to the humanlike design of the feedback and the occurrence of a body ownership illusion. Our findings suggest that in general training protocols for BCIs, realistic feedback design and subjects' self-evaluation of performance can play an important role in the optimization of motor imagery skills.
BibTeX:
@InProceedings{Alimardani2014,
  author =          {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {The effect of feedback presentation on motor imagery performance during BCI-teleoperation of a humanlike robot},
  booktitle =       {IEEE International Conference on Biomedical Robotics and Biomechatronics},
  year =            {2014},
  pages =           {403-408},
  address =         {Sao Paulo, Brazil},
  month =           Aug,
  abstract =        {Users of a brain-computer interface (BCI) learn to co-adapt with the system through the feedback they receive. Particularly in the case of motor imagery BCIs, feedback design can play an important role in the course of motor imagery training. In this paper we investigated the effect of biased visual feedback on the performance and motor imagery skills of users during BCI control of a pair of humanlike robotic hands. Although the subject-specific classifier, which was set up at the beginning of the experiment, detected no significant change in the subjects' online performance, evaluation of brain activity patterns revealed that subjects' self-regulation of motor imagery features improved due to a positive bias of feedback. We discuss how this effect could possibly be due to the humanlike design of the feedback and the occurrence of a body ownership illusion. Our findings suggest that in general training protocols for BCIs, realistic feedback design and subjects' self-evaluation of performance can play an important role in the optimization of motor imagery skills.},
  day =             {12-15},
  doi =             {10.1109/BIOROB.2014.6913810},
  file =            {Alimardani2014b.pdf:pdf/Alimardani2014b.pdf:PDF},
}
Daisuke Nakamichi, Shuichi Nishio, Hiroshi Ishiguro, "Training of telecommunication through teleoperated android "Telenoid" and its effect", In The 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, Scotland, UK, pp. 1083-1088, August, 2014.
Abstract: Telenoid, a teleoperated android, is a medium through which teleoperators can transmit both verbal and nonverbal information to interlocutors. Telenoid promotes conversation with its interlocutors, especially elderly people. However, since teleoperators admit that they have difficulty feeling that they are actually teleoperating their robots, they cannot use them effectively to transmit nonverbal information, even though such nonverbal information is one of Telenoid's biggest merits. In this paper, we propose a training program for teleoperators so that they can understand Telenoid's teleoperation and how to transmit nonverbal information through it. We investigated its effect on teleoperation and communication and identified three results. First, our training improved Telenoid's head motions for clearer transmission of nonverbal information. Second, the training had different effects between genders: females came to communicate with their interlocutors more smoothly through the training itself, while males communicated more smoothly simply through more talking practice. Third, correlations exist among freely controlling the robot, regarding the robot as oneself, and tele-presence in the interlocutor's room, as well as between the interactions and the operators themselves; however, there are no correlations between feelings about Telenoid's teleoperation and the head movements.
BibTeX:
@InProceedings{Nakamichi2014,
  author =          {Daisuke Nakamichi and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Training of telecommunication through teleoperated android "Telenoid" and its effect},
  booktitle =       {The 23rd IEEE International Symposium on Robot and Human Interactive Communication},
  year =            {2014},
  pages =           {1083-1088},
  address =         {Edinburgh, Scotland, UK},
  month =           Aug,
  abstract =        {Telenoid, a teleoperated android, is a medium through which teleoperators can transmit both verbal and nonverbal information to interlocutors. Telenoid promotes conversation with its interlocutors, especially elderly people. However, since teleoperators admit that they have difficulty feeling that they are actually teleoperating their robots, they cannot use them effectively to transmit nonverbal information, even though such nonverbal information is one of Telenoid's biggest merits. In this paper, we propose a training program for teleoperators so that they can understand Telenoid's teleoperation and how to transmit nonverbal information through it. We investigated its effect on teleoperation and communication and identified three results. First, our training improved Telenoid's head motions for clearer transmission of nonverbal information. Second, the training had different effects between genders: females came to communicate with their interlocutors more smoothly through the training itself, while males communicated more smoothly simply through more talking practice. Third, correlations exist among freely controlling the robot, regarding the robot as oneself, and tele-presence in the interlocutor's room, as well as between the interactions and the operators themselves; however, there are no correlations between feelings about Telenoid's teleoperation and the head movements.},
  day =             {25-29},
  file =            {Nakamichi2014.pdf:pdf/Nakamichi2014.pdf:PDF},
  url =             {http://rehabilitationrobotics.net/ro-man14/}
}
Marco Nørskov, "Human-Robot Interaction and Human Self-Realization: Reflections on the Epistemology of Discrimination", In Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014, IOS Press, vol. 273, Aarhus, Denmark, pp. 319-327, August, 2014.
BibTeX:
@InProceedings{Noerskov2014,
  Title                    = {Human-Robot Interaction and Human Self-Realization: Reflections on the Epistemology of Discrimination},
  Author                   = {Marco N{\o}rskov},
  Booktitle                = {Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014},
  Year                     = {2014},

  Address                  = {Aarhus, Denmark},
  Editor                   = {Johanna Seibt and Raul Hakli and Marco N{\o}rskov},
  Month                    = Aug,
  Pages                    = {319-327},
  Publisher                = {IOS Press},
  Volume                   = {273},

  Doi                      = {10.3233/978-1-61499-480-0-319},
  Grant                    = {Velux},
  Language                 = {en},
  Reviewed                 = {y},
  Url                      = {http://ebooks.iospress.nl/publication/38578}
}
Ryuji Yamazaki, "Conditions of Empathy in Human-Robot Interaction", In Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014, IOS Press, vol. 273, Aarhus, Denmark, pp. 179-186, August, 2014.
BibTeX:
@InProceedings{Yamazaki2014c,
  Title                    = {Conditions of Empathy in Human-Robot Interaction},
  Author                   = {Ryuji Yamazaki},
  Booktitle                = {Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014},
  Year                     = {2014},

  Address                  = {Aarhus, Denmark},
  Editor                   = {Johanna Seibt and Raul Hakli and Marco N{\o}rskov},
  Month                    = Aug,
  Pages                    = {179-186},
  Publisher                = {IOS Press},
  Volume                   = {273},

  Doi                      = {10.3233/978-1-61499-480-0-179},
  Grant                    = {Velux},
  Language                 = {en},
  Reviewed                 = {y},
  Url                      = {http://ebooks.iospress.nl/publication/38560}
}
Rosario Sorbello, Antonio Chella, Marcello Giardina, Shuichi Nishio, Hiroshi Ishiguro, "An Architecture for Telenoid Robot as Empathic Conversational Android Companion for Elderly People", In the 13th International Conference on Intelligent Autonomous Systems, Padova, Italy, July, 2014.
Abstract: In Human-Humanoid Interaction (HHI), empathy is a crucial key to overcoming the current limitations of social robots. In fact, a principal defining characteristic of human social behaviour is empathy. This paper presents a robotic architecture for an android robot as a basis for natural empathic human-android interaction. We start from the hypothesis that robots, in order to become personal companions, need to know how to empathically interact with human beings. To validate our research, we used the proposed system with the minimalistic humanoid robot Telenoid. We conducted human-robot interaction tests with elderly people with no prior interaction experience with robots. During the experiment the elderly persons engaged in a stimulated conversation with the humanoid robot. Our goal is to overcome the state of loneliness of elderly people using this minimalistic humanoid robot capable of exhibiting a dialogue similar to what usually happens in real life between human beings. The experimental results have shown a humanoid robotic system capable of exhibiting a natural and empathic interaction and conversation with a human user.
BibTeX:
@InProceedings{Sorbello2014,
  Title                    = {An Architecture for Telenoid Robot as Empathic Conversational Android Companion for Elderly People},
  Author                   = {Rosario Sorbello and Antonio Chella and Marcello Giardina and Shuichi Nishio and Hiroshi Ishiguro},
  Booktitle                = {the 13th International Conference on Intelligent Autonomous Systems},
  Year                     = {2014},

  Address                  = {Padova, Italy},
  Month                    = Jul,

  Abstract                 = {In Human-Humanoid Interaction (HHI), empathy is a crucial key to overcoming the current limitations of social robots. In fact, a principal defining characteristic of human social behaviour is empathy. This paper presents a robotic architecture for an android robot as a basis for natural empathic human-android interaction. We start from the hypothesis that robots, in order to become personal companions, need to know how to empathically interact with human beings. To validate our research, we used the proposed system with the minimalistic humanoid robot Telenoid. We conducted human-robot interaction tests with elderly people with no prior interaction experience with robots. During the experiment the elderly persons engaged in a stimulated conversation with the humanoid robot. Our goal is to overcome the state of loneliness of elderly people using this minimalistic humanoid robot capable of exhibiting a dialogue similar to what usually happens in real life between human beings. The experimental results have shown a humanoid robotic system capable of exhibiting a natural and empathic interaction and conversation with a human user.},
  Day                      = {15-19},
  File                     = {Sorbello2014.pdf:pdf/Sorbello2014.pdf:PDF},
  Grant                    = {CREST},
  Keywords                 = {Humanoid Robot; Humanoid Robot Interaction; Life Support Empathic Robot; Telenoid},
  Language                 = {en},
  Reviewed                 = {Y}
}
Kaiko Kuwamura, Shuichi Nishio, Hiroshi Ishiguro, "Designing Robots for Positive Communication with Senior Citizens", In The 13th Intelligent Autonomous Systems conference, Padova, Italy, July, 2014.
Abstract: Several previous studies have indicated that the elderly, especially those with cognitive disorders, have positive impressions of Telenoid, a teleoperated android covered with soft vinyl. Senior citizens with cognitive disorders have low cognitive ability and duller senses due to their age. To communicate, we believe that they have to imagine the information that is missing because they failed to receive it completely. We hypothesize that Telenoid triggers and enhances such an ability to imagine and positively complete the missing information, and so they become attracted to Telenoid. Based on this hypothesis, we discuss the factors that trigger imagination and produce positive impressions toward a robot for elderly care.
BibTeX:
@InProceedings{Kuwamura2014c,
  author =          {Kaiko Kuwamura and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Designing Robots for Positive Communication with Senior Citizens},
  booktitle =       {The 13th Intelligent Autonomous Systems conference},
  year =            {2014},
  address =         {Padova, Italy},
  month =           Jul,
  abstract =        {Several previous studies have indicated that the elderly, especially those with cognitive disorders, have positive impressions of Telenoid, a teleoperated android covered with soft vinyl. Senior citizens with cognitive disorders have low cognitive ability and duller senses due to their age. To communicate, we believe that they have to imagine the information that is missing because they failed to receive it completely. We hypothesize that Telenoid triggers and enhances such an ability to imagine and positively complete the missing information, and so they become attracted to Telenoid. Based on this hypothesis, we discuss the factors that trigger imagination and produce positive impressions toward a robot for elderly care.},
  day =             {15-19},
  file =            {Kuwamura2014c.pdf:pdf/Kuwamura2014c.pdf:PDF},
  url =             {http://www.ias-13.org/}
}
Ryuji Yamazaki, Kaiko Kuwamura, Shuichi Nishio, Takashi Minato, Hiroshi Ishiguro, "Activating Embodied Communication: A Case Study of People with Dementia Using a Teleoperated Android Robot", In The 9th World Conference of Gerontechnology, vol. 13, no. 2, Taipei, Taiwan, pp. 311, June, 2014.
BibTeX:
@InProceedings{Yamazaki2014a,
  author =    {Ryuji Yamazaki and Kaiko Kuwamura and Shuichi Nishio and Takashi Minato and Hiroshi Ishiguro},
  title =     {Activating Embodied Communication: A Case Study of People with Dementia Using a Teleoperated Android Robot},
  booktitle = {The 9th World Conference of Gerontechnology},
  year =      {2014},
  volume =    {13},
  number =    {2},
  pages =     {311},
  address =   {Taipei, Taiwan},
  month =     Jun,
  day =       {18-21},
  doi =       {10.4017/gt.2014.13.02.166.00},
  file =      {Yamazaki2014a.pdf:pdf/Yamazaki2014a.pdf:PDF},
  keywords =  {Elderly care robot; social isolation; embodied communication; community design},
  url =       {http://gerontechnology.info/index.php/journal/article/view/gt.2014.13.02.166.00/0}
}
Kaiko Kuwamura, Ryuji Yamazaki, Shuichi Nishio, Hiroshi Ishiguro, "Elderly Care Using Teleoperated Android Telenoid", In The 9th World Conference of Gerontechnology, vol. 13, no. 2, Taipei, Taiwan, pp. 226, June, 2014.
BibTeX:
@InProceedings{Kuwamura2014,
  Title                    = {Elderly Care Using Teleoperated Android Telenoid},
  Author                   = {Kaiko Kuwamura and Ryuji Yamazaki and Shuichi Nishio and Hiroshi Ishiguro},
  Booktitle                = {The 9th World Conference of Gerontechnology},
  Year                     = {2014},

  Address                  = {Taipei, Taiwan},
  Month                    = Jun,
  Number                   = {2},
  Pages                    = {226},
  Volume                   = {13},

  Day                      = {18-21},
  Doi                      = {10.4017/gt.2014.13.02.091.00},
  File                     = {Kuwamura2014.pdf:pdf/Kuwamura2014.pdf:PDF},
  Grant                    = {CREST},
  Keywords                 = {Elderly care robot; teleoperated android; cognitive disorder},
  Language                 = {en},
  Reviewed                 = {Y},
  Url                      = {http://gerontechnology.info/index.php/journal/article/view/gt.2014.13.02.091.00}
}
Carlos T. Ishi, Hiroaki Hatano, Miyako Kiso, "Acoustic-prosodic and paralinguistic analyses of “uun” and “unun”", In Speech Prosody 7, Dublin, Ireland, pp. 100-104, May, 2014.
Abstract: The speaking style of an interjection contains discriminative features on its expressed intention, attitude or emotion. In the present work, we analyzed the acoustic-prosodic features and the paralinguistic functions of two variations of the interjection “un”: a lengthened pattern “uun” and a repeated pattern “unun”, which are often found in Japanese conversational speech. Analysis results indicate that there are differences in the paralinguistic functions expressed by “uun” and “unun”, as well as different trends in F0 contour types according to the conveyed paralinguistic information.
BibTeX:
@InProceedings{Ishi2014,
  author =          {Carlos T. Ishi and Hiroaki Hatano and Miyako Kiso},
  title =           {Acoustic-prosodic and paralinguistic analyses of “uun” and “unun”},
  booktitle =       {Speech Prosody 7},
  year =            {2014},
  pages =           {100-104},
  address =         {Dublin, Ireland},
  month =           May,
  abstract =        {The speaking style of an interjection contains discriminative features on its expressed intention, attitude or emotion. In the present work, we analyzed the acoustic-prosodic features and the paralinguistic functions of two variations of the interjection “un”: a lengthened pattern “uun” and a repeated pattern “unun”, which are often found in Japanese conversational speech. Analysis results indicate that there are differences in the paralinguistic functions expressed by “uun” and “unun”, as well as different trends in F0 contour types according to the conveyed paralinguistic information.},
  day =             {20-23},
  file =            {Ishi2014.pdf:pdf/Ishi2014.pdf:PDF},
  keywords =        {interjections; acoustic-prosodic features; paralinguistic information; spontaneous conversational speech},
}
Kaiko Kuwamura, Shuichi Nishio, "Modality reduction for enhancing human likeliness", In Selected papers of the 50th annual convention of the Artificial Intelligence and the Simulation of Behaviour, London, UK, pp. 83-89, April, 2014.
Abstract: We propose a method to enhance one's affection by reducing the number of transferred modalities. When we dream of an artificial partner for “love”, its appearance is the first thing of concern: a very humanlike, beautiful robot. However, we did not design a medium with a beautiful appearance but a medium which ignores appearance and lets users imagine and complete the appearance themselves. By reducing the number of transferred modalities, we can enhance one's affection toward a robot. Moreover, not just by transmitting, but by inducing active, unconscious behavior of users, we can increase this effect. In this paper, we introduce supporting results from our experiments and discuss the further applicability of our findings.
BibTeX:
@InProceedings{Kuwamura2014b,
  author =          {Kaiko Kuwamura and Shuichi Nishio},
  title =           {Modality reduction for enhancing human likeliness},
  booktitle =       {Selected papers of the 50th annual convention of the Artificial Intelligence and the Simulation of Behaviour},
  year =            {2014},
  pages =           {83-89},
  address =         {London, UK},
  month =           Apr,
  abstract =        {We propose a method to enhance one's affection by reducing the number of transferred modalities. When we dream of an artificial partner for “love”, its appearance is the first thing of concern: a very humanlike, beautiful robot. However, we did not design a medium with a beautiful appearance but a medium which ignores appearance and lets users imagine and complete the appearance themselves. By reducing the number of transferred modalities, we can enhance one's affection toward a robot. Moreover, not just by transmitting, but by inducing active, unconscious behavior of users, we can increase this effect. In this paper, we introduce supporting results from our experiments and discuss the further applicability of our findings.},
  day =             {1-4},
  file =            {Kuwamura2014b.pdf:pdf/Kuwamura2014b.pdf:PDF},
  url =             {http://doc.gold.ac.uk/aisb50/AISB50-S16/AISB50-S16-Kuwamura-paper.pdf}
}
Junya Nakanishi, Kaiko Kuwamura, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Evoking Affection for a Communication Partner by a Robotic Communication Medium", In the First International Conference on Human-Agent Interaction, Hokkaido University, Sapporo, Japan, pp. III-1-4, August, 2013.
Abstract: This paper reveals a new effect of robotic communication media that can function as avatars of communication partners. Users' interaction with a medium may alter their feelings toward partners. The paper hypothesized that talking while hugging a robotic medium increases romantic feelings or attraction toward a partner in robot-mediated tele-communication. Our experiment used Hugvie, a human-shaped medium, for talking in a hugging state. We found that people subconsciously increased their romantic attraction toward opposite-sex partners by hugging Hugvie. This effect is novel because we revealed the effect of the user's own hugging on the user's own feelings, rather than the effect of being hugged by a partner.
BibTeX:
@InProceedings{Nakanishi2013,
  Title                    = {Evoking Affection for a Communication Partner by a Robotic Communication Medium},
  Author                   = {Junya Nakanishi and Kaiko Kuwamura and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  Booktitle                = {the First International Conference on Human-Agent Interaction},
  Year                     = {2013},

  Address                  = {Hokkaido University, Sapporo, Japan},
  Month                    = Aug,
  Pages                    = {III-1-4},

  Abstract                 = {This paper reveals a new effect of robotic communication media that can function as avatars of communication partners. Users' interaction with a medium may alter their feelings toward partners. The paper hypothesized that talking while hugging a robotic medium increases romantic feelings or attraction toward a partner in robot-mediated tele-communication. Our experiment used Hugvie, a human-shaped medium, for talking in a hugging state. We found that people subconsciously increased their romantic attraction toward opposite-sex partners by hugging Hugvie. This effect is novel because we revealed the effect of the user's own hugging on the user's feelings, instead of the effect of being hugged by a partner.},
  Acknowledgement          = {This work was partially supported by {JST} (Japan Science and Technology Agency) {CREST} (Core Research of Evolutional Science \& Technology) research promotion program.},
  Day                      = {7-9},
  File                     = {Nakanishi2013.pdf:pdf/Nakanishi2013.pdf:PDF},
  Grant                    = {CREST},
  Language                 = {en},
  Reviewed                 = {Y},
  Url                      = {http://hai-conference.net/ihai2013/proceedings/html/paper/paper-III-1-4.html}
}
Hidenobu Sumioka, Kensuke Koda, Shuichi Nishio, Takashi Minato, Hiroshi Ishiguro, "Revisiting ancient design of human form for communication avatar: Design considerations from chronological development of Dogu", In IEEE International Symposium on Robot and Human Interactive Communication, Gyeongju, Korea, pp. 726-731, August, 2013.
Abstract: Robot avatar systems give the feeling we share a space with people who are actually at a distant location. Since our cognitive system specializes in recognizing a human, avatars of the distant people can make us strongly feel that we share space with them, provided that their appearance has been designed to sufficiently resemble humans. In this paper, we investigate the minimal requirements of robot avatars for distant people to feel their presence. Toward this aim, we give an overview of the chronological development of Dogu, which are human figurines made in ancient Japan. This survey of the Dogu shows that the torso, not the face, was considered the primary element for representing a human. It also suggests that some body parts can be represented in a simple form. Following the development of Dogu, we also use a conversation task to examine what kind of body representation is necessary to feel a distant person's presence. The experimental results show that the forms for the torso and head are required to enhance this feeling, while other body parts have less impact. We discuss the connection between our findings and an avatar's facial expression and motion.
BibTeX:
@InProceedings{Sumioka2013b,
  author =          {Hidenobu Sumioka and Kensuke Koda and Shuichi Nishio and Takashi Minato and Hiroshi Ishiguro},
  title =           {Revisiting ancient design of human form for communication avatar: Design considerations from chronological development of Dogu},
  booktitle =       {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year =            {2013},
  pages =           {726-731},
  address =         {Gyeongju, Korea},
  month =           Aug,
  abstract =        {Robot avatar systems give the feeling we share a space with people who are actually at a distant location. Since our cognitive system specializes in recognizing a human, avatars of the distant people can make us strongly feel that we share space with them, provided that their appearance has been designed to sufficiently resemble humans. In this paper, we investigate the minimal requirements of robot avatars for distant people to feel their presence. Toward this aim, we give an overview of the chronological development of Dogu, which are human figurines made in ancient Japan. This survey of the Dogu shows that the torso, not the face, was considered the primary element for representing a human. It also suggests that some body parts can be represented in a simple form. Following the development of Dogu, we also use a conversation task to examine what kind of body representation is necessary to feel a distant person's presence. The experimental results show that the forms for the torso and head are required to enhance this feeling, while other body parts have less impact. We discuss the connection between our findings and an avatar's facial expression and motion.},
  day =             {26-29},
  doi =             {10.1109/ROMAN.2013.6628399},
  file =            {Sumioka2013b.pdf:pdf/Sumioka2013b.pdf:PDF},
}
Shuichi Nishio, Koichi Taura, Hidenobu Sumioka, Hiroshi Ishiguro, "Effect of Social Interaction on Body Ownership Transfer to Teleoperated Android", In IEEE International Symposium on Robot and Human Interactive Communication, Gyeongju, Korea, pp. 565-570, August, 2013.
Abstract: Body Ownership Transfer (BOT) is an illusion in which we feel external objects as parts of our own body; it occurs when teleoperating android robots. In past studies, we have been investigating under what conditions this illusion occurs. However, past studies were only conducted with simple operation tasks such as only moving the robot's hand. Does this illusion occur under more complex tasks such as having a conversation? What kind of conversation setting is required to invoke this illusion? In this paper, we examined how factors in social interaction affect the occurrence of BOT. Participants had conversations using the teleoperated robot under different situations and teleoperation settings. The results revealed that BOT does occur through the act of having a conversation, and that the conversation partner's presence and appropriate responses are necessary for the enhancement of BOT.
BibTeX:
@InProceedings{Nishio2013,
  author =          {Shuichi Nishio and Koichi Taura and Hidenobu Sumioka and Hiroshi Ishiguro},
  title =           {Effect of Social Interaction on Body Ownership Transfer to Teleoperated Android},
  booktitle =       {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year =            {2013},
  pages =           {565-570},
  address =         {Gyeongju, Korea},
  month =           Aug,
  abstract =        {Body Ownership Transfer (BOT) is an illusion in which we feel external objects as parts of our own body; it occurs when teleoperating android robots. In past studies, we have been investigating under what conditions this illusion occurs. However, past studies were only conducted with simple operation tasks such as only moving the robot's hand. Does this illusion occur under more complex tasks such as having a conversation? What kind of conversation setting is required to invoke this illusion? In this paper, we examined how factors in social interaction affect the occurrence of BOT. Participants had conversations using the teleoperated robot under different situations and teleoperation settings. The results revealed that BOT does occur through the act of having a conversation, and that the conversation partner's presence and appropriate responses are necessary for the enhancement of BOT.},
  day =             {26-29},
  doi =             {10.1109/ROMAN.2013.6628539},
  file =            {Nishio2013.pdf:pdf/Nishio2013.pdf:PDF},
}
Kaiko Kuwamura, Kurima Sakai, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Hugvie: A medium that fosters love", In IEEE International Symposium on Robot and Human Interactive Communication, Gyeongju, Korea, pp. 70-75, August, 2013.
Abstract: We introduce a communication medium that encourages users to fall in love with their counterparts. Hugvie, the huggable tele-presence medium, enables users to feel like hugging their counterparts while chatting. In this paper, we report that when a participant talks to his communication partner during their first encounter while hugging Hugvie, he mistakenly feels as if they are establishing a good relationship and that he is being loved rather than just being liked.
BibTeX:
@InProceedings{Kuwamura2013,
  Title                    = {Hugvie: A medium that fosters love},
  Author                   = {Kaiko Kuwamura and Kurima Sakai and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  Booktitle                = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  Year                     = {2013},

  Address                  = {Gyeongju, Korea},
  Month                    = Aug,
  Pages                    = {70-75},

  Abstract                 = {We introduce a communication medium that encourages users to fall in love with their counterparts. Hugvie, the huggable tele-presence medium, enables users to feel like hugging their counterparts while chatting. In this paper, we report that when a participant talks to his communication partner during their first encounter while hugging Hugvie, he mistakenly feels as if they are establishing a good relationship and that he is being loved rather than just being liked.},
  Acknowledgement          = {This research was supported by JST, CREST.},
  Day                      = {26-29},
  Doi                      = {10.1109/ROMAN.2013.6628533},
  File                     = {Kuwamura2013.pdf:pdf/Kuwamura2013.pdf:PDF},
  Grant                    = {CREST},
  Language                 = {en},
  Reviewed                 = {Y}
}
Rosario Sorbello, Hiroshi Ishiguro, Antonio Chella, Shuichi Nishio, Giovan Battista Presti, Marcello Giardina, "Telenoid mediated ACT Protocol to Increase Acceptance of Disease among Siblings of Autistic Children", In HRI2013 Workshop on Design of Humanlikeness in HRI : from uncanny valley to minimal design, Tokyo, Japan, pp. 26, March, 2013.
Abstract: We introduce a novel research proposal project aimed to build a robotic setup in which the Telenoid[1] is used as a therapist for the siblings of children with autism. Many existing research studies have shown good results relating to the important impact of Acceptance and Commitment Therapy (ACT)[2] applied to siblings of children with autism. The overall behaviors of the siblings may potentially benefit from treatment with a humanoid robot therapist instead of a real one. In particular, in the present study, the Telenoid humanoid robot[3] is used as a therapist to achieve a specific therapeutic objective: the acceptance of diversity by the siblings of children with autism. In the proposed architecture, the Telenoid acts[4] in teleoperated mode[5] during the learning phase, while it becomes more and more autonomous during the working phase with patients. A goal of the research is to improve the siblings' tolerance and acceptance towards their brothers. The use of ACT[6] will reinforce the acceptance of diversity and will create psychological flexibility along the dimension of diversity. In the present article, we briefly introduce Acceptance and Commitment Therapy (ACT) as a clinical model and its theoretical foundations (Relational Frame Theory). We then explain the six core processes of the Hexaflex model of ACT adapted to Telenoid behaviors acting as a humanoid robotic therapist. Finally, we present an experimental example of how Telenoid could apply the six processes[7] of the Hexaflex model of ACT to the patient during human-humanoid interaction (HHI) in order to realize an applied clinical behavior analysis[8] that increases the siblings' acceptance of their brother's disease.
BibTeX:
@InProceedings{Sorbello2013,
  author =    {Rosario Sorbello and Hiroshi Ishiguro and Antonio Chella and Shuichi Nishio and Giovan Battista Presti and Marcello Giardina},
  title =     {Telenoid mediated {ACT} Protocol to Increase Acceptance of Disease among Siblings of Autistic Children},
  booktitle = {{HRI}2013 Workshop on Design of Humanlikeness in {HRI} : from uncanny valley to minimal design},
  year =      {2013},
  pages =     {26},
  address =   {Tokyo, Japan},
  month =     Mar,
  abstract =  {We introduce a novel research proposal project aimed to build a robotic setup in which the Telenoid[1] is used as a therapist for the siblings of children with autism. Many existing research studies have shown good results relating to the important impact of Acceptance and Commitment Therapy (ACT)[2] applied to siblings of children with autism. The overall behaviors of the siblings may potentially benefit from treatment with a humanoid robot therapist instead of a real one. In particular, in the present study, the Telenoid humanoid robot[3] is used as a therapist to achieve a specific therapeutic objective: the acceptance of diversity by the siblings of children with autism. In the proposed architecture, the Telenoid acts[4] in teleoperated mode[5] during the learning phase, while it becomes more and more autonomous during the working phase with patients. A goal of the research is to improve the siblings' tolerance and acceptance towards their brothers. The use of ACT[6] will reinforce the acceptance of diversity and will create psychological flexibility along the dimension of diversity. In the present article, we briefly introduce Acceptance and Commitment Therapy (ACT) as a clinical model and its theoretical foundations (Relational Frame Theory). We then explain the six core processes of the Hexaflex model of ACT adapted to Telenoid behaviors acting as a humanoid robotic therapist. Finally, we present an experimental example of how Telenoid could apply the six processes[7] of the Hexaflex model of ACT to the patient during human-humanoid interaction (HHI) in order to realize an applied clinical behavior analysis[8] that increases the siblings' acceptance of their brother's disease.},
  day =       {3},
  file =      {Sorbello2013.pdf:pdf/Sorbello2013.pdf:PDF},
}
Christian Becker-Asano, Severin Gustorff, Kai Oliver Arras, Kohei Ogawa, Shuichi Nishio, Hiroshi Ishiguro, Bernhard Nebel, "Robot Embodiment, Operator Modality, and Social Interaction in Tele-Existence: A Project Outline", In 8th ACM/IEEE International Conference on Human-Robot Interaction, National Museum of Emerging Science and Innovation (Miraikan), Tokyo, pp. 79-80, March, 2013.
Abstract: This paper outlines our ongoing project, which aims to investigate the effects of robot embodiment and operator modality on an operator's task efficiency and concomitant level of copresence in remote social interaction. After a brief introduction to related work, five research questions are presented. We discuss how these relate to our choice of the two robotic embodiments “DARYL” and “Geminoid F” and the two operator modalities “console interface” and “head-mounted display”. Finally, we postulate that the usefulness of one operator modality over the other will depend on the type of situation an operator has to deal with. This hypothesis is currently being investigated empirically using DARYL at Freiburg University.
BibTeX:
@InProceedings{Becker-Asano2013,
  author =          {Christian Becker-Asano and Severin Gustorff and Kai Oliver Arras and Kohei Ogawa and Shuichi Nishio and Hiroshi Ishiguro and Bernhard Nebel},
  title =           {Robot Embodiment, Operator Modality, and Social Interaction in Tele-Existence: A Project Outline},
  booktitle =       {8th ACM/IEEE International Conference on Human-Robot Interaction},
  year =            {2013},
  pages =           {79-80},
  address =         {National Museum of Emerging Science and Innovation (Miraikan), Tokyo},
  month =           Mar,
  abstract =        {This paper outlines our ongoing project, which aims to investigate the effects of robot embodiment and operator modality on an operator's task efficiency and concomitant level of copresence in remote social interaction. After a brief introduction to related work, five research questions are presented. We discuss how these relate to our choice of the two robotic embodiments “DARYL” and “Geminoid F” and the two operator modalities “console interface” and “head-mounted display”. Finally, we postulate that the usefulness of one operator modality over the other will depend on the type of situation an operator has to deal with. This hypothesis is currently being investigated empirically using DARYL at Freiburg University.},
  day =             {3-6},
  doi =             {10.1109/HRI.2013.6483510},
  file =            {Becker-Asano2013.pdf:pdf/Becker-Asano2013.pdf:PDF},
  keywords =        {Tele-existence; Copresence; Tele-robotic; Social robotics},
  url =             {http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6483510}
}
Ryuji Yamazaki, Shuichi Nishio, Hiroshi Ishiguro, Takashi Minato, Marco Nørskov, Nobu Ishiguro, Masaru Nishikawa, Tsutomu Fujinami, "Social Inclusion of Senior Citizens by a Teleoperated Android : Toward Inter-generational TeleCommunity Creation", In 2012 IEEE International Workshop on Assistance and Service Robotics in a Human Environment, International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, pp. 53-58, October, 2012.
Abstract: As populations continue to age, there is a growing need for assistive technologies that help senior citizens maintain their autonomy and enjoy their lives. We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. Our exploratory study focused on the social aspects of Telenoid, a teleoperated android designed as a minimalistic human, which might facilitate communication between senior citizens and its operators. We conducted cross-cultural field trials in Japan and Denmark by introducing Telenoid into care facilities and the private homes of seniors to observe how they responded to it. In Japan, we set up a teleoperation system in an elementary school and investigated how it shaped communication through the internet between the elderly in a care facility and the children who acted as teleoperators. In both countries, the elderly commonly assumed positive attitudes toward Telenoid and imaginatively developed various dialogue strategies. Telenoid lowered the barriers for the children as operators for communicating with demented seniors so that they became more relaxed to participate in and positively continue conversations using Telenoid. Our results suggest that its minimalistic human design is inclusive for seniors with or without dementia and facilitates inter-generational communication, which may be expanded to a social network of trans-national supportive relationships among all generations.
BibTeX:
@InProceedings{Yamazaki2012d,
  author =    {Ryuji Yamazaki and Shuichi Nishio and Hiroshi Ishiguro and Takashi Minato and Marco N{\o}rskov and Nobu Ishiguro and Masaru Nishikawa and Tsutomu Fujinami},
  title =     {Social Inclusion of Senior Citizens by a Teleoperated Android : Toward Inter-generational TeleCommunity Creation},
  booktitle = {2012 {IEEE} International Workshop on Assistance and Service Robotics in a Human Environment, International Conference on Intelligent Robots and Systems},
  year =      {2012},
  pages =     {53--58},
  address =   {Vilamoura, Algarve, Portugal},
  month =     Oct,
  abstract =  {As populations continue to age, there is a growing need for assistive technologies that help senior citizens maintain their autonomy and enjoy their lives. We explore the potential of teleoperated androids, which are embodied telecommunication media with humanlike appearances. Our exploratory study focused on the social aspects of Telenoid, a teleoperated android designed as a minimalistic human, which might facilitate communication between senior citizens and its operators. We conducted cross-cultural field trials in Japan and Denmark by introducing Telenoid into care facilities and the private homes of seniors to observe how they responded to it. In Japan, we set up a teleoperation system in an elementary school and investigated how it shaped communication through the internet between the elderly in a care facility and the children who acted as teleoperators. In both countries, the elderly commonly assumed positive attitudes toward Telenoid and imaginatively developed various dialogue strategies. Telenoid lowered the barriers for the children as operators for communicating with demented seniors so that they became more relaxed to participate in and positively continue conversations using Telenoid. Our results suggest that its minimalistic human design is inclusive for seniors with or without dementia and facilitates inter-generational communication, which may be expanded to a social network of trans-national supportive relationships among all generations.},
  day =       {7-12},
  file =      {Yamazaki2012d.pdf:Yamazaki2012d.pdf:PDF},
}
Ryuji Yamazaki, Shuichi Nishio, Hiroshi Ishiguro, Marco Nørskov, Nobu Ishiguro, Giuseppe Balistreri, "Social Acceptance of a Teleoperated Android: Field Study on Elderly's Engagement with an Embodied Communication Medium in Denmark", In International Conference on Social Robotics, Chengdu, China, pp. 428-437, October, 2012.
Abstract: We explored the potential of teleoperated android robots, which are embodied telecommunication media with humanlike appearances, and how they affect people in the real world when they are employed to express a telepresence and a sense of ‘being there’. In Denmark, our exploratory study focused on the social aspects of Telenoid, a teleoperated android, which might facilitate communication between senior citizens and Telenoid's operator. After applying it to the elderly in their homes, we found that the elderly assumed positive attitudes toward Telenoid, and their positivity and strong attachment to its huggable minimalistic human design were cross-culturally shared in Denmark and Japan. Contrary to the negative reactions by non-users in media reports, our result suggests that teleoperated androids can be accepted by the elderly as a kind of universal design medium for social inclusion.
BibTeX:
@InProceedings{Yamazaki2012c,
  author =          {Ryuji Yamazaki and Shuichi Nishio and Hiroshi Ishiguro and Marco N{\o}rskov and Nobu Ishiguro and Giuseppe Balistreri},
  title =           {Social Acceptance of a Teleoperated Android: Field Study on Elderly's Engagement with an Embodied Communication Medium in Denmark},
  booktitle =       {International Conference on Social Robotics},
  year =            {2012},
  pages =           {428-437},
  address =         {Chengdu, China},
  month =           Oct,
  abstract =        {We explored the potential of teleoperated android robots, which are embodied telecommunication media with humanlike appearances, and how they affect people in the real world when they are employed to express a telepresence and a sense of ‘being there’. In Denmark, our exploratory study focused on the social aspects of Telenoid, a teleoperated android, which might facilitate communication between senior citizens and Telenoid's operator. After applying it to the elderly in their homes, we found that the elderly assumed positive attitudes toward Telenoid, and their positivity and strong attachment to its huggable minimalistic human design were cross-culturally shared in Denmark and Japan. Contrary to the negative reactions by non-users in media reports, our result suggests that teleoperated androids can be accepted by the elderly as a kind of universal design medium for social inclusion.},
  day =             {29-31},
  doi =             {10.1007/978-3-642-34103-8_43},
  file =            {Yamazaki2012c.pdf:pdf/Yamazaki2012c.pdf:PDF},
  keywords =        {android;teleoperation;minimal design;communication;embodiment;inclusion;acceptability;elderly care},
  url =             {http://link.springer.com/chapter/10.1007/978-3-642-34103-8_43}
}
Shuichi Nishio, Koichi Taura, Hiroshi Ishiguro, "Regulating Emotion by Facial Feedback from Teleoperated Android Robot", In International Conference on Social Robotics, Chengdu, China, pp. 388-397, October, 2012.
Abstract: In this paper, we experimentally examined whether facial expression changes in teleoperated androids can affect and regulate operators' emotions, based on the facial feedback theory of emotion and the phenomenon of body ownership transfer toward teleoperated android robots. We created a conversational situation where participants felt anger, and during the conversation the android's facial expression was automatically changed. We examined whether such changes affected the operators' emotions. As a result, we found that when operators could control the robot well, their emotional states were affected by the android's facial expression changes.
BibTeX:
@InProceedings{Nishio2012b,
  author =    {Shuichi Nishio and Koichi Taura and Hiroshi Ishiguro},
  title =     {Regulating Emotion by Facial Feedback from Teleoperated Android Robot},
  booktitle = {International Conference on Social Robotics},
  year =      {2012},
  pages =     {388-397},
  address =   {Chengdu, China},
  month =     Oct,
  abstract =  {In this paper, we experimentally examined whether facial expression changes in teleoperated androids can affect and regulate operators' emotions, based on the facial feedback theory of emotion and the phenomenon of body ownership transfer toward teleoperated android robots. We created a conversational situation where participants felt anger, and during the conversation the android's facial expression was automatically changed. We examined whether such changes affected the operators' emotions. As a result, we found that when operators could control the robot well, their emotional states were affected by the android's facial expression changes.},
  day =       {29-31},
  doi =       {10.1007/978-3-642-34103-8_39},
  file =      {Nishio2012b.pdf:pdf/Nishio2012b.pdf:PDF},
  url =       {http://link.springer.com/chapter/10.1007/978-3-642-34103-8_39}
}
Shuichi Nishio, Tetsuya Watanabe, Kohei Ogawa, Hiroshi Ishiguro, "Body Ownership Transfer to Teleoperated Android Robot", In International Conference on Social Robotics, Chengdu, China, pp. 398-407, October, 2012.
Abstract: Teleoperators of android robots occasionally feel as if the robotic bodies are extensions of their own. When others touch the teleoperated android, even without tactile feedback, some operators feel as if they themselves have been touched. In the past, a similar phenomenon named the “Rubber Hand Illusion” has been studied for its reflection of a three-way interaction among vision, touch and proprioception. In this study, we examined whether a similar interaction occurs when replacing the tactile sensation with android robot teleoperation; that is, whether an interaction among vision, motion and proprioception occurs. The result showed that when the operator's and the android's motions are synchronized, operators feel as if their sense of body ownership is transferred to the android robot.
BibTeX:
@InProceedings{Nishio2012a,
  author =    {Shuichi Nishio and Tetsuya Watanabe and Kohei Ogawa and Hiroshi Ishiguro},
  title =     {Body Ownership Transfer to Teleoperated Android Robot},
  booktitle = {International Conference on Social Robotics},
  year =      {2012},
  pages =     {398-407},
  address =   {Chengdu, China},
  month =     Oct,
  abstract =  {Teleoperators of android robots occasionally feel as if the robotic bodies are extensions of their own. When others touch the teleoperated android, even without tactile feedback, some operators feel as if they themselves have been touched. In the past, a similar phenomenon named the “Rubber Hand Illusion” has been studied for its reflection of a three-way interaction among vision, touch and proprioception. In this study, we examined whether a similar interaction occurs when replacing the tactile sensation with android robot teleoperation; that is, whether an interaction among vision, motion and proprioception occurs. The result showed that when the operator's and the android's motions are synchronized, operators feel as if their sense of body ownership is transferred to the android robot.},
  day =       {29-31},
  doi =       {10.1007/978-3-642-34103-8_40},
  file =      {Nishio2012a.pdf:pdf/Nishio2012a.pdf:PDF},
  url =       {http://link.springer.com/chapter/10.1007/978-3-642-34103-8_40}
}
Martin Cooney, Shuichi Nishio, Hiroshi Ishiguro, "Recognizing Affection for a Touch-based Interaction with a Humanoid Robot", In IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, pp. 1420-1427, October, 2012.
Abstract: In order to facilitate integration into domestic and public environments, companion robots can seek to communicate in a familiar, socially intelligent manner, recognizing typical behaviors which people direct toward them. One important type of behavior to recognize is the displaying and seeking of affection, which is fundamentally associated with the modality of touch. This paper identifies how people communicate affection through touching a humanoid robot's appearance, and reports on the development of a recognition system exploring the modalities of touch and vision. Evaluation results indicate the proposed system can recognize people's affectionate behavior in the designated context.
BibTeX:
@InProceedings{Cooney2012a,
  Title                    = {Recognizing Affection for a Touch-based Interaction with a Humanoid Robot},
  Author                   = {Martin Cooney and Shuichi Nishio and Hiroshi Ishiguro},
  Booktitle                = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
  Year                     = {2012},

  Address                  = {Vilamoura, Algarve, Portugal},
  Month                    = Oct,
  Pages                    = {1420--1427},

  Abstract                 = {In order to facilitate integration into domestic and public environments, companion robots can seek to communicate in a familiar, socially intelligent manner, recognizing typical behaviors which people direct toward them. One important type of behavior to recognize is the displaying and seeking of affection, which is fundamentally associated with the modality of touch. This paper identifies how people communicate affection through touching a humanoid robot's appearance, and reports on the development of a recognition system exploring the modalities of touch and vision. Evaluation results indicate the proposed system can recognize people's affectionate behavior in the designated context.},
  Acknowledgement          = {We'd like to thank Takashi Minato for help with the skin sensors, and everyone else who supported this project.},
  Day                      = {7-12},
  File                     = {Cooney2012a.pdf:Cooney2012a.pdf:PDF},
  Grant                    = {CREST},
  Reviewed                 = {Y}
}
Hiroshi Ishiguro, Shuichi Nishio, Antonio Chella, Rosario Sorbello, Giuseppe Balistreri, Marcello Giardina, Carmelo Cali, "Investigating Perceptual Features for a Natural Human - Humanoid Robot Interaction inside a Spontaneous Setting", In Biologically Inspired Cognitive Architectures 2012, Palermo, Italy, October, 2012.
Abstract: The present paper aims to validate our research on human-humanoid interaction (HHI) using the minimalistic humanoid robot Telenoid. We have conducted human-robot interaction tests with 100 young people with no prior interaction experience with this robot. The main goal is the analysis of the two social dimensions (perception and believability) useful for increasing the natural behavior between users and Telenoid. We administered our custom questionnaire to these subjects after a well-defined experimental setting (ordinary and goal-guided tasks). After the analysis of the questionnaires, we obtained proof that perceptual and believability conditions are necessary social dimensions for a successful and efficient HHI in everyday life activities.
BibTeX:
@InProceedings{Ishiguro2012a,
  Title                    = {Investigating Perceptual Features for a Natural Human - Humanoid Robot Interaction inside a Spontaneous Setting},
  Author                   = {Hiroshi Ishiguro and Shuichi Nishio and Antonio Chella and Rosario Sorbello and Giuseppe Balistreri and Marcello Giardina and Carmelo Cali},
  Booktitle                = {Biologically Inspired Cognitive Architectures 2012},
  Year                     = {2012},

  Address                  = {Palermo, Italy},
  Month                    = Oct,

  Abstract                 = {The present paper aims to validate our research on human-humanoid interaction (HHI) using the minimalistic humanoid robot Telenoid. We have conducted human-robot interaction tests with 100 young people with no prior interaction experience with this robot. The main goal is the analysis of the two social dimensions (perception and believability) useful for increasing the natural behavior between users and Telenoid. We administered our custom questionnaire to these subjects after a well-defined experimental setting (ordinary and goal-guided tasks). After the analysis of the questionnaires, we obtained proof that perceptual and believability conditions are necessary social dimensions for successful and efficient HHI in everyday life activities.},
  Grant                    = {CREST},
  Reviewed                 = {Y}
}
Carlos T. Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita, "Evaluation of formant-based lip motion generation in tele-operated humanoid robots", In IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, pp. 2377-2382, October, 2012.
Abstract: Generating natural motion in robots is important for improving human-robot interaction. We developed a tele-operation system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present work, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization. Lip height control is evaluated in two types of humanoid robots (Telenoid-R2 and Geminoid-F). Subjective evaluation indicated that the proposed audio-based method can generate lip motion with naturalness superior to vision-based and motion capture-based approaches. Partial lip width control was shown to improve lip motion naturalness in Geminoid-F, which also has an actuator for stretching the lip corners. Issues regarding online real-time processing are also discussed.
BibTeX:
@InProceedings{Ishi2012,
  author =    {Carlos T. Ishi and Chaoran Liu and Hiroshi Ishiguro and Norihiro Hagita},
  title =     {Evaluation of formant-based lip motion generation in tele-operated humanoid robots},
  booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
  year =      {2012},
  pages =     {2377--2382},
  address =   {Vilamoura, Algarve, Portugal},
  month =     Oct,
  abstract =  {Generating natural motion in robots is important for improving human-robot interaction. We developed a tele-operation system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present work, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization. Lip height control is evaluated in two types of humanoid robots (Telenoid-R2 and Geminoid-F). Subjective evaluation indicated that the proposed audio-based method can generate lip motion with naturalness superior to vision-based and motion capture-based approaches. Partial lip width control was shown to improve lip motion naturalness in Geminoid-F, which also has an actuator for stretching the lip corners. Issues regarding online real-time processing are also discussed.},
  day =       {7-12},
  file =      {Ishi2012.pdf:pdf/Ishi2012.pdf:PDF},
}
Carlos T. Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita, "Evaluation of a formant-based speech-driven lip motion generation", In 13th Annual Conference of the International Speech Communication Association, Portland, Oregon, pp. P1a.04, September, 2012.
Abstract: The background of the present work is the development of a tele-presence robot system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present paper, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization, so that no training of dedicated models is necessary. Lip height control is evaluated in a female android robot and in animated lips. Subjective evaluation indicated that naturalness of lip motion generated in the robot is improved by the inclusion of a partial lip width control (with stretching of the lip corners). Highest naturalness scores were achieved for the animated lips, showing the effectiveness of the proposed method.
BibTeX:
@InProceedings{Ishi2012b,
  author =          {Carlos T. Ishi and Chaoran Liu and Hiroshi Ishiguro and Norihiro Hagita},
  title =           {Evaluation of a formant-based speech-driven lip motion generation},
  booktitle =       {13th Annual Conference of the International Speech Communication Association},
  year =            {2012},
  pages =           {P1a.04},
  address =         {Portland, Oregon},
  month =           Sep,
  abstract =        {The background of the present work is the development of a tele-presence robot system where the lip motion of a remote humanoid robot is automatically controlled from the operator's voice. In the present paper, we introduce an improved version of our proposed speech-driven lip motion generation method, where lip height and width degrees are estimated based on vowel formant information. The method requires the calibration of only one parameter for speaker normalization, so that no training of dedicated models is necessary. Lip height control is evaluated in a female android robot and in animated lips. Subjective evaluation indicated that naturalness of lip motion generated in the robot is improved by the inclusion of a partial lip width control (with stretching of the lip corners). Highest naturalness scores were achieved for the animated lips, showing the effectiveness of the proposed method.},
  day =             {9-13},
  file =            {Ishi2012b.pdf:pdf/Ishi2012b.pdf:PDF},
  keywords =        {lip motion, formant, tele-operation, humanoid robot},
}
Kohei Ogawa, Koichi Taura, Hiroshi Ishiguro, "Possibilities of Androids as Poetry-reciting Agent", Poster presentation at IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 565-570, September, 2012.
Abstract: In recent years, research has increased on very human-like androids, generally investigating the following: (1) how people treat such very human-like androids and (2) whether it is possible to replace such existing communication media as telephones or TV conference systems with androids as a communication medium. We found that androids have advantages over humans in specific contexts. For example, in a collaboration theatrical project between artists and androids, audiences were impressed by the androids that read poetry. We, therefore, experimentally compared androids and humans in a poetry-reciting context by conducting an experiment to illustrate the influence of an android who recited poetry. Participants listened to poetry that was read by three poetry-reciting agents: the android, the human model on which the android was based, and a box. The experiment results showed that the enjoyment of the poetry gained the highest score under the android condition, indicating that the android has an advantage for communicating the meaning of poetry.
BibTeX:
@InProceedings{Ogawa2012d,
  author =          {Kohei Ogawa and Koichi Taura and Hiroshi Ishiguro},
  title =           {Possibilities of Androids as Poetry-reciting Agent},
  booktitle =       {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year =            {2012},
  pages =           {565--570},
  address =         {Paris, France},
  month =           Sep,
  abstract =        {In recent years, research has increased on very human-like androids, generally investigating the following: (1) how people treat such very human-like androids and (2) whether it is possible to replace such existing communication media as telephones or TV conference systems with androids as a communication medium. We found that androids have advantages over humans in specific contexts. For example, in a collaboration theatrical project between artists and androids, audiences were impressed by the androids that read poetry. We, therefore, experimentally compared androids and humans in a poetry-reciting context by conducting an experiment to illustrate the influence of an android who recited poetry. Participants listened to poetry that was read by three poetry-reciting agents: the android, the human model on which the android was based, and a box. The experiment results showed that the enjoyment of the poetry gained the highest score under the android condition, indicating that the android has an advantage for communicating the meaning of poetry.},
  day =             {9-13},
  doi =             {10.1109/ROMAN.2012.6343811},
  file =            {Ogawa2012d.pdf:Ogawa2012d.pdf:PDF},
  keywords =        {Robot; Android; Art; Geminoid; Poetry},
}
Martin Cooney, Francesco Zanlungo, Shuichi Nishio, Hiroshi Ishiguro, "Designing a Flying Humanoid Robot (FHR): Effects of Flight on Interactive Communication", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 364-371, September, 2012.
Abstract: This research constitutes an initial investigation into key issues which arise in designing a flying humanoid robot (FHR), with a focus on human-robot interaction (HRI). The humanoid form offers an interface for natural communication; flight offers excellent mobility. Combining both will yield companion robots capable of approaching, accompanying, and communicating naturally with humans in difficult environments. Problematic is how such a robot should best fly around humans, and what effect the robot's flight will have on a person in terms of paralinguistic (non-verbal) cues. To answer these questions, we propose an extension to existing proxemics theory ("z-proxemics") and predict how typical humanoid flight motions will be perceived. Data obtained from participants watching animated sequences are analyzed to check our predictions. The paper also reports on the building of a flying humanoid robot, which we will use in interactions.
BibTeX:
@InProceedings{Cooney2012b,
  author =          {Martin Cooney and Francesco Zanlungo and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Designing a Flying Humanoid Robot ({FHR}): Effects of Flight on Interactive Communication},
  booktitle =       {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year =            {2012},
  pages =           {364--371},
  address =         {Paris, France},
  month =           Sep,
  abstract =        {This research constitutes an initial investigation into key issues which arise in designing a flying humanoid robot ({FHR}), with a focus on human-robot interaction ({HRI}). The humanoid form offers an interface for natural communication; flight offers excellent mobility. Combining both will yield companion robots capable of approaching, accompanying, and communicating naturally with humans in difficult environments. Problematic is how such a robot should best fly around humans, and what effect the robot's flight will have on a person in terms of paralinguistic (non-verbal) cues. To answer these questions, we propose an extension to existing proxemics theory ("z-proxemics") and predict how typical humanoid flight motions will be perceived. Data obtained from participants watching animated sequences are analyzed to check our predictions. The paper also reports on the building of a flying humanoid robot, which we will use in interactions.},
  day =             {9-13},
  doi =             {10.1109/ROMAN.2012.6343780},
  file =            {Cooney2012b.pdf:Cooney2012b.pdf:PDF},
}
Hidenobu Sumioka, Shuichi Nishio, Erina Okamoto, Hiroshi Ishiguro, "Isolation of physical traits and conversational content for personality design", Poster presentation at IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 596-601, September, 2012.
Abstract: In this paper, we propose the "Doppel teleoperation system," which isolates several physical traits from a speaker, to investigate how personal information is conveyed to others during conversation. An underlying problem in designing personality in social robots is that it remains unclear how humans judge the personalities of conversation partners. With the Doppel system, for each of the communication channels to be transferred, one can choose it in its original form or in the one generated by the system. For example, voice and body motions can be replaced by the Doppel system while preserving the speech content. This allows us to analyze the individual effects of the physical traits of the speaker and the content in the speaker's speech on the identification of personality. This selectivity of personal traits provides a useful approach to investigate which information conveys our personality through conversation. To show the potential of our system, we experimentally tested how much the conversation content conveys the personality of speakers to interlocutors without any of their physical traits. Preliminary results show that although interlocutors have difficulty identifying speakers using only conversational content, they can recognize their acquaintances when their acquaintances are the speakers. We point out some potential physical traits to convey personality.
BibTeX:
@InProceedings{Sumioka2012d,
  author =          {Hidenobu Sumioka and Shuichi Nishio and Erina Okamoto and Hiroshi Ishiguro},
  title =           {Isolation of physical traits and conversational content for personality design},
  booktitle =       {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year =            {2012},
  pages =           {596--601},
  address =         {Paris, France},
  month =           Sep,
  abstract =        {In this paper, we propose the ``Doppel teleoperation system,'' which isolates several physical traits from a speaker, to investigate how personal information is conveyed to others during conversation. An underlying problem in designing personality in social robots is that it remains unclear how humans judge the personalities of conversation partners. With the Doppel system, for each of the communication channels to be transferred, one can choose it in its original form or in the one generated by the system. For example, voice and body motions can be replaced by the Doppel system while preserving the speech content. This allows us to analyze the individual effects of the physical traits of the speaker and the content in the speaker's speech on the identification of personality. This selectivity of personal traits provides a useful approach to investigate which information conveys our personality through conversation. To show the potential of our system, we experimentally tested how much the conversation content conveys the personality of speakers to interlocutors without any of their physical traits. Preliminary results show that although interlocutors have difficulty identifying speakers using only conversational content, they can recognize their acquaintances when their acquaintances are the speakers. We point out some potential physical traits to convey personality.},
  day =             {9-13},
  doi =             {10.1109/ROMAN.2012.6343816},
  file =            {Sumioka2012d.pdf:Sumioka2012d.pdf:PDF},
}
Hidenobu Sumioka, Shuichi Nishio, Hiroshi Ishiguro, "Teleoperated android for mediated communication: body ownership, personality distortion, and minimal human design", In the RO-MAN 2012 workshop on social robotic telepresence, Paris, France, pp. 32-39, September, 2012.
Abstract: In this paper we discuss the impact of humanlike appearance on telecommunication, giving an overview of studies with teleoperated androids. We show that, due to humanlike appearance, teleoperated androids affect not only interlocutors communicating with them but also teleoperators controlling them in another location. They enhance the teleoperator's feeling of telepresence by inducing a sense of ownership over their body parts. It is also pointed out that a mismatch between an android and a teleoperator in appearance distorts the teleoperator's personality to be conveyed to an interlocutor. To overcome this problem, the concept of minimal human likeness design is introduced. We demonstrate that a new teleoperated android developed with the concept reduces the distortion in telecommunication. Finally, some research issues are discussed on a sense of ownership over a telerobot's body, minimal human likeness design, and interface design.
BibTeX:
@InProceedings{Sumioka2012c,
  author =          {Hidenobu Sumioka and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Teleoperated android for mediated communication: body ownership, personality distortion, and minimal human design},
  booktitle =       {the {RO-MAN} 2012 workshop on social robotic telepresence},
  year =            {2012},
  pages =           {32--39},
  address =         {Paris, France},
  month =           Sep,
  abstract =        {In this paper we discuss the impact of humanlike appearance on telecommunication, giving an overview of studies with teleoperated androids. We show that, due to humanlike appearance, teleoperated androids affect not only interlocutors communicating with them but also teleoperators controlling them in another location. They enhance the teleoperator's feeling of telepresence by inducing a sense of ownership over their body parts. It is also pointed out that a mismatch between an android and a teleoperator in appearance distorts the teleoperator's personality to be conveyed to an interlocutor. To overcome this problem, the concept of minimal human likeness design is introduced. We demonstrate that a new teleoperated android developed with the concept reduces the distortion in telecommunication. Finally, some research issues are discussed on a sense of ownership over a telerobot's body, minimal human likeness design, and interface design.},
  day =             {9-13},
  file =            {Sumioka2012c.pdf:Sumioka2012c.pdf:PDF},
}
Ilona Straub, Shuichi Nishio, Hiroshi Ishiguro, "From an Object to a Subject -- Transitions of an Android Robot into a Social Being", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 821-826, September, 2012.
Abstract: What are the characteristics that make something appear as a social entity? Is sociality limited to human beings? The following article will deal with the borders of sociality and the characterizations of animating a physical object (here: an android robot) into a living being. The transition is attributed during interactive encounters. We will introduce implications of an ethnomethodological analysis which shows characteristics of transitions in social attribution towards an android robot, which is treated and perceived as gradually shifting from an object to a social entity. These characteristics should a) fill the gap in current anthropological and sociological research, dealing with the limits and characteristics of social entities, and b) contribute to the discussion of specifics in human-android interaction compared to human-human interaction.
BibTeX:
@InProceedings{Straub2012,
  author =          {Ilona Straub and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {From an Object to a Subject -- Transitions of an Android Robot into a Social Being},
  booktitle =       {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year =            {2012},
  pages =           {821--826},
  address =         {Paris, France},
  month =           Sep,
  abstract =        {What are the characteristics that make something appear as a social entity? Is sociality limited to human beings? The following article will deal with the borders of sociality and the characterizations of animating a physical object (here: an android robot) into a living being. The transition is attributed during interactive encounters. We will introduce implications of an ethnomethodological analysis which shows characteristics of transitions in social attribution towards an android robot, which is treated and perceived as gradually shifting from an object to a social entity. These characteristics should a) fill the gap in current anthropological and sociological research, dealing with the limits and characteristics of social entities, and b) contribute to the discussion of specifics in human-android interaction compared to human-human interaction.},
  day =             {9-13},
  doi =             {10.1109/ROMAN.2012.6343853},
  file =            {Straub2012.pdf:Straub2012.pdf:PDF},
}
Ryuji Yamazaki, Shuichi Nishio, Kohei Ogawa, Hiroshi Ishiguro, "Teleoperated Android as an Embodied Communication Medium: A Case Study with Demented Elderlies in a Care Facility", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 1066-1071, September, 2012.
Abstract: Teleoperated androids, which are robots with humanlike appearances, are being produced as new media of human relationships. We explored the potential of humanoid robots and how they affect people in the real world when they are employed to express a telecommunication presence and a sense of 'being there'. We introduced Telenoid, a teleoperated android, to a residential care facility to see how the elderly with dementia respond to it. Our exploratory study focused on the social aspects that might facilitate communication between the elderly and Telenoid's operator. Telenoid elicited positive images and interactive reactions from the elderly with mild dementia, even from those with severe cognitive impairment. They showed strong attachment to its child-like huggable design and became willing to converse with it. Our result suggests that an affectionate bond may be formed between the elderly and the android to provide the operator with easy communication to elicit responses from senior citizens.
BibTeX:
@InProceedings{Yamazaki2012b,
  author =    {Ryuji Yamazaki and Shuichi Nishio and Kohei Ogawa and Hiroshi Ishiguro},
  title =     {Teleoperated Android as an Embodied Communication Medium: A Case Study with Demented Elderlies in a Care Facility},
  booktitle = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year =      {2012},
  pages =     {1066--1071},
  address =   {Paris, France},
  month =     Sep,
  abstract =  {Teleoperated androids, which are robots with humanlike appearances, are being produced as new media of human relationships. We explored the potential of humanoid robots and how they affect people in the real world when they are employed to express a telecommunication presence and a sense of 'being there'. We introduced Telenoid, a teleoperated android, to a residential care facility to see how the elderly with dementia respond to it. Our exploratory study focused on the social aspects that might facilitate communication between the elderly and Telenoid's operator. Telenoid elicited positive images and interactive reactions from the elderly with mild dementia, even from those with severe cognitive impairment. They showed strong attachment to its child-like huggable design and became willing to converse with it. Our result suggests that an affectionate bond may be formed between the elderly and the android to provide the operator with easy communication to elicit responses from senior citizens.},
  day =       {9-13},
  file =      {Yamazaki2012b.pdf:Yamazaki2012b.pdf:PDF},
}
Kohei Ogawa, Koichi Taura, Shuichi Nishio, Hiroshi Ishiguro, "Effect of perspective change in body ownership transfer to teleoperated android robot", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 1072-1077, September, 2012.
Abstract: We previously investigated body ownership transfer to a teleoperated android body caused by motion synchronization between the robot and its operator. Although visual feedback is the only information provided from the robot, due to body ownership transfer, some operators feel as if they were touched when the robot's body was touched. This illusion can help operators transfer their presence to the robotic body during teleoperation. By enhancing this phenomenon, we can improve our communication interface and the quality of the interaction between operator and interlocutor. In this paper, we examined how the change in the operator's perspective affects the body ownership transfer during teleoperation. Based on past studies on the rubber hand illusion, we hypothesized that the perspective change would suppress the body ownership transfer. Our results, however, showed that in every perspective condition, the participants felt the body ownership transfer. This shows that its generation process differs between teleoperated androids and the rubber hand illusion.
BibTeX:
@InProceedings{Ogawa2012c,
  author =          {Kohei Ogawa and Koichi Taura and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Effect of perspective change in body ownership transfer to teleoperated android robot},
  booktitle =       {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year =            {2012},
  pages =           {1072--1077},
  address =         {Paris, France},
  month =           Sep,
  abstract =        {We previously investigated body ownership transfer to a teleoperated android body caused by motion synchronization between the robot and its operator. Although visual feedback is the only information provided from the robot, due to body ownership transfer, some operators feel as if they were touched when the robot's body was touched. This illusion can help operators transfer their presence to the robotic body during teleoperation. By enhancing this phenomenon, we can improve our communication interface and the quality of the interaction between operator and interlocutor. In this paper, we examined how the change in the operator's perspective affects the body ownership transfer during teleoperation. Based on past studies on the rubber hand illusion, we hypothesized that the perspective change would suppress the body ownership transfer. Our results, however, showed that in every perspective condition, the participants felt the body ownership transfer. This shows that its generation process differs between teleoperated androids and the rubber hand illusion.},
  day =             {9-13},
  doi =             {10.1109/ROMAN.2012.6343891},
  file =            {Ogawa2012c.pdf:Ogawa2012c.pdf:PDF},
}
Takashi Minato, Hidenobu Sumioka, Shuichi Nishio, Hiroshi Ishiguro, "Studying the Influence of Handheld Robotic Media on Social Communications", In the RO-MAN 2012 workshop on social robotic telepresence, Paris, France, pp. 15-16, September, 2012.
Abstract: This paper describes research issues on social robotic telepresence using "Elfoid". It is a portable tele-operated humanoid that is designed to transfer individuals' presence to remote places at any time, anywhere, to provide a new communication style in which individuals talk with persons in remote locations in such a way that they feel each other's presence. However, it is not known how people adapt to the new communication style and how social communications are changed by Elfoid. Investigating the influence of Elfoid on social communications is very interesting from the viewpoint of social robotic telepresence. This paper introduces Elfoid and shows the position of our studies in social robotic telepresence.
BibTeX:
@InProceedings{Minato2012c,
  author =    {Takashi Minato and Hidenobu Sumioka and Shuichi Nishio and Hiroshi Ishiguro},
  title =     {Studying the Influence of Handheld Robotic Media on Social Communications},
  booktitle = {the {RO-MAN} 2012 workshop on social robotic telepresence},
  year =      {2012},
  pages =     {15--16},
  address =   {Paris, France},
  month =     Sep,
  abstract =  {This paper describes research issues on social robotic telepresence using "Elfoid". It is a portable tele-operated humanoid that is designed to transfer individuals' presence to remote places at any time, anywhere, to provide a new communication style in which individuals talk with persons in remote locations in such a way that they feel each other's presence. However, it is not known how people adapt to the new communication style and how social communications are changed by Elfoid. Investigating the influence of Elfoid on social communications is very interesting from the viewpoint of social robotic telepresence. This paper introduces Elfoid and shows the position of our studies in social robotic telepresence.},
  day =       {9-13},
  file =      {Minato2012c.pdf:Minato2012c.pdf:PDF},
}
Kaiko Kuwamura, Takashi Minato, Shuichi Nishio, Hiroshi Ishiguro, "Personality Distortion in Communication through Teleoperated Robots", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 49-54, September, 2012.
Abstract: Recent research has focused on such physical communication media as teleoperated robots, which provide a feeling of being with people in remote places. Recently invented media resemble cute animals or imaginary creatures that quickly attract attention. However, such appearances could distort tele-communications because they are different from human beings. This paper studies the effect on the speaker's personality that is transmitted through physical media by regarding appearances as a function that transmits the speaker's information. Although communication media's capability to transmit information reportedly influences conversations in many aspects, the effect of appearances remains unclear. To reveal the effect of appearance, we compared three appearances of communication media: a stuffed-bear teleoperated robot, a human-like teleoperated robot, and video chat. Our results show that communication media whose appearance greatly differs from that of the speaker distort the personality perceived by interlocutors. This paper suggests that the appearance of physical communication media needs to be carefully designed.
BibTeX:
@InProceedings{Kuwamura2012,
  Title                    = {Personality Distortion in Communication through Teleoperated Robots},
  Author                   = {Kaiko Kuwamura and Takashi Minato and Shuichi Nishio and Hiroshi Ishiguro},
  Booktitle                = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  Year                     = {2012},

  Address                  = {Paris, France},
  Month                    = Sep,
  Pages                    = {49--54},

  Abstract                 = {Recent research has focused on such physical communication media as teleoperated robots, which provide a feeling of being with people in remote places. Recently invented media resemble cute animals or imaginary creatures that quickly attract attention. However, such appearances could distort tele-communications because they are different from human beings. This paper studies the effect on the speaker's personality that is transmitted through physical media by regarding appearances as a function that transmits the speaker's information. Although communication media's capability to transmit information reportedly influences conversations in many aspects, the effect of appearances remains unclear. To reveal the effect of appearance, we compared three appearances of communication media: a stuffed-bear teleoperated robot, a human-like teleoperated robot, and video chat. Our results show that communication media whose appearance greatly differs from that of the speaker distort the personality perceived by interlocutors. This paper suggests that the appearance of physical communication media needs to be carefully designed.},
  Day                      = {9-13},
  File                     = {Kuwamura2012.pdf:pdf/Kuwamura2012.pdf:PDF},
  Grant                    = {CREST},
  Reviewed                 = {Y}
}
Shuichi Nishio, Kohei Ogawa, Yasuhiro Kanakogi, Shoji Itakura, Hiroshi Ishiguro, "Do robot appearance and speech affect people's attitude? Evaluation through the Ultimatum Game", In IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 809-814, September, 2012.
Abstract: In this study, we examine the factors with which robots are recognized as social beings. Participants joined sessions of the Ultimatum Game, a procedure commonly used for examining attitudes toward others in the fields of economics and social psychology. Several agents differing in their appearances are tested with speech stimuli that are expected to induce a mentalizing effect toward the agents. As a result, we found that while appearance itself did not show significant difference in the attitudes, the mentalizing stimuli affected the attitudes in different ways depending on robots' appearances. These results showed that such elements as simple conversation with the agents and their appearance are important factors for robots to be treated as more humanlike and as social beings.
BibTeX:
@InProceedings{Nishio2012,
  author =          {Shuichi Nishio and Kohei Ogawa and Yasuhiro Kanakogi and Shoji Itakura and Hiroshi Ishiguro},
  title =           {Do robot appearance and speech affect people's attitude? Evaluation through the {U}ltimatum {G}ame},
  booktitle =       {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year =            {2012},
  pages =           {809--814},
  address =         {Paris, France},
  month =           Sep,
  abstract =        {In this study, we examine the factors with which robots are recognized as social beings. Participants joined sessions of the Ultimatum Game, a procedure commonly used for examining attitudes toward others in the fields of economics and social psychology. Several agents differing in their appearances are tested with speech stimuli that are expected to induce a mentalizing effect toward the agents. As a result, we found that while appearance itself did not show significant difference in the attitudes, the mentalizing stimuli affected the attitudes in different ways depending on robots' appearances. These results showed that such elements as simple conversation with the agents and their appearance are important factors for robots to be treated as more humanlike and as social beings.},
  day =             {9-13},
  doi =             {10.1109/ROMAN.2012.6343851},
  file =            {Nishio2012.pdf:Nishio2012.pdf:PDF},
}
Hidenobu Sumioka, Shuichi Nishio, Erina Okamoto, Hiroshi Ishiguro, "Doppel Teleoperation System: Isolation of physical traits and intelligence for personality study", In Annual meeting of the Cognitive Science Society (CogSci2012), Sapporo Convention Center, pp. 2375-2380, August, 2012.
Abstract: We introduce the “Doppel teleoperation system”, which isolates several physical traits from a speaker, to investigate how personal information is conveyed to other people during conversation. With the Doppel system, one can choose for each of the communication channels to be transferred whether in its original form or in the one generated by the system. For example, the voice and body motion can be replaced by the Doppel system while the speech content is preserved. This will allow us to analyze individual effects of physical traits of the speaker and content in the speaker's speech on identification of personality. This selectivity of personal traits provides us with a useful approach to investigate which information conveys our personality through conversation. To show the potential of this proposed system, we conduct an experiment to test how much the content of conversation conveys the personality of speakers to interlocutors, without any physical traits of the speakers. Preliminary results show that although interlocutors have difficulty identifying their speakers only by using conversational contents, they can recognize their acquaintances when their acquaintances are the speakers. We point out some potential physical traits to convey our personality.
BibTeX:
@InProceedings{Sumioka2012,
  author =          {Hidenobu Sumioka and Shuichi Nishio and Erina Okamoto and Hiroshi Ishiguro},
  title =           {Doppel Teleoperation System: Isolation of physical traits and intelligence for personality study},
  booktitle =       {Annual meeting of the Cognitive Science Society ({C}og{S}ci2012)},
  year =            {2012},
  pages =           {2375-2380},
  address =         {Sapporo Convention Center},
  month =           Aug,
  abstract =        {We introduce the “Doppel teleoperation system”, which isolates several physical traits from a speaker, to investigate how personal information is conveyed to other people during conversation. With the Doppel system, one can choose for each of the communication channels to be transferred whether in its original form or in the one generated by the system. For example, the voice and body motion can be replaced by the Doppel system while the speech content is preserved. This will allow us to analyze individual effects of physical traits of the speaker and content in the speaker's speech on identification of personality. This selectivity of personal traits provides us with a useful approach to investigate which information conveys our personality through conversation. To show the potential of this proposed system, we conduct an experiment to test how much the content of conversation conveys the personality of speakers to interlocutors, without any physical traits of the speakers. Preliminary results show that although interlocutors have difficulty identifying their speakers only by using conversational contents, they can recognize their acquaintances when their acquaintances are the speakers. We point out some potential physical traits to convey our personality.},
  day =             {1-4},
  file =            {Sumioka2012.pdf:Sumioka2012.pdf:PDF},
  keywords =        {social cognition; android science; human-robot interaction; personality psychology; personal presence},
  url =             {http://mindmodeling.org/cogsci2012/papers/0413/paper0413.pdf}
}
Takashi Minato, Shuichi Nishio, Kohei Ogawa, Hiroshi Ishiguro, "Development of Cellphone-type Tele-operated Android", Poster presentation at The 10th Asia Pacific Conference on Computer Human Interaction, Matsue, Japan, pp. 665-666, August, 2012.
Abstract: This paper presents a newly developed portable human-like robotic avatar “Elfoid”, which can be a novel communication medium in that a user can talk with another person in a remote location in such a way that they feel each other's presence. It is designed to convey individuals' presence using voice, human-like appearance, and touch. Thanks to its cellphone capability, it can be used at any time, anywhere. The paper describes the design concept of Elfoid and discusses research issues on this communication medium.
BibTeX:
@InProceedings{Minato2012b,
  author =    {Takashi Minato and Shuichi Nishio and Kohei Ogawa and Hiroshi Ishiguro},
  title =     {Development of Cellphone-type Tele-operated Android},
  booktitle = {The 10th Asia Pacific Conference on Computer Human Interaction},
  year =      {2012},
  pages =     {665-666},
  address =   {Matsue, Japan},
  month =     Aug,
  abstract =  {This paper presents a newly developed portable human-like robotic avatar “Elfoid”, which can be a novel communication medium in that a user can talk with another person in a remote location in such a way that they feel each other's presence. It is designed to convey individuals' presence using voice, human-like appearance, and touch. Thanks to its cellphone capability, it can be used at any time, anywhere. The paper describes the design concept of Elfoid and discusses research issues on this communication medium.},
  day =       {28-31},
  file =      {Minato2012b.pdf:Minato2012b.pdf:PDF},
  keywords =  {Communication media; minimal design; human's presence},
}
Hidenobu Sumioka, Takashi Minato, Kurima Sakai, Shuichi Nishio, Hiroshi Ishiguro, "Motion Design of an Interactive Small Humanoid Robot with Visual Illusion", In The 10th Asia Pacific Conference on Computer Human Interaction, Matsue, Japan, pp. 93-100, August, 2012.
Abstract: We propose a method that enables users to convey nonverbal information, especially their gestures, through a portable robot avatar based on illusory motion. The illusory motion of head nodding is realized with blinking lights for a human-like mobile phone called Elfoid. Two blinking patterns of LEDs are designed based on biological motion and illusory motion from shadows. The patterns are compared to select an appropriate pattern for the illusion of motion in terms of the naturalness of movements and quick perception. The result shows that illusory motions show better performance than biological motion. We also test whether the illusory motion of head nodding provides a positive effect compared with just blinking lights. In experiments, subjects, who are engaged in a role-playing game, are asked to complain to Elfoids about their unpleasant situation. The results show that the subject frustration is eased by Elfoid's illusory head nodding.
BibTeX:
@InProceedings{Sumioka2012a,
  Title                    = {Motion Design of an Interactive Small Humanoid Robot with Visual Illusion},
  Author                   = {Hidenobu Sumioka and Takashi Minato and Kurima Sakai and Shuichi Nishio and Hiroshi Ishiguro},
  Booktitle                = {The 10th Asia Pacific Conference on Computer Human Interaction},
  Year                     = {2012},

  Address                  = {Matsue, Japan},
  Month                    = Aug,
  Pages                    = {93-100},

  Abstract                 = {We propose a method that enables users to convey nonverbal information, especially their gestures, through a portable robot avatar based on illusory motion. The illusory motion of head nodding is realized with blinking lights for a human-like mobile phone called Elfoid. Two blinking patterns of LEDs are designed based on biological motion and illusory motion from shadows. The patterns are compared to select an appropriate pattern for the illusion of motion in terms of the naturalness of movements and quick perception. The result shows that illusory motions show better performance than biological motion. We also test whether the illusory motion of head nodding provides a positive effect compared with just blinking lights. In experiments, subjects, who are engaged in a role-playing game, are asked to complain to Elfoids about their unpleasant situation. The results show that the subject frustration is eased by Elfoid's illusory head nodding.},
  Day                      = {28-31},
  File                     = {Sumioka2012a.pdf:Sumioka2012a.pdf:PDF},
  Grant                    = {CREST},
  Keywords                 = {telecommunication; nonverbal communication; portable robot avatar; visual illusion of motion},
  Reviewed                 = {Y},
  Url                      = {http://dl.acm.org/authorize?6720741}
}
Antonio Chella, Haris Dindo, Rosario Sorbello, Shuichi Nishio, Hiroshi Ishiguro, "Sing with the Telenoid", In CogSci 2012 Workshop on Teleoperated Android as a Tool for Cognitive Studies, Communication and Art, Sapporo Convention Center, pp. 16-20, August, 2012.
BibTeX:
@InProceedings{Chella2012,
  author =    {Antonio Chella and Haris Dindo and Rosario Sorbello and Shuichi Nishio and Hiroshi Ishiguro},
  title =     {Sing with the Telenoid},
  booktitle = {{C}og{S}ci 2012 Workshop on Teleoperated Android as a Tool for Cognitive Studies, Communication and Art},
  year =      {2012},
  pages =     {16--20},
  address =   {Sapporo Convention Center},
  month =     Aug,
  day =       {1-4},
  file =      {Chella2012.pdf:Chella2012.pdf:PDF},
  keywords =  {Computer Music; Embodiment; Emotions; Imitation learning; Creativity; Human-robot Interaction},
}
Shuichi Nishio, "Transmitting human presence with teleoperated androids: from proprioceptive transfer to elderly care", In CogSci2012 Workshop on Teleoperated Android as a Tool for Cognitive Studies, Communication and Art, Sapporo, Japan, August, 2012.
Abstract: Teleoperated androids, robots with a humanlike appearance equipped with a semi-autonomous teleoperation facility, were first introduced in 2007 with the public release of Geminoid HI-1. Both its appearance that resembles the source person and its teleoperation functionality serve in making Geminoid a research tool for seeking the nature of human presence and personality traits, tracing their origins and implementing them into robots. Since the development of the first teleoperated android, we have been using them in a variety of domains, from studies on basic human natures to practical applications such as elderly care. In this talk, I will introduce some of our findings and ongoing projects.
BibTeX:
@InProceedings{Nishio2012d,
  author =    {Shuichi Nishio},
  title =     {Transmitting human presence with teleoperated androids: from proprioceptive transfer to elderly care},
  booktitle = {CogSci2012 Workshop on Teleoperated Android as a Tool for Cognitive Studies, Communication and Art},
  year =      {2012},
  address =   {Sapporo, Japan},
  month =     Aug,
  abstract =  {Teleoperated androids, robots with a humanlike appearance equipped with a semi-autonomous teleoperation facility, were first introduced in 2007 with the public release of Geminoid HI-1. Both its appearance that resembles the source person and its teleoperation functionality serve in making Geminoid a research tool for seeking the nature of human presence and personality traits, tracing their origins and implementing them into robots. Since the development of the first teleoperated android, we have been using them in a variety of domains, from studies on basic human natures to practical applications such as elderly care. In this talk, I will introduce some of our findings and ongoing projects.},
}
Hiroshi Ishiguro, Shuichi Nishio, Antonio Chella, Rosario Sorbello, Giuseppe Balistreri, Marcello Giardina, Carmelo Cali, "Perceptual Social Dimensions of Human-Humanoid Robot Interaction", In The 12th International Conference on Intelligent Autonomous Systems, Springer Berlin Heidelberg, vol. 194, Jeju International Convention Center, Korea, pp. 409-421, June, 2012.
Abstract: The present paper aims at a descriptive analysis of the main perceptual and social features of natural conditions of agent interaction, which can be specified by agent in human-humanoid robot interaction. A principled approach to human-robot interaction may be assumed to comply with the natural conditions of agents' overt perceptual and social behaviour. To validate our research we used the minimalistic humanoid robot Telenoid. We have conducted human-robot interaction tests with people with no prior interaction experience with robots. By administering our questionnaire to subjects after well defined experimental conditions, an analysis of significant variance correlation among dimensions in ordinary and goal guided contexts of interaction has been performed in order to prove that perception and believability are indicators of social interaction and increase the degree of interaction in human-humanoid interaction. The experimental results showed that Telenoid is seen by users as an autonomous agent on its own rather than a teleoperated artificial agent, and as a believable agent for its naturally acting in response to human agent actions.
BibTeX:
@InProceedings{Ishiguro2012,
  Title                    = {Perceptual Social Dimensions of Human-Humanoid Robot Interaction},
  Author                   = {Hiroshi Ishiguro and Shuichi Nishio and Antonio Chella and Rosario Sorbello and Giuseppe Balistreri and Marcello Giardina and Carmelo Cali},
  Booktitle                = {The 12th International Conference on Intelligent Autonomous Systems},
  Year                     = {2012},

  Address                  = {Jeju International Convention Center, Korea},
  Month                    = Jun,
  Pages                    = {409-421},
  Publisher                = {Springer Berlin Heidelberg},
  Series                   = {Advances in Intelligent Systems and Computing},
  Volume                   = {194},

  Abstract                 = {The present paper aims at a descriptive analysis of the main perceptual and social features of natural conditions of agent interaction, which can be specified by agent in human-humanoid robot interaction. A principled approach to human-robot interaction may be assumed to comply with the natural conditions of agents' overt perceptual and social behaviour. To validate our research we used the minimalistic humanoid robot Telenoid. We have conducted human-robot interaction tests with people with no prior interaction experience with robots. By administering our questionnaire to subjects after well defined experimental conditions, an analysis of significant variance correlation among dimensions in ordinary and goal guided contexts of interaction has been performed in order to prove that perception and believability are indicators of social interaction and increase the degree of interaction in human-humanoid interaction. The experimental results showed that Telenoid is seen by users as an autonomous agent on its own rather than a teleoperated artificial agent, and as a believable agent for its naturally acting in response to human agent actions.},
  Day                      = {26-29},
  Doi                      = {10.1007/978-3-642-33932-5_38},
  File                     = {Ishiguro2012.pdf:Ishiguro2012.pdf:PDF},
  Grant                    = {CREST},
  Keywords                 = {Telenoid, Geminoid, Human Robot Interaction, Social Robot, Humanoid Robot},
  Reviewed                 = {Y},
  Url                      = {http://link.springer.com/chapter/10.1007/978-3-642-33932-5_38}
}
Ryuji Yamazaki, Shuichi Nishio, Kohei Ogawa, Hiroshi Ishiguro, Kohei Matsumura, Kensuke Koda, Tsutomu Fujinami, "How Does Telenoid Affect the Communication between Children in Classroom Setting?", In Extended Abstracts of the Conference on Human Factors in Computing Systems, Austin, Texas, USA, pp. 351-366, May, 2012.
Abstract: Recent advances in robotics have produced kinds of robots that are not only autonomous but can also be tele-operated and have humanlike appearances. However, it has not been sufficiently investigated how tele-operated humanoid robots can affect and be accepted by people in the real world. In the present study, we investigated how elementary school children accepted Telenoid R1, a tele-operated humanoid robot. We conducted a school-based action research project to explore their responses to the robot. Our research theme was the social aspects that might facilitate communication and the purpose was problem finding. There have been considerable studies for resolving the remote disadvantage; although face-to-face is always supposed to be the best way for our communication, we ask whether it is possible to determine the primacy of remote communication over face-to-face. As a result of the field experiment in a school, the structure of children's group work changed and their attitude turned more positive than usual. Their spontaneity was brought out and role differentiation occurred with them. Mainly due to the limitations of Telenoid, children changed their attitude and could work cooperatively. The result suggested that remote communication that sets a limit to our capability could be useful for us to learn and be trained in effective ways to work more cooperatively than usual face-to-face. It remained as future work to compare Telenoid with various media and to explore the appropriate conditions that promote our cooperation.
BibTeX:
@InProceedings{Yamazaki2012,
  Title                    = {How Does Telenoid Affect the Communication between Children in Classroom Setting?},
  Author                   = {Ryuji Yamazaki and Shuichi Nishio and Kohei Ogawa and Hiroshi Ishiguro and Kohei Matsumura and Kensuke Koda and Tsutomu Fujinami},
  Booktitle                = {Extended Abstracts of the Conference on Human Factors in Computing Systems},
  Year                     = {2012},

  Address                  = {Austin, Texas, {USA}},
  Month                    = May,
  Pages                    = {351-366},

  Abstract                 = {Recent advances in robotics have produced kinds of robots that are not only autonomous but can also be tele-operated and have humanlike appearances. However, it has not been sufficiently investigated how tele-operated humanoid robots can affect and be accepted by people in the real world. In the present study, we investigated how elementary school children accepted Telenoid R1, a tele-operated humanoid robot. We conducted a school-based action research project to explore their responses to the robot. Our research theme was the social aspects that might facilitate communication and the purpose was problem finding. There have been considerable studies for resolving the remote disadvantage; although face-to-face is always supposed to be the best way for our communication, we ask whether it is possible to determine the primacy of remote communication over face-to-face. As a result of the field experiment in a school, the structure of children's group work changed and their attitude turned more positive than usual. Their spontaneity was brought out and role differentiation occurred with them. Mainly due to the limitations of Telenoid, children changed their attitude and could work cooperatively. The result suggested that remote communication that sets a limit to our capability could be useful for us to learn and be trained in effective ways to work more cooperatively than usual face-to-face. It remained as future work to compare Telenoid with various media and to explore the appropriate conditions that promote our cooperation.},
  Acknowledgement          = {This research was partially supported by {JST}, {CREST}.},
  Day                      = {5-10},
  Doi                      = {10.1145/2212776.2212814},
  File                     = {Yamazaki2012.pdf:Yamazaki2012.pdf:PDF},
  Grant                    = {CREST},
  Keywords                 = {Tele-operation; android; minimal design; human interaction; role differentiation; cooperation},
  Reviewed                 = {Y},
  Url                      = {http://dl.acm.org/authorize?6764060}
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "BMI-teleoperation of androids can transfer the sense of body ownership", Poster presentation at Cognitive Neuroscience Society's Annual Meeting, Chicago, Illinois, USA, April, 2012.
Abstract: This work examines whether body ownership transfer can be induced by mind controlling android robots. Body ownership transfer is an illusion that happens for some people while tele-operating an android. They occasionally feel the robot's body has become a part of their own body and may feel a touch or a poke on robot's body or face even in the absence of tactile feedback. Previous studies have demonstrated that this feeling of ownership over an agent hand can be induced when robot's hand motions are in perfect synchronization with operator's motions. However, it was not known whether this occurs due to the agency of the motion or by proprioceptive feedback of the real limb. In this work however, subjects imagine their own right or left hand movement while watching android's corresponding hand moving according to the analysis of their brain activity. Through this research, we investigated whether elimination of proprioceptive feedback from operator's real limb can result in the illusion of ownership over external agent body. Evaluation was made by two measurement methods of questionnaire and skin conductance response and results from both methods proved a significant difference in intensity of bodily feeling transfer when the robot's hands moved according to participant's imagination.
BibTeX:
@InProceedings{Alimardani2012,
  author =    {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title =     {{BMI}-teleoperation of androids can transfer the sense of body ownership},
  booktitle = {Cognitive Neuroscience Society's Annual Meeting},
  year =      {2012},
  address =   {Chicago, Illinois, {USA}},
  month =     Apr,
  abstract =  {This work examines whether body ownership transfer can be induced by mind controlling android robots. Body ownership transfer is an illusion that happens for some people while tele-operating an android. They occasionally feel the robot's body has become a part of their own body and may feel a touch or a poke on robot's body or face even in the absence of tactile feedback. Previous studies have demonstrated that this feeling of ownership over an agent hand can be induced when robot's hand motions are in perfect synchronization with operator's motions. However, it was not known whether this occurs due to the agency of the motion or by proprioceptive feedback of the real limb. In this work however, subjects imagine their own right or left hand movement while watching android's corresponding hand moving according to the analysis of their brain activity. Through this research, we investigated whether elimination of proprioceptive feedback from operator's real limb can result in the illusion of ownership over external agent body. Evaluation was made by two measurement methods of questionnaire and skin conductance response and results from both methods proved a significant difference in intensity of bodily feeling transfer when the robot's hands moved according to participant's imagination.},
  day =       {1},
  file =      {Alimardani2012.pdf:Alimardani2012.pdf:PDF},
}
Chaoran Liu, Carlos T. Ishi, Hiroshi Ishiguro, Norihiro Hagita, "Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction", In ACM/IEEE International Conference on Human Robot Interaction, Boston, USA, pp. 285-292, March, 2012.
Abstract: Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, ``Geminoid F'', a typical humanoid robot with less facial degrees of freedom, ``Robovie R2'', and a robot with a 3- axis rotatable neck and movable lips, ``Telenoid R2''). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only and directly mapping people's original motions without gaze information. We also find that an upwards motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.
BibTeX:
@InProceedings{Liu2012,
  Title                    = {Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction},
  Author                   = {Chaoran Liu and Carlos T. Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  Booktitle                = {{ACM/IEEE} International Conference on Human Robot Interaction},
  Year                     = {2012},

  Address                  = {Boston, USA},
  Month                    = Mar,
  Pages                    = {285--292},

  Abstract                 = {Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, ``Geminoid F'', a typical humanoid robot with less facial degrees of freedom, ``Robovie R2'', and a robot with a 3- axis rotatable neck and movable lips, ``Telenoid R2''). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only and directly mapping people's original motions without gaze information. We also find that an upwards motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs equally to directly mapping people's original motions with gaze information in terms of perceived naturalness.},
  Acknowledgement          = {This work was supported by JST CREST.},
  Day                      = {5-8},
  Doi                      = {10.1145/2157689.2157797},
  File                     = {Liu2012.pdf:Liu2012.pdf:PDF},
  Grant                    = {CREST},
  Keywords                 = {Head motion; dialogue acts; eye gazing; motion generation.},
  Reviewed                 = {Y}
}
Maryam Alimardani, Shuichi Nishio, Hiroshi Ishiguro, "Body ownership transfer to tele-operated android through mind controlling", In HAI-2011, Kyoto Institute of Technology, pp. I-2A-1, December, 2011.
Abstract: This work examines whether body ownership transfer can be induced by mind controlling android robots. Body ownership transfer is an illusion that happens for some people while tele-operating an android. They occasionally feel the robot's body has become a part of their own body and may feel a touch or a poke on robot's body or face even in the absence of tactile feedback. Previous studies have demonstrated that this feeling of ownership over an agent hand can be induced when robot's hand motions are in synchronization with operator's motions. However, it was not known whether this occurs due to the agency of the motion or by proprioceptive feedback of the real hand. In this work, subjects imagine their own right or left hand movement while watching android's corresponding hand moving according to the analysis of their brain activity. Through this research, we investigated whether elimination of proprioceptive feedback from operator's real limb can result in the illusion of ownership over external agent body. Evaluation was made by two measurement methods of questionnaire and skin conductance response and results from both methods proved a significant difference in intensity of bodily feeling transfer when the robot's hands moved according to participant's imagination.
BibTeX:
@InProceedings{Alimardani2011,
  author =          {Maryam Alimardani and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Body ownership transfer to tele-operated android through mind controlling},
  booktitle =       {{HAI}-2011},
  year =            {2011},
  pages =           {I-2{A}-1},
  address =         {Kyoto Institute of Technology},
  month =           Dec,
  abstract =        {This work examines whether body ownership transfer can be induced by mind controlling android robots. Body ownership transfer is an illusion that happens for some people while tele-operating an android. They occasionally feel the robot's body has become a part of their own body and may feel a touch or a poke on robot's body or face even in the absence of tactile feedback. Previous studies have demonstrated that this feeling of ownership over an agent hand can be induced when robot's hand motions are in synchronization with operator's motions. However, it was not known whether this occurs due to the agency of the motion or by proprioceptive feedback of the real hand. In this work, subjects imagine their own right or left hand movement while watching android's corresponding hand moving according to the analysis of their brain activity. Through this research, we investigated whether elimination of proprioceptive feedback from operator's real limb can result in the illusion of ownership over external agent body. Evaluation was made by two measurement methods of questionnaire and skin conductance response and results from both methods proved a significant difference in intensity of bodily feeling transfer when the robot's hands moved according to participant's imagination.},
  day =             {3-5},
  file =            {Alimardani2011.pdf:Alimardani2011.pdf:PDF;I-2A-1.pdf:http\://www.ii.is.kit.ac.jp/hai2011/proceedings/pdf/I-2A-1.pdf:PDF},
  url =             {http://www.ii.is.kit.ac.jp/hai2011/proceedings/html/paper/paper-1-2a-1.html}
}
Giuseppe Balistreri, Shuichi Nishio, Rosario Sorbello, Antonio Chella, Hiroshi Ishiguro, "A Natural Human Robot Meta-comunication through the Integration of Android's Sensors with Environment Embedded Sensors", In Biologically Inspired Cognitive Architectures 2011- Proceedings of the Second Annual Meeting of the BICA Society, IOS Press, vol. 233, Arlington, Virginia, USA, pp. 26-38, November, 2011.
Abstract: Building robots that closely resemble humans allows us to study phenomena in our daily human-to-human natural interactions that cannot be studied using mechanical-looking robots. This is supported by the fact that human-like devices can more easily elicit the same kind of responses that people use in their natural interactions. However, several studies have shown that there is a strict and complex relationship between the outer appearance and the behavior shown by the robot and, as Masahiro Mori observed, a human-like appearance is not enough to give a positive impression. The robot should behave closely to humans, and should have a sense of perception that enables it to communicate with humans. Our past experience with the android "Geminoid HI-1" demonstrated that the sensors equipping the robot are not enough to perform human-like communication, mainly because of a limited sensing range. To overcome this problem, we endowed the environment around the robot with perceptive capabilities by embedding sensors such as cameras into it. This paper reports a preliminary study on an improvement of the controlling system by integrating cameras in the surrounding environment, so that a human-like perception can be provided to the android. The integration of the development of androids and the investigation of human behaviors constitutes a new research area fusing engineering and cognitive sciences.
BibTeX:
@InProceedings{Balistreri2011a,
  author =    {Giuseppe Balistreri and Shuichi Nishio and Rosario Sorbello and Antonio Chella and Hiroshi Ishiguro},
  title =     {A Natural Human Robot Meta-comunication through the Integration of Android's Sensors with Environment Embedded Sensors},
  booktitle = {Biologically Inspired Cognitive Architectures 2011- Proceedings of the Second Annual Meeting of the {BICA} Society},
  year =      {2011},
  volume =    {233},
  pages =     {26-38},
  address =   {Arlington, Virginia, {USA}},
  month =     Nov,
  publisher = {{IOS} Press},
  abstract =  {Building robots that closely resemble humans allows us to study phenomena in our daily human-to-human natural interactions that cannot be studied using mechanical-looking robots. This is supported by the fact that human-like devices can more easily elicit the same kind of responses that people use in their natural interactions. However, several studies have shown that there is a strict and complex relationship between the outer appearance and the behavior shown by the robot and, as Masahiro Mori observed, a human-like appearance is not enough to give a positive impression. The robot should behave closely to humans, and should have a sense of perception that enables it to communicate with humans. Our past experience with the android ``Geminoid HI-1'' demonstrated that the sensors equipping the robot are not enough to perform human-like communication, mainly because of a limited sensing range. To overcome this problem, we endowed the environment around the robot with perceptive capabilities by embedding sensors such as cameras into it. This paper reports a preliminary study on an improvement of the controlling system by integrating cameras in the surrounding environment, so that a human-like perception can be provided to the android. The integration of the development of androids and the investigation of human behaviors constitutes a new research area fusing engineering and cognitive sciences.},
  day =       {5-6},
  file =      {Balistreri2011a.pdf:Balistreri2011a.pdf:PDF},
  keywords =  {Android; gaze; sensor network},
}
Martin Cooney, Takayuki Kanda, Aris Alissandrakis, Hiroshi Ishiguro, "Interaction Design for an Enjoyable Play Interaction with a Small Humanoid Robot", In IEEE-RAS International Conference on Humanoid Robots (Humanoids), Bled, Slovenia, pp. 112-119, October, 2011.
Abstract: Robots designed to act as companions are expected to be able to interact with people in an enjoyable fashion. In particular, our aim is to enable small companion robots to respond in a pleasant way when people pick them up and play with them. To this end, we developed a gesture recognition system capable of recognizing play gestures which involve a person moving a small humanoid robot's full body ("full-body gestures"). However, such recognition by itself is not enough to provide a nice interaction. In fact, interactions with an initial, naive version of our system frequently fail. The question then becomes: what more is required? That is, what sort of interaction design is required in order to create successful interactions? To answer this question, we analyze typical failures which occur and compile a list of guidelines. Then, we implement this model in our robot, proposing strategies for how a robot can provide "reward" and suggest goals for the interaction. Finally, we conduct a validation experiment. We find that our interaction design with "persisting intentions" can be used to establish an enjoyable play interaction.
BibTeX:
@InProceedings{Cooney2011,
  Title                    = {Interaction Design for an Enjoyable Play Interaction with a Small Humanoid Robot},
  Author                   = {Martin Cooney and Takayuki Kanda and Aris Alissandrakis and Hiroshi Ishiguro},
  Booktitle                = {{IEEE-RAS} International Conference on Humanoid Robots (Humanoids)},
  Year                     = {2011},

  Address                  = {Bled, Slovenia},
  Month                    = Oct,
  Pages                    = {112--119},

  Abstract                 = {Robots designed to act as companions are expected to be able to interact with people in an enjoyable fashion. In particular, our aim is to enable small companion robots to respond in a pleasant way when people pick them up and play with them. To this end, we developed a gesture recognition system capable of recognizing play gestures which involve a person moving a small humanoid robot's full body ("full-body gestures"). However, such recognition by itself is not enough to provide a nice interaction. In fact, interactions with an initial, naive version of our system frequently fail. The question then becomes: what more is required? That is, what sort of interaction design is required in order to create successful interactions? To answer this question, we analyze typical failures which occur and compile a list of guidelines. Then, we implement this model in our robot, proposing strategies for how a robot can provide ``reward'' and suggest goals for the interaction. Finally, we conduct a validation experiment. We find that our interaction design with ``persisting intentions'' can be used to establish an enjoyable play interaction.},
  Acknowledgement          = {We'd like to thank everyone who helped with this project.},
  Day                      = {26-28},
  File                     = {Cooney2011.pdf:Cooney2011.pdf:PDF},
  Grant                    = {CREST},
  Keywords                 = {interaction design; enjoyment; playful human-robot interaction; small humanoid robot},
  Reviewed                 = {Y}
}
Giuseppe Balistreri, Shuichi Nishio, Rosario Sorbello, Hiroshi Ishiguro, "Integrating Built-in Sensors of an Android with Sensors Embedded in the Environment for Studying a More Natural Human-Robot Interaction", In Lecture Notes in Computer Science (12th International Conference of the Italian Association for Artificial Intelligence), Springer, vol. 6934, Palermo, Italy, pp. 432-437, September, 2011.
Abstract: Several studies have shown that there is a strict and complex relationship between the outer appearance and the behavior shown by the robot, and that a human-like appearance is not enough to give a positive impression. The robot should behave closely to humans, and should have a sense of perception that enables it to communicate with humans. Our past experience with the android "Geminoid HI-1" demonstrated that the sensors equipping the robot are not enough to perform human-like communication, mainly because of a limited sensing range. To overcome this problem, we endowed the environment around the robot with perceptive capabilities by embedding sensors such as cameras into it. This paper reports a preliminary study on an improvement of the controlling system by integrating cameras in the surrounding environment, so that a human-like perception can be provided to the android. The integration of the development of androids and the investigation of human behaviors constitutes a new research area fusing engineering and cognitive sciences.
BibTeX:
@InProceedings{Balistreri2011,
  author =    {Giuseppe Balistreri and Shuichi Nishio and Rosario Sorbello and Hiroshi Ishiguro},
  title =     {Integrating Built-in Sensors of an Android with Sensors Embedded in the Environment for Studying a More Natural Human-Robot Interaction},
  booktitle = {Lecture Notes in Computer Science (12th International Conference of the Italian Association for Artificial Intelligence)},
  year =      {2011},
  volume =    {6934},
  pages =     {432--437},
  address =   {Palermo, Italy},
  month =     Sep,
  publisher = {Springer},
  abstract =  {Several studies have shown that there is a strict and complex relationship between the outer appearance and the behavior shown by the robot, and that a human-like appearance is not enough to give a positive impression. The robot should behave closely to humans, and should have a sense of perception that enables it to communicate with humans. Our past experience with the android ``Geminoid HI-1'' demonstrated that the sensors equipping the robot are not enough to perform human-like communication, mainly because of a limited sensing range. To overcome this problem, we endowed the environment around the robot with perceptive capabilities by embedding sensors such as cameras into it. This paper reports a preliminary study on an improvement of the controlling system by integrating cameras in the surrounding environment, so that a human-like perception can be provided to the android. The integration of the development of androids and the investigation of human behaviors constitutes a new research area fusing engineering and cognitive sciences.},
  bibsource = {DBLP, http://dblp.uni-trier.de},
  doi =       {10.1007/978-3-642-23954-0_43},
  file =      {Balistreri2011.pdf:Balistreri2011.pdf:PDF},
  keywords =  {Android; gaze; sensor network},
  url =       {http://www.springerlink.com/content/c015680178436107/}
}
Panikos Heracleous, Miki Sato, Carlos T. Ishi, Hiroshi Ishiguro, Norihiro Hagita, "Speech Production in Noisy Environments and the Effect on Automatic Speech Recognition", In International Congress of Phonetic Sciences, Hong Kong, China, pp. 855-858, August, 2011.
Abstract: Speech is bimodal in nature, comprising audio and visual modalities. In addition to acoustic speech perception, speech can also be perceived using visual information provided by the mouth and face (i.e., automatic lipreading). In this study, visual speech production in noisy environments is investigated. The authors show that the Lombard effect plays an important role not only in audio speech but also in visual speech production. Experimental results show that when visual speech is produced in noisy environments, the visual parameters of the mouth and face change. As a result, the performance of a visual speech recognizer decreases.
BibTeX:
@InProceedings{Heracleous2011e,
  Title                    = {Speech Production in Noisy Environments and the Effect on Automatic Speech Recognition},
  Author                   = {Panikos Heracleous and Miki Sato and Carlos T. Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  Booktitle                = {International Congress of Phonetic Sciences},
  Year                     = {2011},

  Address                  = {Hong Kong, China},
  Month                    = Aug,
  Pages                    = {855--858},

  Abstract                 = {Speech is bimodal in nature, comprising audio and visual modalities. In addition to acoustic speech perception, speech can also be perceived using visual information provided by the mouth and face (i.e., automatic lipreading). In this study, visual speech production in noisy environments is investigated. The authors show that the Lombard effect plays an important role not only in audio speech but also in visual speech production. Experimental results show that when visual speech is produced in noisy environments, the visual parameters of the mouth and face change. As a result, the performance of a visual speech recognizer decreases.},
  Acknowledgement          = {This work has been partially supported by {JST CREST} 'Studies on Cellphone-type Teleoperated Androids Transmitting Human Presence'.},
  Day                      = {18-21},
  File                     = {Heracleous2011e.pdf:Heracleous2011e.pdf:PDF;Heracleous.pdf:http\://www.icphs2011.hk/resources/OnlineProceedings/RegularSession/Heracleous/Heracleous.pdf:PDF},
  Grant                    = {CREST},
  Keywords                 = {speech; noisy environments; Lombard effect; lipreading},
  Reviewed                 = {Y}
}
Kohei Ogawa, Shuichi Nishio, Kensuke Koda, Koichi Taura, Takashi Minato, Carlos T. Ishi, Hiroshi Ishiguro, "Telenoid: Tele-presence android for communication", In SIGGRAPH Emerging Technology, Vancouver, Canada, pp. 15, August, 2011.
Abstract: In this research, a new telecommunication system called "Telenoid" is presented, which focuses on the idea of transferring a human's "presence". Telenoid was developed to appear and behave as a minimal design of human features (Fig. 2(A)). A minimal human conveys the impression of human existence at first glance, but it does not suggest anything about personal features such as being male or female, old or young. Previously, an android with more realistic features, called Geminoid, was proposed. However, because of its unique appearance, which is a copy of a particular model, it is difficult to imagine other people's presence through Geminoid while they are operating it. On the other hand, Telenoid is designed to hold an anonymous identity, which allows people to communicate with their acquaintances far away regardless of gender and age. We expect that Telenoid can be used as a medium that transfers a human's presence through its minimal feature design.
BibTeX:
@InProceedings{Ogawa2011a,
  Title                    = {Telenoid: Tele-presence android for communication},
  Author                   = {Kohei Ogawa and Shuichi Nishio and Kensuke Koda and Koichi Taura and Takashi Minato and Carlos T. Ishi and Hiroshi Ishiguro},
  Booktitle                = {{SIGGRAPH} Emerging Technology},
  Year                     = {2011},

  Address                  = {Vancouver, Canada},
  Month                    = Aug,
  Pages                    = {15},

  Abstract                 = {In this research, a new telecommunication system called "Telenoid" is presented, which focuses on the idea of transferring a human's "presence". Telenoid was developed to appear and behave as a minimal design of human features (Fig. 2(A)). A minimal human conveys the impression of human existence at first glance, but it does not suggest anything about personal features such as being male or female, old or young. Previously, an android with more realistic features, called Geminoid, was proposed. However, because of its unique appearance, which is a copy of a particular model, it is difficult to imagine other people's presence through Geminoid while they are operating it. On the other hand, Telenoid is designed to hold an anonymous identity, which allows people to communicate with their acquaintances far away regardless of gender and age. We expect that Telenoid can be used as a medium that transfers a human's presence through its minimal feature design.},
  Acknowledgement          = {JST/CREST},
  Day                      = {7-11},
  Doi                      = {10.1145/2048259.2048274},
  File                     = {Ogawa2011a.pdf:Ogawa2011a.pdf:PDF},
  Grant                    = {CREST},
  Reviewed                 = {Y},
  Url                      = {http://dl.acm.org/authorize?6594082}
}
Carlos T. Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita, "Speech-driven lip motion generation for tele-operated humanoid robots", In the International Conference on Audio-Visual Speech Processing 2011, Volterra, Italy, pp. 131-135, August, 2011.
Abstract: To generate natural lip motions for a tele-operated humanoid robot (such as an android) from the utterances of the operator, we developed a speech-driven lip motion generation method. The proposed method is based on the rotation of the vowel space, given by the first and second formants, around the center vowel, and a mapping to the lip opening degrees. The method requires the calibration of only one parameter for speaker normalization, so that no other training of models is required. In a pilot experiment, the proposed audio-based method was perceived as more natural than vision-based approaches, regardless of the language.
BibTeX:
@InProceedings{Ishi2011a,
  Title                    = {Speech-driven lip motion generation for tele-operated humanoid robots},
  Author                   = {Carlos T. Ishi and Chaoran Liu and Hiroshi Ishiguro and Norihiro Hagita},
  Booktitle                = {the International Conference on Audio-Visual Speech Processing 2011},
  Year                     = {2011},

  Address                  = {Volterra, Italy},
  Month                    = Aug,
  Pages                    = {131-135},

  Abstract                 = {To generate natural lip motions for a tele-operated humanoid robot (such as an android) from the utterances of the operator, we developed a speech-driven lip motion generation method. The proposed method is based on the rotation of the vowel space, given by the first and second formants, around the center vowel, and a mapping to the lip opening degrees. The method requires the calibration of only one parameter for speaker normalization, so that no other training of models is required. In a pilot experiment, the proposed audio-based method was perceived as more natural than vision-based approaches, regardless of the language.},
  Acknowledgement          = {This work was supported by JST/CREST. We thank Dr. Takashi Minato for advices on the motion control of the robots.},
  Day                      = {31-3},
  File                     = {Ishi2011a.pdf:pdf/Ishi2011a.pdf:PDF},
  Grant                    = {CREST},
  Keywords                 = {lip motion; formant; humanoid robot; tele-operation; synchronization},
  Reviewed                 = {Y}
}
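The formant-based mapping that the Ishi2011a abstract describes can be sketched roughly as follows; this is a minimal illustrative assumption, not the authors' implementation, and the names `lip_opening`, `theta`, and `gain` are hypothetical:

```python
import math

def lip_opening(f1, f2, f1_center, f2_center, theta=0.0, gain=0.001):
    """Map a speech frame's formants to a lip opening degree in [0, 1]."""
    # Vector from the speaker's center vowel to the current frame
    # in (F1, F2) formant space.
    dx, dy = f1 - f1_center, f2 - f2_center
    # Rotate the vowel space by the calibrated angle theta
    # and take the rotated F1-axis component.
    rx = dx * math.cos(theta) - dy * math.sin(theta)
    # Higher F1 (more open vowels) yields a wider lip opening;
    # clamp the scaled value to the valid range.
    return max(0.0, min(1.0, gain * rx))
```

Calibrating the single rotation angle `theta` per speaker would mirror the paper's claim that only one parameter needs tuning for speaker normalization.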
Panikos Heracleous, Norihiro Hagita, "Automatic Recognition of Speech without any audio information", In IEEE International Conference on Acoustics, Speech and Signal Processing, Prague, Czech Republic, pp. 2392-2395, May, 2011.
Abstract: This article introduces automatic recognition of speech without any audio information. Movements of the tongue, lips, and jaw are tracked by an Electro-Magnetic Articulography (EMA) device and are used as features to create hidden Markov models (HMMs) and conduct automatic speech recognition in a conventional way. The results obtained are promising, confirming that phonetic features characterizing articulation are as discriminating as those characterizing acoustics (except for voicing). The results also show that using tongue parameters results in higher accuracy than using lip parameters.
BibTeX:
@InProceedings{Heracleous2011a,
  Title                    = {Automatic Recognition of Speech without any audio information},
  Author                   = {Panikos Heracleous and Norihiro Hagita},
  Booktitle                = {{IEEE} International Conference on Acoustics, Speech and Signal Processing},
  Year                     = {2011},

  Address                  = {Prague, Czech Republic},
  Month                    = May,
  Pages                    = {2392--2395},

  Abstract                 = {This article introduces automatic recognition of speech without any audio information. Movements of the tongue, lips, and jaw are tracked by an Electro-Magnetic Articulography ({EMA}) device and are used as features to create hidden Markov models ({HMM}s) and conduct automatic speech recognition in a conventional way. The results obtained are promising, confirming that phonetic features characterizing articulation are as discriminating as those characterizing acoustics (except for voicing). The results also show that using tongue parameters results in higher accuracy than using lip parameters.},
  Day                      = {22-27},
  Doi                      = {10.1109/ICASSP.2011.5946965},
  File                     = {Heracleous2011a.pdf:Heracleous2011a.pdf:PDF},
  Grant                    = {CREST},
  Reviewed                 = {Y},
  Url                      = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5946965}
}
Panikos Heracleous, Hiroshi Ishiguro, Norihiro Hagita, "Visual-speech to text conversion applicable to telephone communication for deaf individuals", In International Conference on Telecommunications, Ayia Napa, Cyprus, pp. 130-133, May, 2011.
Abstract: Access to communication technologies has become essential for deaf individuals. This study introduces the initial step of an automatic translation system able to translate the visual speech used by deaf individuals into text or auditory speech. Such a system would enable deaf users to communicate with each other and with normal-hearing people through telephone networks or the Internet, using only telephone devices equipped with simple cameras. In particular, this paper introduces automatic recognition and conversion to text of Cued Speech for French. Cued Speech is a visual mode of communication used in the deaf community. Using hand shapes placed in different positions near the face as a complement to lipreading, all the sounds of a spoken language can be visually distinguished and perceived. Experimental results show high recognition rates for both isolated word and continuous phoneme recognition experiments in Cued Speech for French.
BibTeX:
@InProceedings{Heracleous2011f,
  Title                    = {Visual-speech to text conversion applicable to telephone communication for deaf individuals},
  Author                   = {Panikos Heracleous and Hiroshi Ishiguro and Norihiro Hagita},
  Booktitle                = {International Conference on Telecommunications},
  Year                     = {2011},

  Address                  = {Ayia Napa, Cyprus},
  Month                    = May,
  Pages                    = {130--133},

  Abstract                 = {Access to communication technologies has become essential for deaf individuals. This study introduces the initial step of an automatic translation system able to translate the visual speech used by deaf individuals into text or auditory speech. Such a system would enable deaf users to communicate with each other and with normal-hearing people through telephone networks or the Internet, using only telephone devices equipped with simple cameras. In particular, this paper introduces automatic recognition and conversion to text of Cued Speech for French. Cued Speech is a visual mode of communication used in the deaf community. Using hand shapes placed in different positions near the face as a complement to lipreading, all the sounds of a spoken language can be visually distinguished and perceived. Experimental results show high recognition rates for both isolated word and continuous phoneme recognition experiments in Cued Speech for French.},
  Day                      = {8-11},
  Doi                      = {10.1109/CTS.2011.5898904},
  File                     = {Heracleous2011f.pdf:Heracleous2011f.pdf:PDF},
  Grant                    = {CREST},
  Reviewed                 = {Y},
  Url                      = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5898904}
}
Panikos Heracleous, Miki Sato, Carlos Toshinori Ishi, Hiroshi Ishiguro, Norihiro Hagita, "The effect of environmental noise to automatic lip-reading", In Spring Meeting Acoustical Society of Japan, Waseda University, Tokyo, Japan, pp. 5-8, March, 2011.
Abstract: In automatic visual speech recognition, verbal messages are interpreted by monitoring a talker's lip and facial movements using automated tools based on statistical methods. Automatic visual speech recognition has applications in audiovisual speech recognition and in lip shape synthesis. This study investigates automatic visual and audiovisual speech recognition in the presence of noise. The authors show that the Lombard effect plays an important role not only in audio but also in automatic visual speech recognition. Experimental results of a multispeaker continuous phoneme recognition experiment show that the performance of visual and audiovisual speech recognition systems further increases when the visual Lombard effect is also considered.
BibTeX:
@InProceedings{Heracleous2011c,
  Title                    = {The effect of environmental noise to automatic lip-reading},
  Author                   = {Panikos Heracleous and Miki Sato and Carlos Toshinori Ishi and Hiroshi Ishiguro and Norihiro Hagita},
  Booktitle                = {Spring Meeting Acoustical Society of Japan},
  Year                     = {2011},

  Address                  = {Waseda University, Tokyo, Japan},
  Month                    = Mar,
  Pages                    = {5--8},
  Series                   = {1-5-3},

  Abstract                 = {In automatic visual speech recognition, verbal messages are interpreted by monitoring a talker's lip and facial movements using automated tools based on statistical methods. Automatic visual speech recognition has applications in audiovisual speech recognition and in lip shape synthesis. This study investigates automatic visual and audiovisual speech recognition in the presence of noise. The authors show that the Lombard effect plays an important role not only in audio but also in automatic visual speech recognition. Experimental results of a multispeaker continuous phoneme recognition experiment show that the performance of visual and audiovisual speech recognition systems further increases when the visual Lombard effect is also considered.},
  Acknowledgement          = {JST/CREST},
  File                     = {Heracleous2011c.pdf:Heracleous2011c.pdf:PDF},
  Grant                    = {CREST},
  Reviewed                 = {Y}
}
Astrid M. von der Pütten, Nicole C. Krämer, Christian Becker-Asano, Hiroshi Ishiguro, "An Android in the Field", In the 6th ACM/IEEE International Conference on Human-Robot Interaction, Lausanne, Switzerland, pp. 283-284, March, 2011.
Abstract: Since most robots cannot easily be deployed in real-life scenarios, only a few studies investigate users' behavior towards humanoids or androids in a natural environment. We present an observational field study and data on unscripted interactions between humans and the android robot "Geminoid HI-1". First results show that almost half of the subjects mistook Geminoid HI-1 for a human. Even those who recognized the android as a robot showed interest rather than negative emotions and explored the robot's capabilities.
BibTeX:
@InProceedings{Putten2011,
  author =    {Astrid M. von der P\"{u}tten and Nicole C. Kr\"{a}mer and Christian Becker-Asano and Hiroshi Ishiguro},
  title =     {An Android in the Field},
  booktitle = {the 6th {ACM/IEEE} International Conference on Human-Robot Interaction},
  year =      {2011},
  pages =     {283--284},
  address =   {Lausanne, Switzerland},
  month =     Mar,
  abstract =  {Since most robots cannot easily be deployed in real-life scenarios, only a few studies investigate users' behavior towards humanoids or androids in a natural environment. We present an observational field study and data on unscripted interactions between humans and the android robot "Geminoid HI-1". First results show that almost half of the subjects mistook Geminoid HI-1 for a human. Even those who recognized the android as a robot showed interest rather than negative emotions and explored the robot's capabilities.},
  day =       {6-9},
  doi =       {10.1145/1957656.1957772},
}
Ilona Straub, Shuichi Nishio, Hiroshi Ishiguro, "Incorporated identity in interaction with a teleoperated android robot: A case study", In IEEE International Symposium on Robot and Human Interactive Communication, Viareggio, Italy, pp. 139-144, September, 2010.
Abstract: In the near future, artificial social agents, embodied as virtual agents or as robots with a humanoid appearance, will be placed in public settings and used as interaction tools. Considering the uncanny-valley effect and images of robots as a threat to humanity, a study of the acceptance and handling of such an interaction tool by the broad public is of great interest. The following study is based on qualitative methods of interaction analysis, focusing on tendencies in people's ways of controlling or perceiving a teleoperated android robot in an open public space. This field study shows the tendency of users to ascribe an identity of its own to the teleoperated android robot Geminoid HI-1, independent from the identity of the controlling person. Both sides of the interaction unit were analyzed for 1) verbal cues about identity presentation on the side of the teleoperator controlling the robot, and 2) verbal cues about identity perception of Geminoid HI-1 on the side of the interlocutor talking to the robot. The study unveils identity-creation, identity-switching, identity-mediation and identity-imitation of the teleoperators' own identity cues, and the use of metaphorical language by the interlocutors, showing forms of anthropomorphizing and mentalizing the android robot during interaction. Both sides of the interaction unit thus confer an 'incorporated identity' on the android robot Geminoid HI-1 and show tendencies to treat it as a social agent.
BibTeX:
@InProceedings{Straub2010a,
  author =          {Ilona Straub and Shuichi Nishio and Hiroshi Ishiguro},
  title =           {Incorporated identity in interaction with a teleoperated android robot: A case study},
  booktitle =       {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year =            {2010},
  pages =           {139--144},
  address =         {Viareggio, Italy},
  month =           Sep,
  abstract =        {In the near future, artificial social agents, embodied as virtual agents or as robots with a humanoid appearance, will be placed in public settings and used as interaction tools. Considering the uncanny-valley effect and images of robots as a threat to humanity, a study of the acceptance and handling of such an interaction tool by the broad public is of great interest. The following study is based on qualitative methods of interaction analysis, focusing on tendencies in people's ways of controlling or perceiving a teleoperated android robot in an open public space. This field study shows the tendency of users to ascribe an identity of its own to the teleoperated android robot Geminoid HI-1, independent from the identity of the controlling person. Both sides of the interaction unit were analyzed for 1) verbal cues about identity presentation on the side of the teleoperator controlling the robot, and 2) verbal cues about identity perception of Geminoid HI-1 on the side of the interlocutor talking to the robot. The study unveils identity-creation, identity-switching, identity-mediation and identity-imitation of the teleoperators' own identity cues, and the use of metaphorical language by the interlocutors, showing forms of anthropomorphizing and mentalizing the android robot during interaction. Both sides of the interaction unit thus confer an `incorporated identity' on the android robot Geminoid HI-1 and show tendencies to treat it as a social agent.},
  doi =             {10.1109/ROMAN.2010.5598695},
  file =            {Straub2010a.pdf:Straub2010a.pdf:PDF},
  issn =            {1944-9445},
  keywords =        {Geminoid HI-1;artificial social agent robot;identity-creation;identity-imitation;identity-mediation;identity-switching;interaction tool analysis;metaphorical language;qualitative methods;teleoperated android robot;virtual agents;human-robot interaction;humanoid robots;telerobotics;},
  url =             {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5598695}
}
Christian Becker-Asano, Kohei Ogawa, Shuichi Nishio, Hiroshi Ishiguro, "Exploring the uncanny valley with Geminoid HI-1 in a real-world application", In IADIS International Conference on Interfaces and Human Computer Interaction, Freiburg, Germany, pp. 121-128, July, 2010.
Abstract: This paper presents a qualitative analysis of 24 interviews with visitors of the ARS Electronica festival in September 2009 in Linz, Austria, who interacted with the android robot Geminoid HI-1, while it was tele-operated by the first author. Only 37.5% of the interviewed visitors reported an uncanny feeling, with 29% even enjoying the conversation. In five cases the interviewees' feelings even changed during the interaction with Geminoid HI-1. A number of possible improvements regarding Geminoid's bodily movements, facial expressivity, and ability to direct its gaze became apparent, which inform our future research with and development of android robots.
BibTeX:
@InProceedings{Becker-Asano2010,
  author =    {Christian Becker-Asano and Kohei Ogawa and Shuichi Nishio and Hiroshi Ishiguro},
  title =     {Exploring the uncanny valley with Geminoid {HI}-1 in a real-world application},
  booktitle = {{IADIS} International Conference on Interfaces and Human Computer Interaction},
  year =      {2010},
  pages =     {121--128},
  address =   {Freiburg, Germany},
  month =     Jul,
  abstract =  {This paper presents a qualitative analysis of 24 interviews with visitors of the ARS Electronica festival in September 2009 in Linz, Austria, who interacted with the android robot Geminoid {HI-1}, while it was tele-operated by the first author. Only 37.5\% of the interviewed visitors reported an uncanny feeling with 29\% even enjoying the conversation. In five cases the interviewees' feelings even changed during the interaction with Geminoid {HI-1}. A number of possible improvements regarding Geminoid's bodily movements, facial expressivity, and ability to direct its gaze became apparent, which inform our future research with and development of android robots.},
  file =      {Becker-Asano2010.pdf:Becker-Asano2010.pdf:PDF},
  url =       {http://www.iadisportal.org/digital-library/exploring-the-uncanny-valley-with-geminoid-hi-1-in-a-real-world-application}
}
Ilona Straub, Shuichi Nishio, Hiroshi Ishiguro, "Incorporated Identity in Interaction with a Teleoperated Android Robot: A Case Study", In International Conference on Culture and Computing, Kyoto, Japan, pp. 63-75, February, 2010.
Abstract: In the near future, artificial social agents, embodied as virtual agents or as robots with a humanoid appearance, will be placed in public settings and used as interaction tools. Given the uncanny-valley effect and images of robots as a threat to humanity, a study of the acceptance and handling of such an interaction tool by the broad public is of great interest. The following study is based on qualitative methods of interaction analysis, focusing on tendencies in how people control or perceive a teleoperated android robot in an open public space. This field study shows the tendency of users to ascribe an identity of its own to the teleoperated android robot Geminoid HI-1, independent of the identity of the controlling person. Both sides of the interaction unit were analyzed for 1) verbal cues about identity presentation on the side of the teleoperator controlling the robot and 2) verbal cues about identity perception of Geminoid HI-1 on the side of the interlocutor talking to the robot. The study unveils identity-creation, identity-switching, identity-mediation and identity-imitation of the teleoperators' own identity cues, as well as the interlocutors' use of metaphorical language, showing forms of anthropomorphizing and mentalizing the android robot during interaction. Both sides of the interaction unit thus confer an 'incorporated identity' on the android robot Geminoid HI-1 and show tendencies to treat the android robot as a social agent.
BibTeX:
@InProceedings{Straub2010,
  author =    {Ilona Straub and Shuichi Nishio and Hiroshi Ishiguro},
  title =     {Incorporated Identity in Interaction with a Teleoperated Android Robot: A Case Study},
  booktitle = {International Conference on Culture and Computing},
  year =      {2010},
  pages =     {63--75},
  address =   {Kyoto, Japan},
  month =     Feb,
  abstract =  {In the near future, artificial social agents, embodied as virtual agents or as robots with a humanoid appearance, will be placed in public settings and used as interaction tools. Given the uncanny-valley effect and images of robots as a threat to humanity, a study of the acceptance and handling of such an interaction tool by the broad public is of great interest. The following study is based on qualitative methods of interaction analysis, focusing on tendencies in how people control or perceive a teleoperated android robot in an open public space. This field study shows the tendency of users to ascribe an identity of its own to the teleoperated android robot Geminoid HI-1, independent of the identity of the controlling person. Both sides of the interaction unit were analyzed for 1) verbal cues about identity presentation on the side of the teleoperator controlling the robot and 2) verbal cues about identity perception of Geminoid HI-1 on the side of the interlocutor talking to the robot. The study unveils identity-creation, identity-switching, identity-mediation and identity-imitation of the teleoperators' own identity cues, as well as the interlocutors' use of metaphorical language, showing forms of anthropomorphizing and mentalizing the android robot during interaction. Both sides of the interaction unit thus confer an `incorporated identity' on the android robot Geminoid HI-1 and show tendencies to treat the android robot as a social agent.},
  file =      {Straub2010.pdf:Straub2010.pdf:PDF},
}
Christian Becker-Asano, Hiroshi Ishiguro, "Laughter in Social Robotics - no laughing matter", In International Workshop on Social Intelligence Design, Kyoto, Japan, pp. 287-300, November, 2009.
Abstract: In this paper we describe our work in progress on investigating an understudied aspect of social interaction, namely laughter. In social interaction between humans, laughter occurs in a variety of contexts featuring diverse meanings and connotations. Thus, we started to investigate the usefulness of this auditory and behavioral signal applied to social robotics. We first report on results of two surveys conducted to assess the subjectively evaluated naturalness of different types of laughter applied to two humanoid robots. Then we describe the effects of laughter when combined with an android's motion and presented to uninformed participants during playful interaction with another human. In essence, we learned that the social effect of laughter heavily depends on at least the following three factors: first, the situational context, which is not only determined by the task at hand, but also by linguistic content as well as non-verbal expressions; second, the type and quality of laughter synthesis in combination with an artificial laugher's outer appearance; and third, the interaction dynamics, which partly depend on a perceiver's gender, personality, and cultural as well as educational background.
BibTeX:
@InProceedings{Becker-Asano2009,
  author =          {Christian Becker-Asano and Hiroshi Ishiguro},
  title =           {Laughter in Social Robotics - no laughing matter},
  booktitle =       {International Workshop on Social Intelligence Design},
  year =            {2009},
  pages =           {287--300},
  address =         {Kyoto, Japan},
  month =           Nov,
  abstract =        {In this paper we describe our work in progress on investigating an understudied aspect of social interaction, namely laughter. In social interaction between humans, laughter occurs in a variety of contexts featuring diverse meanings and connotations. Thus, we started to investigate the usefulness of this auditory and behavioral signal applied to social robotics. We first report on results of two surveys conducted to assess the subjectively evaluated naturalness of different types of laughter applied to two humanoid robots. Then we describe the effects of laughter when combined with an android's motion and presented to uninformed participants during playful interaction with another human. In essence, we learned that the social effect of laughter heavily depends on at least the following three factors: first, the situational context, which is not only determined by the task at hand, but also by linguistic content as well as non-verbal expressions; second, the type and quality of laughter synthesis in combination with an artificial laugher's outer appearance; and third, the interaction dynamics, which partly depend on a perceiver's gender, personality, and cultural as well as educational background.},
  file =            {Becker-Asano2009.pdf:Becker-Asano2009.pdf:PDF},
  keywords =        {Affective Computing; Natural Interaction; Laughter; Social Robotics.},
  url =             {http://www.becker-asano.de/SID09_LaughterInSocialRoboticsCameraReady.pdf}
}
Kohei Ogawa, Christoph Bartneck, Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Hiroshi Ishiguro, "Can an android persuade you?", In IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, pp. 516-521, September, 2009.
Abstract: The first robotic copies of real humans have become available. They enable their users to be physically present in multiple locations at the same time. This study investigates what influence the embodiment of an agent has on its persuasiveness and its perceived personality. Is a robotic copy as persuasive as its human counterpart? Does it have the same personality? We performed an experiment in which the embodiment of the agent was the independent variable and the persuasiveness and perceived personality were the dependent measurement. The persuasive agent advertised a Bluetooth headset. The results show that an android is found to be as persuasive as a real human or a video recording of a real human. The personality of the participant had a considerable influence on the measurements. Participants that were more open to new experiences rated the persuasive agent lower on agreeableness and extroversion. They were also more willing to spend money on the advertised product.
BibTeX:
@InProceedings{Ogawa2009,
  author =    {Kohei Ogawa and Christoph Bartneck and Daisuke Sakamoto and Takayuki Kanda and Tetsuo Ono and Hiroshi Ishiguro},
  title =     {Can an android persuade you?},
  booktitle = {{IEEE} International Symposium on Robot and Human Interactive Communication},
  year =      {2009},
  pages =     {516--521},
  address =   {Toyama, Japan},
  month =     Sep,
  abstract =  {The first robotic copies of real humans have become available. They enable their users to be physically present in multiple locations at the same time. This study investigates what influence the embodiment of an agent has on its persuasiveness and its perceived personality. Is a robotic copy as persuasive as its human counterpart? Does it have the same personality? We performed an experiment in which the embodiment of the agent was the independent variable and the persuasiveness and perceived personality were the dependent measurement. The persuasive agent advertised a Bluetooth headset. The results show that an android is found to be as persuasive as a real human or a video recording of a real human. The personality of the participant had a considerable influence on the measurements. Participants that were more open to new experiences rated the persuasive agent lower on agreeableness and extroversion. They were also more willing to spend money on the advertised product.},
  doi =       {10.1109/ROMAN.2009.5326352},
  file =      {Ogawa2009.pdf:Ogawa2009.pdf:PDF},
  issn =      {1944-9445},
  keywords =  {Bluetooth headset;human counterpart;persuasive agent;persuasive android robot;robotic copy;Bluetooth;humanoid robots;},
  url =       {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5326352}
}
Shuichi Nishio, Hiroshi Ishiguro, Miranda Anderson, Norihiro Hagita, "Expressing individuality through teleoperated android: a case study with children", In IASTED International Conference on Human Computer Interaction, ACTA Press, Innsbruck, Austria, pp. 297-302, March, 2008.
Abstract: When utilizing robots as a communication interface medium, the appearance of the robots and the atmosphere or sense of presence they express will be among the key issues in their design. Just as each person gives his or her own individual impression when conversing with others, it might be effective for robots to hold a suitable sense of individuality in order to communicate effectively with humans. In this paper, we report our investigation of the key elements for representing personal presence, which we define as the sense of being with a certain individual, with the eventual aim of implementing them in robots. A case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and to the original person are examined, concentrating especially on the case where the target child was the daughter of the geminoid source. Results showed that the children gradually adapted to conversation with the geminoid, but that the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on the key elements of personal presence are discussed.
BibTeX:
@InProceedings{Nishio2008,
  author =    {Shuichi Nishio and Hiroshi Ishiguro and Miranda Anderson and Norihiro Hagita},
  title =     {Expressing individuality through teleoperated android: a case study with children},
  booktitle = {{IASTED} International Conference on Human Computer Interaction},
  year =      {2008},
  pages =     {297--302},
  address =   {Innsbruck, Austria},
  month =     Mar,
  publisher = {{ACTA} Press},
  abstract =  {When utilizing robots as a communication interface medium, the appearance of the robots and the atmosphere or sense of presence they express will be among the key issues in their design. Just as each person gives his or her own individual impression when conversing with others, it might be effective for robots to hold a suitable sense of individuality in order to communicate effectively with humans. In this paper, we report our investigation of the key elements for representing personal presence, which we define as the sense of being with a certain individual, with the eventual aim of implementing them in robots. A case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and to the original person are examined, concentrating especially on the case where the target child was the daughter of the geminoid source. Results showed that the children gradually adapted to conversation with the geminoid, but that the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on the key elements of personal presence are discussed.},
  file =      {Nishio2008.pdf:Nishio2008.pdf:PDF},
  keywords =  {android; human individuality; human-robot interaction; personal presence},
  url =       {http://dl.acm.org/citation.cfm?id=1722359.1722414}
}
Shuichi Nishio, Hiroshi Ishiguro, Miranda Anderson, Norihiro Hagita, "Representing Personal Presence with a Teleoperated Android: A Case Study with Family", In AAAI Spring Symposium on Emotion, Personality, and Social Behavior, Stanford University, Palo Alto, California, USA, March, 2008.
Abstract: Our purpose is to investigate the key elements for representing personal presence, which we define as the sense of being with a certain individual, and eventually implement them in robots. In this research, a case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and to the original person are examined, concentrating especially on the case where the target child was the daughter of the geminoid source. Results showed that the children gradually adapted to conversation with the geminoid, but that the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on the key elements of personal presence are discussed.
BibTeX:
@InProceedings{Nishio2008a,
  author =          {Shuichi Nishio and Hiroshi Ishiguro and Miranda Anderson and Norihiro Hagita},
  title =           {Representing Personal Presence with a Teleoperated Android: A Case Study with Family},
  booktitle =       {{AAAI} Spring Symposium on Emotion, Personality, and Social Behavior},
  year =            {2008},
  address =         {Stanford University, Palo Alto, California, {USA}},
  month =           Mar,
  abstract =        {Our purpose is to investigate the key elements for representing personal presence, which we define as the sense of being with a certain individual, and eventually implement them in robots. In this research, a case study is reported in which children performed daily conversational tasks with a geminoid, a teleoperated android robot that resembles a living individual. Different responses to the geminoid and to the original person are examined, concentrating especially on the case where the target child was the daughter of the geminoid source. Results showed that the children gradually adapted to conversation with the geminoid, but that the operator's personal presence was not completely represented. Further research topics on the adaptation process to androids and on the key elements of personal presence are discussed.},
  file =            {Nishio2008a.pdf:Nishio2008a.pdf:PDF},
}
Carlos T. Ishi, Judith Haas, Freerk P. Wilbers, Hiroshi Ishiguro, Norihiro Hagita, "Analysis of head motions and speech, and head motion control in an android", In IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, California, USA, pp. 548-553, October, 2007.
Abstract: With the aim of automatically generating head motions during speech utterances, analyses are conducted to verify the relations between head motions and the linguistic and paralinguistic information carried by speech utterances. Motion capture data are recorded during natural dialogue, and the rotation angles are estimated from the head marker data. Analysis results showed that nods frequently occur during speech utterances, not only to express specific dialog acts such as agreement and affirmation, but also as indicators of syntactic or semantic units, appearing at the last syllable of phrases at strong phrase boundaries. Analyses are also conducted on how other head motions, such as shakes and tilts, depend on linguistic, prosodic and voice quality information, and we discuss their potential use in the automatic generation of head motions. The paper also proposes a method for controlling the head actuators of an android based on the rotation angles, and evaluates the mapping from human head motions.
BibTeX:
@InProceedings{Ishi2007,
  author =    {Carlos T. Ishi and Judith Haas and Freerk P. Wilbers and Hiroshi Ishiguro and Norihiro Hagita},
  title =     {Analysis of head motions and speech, and head motion control in an android},
  booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
  year =      {2007},
  pages =     {548--553},
  address =   {San Diego, California, USA},
  month =     Oct,
  abstract =  {With the aim of automatically generating head motions during speech utterances, analyses are conducted to verify the relations between head motions and the linguistic and paralinguistic information carried by speech utterances. Motion capture data are recorded during natural dialogue, and the rotation angles are estimated from the head marker data. Analysis results showed that nods frequently occur during speech utterances, not only to express specific dialog acts such as agreement and affirmation, but also as indicators of syntactic or semantic units, appearing at the last syllable of phrases at strong phrase boundaries. Analyses are also conducted on how other head motions, such as shakes and tilts, depend on linguistic, prosodic and voice quality information, and we discuss their potential use in the automatic generation of head motions. The paper also proposes a method for controlling the head actuators of an android based on the rotation angles, and evaluates the mapping from human head motions.},
  doi =       {10.1109/IROS.2007.4399335},
  file =      {Ishi2007.pdf:Ishi2007.pdf:PDF},
  grant =     {ATR},
  keywords =  {android;head motion control;natural dialogue;paralinguistic information;phrase boundaries;speech analysis;speech utterances;voice quality information;humanoid robots;motion control;speech synthesis;},
  reviewed =  {Y},
  url =       {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4399335}
}
Freerk P. Wilbers, Carlos T. Ishi, Hiroshi Ishiguro, "A Blendshape Model for Mapping Facial Motions to an Android", In IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 542-547, October, 2007.
Abstract: An important part of natural, and therefore effective, communication is facial motion. The android Repliee Q2 should therefore display realistic facial motion. In computer graphics animation, such motion is created by mapping human motion to the animated character. This paper proposes a method for mapping human facial motion to the android. This is done using a linear model of the android, based on blendshape models used in computer graphics. The model is derived from motion capture of the android and therefore also models the android's physical limitations. The paper shows that the blendshape method can be successfully applied to the android. Also, it is shown that a linear model is sufficient for representing android facial motion, which means control can be very straightforward. Measurements of the produced motion identify the physical limitations of the android and allow identifying the main areas for improvement of the android design.
BibTeX:
@InProceedings{Wilbers2007,
  author =    {Freerk P. Wilbers and Carlos T. Ishi and Hiroshi Ishiguro},
  title =     {A Blendshape Model for Mapping Facial Motions to an Android},
  booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems},
  year =      {2007},
  pages =     {542--547},
  month =     Oct,
  abstract =  {An important part of natural, and therefore effective, communication is facial motion. The android Repliee Q2 should therefore display realistic facial motion. In computer graphics animation, such motion is created by mapping human motion to the animated character. This paper proposes a method for mapping human facial motion to the android. This is done using a linear model of the android, based on blendshape models used in computer graphics. The model is derived from motion capture of the android and therefore also models the android's physical limitations. The paper shows that the blendshape method can be successfully applied to the android. Also, it is shown that a linear model is sufficient for representing android facial motion, which means control can be very straightforward. Measurements of the produced motion identify the physical limitations of the android and allow identifying the main areas for improvement of the android design.},
  doi =       {10.1109/IROS.2007.4399394},
  file =      {Wilbers2007.pdf:Wilbers2007.pdf:PDF},
  grant =     {ATR},
  keywords =  {Repliee Q2;android;animated character;blendshape model;computer graphics animation;facial motions mapping;computer animation;face recognition;motion compensation;},
  reviewed =  {Y},
  url =       {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4399394}
}
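The linear blendshape mapping described in the abstract above can be sketched as a least-squares fit: a captured pose is expressed as the neutral pose plus a weighted sum of blendshape deltas, and the weights are clamped to a valid range as a stand-in for the android's physical limits. The data, names, and clamping below are illustrative assumptions, not the paper's actual model or code.

```python
import numpy as np

# Hypothetical toy data: a "face" of 4 tracked 3D points, flattened to 12 values.
rng = np.random.default_rng(0)
neutral = np.zeros(12)                 # neutral pose
basis = rng.normal(size=(12, 3))       # columns are blendshape deltas

def map_to_blendshapes(target, neutral, basis):
    """Solve min_w ||basis @ w - (target - neutral)|| in the least-squares
    sense, then clamp w to [0, 1] (a stand-in for actuator limits)."""
    w, *_ = np.linalg.lstsq(basis, target - neutral, rcond=None)
    return np.clip(w, 0.0, 1.0)

# A pose that is exactly half of the first blendshape should recover w ~ [0.5, 0, 0].
pose = neutral + 0.5 * basis[:, 0]
print(map_to_blendshapes(pose, neutral, basis))
```

Because the model is linear, each captured frame reduces to one small linear solve, which is what makes the control "very straightforward" in the abstract's sense.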
Daisuke Sakamoto, Takayuki Kanda, Tetsuo Ono, Hiroshi Ishiguro, Norihiro Hagita, "Android as a telecommunication medium with a human-like presence", In ACM/IEEE International Conference on Human Robot Interaction, Arlington, Virginia, USA, pp. 193-200, March, 2007.
Abstract: In this research, we realize human telepresence by developing a remote-controlled android system called Geminoid HI-1. Experimental results confirm that participants felt stronger presence of the operator when he talked through the android than when he appeared on a video monitor in a video conference system. In addition, participants talked with the robot naturally and evaluated its human likeness as equal to a man on a video monitor. At this paper's conclusion, we will discuss a remote-control system for telepresence that uses a human-like android robot as a new telecommunication medium.
BibTeX:
@InProceedings{Sakamoto2007,
  author =    {Daisuke Sakamoto and Takayuki Kanda and Tetsuo Ono and Hiroshi Ishiguro and Norihiro Hagita},
  title =     {Android as a telecommunication medium with a human-like presence},
  booktitle = {{ACM/IEEE} International Conference on Human Robot Interaction},
  year =      {2007},
  pages =     {193--200},
  address =   {Arlington, Virginia, {USA}},
  month =     Mar,
  abstract =  {In this research, we realize human telepresence by developing a remote-controlled android system called Geminoid HI-1. Experimental results confirm that participants felt stronger presence of the operator when he talked through the android than when he appeared on a video monitor in a video conference system. In addition, participants talked with the robot naturally and evaluated its human likeness as equal to a man on a video monitor. At this paper's conclusion, we will discuss a remote-control system for telepresence that uses a human-like android robot as a new telecommunication medium.},
  doi =       {10.1145/1228716.1228743},
  grant =     {ATR},
  keywords =  {android science; humanoid robot; telecommunication; telepresence},
  numpages =  {8},
  reviewed =  {Y},
  url =       {http://doi.acm.org/10.1145/1228716.1228743}
}
Non-Reviewed Conference Papers
Hidenobu Sumioka, "Brain and soft body in Human-Robot interaction", In The Human Brain Project Symposium on Building Bodies for Brains & Brains for Bodies, Geneva, Switzerland, June, 2017.
Abstract: This is a one-day symposium in the field of neurorobotics with the goal of improving robot behavior by exploiting ideas from neuroscience and investigating brain function using real physical robots or simulations thereof. Contributions to this workshop will focus on (but are not limited to) the relation between neural systems - artificial or biological - and soft-material robotic platforms, in particular the “control” of such systems by capitalizing on their intrinsic dynamical characteristics like stiffness, viscosity and compliance.
BibTeX:
@InProceedings{Sumioka2017,
  author =    {Hidenobu Sumioka},
  title =     {Brain and soft body in Human-Robot interaction},
  booktitle = {The Human Brain Project Symposium on Building Bodies for Brains \& Brains for Bodies},
  year =      {2017},
  address =   {Geneva, Switzerland},
  month =     Jun,
  abstract =  {This is a one-day symposium in the field of neurorobotics with the goal of improving robot behavior by exploiting ideas from neuroscience and investigating brain function using real physical robots or simulations thereof. Contributions to this workshop will focus on (but are not limited to) the relation between neural systems - artificial or biological - and soft-material robotic platforms, in particular the “control” of such systems by capitalizing on their intrinsic dynamical characteristics like stiffness, viscosity and compliance.},
  day =       {16},
  file =      {Sumioka2017.pdf:pdf/Sumioka2017.pdf:PDF},
}
Jani Even, Carlos T. Ishi, Hiroshi Ishiguro, "Automatic labelling for DNN pitch classification", In Acoustical Society of Japan 2017 Spring Meeting (ASJ2017 Spring), vol. 1-P-32, Meiji University Ikuta Campus, Kanagawa, pp. 595-596, March, 2017.
Abstract: This paper presents a framework for gathering audio data and training a deep neural network for pitch classification. The goal is to obtain a large amount of labeled data to train the network. A throat microphone is used alongside the usual microphones while recording the training set. Since the throat microphone signal is not contaminated by background noise, a conventional pitch estimation algorithm gives a satisfactory pitch estimate. That estimate is used as a label to train the network to classify the pitch directly from the usual microphones. Preliminary experiments show that the proposed automatic labelling produces enough data to train the network.
BibTeX:
@InProceedings{Even2017,
  author =    {Jani Even and Carlos T. Ishi and Hiroshi Ishiguro},
  title =     {Automatic labelling for DNN pitch classification},
  booktitle = {日本音響学会2017年春季研究発表会 (ASJ2017 Spring)},
  year =      {2017},
  volume =    {1-P-32},
  pages =     {595-596},
  address =   {明治大学生田キャンパス, 神奈川},
  month =     Mar,
  abstract =  {This paper presents a framework for gathering audio data and training a deep neural network for pitch classification. The goal is to obtain a large amount of labeled data to train the network. A throat microphone is used alongside the usual microphones while recording the training set. Since the throat microphone signal is not contaminated by background noise, a conventional pitch estimation algorithm gives a satisfactory pitch estimate. That estimate is used as a label to train the network to classify the pitch directly from the usual microphones. Preliminary experiments show that the proposed automatic labelling produces enough data to train the network.},
  day =       {15},
  file =      {Even2017.pdf:pdf/Even2017.pdf:PDF},
  url =       {http://www.asj.gr.jp/annualmeeting/index.html}
}
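The labelling idea in the abstract above, using the noise-free throat-microphone channel to produce pitch labels for the noisy room-microphone channel, can be sketched as follows. The synthetic signals and the simple autocorrelation estimator are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

SR = 16000      # sample rate in Hz
FRAME = 512     # analysis frame length in samples

def autocorr_pitch(frame, sr=SR, fmin=80, fmax=400):
    """Conventional autocorrelation pitch estimate; reliable on the clean
    throat-microphone channel, so its output can serve as a training label."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)     # search lags for fmin..fmax
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# Synthetic stand-ins: a clean 200 Hz "throat mic" tone, and the same tone
# buried in noise as the "room mic" recording of the same frame.
t = np.arange(FRAME) / SR
throat = np.sin(2 * np.pi * 200 * t)
room = throat + np.random.default_rng(1).normal(scale=0.8, size=FRAME)

label_hz = autocorr_pitch(throat)   # automatic label from the clean channel
# (label_hz, features of `room`) would form one training pair for the DNN.
print(round(label_hz, 1))           # close to 200 Hz
```

The point of the scheme is that the label generation needs no human annotation: every recorded frame yields a (noisy features, clean-channel pitch) pair, which is how a large training set is accumulated cheaply.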
Jani Even, Carlos T. Ishi, Hiroshi Ishiguro, "Using utterance timing to generate gaze pattern", In the 46th JSAI SIG on AI Challenges (SIG-Challenge 2016), vol. SIG-Challenge-046-09, Keio University Hiyoshi Campus Raiosha, Kanagawa, pp. 50-55, November, 2016.
Abstract: This paper presents a method for generating the gaze pattern of a robot while it is talking. The goal is to prevent the robot's conversational partner from interrupting the robot at inappropriate moments. The proposed approach has two steps: first, the robot's utterances are split into meaningful parts; then, for each of these parts, the robot makes or avoids eye contact with the partner. The generated gaze pattern indicates to the conversational partner whether or not the robot has finished talking. To measure the efficiency of the approach, we propose to use speech overlap during conversations and average response time. Preliminary results showed that setting a gaze pattern for a robot with a very human-like appearance is not straightforward, as we did not find satisfying parameters.
BibTeX:
@InProceedings{Even2016,
  author =    {Jani Even and Carlos T. Ishi and Hiroshi Ishiguro},
  title =     {Using utterance timing to generate gaze pattern},
  booktitle = {第46回 人工知能学会 AIチャレンジ研究会(SIG-Challenge 2016)},
  year =      {2016},
  volume =    {SIG-Challenge-046-09},
  pages =     {50-55},
  address =   {慶応義塾大学 日吉キャンパス 來往舎, 神奈川},
  month =     Nov,
  abstract =  {This paper presents a method for generating the gaze pattern of a robot while it is talking. The goal is to prevent the robot's conversational partner from interrupting the robot at inappropriate moments. The proposed approach has two steps: first, the robot's utterances are split into meaningful parts; then, for each of these parts, the robot either makes or avoids eye contact with the partner. The generated gaze pattern indicates to the conversational partner whether or not the robot has finished talking. To measure the efficiency of the approach, we propose to use the speech overlap during conversations and the average response time. Preliminary results showed that setting a gaze pattern for a robot with a very human-like appearance is not straightforward, as we did not find satisfying parameters.},
  day =       {9},
  url =       {http://www.osaka-kyoiku.ac.jp/~challeng/SIG-Challenge-046/program.html}
}
Jani Even, Carlos T. Ishi, Hiroshi Ishiguro, "Using Sensor Network for Android gaze control", In 第43回 人工知能学会 AIチャレンジ研究会, 慶応義塾大学 日吉キャンパス 來往舎, 神奈川, November, 2015.
Abstract: This paper presents the approach developed for controlling the gaze of an android robot. A sensor network composed of RGB-D cameras and microphone arrays is in charge of tracking the person interacting with the android and determining the speech activity. The information provided by the sensor network makes it possible for the robot to establish eye contact with the person. A subjective evaluation of the performance was conducted with subjects who interacted with the android robot.
BibTeX:
@InProceedings{Even2015a,
  author =    {Jani Even and Carlos T. Ishi and Hiroshi Ishiguro},
  title =     {Using Sensor Network for Android gaze control},
  booktitle = {第43回 人工知能学会 AIチャレンジ研究会},
  year =      {2015},
  address =   {慶応義塾大学 日吉キャンパス 來往舎, 神奈川},
  month =     Nov,
  abstract =  {This paper presents the approach developed for controlling the gaze of an android robot. A sensor network composed of RGB-D cameras and microphone arrays is in charge of tracking the person interacting with the android and determining the speech activity. The information provided by the sensor network makes it possible for the robot to establish eye contact with the person. A subjective evaluation of the performance was conducted with subjects who interacted with the android robot.},
  file =      {Even2015a.pdf:pdf/Even2015a.pdf:PDF},
}
石井カルロス寿憲, エヴァンイアニ, モラレスサイキルイスヨウイチ, 渡辺敦志, "複数のマイクロホンアレイの連携による音環境知能技術の研究開発", In ICTイノベーションフォーラム2015, 幕張メッセ, 千葉, October, 2015.
Abstract: We report the results of the Ministry of Internal Affairs and Communications SCOPE project "Research and development of sound environment intelligence technology based on the cooperation of multiple microphone arrays," conducted from FY2012 to FY2014. "By coordinating multiple fixed and mobile microphone arrays with groups of laser range finders (LRFs), we extend conventional sound source localization, separation, and classification techniques to develop technology for generating sound environment maps that represent the spatial and acoustic characteristics of sound sources in an environment with a positional accuracy of 20 cm and a temporal resolution of 100 ms. The prior knowledge of the sound environment obtained with this technology is used for noise estimation adapted to the location and time of day within a facility. The technology has a wide range of applications, including visualization of sounds for the hearing impaired, intelligent hearing aids for the elderly, sound zooming, and detection of abnormal sounds for security."
BibTeX:
@InProceedings{石井カルロス寿憲2015c,
  Title                    = {複数のマイクロホンアレイの連携による音環境知能技術の研究開発},
  Author                   = {石井カルロス寿憲 and エヴァンイアニ and モラレスサイキルイスヨウイチ and 渡辺敦志},
  Booktitle                = {ICTイノベーションフォーラム2015},
  Year                     = {2015},

  Address                  = {幕張メッセ, 千葉},
  Month                    = Oct,

  Abstract                 = {We report the results of the Ministry of Internal Affairs and Communications SCOPE project "Research and development of sound environment intelligence technology based on the cooperation of multiple microphone arrays," conducted from FY2012 to FY2014. "By coordinating multiple fixed and mobile microphone arrays with groups of laser range finders (LRFs), we extend conventional sound source localization, separation, and classification techniques to develop technology for generating sound environment maps that represent the spatial and acoustic characteristics of sound sources in an environment with a positional accuracy of 20 cm and a temporal resolution of 100 ms. The prior knowledge of the sound environment obtained with this technology is used for noise estimation adapted to the location and time of day within a facility. The technology has a wide range of applications, including visualization of sounds for the hearing impaired, intelligent hearing aids for the elderly, sound zooming, and detection of abnormal sounds for security."},
  File                     = {石井カルロス寿憲2015c.pdf:pdf/石井カルロス寿憲2015c.pdf:PDF},
  Grant                    = {SCOPE},
  Language                 = {jp},
  Yomi                     = {Carlos Toshinori Ishi and Even Jani and Luis Yoichi Saiki Morales and Atsushi Watanabe}
}
Carlos T. Ishi, Takashi Minato, Hiroshi Ishiguro, "Investigation of motion generation in android robots during laughing speech", In International Workshop on Speech Robotics, Dresden, Germany, September, 2015.
Abstract: In the present work, we focused on motion generation during laughing speech. We analyzed how humans behave during laughing speech, and proposed a method for motion generation in our android robot, based on the main trends from the analysis results. The proposed method for laughter motion generation was evaluated through subjective experiments.
BibTeX:
@InProceedings{Ishi2015c,
  Title                    = {Investigation of motion generation in android robots during laughing speech},
  Author                   = {Carlos T. Ishi and Takashi Minato and Hiroshi Ishiguro},
  Booktitle                = {International Workshop on Speech Robotics},
  Year                     = {2015},

  Address                  = {Dresden, Germany},
  Month                    = Sep,

  Abstract                 = {In the present work, we focused on motion generation during laughing speech. We analyzed how humans behave during laughing speech, and proposed a method for motion generation in our android robot, based on the main trends from the analysis results. The proposed method for laughter motion generation was evaluated through subjective experiments.},
  File                     = {Ishi2015c.pdf:pdf/Ishi2015c.pdf:PDF},
  Grant                    = {ERATO},
  Language                 = {en},
  Url                      = {https://register-tubs.de/interspeech}
}
Jani Even, Jonas Furrer Michael, Carlos Toshinori Ishi, Norihiro Hagita, "In situ automated impulse response measurement with a mobile robot", In 日本音響学会 2015年春季研究発表会, 中央大学後楽園キャンパス(東京都文京区), March, 2015.
Abstract: This paper presents a framework for measuring the impulse responses from different positions for a microphone array using a mobile robot. The automated measurement method makes it possible to estimate the impulse response at a large number of positions. Moreover, this approach enables the impulse responses to be measured in the environment where the system is to be used. The effectiveness of the proposed approach is demonstrated by using it to set a beamforming system in an experiment room.
BibTeX:
@InProceedings{Jani2015,
  Title                    = {In situ automated impulse response measurement with a mobile robot},
  Author                   = {Jani Even and Jonas Furrer Michael and Carlos Toshinori Ishi and Norihiro Hagita},
  Booktitle                = {日本音響学会 2015年春季研究発表会},
  Year                     = {2015},

  Address                  = {中央大学後楽園キャンパス(東京都文京区)},
  Month                    = Mar,

  Abstract                 = {This paper presents a framework for measuring the impulse responses from different positions for a microphone array using a mobile robot. The automated measurement method makes it possible to estimate the impulse response at a large number of positions. Moreover, this approach enables the impulse responses to be measured in the environment where the system is to be used. The effectiveness of the proposed approach is demonstrated by using it to set a beamforming system in an experiment room.},
  File                     = {Even2015.pdf:pdf/Even2015.pdf:PDF},
  Grant                    = {ERATO},
  Language                 = {en}
}
劉超然, 石井カルロス寿憲, 石黒浩, 萩田紀博, "臨場感の伝わる遠隔操作システムのデザイン ~マイクロホンアレイ処理を用いた音環境の再構築~", In 第41回 人工知能学会 AIチャレンジ研究会, 慶應義塾大学日吉キャンパス 来住舎(東京), pp. 26-32, November, 2014.
Abstract: This paper proposes a system that localizes and separates the sounds in the environment surrounding a remote robot using microphone array processing and renders them at virtual positions.
BibTeX:
@InProceedings{劉超然2014,
  author =    {劉超然 and 石井カルロス寿憲 and 石黒浩 and 萩田紀博},
  title =     {臨場感の伝わる遠隔操作システムのデザイン ~マイクロホンアレイ処理を用いた音環境の再構築~},
  booktitle = {第41回 人工知能学会 AIチャレンジ研究会},
  year =      {2014},
  pages =     {26-32},
  address =   {慶應義塾大学日吉キャンパス 来住舎(東京)},
  month =     Nov,
  abstract =  {This paper proposes a system that localizes and separates the sounds in the environment surrounding a remote robot using microphone array processing and renders them at virtual positions.},
  file =      {劉超然2014.pdf:pdf/劉超然2014.pdf:PDF},
}
Ryuji Yamazaki, Marco Nørskov, "Self-alteration in HRI", Poster presentation at International Conference: Going Beyond the Laboratory - Ethical and Societal Challenges for Robotics, Hanse Wissenschaftskolleg (HWK) - Institute for Advanced Study, Delmenhorst, Germany, February, 2014.
BibTeX:
@InProceedings{Yamazaki2014,
  author =    {Ryuji Yamazaki and Marco N{\o}rskov},
  title =     {Self-alteration in HRI},
  booktitle = {International Conference: Going Beyond the Laboratory - Ethical and Societal Challenges for Robotics},
  year =      {2014},
  address =   {Hanse Wissenschaftskolleg (HWK) - Institute for Advanced Study, Delmenhorst, Germany},
  month =     Feb,
  day =       {13-15},
  file =      {Yamazaki2014.pdf:pdf/Yamazaki2014.pdf:PDF},
}
Ryuji Yamazaki, Shuichi Nishio, Kaiko Kuwamura, "Identity Construction of the Hybrid of Robot and Human", In 22nd IEEE International Symposium on Robot and Human Interactive Communication, Workshop on Enhancement/Training of Social Robotics Teleoperation and its Applications, Gyeongju, Korea, August, 2013.
BibTeX:
@InProceedings{Yamazaki2013,
  Title                    = {Identity Construction of the Hybrid of Robot and Human},
  Author                   = {Ryuji Yamazaki and Shuichi Nishio and Kaiko Kuwamura},
  Booktitle                = {22nd IEEE International Symposium on Robot and Human Interactive Communication, Workshop on Enhancement/Training of Social Robotics Teleoperation and its Applications},
  Year                     = {2013},

  Address                  = {Gyeongju, Korea},
  Month                    = Aug,

  Day                      = {26-29},
  Grant                    = {CREST},
  Language                 = {en}
}
Astrid M. von der Pütten, Christian Becker-Asano, Kohei Ogawa, Shuichi Nishio, Hiroshi Ishiguro, "Exploration and Analysis of People's Nonverbal Behavior Towards an Android", In the Annual Meeting of the International Communication Association, Phoenix, USA, May, 2012.
BibTeX:
@InProceedings{Putten2012,
  author =    {Astrid M. von der P\"{u}tten and Christian Becker-Asano and Kohei Ogawa and Shuichi Nishio and Hiroshi Ishiguro},
  title =     {Exploration and Analysis of People's Nonverbal Behavior Towards an Android},
  booktitle = {the Annual Meeting of the International Communication Association},
  year =      {2012},
  address =   {Phoenix, USA},
  month =     May,
}
Carlos T. Ishi, Chaoran Liu, Hiroshi Ishiguro, Norihiro Hagita, "Tele-operating the lip motion of humanoid robots from the operator's voice", In 第29回日本ロボット学会学術講演会, 芝浦工業大学豊洲キャンパス, 東京, pp. C1J3-6, September, 2011.
BibTeX:
@InProceedings{Ishi2011,
  author =          {Carlos T. Ishi and Chaoran Liu and Hiroshi Ishiguro and Norihiro Hagita},
  title =           {Tele-operating the lip motion of humanoid robots from the operator's voice},
  booktitle =       {第29回日本ロボット学会学術講演会},
  year =            {2011},
  pages =           {C1J3-6},
  address =         {芝浦工業大学豊洲キャンパス, 東京},
  month =           Sep,
  day =             {7-9},
  file =            {Ishi2011.pdf:pdf/Ishi2011.pdf:PDF},
}
Astrid M. von der Pütten, Nicole C. Krämer, Christian Becker-Asano, Hiroshi Ishiguro, "An android in the field. How people react towards Geminoid HI-1 in a real world scenario", In the 7th Conference of the Media Psychology Division of the German Psychological Society, Jacobs University, Bremen, Germany, August, 2011.
BibTeX:
@InProceedings{Putten2011a,
  author =    {Astrid M. von der P\"{u}tten and Nicole C. Kr\"{a}mer and Christian Becker-Asano and Hiroshi Ishiguro},
  title =     {An android in the field. How people react towards Geminoid HI-1 in a real world scenario},
  booktitle = {the 7th Conference of the Media Psychology Division of the German Psychological Society},
  year =      {2011},
  address =   {Jacobs University, Bremen, Germany},
  month =     Aug,
  day =       {10-11},
}
Panikos Heracleous, Norihiro Hagita, "A visual mode for communication in the deaf society", In Spring Meeting of Acoustical Society of Japan, Waseda University, Tokyo, Japan, pp. 57-60, March, 2011.
Abstract: In this article, automatic recognition of Cued Speech in French based on hidden Markov models (HMMs) is presented. Cued Speech is a visual mode of communication that uses hand shapes in different positions which, in combination with the lip patterns of speech, make all the sounds of spoken language clearly understandable to deaf and hearing-impaired people. The aim of Cued Speech is to overcome the problems of lip-reading and thus enable deaf children and adults to understand full spoken language. In this study, the lip shape component is fused with the hand component using multi-stream HMM decision fusion to realize Cued Speech recognition, and continuous phoneme recognition experiments were conducted using data from a normal-hearing cuer and a deaf cuer. The phoneme correct rate obtained was 87.3% for the normal-hearing cuer and 84.3% for the deaf cuer. The study also includes a description of Cued Speech in Japanese.
BibTeX:
@InProceedings{Heracleous2011d,
  Title                    = {A visual mode for communication in the deaf society},
  Author                   = {Panikos Heracleous and Norihiro Hagita},
  Booktitle                = {Spring Meeting of Acoustical Society of Japan},
  Year                     = {2011},

  Address                  = {Waseda University, Tokyo, Japan},
  Month                    = Mar,
  Pages                    = {57--60},
  Series                   = {2-5-6},

  Abstract                 = {In this article, automatic recognition of Cued Speech in French based on hidden Markov models ({HMM}s) is presented. Cued Speech is a visual mode of communication that uses hand shapes in different positions which, in combination with the lip patterns of speech, make all the sounds of spoken language clearly understandable to deaf and hearing-impaired people. The aim of Cued Speech is to overcome the problems of lip-reading and thus enable deaf children and adults to understand full spoken language. In this study, the lip shape component is fused with the hand component using multi-stream HMM decision fusion to realize Cued Speech recognition, and continuous phoneme recognition experiments were conducted using data from a normal-hearing cuer and a deaf cuer. The phoneme correct rate obtained was 87.3% for the normal-hearing cuer and 84.3% for the deaf cuer. The study also includes a description of Cued Speech in Japanese.},
  Acknowledgement          = {{JST/CREST}},
  File                     = {Heracleous2011d.pdf:Heracleous2011d.pdf:PDF},
  Grant                    = {CREST}
}