{"id":151,"date":"2022-02-19T17:49:14","date_gmt":"2022-02-19T08:49:14","guid":{"rendered":"https:\/\/slp.cs.tut.ac.jp\/?page_id=151"},"modified":"2025-06-09T15:35:40","modified_gmt":"2025-06-09T06:35:40","slug":"home","status":"publish","type":"page","link":"https:\/\/slp.cs.tut.ac.jp\/","title":{"rendered":"HOME"},"content":{"rendered":"\n<div class=\"wp-block-group alignwide is-layout-constrained wp-block-group-is-layout-constrained\">\n<h2 class=\"wp-block-heading alignwide ribbon\" id=\"research\">RESEARCH<\/h2>\n\n\n\n<div class=\"wp-block-group alignwide is-nowrap is-layout-flex wp-container-core-group-is-layout-6c531013 wp-block-group-is-layout-flex\">\n<figure class=\"wp-block-image size-medium\"><img loading=\"lazy\" decoding=\"async\" width=\"300\" height=\"225\" src=\"https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/Saya\u5bfe\u8a71-300x225.jpg\" alt=\"\" class=\"wp-image-140\" srcset=\"https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/Saya\u5bfe\u8a71-300x225.jpg 300w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/Saya\u5bfe\u8a71-1024x767.jpg 1024w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/Saya\u5bfe\u8a71-768x575.jpg 768w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/Saya\u5bfe\u8a71.jpg 1442w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/figure>\n\n\n\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<h3 class=\"wp-block-heading alignwide has-foreground-color has-text-color\" id=\"\u97f3\u58f0\u5bfe\u8a71\u30a4\u30f3\u30bf\u30d5\u30a7\u30fc\u30b9-1\">Spoken Dialog&nbsp;Interface<\/h3>\n\n\n\n<p class=\"has-foreground-color has-text-color\">One of the challenges of spoken dialogue systems is to prevent the user from feeling unnatural. Therefore, we build a spoken dialogue system that takes into account the timing of the other party and the pitch of the voice.<br>On the other hand, we also consider the semantic content of the dialogue. 
In this way, we are building dialogue systems that are robust and respond naturally.<\/p>\n<\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-group alignwide is-nowrap is-layout-flex wp-container-core-group-is-layout-6c531013 wp-block-group-is-layout-flex\">\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<h3 class=\"wp-block-heading has-foreground-color has-text-color\" id=\"speech-recognition\">Speech Recognition<\/h3>\n\n\n\n<p class=\"has-foreground-color has-text-color\">We are working to improve speech recognition performance by refining acoustic models (models of the human voice) based on HMMs and DNNs. We are also working on improving statistical language models.<\/p>\n<\/div>\n\n\n\n<figure class=\"wp-block-image size-medium\"><img loading=\"lazy\" decoding=\"async\" width=\"300\" height=\"259\" src=\"https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/onsei-300x259.jpg\" alt=\"\" class=\"wp-image-112\" srcset=\"https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/onsei-300x259.jpg 300w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/onsei-1024x886.jpg 1024w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/onsei-768x664.jpg 768w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/onsei-1536x1328.jpg 1536w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/onsei-2048x1771.jpg 2048w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/onsei-1200x1038.jpg 1200w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/onsei-1980x1712.jpg 1980w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-group alignwide is-nowrap is-layout-flex wp-container-core-group-is-layout-6c531013 wp-block-group-is-layout-flex\">\n<figure class=\"wp-block-image size-medium\"><img loading=\"lazy\" decoding=\"async\" width=\"300\" height=\"169\" 
src=\"https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/\u81ea\u52d5\u904b\u8ee2short_Moment-2-300x169.jpg\" alt=\"\" class=\"wp-image-122\" srcset=\"https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/\u81ea\u52d5\u904b\u8ee2short_Moment-2-300x169.jpg 300w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/\u81ea\u52d5\u904b\u8ee2short_Moment-2-1024x576.jpg 1024w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/\u81ea\u52d5\u904b\u8ee2short_Moment-2-768x432.jpg 768w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/\u81ea\u52d5\u904b\u8ee2short_Moment-2-1536x864.jpg 1536w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/\u81ea\u52d5\u904b\u8ee2short_Moment-2-1200x675.jpg 1200w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2022\/02\/\u81ea\u52d5\u904b\u8ee2short_Moment-2.jpg 1920w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/figure>\n\n\n\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<h3 class=\"wp-block-heading has-foreground-color has-text-color\" id=\"\u30de\u30eb\u30c1\u30e2\u30fc\u30c0\u30eb\u30a4\u30f3\u30bf\u30fc\u30d5\u30a7\u30fc\u30b9-1\">Multimodal Interface<\/h3>\n\n\n\n<p class=\"has-foreground-color has-text-color\">When interacting with speech, people often use pointing and eye contact to convey information. 
We aim to realize such natural, human-like interaction between humans and machines.<\/p>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-group alignwide is-layout-constrained wp-block-group-is-layout-constrained\">\n<h2 class=\"wp-block-heading alignwide ribbon\">MEMBER<\/h2>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2025\/06\/2025zentai1-1-1024x576.jpg\" alt=\"\" class=\"wp-image-2013\" srcset=\"https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2025\/06\/2025zentai1-1-1024x576.jpg 1024w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2025\/06\/2025zentai1-1-300x169.jpg 300w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2025\/06\/2025zentai1-1-768x432.jpg 768w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2025\/06\/2025zentai1-1-1536x864.jpg 1536w, https:\/\/slp.cs.tut.ac.jp\/wp-content\/uploads\/2025\/06\/2025zentai1-1.jpg 1600w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">FY2025<\/figcaption><\/figure>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n<\/div>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-group alignwide is-layout-constrained wp-block-group-is-layout-constrained\">\n<h2 class=\"wp-block-heading alignwide ribbon\" id=\"news\">NEWS<\/h2>\n\n\n<ul class=\"wp-block-latest-posts__list has-dates wp-block-latest-posts\"><li><a class=\"wp-block-latest-posts__post-title\" href=\"https:\/\/slp.cs.tut.ac.jp\/en\/2025\/06\/06\/%e5%85%a8%e5%93%a1%e5%86%99%e7%9c%9f%e3%82%92%e6%92%ae%e5%bd%b1%e3%81%97%e3%81%be%e3%81%97%e3%81%9f\/\">We took a group photo of all members.<\/a><time datetime=\"2025-06-06T15:40:00+09:00\" 
class=\"wp-block-latest-posts__post-date\">2025\u5e746\u67086\u65e5<\/time><\/li>\n<li><a class=\"wp-block-latest-posts__post-title\" href=\"https:\/\/slp.cs.tut.ac.jp\/en\/2025\/05\/16\/d2%e9%ab%98%e5%9f%8e%e5%b7%bd%e6%88%90%e3%81%95%e3%82%93%e3%83%bbd1%e4%b8%89%e6%b2%b3%e5%a4%9a%e8%81%9e%e3%81%95%e3%82%93%e3%83%bbm2%e5%b1%b1%e4%b8%ad%e7%a8%9c%e6%96%97%e3%81%95%e3%82%93%e3%81%8c\/\">D2 Tatsunari Takagi, D1 Tamon Mikawa, and M2 Rikuto Yamanaka were interviewed by FM Aichi radio with Prof. Kitaoka.<\/a><time datetime=\"2025-05-16T11:00:37+09:00\" class=\"wp-block-latest-posts__post-date\">2025\u5e745\u670816\u65e5<\/time><\/li>\n<li><a class=\"wp-block-latest-posts__post-title\" href=\"https:\/\/slp.cs.tut.ac.jp\/en\/2025\/05\/02\/%e6%96%b0%e5%85%a5%e7%94%9f%e6%ad%93%e8%bf%8e%e4%bc%9a%e3%82%92%e8%a1%8c%e3%81%84%e3%81%be%e3%81%97%e3%81%9f\/\">A welcome party for new students was held.<\/a><time datetime=\"2025-05-02T11:00:02+09:00\" class=\"wp-block-latest-posts__post-date\">2025\u5e745\u67082\u65e5<\/time><\/li>\n<li><a class=\"wp-block-latest-posts__post-title\" href=\"https:\/\/slp.cs.tut.ac.jp\/en\/2025\/04\/04\/%e6%96%b0%e3%83%a1%e3%83%b3%e3%83%90%e3%83%bc%e3%81%a8%e9%a1%94%e5%90%88%e3%82%8f%e3%81%9b%e4%bc%9a%e3%82%92%e8%a1%8c%e3%81%84%e3%81%be%e3%81%97%e3%81%9f\/\">We had a meeting with new members.<\/a><time datetime=\"2025-04-04T11:00:11+09:00\" class=\"wp-block-latest-posts__post-date\">2025\u5e744\u67084\u65e5<\/time><\/li>\n<li><a class=\"wp-block-latest-posts__post-title\" href=\"https:\/\/slp.cs.tut.ac.jp\/en\/2025\/04\/01\/%e8%a5%bf%e6%9d%91%e8%89%af%e5%a4%aa%e5%87%86%e6%95%99%e6%8e%88%e3%81%8c%e7%9d%80%e4%bb%bb%e3%81%95%e3%82%8c%e3%81%be%e3%81%97%e3%81%9f%e3%80%82%e5%8c%97%e5%b2%a1%e7%a0%94%e3%81%af%e8%a5%bf%e6%9d%91\/\">Associate Professor Ryota Nishimura has been appointed. 
Kitaoka Lab will collaborate with Nishimura Lab.<\/a><time datetime=\"2025-04-01T13:30:30+09:00\" class=\"wp-block-latest-posts__post-date\">2025\u5e744\u67081\u65e5<\/time><\/li>\n<\/ul><\/div>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-group alignwide is-layout-constrained wp-block-group-is-layout-constrained\">\n<h2 class=\"wp-block-heading alignwide ribbon\" id=\"access\">ACCESS<\/h2>\n\n\n\n<div class=\"wp-block-group alignwide is-nowrap is-layout-flex wp-container-core-group-is-layout-6c531013 wp-block-group-is-layout-flex\">\n<p>Room F303, Research F Building, <br>Department of Information and Intelligent Engineering,<br>Toyohashi University of Technology (Kitaoka)<br>1-1 Hibarigaoka, Tenpaku-cho, Toyohashi, Aichi 441-8580, Japan<\/p>\n\n\n\n<iframe src=\"https:\/\/www.google.com\/maps\/embed?pb=!1m14!1m8!1m3!1d3096.072705646467!2d137.40776708464236!3d34.70077614996416!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x6004d47dd04c0149%3A0xe9e79241bf839526!2sF%20Building%2C%20Hibarigaoka%20Tenpakuch%C5%8D%2C%20Toyohashi%2C%20Aichi%20441-8122!5e0!3m2!1sen!2sjp!4v1690851630915!5m2!1sen!2sjp\" width=\"600\" height=\"450\" style=\"border:0;\" allowfullscreen=\"\" loading=\"lazy\" referrerpolicy=\"no-referrer-when-downgrade\"><\/iframe>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>RESEARCH Spoken Dialog&nbsp;Interface A key challenge for spoken dialogue systems is to keep the interaction from feeling unnatural to the user. We therefore build spoken dialogue systems that take into account the interlocutor's speaking timing and voice pitch. At the same time, we also consider the semantic content of the dialogue. 
In [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"wp-custom-template-en%e3%83%95%e3%83%ad%e3%83%b3%e3%83%88%e3%83%9a%e3%83%bc%e3%82%b8","meta":{"_locale":"en_US","_original_post":"https:\/\/slp.cs.tut.ac.jp\/?page_id=45","footnotes":""},"class_list":["post-151","page","type-page","status-publish","hentry","en-US"],"_links":{"self":[{"href":"https:\/\/slp.cs.tut.ac.jp\/wp-json\/wp\/v2\/pages\/151","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/slp.cs.tut.ac.jp\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/slp.cs.tut.ac.jp\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/slp.cs.tut.ac.jp\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/slp.cs.tut.ac.jp\/wp-json\/wp\/v2\/comments?post=151"}],"version-history":[{"count":35,"href":"https:\/\/slp.cs.tut.ac.jp\/wp-json\/wp\/v2\/pages\/151\/revisions"}],"predecessor-version":[{"id":2041,"href":"https:\/\/slp.cs.tut.ac.jp\/wp-json\/wp\/v2\/pages\/151\/revisions\/2041"}],"wp:attachment":[{"href":"https:\/\/slp.cs.tut.ac.jp\/wp-json\/wp\/v2\/media?parent=151"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}