Name: YUYA CHIBA
Affiliation: Tohoku Univ, Grad Sch Engn, Sendai, Miyagi 980, Japan
Papers: 20
Collaborators: 15
Citations: 8
PageRank: 6.96
Referers: 8
Referees: 207
References: 91
Title | Citations | PageRank | Year
Successive Japanese Lyrics Generation Based on Encoder-Decoder Model. | 0 | 0.34 | 2020
Incremental Response Generation Using Prefix-to-Prefix Model for Dialogue System. | 0 | 0.34 | 2020
Spoken Term Detection Based on Acoustic Models Trained in Multiple Languages for Zero-Resource Language. | 0 | 0.34 | 2020
Multi-Stream Attention-Based BLSTM with Feature Segmentation for Speech Emotion Recognition. | 0 | 0.34 | 2020
Automatic assessment of English proficiency for Japanese learners without reference sentences based on deep neural network acoustic models. | 0 | 0.34 | 2020
Construction and Analysis of a Multimodal Chat-talk Corpus for Dialog Systems Considering Interpersonal Closeness. | 0 | 0.34 | 2020
Analysis and Estimation of Sentence Speakability for English Pronunciation Evaluation. | 0 | 0.34 | 2020
Filler Prediction Based on Bidirectional LSTM for Generation of Natural Response of Spoken Dialog. | 0 | 0.34 | 2020
Improving human scoring of prosody using parametric speech synthesis. | 1 | 0.37 | 2019
An Analysis of the Effect of Emotional Speech Synthesis on Non-Task-Oriented Dialogue System. | 0 | 0.34 | 2018
Improving User Impression in Spoken Dialog System with Gradual Speech Form Control. | 0 | 0.34 | 2018
Effect of Mutual Self-Disclosure in Spoken Dialog System on User Impression. | 0 | 0.34 | 2018
Analysis of efficient multimodal features for estimating user's willingness to talk: Comparison of human-machine and human-human dialog. | 0 | 0.34 | 2017
Collection of Example Sentences for Non-task-Oriented Dialog Using a Spoken Dialog System and Comparison with Hand-Crafted DB. | 0 | 0.34 | 2017
Cluster-based approach to discriminate the user's state whether a user is embarrassed or thinking to an answer to a prompt. | 0 | 0.34 | 2017
Estimation of User's Willingness to Talk About the Topic: Analysis of Interviews Between Humans. | 0 | 0.34 | 2016
User Modeling by Using Bag-of-Behaviors for Building a Dialog System Sensitive to the Interlocutor's Internal State | 1 | 0.35 | 2014
Estimation of User's State during a Dialog Turn with Sequential Multi-modal Features. | 2 | 0.37 | 2013
Estimation of User's Internal State before the User's First Utterance Using Acoustic Features and Face Orientation | 1 | 0.40 | 2012
Estimating a user's internal state before the first input utterance | 3 | 0.40 | 2012