Colloquium

The R-GIRO "AI & Semiotics" Colloquium is a monthly colloquium where professors, researchers,
and Ph.D. students come together for frank discussion.
The main participants are the professors, researchers, and Ph.D. students in this project,
but people from outside the project are also welcome.

Each talk will basically consist of a 30-minute presentation followed by a 30-minute discussion.
Let's enjoy scientific discussions.

Schedule for 2017 to 2018

Month      Organizer        Details
June       Prof. Taniguchi  2017/6/16  "Machine learning for finding latent variables by a robot"
November   Prof. Nishiura   2017/11/2  "Voice-enabled interface for interactive robots"
December   Prof. Fukao      2017/12/7
January    Prof. Shimada
March      Prof. Wada

to be continued

(2017/06/16) Colloquium on "Machine learning for finding latent variables by a robot"

posted 5 Jun 2017, 00:43 by Tadahiro Taniguchi   [ updated 10 Sep 2017, 23:26 by Machiko Maeda ]

Organizer: Professor Tadahiro Taniguchi

I am very happy to host a colloquium entitled "Machine learning for finding latent variables by a robot."
This is the first colloquium of the R-GIRO project.
Anyone interested in the talks is welcome to attend.

Date & time: 18:00-20:00, Friday, 16 June 2017
Place: CC206, Creation Core, BKC, Ritsumeikan University

Language: English (maybe partially Japanese) 

[1st talk]
Title: A Generative Framework for Multimodal Learning of Spatial Concepts and Object Categories: An Unsupervised Part-of-Speech Tagging and 3D Visual Perception Based Approach (tentative)

Speaker: Amir Aly (Senior researcher, Ritsumeikan University)

Abstract: (tentative)
Future human-robot collaboration employs language in instructing a robot about specific tasks to perform in its surroundings. This requires the robot to be able to associate spatial knowledge with language so as to understand the details of an assigned task and behave appropriately in the context of interaction. In this paper, we propose a probabilistic framework for learning the meaning of spatial language concepts (i.e., spatial prepositions) and object categories based on visual cues representing spatial layouts and geometric characteristics of objects in a tabletop scene. The model investigates unsupervised Part-of-Speech (POS) tagging through a Hidden Markov Model (HMM) that infers the hidden tags corresponding to words. Spatial configurations and geometric characteristics of objects on the tabletop are described through 3D point cloud information that encodes the spatial semantics and categories of referents and landmarks in the environment. The proposed model is evaluated through human user interaction with the Toyota HSR robot, where the obtained results show the significant effect of the model in enabling the robot to successfully engage in spatial interaction with the user.
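The unsupervised POS-tagging component in the abstract can be illustrated with a toy HMM. The sketch below is not the paper's model: the tag set, probabilities, and example sentence are invented for illustration, and the parameters are hand-set rather than learned without supervision (as they would be in the actual framework, e.g. via Baum-Welch); it only shows how Viterbi decoding recovers hidden tags for the words of a spatial instruction.

```python
import math

# Toy tag set and hand-set parameters; in the actual framework these
# would be learned without supervision (e.g. via Baum-Welch or sampling).
tags = ["VERB", "DET", "NOUN", "PREP"]
start = {"VERB": 0.7, "DET": 0.1, "NOUN": 0.1, "PREP": 0.1}
trans = {
    "VERB": {"VERB": 0.05, "DET": 0.60, "NOUN": 0.20, "PREP": 0.15},
    "DET":  {"VERB": 0.05, "DET": 0.05, "NOUN": 0.85, "PREP": 0.05},
    "NOUN": {"VERB": 0.10, "DET": 0.20, "NOUN": 0.10, "PREP": 0.60},
    "PREP": {"VERB": 0.05, "DET": 0.60, "NOUN": 0.30, "PREP": 0.05},
}
emit = {  # unseen words get a small floor probability
    "VERB": {"put": 0.9},
    "DET": {"the": 0.9},
    "NOUN": {"cup": 0.45, "box": 0.45},
    "PREP": {"near": 0.9},
}
FLOOR = 1e-6

def viterbi(words):
    """Most probable hidden tag sequence for `words` under the toy HMM."""
    V = [{t: math.log(start[t]) + math.log(emit[t].get(words[0], FLOOR))
          for t in tags}]
    back = []
    for w in words[1:]:
        row, ptr = {}, {}
        for t in tags:
            prev = max(tags, key=lambda p: V[-1][p] + math.log(trans[p][t]))
            row[t] = (V[-1][prev] + math.log(trans[prev][t])
                      + math.log(emit[t].get(w, FLOOR)))
            ptr[t] = prev
        V.append(row)
        back.append(ptr)
    path = [max(tags, key=lambda t: V[-1][t])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

print(viterbi("put the cup near the box".split()))
# → ['VERB', 'DET', 'NOUN', 'PREP', 'DET', 'NOUN']
```

In the unsupervised setting the tags carry no predefined labels; they are latent states whose transition and emission distributions are inferred from the word sequences themselves.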


[2nd talk]
Title: Hidden Feature Extraction of Driving Behavior via Deep Learning and Its Applications

Speaker: Hailong Liu (Ph.D. candidate / JSPS Research Fellow (DC2), Ritsumeikan University)



Abstract:
In this work, we propose a defect-repairable feature extraction method that extracts shared low-dimensional time-series data of driving behavior using a deep sparse autoencoder (DSAE), which can also reduce the negative effects of defective sensor time-series data. In the first experiment, we show that the DSAE can extract low-dimensional time-series data that are shared across different multi-dimensional time-series data with different degrees of redundancy. The extracted low-dimensional time-series data can be used for various applications, e.g., driving behavior visualization. The result of the second experiment illustrates that the DSAE is an effective method for repairing defective sensor time-series data during feature extraction via the back-propagation method. Finally, the third experiment demonstrates that the negative effect of defects on a driving behavior segmentation task is reduced by using the DSAE, which outperformed other comparative methods. In summary, this study shows that the DSAE achieves high feature-extraction performance for driving behavior analysis even when the sensor time-series data include defects. I will also talk about a further challenge: how a self-driving deep learning network decides driving behavior.
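As a rough illustration of the idea, the sketch below trains a small one-hidden-layer sparse autoencoder in NumPy on synthetic "sensor" channels that share one latent signal, with random entries zeroed out as a stand-in for sensor defects. The data, network size, and training details are all invented for the example (the paper's DSAE is deeper and has its own defect-repair procedure); it only shows the general mechanism: a sparse bottleneck recovers a shared low-dimensional code from redundant, partially defective inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for multi-dimensional driving time-series:
# one latent signal redundantly embedded in 6 noisy "sensor" channels.
T = 200
latent = np.sin(np.linspace(0, 6 * np.pi, T))
mix = rng.normal(size=(1, 6))
X = latent[:, None] @ mix + 0.05 * rng.normal(size=(T, 6))

# Simulate sensor defects by zeroing out roughly 10% of the entries.
X_def = X.copy()
X_def[rng.random(X.shape) < 0.1] = 0.0

# One-hidden-layer sparse autoencoder (the actual DSAE stacks several
# layers; a single layer keeps the sketch short).
n_in, n_hid = 6, 2
W1 = 0.1 * rng.normal(size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = 0.1 * rng.normal(size=(n_hid, n_in)); b2 = np.zeros(n_in)
lam, lr = 1e-4, 0.05  # L1 sparsity weight and learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

loss0 = float(np.mean((sigmoid(X_def @ W1 + b1) @ W2 + b2 - X) ** 2))

for _ in range(2000):
    H = sigmoid(X_def @ W1 + b1)       # encode the defective input
    Y = H @ W2 + b2                    # linear decoder
    err = Y - X                        # reconstruct the clean signal
    gY = 2.0 * err / T                 # gradient of the mean squared error
    gW2, gb2 = H.T @ gY, gY.sum(0)
    gH = gY @ W2.T + lam * np.sign(H)  # add L1 sparsity on the code
    gZ = gH * H * (1.0 - H)            # back through the sigmoid
    gW1, gb1 = X_def.T @ gZ, gZ.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

code = sigmoid(X_def @ W1 + b1)        # shared low-dimensional time-series
recon = code @ W2 + b2
mse = float(np.mean((recon - X) ** 2))
print(f"MSE before training: {loss0:.4f}, after: {mse:.4f}")
```

Here the 2-dimensional `code` plays the role of the shared low-dimensional time-series that the talk uses for visualization and segmentation, and training against the clean target while encoding the defective input is a simplified stand-in for the paper's defect repair.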
