Colloquium

R-GIRO "AI & Semiotics" Colloquium is a monthly colloquium where professors, researchers
and Ph.D. students come together and have a frank discussion.
Main participants are professors, researchers, and Ph.D. students in this project.
But, people from outside of this project is also welcome.

Each talk basically consists of a 30-minute presentation followed by a 30-minute discussion.
Let's enjoy scientific discussions.

Schedule for 2017 to 2018

Month      Organizer        Details
June       Prof. Taniguchi  2017/6/16   "Machine learning for finding latent variables by a robot"
November   Prof. Nishiura   2017/11/2   "Voice-enabled interface for interactive robots"
December   Prof. Fukao      2017/12/7   "Driving Act Analysis"
February   Prof. Shimada    2018/2/28   "Intelligent Surveillance System using CV technology"
March      Prof. Wada       2018/3/12   "Human Driver and Automated Driving"

(to be continued)

(2018/3/12) Colloquium on "Human Driver and Automated Driving"

posted 12 Feb 2018, 19:03 by 前田真知子   [ updated 15 Feb 2018, 19:35 ]

Organizer: Professor Takahiro Wada

We are very happy to have a colloquium entitled "Human Driver and Automated Driving".

This is the fifth colloquium of the R-GIRO project.
Anybody who is interested in the talk can attend the lecture. 

Date & time: 16:00-18:00, Monday, 12 March 2018
Place: CC206, Creation Core, BKC, Ritsumeikan University
(Campus map) 
http://en.ritsumei.ac.jp/file.jsp?id=246775&f=.pdf
(No.16 is "Creation Core")

Language: Japanese (maybe partially English) 

[1st talk]
Title: Human Perception in Driving Behavior: Automated Driving and Merging Judgment
Speaker: Dr. Kohei Sonoda
Human Robotics Lab., Ritsumeikan University

[2nd talk] 
Title: Jam reduction in developing countries 
~Prospect of emergent crowd-optimization with formation-control~
Speaker: Dr. Akihito Nagahama
The University of Tokyo

(2018/2/28) Colloquium on "Intelligent Surveillance System using CV technology"

posted 14 Jan 2018, 22:02 by 前田真知子

Organizer: Professor Nobutaka Shimada

We are very happy to have a colloquium entitled "Intelligent Surveillance System using CV technology".

This is the fourth colloquium of the R-GIRO project.
Anybody who is interested in the talk can attend the lecture. 

Date & time: 10:40-12:10, Wednesday, 28 February 2018
Place: CC206, Creation Core, BKC, Ritsumeikan University
(Campus map) 
http://en.ritsumei.ac.jp/file.jsp?id=246775&f=.pdf
(No.16 is "Creation Core")

Language: English (maybe partially Japanese) 

[1st talk]
Title:  Intelligent Surveillance System using CV technology
Speaker: Prof. Kang-Hyun Jo  
University of Ulsan, Korea

Abstract:
In this lecture, I will briefly introduce myself and the current computer-vision research topics at ISLab (University of Ulsan).
I will then focus on the Intelligent Surveillance System (ISS) we have developed, spotlighting the key issues that matter for
ISS in general, especially for deployment in real society. If time allows, I will also discuss other currently active topics,
such as behavior-based human identification and affordance-based human-behavior expectation.

http://islab.ulsan.ac.kr/about.php

(2017/06/16) Colloquium on "Machine learning for finding latent variables by a robot"

posted 5 Jun 2017, 00:43 by Tadahiro Taniguchi   [ updated 10 Sep 2017, 23:26 by 前田真知子 ]

Organizer: Professor Tadahiro Taniguchi

I'm very happy to have a colloquium entitled "Machine learning for finding latent variables by a robot."
This is the first colloquium of the R-GIRO project.
Anybody who is interested in the talk can attend the lecture. 

Date & time: 18:00-20:00, Friday, 16 June 2017
Place: CC206, Creation Core, BKC, Ritsumeikan University

Language: English (maybe partially Japanese) 

[1st talk]
Title: A Generative Framework for Multimodal Learning of Spatial Concepts and Object Categories: An Unsupervised Part-of-Speech Tagging and 3D Visual Perception Based Approach (tentative)

Speaker: Amir Aly (Senior researcher, Ritsumeikan University)

Abstract: (tentative)
Future human-robot collaboration employs language in instructing a robot about specific tasks to perform in its surroundings. This requires the robot to be able to associate spatial knowledge with language so as to understand the details of an assigned task and behave appropriately in the context of interaction. In this paper, we propose a probabilistic framework for learning the meaning of spatial language concepts (i.e., spatial prepositions) and object categories based on visual cues representing spatial layouts and geometric characteristics of objects in a tabletop scene. The model investigates unsupervised Part-of-Speech (POS) tagging through a Hidden Markov Model (HMM) that infers the hidden tags corresponding to words. Spatial configurations and geometric characteristics of objects on the tabletop are described through 3D point cloud information that encodes spatial semantics and categories of referents and landmarks in the environment. The proposed model is evaluated through human user interaction with the Toyota HSR robot, where the obtained results show the significant effect of the model in enabling the robot to successfully engage in interaction with the user in space.
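The unsupervised POS tagging step mentioned in the abstract is easy to illustrate in isolation. Below is a minimal NumPy sketch of Baum-Welch training for an HMM whose hidden states play the role of induced tags; this is a generic illustration of the technique, not the framework from the talk, and the toy sentence, number of states, and hyperparameters are all assumptions.

```python
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=20, seed=0):
    """Unsupervised HMM training over an integer-encoded word sequence.
    Hidden states play the role of induced POS tags (toy illustration)."""
    rng = np.random.default_rng(seed)
    pi = rng.dirichlet(np.ones(n_states))                  # initial state probs
    A = rng.dirichlet(np.ones(n_states), size=n_states)    # transition matrix
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)   # emission matrix
    T = len(obs)
    for _ in range(n_iter):
        # E-step: scaled forward-backward recursions.
        alpha = np.zeros((T, n_states))
        beta = np.zeros((T, n_states))
        c = np.zeros(T)                                    # per-step scaling
        alpha[0] = pi * B[:, obs[0]]
        c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta
        gamma /= gamma.sum(axis=1, keepdims=True)          # state posteriors
        xi = np.zeros((n_states, n_states))
        for t in range(T - 1):
            x = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
            xi += x / x.sum()
        # M-step: re-estimate parameters from expected counts.
        pi = gamma[0]
        A = xi / xi.sum(axis=1, keepdims=True)
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(axis=0) + 1e-6   # smoothed counts
        B /= B.sum(axis=1, keepdims=True)
    return gamma.argmax(axis=1)                            # hard tag per word

words = "the robot moves the red cup to the left of the box".split()
vocab = {w: i for i, w in enumerate(dict.fromkeys(words))}
obs = np.array([vocab[w] for w in words])
tags = baum_welch(obs, n_states=3, n_symbols=len(vocab))
print(list(zip(words, tags)))   # e.g. repeated "the" gets a consistent tag
```

Such induced tags come purely from word co-occurrence statistics; the model in the talk couples them with 3D point-cloud features of the tabletop scene.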


[2nd talk]
Title: Hidden Feature Extraction of Driving Behavior via Deep Learning and its Applications

Speaker: Hailong Liu (Ph.D. candidate / JSPS Research Fellow (DC2), Ritsumeikan University)

Abstract:
In this work, we propose a defect-repairable feature extraction method that uses a deep sparse autoencoder (DSAE) to extract shared low-dimensional time-series data of driving behavior while reducing the negative effects of defective sensor time-series data. The first experiment showed that DSAE can extract low-dimensional time-series data shared across different multi-dimensional time-series data with different degrees of redundancy; the extracted data can be used for various applications, e.g., driving behavior visualization. The second experiment illustrated that DSAE is an effective method for repairing defective sensor time-series data during feature extraction via back-propagation. Finally, the third experiment demonstrated that DSAE reduced the negative effect of defects on a driving behavior segmentation task, outperforming the other methods compared. In summary, this study showed that DSAE achieves high feature-extraction performance for driving behavior analysis even when the sensor time-series data include defects. I will also discuss a further challenge: how a self-driving deep learning network decides driving behavior.
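For readers unfamiliar with the approach, the sketch below shows the general idea in PyTorch: a small autoencoder trained to reconstruct clean multi-dimensional sensor frames from inputs with randomly masked (defective) entries, with an L1 penalty on the bottleneck code for sparsity. The layer sizes, the 10% defect rate, and the penalty weight are illustrative assumptions, not the DSAE configuration used in the work.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy deep sparse autoencoder: multi-dimensional driving signals in,
    low-dimensional code out. All sizes are hypothetical."""
    def __init__(self, n_in=15, n_hidden=32, n_code=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.Sigmoid(),
            nn.Linear(n_hidden, n_code), nn.Sigmoid())
        self.decoder = nn.Sequential(
            nn.Linear(n_code, n_hidden), nn.Sigmoid(),
            nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, 15)                     # stand-in normalised sensor frames
mask = (torch.rand_like(x) > 0.1).float()   # simulate ~10% defective readings

for _ in range(200):
    recon, code = model(x * mask)           # encode the defective input
    loss = ((recon - x) ** 2).mean() + 1e-3 * code.abs().mean()  # MSE + L1 sparsity
    opt.zero_grad(); loss.backward(); opt.step()

# After training, `code` is the shared low-dimensional feature and `recon`
# approximates the clean signal, i.e. the defective entries are "repaired".
```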






