Colloquium

R-GIRO "AI & Semiotics" Colloquium is a monthly colloquium where professors, researchers
and Ph.D. students come together and have a frank discussion.
Main participants are professors, researchers, and Ph.D. students in this project.
But, people from outside of this project is also welcome.

Each talk typically consists of a 30-minute presentation followed by a 30-minute discussion.
Let's enjoy scientific discussions.

Schedule for 2018 to 2019

Month      Organizer        Details
July       Prof. Taniguchi  2018/7/20 15:00-17:30, Lecture on Symbol Emergence in Music by Prof. Keiji Hirata
July       Prof. Nishiura   2018/7/31 10:30-16:10, Joint Workshop with IEEE Kansai Section on "Basics of Artificial Intelligence and Its Real-world Applications"
August     Prof. Shimada    Co-hosted with Prof. Kitano (R-GIRO); lecture by Prof. Kato
October    Prof. Wada       TBA
November   Prof. Nishiura   We are planning to invite robotic speech researchers from industry
December   Prof. Fukao      We are planning to organize a public event on self-driving related to the project
February   Prof. Aoyama     Workshop (topic TBA)

(to be continued)

(2018/7/20) Colloquium on "Symbol Emergence in Music"



Organizer: Professor Tadahiro Taniguchi

We are very happy to have a colloquium entitled "Symbol Emergence in Music".
This is the sixth colloquium of the R-GIRO project.
Anybody who is interested in the talk can attend the lecture. 

Date & time: 15:00-17:00, Friday, 20th July 2018
Place: CC103, Creation Core, BKC, Ritsumeikan University
(Campus map) 
http://en.ritsumei.ac.jp/file.jsp?id=246775&f=.pdf
(No. 16 is "Creation Core")

Language: Japanese

[1st talk]
Title:  TBA
Speaker: Prof. Keiji Hirata
Future University Hakodate

[2nd talk] 
Title: TBA
Speaker:  Associate Prof. Tetsuro Kitahara
Nihon University

***********************************

On Friday, 20th July 2018, we will hold a colloquium on the theme of symbol emergence in music.
This will be the sixth colloquium of the R-GIRO project "International Fusion Research Center for Next-Generation Artificial Intelligence and Semiotics".
No registration is required; anyone is welcome to attend.
We look forward to seeing many of you there.

Organizer: Professor Tadahiro Taniguchi

Date & time: 15:00-17:00, 20th July 2018
Place: Room 103, 1F, Creation Core, BKC, Ritsumeikan University

Language: Japanese

[1st talk]
Title: TBA
Speaker: Prof. Keiji Hirata (Future University Hakodate)

[2nd talk]
Title: TBA
Speaker: Associate Prof. Tetsuro Kitahara (Nihon University)



(2018/7/31) Joint Workshop with IEEE Kansai on "Basics of Artificial Intelligence and Its Real-world Applications"


Organizer: Professor Takanobu Nishiura

 We are very happy to have a joint workshop entitled “Basics of Artificial Intelligence and Its Real-world Applications”.

We are co-organizing this workshop with the IEEE Kansai Section.
Anybody who is interested in the talk can attend the lecture. 

Registration is required; for details, please visit the IEEE Kansai Section's website.
Registration deadline: Tuesday, 17th July 2018

Date & time: 10:30-16:10, Tuesday, 31st July 2018
Place: CC102, Creation Core, BKC, Ritsumeikan University
(Campus map) 
http://en.ritsumei.ac.jp/file.jsp?id=246775&f=.pdf
(No. 16 is "Creation Core")

Language: Japanese

[Keynote lecture]
Title: Artificial intelligence I experienced and watched
Lecturer: Professor Yoshiaki Shirai, Ritsumeikan University

For the program and further details, please visit the IEEE Kansai Section's website.

(2018/3/12) Colloquium on "Human Driver and Automated Driving"


Organizer: Professor Takahiro Wada

We are very happy to have a colloquium entitled "Human Driver and Automated Driving".

This is the fifth colloquium of the R-GIRO project.
Anybody who is interested in the talk can attend the lecture. 

Date & time: 16:00-18:00, Monday, 12th March 2018
Place: CC206, Creation Core, BKC, Ritsumeikan University
(Campus map) 
http://en.ritsumei.ac.jp/file.jsp?id=246775&f=.pdf
(No. 16 is "Creation Core")

Language: Japanese (maybe partially English) 

[1st talk]
Title:  Human Perception in Driving Behavior: Automated driving and Merging Judgment
Speaker: Dr. Kohei Sonoda
Human Robotics Lab. Ritsumeikan University

[2nd talk] 
Title: Jam reduction in developing countries 
~Prospect of emergent crowd-optimization with formation-control~
Speaker:  Dr. Akihito Nagahama
The University of Tokyo

(2018/2/28) Colloquium on "Intelligent Surveillance System using CV technology"


Organizer: Professor Nobutaka Shimada

 We are very happy to have a colloquium entitled “Intelligent Surveillance System using CV technology”.

This is the fourth colloquium of the R-GIRO project.
Anybody who is interested in the talk can attend the lecture. 

Date & time: 10:40-12:10, Wednesday, 28th February 2018
Place: CC206, Creation Core, BKC, Ritsumeikan University
(Campus map) 
http://en.ritsumei.ac.jp/file.jsp?id=246775&f=.pdf
(No. 16 is "Creation Core")

Language: English (maybe partially Japanese) 

[1st talk]
Title:  Intelligent Surveillance System using CV technology
Speaker: Prof. Kang-Hyun Jo  
University of Ulsan, Korea

Abstract:
In this lecture, I will briefly introduce myself and the current CV-related research topics at ISLab (University of Ulsan).
I will then focus on the Intelligent Surveillance System (ISS) we have developed, highlighting the key issues
that arise in ISS in general, especially in real-world societal deployment.
If time allows, I will also discuss other topics currently under consideration, such as behavior-based human
identification and affordance-based prediction of human behavior.

http://islab.ulsan.ac.kr/about.php

(2017/06/16) Colloquium on "Machine learning for finding latent variables by a robot"


Organizer: Professor Tadahiro Taniguchi

I'm very happy to have a colloquium entitled "Machine learning for finding latent variables by a robot."
This is the first colloquium of the R-GIRO project.
Anybody who is interested in the talk can attend the lecture. 

Date & time: 18:00-20:00, Friday, 16th June 2017
Place: CC206, Creation Core, BKC, Ritsumeikan University

Language: English (maybe partially Japanese) 

[1st talk]
Title: A Generative Framework for Multimodal Learning of Spatial Concepts and Object Categories: An Unsupervised Part-of-Speech Tagging and 3D Visual Perception Based Approach (tentative)

Speaker: Amir Aly (Senior researcher, Ritsumeikan University)

Abstract: (tentative)
Future human-robot collaboration employs language in instructing a robot about specific tasks to perform in its surroundings. This requires the robot to be able to associate spatial knowledge with language to understand the details of an assigned task so as to behave appropriately in the context of interaction. In this paper, we propose a probabilistic framework for learning the meaning of language spatial concepts (i.e., spatial prepositions) and object categories based on visual cues representing spatial layouts and geometric characteristics of objects in a tabletop scene. The model investigates unsupervised Part-of-Speech (POS) tagging through a Hidden Markov Model (HMM) that infers the corresponding hidden tags to words. Spatial configurations and geometric characteristics of objects on the tabletop are described through 3D point cloud information that encodes spatial semantics and categories of referents and landmarks in the environment. The proposed model is evaluated through human user interaction with the Toyota HSR robot, where the obtained results show the significant effect of the model in making the robot able to successfully engage in interaction with the user in space.
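
The core machinery named in this abstract, unsupervised POS tagging with a Hidden Markov Model, can be illustrated with a small toy sketch. The following is not the speaker's code: it trains a K-state HMM with Baum-Welch (EM) on a few made-up instruction-like sentences and then decodes the induced tag of each word; the corpus, the number of states K, and all hyperparameters are assumptions for illustration only.

import numpy as np

# Toy corpus of instruction-like sentences (assumption; the talk uses
# tabletop human-robot interaction data, which we do not have here).
corpus = [
    "put the cup on the table".split(),
    "move the box under the chair".split(),
    "place the ball near the cup".split(),
]
vocab = sorted({w for sent in corpus for w in sent})
w2i = {w: i for i, w in enumerate(vocab)}
V, K = len(vocab), 3  # K = number of hidden tag states (assumption)

rng = np.random.default_rng(0)
pi = rng.dirichlet(np.ones(K))         # initial state probabilities
A = rng.dirichlet(np.ones(K), size=K)  # transitions: A[i, j] = P(j | i)
B = rng.dirichlet(np.ones(V), size=K)  # emissions:   B[i, w] = P(w | i)

def forward_backward(obs):
    """Standard HMM forward/backward passes for one sentence."""
    T = len(obs)
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return alpha, beta

for _ in range(50):  # Baum-Welch (EM) iterations
    pi_acc, A_acc, B_acc = np.zeros(K), np.zeros((K, K)), np.zeros((K, V))
    for sent in corpus:
        obs = [w2i[w] for w in sent]
        alpha, beta = forward_backward(obs)
        Z = alpha[-1].sum()            # sentence likelihood
        gamma = alpha * beta / Z       # P(state at time t | sentence)
        pi_acc += gamma[0]
        for t in range(len(obs) - 1):
            # Expected transition counts at time t.
            A_acc += alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1] / Z
        for t, w in enumerate(obs):
            B_acc[:, w] += gamma[t]
    pi = pi_acc / pi_acc.sum()
    A = A_acc / A_acc.sum(axis=1, keepdims=True)
    B = B_acc / B_acc.sum(axis=1, keepdims=True)

# Posterior decoding: most likely hidden tag for each word of a sentence.
obs = [w2i[w] for w in corpus[0]]
alpha, beta = forward_backward(obs)
gamma = alpha * beta / alpha[-1].sum()
print(list(zip(corpus[0], gamma.argmax(axis=1).tolist())))

On this toy data the induced states tend to group function words (e.g., "the") and content words into different clusters, which is the sense in which the tagging is "unsupervised".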


[2nd talk]
Title: Hidden Feature Extraction of Driving Behavior via Deep Learning and Its Applications

Speaker: Hailong Liu (Ph.D. candidate / JSPS Research Fellow (DC2), Ritsumeikan University)



Abstract:
In this work, we propose a defect-repairable feature extraction method that extracts shared low-dimensional time-series data of driving behavior using a deep sparse autoencoder (DSAE), which can also reduce the negative effects of defective sensor time-series data. The first experiment showed that a DSAE can extract low-dimensional time-series data shared across different multi-dimensional time-series data with different degrees of redundancy; the extracted low-dimensional time-series data can be used for various applications, e.g., driving behavior visualization. The second experiment illustrated that a DSAE is an effective method for repairing defective sensor time-series data during feature extraction via backpropagation. Finally, the third experiment demonstrated that the negative effect of defects on a driving behavior segmentation task was reduced by using a DSAE, which outperformed other comparative methods. In summary, this study showed that the DSAE achieves high feature-extraction performance for driving behavior analysis even when the sensor time-series data include defects. I will also discuss a further challenge: how a self-driving deep learning network decides driving behavior.
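
As a rough illustration of the DSAE idea described above (a minimal sketch under assumptions, not the speaker's implementation), the following trains an autoencoder with a reconstruction loss plus an L1 sparsity penalty on a low-dimensional code, using synthetic stand-in data for the multi-dimensional driving sensor windows; the layer sizes, penalty weight, and data are assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)
# Synthetic stand-in for multi-dimensional driving sensor time-series:
# 1000 windows of 16 sensor channels each (assumption).
x = torch.randn(1000, 16)

class SparseAE(nn.Module):
    def __init__(self, d_in=16, d_hidden=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 8), nn.ReLU(),
                                 nn.Linear(8, d_hidden), nn.Sigmoid())
        self.dec = nn.Sequential(nn.Linear(d_hidden, 8), nn.ReLU(),
                                 nn.Linear(8, d_in))
    def forward(self, x):
        z = self.enc(x)          # low-dimensional shared code
        return self.dec(z), z    # reconstruction and code

model = SparseAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_weight = 1e-3  # sparsity strength (assumption)

for epoch in range(200):
    opt.zero_grad()
    recon, z = model(x)
    # Reconstruction loss plus L1 sparsity penalty on the hidden code.
    loss = nn.functional.mse_loss(recon, x) + l1_weight * z.abs().mean()
    loss.backward()
    opt.step()

codes = model(x)[1].detach()
print(codes.shape)  # torch.Size([1000, 3])

The learned 3-dimensional codes are the kind of low-dimensional time-series the abstract describes for visualization and segmentation, and reconstructing masked (defective) inputs from such codes corresponds to the defect-repair use case.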






