Colloquium

R-GIRO "AI & Semiotics" Colloquium is a monthly colloquium where professors, researchers
and Ph.D. students come together and have a frank discussion.
Main participants are professors, researchers, and Ph.D. students in this project.
But, people from outside of this project is also welcome.

Each talk basically consists of a 30-minute presentation and a 30-minute discussion.
Let's enjoy scientific discussions.

Schedule for 2018 to 2019
Month     Organizer        Details
July      Prof. Taniguchi  2018/7/20 15:00-17:30
                           Lecture on Symbol Emergence in Music by Prof. Keiji Hirata.
July      Prof. Shimada    2018/7/27 16:30-18:30
                           Colloquium on "Machine Learning based Video Event Analysis";
                           co-hosted with Prof. Kitano's R-GIRO program, lecture by Prof. Kato.
July      Prof. Nishiura   2018/7/31 10:30-16:10
                           Joint workshop with the IEEE Kansai Section on
                           "Basics of Artificial Intelligence and Its Real-world Applications".
November  Prof. Wada       2018/11/7 13:30-16:30
                           Joint workshop with the Society of Automotive Engineers of Japan (JSAE).
November  Prof. Nishiura   2018/11/26 14:40-17:00
                           We are planning to invite robotic speech researchers from industry.
December  Prof. Fukao      2018/12/11 14:30-16:15
                           We are planning to organize a public event on self-driving related to the project.
February  Prof. Aoyama     2019/2/16 13:00-18:00 (at Ritsumeikan Umeda Campus)
                           TBA

to be continued

(2018/11/7) Joint Workshop with the Society of Automotive Engineers of Japan (JSAE)

posted 15 Oct 2018, 01:20 by 前田真知子   [ updated 16 Oct 2018, 23:44 ]

Organizer: Professor Takahiro Wada

We are very happy to have a joint workshop with the Society of Automotive Engineers of Japan (JSAE).
The workshop language is Japanese.

The eighth colloquium of the R-GIRO International Research Center for Artificial Intelligence and Semiotics will be held jointly with a laboratory tour of the JSAE Technical Committee on Driver Evaluation Methods.

Organizer: Professor Takahiro Wada
Date & time: Wednesday, 7 November 2018, 13:30-16:30
Place: CC102, Creation Core, BKC, Ritsumeikan University

*** Program (tentative) ***********
13:30-13:50 Takahiro Wada, "Introduction of the Laboratory"
13:50-14:20 Kohei Sonoda, "Driving Behavior and Perception: Automated Driving, Merging Judgment, and More"
14:20-14:50 Akihito Nagahama, "Driving Characteristics in Mixed-Vehicle-Type Traffic and the Reproducibility of Car-Following Models"
14:50-15:00 Break
15:00-15:30 Yuki Sato, "Research and Development on Body Augmentation"
15:30-16:30 Laboratory tours (multiple laboratories)


(2018/7/27) COLLOQUIUM on "Machine Learning based Video Event Analysis"

posted 16 Jul 2018, 20:24 by 前田真知子   [ updated 25 Jul 2018, 23:58 ]

Organizer: Professor Nobutaka SHIMADA

We are very happy to have a colloquium entitled "Machine Learning based Video Event Analysis".
This is the seventh colloquium of the R-GIRO project.
We are co-hosting this colloquium with Prof. KITANO Katsunori's R-GIRO program.
 
Anybody who is interested in the talk can attend the lecture. 

Date & time: 16:30-18:30, Friday, 27 July 2018
Place: CC103, Creation Core, BKC, Ritsumeikan University
(Campus map) 
http://en.ritsumei.ac.jp/file.jsp?id=246775&f=.pdf
 (No.16 is ”Creation Core”)

Language: Japanese

[Talk]
Title: Machine Learning based Video Event Analysis
Speaker: Prof. KATO Jien
Ritsumeikan University

***********************************

On Friday, 27 July 2018, we will hold a colloquium on machine learning based video event analysis.
This is the seventh colloquium of the R-GIRO program "International and Interdisciplinary Research Center for Next-Generation Artificial Intelligence and Semiotics".

This time, we are holding a lecture by Prof. KATO Jien, co-hosted with Prof. KITANO Katsunori's R-GIRO program.

No registration is required, and anyone is welcome to attend.
We look forward to your participation.

Organizer: Professor Nobutaka Shimada

Date & time: 27 July 2018, 16:30-18:30
Place: Room CC103, 1F, Creation Core, BKC, Ritsumeikan University

Language: Japanese

[Talk]
Title: Machine Learning based Video Event Analysis
Speaker: Prof. KATO Jien (Ritsumeikan University)


(2018/7/20) COLLOQUIUM on "Symbol Emergence in Music"

posted 14 Jun 2018, 21:48 by 前田真知子   [ updated 17 Jul 2018, 18:33 ]


Organizer: Professor Tadahiro Taniguchi

We are very happy to have a colloquium entitled "Symbol Emergence in Music".
This is the sixth colloquium of the R-GIRO project.
Anybody who is interested in the talk can attend the lecture. 

Date & time: 15:00-17:00, Friday, 20 July 2018
Place: CC103, Creation Core, BKC, Ritsumeikan University
(Campus map) 
http://en.ritsumei.ac.jp/file.jsp?id=246775&f=.pdf
 (No.16 is ”Creation Core”)

Language: Japanese

[1st talk]
Title:  Symbol Emergence in Music
Speaker: Prof. Keiji Hirata
Future University Hakodate

[2nd talk] 
Title: Representing melody without treating it as a note sequence
Speaker:  Associate Prof. Tetsuro Kitahara
Nihon University

***********************************

On Friday, 20 July 2018, we will hold a colloquium on symbol emergence in music.
This is the sixth colloquium of the R-GIRO program "International and Interdisciplinary Research Center for Next-Generation Artificial Intelligence and Semiotics".
No registration is required, and anyone is welcome to attend.
We look forward to your participation.

Organizer: Professor Tadahiro Taniguchi

Date & time: 20 July 2018, 15:00-17:00
Place: Room CC103, 1F, Creation Core, BKC, Ritsumeikan University

Language: Japanese

[Talk 1]
Title: Symbol Emergence in Music
Speaker: Prof. Keiji Hirata (Future University Hakodate)

[Abstract]
Humans are said to be born with innate faculties for recognizing language and music, and
music is expected to reveal aspects of intelligence that differ from those revealed by language.
This talk mainly discusses symbol emergence in music. I will examine how symbols in music
come into being, what they refer to, and what they mean, and introduce the two major lineages
of music theory. I will then describe our ongoing project on statistical grammar theory and
constructive semantics in music.

[Talk 2]
Title: Representing melody without treating it as a note sequence
Speaker: Associate Prof. Tetsuro Kitahara (Nihon University)

[Abstract]
One of the most typical representations of a melody is a sequence of notes.
However, people do not necessarily listen to a melody by recognizing each individual note.
In this talk, I will present my ideas and attempts regarding melody representations
that do not treat a melody as a note sequence.

(2018/7/31) Joint Workshop with IEEE Kansai on "Basics of Artificial Intelligence and Its Real-world Applications"

posted 21 May 2018, 22:28 by 前田真知子   [ updated 24 Jul 2018, 18:32 ]

Organizer: Professor Takanobu Nishiura

 We are very happy to have a joint workshop entitled “Basics of Artificial Intelligence and Its Real-world Applications”.

We co-organize this workshop with IEEE Kansai Section.
Anybody who is interested in the talk can attend the lecture. 

For details, please visit the IEEE Kansai Section's website.

Date & time: Tuesday, 31 July 2018, 10:30-16:10
Place: CC102, Creation Core, BKC, Ritsumeikan University
(Campus map) 
http://en.ritsumei.ac.jp/file.jsp?id=246775&f=.pdf
 (No.16 is ”Creation Core”)

Language: Japanese

[Keynote lecture]
Title: Artificial Intelligence as I Have Seen and Experienced It
Lecturer: Professor Yoshiaki Shirai, Ritsumeikan University

For the program and further details, please visit the IEEE Kansai Section's website.

************************

Organizer: Professor Takanobu Nishiura

We will hold a joint workshop between the R-GIRO International Research Center for Next-Generation Artificial Intelligence and Semiotics and the IEEE Kansai Section.

Workshop theme:
Basics of Artificial Intelligence and Its Real-world Applications

Date & time:
Tuesday, 31 July 2018, 10:30-16:10

Place:
Room CC102, 1F, Creation Core, BKC, Ritsumeikan University
http://www.ritsumei.ac.jp/accessmap/bkc/


Program:
10:30-10:40 Opening remarks (IEEE Kansai Section TPC Chair)

10:40-10:45 Introduction of Professor Shirai

10:45-12:15 Keynote lecture: "Artificial Intelligence as I Have Seen and Experienced It"
Professor Yoshiaki Shirai, Ritsumeikan University

========= Lunch ==========

13:15-14:45 Tutorial lectures on artificial intelligence by young researchers

Applications of machine learning and deep learning to image recognition
Dr. Tadashi Matsuo (Assistant Professor, Ritsumeikan University)

Applications of machine learning and deep learning to speech recognition
Dr. Takahiro Fukumori (Assistant Professor, Ritsumeikan University)

Applications of machine learning to home robotics
Dr. Akira Taniguchi (JSPS Postdoctoral Fellow, Ritsumeikan University)

15:00-16:00 Group discussion

16:00-16:10 Closing remarks

16:15-17:15 Hands-on session on research results related to R-GIRO AI + Semiotics

17:15-18:00 Networking session


Keynote lecture abstract:
I joined the Electrotechnical Laboratory and worked on robot vision, which led me to be involved with AI from early on, through presentations at international AI conferences, service on organizing committees, and a stay at the MIT AI Lab. I was involved in founding the Robotics Society of Japan and the Japanese Society for Artificial Intelligence and in hosting an international conference in Tokyo, and I have watched various fields of AI develop. My own specialty, computer vision (CV), has also been influenced by other fields of AI. Neural networks were applied to CV as early as the 1980s, but they ran up against the limits of computational power. Today, deep learning is used in a wide range of fields, and along with its benefits, its problems are also becoming apparent. I will reflect on the past and present of AI and introduce the "Artificial Intelligence and Semiotics" project under way at Ritsumeikan University.



(2018/3/12)COLLOQUIUM ON "Human Driver and Automated Driving"

posted 12 Feb 2018, 19:03 by 前田真知子   [ updated 15 Feb 2018, 19:35 ]

Organizer: Professor Takahiro Wada

We are very happy to have a colloquium entitled "Human Driver and Automated Driving".

This is the fifth colloquium of the R-GIRO project.
Anybody who is interested in the talk can attend the lecture. 

Date & time: 16:00-18:00, Monday, 12 March 2018
Place: CC206, Creation Core, BKC, Ritsumeikan University
(Campus map) 
http://en.ritsumei.ac.jp/file.jsp?id=246775&f=.pdf
 (No.16 is ”Creation Core”)

Language: Japanese (maybe partially English) 

[1st talk]
Title:  Human Perception in Driving Behavior: Automated driving and Merging Judgment
Speaker: Dr. Kohei Sonoda
Human Robotics Lab. Ritsumeikan University

[2nd talk] 
Title: Jam reduction in developing countries 
~Prospect of emergent crowd-optimization with formation-control~
Speaker:  Dr. Akihito Nagahama
The University of Tokyo

(2018/2/28) Colloquium on "Intelligent Surveillance System using CV technology"

posted 14 Jan 2018, 22:02 by 前田真知子   [ updated 18 Feb 2018, 19:20 ]

Organizer: Professor Nobutaka Shimada

 We are very happy to have a colloquium entitled “Intelligent Surveillance System using CV technology”.

This is the fourth colloquium of the R-GIRO project.
Anybody who is interested in the talk can attend the lecture. 

Date & time: 10:40-12:10, Wednesday, 28 February 2018
Place: CC206, Creation Core, BKC, Ritsumeikan University
(Campus map) 
http://en.ritsumei.ac.jp/file.jsp?id=246775&f=.pdf
 (No.16 is ”Creation Core”)

Language: English (maybe partially Japanese) 

[1st talk]
Title:  Intelligent Surveillance System using CV technology
Speaker: Prof. Kang-Hyun Jo  
University of Ulsan, Korea

Abstract:
In this lecture, I will briefly introduce myself and the current CV-related topics at ISLab (UOU).
I will then focus on the Intelligent Surveillance System (ISS) that we have developed and highlight
the key issues relevant to ISS in general, especially for real-world societal deployment.
If time allows, I will also discuss other topics of current interest, such as behavior-based
human identification and affordance-based prediction of human behavior.

http://islab.ulsan.ac.kr/about.php

(2017/06/16) Colloquium on "Machine learning for finding latent variables by a robot"

posted 5 Jun 2017, 00:43 by Tadahiro Taniguchi   [ updated 10 Sep 2017, 23:26 by 前田真知子 ]

Organizer: Professor Tadahiro Taniguchi

I'm very happy to have a colloquium entitled "Machine learning for finding latent variables by a robot."
This is the first colloquium of the R-GIRO project.
Anybody who is interested in the talk can attend the lecture. 

Date & time: 18:00-20:00, Friday, 16 June 2017
Place: CC206, Creation Core, BKC, Ritsumeikan University

Language: English (maybe partially Japanese) 

[1st talk]
Title: A Generative Framework for Multimodal Learning of Spatial Concepts and Object Categories: An Unsupervised Part-of-Speech Tagging and 3D Visual Perception Based Approach (tentative)

Speaker: Amir Aly (Senior researcher, Ritsumeikan University)

Abstract: (tentative)
Future human-robot collaboration employs language in instructing a robot about specific tasks to perform in its surroundings. This requires the robot to be able to associate spatial knowledge with language to understand the details of an assigned task so as to behave appropriately in the context of interaction. In this paper, we propose a probabilistic framework for learning the meaning of language spatial concepts (i.e., spatial prepositions) and object categories based on visual cues representing spatial layouts and geometric characteristics of objects in a tabletop scene. The model investigates unsupervised Part-of-Speech (POS) tagging through a Hidden Markov Model (HMM) that infers the corresponding hidden tags to words. Spatial configurations and geometric characteristics of objects on the tabletop are described through 3D point cloud information that encodes spatial semantics and categories of referents and landmarks in the environment. The proposed model is evaluated through human user interaction with Toyota HSR robot, where the obtained results show the significant effect of the model in making the robot able to successfully engage in interaction with the user in space.
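
For readers unfamiliar with the technique, the sketch below shows what unsupervised induction of POS-like tags with a discrete HMM can look like in practice. It is only an illustration under our own assumptions (a made-up toy corpus, four hidden tags, and a recent version of the hmmlearn library), not the speaker's framework, which additionally grounds such tags in 3D point-cloud features of the scene.

import numpy as np
from hmmlearn import hmm

# Toy corpus of tabletop instructions (made up for illustration).
corpus = [
    "put the cup on the table".split(),
    "move the box near the chair".split(),
    "place the bottle behind the book".split(),
]

# Encode words as integers for the categorical (discrete-emission) HMM.
vocab = {w: i for i, w in enumerate(sorted({w for s in corpus for w in s}))}
X = np.concatenate([[vocab[w] for w in s] for s in corpus]).reshape(-1, 1)
lengths = [len(s) for s in corpus]

# Hidden states play the role of induced POS-like tags (here: 4 of them).
model = hmm.CategoricalHMM(n_components=4, n_iter=100, random_state=0)
model.fit(X, lengths)

# Viterbi decoding assigns each word its most likely hidden tag.
tags = model.predict(X, lengths)
for word, tag in zip((w for s in corpus for w in s), tags):
    print(f"{word:>8} -> tag {tag}")
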


[2nd talk]
Title: Hidden Feature Extraction of Driving Behavior via Deep Learning and Its Applications

Speaker: Hailong Liu (Ph.D. candidate / JSPS Research Fellow (DC2), Ritsumeikan University)



Abstract:
In this work, we propose a defect-repairable feature extraction method that uses a deep sparse autoencoder (DSAE) to extract shared low-dimensional time-series data of driving behavior while also reducing the negative effects of defective sensor time-series data. The first experiment showed that DSAE can extract low-dimensional time-series data that are shared across different multi-dimensional time-series data with different degrees of redundancy. The extracted low-dimensional time-series data can be used for various applications, e.g., driving behavior visualization. The second experiment illustrated that DSAE is an effective method for repairing defective sensor time-series data during feature extraction via back propagation. Finally, the third experiment demonstrated that the negative effect of defects on a driving behavior segmentation task was reduced by using DSAE, which outperformed other comparative methods. In summary, this study showed that DSAE achieves high feature-extraction performance for driving behavior analysis even when the sensor time-series data include defects. I will also talk about a further challenge: how a deep learning network for self-driving decides driving behavior.
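
For a concrete picture of the general idea, the following is a minimal sketch of compressing multi-dimensional driving signals into a low-dimensional code with a sparse autoencoder. It is only an illustration under our own assumptions (random stand-in data, an L1 penalty as the sparsity constraint, illustrative layer sizes, and PyTorch), not the DSAE, loss, or data used in the talk; in particular, the defect-repair aspect (e.g., masking the reconstruction loss on corrupted entries) is omitted here.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Compress multi-dimensional signals into a low-dimensional code."""
    def __init__(self, n_inputs=10, n_code=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_inputs, 32), nn.Tanh(),
            nn.Linear(32, n_code), nn.Tanh(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_code, 32), nn.Tanh(),
            nn.Linear(32, n_inputs),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Stand-in for sensor time-series: 1000 frames of 10 driving signals.
x = torch.randn(1000, 10)

model = SparseAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

for epoch in range(50):
    optimizer.zero_grad()
    recon, code = model(x)
    # Reconstruction error plus an L1 sparsity penalty on the hidden code.
    loss = mse(recon, x) + 1e-3 * code.abs().mean()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    features = model.encoder(x)  # low-dimensional feature time-series
print(features.shape)            # torch.Size([1000, 3])
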






