Department of Mechano-Informatics, Graduate School of Information Science and Technology
The University of Tokyo
Machine Intelligence Laboratory
Joint appointment: Department of Creative Informatics, Graduate School of Information Science and Technology, The University of Tokyo
We propose the Journalist Robot, which supports, or partially replaces, the tasks of a human journalist. The robot moves about in the real world, automatically finds newsworthy events based on news criteria, recognizes scenes and objects, and interviews people. Finally, the robot generates articles with pictures from the recognized events and interviews.
"AI Goggles" assist the recognition and memory capabilities of the user who wears them. The system is wearable, consisting of a camera and a head-mounted display (HMD) attached to the goggles, together with a tiny wearable PC. The AI Goggles instantly recognize images in the user's field of view, automatically name them, and record movie clips annotated with those names. The recorded data can then be searched by keyword (e.g., "keys") to retrieve movie clips from around the time the specified scene was seen (e.g., leaving the keys at the bathroom mirror), as if they were the user's "external" visual memories. Moreover, the system can quickly learn unknown images on site when taught, and can then label and retrieve them without losing recognition ability for previously learned ones. This is a major contribution to the development of visual and memory assistive human-machine interfaces.
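The keyword search over automatically annotated clips can be pictured as a simple lookup over a log of (timestamp, labels) entries. This is only an illustrative sketch of the retrieval idea; the function and data-structure names are hypothetical, not the system's actual API.

```python
def search_memory(log, keyword):
    """Return timestamps of recorded clips whose automatic annotations
    contain the given keyword.

    `log` is a list of (timestamp_seconds, labels) pairs, where `labels`
    is the set of names the recognizer attached to that clip.
    """
    return [t for t, labels in log if keyword in labels]
```

For example, querying a log for "keys" would return the times of all clips in which keys were recognized, letting the user jump to the moment the keys were last seen.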
[Example automatic annotations: "buildings, water, city, light, night"; "cat, tiger, water, rocks, forest"]
Image annotation and retrieval are promising technologies for widespread applications. They are very hard tasks, however, because target images vary greatly in appearance and span a wide variety of categories, and a huge number of images must be handled to obtain satisfactory results. We propose a new image annotation and retrieval method for miscellaneous, weakly labeled images. A distance between images is defined in an intrinsic space for annotation, obtained with a PCCA-based approach relating images and labels. Because this intrinsic space is highly compressed compared to the image feature space and is obtained by simply solving an eigenvalue problem, our method provides fast, accurate, and scalable image annotation and retrieval.
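Once the compressed intrinsic space has been learned, annotation reduces to ranking label embeddings by their similarity to the projected image. The sketch below shows only that fast ranking step: the projection matrix `W_img` and the per-label latent vectors are assumed to have been learned offline by the PCCA-style eigendecomposition, and all names here are illustrative toy choices, not the paper's implementation.

```python
import math

def project(x, W):
    """Project an image feature vector x into the latent space.
    W is a (latent_dim x feature_dim) matrix given as a list of rows."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]

def annotate(image_feat, W_img, label_embeddings, top_k=2):
    """Rank labels by cosine similarity to the projected image in the
    shared latent space and return the top_k label names."""
    z = project(image_feat, W_img)

    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv + 1e-12)

    ranked = sorted(label_embeddings.items(), key=lambda kv: -cos(z, kv[1]))
    return [name for name, _ in ranked[:top_k]]
```

Because the latent space is much lower-dimensional than the raw image feature space, each similarity computation is cheap, which is what makes the method scale to large collections.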
Time series prediction is an important issue in a wide range of areas. Many real-world processes have continuously varying states, and those processes may influence each other. If the past of one process X improves the predictability of another process Y, X is said to have a causal influence on Y. To make good predictions, it is necessary to identify the appropriate causal relationships. We propose a new method for quantifying the strength of the causal influence from one time series to another. The proposed method represents the strength of causality as a number of bits, whether each of the two time series is symbolic or numerical.
We cannot fully capture the essence of motion without tactile information, and its absence sometimes causes critical problems. To achieve a better understanding of motion behavior, we developed a wearable motion capture suit with full-body tactile sensors. We also developed a motion sensor that estimates its own orientation with its onboard CPU, and a tactile sensor module that fits many kinds of body shapes. With this system we can measure a user's movement and tactile information simultaneously, and by integrating the tactile data with the motion data we can obtain many kinds of meaningful insights.
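On-sensor orientation estimation of this kind typically fuses a gyroscope's fast but drifting integration with an accelerometer's noisy but drift-free tilt reading. The abstract does not say which filter the sensor's CPU runs; the complementary filter below is one standard, assumed approach for a single tilt angle, not the suit's actual firmware.

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyro angular rates (rad/s) with accelerometer-derived tilt
    angles (rad) for one axis. alpha weights the gyro integration; the
    small (1 - alpha) accelerometer term corrects long-term drift."""
    angle = accel_angles[0]  # initialize from the accelerometer
    estimates = []
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        estimates.append(angle)
    return estimates
```

A higher `alpha` trusts the gyro more (smoother, but drifts longer); a lower one snaps faster to the accelerometer at the cost of vibration noise.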
We propose a method for high-speed 3D object recognition using our 3D features. The method can be applied to partial models of any size in any posture. Our 3D features capture the co-occurrence of shape and color on an object's surface. The additive property of these features makes it possible to calculate the similarity between a query part and the subspace of each object in a database without dividing the models, and therefore recognition time is quite short.
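The key property is additivity: if a global feature is a sum of per-point shape-color co-occurrence counts, then the feature of a union of surface regions is the sum of the regions' features, so a partial query can be matched without segmenting the database models. The toy sketch below demonstrates only that additivity with one-hot per-point features and plain cosine matching; the actual features and the per-object subspace comparison in the paper are more elaborate.

```python
import math

N_SHAPE, N_COLOR = 4, 4  # toy bin counts, chosen for illustration

def point_feature(shape_bin, color_bin):
    """One-hot co-occurrence feature of a single surface point's
    quantized shape and color bins."""
    f = [0.0] * (N_SHAPE * N_COLOR)
    f[shape_bin * N_COLOR + color_bin] = 1.0
    return f

def model_feature(points):
    """Additive global feature: the sum of per-point features over a
    (possibly partial) set of surface points given as (shape, color) bins."""
    total = [0.0] * (N_SHAPE * N_COLOR)
    for s, c in points:
        total = [a + b for a, b in zip(total, point_feature(s, c))]
    return total

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)
```

Because `model_feature(A) + model_feature(B) == model_feature(A + B)` for disjoint regions A and B, a query part's feature can be compared directly against whole-object statistics, which is what keeps recognition fast.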
Room #81D1, Eng. Bldg. 2, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
Tel/Fax: +81-3-5841-1650
harada@mi.t.u-tokyo.ac.jp