Vision-based machine learning for soft wearable robot enables disabled person to naturally grasp objects
Soft Robotics Research Center
Published 2019.01.30 15:30:54

A collaborative research team led by Professor Sungho Jo (KAIST) and Professor Kyu-Jin Cho (Seoul National University) at the Soft Robotics Research Center (SRRC), Seoul, Korea, has proposed a new intention-detection paradigm for soft wearable hand robots. The proposed paradigm predicts grasping/releasing intentions based on user behaviors, enabling spinal cord injury (SCI) patients who have lost hand mobility to pick and place objects. (The researchers include Daekyum Kim and Jeesoo Ha of KAIST, and Brian Byunghyun Kang, Kyu Bum Kim, and Hyungmin Choi of Seoul National University.)

They developed a machine-learning-based method that predicts user intentions for wearable hand robots using a first-person-view camera. The method is built on the hypothesis that user intentions can be inferred from user arm behaviors and hand-object interactions.

The machine learning model used in this study, the Vision-based Intention Detection network from an EgOcentric view (VIDEO-Net), is designed around this hypothesis. VIDEO-Net is composed of spatial and temporal sub-networks: the temporal sub-network recognizes user arm behaviors, and the spatial sub-network recognizes hand-object interactions.
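
The article does not give VIDEO-Net's exact layer structure, so the following is only a minimal PyTorch sketch of the two-stream idea described above: a spatial sub-network extracts per-frame features related to hand-object interaction, and a temporal sub-network summarizes arm-behavior dynamics across frames before a final intention classifier. The class name, all layer sizes, the GRU choice, and the three-class output are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: layer sizes, the GRU, and the three intention
# classes (grasp / release / rest) are assumptions, not VIDEO-Net itself.
import torch
import torch.nn as nn

class TwoStreamIntentionNet(nn.Module):
    """Predicts an intention class from an egocentric video clip."""
    def __init__(self, num_classes: int = 3, feat_dim: int = 128):
        super().__init__()
        # Spatial sub-network: per-frame features (hand-object interaction).
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Temporal sub-network: models arm-behavior dynamics across frames.
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        frames = clip.reshape(b * t, c, h, w)
        feats = self.spatial(frames).view(b, t, -1)  # per-frame features
        _, last_hidden = self.temporal(feats)        # summarize the sequence
        return self.classifier(last_hidden[-1])      # intention logits

if __name__ == "__main__":
    model = TwoStreamIntentionNet()
    dummy_clip = torch.randn(2, 8, 3, 64, 64)  # 2 clips of 8 frames each
    print(model(dummy_clip).shape)             # torch.Size([2, 3])
```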

An SCI patient wearing Exo-Glove Poly II, a soft wearable hand robot (a video of a previous version is available), successfully picked and placed various objects and performed essential activities of daily living, such as drinking coffee, without any additional help. The development is advantageous in that it detects user intentions without requiring person-to-person calibration or additional user actions, enabling the wearable hand robot to interact with its wearer seamlessly.

The research is published as a focus article in the 26th issue of Science Robotics on January 30, 2019.

Q: How does this system work?

A: This technology aims to predict user intentions, specifically the intent to grasp or release a target object, using a first-person-view camera mounted on glasses. VIDEO-Net, a deep-learning-based algorithm, is devised to predict user intentions from the camera images based on user arm behaviors and hand-object interactions. Through vision, data on the environment and the human's movement are captured and used to train the machine learning algorithm.
Instead of bio-signals, which are often used for intention detection in disabled people, we use a simple camera to determine whether the user is trying to grasp an object or not. This works because the target users are able to move their arms but not their hands. We can predict the user's intention to grasp by observing the arm movement and the distance between the hand and the object, and interpreting those observations with machine learning, as sketched below.
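
As a rough illustration of the cue just described, the hypothetical sketch below flags grasp intent when the hand both approaches the object over a short window of frames and ends up near it. In the actual system this mapping is learned by VIDEO-Net rather than hand-coded; the `Frame` structure, function names, and pixel thresholds here are invented for the example.

```python
# Hypothetical rule-based stand-in for the learned cue described above:
# grasp intent = arm moving toward the object + hand ending up near it.
# Thresholds and the Frame structure are invented for illustration.
from dataclasses import dataclass

@dataclass
class Frame:
    hand_xy: tuple[float, float]    # hand position in image coordinates
    object_xy: tuple[float, float]  # target-object position

def distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def grasp_intended(frames: list[Frame],
                   near_px: float = 40.0,
                   approach_px: float = 15.0) -> bool:
    """True if the hand closes in on the object over the window."""
    if len(frames) < 2:
        return False
    d_start = distance(frames[0].hand_xy, frames[0].object_xy)
    d_end = distance(frames[-1].hand_xy, frames[-1].object_xy)
    approaching = (d_start - d_end) > approach_px  # arm moves toward object
    near = d_end < near_px                         # hand ends close to it
    return approaching and near

# Example: the hand moves from 100 px away to 20 px away -> grasp intent.
window = [Frame((0.0, 0.0), (100.0, 0.0)),
          Frame((80.0, 0.0), (100.0, 0.0))]
print(grasp_intended(window))  # True
```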

Q: Who benefits from this technology?

A: As mentioned earlier, this technology detects user intentions from arm behaviors and hand-object interactions. It can be used by anyone with lost hand mobility, whether from spinal cord injury, stroke, cerebral palsy, or other injuries, as long as they can move their arm voluntarily.

Q: What are the limitations and future work?

A: Most of the limitations come from the drawbacks of using a monocular camera. For example, if a target object is occluded by another object, the performance of this technology decreases, and if the user's hand is not visible in the camera view, the technology is not usable. To overcome this lack of generality, the algorithm needs to be improved by incorporating other sensor information or existing intention-detection methods, such as electromyography (EMG) sensors or eye-gaze tracking.
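
To make that future direction concrete, one common way such signals could be combined is late fusion, where the vision network's intention probabilities are averaged with those of a second modality such as EMG. The sketch below is purely illustrative; the article does not describe any fusion architecture, and the weighting and logits here are invented.

```python
# Purely illustrative late-fusion sketch; the article proposes adding EMG
# or gaze but does not specify how the signals would be combined.
import torch
import torch.nn.functional as F

def fuse_intentions(vision_logits: torch.Tensor,
                    emg_logits: torch.Tensor,
                    vision_weight: float = 0.7) -> torch.Tensor:
    """Weighted average of per-modality class probabilities."""
    p_vision = F.softmax(vision_logits, dim=-1)
    p_emg = F.softmax(emg_logits, dim=-1)
    return vision_weight * p_vision + (1.0 - vision_weight) * p_emg

# Example: vision is unsure (e.g., occluded object), EMG tips the decision.
v = torch.tensor([[0.1, 0.0, 0.1]])    # grasp / release / rest logits
e = torch.tensor([[2.0, -1.0, -1.0]])
print(fuse_intentions(v, e).argmax(dim=-1))  # tensor([0]) -> grasp
```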

Q: To use this technology in daily life, what do you need?

A: For this technology to be used in daily life, three devices are needed: a wearable hand robot with an actuation module, a computing device, and glasses with a camera mounted on them. We aim to decrease the size and weight of the computing device so that the robot is portable enough for daily use. So far, we have not found a compact computing device that fulfills our requirements, but we expect that neuromorphic chips able to perform deep learning computations will become commercially available.

정원영  robot3@irobotnews.com