Robotics

Computer vision can be used by robotic leg prostheses and exoskeletons to improve high-level transitions between different locomotion modes (e.g., level-ground walking to stair ascent) through the prediction of future environmental states. Here we developed the StairNet dataset to support research and development in vision-based automated stair recognition.


Dataset associated with a paper in the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems:

"Talk the talk and walk the walk: Dialogue-driven navigation in unknown indoor environments"

If you use this code or data, please cite the above paper.


This is the original data and the fitted data from the paper "Scalability in Computing and Robotics" by Heiko Hamann and Andreagiovanni Reina. For more information, please refer to the paper.


Given the difficulty of handling planetary data, we provide downloadable files in PNG format from the Chang'E-3 and Chang'E-4 missions, along with a set of scripts to perform the conversion from a given PDS4 dataset.
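The conversion scripts themselves are not reproduced here. Purely as an illustrative sketch, assuming the raw PDS4 image array has already been read into rows of integer samples, the rescale-and-encode step to greyscale PNG could be done with only the Python standard library:

```python
import struct
import zlib

def _chunk(tag, payload):
    # A PNG chunk: 4-byte length, tag, payload, CRC over tag+payload.
    data = tag + payload
    return struct.pack(">I", len(payload)) + data + struct.pack(">I", zlib.crc32(data))

def rescale_to_uint8(rows):
    # Linearly map raw detector values (e.g. 16-bit) onto the 0-255 range.
    flat = [p for row in rows for p in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1
    return [[round(255 * (p - lo) / span) for p in row] for row in rows]

def to_png_bytes(rows):
    # Build a minimal 8-bit greyscale PNG from rows of ints in 0-255.
    height, width = len(rows), len(rows[0])
    # IHDR: width, height, bit depth 8, colour type 0 (greyscale), no interlace.
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    raw = b"".join(b"\x00" + bytes(row) for row in rows)  # filter type 0 per scanline
    return (b"\x89PNG\r\n\x1a\n"
            + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(raw))
            + _chunk(b"IEND", b""))

# Example: a 2x3 ramp of raw values becomes a tiny greyscale PNG in memory.
png = to_png_bytes(rescale_to_uint8([[0, 1000, 2000], [3000, 4000, 5000]]))
```

Reading the PDS4 label and binary array itself is omitted; only the generic image-to-PNG step is shown.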


Validation data for the article "A framework for assessing, comparing and predicting the performance of autonomous RFID-based inventory robots for retail"


Dataset of rosbags collected during autonomous drone flights inside a warehouse containing stockpiles. The PCD files were created using the reconstruction method proposed in the article.

Data are still being moved to IEEE DataPort.
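For readers unfamiliar with the PCD format used for the reconstructions, a minimal sketch of an ASCII PCD reader is shown below. It handles only the `DATA ascii` variant and assumes x/y/z float fields; binary PCD files need a different path.

```python
def parse_ascii_pcd(text):
    """Parse a minimal ASCII-encoded PCD point cloud into (header, points)."""
    header, points, in_data = {}, [], False
    for line in text.strip().splitlines():
        if line.startswith("#"):          # comments are allowed anywhere
            continue
        if in_data:
            points.append(tuple(float(v) for v in line.split()))
            continue
        key, _, rest = line.partition(" ")
        header[key] = rest.split()
        if key == "DATA":                 # header ends at the DATA line
            if rest.strip() != "ascii":
                raise ValueError("only DATA ascii is handled in this sketch")
            in_data = True
    return header, points

sample = """\
VERSION 0.7
FIELDS x y z
SIZE 4 4 4
TYPE F F F
COUNT 1 1 1
WIDTH 2
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 2
DATA ascii
0.0 1.0 2.0
3.0 4.0 5.0
"""
header, pts = parse_ascii_pcd(sample)
```

In practice one would use the Point Cloud Library or a package such as Open3D instead of hand-rolling a parser; the sketch only illustrates the file layout.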


Synergistic prostheses enable the coordinated movement of the human-prosthetic arm, as required by activities of daily living. This is achieved by coupling the motion of the prosthesis to the human command, such as residual limb movement in motion-based interfaces. Previous studies demonstrated that developing human-prosthetic synergies in joint-space must consider individual motor behaviour and the intended task to be performed, requiring personalisation and task calibration.
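The paper's personalisation procedure is not detailed in this summary. Purely as an illustration of what a joint-space synergy calibration might involve, the sketch below fits a linear coupling (prosthetic elbow angle ≈ a · residual shoulder angle + b) to recorded calibration pairs by ordinary least squares; the linear model and function names are assumptions, not the authors' method.

```python
def fit_linear_synergy(shoulder_deg, elbow_deg):
    # Ordinary least squares fit of elbow ≈ a * shoulder + b.
    n = len(shoulder_deg)
    mx = sum(shoulder_deg) / n
    my = sum(elbow_deg) / n
    sxx = sum((x - mx) ** 2 for x in shoulder_deg)
    sxy = sum((x - mx) * (y - my) for x, y in zip(shoulder_deg, elbow_deg))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def synergy_command(shoulder_deg, a, b):
    # Map a residual-limb measurement to a prosthetic joint set-point.
    return a * shoulder_deg + b

# Calibration on synthetic data that follows elbow = 0.7 * shoulder + 5 exactly.
a, b = fit_linear_synergy([0, 10, 20, 30], [5.0, 12.0, 19.0, 26.0])
```

Per-user calibration would then amount to re-fitting (a, b) from each user's recorded motions for each task.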


We introduce a new robotic RGB-D dataset with difficult luminosity conditions: ONERA.ROOM. It comprises RGB-D data (as pairs of images) and corresponding annotations in PASCAL VOC format (XML files).

It targets people detection in (mostly) indoor and outdoor environments. People in the field of view can be standing, but also lying on the ground, as after a fall.


These datasets are of the hydraulically actuated robot HyQ’s proprioceptive sensors. They include absolute and relative encoders, force and torque sensors, and MEMS-based and fibre optic-based inertial measurement units (IMUs). Additionally, a motion capture system recorded the ground truth data with millimetre accuracy. In the datasets, HyQ was manually controlled to trot in place or move around the laboratory. The sequences include forward and backwards motion, side-to-side motion, zig-zags, yaw motion, and a mix of linear and yaw motion.


In recent years, researchers have explored human body posture and motion to control robots in more natural ways. These interfaces require the ability to track the body movements of the user in three dimensions. Deploying motion capture systems for tracking tends to be costly and intrusive and requires a clear line of sight, making them ill adapted for applications that need fast deployment. In this article, we use consumer-grade armbands, capturing orientation information and muscle activity, to interact with a robotic system through a state machine controlled by a body motion classifier.
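To make the state-machine idea concrete, here is a minimal, hypothetical sketch of gesture-driven mode switching. The states and gesture labels are illustrative only, not the article's actual design or classifier output.

```python
# Transition table: (current state, classified gesture) -> next state.
# These labels are invented for illustration.
TRANSITIONS = {
    ("idle", "wave"): "teleop",
    ("teleop", "fist"): "gripper_close",
    ("gripper_close", "open_hand"): "teleop",
    ("teleop", "wave"): "idle",
}

def step(state, gesture):
    # Remain in the current state when the gesture has no defined transition.
    return TRANSITIONS.get((state, gesture), state)

# Feed a stream of classifier outputs through the machine.
state = "idle"
for gesture in ["wave", "fist", "open_hand", "wave"]:
    state = step(state, gesture)
```

In a real system, each state would gate which robot commands the armband's orientation stream is mapped to, and the gesture labels would come from the body motion classifier.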

