1. Movie "movie_S1.avi" shows the normalized absolute element values of the dynamic influence matrices and their comb-drive-torque-scaled matrices along the amplitude. The dynamic influence matrices are plotted against the comb drive frequency component, index n, and the input frequency component, index m. The absolute values of the matrix elements are normalized by the maximum element. The maximum element is drawn in white, and normalized elements smaller than 10⁻⁶ of the maximum are depicted in black.
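The normalization and display thresholding described above can be sketched as follows; the matrix here is random placeholder data, since the actual dynamic influence matrices are not part of this description.

```python
import numpy as np

# Hypothetical dynamic influence matrix indexed by n (comb drive frequency
# component) and m (input frequency component); random data stands in for
# the real matrices.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))

A = np.abs(D)
A_norm = A / A.max()          # the maximum element maps to 1 (drawn white)

# Elements below 1e-6 of the maximum are clipped so they render black.
floor = 1e-6
A_plot = np.clip(A_norm, floor, 1.0)
```

Plotting `A_plot` on a logarithmic color scale then reproduces the white-to-black range used in the movie.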


This dataset provides GPS, IMU, and wheel odometry readings on various terrains for the Pathfinder robot, a lightweight, 4-wheeled, skid-steered, custom-built rover testbed platform. The rover uses a rocker system with a differential bar connected to the front wheels. Pathfinder is fitted with slick wheels to induce more slippage. The IMU on board is an ADIS-16495 with a 50 Hz data rate. Pathfinder's quadrature encoders, with a resolution of 47,000 pulses/m, provide the wheel odometry readings at a 10 Hz data rate.
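As a minimal sketch of how the stated encoder resolution and odometry rate translate raw counts into motion, assuming the 47,000 pulses/m and 10 Hz figures above (function names are illustrative, not from the CoreNav-GP code):

```python
# Convert quadrature-encoder counts to traveled distance for one wheel.
PULSES_PER_METER = 47_000

def pulses_to_distance(pulse_count: int) -> float:
    """Distance in meters corresponding to a raw encoder count."""
    return pulse_count / PULSES_PER_METER

# At the 10 Hz odometry rate, wheel speed follows from successive counts.
def wheel_speed(prev_count: int, curr_count: int, dt: float = 0.1) -> float:
    """Approximate wheel speed in m/s between two consecutive readings."""
    return (curr_count - prev_count) / (PULSES_PER_METER * dt)
```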


This dataset contains 41 zipped folders. Each folder has at least one bag file and GPS data. The folder name encodes the data collection date and the test terrain. The folders include bag files (IMU, wheel odometry) and GPS solution data on gravel, unpaved, paved, and rough terrains. Bag files can be processed with the code in the CoreNav-GP repository.


The ability to detect human postures is particularly important in several fields such as ambient intelligence, surveillance, elderly care, and human-machine interaction. Most earlier works in this area are based on computer vision. However, these works are mostly limited in providing real-time solutions for the detection activities. Therefore, we are currently working toward an Internet of Things (IoT) based solution for human posture recognition.

  • See our next journal papers.*
  • * Suppl. to: Proc. XVI International Conference on Thermal Analysis and Calorimetry in Russia (RTAC-2020). July 6th, 2020, Moscow, Russia. Book of Abstracts. — Moscow: “Pero” Publisher, 2020. — 9 MB. [Electronic edition]. ISBN 978-5-00171-240-4

Supplementary Material To: A Unified Perception Benchmark for Capacitive Proximity Sensing Towards Safe Human-Robot Collaboration (HRC)


-- Accepted for presentation at IEEE International Conference on Robotics and Automation (ICRA), 2021 Xi'an, China

-- Final formal acceptance pending

-- Conference proceedings pending


Paper Abstract:



****Test Objects****

The test objects used in this work can be found in "". They are available as STL files. The test objects "sphere" and "ellipsoid" each consist of two separate files.

Each test object has one or more sockets that fit a 12 mm (outer diameter) tube used to mount the test objects. You may find such tubes in your local hardware store, or print them as well. An *.stl file for the tube is not included.


Additional Notes:

  • You may use general purpose glue to combine separate parts of the test objects.
  • When covering your test object with copper foil, make sure to use conductive adhesive.


****Sensing directivity measurements****


Shorthand notation:

  • AAU/JR ... Klagenfurt University & Joanneum Research Robotics
  • KIT ... Karlsruhe Institute of Technology
  • TUC ... Chemnitz University of Technology


Each file (except spatial resolution and grounding measurement by KIT) is structured as follows:

  • Columns 1 to 3 represent the Cartesian coordinates of the lowest point (in height, z) of the test object with respect to the center of the electrode
  • Column 4 shows the average of the measurements conducted in 10 ms, calibrated by the baseline value
  • Column 5 shows the standard deviation of 500 measurements at each point
  • Columns 6 and 7 represent the baseline value and the baseline standard deviation (obtained by the same means as Columns 4-5), respectively. (AAU/JR recorded the baseline value at 40 cm, which is why that value is constant for the whole column.)
  • Column 8 indicates whether that point was detected by the means explained in Section IV-A of the paper
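A minimal sketch of parsing one directivity file with this column layout, assuming whitespace-separated numeric columns; the two sample rows are invented placeholders, and in practice the buffer would be replaced with one of the file paths listed below.

```python
import io
import numpy as np

# Two made-up rows with the eight columns described above.
sample = io.StringIO(
    "0.00 0.00 0.05  1.23 0.04  0.98 0.01  1\n"
    "0.02 0.00 0.05  0.10 0.03  0.98 0.01  0\n"
)
data = np.loadtxt(sample)

xyz      = data[:, 0:3]             # Columns 1-3: lowest point of the object
mean_val = data[:, 3]               # Column 4: baseline-calibrated mean
std_val  = data[:, 4]               # Column 5: std. dev. of 500 measurements
baseline = data[:, 5:7]             # Columns 6-7: baseline value and std. dev.
detected = data[:, 7].astype(bool)  # Column 8: detection flag

detected_points = xyz[detected]     # e.g. keep only the detected points
```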




File: Results_AAU/

Contains results for the sphere and the cylinder:

  • test_object_sphere_aau.txt
  • test_object_cylinder_aau.txt

Measurements of the ellipsoid will be posted soon.




File: Results_KIT/

Contains results for the sphere for three different electrode configurations of 21 cm², 42 cm², and 84 cm²

  • kit_sphere_21qcm.txt
  • kit_sphere_42qcm.txt
  • kit_sphere_84qcm.txt

Additionally contains analyses of different grounding conditions and of the spatial resolution (see Section VI-B of the paper), respectively.

  • kit_different_grounding_42qcm.txt
  • kit_spatial_resolution_42qcm.txt

These measurements were conducted using the "sphere" and an active electrode size of 42 cm². The files are structured as follows:

  • Column 1 gives the z coordinate,
  • Columns 2 to 4 show the signal/spatial-resolution values for grounding conditions of 1.5 kOhm, 1.5 kOhm + 100 pF, and 0 Ohm (hard grounding), respectively.
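The grounding/spatial-resolution files can be read in the same way; this sketch assumes the four-column layout just described, with invented placeholder numbers.

```python
import io
import numpy as np

# Two made-up rows: z coordinate followed by the three grounding conditions.
sample = io.StringIO(
    "0.05 1.10 1.05 1.30\n"
    "0.10 0.60 0.55 0.75\n"
)
data = np.loadtxt(sample)

z = data[:, 0]            # Column 1: z coordinate
grounding = {             # Columns 2-4: the three grounding conditions
    "1.5 kOhm":          data[:, 1],
    "1.5 kOhm + 100 pF": data[:, 2],
    "0 Ohm (hard)":      data[:, 3],
}
```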




File: Results_TUC/

Contains results for the sphere:

  • sphere_tuc.txt






Temperature profiles for thermal detection.


The dataset contains medical signs of sign language in different modalities: color frames, depth frames, infrared frames, body index frames, color body mapped onto the depth scale, and 2D/3D skeleton information in color and depth scales and camera space. The signs are mostly word-level, and 55 signs are performed by 16 persons two times each (55x16x2 = 1760 performances in total).



The signs were collected at Shahid Beheshti University, Tehran, and show local gestures. The SignCol software (code: , paper: ) is used for defining the signs and also for connecting to a Microsoft Kinect v2 to collect the multimodal data, including frames and skeletons. Two demonstration videos of the signs are available on YouTube: vomit: , asthma spray: . Demonstration videos of SignCol are also available at and .

The dataset contains 13 zip files in total: one zip file contains the readme, sample codes, and data (, the next zip file contains sample videos (, and the other 11 zip files contain 5 signs each (e.g. Signs(11-15).zip). For a quick start, consider the

Each performed gesture is located in a directory named in the Sign_X_Performer_Y_Z format, which denotes the Xth sign performed by the Yth person at the Zth iteration (X = 1,...,55; Y = 1,...,16; Z = 1, 2). The actual names of the signs are listed in the file table_signs.csv.
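The naming scheme above can be parsed with a small helper; the function name and regex are illustrative, not part of the dataset's own tooling.

```python
import re

# Directory names follow Sign_X_Performer_Y_Z, as described above.
PATTERN = re.compile(r"^Sign_(\d+)_Performer_(\d+)_(\d+)$")

def parse_dir_name(name: str) -> tuple:
    """Split a gesture directory name into (sign, performer, iteration)."""
    m = PATTERN.match(name)
    if m is None:
        raise ValueError(f"unexpected directory name: {name}")
    sign, performer, iteration = map(int, m.groups())
    # Ranges stated in the dataset description.
    assert 1 <= sign <= 55 and 1 <= performer <= 16 and iteration in (1, 2)
    return sign, performer, iteration
```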

Each directory includes 7 subdirectories:

1.      Times: time information of the frames, saved in a CSV file.

2.      Color Frames: RGB frames saved in 8-bit *.jpg format with a size of 1920x1080.

3.      Infrared Frames: infrared frames saved in 8-bit *.jpg format with a size of 512x424.

4.      Depth Frames: depth frames saved in 8-bit *.jpg format with a size of 512x424.

5.      Body Index Frames: body index frames scaled to depth, saved in 8-bit *.jpg format with a size of 512x424.

6.      Body Skels Data: for each frame, there is a CSV file containing 25 rows corresponding to the 25 body joints, with columns specifying the joint type, locations, and space environments. Each joint location is saved in three spaces: 3D camera space, 2D depth space (image), and 2D color space (image). Only 21 of the 25 joints are visible in this dataset.

7.      Color Body Frames: RGB body frames scaled to the depth frame, saved in 8-bit *.jpg format with a size of 512x424.


Frames are saved as sets of numbered images, and the MATLAB script PrReadFrames_AND_CreateVideo.m shows how to read the frames and also how to create videos, if required.
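The dataset ships a MATLAB reader; an equivalent Python sketch for collecting the numbered frame images of one subdirectory in order might look as follows. The file-name pattern (a frame number in the stem, `.jpg` extension) is an assumption based on the description above.

```python
import re
from pathlib import Path

def sorted_frames(frame_dir: str) -> list:
    """Return the frame paths of a subdirectory sorted by numeric index."""
    def frame_index(p):
        # Extract the first run of digits from the file stem, e.g. "12.jpg" -> 12.
        m = re.search(r"(\d+)", p.stem)
        return int(m.group(1)) if m else -1
    return sorted(Path(frame_dir).glob("*.jpg"), key=frame_index)
```

Sorting numerically (rather than lexicographically) matters because "10.jpg" would otherwise come before "2.jpg".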

The 21 visible joints are Spine Base, Spine Mid, Neck, Head, Shoulder Left, Elbow Left, Wrist Left, Hand Left, Shoulder Right, Elbow Right, Wrist Right, Hand Right, Hip Left, Knee Left, Hip Right, Knee Right, Spine Shoulder, Hand Tip Left, Thumb Left, Hand Tip Right, Thumb Right. The MATLAB script PrReadSkels_AND_CreateVideo.m shows an example of reading the joints' information, flipping them, and drawing the skeleton on the depth and color scales.

The updated information about the dataset and the corresponding paper is available at the GitHub repository MedSLset.

Terms and conditions for the use of dataset: 

1- This dataset is released for academic research purposes only.

2- Please cite both the paper and the dataset if you find this data useful for your research. You can find the references and BibTeX at MedSLset.

3- You must not distribute the dataset or any parts of it to others. 

4- The dataset only includes image, text, and video files and has been scanned with malware protection software. You accept full responsibility for your use of the dataset. This data comes with no warranty or guarantee of any kind, and you accept full liability.

5- You will treat people appearing in this data with respect and dignity.

6- You will not try to identify or recognize the persons in the dataset.


Coventry-2018 is a human activity recognition dataset captured by three Panasonic® Grid-EYE (AMG8833) infrared sensors in March 2018. The Grid-EYE sensors represent a 60° field-of-view scene as an 8 × 8 array called a frame. The data streams are synchronized to 10 frames per second and saved as *.csv recordings using the LabVIEW® software. Two layouts with different geometry sizes are considered in this dataset: 1) a small layout; and 2) a large layout.
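A minimal sketch of handling one Grid-EYE frame, assuming each frame is stored as one flattened 64-value CSV row in the LabVIEW recordings (this row layout is an assumption; the temperature values below are synthetic):

```python
import io
import numpy as np

# One synthetic frame: 64 comma-separated temperature values in °C.
row = ",".join(str(20.0 + (i % 8) * 0.1) for i in range(64))
frame = np.loadtxt(io.StringIO(row), delimiter=",").reshape(8, 8)

# Streams are synchronized to 10 frames per second, so frame index k
# corresponds to time k / FPS seconds.
FPS = 10
timestamp = 42 / FPS  # time of the frame with index 42, in seconds
```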


This article describes a possible design of a combined electron-ion trap sensor for the density and composition of the upper atmosphere, and the simulation of the processes occurring in it. The electric field between the electrodes of the trap and the motion of charged particles in it are simulated. The maximum speed and energy of the particles below which the trap holds all charged particles are calculated, even for the most unfavorable direction of their velocity: along the gap between the electrodes.

