The video demonstrates an accurate, low-latency body-tracking approach for VR applications using Vive Trackers. Using an HTC Vive headset and Vive Trackers, an immersive VR experience was created in which the avatar's motions are animated as smoothly, rapidly, and accurately as possible. The user sees the avatar from a first-person view.

Categories:
206 Views

Description of the proposed method is presented with the support of experimental videos.

For more information please take a look at the corresponding paper (DOI: 10.1109/JBHI.2019.2963786)

Instructions: 

.mp4 files can be played with a variety of multimedia software.

Low-light scenes often come with acquisition noise, which not only disturbs viewers but also makes video compression harder. These types of videos are often encountered in cinema, whether as an artistic choice or due to the nature of a scene. Other examples include shots of wildlife (e.g. mobula rays at night in Blue Planet II), concerts and shows, surveillance camera footage, and more. Motivated by the above, we propose a challenge on encoding low-light videos.

Last Updated On: 
Fri, 05/01/2020 - 09:40

The Data Fusion Contest 2016: Goals and Organization

The 2016 IEEE GRSS Data Fusion Contest, organized by the IEEE GRSS Image Analysis and Data Fusion Technical Committee, aimed at promoting progress on fusion and analysis methodologies for multisource remote sensing data.

New multi-source, multi-temporal data, including Very High Resolution (VHR) multi-temporal imagery and video from space, were released. First, VHR images (DEIMOS-2 standard products) acquired on two different dates, before and after orthorectification:

Instructions: 

 

After unzipping, each directory contains:

  • original GeoTIFF files for the panchromatic (VHR) and multispectral (4-band) images,

  • quick-view images for both in PNG format,

  • capture parameters (RPC file).

 

As one of the research directions at the OLIVES Lab @ Georgia Tech, we focus on the robustness of data-driven algorithms under the diverse challenging conditions in which trained models may be deployed. To this end, we introduce a large-scale (~1.72M frames) traffic sign detection video dataset (CURE-TSD), which is among the most comprehensive datasets with controlled synthetic challenging conditions.

Instructions: 

The name format of the video files is as follows: “sequenceType_sequenceNumber_challengeSourceType_challengeType_challengeLevel.mp4”

  • sequenceType: 01 – Real data, 02 – Unreal data

  • sequenceNumber: A number between 01 and 49

  • challengeSourceType: 00 – No challenge source (i.e., no challenge), 01 – After Effects

  • challengeType: 00 – No challenge, 01 – Decolorization, 02 – Lens blur, 03 – Codec error, 04 – Darkening, 05 – Dirty lens, 06 – Exposure, 07 – Gaussian blur, 08 – Noise, 09 – Rain, 10 – Shadow, 11 – Snow, 12 – Haze

  • challengeLevel: A number between 01 and 05, where 01 is the least severe and 05 is the most severe challenge.
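As an illustration only (not part of the official CURE-TSD tooling), the naming scheme above can be parsed with a few lines of Python; the function name `parse_video_name` and the returned field names are our own:

```python
# Hypothetical helper: split a CURE-TSD video file name of the form
# "sequenceType_sequenceNumber_challengeSourceType_challengeType_challengeLevel.mp4"
# into its five naming fields.

CHALLENGE_TYPES = {
    "00": "no_challenge", "01": "decolorization", "02": "lens_blur",
    "03": "codec_error", "04": "darkening", "05": "dirty_lens",
    "06": "exposure", "07": "gaussian_blur", "08": "noise",
    "09": "rain", "10": "shadow", "11": "snow", "12": "haze",
}

def parse_video_name(filename):
    """Parse one video file name into a dictionary of its fields."""
    stem = filename.rsplit(".", 1)[0]            # drop the ".mp4" extension
    seq_type, seq_num, src_type, chal_type, level = stem.split("_")
    return {
        "sequence_type": "real" if seq_type == "01" else "unreal",
        "sequence_number": int(seq_num),
        "challenge_source": src_type,
        "challenge_type": CHALLENGE_TYPES[chal_type],
        "challenge_level": int(level),
    }

print(parse_video_name("01_05_01_09_03.mp4"))
```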

Test Sequences

We split the video sequences into a 70% training set and a 30% test set. The sequence numbers corresponding to the test set are given below:

[01_04_x_x_x, 01_05_x_x_x, 01_06_x_x_x, 01_07_x_x_x, 01_08_x_x_x, 01_18_x_x_x, 01_19_x_x_x, 01_21_x_x_x, 01_24_x_x_x, 01_26_x_x_x, 01_31_x_x_x, 01_38_x_x_x, 01_39_x_x_x, 01_41_x_x_x, 01_47_x_x_x, 02_02_x_x_x, 02_04_x_x_x, 02_06_x_x_x, 02_09_x_x_x, 02_12_x_x_x, 02_13_x_x_x, 02_16_x_x_x, 02_17_x_x_x, 02_18_x_x_x, 02_20_x_x_x, 02_22_x_x_x, 02_28_x_x_x, 02_31_x_x_x, 02_32_x_x_x, 02_36_x_x_x]

The videos with all other sequence numbers are in the training set. Note that “x” above refers to the variations listed earlier.
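A small sketch (our own helper, not official tooling) that checks whether a given video file belongs to the test split, using the sequence numbers listed above:

```python
# Hypothetical split checker: a video is in the test set if its
# (sequenceType, sequenceNumber) prefix appears in the list above.

TEST_SEQUENCES = (
    {("01", n) for n in "04 05 06 07 08 18 19 21 24 26 31 38 39 41 47".split()}
    | {("02", n) for n in "02 04 06 09 12 13 16 17 18 20 22 28 31 32 36".split()}
)

def is_test_video(filename):
    """Return True if the file name's sequence prefix is in the test set."""
    seq_type, seq_num = filename.split("_")[:2]
    return (seq_type, seq_num) in TEST_SEQUENCES

print(is_test_video("01_04_00_00_01.mp4"))  # True: sequence 01_04 is listed above
print(is_test_video("01_01_00_00_01.mp4"))  # False: 01_01 is in the training set
```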

The name format of the annotation files is as follows: “sequenceType_sequenceNumber.txt”

Challenge source type, challenge type, and challenge level do not affect the annotations. Therefore, video sequences that share the same sequence type and sequence number have the same annotations.

  • sequenceType: 01 – Real data, 02 – Unreal data

  • sequenceNumber: A number between 01 and 49

The format of each line in the annotation file (txt) is: “frameNumber_signType_llx_lly_lrx_lry_ulx_uly_urx_ury”. A visual example of the coordinate system is available on our GitHub page.

  • frameNumber: A number between 001 and 300

  • signType: 01 – speed_limit, 02 – goods_vehicles, 03 – no_overtaking, 04 – no_stopping, 05 – no_parking, 06 – stop, 07 – bicycle, 08 – hump, 09 – no_left, 10 – no_right, 11 – priority_to, 12 – no_entry, 13 – yield, 14 – parking
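The annotation line format above can likewise be parsed mechanically. This hypothetical `parse_annotation` helper (our own naming, not official tooling) returns the frame number, the sign label, and the four corner points in the order lower-left, lower-right, upper-left, upper-right:

```python
# Hypothetical parser for one annotation line of the form
# "frameNumber_signType_llx_lly_lrx_lry_ulx_uly_urx_ury".

SIGN_TYPES = {
    "01": "speed_limit", "02": "goods_vehicles", "03": "no_overtaking",
    "04": "no_stopping", "05": "no_parking", "06": "stop",
    "07": "bicycle", "08": "hump", "09": "no_left", "10": "no_right",
    "11": "priority_to", "12": "no_entry", "13": "yield", "14": "parking",
}

def parse_annotation(line):
    """Parse one annotation line into frame, sign label, and corner points."""
    fields = line.strip().split("_")
    frame, sign = fields[0], fields[1]
    coords = [int(v) for v in fields[2:]]
    # Pair up (x, y) values: lower-left, lower-right, upper-left, upper-right.
    corners = list(zip(coords[0::2], coords[1::2]))
    return {"frame": int(frame), "sign": SIGN_TYPES[sign], "corners": corners}

print(parse_annotation("001_06_10_20_30_20_10_5_30_5"))
```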

Herein we present a multi-threshold-based constant micro-increment control strategy that detects and suppresses slip for a prosthetic hand and minimizes the loading-force increment after stabilization. The proposed strategy primarily encompasses a slipping-process model, a multi-threshold detection method, a constant micro-increment controller, and a preset filter. First, a slipping-process model is proposed that captures the nonlinear and noise characteristics of the system.

Instructions: 

There are two sets of experiments, four videos in total, which need to be decompressed before viewing.

Seeking to improve wireless power transfer efficiency, a low-cost embedded circuit for self-tuning of impedance matching is proposed and successfully demonstrated. Applying the Maximum Power Point Tracking (MPPT) algorithm, the equivalent capacitance of an L-match circuit is automatically adjusted.
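The description names the MPPT algorithm but gives no implementation details here. As a rough sketch only, a generic perturb-and-observe loop (a standard MPPT formulation, not necessarily the authors' controller; `measure_power` and the toy power curve are hypothetical) nudges the tuning capacitance in whichever direction increases delivered power:

```python
def perturb_and_observe(measure_power, c_init, c_step, n_iters=100):
    """Generic perturb-and-observe MPPT: keep perturbing the tuning
    capacitance in the same direction while power keeps increasing,
    and reverse direction when power drops."""
    c = c_init
    p_prev = measure_power(c)
    direction = 1
    for _ in range(n_iters):
        c += direction * c_step
        p = measure_power(c)
        if p < p_prev:              # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return c

# Toy power curve with its maximum at C = 120 (illustrative only).
power = lambda c: -(c - 120.0) ** 2
print(perturb_and_observe(power, c_init=100.0, c_step=1.0))
```

In steady state the loop oscillates within one step of the power maximum, which is the usual behavior (and limitation) of perturb-and-observe tuning.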

Video supplement to "SEM-assisted (LVEM-assisted) isopotential mapping of dielectric charging of the nonwoven fabric structures using Sobel–Feldman operator (Sobel filter)", our article in a Russian journal (translated into English).
