ETFP (Eye-Tracking and Fixation Points) consists of two eye-tracking datasets: EToCVD (Eye-Tracking of Colour Vision Deficiencies) and ETTO (Eye-Tracking Through Objects). The former is a collection of images, their corresponding eye-movement coordinates, and fixation point maps, obtained from two cohorts: people with and without CVD (Colour Vision Deficiencies). The latter collects images with a single object lying on a homogeneous background, together with the corresponding eye-movement coordinates and fixation point maps gathered during eye-tracking sessions.
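As a rough illustration of how a fixation point map can be related to raw eye-movement coordinates, the Python sketch below accumulates (x, y) fixation coordinates into a density map and smooths it with a Gaussian kernel. The coordinate format, image size, and kernel width are assumptions for illustration, not specifications of the dataset.

import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(coords, height, width, sigma=25):
    """Accumulate (x, y) fixation coordinates into a smoothed, normalised density map."""
    heat = np.zeros((height, width), dtype=float)
    for x, y in coords:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < height and 0 <= xi < width:
            heat[yi, xi] += 1.0
    heat = gaussian_filter(heat, sigma=sigma)  # spread each fixation over a neighbourhood
    if heat.max() > 0:
        heat /= heat.max()                     # normalise to [0, 1]
    return heat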


We have developed this dataset for Bangla image captioning. It contains 500 images, each with one caption. The dataset mainly focuses on lifestyle and festivals, covering rice/harvest festivals, snake charming, palanquins, merry-go-rounds, slums, blacksmiths, potters, fishermen, tat shilpo, jamdani, shutki chash, date juice, hal chash, tokai, pohela falgun, gaye holud, etc.

   


The mean shift (MS) algorithm is a nonparametric method used to cluster sample points and find the local modes of kernel density estimates, using an idea based on iterative gradient ascent. In this paper we develop a mean-shift-inspired algorithm to estimate the modes of regression functions and partition the sample points in the input space. We prove convergence of the sequences generated by the algorithm and derive the non-asymptotic rates of convergence of the estimated local modes for the underlying regression model.
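For readers unfamiliar with the underlying idea, the sketch below shows a plain Gaussian-kernel mean-shift iteration for mode seeking on sample points. It illustrates only the classical MS update (iterative weighted means, i.e. gradient ascent on a kernel density estimate), not the regression-mode variant proposed in the paper.

import numpy as np

def mean_shift_modes(X, bandwidth=1.0, n_iter=100, tol=1e-6):
    """Move each point uphill on a Gaussian kernel density estimate of X."""
    modes = X.astype(float)
    for _ in range(n_iter):
        shifted = np.empty_like(modes)
        for i, m in enumerate(modes):
            d2 = np.sum((X - m) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * bandwidth ** 2))              # Gaussian kernel weights
            shifted[i] = (w[:, None] * X).sum(axis=0) / w.sum()   # weighted mean = MS update
        if np.max(np.linalg.norm(shifted - modes, axis=1)) < tol:
            modes = shifted
            break
        modes = shifted
    return modes  # points converging to the same mode form one cluster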

Instructions: 

Biomolecular structure data analyzed in "Space Partitioning and Regression Mode Seeking via a Mean-Shift-Inspired Algorithm" by Wanli Qiao and Amarda Shehu.


LiDAR point cloud data serves as a machine-vision alternative to images. Its advantages over images and video include depth estimation and distance measurement. Low-density LiDAR point cloud data can be used for navigation, obstacle detection, and obstacle avoidance in mobile robots, autonomous vehicles, and drones. In this dataset, we scanned over 1,200 objects and classified them into four object groups, including humans, cars, and motorcyclists.


This dataset was used in our work "See-through a Vehicle: Augmenting Road Safety Information using Visual Perception and Camera Communication in Vehicles" published in the IEEE Transactions on Vehicular Technology (TVT). In this work, we present the design, implementation and evaluation of non-line-of-sight (NLOS) perception to achieve a virtual see-through functionality for road vehicles.

Instructions: 

Non-Line of Sight Perception Vehicular Camera Communication

This project is an end-to-end Python 3 application with a continuous loop that captures and analyses 100 frames per second to derive appropriate safety warnings.

Contact

Dr. Ashwin Ashok, Assistant Professor, Computer Science, Georgia State University

Collaborators

Project contents

This project contains 3 modules that should be run in parallel and interact with each other using 3 CSV files.

Modules

  1. non-line-of-sight-perception
  2. intelligent-vehicular-perception_ivp
  3. warning-transmission

CSV Files

  1. packet.csv
  2. perceived_info.csv
  3. receiver_action.csv

Usage :

The following commands must be run in parallel. For more information on the libraries needed for execution, see the detailed sections below.

# Terminal 1
python3 non-line-of-sight-perception/VLC_project_flow.py zed

# Terminal 2
python3 intelligent-vehicular-perception_ivp/src-code/ivp_impl.py

# Terminal 3
python3 warning-transmission/send_bits_to_transmitter.py

1. non-line-of-sight-perception : Object Detection and Scene Perception Module

For YOLOv3 training and inference, this module is a fork of the public repository keras-yolo3; refer to that repository's README for details. The relevant folders from that repository have been placed in the training and configuration folders of this repository.

Installation of python libraries

Use the package manager pip to install the required libraries.

pip install opencv-python
pip install tensorflow-gpu
pip install Keras
pip install Pillow

Hardware requirements

  1. This code was tested on a Jetson Xavier, but any GPU-enabled machine should be sufficient.
  2. Zed Camera: uses the Zed camera to capture the images (requires a GPU to operate at 100 fps).
  3. (Optional) The code can be modified, as per the comments in the file, to use 'zed', 0 for the default camera, or a video path for MP4 or SVO files (a minimal sketch of this source selection follows below).
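The sketch below illustrates the idea of a switchable capture source with OpenCV; the argument handling and variable names are assumptions for illustration and not the project's actual code. The ZED camera itself is normally driven through the ZED SDK, so only the plain-camera and video-file cases are shown here.

import cv2, sys

source = sys.argv[1] if len(sys.argv) > 1 else "0"   # '0' for the default camera, or a video file path
cap = cv2.VideoCapture(0) if source == "0" else cv2.VideoCapture(source)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... run YOLOv3 inference on `frame` and log results to perceived_info.csv ...
cap.release()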

Output

perceived_info.csv

2. intelligent-vehicular-perception_ivp : Safety Message/Warning Mapping Module

This module is responsible for making intelligent recommendations to the driver as well as generating safety warnings for the following vehicles on the road. The module's output is a fusion of the safety warnings received through the VLC channel and the vehicle's own scene perception data.

Python Library Dependencies

  • json
  • operator
  • csv
  • enum
  • fileinput
  • re

Input

Output

The output is two-fold.

  • packet.csv : Intelligent Recommendation to the Driver.
  • receiver_action.csv : Generated packet bits. The bits of each packet are logged into the 'packet.csv' file, which works as a queue; every new packet logged there is eventually transmitted by the VLC transmission module (see the producer sketch below).
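Since packet.csv acts as a queue between the perception and transmission modules, a hypothetical producer might simply append one row per generated packet, as in the sketch below. The single-column layout is an assumption, not the project's actual CSV schema.

import csv

def enqueue_packet(bits, path="packet.csv"):
    """Append one generated packet (a string of bits) to the CSV queue."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([bits])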

3. warning-transmission: Communication Module

Detailed transmitter notes, including hardware requirements, are present in transmitter_notes.txt.

Python Library Dependencies

  • serial

Input

  • packet.csv : Intelligent Recommendation to the Driver.

Output

The LED flashes high/low in correspondence with the packets in the input file.
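A minimal sketch of what the transmission side might look like with the pyserial dependency listed above: read queued packets from packet.csv and write their bits to the serial port driving the LED. The port name, baud rate, and symbol period are assumptions; see transmitter_notes.txt for the actual hardware setup.

import csv, serial, time

ser = serial.Serial("/dev/ttyUSB0", 115200)   # assumed port and baud rate

with open("packet.csv", newline="") as f:
    for row in csv.reader(f):
        for bit in row[0]:
            ser.write(b"1" if bit == "1" else b"0")   # LED high/low per bit
            time.sleep(0.001)                         # assumed symbol period
ser.close()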

Dataset used for training the model

The dataset has been generated using Microsoft VoTT.

This is a "Brakelight" labelled dataset that can be used for training Brakelight detection models. The dataset contains brakelights labelled on images from

*Reference: Cui, Z., Yang, S. W., & Tsai, H. M. (2015, September). A vision-based hierarchical framework for autonomous front-vehicle taillights detection and signal recognition. In 2015 IEEE 18th International Conference on Intelligent Transportation Systems (ITSC) (pp. 931-937). IEEE.

The labelled dataset contains 1,720 training images as well as a CSV file that lists 4,102 bounding boxes in the format: image | xmin | ymin | xmax | ymax | label

This can be further converted into the format required by the training module using convert_dataset_for_training.py (replace annotations.txt with the Microsoft VoTT generated CSV).
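As an illustration of that conversion step, the sketch below rewrites a CSV with image/xmin/ymin/xmax/ymax/label columns into one annotation line per image, roughly the layout keras-yolo3 expects. The input file name and exact column headers of the VoTT export are assumptions, so convert_dataset_for_training.py remains the authoritative script.

import csv
from collections import defaultdict

boxes = defaultdict(list)
with open("Brakelight-export.csv", newline="") as f:   # hypothetical VoTT export name
    for row in csv.DictReader(f):
        # single-class dataset assumed, so the class index is fixed to 0
        boxes[row["image"]].append(
            "{},{},{},{},0".format(row["xmin"], row["ymin"], row["xmax"], row["ymax"]))

with open("train_annotations.txt", "w") as out:
    for image, bb in boxes.items():
        out.write(image + " " + " ".join(bb) + "\n")   # keras-yolo3 style: path box1 box2 ...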

 

Acknowledgements

This work has been partially supported by the US National Science Foundation (NSF) under grants 1755925, 1929171, and 1901133.


Silk fibroin is the structural fiber of the silk filament and is usually separated from the external sericin by a chemical process called degumming. This process consists of an alkali bath in which the silk cocoons are boiled for a set time. It is also known that the degumming process affects the properties of the resulting silk fibroin fibers.

Instructions: 

The data contained in the first sheet of the dataset are in tidy format (each row corresponds to an observation) and can be directly imported into R and processed with the Tidyverse package. Note that the row with standard order 49 corresponds to the reference degumming, while row 50 corresponds to the test made on the bare silk fiber (not degummed). In this last case neither the mass loss nor the secondary structures were determined: since the fiber was not degummed, the sericin still surrounded it, so its secondary structure could not be examined. The first two columns of the dataset are the Standard order (the standard order in which the Design of Experiment data are elaborated) and the Run order (the randomized order in which the trials were performed). The next four columns are the studied factors, while the rest of the dataset reports the process yields (in this case, the properties of the resulting silk fibers).

The second sheet contains information on the molecular weight of the tested samples. In this case only one sample from each triplicate was tested. Both the standard order and the run order refer to the same samples as in the first sheet.
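If one prefers Python over the R/Tidyverse workflow described above, a roughly equivalent starting point is to read both sheets with pandas. The file name below is hypothetical; adapt it to the actual dataset file.

import pandas as pd

# Sheet 1: tidy table of degumming trials; sheet 2: molecular weights (one sample per triplicate).
trials = pd.read_excel("degumming_dataset.xlsx", sheet_name=0)
mol_weight = pd.read_excel("degumming_dataset.xlsx", sheet_name=1)

# Rows with standard order 49 (reference degumming) and 50 (bare, non-degummed fiber)
# are special cases and may need to be excluded from the factorial analysis.
print(trials.head())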


Feature tables and source code for Camargo et al., "A Machine Learning Strategy for Locomotion Classification and Parameter Estimation Using Fusion of Wearable Sensors," IEEE Transactions on Biomedical Engineering, 2021.

Instructions: 

The feature tables used for this paper can be found in ‘Classification.zip’ and ‘Regression.zip’, while source code is found in ‘CombinedLocClassAndParamEst-sourcecode.zip’. To get started, download all the files into a single folder and unzip them. Within ‘CombinedLocClassAndParamEst-master’, the folder ‘sf_analysis’ contains the main code to run, split into ‘Classification’ and ‘Regression’ code folders. There is also a 'README.md' file within the source code with more information and dependencies. If you’d like to just regenerate plots and results from the paper, then move all contents of the ‘zz_results_published’ folders (found under the feature table folders) up one folder so they are just within the ‘Classification’ or ‘Regression’ data folders. Go into the source code, find the ‘analysis’ folders, and run any ‘analyze*.m’ script with updated ‘datapath’ variables to point to the results folders you just moved.


Automatic humor detection has interesting use cases in modern technologies, such as chatbots and virtual assistants. Existing humor detection datasets usually combine formal non-humorous texts and informal jokes with incompatible statistics (text length, word count, etc.). This makes it more likely that humor can be detected with simple analytical models, without understanding the underlying latent lingual features and structures.


Dataset associated with a paper in Computer Vision and Pattern Recognition (CVPR)

 

"Object classification from randomized EEG trials"

 

If you use this code or data, please cite the above paper.

Instructions: 

See the paper "Object classification from randomized EEG trials" on IEEE Xplore.

 

Code for analyzing the dataset is included in the online supplementary materials for the paper.

 

The code from the online supplementary materials is also included here.

 

If you use this code or data, please cite the above paper.


Dataset used in the article "On the shape of timing distributions in free text keystroke dynamics profiles". It contains CSV files with the timing features (hold times and flight times) of every keypress in three free-text datasets used in previous studies by the author (LSIA) and two other unrelated groups (KM and PROSODY, the latter subdivided into GAY, GUN, and REVIEW). The timing features are grouped by dataset, user, task, virtual key code, and feature. Two different languages are represented: Spanish in LSIA and English in KM and PROSODY.
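For context, hold time and flight time are typically derived from key press and release timestamps; the sketch below shows that computation. The event format is an assumption for illustration, not the CSV layout of this dataset.

def timing_features(events):
    """events: list of (key, press_time, release_time) tuples in chronological order."""
    features = []
    for i, (key, press, release) in enumerate(events):
        hold = release - press                     # hold (dwell) time of this key
        if i + 1 < len(events):
            flight = events[i + 1][1] - release    # flight time to the next key press
        else:
            flight = None
        features.append((key, hold, flight))
    return features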

