Liver tumor segmentation.
Here we present recordings from a new high-throughput instrument to optogenetically manipulate neural activity in moving C. elegans.
Raw Data for Liu, et al., 2021
This is the raw data corresponding to: Liu, Kumar, Sharma and Leifer, "A high-throughput method to deliver targeted optogenetic stimulation to moving C. elegans population" available at https://arxiv.org/abs/2109.05303 and forthcoming in PLOS Biology.
The code used to analyze this data is available on GitHub at https://github.com/leiferlab/liu-closed-loop-code.git
This dataset is publicly hosted on IEEE DataPort. It is >300 GB of data containing many individual image frames. We have bundled the data into one large .tar bundle. Download the .tar bundle and extract it before use. Consider using an AWS client to download the bundle instead of your web browser, as we have heard reports that downloading such large files over the browser can be problematic.
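As a minimal sketch, the bundle can also be extracted directly in MATLAB with the built-in untar; the .tar filename below is a placeholder, so substitute the actual bundle name from IEEE DataPort:

    % Minimal sketch: extract the downloaded bundle (filename is a placeholder).
    untar('liu2021_raw_data.tar', 'raw_data');   % extracts the date directories into ./raw_data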
This dataset as-is includes only the raw camera output and other outputs of the real-time instrument used to optogenetically activate the animals and record their motion. To extract final tracks, centerlines, velocities, etc., these raw outputs must be processed.
Post-processing can be done by running the ProcessDateDirectory.m MATLAB script from https://github.com/leiferlab/liu-closed-loop-code.git. Note that post-processing was optimized to run in parallel on a high-performance computing cluster. It is computationally intensive and also requires a very large amount of RAM.
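A rough sketch of one possible invocation follows; the entry point taking a date-directory path is an assumption, so consult the repository README for the actual interface:

    % Hypothetical invocation; the argument convention is an assumption,
    % not the documented interface of the repository.
    addpath(genpath('liu-closed-loop-code'));    % local clone of the GitHub repository
    ProcessDateDirectory('raw_data/20210624');   % post-process one date directory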
Repository Directory Structure
Recordings from the instrument are organized into directories by date, which we call "Date directories."
Each experiment is its own timestamped folder within a date directory, and it contains the following files:
camera_distortion.png contains camera spatial calibration information in the image metadata
CameraFrames.mkv contains the raw camera images, compressed with H.265
labview_parameters.csv contains the settings used by the instrument in the real-time experiment
labview_tracks.mat contains the real-time tracking data in a MATLAB-readable HDF5 format (a short loading sketch follows this list)
projector_to_camera_distortion.png contains the spatial calibration information that maps projector pixel space into camera pixel space
tags.txt contains tagged information for the experiment and is used to organize and select experiments for analysis
timestamps.mat contains timing information saved during the real-time experiments, including closed-loop lag
The ConvertedProjectorFrames folder contains PNG-compressed stimulus images converted to the camera's frame of reference
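As an illustrative sketch of how these outputs might be inspected in MATLAB (the variable names inside the .mat files are not documented here, so list them first):

    % Sketch: inspect and load the raw outputs of one experiment folder.
    whos('-file', 'labview_tracks.mat')           % list the stored variables without loading
    tracks = load('labview_tracks.mat');          % v7.3 (HDF5-based) .mat files load directly
    timing = load('timestamps.mat');              % timing info, including closed-loop lag
    calib  = imfinfo('camera_distortion.png');    % calibration is stored in the PNG metadata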
Naming convention for individual recordings
A typical folder name is built from the following components (a parsing sketch follows this list):
20210624 - Date the dataset was collected, in YYYYMMDD format
RunRailsTriggeredByTurning - Experiment type. For example, this experiment was performed in closed loop, triggered on turning. Open-loop experiments are called "RunFullWormRails" experiments for historical reasons.
Sandeep - Name of the experimenter
AML67 - C. elegans strain name. Note that strain AML470 corresponds to the internal strain name "AKS_483.7.e".
10ulRet - Concentration of all-trans-retinal used
red - LED color used for stimulation. Always red for this manuscript.
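As a hedged sketch of how such a name could be split into its components; the underscore separator and the example name below are assumptions for illustration, so verify against the actual directory names:

    % Hypothetical folder name; the layout and separator are assumed.
    name  = '20210624_RunRailsTriggeredByTurning_Sandeep_AML67_10ulRet_red';
    parts = strsplit(name, '_');
    run_date     = parts{1};   % date collected, YYYYMMDD
    experiment   = parts{2};   % experiment type (closed loop triggered on turning)
    experimenter = parts{3};   % strain = parts{4}; retinal = parts{5}; LED color = parts{6}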
Once post-processing has been run, figures from the manuscript can be generated using scripts in https://github.com/leiferlab/liu-closed-loop-code.git
Please refer to instructions_to_generate_figures.csv for instructions on which MATLAB script to run to generate each specific figure.
This document describes the details of the BON egocentric vision dataset. BON denotes the initials of the locations where the dataset was collected: Barcelona (Spain), Oxford (UK), and Nairobi (Kenya). BON comprises first-person video recorded while subjects were conducting common office activities. The preceding version of this dataset, the FPV-O dataset, has fewer subjects and covers only a single location (Barcelona). To develop a location-agnostic framework, data from multiple locations and/or office settings is essential.
Instructions are available in the attached document.
To address the problem of online automatic inspection of drug liquid bottles on a production line, an implantable visual inspection system is designed, and an ensemble learning algorithm for detection is proposed based on multi-feature fusion. A tunnel structure is designed for the visual inspection system, which allows bottle inspection to be automated without changing the original processes and devices. A high-precision method is proposed for vision-based detection of drug liquid bottles.
Dataset associated with a paper in the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS):
"Talk the talk and walk the walk: Dialogue-driven navigation in unknown indoor environments"
If you use this code or data, please cite the above paper.
See the docs directory.
We present here an annotated thermal dataset that is linked to the dataset available at https://ieee-dataport.org/open-access/thermal-visual-paired-dataset
To our knowledge, this is at present the only public dataset with multi-class annotations on thermal images, comprising 5 different classes.
This database was hand annotated over a period of 130 work hours.
We manually annotated all images using the VGG Image Annotator (VIA) [Dutta, Abhishek, Ankush Gupta, and Andrew Zisserman. "VGG Image Annotator (VIA)." URL: http://www.robots.ox.ac.uk/~vgg/software/via (2016).] to create the bounding boxes.
We use the standard annotation format that VIA provides.
'sonel_annotation.csv' uses the images present in the folder named 'sonel'.
Similarly, the files 'flir_annotation.csv' and 'flir_old_annotation.csv' are based on the images present in the folders 'flir' and 'flir_old'.
The images can be found as part of our older work, which is presented as an open database [Suranjan Goswami, Nand Kumar Yadav, Satish Kumar Singh. "Thermal Visual Paired Dataset." doi: 10.21227/jjba-6220].
The data is classified into 5 different classes, each listed as name: abbreviation: number key, e.g.:
modern infrastructure: inf: 5
Each file is a CSV file with the following data columns:
filename, file size, file attribute, region count, region id, region shape attributes, and region attributes.
'region count' gives the number of regions present in each image, 'region shape attributes' gives the details of the rectangle containing the annotated region, and 'region attributes' gives the class attribute name.
These files can be loaded directly into VIA, after loading the corresponding database images, to view the outlined annotations.
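For readers parsing the CSV directly instead, a minimal sketch follows; the exact column headers of the VIA export are assumptions based on the list above, and the 'region shape attributes' column holds the rectangle as JSON:

    % Sketch: read one VIA annotation row; column names are assumptions.
    T   = readtable('sonel_annotation.csv', 'VariableNamingRule', 'preserve');
    box = jsondecode(T.('region_shape_attributes'){1});   % rectangle, e.g. x, y, width, height
    cls = jsondecode(T.('region_attributes'){1});         % class abbreviation for that region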
Since the annotation format produced by VIA might not be easily usable by all data readers, we have also provided versions modified to be easily processed: the 'numbers' files.
These are 'sonel_annotation-numbers.csv', 'flir_annotation-numbers.csv' and 'flir_old_annotation-numbers.csv'.
Here, the class abbreviations are replaced by their corresponding number key as provided above.
Please note that the database we have used contains both registered and unregistered images.
Only the registered thermal images have been annotated, not the unregistered ones, as our work required registered thermal images.
This is a one-way registration: that is, the annotations made on the thermal images should map onto the optical images.
We have not included the optical annotation method here, wherein we use DETR to annotate the registered optical images and use the corresponding mapping to create the two-way annotation.
We also include 3 ZIP files with the images and their corresponding annotations, both manual and DETR-generated.
The individual manual annotations are laid out as NAME, X_START coordinate, Y_START coordinate, WIDTH, HEIGHT, CLASS.
The DETR annotations correspond to NAME, X_START coordinate, Y_START coordinate, X_END coordinate, Y_END coordinate, CLASS.
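Because the two layouts differ only in how the box extent is encoded, DETR boxes can be converted to the manual WIDTH/HEIGHT convention directly. A sketch, assuming a headerless CSV laid out as described above (the filename is a placeholder):

    % Sketch: convert DETR boxes (X_END, Y_END) to the manual (WIDTH, HEIGHT) layout.
    D = readtable('detr_annotation.csv', 'ReadVariableNames', false);   % placeholder filename
    D.Properties.VariableNames = {'name','x_start','y_start','x_end','y_end','class'};
    D.width  = D.x_end - D.x_start;
    D.height = D.y_end - D.y_start;
    M = D(:, {'name','x_start','y_start','width','height','class'});    % manual-style layout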
This database is presented as part of our work "Novel Deep Learning Method for Thermal to Annotated Thermal-Optical Fused Images".
This dataset contains satellite images of areas of interest surrounding 30 different European airports. It also provides ground-truth annotations of flying airplanes in part of those images to support future research involving flying airplane detection. This dataset is part of the work entitled "Measuring economic activity from space: a case study using flying airplanes and COVID-19" published by the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. It contains modified Sentinel-2 data processed by Euro Data Cube.
Details regarding dataset collection and usage are provided at https://github.com/maups/covid19-custom-script-contest
The region-based segmentation approach has been a major research area for many medical image applications. Vision-guided autonomous systems have used region-based segmentation information to operate heavy machinery and locomotive machines intended for computer vision applications. The dataset contains raw images in .png format of brain tumors in various portions of the brain. The dataset can be used for training and testing. Images are classified into three main regions: frontal lobe (level-1, level-2), optus-lobe (level-1), and medula_lobe (level-1, level-2, level-3).
In the field of 3D reconstruction, although standard datasets exist for evaluating the segmentation of close-up 3D models, these datasets cannot be used to evaluate the segmentation of 3D models built from satellite images. To address this issue, we provide a standard dataset for evaluating the segmentation of satellite images and their corresponding DSMs. In this dataset, the satellite images maintain an exact correspondence with the DSMs, so the segmentation results of both the satellite images and the DSMs can be evaluated with our dataset.