The citrus leaf dataset used in this study comes from PlantVillage [24], an open-access public resource for agriculture-related content. The dataset includes three types of citrus leaves: healthy, with general HLB (Huanglongbing, citrus greening), and with severe HLB. The original dataset contains 4,577 citrus leaf images divided into these three categories.


This dataset contains the data associated with the electrically equivalent model of the IEEE Low Voltage (LV) test feeder for use in distribution network studies. It accompanies the letter entitled "A Reduced Electrically-Equivalent Model of the IEEE European Low Voltage Test Feeder".

Instructions: 

The uploaded data includes a zip file containing the dataset, in the form of CSV files, for an electrically equivalent reduced model of the IEEE European LV feeder.

  • The test feeder is at the voltage level of 416 V, phase-to-phase.
  • Load shapes with one-minute time resolution over 24 hours are provided for time-series load flow simulation.
  • Line data and load data of the network are given in separate CSV files.
  • The line codes, specified by sequence impedances and admittances, are available in a separate CSV file.
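Because line data and load data live in separate CSV files, a typical first step is to read both and join them on the bus identifier. The sketch below is illustrative only: the column names (`Line`, `Bus1`, `Bus2`, `kW`, etc.) are assumptions, not the actual headers shipped in the dataset.

```python
import io
import pandas as pd

# Hypothetical CSV layouts for the line-data and load-data files;
# the real column names may differ from these.
line_csv = io.StringIO(
    "Line,Bus1,Bus2,LineCode,Length_m\n"
    "L1,1,2,UG1,25\n"
    "L2,2,3,UG1,40\n"
)
load_csv = io.StringIO(
    "Load,Bus,Phase,kW,PF\n"
    "LD1,2,A,1.5,0.95\n"
)

lines = pd.read_csv(line_csv)
loads = pd.read_csv(load_csv)

# Attach each load to the line whose receiving bus serves it
merged = loads.merge(lines, left_on="Bus", right_on="Bus2")
print(merged[["Load", "Line", "kW"]])
```

With the real files, the same merge links each load point to its feeding line segment before a time-series load flow run.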


This is a dataset of five mainstream stock market indices. It includes the XJO, DJI, IXIC, HSI, and N225 indices from Sep. 2010 to Aug. 2020.
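A common first step with index data of this kind is to compute daily returns from the closing levels. The sketch below assumes a hypothetical two-column CSV layout (`Date`, `Close`); the files in the dataset may use different column names.

```python
import io
import pandas as pd

# Hypothetical per-index CSV layout (date and closing level)
csv_text = io.StringIO(
    "Date,Close\n"
    "2010-09-01,100.0\n"
    "2010-09-02,102.0\n"
    "2010-09-03,99.96\n"
)
df = pd.read_csv(csv_text, parse_dates=["Date"]).set_index("Date")

# Daily simple returns: Close_t / Close_{t-1} - 1
df["Return"] = df["Close"].pct_change()
print(df["Return"].tolist())
```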


An Optical Character Recognition (OCR) system is used to convert document images, either printed or handwritten, into their electronic counterparts. However, dealing with handwritten text is much more challenging than printed text due to the erratic writing styles of individuals. The problem becomes more severe when the input image is a doctor's prescription. Before feeding such an image to the OCR engine, classifying printed versus handwritten text is a necessity, as a doctor's prescription contains both handwritten and printed text, which must be processed separately.


We compared the performance of an LwM2M device management protocol implementation with that of FIWARE's Ultralight 2.0. In addition to demonstrating the viability of the proposed approach, the obtained results point to mixed advantages and disadvantages of one protocol over the other.


We introduce a new database of voice recordings with the goal of supporting research on vulnerabilities and protection of voice-controlled systems (VCSs). In contrast to prior efforts, the proposed database contains both genuine voice commands and replayed recordings of such commands, collected in realistic VCS usage scenarios and using modern voice assistant development kits.

Instructions: 

The corpus consists of three sets: the core, evaluation, and complete set. The complete set contains all the data (i.e., complete set = core set + evaluation set) and allows the user to freely split the training/test set. The core/evaluation sets suggest a default training/test split. For each set, all *.wav files are in the /data directory and the meta information is in the meta.csv file. The protocol is described in readme.txt. A PyTorch data loader script is provided as an example of how to use the data. A Python resampling script is provided for resampling the dataset to the desired sample rate.
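Independent of the provided PyTorch loader, the meta file can be paired with the wav paths using only the standard library. The column names below (`filename`, `label`, `speaker`) are assumptions; the real layout is described in readme.txt.

```python
import csv
import io

# Hypothetical meta.csv layout; the actual columns are defined
# in the corpus documentation.
meta = io.StringIO(
    "filename,label,speaker\n"
    "data/genuine_0001.wav,genuine,spk01\n"
    "data/replay_0001.wav,replay,spk01\n"
)

# Pair each wav path with its label -- the (sample, target) shape
# a data loader would typically consume.
samples = [(row["filename"], row["label"]) for row in csv.DictReader(meta)]
print(samples)
```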


The uploaded data file is part of the data used or generated by a real-time security system for frequency control in electrical grids with variable renewable generation, proposed in a paper entitled "Dynamic regulation in electrical networks with non-controlled sources". The proposed security system analyzes the electrical network in both the steady state and the dynamic state. The IEEE 39-bus test system, extended with wind generation models, was used to evaluate the proposed security system.


This dataset accompanies the manuscript "Lossless Compression of Plenoptic Camera Sensor Images and of Light Field View Arrays" by Ioan Tabus and Emanuele Palma, submitted to IEEE Access in June 2020. It contains the archives and the programs for reconstructing the light field datasets publicly used in two major challenges for light field compression.

Instructions: 

We propose a codec for lossless compression of plenoptic camera sensor images and then embed the proposed codec into a full light field array codec, which encodes the input sensor data and makes use of specific plenoptic camera meta-information to create lossless archives of light field view arrays.

The sensor image codec takes the input lenslet image and splits it into rectangular patches, each patch corresponding to a microlens image. The codec exploits the correlation between neighboring patches using a patch-by-patch prediction mechanism, where each pixel of a patch has its own sparse predictor, designed to use only the relevant pixels from its neighboring patch. An intra-patch prediction mask is additionally used in the sparse predictor design. The patches are labeled into M classes, according to several possible mechanisms, and one sparse design is performed for each (class label, patch pixel) pair. A relevant context selection mirrors the selection of relevant pixels, providing the arithmetic coder with skewed coding distributions at each context.

Finally, we embed the proposed sensor image codec into a codec for the light field array of views: a generative mechanism that starts by encoding the sensor image, or a devignetted and debayered version of it, then includes additional meta-information from the plenoptic camera, and finally creates a lossless archive of the light field array of views. We exemplify the performance on two databases that have been used extensively in the light field lossless compression literature, showing superior results in both cases.
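The codec's first step, splitting the lenslet image into one rectangular patch per microlens, can be sketched with array reshaping. The 8x8 patch size below is illustrative only; the real patch geometry depends on the camera's microlens pitch, and this sketch is not the authors' implementation.

```python
import numpy as np

def split_into_patches(lenslet, patch_h, patch_w):
    """Split a 2-D sensor image into a grid of rectangular patches.

    Returns an array of shape (rows, cols, patch_h, patch_w), one
    patch per (hypothetical) microlens position.
    """
    H, W = lenslet.shape
    rows, cols = H // patch_h, W // patch_w
    # Crop to a whole number of patches, then regroup axes
    return (lenslet[: rows * patch_h, : cols * patch_w]
            .reshape(rows, patch_h, cols, patch_w)
            .swapaxes(1, 2))

# Toy 32x32 "sensor image" with distinct pixel values
img = np.arange(32 * 32).reshape(32, 32)
patches = split_into_patches(img, 8, 8)
print(patches.shape)  # (4, 4, 8, 8)
```

Each patch can then be predicted from its already-decoded neighbor patch, which is where the per-pixel sparse predictors described above come in.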


To improve the reproducibility of our paper, we upload the experimental data and evaluation resources.


This is a large dataset of tweets related to COVID-19. It contains 226,668 unique tweet IDs spanning December 2019 to May 2020. The keywords used to crawl the tweets were 'corona', 'covid', 'sarscov2', 'covid19', and 'coronavirus'. To obtain the other 33 fields of data, send an email to "avishekgarain@gmail.com". Twitter does not allow public sharing of other details related to tweet data (texts, etc.), so they cannot be uploaded here.

Instructions: 

Read the documentation carefully and use the Python code snippet provided with the dataset to load the data.
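The dataset's own loading snippet is not reproduced here. As a stand-in, a minimal sketch of reading a one-column tweet-ID file, ready for hydration against the Twitter API, might look as follows; the column name `tweet_id` is an assumption about the file layout.

```python
import csv
import io

# Hypothetical one-column tweet-ID file; the real layout is
# described in the dataset's documentation.
tweet_ids_csv = io.StringIO(
    "tweet_id\n"
    "1234567890123456789\n"
    "1234567890123456790\n"
)

# Keep IDs as strings: 64-bit tweet IDs can lose precision if a
# loader coerces them to floats.
ids = [row["tweet_id"] for row in csv.DictReader(tweet_ids_csv)]
print(len(ids), ids[0])
```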

