An image dataset covering five weather conditions (cloudy, sunny, foggy, rainy, and snowy) was constructed.

This dataset, called FWID, includes 4,000 images for each weather category, for a total of 20,000 images.


The orchid flower dataset was collected in the northern part of Thailand. It contains Thai native orchid flowers, with at least 20 samples per class. The dataset covers 52 species, and the visual characteristics of the flowers vary in shape, color, texture, and size, as well as in other parts of the orchid plant such as leaves, inflorescences, roots, and surroundings. The images were captured with a variety of devices, including digital cameras and mobile phones. In total, the dataset contains 3,559 images across 52 categories.


Download links:

Test -

Train -


This dataset is only for research purposes.


Please remember to cite the paper correctly: "Orchids Classification Using Spatial Transformer Network with Adaptive Scaling"





@inproceedings{sarachai2019orchids,
  title={Orchids Classification Using Spatial Transformer Network with Adaptive Scaling},
  author={Sarachai, Watcharin and Bootkrajang, Jakramate and Chaijaruwanich, Jeerayut and Somhom, Samerkae},
  booktitle={International Conference on Intelligent Data Engineering and Automated Learning – IDEAL 2019},
  year={2019},
  organization={Springer International Publishing}
}





Features Extracted from BraTS 2012-2013


This dataset contains the comparison results of DVIO, VINS-Mono, and ROVIO on the public EuRoC dataset.


This dataset serves as a benchmark for machines to automatically recognize handwritten Assamese digits (numerals) by extracting useful structural features. The Assamese language comprises 10 digits, from 0 to 9. We collected a total of 516 handwritten digits from 52 native Assamese people of varying age (12-86 years), gender, educational background, etc. The digits were captured in .jpeg format using a paint mobile application developed by us, which automatically saves the images to the phone's internal storage.


An accurate and reliable image-based quantification system for blueberries may be useful for automating harvest management. It may also serve as the basis for controlling robotic harvesting systems. Quantifying blueberries from images is challenging due to occlusions, differences in size, illumination conditions, and the variable number of blueberries that can be present in an image. This paper proposes per-image and per-batch quantification of blueberries in the wild, using high-definition images captured with a mobile device.


The Contest: Goals and Organisation

 The 2019 Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (GRSS), the Johns Hopkins University (JHU), and the Intelligence Advanced Research Projects Activity (IARPA), aimed to promote research in semantic 3D reconstruction and stereo using machine intelligence and deep learning applied to satellite images.


Efforts to prevent marine biofouling on vessels are demanding. Developing a system to detect marine fouling on vessels at an early stage of fouling is a viable solution. However, there is a lack of image databases of fouling for developing image processing and machine learning algorithms.


The Contest: Goals and Organization


The 2017 IEEE GRSS Data Fusion Contest, organized by the IEEE GRSS Image Analysis and Data Fusion Technical Committee, aimed at promoting progress on fusion and analysis methodologies for multisource remote sensing data.





The 2017 Data Fusion Contest consists of a classification benchmark. The task is classification of land use (more precisely, Local Climate Zones, or LCZ) in various urban environments. Several cities around the world have been selected to test both LCZ prediction and domain adaptation. Input data are multi-temporal, multi-source, and multi-modal (images and semantic layers). Five cities are considered for training: Berlin, Hong Kong, Paris, Rome, and Sao Paulo.


Each city folder contains:

  • grid/        sampling grid
  • landsat_8/    Landsat 8 images at various dates (resampled at 100 m resolution, split into selected bands)
  • lcz/        Local Climate Zones as rasters (see below)
  • osm_raster/    rasters with areas (buildings, land use, water) derived from OpenStreetMap layers
  • osm_vector/    vector data with OpenStreetMap zones and lines
  • sentinel_2/    Sentinel-2 image (resampled at 100 m resolution, split into selected bands)
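A quick way to sanity-check a downloaded city folder is to compare its subdirectories against this expected layout. A minimal Python sketch (the `missing_subfolders` helper is illustrative, not part of the contest package):

```python
from pathlib import Path

# Expected subfolders of each city directory, per the listing above.
EXPECTED = {"grid", "landsat_8", "lcz", "osm_raster", "osm_vector", "sentinel_2"}

def missing_subfolders(city_dir: Path) -> set:
    """Return the expected subfolders that are absent from a city directory."""
    present = {p.name for p in city_dir.iterdir() if p.is_dir()}
    return EXPECTED - present
```

An empty result means the folder matches the documented layout.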


Local Climate Zones

The lcz/ folder contains:

  • `<city>_lcz_GT.tif`: the ground truth for Local Climate Zones, as a raster. It is single-band, in byte format. Pixel values range from 1 to 17 (the maximum number of classes); unclassified pixels have value 0.
  • `<city>_lcz_col.tif`: a color, georeferenced LCZ map, for visualization convenience only.

Class numbers are the following:

10 urban LCZs corresponding to various built types:

  • 1. Compact high-rise;
  • 2. Compact midrise;
  • 3. Compact low-rise;
  • 4. Open high-rise;
  • 5. Open midrise;
  • 6. Open low-rise;
  • 7. Lightweight low-rise;
  • 8. Large low-rise;
  • 9. Sparsely built;
  • 10. Heavy industry.

7 rural LCZs corresponding to various land cover types:

  • 11. Dense trees;
  • 12. Scattered trees;
  • 13. Bush and scrub;
  • 14. Low plants;
  • 15. Bare rock or paved;
  • 16. Bare soil or sand;
  • 17. Water
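The pixel-value encoding above can be captured in a small lookup table. A minimal Python sketch (the names `LCZ_CLASSES`, `class_name`, and `is_urban` are illustrative, not part of the contest package):

```python
# LCZ ground-truth pixel values (as stored in <city>_lcz_GT.tif) to class names.
# 0 marks unclassified pixels; 1-10 are urban (built) types, 11-17 rural (land cover).
LCZ_CLASSES = {
    0: "Unclassified",
    1: "Compact high-rise",
    2: "Compact midrise",
    3: "Compact low-rise",
    4: "Open high-rise",
    5: "Open midrise",
    6: "Open low-rise",
    7: "Lightweight low-rise",
    8: "Large low-rise",
    9: "Sparsely built",
    10: "Heavy industry",
    11: "Dense trees",
    12: "Scattered trees",
    13: "Bush and scrub",
    14: "Low plants",
    15: "Bare rock or paved",
    16: "Bare soil or sand",
    17: "Water",
}

def class_name(value: int) -> str:
    """Return the LCZ class name for a ground-truth pixel value."""
    return LCZ_CLASSES[value]

def is_urban(value: int) -> bool:
    """Urban (built) LCZs are classes 1-10; rural land-cover LCZs are 11-17."""
    return 1 <= value <= 10
```

Reading the raster itself would additionally require a GeoTIFF library such as GDAL or rasterio; the table above only fixes the value-to-name mapping.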



More info:




The 2017 IEEE GRSS Data Fusion Contest is organized by the Image Analysis and Data Fusion Technical Committee of IEEE GRSS. Landsat 8 data are available from the U.S. Geological Survey. Data © OpenStreetMap contributors, available under the Open Database Licence. Original Copernicus Sentinel Data 2016 are available from the European Space Agency. The Contest is organized in collaboration with the WUDAPT and GeoWIKI initiatives. The IADF TC chairs would like to thank the organizers and the IEEE GRSS for continuously supporting the annual Data Fusion Contest through funding and resources.


Iris recognition has been an interesting subject of research over the last two decades and has raised many challenges. One new and interesting challenge in iris studies is gender recognition from iris images. Gender classification can be applied to reduce the processing time of the identification process. It can also be used in applications such as access control systems and gender-based marketing. To the best of our knowledge, only a small number of studies have been conducted on gender recognition through analysis of iris images.