Artificial Intelligence

The Dataset

We introduce a novel large-scale dataset for semi-supervised semantic segmentation in Earth Observation: the MiniFrance suite.


We introduce a new database of voice recordings with the goal of supporting research on vulnerabilities and protection of voice-controlled systems (VCSs). In contrast to prior efforts, the proposed database contains both genuine voice commands and replayed recordings of such commands, collected in realistic VCS usage scenarios and using modern voice assistant development kits.


Invasive lobular carcinoma (ILC) is the second most prevalent histologic subtype of invasive breast cancer. Here, we comprehensively profiled 817 breast tumors, including 127 ILC, 490 ductal (IDC), and 88 mixed IDC/ILC. Besides E-cadherin loss, the best-known ILC genetic hallmark, we identified mutations targeting PTEN, TBX3, and FOXA1 as ILC-enriched features. PTEN loss was associated with increased AKT phosphorylation, which was highest in ILC among all breast cancer subtypes. Spatially clustered FOXA1 mutations correlated with increased FOXA1 expression and activity.


This is a large-scale Chinese hotel review dataset collected by Tan Songbo. The corpus comprises 10,000 reviews, automatically collected and organized from


This dataset was created from all Landsat-8 images of South America in 2018. More than 31 thousand images (15 TB of data) were processed, and active fire pixels were found in approximately half of them. The Landsat-8 sensor has a spatial resolution of 30 meters (plus one 15 m panchromatic band), a radiometric resolution of 16 bits, and a temporal resolution (revisit time) of 16 days. The images in our dataset are in TIFF (GeoTIFF) format with 10 bands (excluding the 15 m panchromatic band).


Spoken Indian Language Identification Database

(9 languages, 8 different utterance lengths)


  1. Assamese 
  2. Bengali 
  3. Gujarati 
  4. Hindi 
  5. Kannada 
  6. Malayalam 
  7. Marathi 
  8. Tamil 
  9. Telugu


  1. 30 sec
  2. 10 sec
  3. 5 sec
  4. 3 sec
  5. 1 sec
  6. 0.5 sec
  7. 0.2 sec
  8. 0.1 sec
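The eight utterance lengths above translate into fixed sample counts once a sampling rate is chosen. The sketch below illustrates this; the 8 kHz rate and helper name are assumptions for illustration only, not documented properties of the database.

```python
# Samples per utterance length, assuming (for illustration only; the
# database's actual sampling rate is not stated above) 8 kHz audio.
DURATIONS_SEC = (30, 10, 5, 3, 1, 0.5, 0.2, 0.1)
SAMPLE_RATE_HZ = 8000  # assumed rate, not from the dataset description

def samples_for(duration_sec, rate_hz=SAMPLE_RATE_HZ):
    """Number of samples needed for a clip of the given duration."""
    return int(round(duration_sec * rate_hz))

for d in DURATIONS_SEC:
    print(d, "s ->", samples_for(d), "samples")
```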




We present GeoCoV19, a large-scale Twitter dataset related to the ongoing COVID-19 pandemic. The dataset was collected over a period of 90 days, from February 1 to May 1, 2020, and consists of more than 524 million multilingual tweets. As geolocation information is essential for many tasks, such as disease tracking and surveillance, we employed a gazetteer-based approach to extract toponyms from user locations and tweet content, deriving geolocation information from Nominatim (OpenStreetMap) data at different granularity levels. In terms of geographical coverage, the dataset spans 218 countries and 47K cities worldwide. The tweets in the dataset come from more than 43 million Twitter users, including around 209K verified accounts, who posted in 62 different languages.
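A minimal sketch of the gazetteer-based toponym extraction described above. The toy gazetteer and function name are illustrative assumptions; the actual GeoCoV19 pipeline resolves matches against full Nominatim (OpenStreetMap) data at several granularity levels.

```python
# Illustrative sketch of gazetteer-based toponym extraction. The tiny
# in-memory gazetteer stands in for Nominatim (OpenStreetMap) data, and
# all names here are assumptions, not the GeoCoV19 implementation.
import re

# Toy gazetteer: lowercase toponym -> (granularity level, canonical name)
GAZETTEER = {
    "paris": ("city", "Paris, France"),
    "france": ("country", "France"),
    "new york": ("city", "New York, USA"),
}

def extract_toponyms(text):
    """Return gazetteer matches found in free text (user location or tweet)."""
    text = text.lower()
    hits = []
    for toponym, (level, canonical) in GAZETTEER.items():
        # Word-boundary match so "paris" does not fire inside "comparison".
        if re.search(r"\b" + re.escape(toponym) + r"\b", text):
            hits.append((toponym, level, canonical))
    return hits

print(extract_toponyms("Stuck at home in Paris, France during lockdown"))
```

In practice each match would then be geocoded to coordinates, so one tweet can yield locations at multiple granularity levels (city and country), as the dataset description notes.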


This is a dataset of Finite Difference Time Domain (FDTD) simulation results for 13 defective crystals and one non-defective crystal. There are four fields in the dataset: Real, Img, Int, and Attribute. Real holds the real part of the simulated result, Img the imaginary part, and Int the intensity, all in superimposed form. Attribute denotes the label of the simulated crystal: label 0 is the non-defective crystal, and labels 1 through 13 are assigned to the 13 defective crystals whose simulations were studied.
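If the Int field follows the usual FDTD convention of being the squared magnitude of the complex field, it can be recomputed from Real and Img as below. This relation is our assumption about the dataset's convention, not something stated in the description above.

```python
# Sketch of the conventional relation between the three field columns,
# ASSUMING (our assumption, not stated in the dataset description) that
# Int is the squared magnitude of the complex field Real + j*Img.
def intensity(real, img):
    """Intensity as |real + j*img|**2 = real**2 + img**2."""
    return real * real + img * img

print(intensity(3.0, 4.0))  # -> 25.0
```

A consistency check of this form is a cheap way to verify how the Int column was derived before training on it.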


This is a large dataset of tweets related to COVID-19. It contains 226668 unique tweet IDs spanning December 2019 to May 2020. The keywords used to crawl the tweets were 'corona', 'covid', 'sarscov2', 'covid19', and 'coronavirus'. To obtain the other 33 fields of data, drop a mail at "". Twitter does not permit public sharing of tweet details other than tweet IDs (texts, etc.), so they cannot be uploaded here.
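The keyword-based selection behind this crawl (and the Bengali crawl below) can be sketched as a simple matcher. The helper name is illustrative; actual collection would track these keywords via the Twitter API rather than filtering text after the fact.

```python
# Sketch of the keyword filter behind the crawl described above. The
# helper name is illustrative; real collection tracks these keywords
# through the Twitter API rather than matching downloaded text.
KEYWORDS = ("corona", "covid", "sarscov2", "covid19", "coronavirus")

def matches_crawl_keywords(text):
    """True if the tweet text contains any of the crawl keywords."""
    text = text.lower()
    return any(kw in text for kw in KEYWORDS)

print(matches_crawl_keywords("New COVID19 case counts released today"))  # -> True
print(matches_crawl_keywords("Nice weather today"))                      # -> False
```

Note that plain substring matching makes some keywords redundant ('covid19' and 'coronavirus' already contain 'covid' and 'corona'); listing all five mirrors the crawl configuration rather than a minimal set.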


This is a large dataset of Bengali tweets related to COVID-19. It contains 36117 unique tweet IDs spanning December 2019 to May 2020. The keywords used to crawl the tweets were 'corona', 'covid', 'sarscov2', 'covid19', and 'coronavirus'. To obtain the other 33 fields of data, drop a mail at "". A code snippet is given in the Documentation file. Sharing Twitter data other than tweet IDs publicly violates Twitter's regulations.