Aspect Sentiment Triplet Extraction (ASTE) is a subtask of Aspect-Based Sentiment Analysis (ABSA). It aims to extract aspect-opinion pairs from a sentence and identify the sentiment polarity associated with them. For instance, given the sentence "Large rooms and great breakfast", ASTE outputs the triplets T = {(rooms, large, positive), (breakfast, great, positive)}. Although several approaches to ABSA have recently been proposed, those for Portuguese have been mostly limited to extracting only aspects without addressing the ASTE task.


The time-to-market pressure and the continuously growing complexity of hardware designs have promoted the globalization of the Integrated Circuit (IC) supply chain. However, such globalization also poses various security threats in each phase of the IC supply chain. Although the advancements of Machine Learning (ML) have pushed the frontier of hardware security, most conventional ML-based methods can only achieve the desired performance by manually finding a robust feature representation for circuits, which are non-Euclidean data. As a result, modeling these circuits using graph learning to imp


This is a dataset of client-server Round Trip Time (RTT) delays from an actual cloud gaming tournament run on the infrastructure of the cloud gaming company Swarmio Inc. The dataset can be used for designing algorithms and tuning models for user-server allocation and server selection. To collect the dataset, tournament players were connected to Swarmio servers, and delay measurements were taken in real time under actual networking conditions.


Main dataset

For the main dataset, the 189 players and the 9 servers were distributed across 4 regions: North America, South America, Europe, and East Asia. The 9 servers were located in the following cities, listed with the acronyms used in the dataset:

  1. Santa Clara (nasc),
  2. Chicago (nach), 
  3. Dallas (nada),
  4. Toronto (nato),
  5. Brazil (sabr),
  6. London (uk), 
  7. Amsterdam (nl), 
  8. Hong Kong (hk), 
  9. Singapore (sg).
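For convenience, the server acronyms above can be mapped to their cities and regions in code. The following is a minimal sketch; the region assignments are inferred from the list above:

```python
# Mapping of server acronyms (as used in the dataset) to (city, region).
# Region assignments are inferred from the four regions listed above.
SERVERS = {
    "nasc": ("Santa Clara", "North America"),
    "nach": ("Chicago", "North America"),
    "nada": ("Dallas", "North America"),
    "nato": ("Toronto", "North America"),
    "sabr": ("Brazil", "South America"),
    "uk": ("London", "Europe"),
    "nl": ("Amsterdam", "Europe"),
    "hk": ("Hong Kong", "East Asia"),
    "sg": ("Singapore", "East Asia"),
}

city, region = SERVERS["nato"]  # → ("Toronto", "North America")
```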

Each of the 189 players was able to connect to each of the 9 servers. The following data are recorded for each player:

  1. User Identifier (in the field: user_id)
  2. Time of access (in the field: timestamp)
  3. Longitude (in the field: longitude)
  4. Latitude (in the field: latitude)
  5. IP Address (in the field: address)
  6. Autonomous System organization, typically the Internet Service Provider (in the field: asn_org)

In the dataset file main-dataset.json, every record contains the network delay measurements from a particular player to each of the 9 servers. It should be noted that the URLs and the IP addresses of the servers are provided in a separate file main-dataset-servers.json.

The user ID is a unique identifier (a 32-hex-character UUID) generated for each player; for example, 5193b0e1-2412-4338-ac8d-6f519049aa77. The time of access is a Unix timestamp counted in milliseconds since January 1, 1970; for example, 1528484445170. Longitude and latitude are based on the geo-location of the player; for example, "longitude": "121.0409", "latitude": "14.5832". The Autonomous System organization is the ISP network in which the player is registered; for example, Rogers Communications Canada Inc., Philippine Long Distance Telephone Company, AT&T Services Inc., etc.
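Since the example timestamp has millisecond resolution, converting it to a calendar date requires dividing by 1000 first. A minimal sketch using the Python standard library:

```python
from datetime import datetime, timezone

# The example timestamp 1528484445170 is in milliseconds,
# so divide by 1000 before converting to a UTC datetime.
ts_ms = 1528484445170
dt = datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # → 2018-06-08 19:00:45
```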

After a player registered, a background JavaScript script ran in Swarmio's client software to obtain latency measurements to all of the servers. The script queried Swarmio's portal to retrieve a list of all servers, then cycled through each server and measured the RTT latency. It then pushed the results back to Swarmio's central server for storage.

Each measurement consisted of sending 11 packets from the player to the server, and the following measurements were obtained (all in ms):

  1. Median latency/delay (in the field: latency)
  2. Delay jitter (in the field: jitter)
  3. Minimum obtained delay (in the field: min)
  4. Maximum obtained delay (in the field: max)
  5. Average obtained delay (in the field: avr)
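The five fields above can be derived from the 11 per-packet RTT samples. The sketch below shows one way to compute them; note that the dataset does not state its exact jitter definition, so the mean absolute difference between consecutive samples (a common convention) is assumed here:

```python
import statistics

def rtt_stats(samples):
    """Summarize a list of RTT samples (in ms) into the dataset's fields.

    The jitter definition is an assumption (mean absolute difference
    between consecutive samples); the dataset does not specify it.
    """
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return {
        "latency": statistics.median(samples),  # median delay
        "jitter": statistics.mean(diffs),       # assumed definition
        "min": min(samples),
        "max": max(samples),
        "avr": statistics.mean(samples),
    }

# Example with 11 hypothetical RTT samples (ms):
stats = rtt_stats([42, 40, 45, 41, 43, 44, 40, 46, 42, 41, 43])
```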

It should be noted that out of the 9 servers, only the first server in each record (“nl”) was used for testing the connection, which is indicated by the field “testing” having the value “1”. The value of “stats” for this server therefore contains no measurements.
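Putting the field descriptions together, a record can be parsed while skipping the testing server. The record below is a hypothetical miniature whose exact nesting is an assumption; check main-dataset.json for the real structure:

```python
import json

# Hypothetical record mimicking the fields described above;
# the exact nesting in main-dataset.json may differ.
record_json = """{
  "user_id": "5193b0e1-2412-4338-ac8d-6f519049aa77",
  "timestamp": 1528484445170,
  "longitude": "121.0409",
  "latitude": "14.5832",
  "asn_org": "AT&T Services Inc.",
  "servers": [
    {"name": "nl", "testing": "1", "stats": {}},
    {"name": "nato", "testing": "0",
     "stats": {"latency": 42, "jitter": 1.5, "min": 40, "max": 46, "avr": 42.5}}
  ]
}"""

record = json.loads(record_json)
# Keep only servers with real measurements
# (the "nl" testing server has an empty "stats").
measured = [s for s in record["servers"] if s["testing"] != "1"]
latencies = {s["name"]: s["stats"]["latency"] for s in measured}
```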

Secondary dataset

For the secondary dataset, we set up 11 different servers: 1 server owned by Swarmio Media in Toronto and 10 servers using the AWS cloud in the following locations:

  1. North Virginia,
  2. Ohio,
  3. Northern California,
  4. Oregon,
  5. Montreal,
  6. Brazil,
  7. Singapore,
  8. Mumbai,
  9. Sydney,
  10. Ireland.

The same script as in the main dataset was run in the Swarmio client software of 67 players. This time, each server sent 8 packets to each player, and only the average delay was recorded and stored.

The secondary dataset consists of the JSON file secondary-dataset.json, where the keys are the names of the servers and the values contain a list of the delays to the 67 players. The players' IPs are provided, in order, in a separate file secondary-dataset-users.json. It is also possible to reuse the code that was used to retrieve the measurements in the file  . The IP addresses of the 11 servers can also be accessed in the file secondary-dataset-servers.json, where the key of each record is the name of the server, for example “N Virginia”, and the value is the IP address of the server.

In contrast to the main dataset, the secondary dataset contains only the delays between the servers and the players, whereas the main dataset includes additional information such as the geo-location and the ISP. This makes the secondary dataset more suitable for testing and verification, since each server carries a single label with only 2 features (IP address and city name), while the main dataset contains more features and measurements suitable for training and inference.
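Because the secondary dataset maps each server name to a delay list, it can be turned directly into a servers-by-players matrix, e.g. to pick the best server per player. The sketch below uses a hypothetical miniature of the file; the real secondary-dataset.json has 11 servers and 67 players:

```python
# Hypothetical miniature of secondary-dataset.json: server name -> list of
# average delays (ms) to the players, in the same order as
# secondary-dataset-users.json. The real file has 11 servers and 67 players.
dataset = {
    "Toronto": [20.1, 35.2, 90.0],
    "N Virginia": [25.4, 30.9, 85.3],
    "Ireland": [95.0, 88.7, 15.2],
}

servers = sorted(dataset)
num_players = len(next(iter(dataset.values())))

# For each player (by index), pick the server with the lowest average delay.
best = [min(servers, key=lambda s: dataset[s][p]) for p in range(num_players)]
```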


As science and technology evolve, the environment is affected daily. This causes major environmental issues such as global warming, ozone layer depletion, and natural resource depletion. These are measured and regulated by local bodies, but the data provided by local bodies are average values over a large area and may be inaccurate for a small sector or isolated zone. However, techniques such as Wireless Sensor Networks (WSNs) and the Internet of Things (IoT), which measure and upload real-time data to a cloud server, can overcome this limitation.


The "RetroRevMatchEvalICIP16" dataset provides a retrospective reviewer recommendation dataset and evaluation for IEEE ICIP 2016. The methodology via which the recommendations were obtained and the evaluation was performed is described in the associated paper.

Y. Zhao, A. Anand, and G. Sharma, “Reviewer recommendations using document vector embeddings and a publisher database: Implementation and evaluation,” IEEE Access, vol. 10, pp. 21798–21811, 2022.


Download the zip file and unzip it to extract the individual files. See the file for details on what is included in the individual files.



The dataset was generated by performing different Man-in-the-Middle (MiTM) attacks on the synthetic electric grid in the RESLab testbed at Texas A&M University, US. The testbed primarily consists of a dynamic power system simulator (PowerWorld Dynamic Studio), a network emulator (CORE), the Snort IDS, an OpenDNP3 master, and Elasticsearch's Packetbeat index. The dataset includes raw files, which security enthusiasts can use to develop new features, and processed files, which can be used to train an IDS on our feature space.


Over the last decades, Earth Observation has brought a wealth of new perspectives, from geosciences to human activity monitoring. As more data became available, artificial intelligence techniques led to very successful results in understanding remote sensing data. Moreover, acquisition techniques such as Synthetic Aperture Radar (SAR) can be used for problems that cannot be tackled through optical images alone. This is the case for weather-related disasters such as floods or hurricanes, which are generally associated with large cloud cover.


The dataset is composed of 336 sequences corresponding to areas in West and South-East Africa, the Middle East, and Australia. Each time series is located in a folder named with the sequence ID (0001... 0336).

Two JSON files, S1list.json and S2list.json, are provided to describe the Sentinel-1 and Sentinel-2 images, respectively. The keys are the total number of images in the sequence, the folder name, the geography of the observed area, and the description of each image in the series. The SAR image descriptions also contain the URLs to download the images. Each image is described by its acquisition date, its label (FLOODING: boolean), a boolean (FULL-DATA-COVERAGE: boolean) indicating whether the area is fully or partially imaged, and the file prefix. For SAR images, the orbit (ASCENDING or DESCENDING) is also indicated.
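The per-image metadata can be filtered directly, for instance to keep only fully covered flood acquisitions for training a flood detector. The fragment below is hypothetical: the field names follow the description above, but the exact key spelling and nesting should be checked against S1list.json:

```python
# Hypothetical fragment mirroring the per-image fields described above;
# exact key names and nesting should be checked against S1list.json.
images = [
    {"date": "2018-02-01", "FLOODING": True, "FULL-DATA-COVERAGE": True,
     "orbit": "ASCENDING", "prefix": "S1_0001_a"},
    {"date": "2018-02-13", "FLOODING": False, "FULL-DATA-COVERAGE": True,
     "orbit": "DESCENDING", "prefix": "S1_0001_b"},
    {"date": "2018-02-25", "FLOODING": True, "FULL-DATA-COVERAGE": False,
     "orbit": "ASCENDING", "prefix": "S1_0001_c"},
]

# Keep fully covered flood acquisitions only.
flooded_full = [im["prefix"] for im in images
                if im["FLOODING"] and im["FULL-DATA-COVERAGE"]]
```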

The Sentinel-2 images were obtained from the Mediaeval 2019 Multimedia Satellite Task [1] and are provided with Level 2A atmospheric correction. For one acquisition, there are 12 single-channel raster images provided corresponding to the different spectral bands.

The Sentinel-1 images were added to the dataset. The images are provided with radiometric calibration and Range-Doppler terrain correction based on the SRTM digital elevation model. For one acquisition, two raster images are available, corresponding to the polarimetry channels VV and VH.

The original dataset was split into 269 sequences for training and 68 sequences for testing. Here, all sequences are in the same folder.


To use this dataset, please cite the following papers:

C. Rambour, N. Audebert, E. Koeniguer, B. Le Saux, and M. Datcu, “Flood Detection in Time Series of Optical and SAR Images,” ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2020, pp. 1343–1346.

B. Bischke, P. Helber, C. Schulze, V. Srinivasan, A. Dengel, and D. Borth, “The Multimedia Satellite Task at MediaEval 2019,” in Proc. of the MediaEval 2019 Workshop, 2019.


This dataset contains modified Copernicus Sentinel data [2018-2019], processed by ESA.

[1] B. Bischke, P. Helber, C. Schulze, V. Srinivasan, A. Dengel, and D. Borth, “The Multimedia Satellite Task at MediaEval 2019,” in Proc. of the MediaEval 2019 Workshop, 2019.


We introduce HUMAN4D, a large and multimodal 4D dataset that contains a variety of human activities simultaneously captured by a professional marker-based MoCap system, a volumetric capture system, and an audio recording system. By capturing 2 female and 2 male professional actors performing various full-body movements and expressions, HUMAN4D provides a diverse set of motions and poses encountered as part of single- and multi-person daily, physical, and social activities (jumping, dancing, etc.), along with multi-RGBD (mRGBD), volumetric, and audio data. Despite the existence of multi-view color datasets c


* At this moment, the paper for this dataset is under review. The dataset will be fully published along with the publication of the paper; in the meantime, more parts of the dataset will be uploaded.

The dataset includes multi-view RGBD, 3D/2D pose, volumetric (mesh/point-cloud/3D character) and audio data along with metadata for spatiotemporal alignment.

The full dataset is split per subject, per activity, and per modality.

There are also two benchmarking subsets: H4D1 for single-person sequences and H4D2 for two-person sequences.

The formats are:

  • mRGBD: *.png
  • 3D/2D poses: *.npy
  • volumetric (mesh/point-cloud): *.ply
  • 3D character: *.fbx
  • metadata: *.txt, *.json
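Given the formats above, the pose arrays (*.npy) can be loaded with NumPy. The sketch below does a save/load round trip on a synthetic array; the joint count and array layout are assumptions to be checked against the dataset metadata:

```python
import os
import tempfile

import numpy as np

# Hypothetical pose array: 2 frames x 25 joints x 3 coordinates.
# The joint count and layout are assumptions; check the dataset metadata.
poses = np.zeros((2, 25, 3), dtype=np.float32)

# The 3D/2D poses ship as *.npy files, so loading reduces to np.load:
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "pose_example.npy")
    np.save(path, poses)
    loaded = np.load(path)

print(loaded.shape)  # → (2, 25, 3)
```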