Dataset of GPS, inertial and WiFi data collected during road vehicle trips in the district of Porto, Portugal. It contains 40 trip datasets collected with a smartphone fixed on the windshield or dashboard inside the road vehicle. The dataset was collected and used to develop a proof-of-concept for "MagLand: Magnetic Landmarks for Road Vehicle Localization", an approach that leverages magnetic anomalies created by existing road infrastructure as landmarks to support current vehicle localization systems (e.g. GNSS, dead reckoning).
The dataset is organized in folders by date. Inside each date folder, the data is separated into folders by collection app or equipment, and inside each app/equipment folder it is separated by sensor. For each sensor there is one time series per trip. For details about the trips, including vehicles, smartphones, apps, and data collection dates, please read "README.txt".
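The date/app/sensor hierarchy described above can be walked programmatically. The sketch below is only an illustration of that layout; the actual folder and file names are documented in "README.txt", not assumed here.

```python
# Sketch of walking the date -> app/equipment -> sensor hierarchy described
# above. Folder names in any example usage are illustrative assumptions,
# not actual dataset contents -- see README.txt for the real layout.
from pathlib import Path

def list_trip_series(root):
    """Yield (date, app_or_equipment, sensor, file) for every per-trip time series."""
    root = Path(root)
    for date_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for app_dir in sorted(p for p in date_dir.iterdir() if p.is_dir()):
            for sensor_dir in sorted(p for p in app_dir.iterdir() if p.is_dir()):
                for f in sorted(sensor_dir.iterdir()):
                    if f.is_file():
                        yield date_dir.name, app_dir.name, sensor_dir.name, f
```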
This dataset features cooking activities with recipes and gestures labeled. The data was collected using two smartphones (right arm and left hip), two smartwatches (both wrists), and one motion capture system with 29 markers. Four subjects prepared 3 recipes (sandwich, fruit salad, cereal) 5 times each. The subjects followed a script for each recipe but acted as naturally as possible.
You can use our tutorials to get started: https://abc-research.github.io/cook2020/tutorial/
The file 'GPS_P2.zip' is the dataset collected from the GNSS sensor of the "Xinda" autonomous vehicle in the Connected Autonomous Vehicles Test Fields (the CAVs Test Fields) at the Weishui Campus, Chang'an University.
The file 'fault.zip' contains simulated faults injected into the healthy data, in '.mat' format, where X_abrupt, X_noise and X_drift contain the healthy data with abrupt faults, noise, and long-run drift added, respectively.
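The three fault types could be reproduced on any healthy signal along these lines. This is only an illustrative sketch: the onset index, bias, noise level and drift slope are assumptions, not the values used to generate 'fault.zip'.

```python
# Illustrative sketch of injecting the three fault types into a healthy
# series. Magnitudes, onset and seed are assumptions, not dataset values.
import random

def inject_faults(x, onset, bias=5.0, sigma=0.5, slope=0.01, seed=0):
    """Return (x_abrupt, x_noise, x_drift) versions of the healthy series x.

    After index `onset`:
      - abrupt: a constant bias is added,
      - noise:  zero-mean Gaussian noise is added,
      - drift:  a slowly growing ramp is added (long-run drift).
    """
    rng = random.Random(seed)
    x_abrupt = [v + (bias if i >= onset else 0.0) for i, v in enumerate(x)]
    x_noise = [v + (rng.gauss(0.0, sigma) if i >= onset else 0.0)
               for i, v in enumerate(x)]
    x_drift = [v + (slope * (i - onset) if i >= onset else 0.0)
               for i, v in enumerate(x)]
    return x_abrupt, x_noise, x_drift
```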
The development of an electronic nose (e-nose) for a rapid, simple, and low-cost meat assessment system has become a growing concern of researchers in recent years. Hence, we provide time-series datasets recorded from an e-nose during a beef quality monitoring experiment. The dataset originates from 12 types of beef cuts, including round (shank), top sirloin, tenderloin, flap meat (flank), striploin (shortloin), brisket, clod/chuck, skirt meat (plate), inside/outside, rib eye, shin, and fat.
This data is related to the article “On the Spectral Quality of Time-Resolved CMOS SPAD-Based Raman Spectroscopy with High Fluorescence Backgrounds”, which has been submitted to the IEEE Sensors Journal.
The folder named “Fluorescence_to_Raman_ratio_(post-it_notes)” contains the data collected in the measurements where the effects of the fluorescence-to-Raman ratio on the spectral quality were studied. Please see the measurement procedures and results in sections III.B and IV.A of the article, respectively. The folder named “Recording_time_and_excitation_intensity_(oils)” contains the data collected in the measurements where the effects of the recording time and excitation intensity on the spectral quality were studied. Please see the measurement procedures and results in sections III.C and IV.B of the article, respectively.
The measurement data are stored in text files named “Data.txt”. Each data file has 8 columns and 256 rows: the columns represent the 8 time bins of the sensor, and the rows represent the 256 spectral columns of the line sensor. The number in each cell is the photon count at a specific time bin and spectral column, i.e. at a specific wavenumber. The text files named “Wavenumber_axis.txt” under the two main data folders contain the wavenumber values for each spectral column of the sensor for the different measurements. The files named “DCR_corresction_data.txt” contain the dark count correction data for the different measurements.
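Given the layout above, a "Data.txt" file can be read into a 256 × 8 table of photon counts. The sketch below assumes whitespace-separated numeric values, which is an assumption about the exact file format.

```python
# Minimal sketch of reading one "Data.txt" file into a 256 x 8 table of
# photon counts (rows = spectral columns, columns = time bins). Assumes
# whitespace-separated numbers -- an assumption about the exact format.

def read_spad_data(path):
    """Return a list of 256 rows, each with 8 photon counts."""
    counts = []
    with open(path) as f:
        for line in f:
            if line.strip():
                counts.append([float(v) for v in line.split()])
    if len(counts) != 256 or any(len(row) != 8 for row in counts):
        raise ValueError("expected 256 rows of 8 time bins")
    return counts
```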
We provide a large benchmark dataset consisting of about 3.5 million keystroke events, 57.1 million data points each for the accelerometer and gyroscope, and 1.7 million data points for swipes. Data was collected between April 2017 and June 2017 after the required IRB approval. We share data from 117 participants, each in a session lasting 2 to 2.5 hours, performing multiple activities such as typing (free and fixed text), gait (walking, upstairs and downstairs) and swiping while using a desktop, a phone and a tablet.
Detailed description of all data files is provided in the *BBMAS_README.pdf* file along with the dataset.
Amith K. Belman and Vir V. Phoha. 2020. Discriminative Power of Typing Features on Desktops, Tablets, and Phones for User Identification. ACM Trans. Priv. Secur. Volume 23, Issue 1, Article 4 (February 2020), 36 pages. DOI: https://doi.org/10.1145/3377404
Amith K. Belman, Li Wang, S. S. Iyengar, Pawel Sniatala, Robert Wright, Robert Dora, Jacob Baldwin, Zhanpeng Jin and Vir V. Phoha, "Insights from BB-MAS -- A Large Dataset for Typing, Gait and Swipes of the Same Person on Desktop, Tablet and Phone", arXiv:1912.02736 , 2019.
Amith K. Belman, Li Wang, Sundaraja S. Iyengar, Pawel Sniatala, Robert Wright, Robert Dora, Jacob Baldwin, Zhanpeng Jin, Vir V. Phoha, "SU-AIS BB-MAS (Syracuse University and Assured Information Security - Behavioral Biometrics Multi-device and multi-Activity data from Same users) Dataset", IEEE Dataport, 2019. [Online]. Available: http://dx.doi.org/10.21227/rpaz-0h66
This study was conducted in Mayaguez, Puerto Rico, covering an area of around 18 km², which was surveyed using the following classification of places:
· Main Avenues: Wide public ways with hospitals, vegetation, and buildings on either side
· Open Places: Mall parking lots and public plazas
· Streets & Roads: Dense residential and commercial areas on both sides
Vendor Equipment Description
KEYSIGHT® N9343C Handheld Spectrum Analyzer
***For CSV Files:
You can open the CSV files in Microsoft Excel or any other spreadsheet application. In these files you will find the following information in different columns:
- The information for each of the 20 scanned frequencies, given individually (center frequency, bandwidth, power in dBm).
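The per-frequency columns can also be read programmatically instead of in a spreadsheet. The sketch below is hypothetical: the header names used here are assumptions, so check the actual CSV files for the real column labels.

```python
# Hypothetical sketch of reading the per-frequency survey measurements.
# The column names below are assumptions, not the files' actual headers.
import csv

def read_survey_csv(path):
    """Return a list of dicts: center frequency (MHz), bandwidth (MHz), power (dBm)."""
    rows = []
    with open(path, newline="") as f:
        for rec in csv.DictReader(f):
            rows.append({
                "center_mhz": float(rec["center_frequency_mhz"]),
                "bandwidth_mhz": float(rec["bandwidth_mhz"]),
                "power_dbm": float(rec["power_dbm"]),
            })
    return rows
```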
***For KML Files:
You can simply open these interactive files in Google Earth Pro by clicking File, then Open, and selecting the desired KML file.
At this point, the interactive map should be centered on the surveyed points, showing several colored ellipses ranging from green for the weakest power level to red for the strongest. If you click on a point, you can see the complete information for that location.
The frequencies used in this study are shown below:
Survey Frequencies (Bandwidth 4% --> Patch Antenna)
Central Frequency (MHz) / Band Description
7200 .csv files, each containing a 10 kHz recording of a 1 ms long 100 Hz sound, recorded at 1 cm intervals over a 20 cm x 60 cm locating range on a table. 3600 files (3 at each of the 1200 different positions) were recorded without an obstacle between the loudspeaker and the microphone; the other 3600 RIR recordings are affected by the changes introduced by an object (a book). The OOLA is initially trained offline in batch mode on the first instance of the RIR recordings without the book. Then it learns online, in an incremental mode, how the book changes the RIR.
folder 'load and preprocess offline data': MATLAB source code and raw/working offline (no additional obstacle) data files
folder 'lvq and kmeans test': MATLAB source code to test and compare in-sample failure with and without LVQ
folder 'online data load and preprocess': MATLAB source code and raw/working online (additional obstacle) data files
folder 'OOL': MATLAB source code configurable for cases 1-4
folder 'OOL2': MATLAB source code for case 5
folder 'plots': plots and simulations
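The LVQ comparison mentioned in the 'lvq and kmeans test' folder relies on the standard LVQ1 prototype update. The Python sketch below illustrates that general technique only; it is not a translation of the dataset's MATLAB code, and the data, labels and learning rate are illustrative assumptions.

```python
# Minimal LVQ1 sketch (the dataset's own code is MATLAB): the nearest
# prototype is pulled toward a sample with a matching label and pushed
# away otherwise. All values here are illustrative, not dataset values.
def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update on the closest prototype; returns its index."""
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, x))
    k = min(range(len(prototypes)), key=lambda i: dist2(prototypes[i]))
    sign = 1.0 if labels[k] == y else -1.0
    prototypes[k] = [p + sign * lr * (xi - p) for p, xi in zip(prototypes[k], x)]
    return k
```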