This is a large dataset of tweets related to COVID-19. It contains 226,668 unique tweet IDs and spans December 2019 to May 2020. The keywords used to crawl the tweets were 'corona', 'covid', 'sarscov2', 'covid19', and 'coronavirus'. To obtain the other 33 fields of data, send an email to "avishekgarain@gmail.com". Twitter does not allow public sharing of other details of tweet data (texts, etc.), so they cannot be uploaded here.

Instructions: 

Read the documentation carefully and use the Python code snippet provided there to load the data.
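For illustration, here is a minimal Python sketch for reading the released tweet-ID file, assuming it is a plain CSV of IDs (the file name below is hypothetical; the actual file name and layout are described in the documentation):

import pandas as pd

# Read the tweet IDs as strings so long IDs are not mangled by numeric parsing.
ids = pd.read_csv("covid19_tweet_ids.csv", header=None, dtype=str)[0]
print(len(ids), "tweet IDs loaded")
# The IDs can then be hydrated with tools such as twarc or the Hydrator app.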

Categories:
2507 Views

This is a large dataset of Bengali tweets related to COVID-19. It contains 36,117 unique tweet IDs and spans December 2019 to May 2020. The keywords used to crawl the tweets were 'corona', 'covid', 'sarscov2', 'covid19', and 'coronavirus'. To obtain the other 33 fields of data, send an email to "avishekgarain@gmail.com". A code snippet is given in the documentation file. Publicly sharing Twitter data other than tweet IDs violates Twitter's regulations.

Instructions: 

The script to load the data is given in the documentation.

Categories:
882 Views

This is a large dataset of Spanish tweets related to COVID-19. It contains 18,958 unique tweet IDs and spans December 2019 to May 2020. The keywords used to crawl the tweets were 'corona', 'covid', 'sarscov2', 'covid19', and 'coronavirus'. To obtain the other 33 fields of data, send an email to "avishekgarain@gmail.com". A code snippet is given in the documentation file. Publicly sharing Twitter data other than tweet IDs violates Twitter's regulations.

Instructions: 

Use the provided Python code snippet to load the data.

Categories:
601 Views

The purpose is to describe the dynamics of the COVID-19 pandemic, accounting for mitigation measures, for the introduction or removal of quarantine, and for the effect of vaccination when and if introduced.

The methods include the derivation of the Pandemic Equation, which describes the mitigation measures via the evolution of the growth time constant and results in an asymmetric pandemic curve with a steeper rise than decline.

Instructions: 

The purpose is to describe the dynamics of the COVID-19 pandemic, accounting for mitigation measures, for the introduction or removal of quarantine, and for the effect of vaccination when and if introduced.

The methods include the derivation of the Pandemic Equation, which describes the mitigation measures via the evolution of the growth time constant and results in an asymmetric pandemic curve with a steeper rise than decline.

Results: The Pandemic Equation predicts how quarantine removal and business reopening lead to a spike in the pandemic curve. Effective vaccination reduces the new daily infections predicted by the Pandemic Equation to nearly zero. The pandemic curves in many localities have similar time dependencies but are shifted in time. The Pandemic Equation parameters extracted from well-advanced pandemic curves can be used to predict the pandemic evolution in localities where the pandemic is still in its initial stages.

Conclusion: Using multiple pandemic locations for parameter extraction allows for uncertainty quantification when predicting the pandemic evolution with the introduced Pandemic Equation.
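As a loose illustration only (not the authors' actual formulation), the asymmetric behaviour described above can be sketched in Python with a logistic-type growth model whose growth time constant increases over time; all parameter values below are assumptions:

import numpy as np

# Illustrative sketch only -- NOT the Pandemic Equation from the paper.
# Logistic-type growth dI/dt = I * (1 - I/N) / tau(t), with a growth time
# constant tau(t) that increases as mitigation measures take hold.
N = 1e6                        # assumed population scale
tau0, k = 5.0, 0.05            # assumed initial time constant (days) and mitigation rate
I, dt, days = 10.0, 0.1, 200.0
daily_new = []
for step in range(int(days / dt)):
    t = step * dt
    tau = tau0 * np.exp(k * t)          # mitigation slows the growth over time
    dI = I * (1.0 - I / N) / tau * dt
    I += dI
    daily_new.append(dI / dt)
# daily_new rises steeply and decays more slowly -- the asymmetric pandemic
# curve described above.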

 

Categories:
532 Views

Case and contact definitions are based on currently available information and are regularly revised as new information accumulates. Countries may need to adapt case definitions depending on their local epidemiological situation and other factors. All countries are encouraged to publish the definitions they use online and in regular situation reports, and to document periodic updates to definitions that may affect the interpretation of surveillance data.

Instructions: 

This is the First Few X cases and contacts (FFX) investigation protocol for coronavirus disease 2019 (COVID-19). It covers the identification and tracing of cases and their close contacts, either in the general population or restricted to closed settings (such as households, health-care settings, and schools). FFX is the primary investigation protocol to be initiated upon identification of the initial laboratory-confirmed cases of COVID-19 in a country.

Categories:
609 Views

This dataset consists of received signal strength (RSS) data measured from smartphones carried by two people.

Instructions: 

Two users stood at a set distance (d = {0.2:0.2:2, 3:5} m, i.e., 0.2 m to 2 m in 0.2 m steps and 3 m to 5 m) from each other. For each distance, the app scanned for incoming BLE signals for about 1 minute. The following information is logged: the true distance, the smartphone name, the MAC address of the BLE chipset, the packet payload, the RSS values, the time elapsed, and the timestamp. Two phones were used: 1) gryphonelab and 2) HTC One M9.

 

We consolidated and reorganized the data, applying a moving average to the raw RSS values (a minimal recomputation sketch follows the list below). The reorganized data have the following format:

  • device name,
  • time elapsed,
  • rss (raw RSS value),
  • mRSS10 (filtered RSS value with window size = 10),
  • mRSS100 (filtered RSS value with window size = 100),
  • distance, and
  • label
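A minimal Python sketch of the moving-average step described above; the file name and column names below are assumptions, not necessarily those used in the dataset:

import pandas as pd

# Load the reorganized data (assumed file name and column order).
df = pd.read_csv("ble_rss.csv",
                 names=["device", "elapsed", "rss",
                        "mRSS10", "mRSS100", "distance", "label"])

# Recompute the filtered RSS columns as rolling means over the raw RSS,
# per device, with the window sizes described above.
df["mRSS10_check"] = df.groupby("device")["rss"].transform(
    lambda s: s.rolling(window=10, min_periods=1).mean())
df["mRSS100_check"] = df.groupby("device")["rss"].transform(
    lambda s: s.rolling(window=100, min_periods=1).mean())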
Categories:
452 Views

Objectives: Worldwide efforts to protect front-line providers performing endotracheal intubation during the COVID-19 pandemic have led to innovative devices. The authors evaluated the aerosol containment effectiveness of a novel intubation aerosol containment system (IACS) compared with a recently promoted intubation box and with no protective barrier. Methods: In a simulation center at the authors' university, the IACS was compared with no protective barrier and with an intubation box.

Instructions: 

Download and play the video file.

Categories:
415 Views

This dataset includes COVID-19-related tweet messages written in Turkish that contain at least one of four keywords (Covid, Kovid, Corona, Korona). These keywords are commonly used to refer to the COVID-19 virus in Turkey. Tweet collection started on 11 March 2020, the day the first COVID-19 case was seen in Turkey.

The dataset currently contains 4.8 million tweets, each with 6 attributes, sent from 9 March 2020 until 6 May 2020.

The data file is in comma-separated values (CSV) format. It contains the following information (6 columns) for each tweet:

Instructions: 

The dataset currently contains 4.8 million tweets, each with 6 attributes, sent from 9 March 2020 until 6 May 2020.

The original CSV data file is zipped with WinRAR for easier upload and download. The zipped file size is 76 MB.

This data can be used for text mining tasks such as topic modelling and sentiment analysis.

The data file is in comma-separated values (CSV) format. It contains the following information (6 columns) for each tweet:

Created-At: Exact creation time of the tweet
From-User-Id: Sender's user ID
To-User-Id: If the tweet was sent to a user, that user's ID
Language: All Turkish
Retweet-Count: Number of retweets
Id: ID of the tweet, unique across all tweets
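A minimal Python loading sketch; the file name and the assumption that the CSV has no header row are mine, so adjust them to the actual file:

import pandas as pd

# Assumed file name; column order follows the description above.
cols = ["Created-At", "From-User-Id", "To-User-Id",
        "Language", "Retweet-Count", "Id"]
df = pd.read_csv("turkish_covid19_tweets.csv", names=cols, header=None)

# Parse timestamps and count tweets per day as a simple starting point
# for topic modelling or sentiment analysis.
df["Created-At"] = pd.to_datetime(df["Created-At"], errors="coerce")
tweets_per_day = df.set_index("Created-At").resample("D")["Id"].count()
print(tweets_per_day.head())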

Categories:
4315 Views

This paper applies AI (artificial intelligence) technology to analyze low-dose HRCT (high-resolution computed tomography) data of the chest in an attempt to detect COVID-19 pneumonia symptoms. A new model structure is proposed that segments anatomical structures with DNN-based (deep neural network) methods, relying on an abundance of labeled data for proper training.

Instructions: 

This tool proposes Mask R-CNN-based detection of COVID-19 pneumonia symptoms, employing stacked autoencoders for deep unsupervised learning on a low-dose, high-resolution CT architecture. The autoencoder-based Mask R-CNN marks regions in feature maps for object detection, identifying COVID-19 pneumonia, which presents serious pathological findings and is accompanied by a variety of symptoms. A large number of lung X-ray images were collected and integrated into a DICOM-style dataset for computer-vision experiments; the deep learning architecture based on the autoencoder Mask R-CNN is the main technical contribution.
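As an illustration only (not the authors' autoencoder-based model), here is a minimal Python sketch of a stock Mask R-CNN from torchvision with two classes, applied to a single normalized slice tensor:

import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Two classes: background and suspected COVID-19 pneumonia region.
model = maskrcnn_resnet50_fpn(num_classes=2)
model.eval()

# Assume the DICOM slice has already been converted to a normalized
# 3-channel float tensor of shape (3, H, W); a random tensor stands in here.
slice_tensor = torch.rand(3, 512, 512)
with torch.no_grad():
    prediction = model([slice_tensor])[0]
print(prediction["boxes"].shape, prediction["masks"].shape)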

Categories:
2702 Views

This dataset contains IDs and sentiment scores of geo-tagged tweets related to the COVID-19 pandemic. The real-time Twitter feed is monitored for coronavirus-related tweets using 90+ different keywords and hashtags that are commonly used while referencing the pandemic. Complying with Twitter's content redistribution policy, only the tweet IDs are shared. You can re-construct the dataset by hydrating these IDs. For detailed instructions on the hydration of tweet IDs, please read this article.

Instructions: 

Each CSV file contains a list of tweet IDs. You can use these tweet IDs to download fresh data from Twitter (read this article: hydrating tweet IDs). To make it easy for NLP researchers to access the sentiment analysis of each collected tweet, the sentiment score computed by TextBlob has been appended as the second column. To hydrate the tweet IDs, you can use applications such as Hydrator (available for OS X, Windows, and Linux) or twarc (a Python library).

Getting the CSV files of this dataset ready for hydrating the tweet IDs:

import pandas as pd

# Read the original file; it has no header row.
dataframe = pd.read_csv("april28_april29.csv", header=None)

# Keep only the first column (the tweet IDs).
dataframe = dataframe[0]

# Write the IDs to a new CSV with no index and no header.
dataframe.to_csv("ready_april28_april29.csv", index=False, header=False)

The above example code takes in the original CSV file (i.e., april28_april29.csv) from this dataset and exports just the tweet ID column to a new CSV file (i.e., ready_april28_april29.csv). The newly created CSV file can now be consumed by the Hydrator application for hydrating the tweet IDs. To export the tweet ID column into a TXT file, just replace ".csv" with ".txt" in the to_csv function (last line) of the above example code.
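If you prefer twarc to the Hydrator app, here is a minimal Python sketch using the twarc v1 API (the credentials are placeholders; supply your own Twitter API keys):

import json
from twarc import Twarc

t = Twarc("CONSUMER_KEY", "CONSUMER_SECRET",
          "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

# Hydrate the exported tweet IDs and save the full tweet objects as JSON lines.
with open("ready_april28_april29.txt") as ids, open("hydrated.jsonl", "w") as out:
    for tweet in t.hydrate(ids):
        out.write(json.dumps(tweet) + "\n")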

If you are not comfortable with Python and pandas, you can upload these CSV files to your Google Drive and use Google Sheets to delete the second column. Once finished with the deletion, download the edited CSV files: File > Download > Comma-separated values (.csv, current sheet). These downloaded CSV files are now ready to be used with the Hydrator app for hydrating the tweet IDs.

Categories:
28456 Views
