The last decade has seen a number of pandemics [1], and the current COVID-19 outbreak is creating havoc globally. The daily incidence of COVID-19 from 11 January 2020 to 9 May 2020 was collected from the official COVID-19 dashboard of the World Health Organization (WHO) [2], i.e. https://covid19.who.int/explorer. The data are supplemented with the population of each country, from which the case fatality rate (CFR), basic attack rate (BAR) and household secondary attack rate (HSAR) are computed for all countries.

Instructions: 

The data are intended for epidemiologists, statisticians and data scientists assessing the global risk of COVID-19, and can serve as a basis for modelling the case fatality rate, the possible spread of the disease and its attack rates. The data were originally in raw format; a detailed epidemiological analysis was carried out and a datasheet prepared by identifying the risk factor in a defined population. The daily incidence of COVID-19 from 11 January 2020 to 9 May 2020 was collected from the official COVID-19 dashboard of the World Health Organization (WHO), i.e. https://covid19.who.int/explorer. The data were compiled in Excel 2016 to create a database, which was then updated with the population of each country, and the case fatality rate (CFR), basic attack rate (BAR) and household secondary attack rate (HSAR) were computed for all countries.
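The dataset itself defines how these rates are derived; as an illustration only, the following Python sketch shows the standard formulas for CFR, BAR and HSAR applied to the compiled spreadsheet. The file and column names are hypothetical and should be adjusted to the actual headers in the database; the HSAR additionally requires household contact-tracing counts, assumed here as a column.

import pandas as pd

# Hypothetical file and column names; adjust to the actual Excel database headers.
df = pd.read_excel("covid_incidence_by_country.xlsx")

# Case fatality rate: deaths among confirmed cases (expressed as a percentage).
df["CFR_%"] = 100 * df["total_deaths"] / df["total_confirmed_cases"]

# Basic attack rate: confirmed cases relative to the population at risk.
df["BAR_%"] = 100 * df["total_confirmed_cases"] / df["population"]

# Household secondary attack rate: secondary cases among household contacts
# of primary cases (requires contact-tracing counts, assumed available).
df["HSAR_%"] = 100 * df["household_secondary_cases"] / df["household_contacts"]

print(df[["country", "CFR_%", "BAR_%", "HSAR_%"]].head())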

 


A set of chest CT scans from multi-centre hospitals, comprising five categories.


We present GeoCoV19, a large-scale Twitter dataset related to the ongoing COVID-19 pandemic. The dataset was collected over a period of 90 days, from February 1 to May 1, 2020, and consists of more than 524 million multilingual tweets. As geolocation information is essential for many tasks such as disease tracking and surveillance, we employed a gazetteer-based approach to extract toponyms from user locations and tweet content and derive geolocation information from the Nominatim (OpenStreetMap) data at different granularity levels. In terms of geographical coverage, the dataset spans 218 countries and 47K cities. The tweets come from more than 43 million Twitter users, including around 209K verified accounts, and are written in 62 different languages.

Instructions: 

GeoCoV19 Dataset Description 

The GeoCoV19 dataset comprises several TAR files, which contain zip files representing daily data. Each zip file contains JSON records with the following format:

{ "tweet_id": "122365517305623353", "created_at": "Sat Feb 01 17:11:42 +0000 2020", "user_id": "335247240", "geo_source": "user_location", "user_location": { "country_code": "br" }, "geo": {}, "place": { }, "tweet_locations": [ { "country_code": "it", "state": "Trentino-Alto", "county": "Pustertal - Val Pusteria" }, { "country_code": "us" }, { "country_code": "ru", "state": "Voronezh Oblast", "county": "Petropavlovsky District" }, { "country_code": "at", "state": "Upper Austria", "county": "Braunau am Inn" }, { "country_code": "it", "state": "Trentino-Alto", "county": "Pustertal - Val Pusteria" }, { "country_code": "cn" }, { "country_code": "in", "state": "Himachal Pradesh", "county": "Jubbal" } ] }

Description of all the fields in the above JSON 

Each JSON in the Geo file has the following eight keys:

1. tweet_id: the Twitter-provided ID of the tweet

2. created_at: the Twitter-provided "created_at" date and time in UTC

3. user_id: the Twitter-provided user ID

4. geo_source: this field takes one of four values: (i) coordinates, (ii) place, (iii) user_location, or (iv) tweet_text, depending on which of these fields is available. Priority is given to the most accurate source when several are available, in the order coordinates, place, user_location, tweet_text. For instance, when a tweet has GPS coordinates, the value is "coordinates" even if all other location fields are present. If a tweet has no GPS, place, or user_location information, the value is "tweet_text", provided a location is mentioned in the tweet text.

The remaining keys can contain a "location_json" of the following form. Sample location_json: {"country_code":"us","state":"California","county":"San Francisco","city":"San Francisco"}. Depending on the available granularity, the country_code, state, county or city keys may be missing from the location_json.

5. user_location: can contain a "location_json" as described above or an empty JSON {}. This field uses the "location" meta-data from a Twitter user's profile, i.e. the user-declared location in free text, which we resolve to a location.

6. geo: represents the "geo" field provided by Twitter. We resolve the provided latitude and longitude values to a location. It can contain a "location_json" as described above or an empty JSON {}.

7. tweet_locations: can contain an array of "location_json" objects as described above ([location_json1, location_json2, ...]) or an empty array []. This field uses the tweet content (i.e., the actual tweet message) to find toponyms. A tweet message can mention several different locations (i.e., toponyms), which is why an array is used to represent all the toponyms found in a tweet. For instance, the tweet "The UK has over 65,000 #COVID19 deaths. More than Qatar, Pakistan, and Norway." contains four location mentions, and the tweet_locations array represents these four separately.

8. place: can contain a "location_json" as described above or an empty JSON {}. It represents the Twitter-provided "place" field.
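As an illustration of how these records can be consumed, the following Python sketch iterates over one of the daily zip files and tallies tweets by the country of the resolved user location. The archive name is hypothetical, and the sketch assumes one JSON record per line inside each zip member; the actual archive layout should be taken from the dataset itself.

import json
import zipfile
from collections import Counter

# Hypothetical archive name; daily zips are extracted from the dataset's TAR files.
archive = "geocov19_2020-02-01.zip"

country_counts = Counter()
with zipfile.ZipFile(archive) as zf:
    for member in zf.namelist():
        with zf.open(member) as f:
            for line in f:          # assuming one JSON record per line
                record = json.loads(line)
                # Use the resolved user_location country, as in the sample record above.
                country = record.get("user_location", {}).get("country_code")
                if country:
                    country_counts[country] += 1

print(country_counts.most_common(10))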

 

Tweet hydrators:

CrisisNLP (Java): https://crisisnlp.qcri.org/#resource8

Twarc (Python): https://github.com/DocNow/twarc#dehydrate

Docnow (Desktop application): https://github.com/docnow/hydrator

If you have doubts or questions, feel free to contact us at: uqazi@hbku.edu.qa and mimran@hbku.edu.qa


This is a large dataset of tweets related to COVID-19. It contains 226,668 unique tweet IDs and spans December 2019 to May 2020. The keywords used to crawl the tweets are 'corona', 'covid', 'sarscov2', 'covid19' and 'coronavirus'. To obtain the other 33 fields of data, send an email to avishekgarain@gmail.com. Twitter does not allow public sharing of other tweet details (texts, etc.), so they cannot be uploaded here.

Instructions: 

Read the documentation carefully and use the Python code snippet provided there to load the data; a sketch of the typical workflow is shown below.
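The loading snippet itself ships with the documentation; as a hedged sketch, the following Python code assumes the tweet IDs are distributed as a plain text file with one ID per line (the file name is hypothetical) and shows how they could be read and then hydrated with twarc, one of the hydrators listed for the GeoCoV19 dataset above.

from twarc import Twarc

# Hypothetical file name: one tweet ID per line.
with open("covid19_tweet_ids.txt") as f:
    tweet_ids = [line.strip() for line in f if line.strip()]
print(f"Loaded {len(tweet_ids)} tweet IDs")

# Fill in your own Twitter API credentials before hydrating.
t = Twarc("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
for tweet in t.hydrate(tweet_ids):
    print(tweet["id_str"], tweet.get("full_text", tweet.get("text", ""))[:80])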


This is a large dataset of Bengali tweets related to COVID-19. It contains 36,117 unique tweet IDs and spans December 2019 to May 2020. The keywords used to crawl the tweets are 'corona', 'covid', 'sarscov2', 'covid19' and 'coronavirus'. To obtain the other 33 fields of data, send an email to avishekgarain@gmail.com. A code snippet is given in the documentation file. Sharing Twitter data other than tweet IDs publicly violates Twitter's regulations.

Instructions: 

The script to load the data is provided in the documentation.


This is a large dataset of Spanish tweets related to COVID-19. It contains 18,958 unique tweet IDs and spans December 2019 to May 2020. The keywords used to crawl the tweets are 'corona', 'covid', 'sarscov2', 'covid19' and 'coronavirus'. To obtain the other 33 fields of data, send an email to avishekgarain@gmail.com. A code snippet is given in the documentation file. Sharing Twitter data other than tweet IDs publicly violates Twitter's regulations.

Instructions: 

Use the provided Python code snippet to load the data.


The purpose is to describe the dynamics of the COVID-19 pandemic, accounting for mitigation measures, for the introduction or removal of quarantine, and for the effect of vaccination when and if introduced.

The methods include the derivation of the Pandemic Equation, which describes the mitigation measures via the evolution of the growth time constant and yields an asymmetric pandemic curve with a steeper rise than decline.

Instructions: 

Purpose: To describe the dynamics of the COVID-19 pandemic, accounting for mitigation measures, for the introduction or removal of quarantine, and for the effect of vaccination when and if introduced.

Methods: Derivation of the Pandemic Equation, which describes the mitigation measures via the evolution of the growth time constant and yields an asymmetric pandemic curve with a steeper rise than decline.

Results: The Pandemic Equation predicts how quarantine removal and business reopening lead to a spike in the pandemic curve. Effective vaccination reduces the new daily infections predicted by the Pandemic Equation to nearly zero. The pandemic curves in many localities have similar time dependencies, shifted in time. The Pandemic Equation parameters extracted from well-advanced pandemic curves can be used to predict the pandemic evolution in localities where the pandemic is still in its initial stages.

Conclusion: Using multiple pandemic locations for the parameter extraction allows for uncertainty quantification when predicting the pandemic evolution with the introduced Pandemic Equation.
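The abstract does not reproduce the Pandemic Equation itself. Purely as an illustration of the qualitative behaviour described above (a growth rate that evolves under mitigation, giving an asymmetric curve with a steeper rise than decline), the following Python sketch integrates daily infections with a time-varying growth rate. The functional forms and parameter values are assumptions for illustration, not the authors' model.

import numpy as np

# Illustrative only: daily new infections n(t) with dn/dt = r(t) * n,
# where the growth rate r(t) = 1/tau(t) is positive early and turns
# negative after mitigation, with a smaller magnitude during decay
# (hence a steeper rise than decline).
days = np.arange(0, 200)
r_growth = 1.0 / 5.0                    # assumed 5-day growth time constant
r_decay = -1.0 / 15.0                   # assumed 15-day decay time constant
t_mitigation, transition = 40.0, 10.0   # assumed mitigation date and transition width

def rate(t):
    # Smooth switch from growth to decay around the mitigation date.
    w = 1.0 / (1.0 + np.exp(-(t - t_mitigation) / transition))
    return (1 - w) * r_growth + w * r_decay

n = np.empty_like(days, dtype=float)
n[0] = 1.0
for i in range(1, len(days)):
    n[i] = n[i - 1] * np.exp(rate(days[i - 1]))  # one-day step of dn/dt = r(t) n

print(f"Peak of {n.max():.0f} daily infections on day {int(days[n.argmax()])}")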

 


Case and contact definitions are based on the current available information and are regularly revised as new information accumulates. Countries may need to adapt case definitions depending on their local epidemiological situation and other factors. All countries are encouraged to publish definitions used online and in regular situation reports, and to document periodic updates to definitions which may affect the interpretation of surveillance data.

Instructions: 

The First Few X cases and contacts (FFX) investigation protocol for Coronavirus Disease 2019 (COVID-19) concerns the identification and tracing of cases and their close contacts, either in the general population or restricted to closed settings (such as households, health-care settings and schools). FFX is the primary investigation protocol to be initiated upon the identification of the initial laboratory-confirmed cases of COVID-19 in a country.


This dataset consists of received signal strength (RSS) data measured from smartphones carried by two people.

Instructions: 

Two users stood at a set distance from each other (d = {0.2:0.2:2, 3:5}, i.e. 0.2 m to 2 m in 0.2 m steps, and 3 m to 5 m in 1 m steps). For each distance, the app scanned for incoming BLE signals for about 1 minute. The following information is logged: the ground-truth distance, smartphone name, MAC address of the BLE chipset, packet payload, RSS values, time elapsed, and timestamp. Two phones were used: 1) gryphonelab, and 2) HTC One M9.

 

We consolidated the data and reorganized them, applying a moving average to the raw RSS values (a sketch of this filtering step is shown after the list below). The reorganized data have the following format:

  • device name,
  • time elapsed,
  • rss (raw RSS value),
  • mRSS10 (filtered RSS value with window size = 10),
  • mRSS100 (filtered RSS value with window size = 100),
  • distance, and
  • label
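As referenced above, the moving-average filtering can be reproduced along the following lines. This is a minimal Python sketch assuming the consolidated data are available as a CSV with the columns listed above; the file name is hypothetical.

import pandas as pd

# Hypothetical file name; columns as listed above.
df = pd.read_csv("ble_rss_consolidated.csv")

# Moving averages of the raw RSS value, computed per device so that
# readings from the two phones are not mixed.
df["mRSS10"] = df.groupby("device name")["rss"].transform(
    lambda s: s.rolling(window=10, min_periods=1).mean())
df["mRSS100"] = df.groupby("device name")["rss"].transform(
    lambda s: s.rolling(window=100, min_periods=1).mean())

print(df[["device name", "time elapsed", "rss", "mRSS10", "mRSS100", "distance", "label"]].head())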

Objectives: Worldwide efforts to protect front-line providers performing endotracheal intubation during the COVID-19 pandemic have led to innovative devices. The authors evaluated the aerosol containment effectiveness of a novel intubation aerosol containment system (IACS) compared with a recently promoted intubation box and with no protective barrier. Methods: In a simulation center at the authors' university, the IACS was compared to no protective barrier and to an intubation box.

Instructions: 

Download and play the video file.

