Coronavirus (COVID-19) Tweets Dataset

Citation Author(s): Rabindra Lamsal (JNU, New Delhi)
Submitted by: Rabindra Lamsal
Last updated: Tue, 06/02/2020 - 00:47
DOI: 10.21227/781w-ef42

This dataset includes CSV files that contain tweet IDs. The tweets have been collected by the model deployed at https://live.rlamsal.com.np. The model monitors the real-time Twitter feed for coronavirus-related tweets, filtering for the language "en" and the keywords "corona", "coronavirus", "covid", "covid19", and variants of "sarscov2". As per Twitter's content redistribution policy, I cannot provide any information other than the tweet IDs (this dataset was completely re-designed on March 20, 2020, to comply with that policy). Note: This dataset should be used solely for non-commercial research purposes (ignore every other LICENSE category given on this page). A new list of tweet IDs is added to this dataset every day. Bookmark this page for further updates.
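
For illustration, a minimal sketch of how such a keyword- and language-filtered stream can be collected with Tweepy 3.x is shown below. The credentials and the listener class are placeholders, not the deployed model's actual code:

import tweepy

# Placeholder credentials -- replace with your own Twitter API keys
CONSUMER_KEY = "..."
CONSUMER_SECRET = "..."
OAUTH_TOKEN = "..."
OAUTH_TOKEN_SECRET = "..."

class KeywordListener(tweepy.StreamListener):
    def on_status(self, status):
        # Only the tweet ID is kept, in line with Twitter's redistribution policy
        print(status.id_str)

    def on_error(self, status_code):
        # Disconnect on rate limiting (HTTP 420)
        if status_code == 420:
            return False

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(OAUTH_TOKEN, OAUTH_TOKEN_SECRET)

stream = tweepy.Stream(auth=auth, listener=KeywordListener())
stream.filter(track=["corona", "coronavirus", "covid", "covid19", "sarscov2"],
              languages=["en"])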

----------------------------------------------------------------------

Tweets count: 147,476,115 ---  Global Tweets (EN) 

----------------------------------------------------------------------

Coronavirus (COVID-19) Geo-tagged Tweets Dataset: http://dx.doi.org/10.21227/fpsb-jz61

----------------------------------------------------------------------

Why are only tweet IDs being shared? Twitter's content redistribution policy restricts the sharing of tweet information other than tweet IDs and/or user IDs. Twitter wants researchers to always pull fresh data using their API. Why? Here's my opinion: a user might want to delete a particular tweet after a few minutes, hours, or days, and if that tweet has already been pulled and shared in the public domain, the user or community could be exposed to inferences drawn from the shared data.

(Tweets are collected in UTC; the local times mentioned below are GMT+5:45):

corona_tweets_01.csv: 831,327 tweets    (March 20, 2020 01:37 AM - March 20, 2020 10:28 AM)

corona_tweets_02.csv: 870,924 tweets    (March 20, 2020 10:31 AM - March 20, 2020 09:43 PM)

corona_tweets_03.csv: 773,729 tweets    (March 20, 2020 09:49 PM - March 21, 2020 09:25 AM)

corona_tweets_04.csv: 1,233,340 tweets (March 21, 2020 09:27 AM - March 22, 2020 07:46 AM)

corona_tweets_05.csv: 1,782,157 tweets (March 22, 2020 07:50 AM - March 23, 2020 09:08 AM)

corona_tweets_06.csv: 1,771,295 tweets (March 23, 2020 09:11 AM - March 24, 2020 11:35 AM)

corona_tweets_07.csv: 1,479,651 tweets (March 24, 2020 11:42 AM - March 25, 2020 11:43 AM)

corona_tweets_08.csv: 1,272,592 tweets (March 25, 2020 11:47 AM - March 26, 2020 12:46 PM)

corona_tweets_09.csv: 1,091,429 tweets (March 26, 2020 12:51 PM - March 27, 2020 11:53 AM)

corona_tweets_10.csv: 1,172,013 tweets (March 27, 2020 11:56 AM - March 28, 2020 01:59 PM)

corona_tweets_11.csv: 1,141,210 tweets (March 28, 2020 02:03 PM - March 29, 2020 04:01 PM)

----- March 29, 2020 04:05 PM - March 30, 2020 02:00 PM -- Some folks messed around with the server. Preventive measures have been taken. Tweets for this period won't be available. -----

corona_tweets_12.csv: 793,417 tweets (March 30, 2020 02:01 PM - March 31, 2020 10:16 AM)

corona_tweets_13.csv: 1,029,294 tweets (March 31, 2020 10:20 AM - April 01, 2020 10:59 AM)

corona_tweets_14.csv: 920,076 tweets (April 01, 2020 11:02 AM - April 02, 2020 12:19 PM)

corona_tweets_15.csv: 826,271 tweets (April 02, 2020 12:21 PM - April 03, 2020 02:38 PM)

corona_tweets_16.csv: 612,512 tweets (April 03, 2020 02:40 PM - April 04, 2020 11:54 AM)

corona_tweets_17.csv: 685,560 tweets (April 04, 2020 11:56 AM - April 05, 2020 12:54 PM)

corona_tweets_18.csv: 717,301 tweets (April 05, 2020 12:56 PM - April 06, 2020 10:57 AM)

corona_tweets_19.csv: 722,921 tweets (April 06, 2020 10:58 AM - April 07, 2020 12:28 PM)

corona_tweets_20.csv: 554,012 tweets (April 07, 2020 12:29 PM - April 08, 2020 12:34 PM)

corona_tweets_21.csv: 589,679 tweets (April 08, 2020 12:37 PM - April 09, 2020 12:18 PM)

corona_tweets_22.csv: 517,718 tweets (April 09, 2020 12:20 PM - April 10, 2020 09:20 AM)

corona_tweets_23.csv: 601,199 tweets (April 10, 2020 09:22 AM - April 11, 2020 10:22 AM)

corona_tweets_24.csv: 497,655 tweets (April 11, 2020 10:24 AM - April 12, 2020 10:53 AM)

corona_tweets_25.csv: 477,182 tweets (April 12, 2020 10:57 AM - April 13, 2020 11:43 AM)

corona_tweets_26.csv: 288,277 tweets (April 13, 2020 11:46 AM - April 14, 2020 12:49 AM)

corona_tweets_27.csv: 515,739 tweets (April 14, 2020 11:09 AM - April 15, 2020 12:38 PM)

corona_tweets_28.csv: 427,088 tweets (April 15, 2020 12:40 PM - April 16, 2020 10:03 AM)

corona_tweets_29.csv: 433,368 tweets (April 16, 2020 10:04 AM - April 17, 2020 10:38 AM)

corona_tweets_30.csv: 392,847 tweets (April 17, 2020 10:40 AM - April 18, 2020 10:17 AM)

----- Additional keywords added: "coronavirus", "covid", "covid19" and variants of "sarscov2". With these keywords, the tweets/day count goes beyond two million; therefore, the CSV files hereafter will be zipped. Let's save some bandwidth. -----

corona_tweets_31.csv: 2,671,818 tweets (April 18, 2020 10:19 AM - April 19, 2020 09:34 AM)

corona_tweets_32.csv: 2,393,006 tweets (April 19, 2020 09:43 AM - April 20, 2020 10:45 AM)

corona_tweets_33.csv: 2,227,579 tweets (April 20, 2020 10:56 AM - April 21, 2020 10:47 AM)

corona_tweets_34.csv: 2,211,689 tweets (April 21, 2020 10:54 AM - April 22, 2020 10:33 AM)

corona_tweets_35.csv: 2,265,189 tweets (April 22, 2020 10:45 AM - April 23, 2020 10:49 AM)

corona_tweets_36.csv: 2,201,138 tweets (April 23, 2020 11:08 AM - April 24, 2020 10:39 AM)

corona_tweets_37.csv: 2,338,713 tweets (April 24, 2020 10:51 AM - April 25, 2020 11:50 AM)

corona_tweets_38.csv: 1,981,835 tweets (April 25, 2020 12:20 PM - April 26, 2020 09:13 AM)

corona_tweets_39.csv: 2,348,827 tweets (April 26, 2020 09:16 AM - April 27, 2020 10:21 AM)

corona_tweets_40.csv: 2,212,216 tweets (April 27, 2020 10:33 AM - April 28, 2020 10:09 AM)

corona_tweets_41.csv: 2,118,853 tweets (April 28, 2020 10:20 AM - April 29, 2020 08:48 AM)

corona_tweets_42.csv: 2,390,703 tweets (April 29, 2020 09:09 AM - April 30, 2020 10:33 AM)

corona_tweets_43.csv: 2,184,439 tweets (April 30, 2020 10:53 AM - May 01, 2020 10:18 AM)

corona_tweets_44.csv: 2,223,013 tweets (May 01, 2020 10:23 AM - May 02, 2020 09:54 AM)

corona_tweets_45.csv: 2,216,553 tweets (May 02, 2020 10:18 AM - May 03, 2020 09:57 AM)

corona_tweets_46.csv: 2,266,373 tweets (May 03, 2020 10:09 AM - May 04, 2020 10:17 AM)

corona_tweets_47.csv: 2,227,489 tweets (May 04, 2020 10:32 AM - May 05, 2020 10:17 AM)

corona_tweets_48.csv: 2,218,774 tweets (May 05, 2020 10:38 AM - May 06, 2020 10:26 AM)

corona_tweets_49.csv: 2,164,251 tweets (May 06, 2020 10:35 AM - May 07, 2020 09:33 AM)

corona_tweets_50.csv: 2,203,686 tweets (May 07, 2020 09:55 AM - May 08, 2020 09:35 AM)

corona_tweets_51.csv: 2,250,019 tweets (May 08, 2020 09:39 AM - May 09, 2020 09:49 AM)

corona_tweets_52.csv: 2,273,705 tweets (May 09, 2020 09:55 AM - May 10, 2020 10:11 AM)

corona_tweets_53.csv: 2,208,264 tweets (May 10, 2020 10:23 AM - May 11, 2020 09:57 AM)

corona_tweets_54.csv: 2,216,845 tweets (May 11, 2020 10:08 AM - May 12, 2020 09:52 AM)

corona_tweets_55.csv: 2,264,472 tweets (May 12, 2020 09:59 AM - May 13, 2020 10:14 AM)

corona_tweets_56.csv: 2,339,709 tweets (May 13, 2020 10:24 AM - May 14, 2020 11:21 AM)

corona_tweets_57.csv: 2,096,878 tweets (May 14, 2020 11:38 AM - May 15, 2020 09:58 AM)

corona_tweets_58.csv: 2,214,205 tweets (May 15, 2020 10:13 AM - May 16, 2020 09:43 AM)

----- The server has been optimized to handle the enormous traffic coming in from Twitter's Streaming API and to get the computations done efficiently; therefore, there is a significant rise in the number of tweets captured per day. -----

corona_tweets_59.csv: 3,389,090 tweets (May 16, 2020 09:58 AM - May 17, 2020 10:34 AM)

corona_tweets_60.csv: 3,530,933 tweets (May 17, 2020 10:36 AM - May 18, 2020 10:07 AM)

corona_tweets_61.csv: 3,899,631 tweets (May 18, 2020 10:08 AM - May 19, 2020 10:07 AM)

corona_tweets_62.csv: 3,767,009 tweets (May 19, 2020 10:08 AM - May 20, 2020 10:06 AM)

corona_tweets_63.csv: 3,790,455 tweets (May 20, 2020 10:06 AM - May 21, 2020 10:15 AM)

corona_tweets_64.csv: 3,582,020 tweets (May 21, 2020 10:16 AM - May 22, 2020 10:13 AM)

corona_tweets_65.csv: 3,461,470 tweets (May 22, 2020 10:14 AM - May 23, 2020 10:08 AM)

corona_tweets_66.csv: 3,477,564 tweets (May 23, 2020 10:08 AM - May 24, 2020 10:02 AM)

corona_tweets_67.csv: 3,656,446 tweets (May 24, 2020 10:02 AM - May 25, 2020 10:10 AM)

corona_tweets_68.csv: 3,474,952 tweets (May 25, 2020 10:11 AM - May 26, 2020 10:22 AM)

corona_tweets_69.csv: 3,422,960 tweets (May 26, 2020 10:22 AM - May 27, 2020 10:16 AM)

corona_tweets_70.csv: 3,480,999 tweets (May 27, 2020 10:17 AM - May 28, 2020 10:35 AM)

corona_tweets_71.csv: 3,446,008 tweets (May 28, 2020 10:36 AM - May 29, 2020 10:07 AM)

corona_tweets_72.csv: 3,492,841 tweets (May 29, 2020 10:07 AM - May 30, 2020 10:14 AM)

corona_tweets_73.csv: 3,098,817 tweets (May 30, 2020 10:15 AM - May 31, 2020 10:13 AM)

corona_tweets_74.csv: 3,234,848 tweets (May 31, 2020 10:13 AM - June 01, 2020 10:14 AM)

corona_tweets_75.csv: 3,206,132 tweets (June 01, 2020 10:15 AM - June 02, 2020 10:07 AM)

Instructions: 

Each CSV file contains a list of tweet IDs. You can use these tweet IDs to download fresh data from Twitter (i.e., hydrate the tweet IDs). To make it easier for NLP researchers to access sentiment information for each collected tweet, the sentiment score computed by TextBlob has been appended as the second column.
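
For reference, a polarity score of the kind TextBlob produces can be computed as in the minimal sketch below (the example sentence is made up):

from textblob import TextBlob

# Example (made-up) tweet text
text = "Stay safe everyone, we will get through this together."

# polarity ranges from -1.0 (most negative) to +1.0 (most positive)
score = TextBlob(text).sentiment.polarity
print(score)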

To hydrate the tweet IDs, you can use applications such as DocNow's Hydrator (available for OS X, Windows and Linux) or QCRI's Tweets Downloader (Java based). Please go through the documentation of the respective tools to learn about the downloading process. Make sure that you keep only the tweet ID column when feeding the CSV files to any of these hydrating applications.
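
If you prefer to prepare that ID-only input yourself, a minimal sketch using Python's csv module is given below; the filenames are placeholders. Reading the IDs as plain text also avoids the spreadsheet-style rounding that can silently replace the trailing digits of the 19-digit IDs with zeros:

import csv

# Placeholder filenames -- point these at the CSV you downloaded
source_file = "corona_tweets_31.csv"
ids_only_file = "corona_tweets_31_ids.txt"

with open(source_file, newline="") as fin, open(ids_only_file, "w") as fout:
    for row in csv.reader(fin):
        # Column 0 is the tweet ID, column 1 the TextBlob sentiment score.
        # Keeping the ID as a string preserves all of its digits.
        fout.write(row[0].strip() + "\n")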

Comments

I am getting this error

 

DatabaseError: database disk image is malformed

Submitted by Junaid khan on Mon, 03/16/2020 - 15:23

Can you tell me the name of the file you're experiencing this error with? I would recommend first opening the downloaded file with any SQLite DB viewer to check that it is not corrupted.

Submitted by Rabindra Lamsal on Wed, 04/29/2020 - 00:41

Hi! Could you mention what filters you are using to get the tweets? Thanks

Submitted by Victor Tavares on Tue, 03/17/2020 - 00:21

keyword: corona, language: en

A significant number of tweets used the word 'corona' without the word 'virus', so I had to track tweets using the most generic keyword: just 'corona'. Therefore, some tweets relating to 'corona beer' might also be present in the databases.

Submitted by Rabindra Lamsal on Tue, 03/17/2020 - 00:45

Hi! I cannot access the LSTM model.

Submitted by islam sadat on Wed, 03/18/2020 - 10:41

Try refreshing. Maybe the server was busy while you were trying to access the site. I just can't believe that more than 338,500 requests have been made to the model within the last 24 hours, and that volume of requests is something my model cannot handle. Sorry for the inconvenience!

Submitted by Rabindra Lamsal on Wed, 03/18/2020 - 11:09

Please fix these two datasets:

1. corona_tweets_2M.db.zip        2. corona_tweets_2M_2.zip

 

It shows this error: DatabaseError: database disk image is malformed

Submitted by imran khan on Thu, 03/19/2020 - 08:19

I downloaded the very same compressed files from this page and loaded both the databases on an SQLite DB viewer. The databases work just fine. See the screenshot here: https://i.ibb.co/SyQ7ff1/Screen-Shot-2020-03-19-at-8-21-46-PM.png

I recommend opening the databases (the ones producing the "image malformed" error) in any DB viewer and re-saving them on your machine, or exporting them to SQL or to any tabular file format you prefer.

Submitted by Rabindra Lamsal on Fri, 05/29/2020 - 01:31

Hi, thanks for providing these datasets to the public. I have one question: do all these files have the same structure? I wish they had the other fields Twitter provides with tweets so we could directly do our research.

I wonder if the other files all have only three columns: unix, text, and sentiment.

Submitted by ali ALdulaimi on Thu, 03/19/2020 - 11:54

Hello there! Yes, all the files have the same structure (unix, text, sentiment score). However, starting March 20 the collected tweets will also have one additional column, viz. tweet ID.

This is because, initially, the purpose of the deployed web app was not just to collect the tweets; it was more like an optimization project. However, when the corona outbreak started in China, I decided to release the collected tweets rather than just keeping them with me.

Submitted by Rabindra Lamsal on Thu, 03/19/2020 - 14:05

Hi

Rabindra, have the SQLite DBs been replaced with CSVs containing only time and sentiment score? Thanks

Submitted by Bevan Ward on Sat, 03/21/2020 - 23:50

Hello Bevan. No, the first column in the CSV files is the tweet ID. You'll have to automate the extraction of tweets using the list of tweet IDs. Because of Twitter's policy, I had to remove all other information except the tweet ID and sentiment score.

Submitted by Rabindra Lamsal on Sun, 03/22/2020 - 02:25

Thanks Rabindra for the reply - take care Bevan

Submitted by bevan ward on Sun, 03/29/2020 - 18:24

Hi, can you please upload the tweet IDs and sentiment scores of the old files from February and early March?

 

Thank you

Submitted by Rabia batool on Tue, 03/24/2020 - 05:17

Hello Rabia! Unfortunately, I had to take down all the tweets collected between Feb 1, 2020, and Mar 19, 2020, because the old DB files did not contain tweet IDs. This was because, initially, the purpose of the deployed web app was not just to collect tweets; it was more of an optimization project. However, when the corona outbreak started in China, I decided to release the collected tweets rather than just keeping them to myself. Therefore, because of Twitter's data sharing policies, I am not authorized to share the old files. Sorry for the inconvenience.

Submitted by Rabindra Lamsal on Tue, 03/24/2020 - 10:42

Thank you for your response. I completely understand this. 

 

Submitted by Rabia batool on Wed, 03/25/2020 - 03:50

Hi, I'm trying to view a particular tweet using the tweet IDs that you provided, with the piece of Python code you provided above, after adding my credentials (CONSUMER_KEY, CONSUMER_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET); however, it always gives me the following error message:

 

tweepy.error.TweepError: [{'code': 144, 'message': 'No status found with that ID.'}]

 

Have you hashed the tweet IDs that you uploaded? Any advice is appreciated.

 

Best regards, 

 

Submitted by Basheer Qolomany on Mon, 03/30/2020 - 18:26

Maybe the particular tweet which you're trying to view has been either removed or hidden by the user.

Submitted by Rabindra Lamsal on Mon, 03/30/2020 - 19:56

Thanks for replying. Actually, I don't think those tweets have been removed or hidden by the users, because I tried hundreds of different tweet IDs in a for loop and all of them gave me the same error message, while some tweet IDs I got from another source worked just fine.

Here are some of the tweet IDs I used, from file number 10 for example:

 

1243420522592910000

1243420476824640000

1243420477235660000

1243420477646720000

1243420477894190000

1243420478238150000

1243420478535890000

1243420478829510000

1243420478951180000

1243420479706150000

1243420479844530000

1243420479982990000

1243420479924250000

1243420478837900000

1243420480205280000

1243420481744560000

1243420482075930000

1243420482201770000

1243420482222730000

1243420482084270000

1243420482814100000

1243420482935760000

1243420482629590000

 

 

Thanks, 

Submitted by Basheer Qolomany on Mon, 03/30/2020 - 20:42

I double-checked corona_tweets_10.csv, but I could not find any of these IDs in the file. However, I can see one pattern in the tweet IDs you've listed above: they all end with a run of zeros. Use Sublime Text or a simple text editor to open the CSV files. It looks like the application you're using to open these files is truncating the trailing digits and replacing them with zeros.

For example, the last ID you've listed, 1243420482629590000, should have been 1243420482629591040. Notice that the last four digits are zeros at your end. The same is the case with all the other IDs you've mentioned above.

Submitted by Rabindra Lamsal on Tue, 03/31/2020 - 02:13

Yes, that's right. I read the CSV files with R, and it fixed the numbers.

Also, if you have the tweet IDs for March 13 to March 19, it would be great if you could upload them here.

 

Thanks; 

Submitted by Basheer Qolomany on Tue, 03/31/2020 - 17:35

The model has been collecting corona-related tweets since Jan 27, 2020. However, the model was designed as part of an optimization project, and therefore it was made to extract only the tweets, not the tweet IDs. Because of Twitter's data sharing policy, I am not allowed to share those. Therefore, I only started extracting and uploading tweet IDs on March 20, 2020.

Submitted by Rabindra Lamsal on Tue, 03/31/2020 - 21:59

Thank you,

Submitted by Basheer Qolomany on Wed, 04/01/2020 - 18:30

I'm having the exact same issue, i.e. all IDs end with four zeros while those digits should in fact be other numbers. I was just opening it as a CSV file.

 

Could you please let me know how to fix it? Thank you very much!

Submitted by Mandy Huang on Wed, 04/08/2020 - 04:02

Are you trying to write a script to hydrate the tweet IDs, or something else? Please see the instructions given in the dataset description field.

Submitted by Rabindra Lamsal on Wed, 04/08/2020 - 11:26

Thank you for the reply! I've tried using QCRI's Tweets Downloader to hydrate the tweet IDs, but just as with the tweepy API, the first step is to get a list of correct tweet IDs, which I don't have because of the zeros at the end of the tweet_id column in the original dataset.

I saw in the previous discussion you mentioned "For example, the last ID you've listed 1243420482629590000 should have been 1243420482629591040". Could you please let me know how you get the correct tweet ID that ends with 1040? Many thanks!

Submitted by Mandy Huang on Wed, 04/08/2020 - 15:21

Can you do one thing? Download a CSV file, open it with Notepad or Sublime Text, and let me know if the last four digits are represented properly.

Submitted by Rabindra Lamsal on Thu, 04/09/2020 - 04:25

Hi 

I tried to download all the data from Twitter using the user IDs, but the Hydrator app always stops downloading.

Does that mean the downloaded tweets have reached the rate limit?

 

thanks

Submitted by JINGLI SHI on Fri, 04/03/2020 - 00:09

Can you please elaborate? Also, I would recommend writing to the app's author about the issue.

Submitted by Rabindra Lamsal on Sun, 04/05/2020 - 22:50

Congratulations on this work!

Submitted by Thiago Aparecid... on Thu, 04/09/2020 - 20:55

Thank you, Thiago.

Submitted by Rabindra Lamsal on Thu, 04/09/2020 - 22:11

Can someone share a code snippet to get the tweet text from a tweet ID?

Submitted by Haider Akram on Fri, 04/10/2020 - 12:11

Use Hydrator (https://github.com/DocNow/hydrator) or QCRI's Tweet Downloader tool (https://crisisnlp.qcri.org/data/tools/TweetsRetrievalTool-v2.0.zip) for downloading the tweets.

Submitted by Rabindra Lamsal on Fri, 04/10/2020 - 15:07

Can someone please help me with how to fetch the tweets?

I am only able to see the 'Tweet IDs' and 'sentiment score'. Where and how can I download the tweets?

Thanks in advance. 

Submitted by Navya Shiva on Sun, 05/03/2020 - 05:48

Please refer to my reply to your comment below.

Submitted by Rabindra Lamsal on Sun, 05/03/2020 - 08:34

Hi, 

I am able to see only two columns ('Tweet ID' and 'sentiment score'). Could you please tell me if the tweets column has been removed?

Submitted by Navya Shiva on Sun, 05/03/2020 - 06:11

Hello Navya. Because of Twitter's data sharing policy, we are not allowed to share anything except the tweet IDs and/or user IDs. Therefore, this dataset contains only the tweet IDs. In order to download the tweets, you'll need to hydrate these IDs using applications such as DocNow's Hydrator (available for OS X, Windows and Linux) or QCRI's Tweets Downloader (Java based).

Submitted by Rabindra Lamsal on Sun, 05/03/2020 - 06:35

Hi Rabindra,

Thank you for your reply. I have downloaded Hydrator and tried downloading the tweets. However, the CSV file isn't getting downloaded (I chose only 85,000 rows of tweet IDs as a sample). It is throwing an error. Could you please help me fix it?

I am unable to post the picture here. The error is displayed as "A Javascript error occured in the main proces...", and the error has many lines under this heading. Please let me know how to go about this.

Thank you.

Submitted by Navya Shiva on Mon, 05/04/2020 - 05:20

Maybe it is an error associated with IEEE DP. Please try again to download any of the CSV files and hydrate the IDs.

Submitted by Rabindra Lamsal on Fri, 05/29/2020 - 01:33

Dear

Can you mention the library used to find the sentiment?

Thank you

Submitted by Furqan Rustam on Mon, 06/01/2020 - 22:05

Hello Furqan. If you do not want to create a machine learning model of your own to compute the sentiment scores, you can make use of the TextBlob NLP toolkit.

Submitted by Rabindra Lamsal on Tue, 06/02/2020 - 00:49
