Open Access
Tweets Originating from India During COVID-19 Lockdowns
- Submitted by: Rabindra Lamsal
- Last updated: Sun, 05/19/2024 - 21:38
- DOI: 10.21227/k8gw-xz18
Abstract
This India-specific COVID-19 tweets dataset has been curated using the large-scale Coronavirus (COVID-19) Tweets Dataset. This dataset contains tweets originating from India during the first week of each of the four phases of nationwide lockdowns initiated by the Government of India. For more information on filtering keywords, please visit the primary dataset page.
Announcements:
- We have released BillionCOV — a billion-scale COVID-19 tweets dataset for efficient hydration. Hydration takes time because of the rate limits Twitter places on its tweet lookup endpoint. We re-hydrated the tweets present in COV19Tweets and found that more than 500 million tweet identifiers point to either deleted or protected tweets. Skipping those identifiers alone saves almost two months in a single hydration task. BillionCOV will receive quarterly updates, while COV19Tweets will continue to receive updates every day. Learn more about BillionCOV on its page: https://dx.doi.org/10.21227/871g-yp65
- We have also released MegaGeoCOV — a million-scale COVID-19-specific geotagged tweets dataset (on GitHub). The dataset is introduced in the paper "Twitter conversations predict the daily confirmed COVID-19 cases".
Related publications:
- Rabindra Lamsal. (2021). Design and analysis of a large-scale COVID-19 tweets dataset. Applied Intelligence, 51(5), 2790-2804.
- Rabindra Lamsal, Aaron Harwood, Maria Rodriguez Read. (2022). Socially Enhanced Situation Awareness from Microblogs using Artificial Intelligence: A Survey. ACM Computing Surveys, 55(4), 1-38. (arXiv)
- Rabindra Lamsal, Aaron Harwood, Maria Rodriguez Read. (2022). Twitter conversations predict the daily confirmed COVID-19 cases. Applied Soft Computing, 129, 109603. (arXiv)
- Rabindra Lamsal, Aaron Harwood, Maria Rodriguez Read. (2022). Addressing the location A/B problem on Twitter: the next generation location inference research. In 2022 ACM SIGSPATIAL LocalRec (pp. 1-4).
- Rabindra Lamsal, Aaron Harwood, Maria Rodriguez Read. (2022). Where did you tweet from? Inferring the origin locations of tweets based on contextual information. In 2022 IEEE International Conference on Big Data (pp. 3935-3944). (arXiv)
- Rabindra Lamsal, Maria Rodriguez Read, Shanika Karunasekera. (2023). BillionCOV: An Enriched Billion-scale Collection of COVID-19 tweets for Efficient Hydration. Data in Brief, 48, 109229. (arXiv)
- Rabindra Lamsal, Maria Rodriguez Read, Shanika Karunasekera. (2023). A Twitter narrative of the COVID-19 pandemic in Australia. In 20th International ISCRAM Conference (pp. 353-370). (arXiv)
- Rabindra Lamsal, Maria Rodriguez Read, Shanika Karunasekera. (2024). CrisisTransformers: Pre-trained language models and sentence encoders for crisis-related social media texts. Knowledge-Based Systems, 296, 111916. (arXiv)
- Rabindra Lamsal, Maria Rodriguez Read, Shanika Karunasekera. (2024). Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts. In 21st International ISCRAM Conference (in press). (arXiv)
Dataset usage terms: By using this dataset, you agree to (i) use the content of this dataset, and the data generated from it, for non-commercial research only; (ii) remain in compliance with Twitter's Developer Policy; and (iii) cite the following paper:
Lamsal, R. (2020). Design and analysis of a large-scale COVID-19 tweets dataset. Applied Intelligence, 1-15.
BibTeX:
@article{lamsal2020design,
  title={Design and analysis of a large-scale COVID-19 tweets dataset},
  author={Lamsal, Rabindra},
  journal={Applied Intelligence},
  pages={1--15},
  year={2020},
  publisher={Springer}
}
What's inside the dataset?
The files in this dataset contain the IDs of tweets present in the Coronavirus (COVID-19) Tweets Dataset. Note: Below, (all files) means that every file in the stated range was used to build the ID file, while (only even-numbered files) means that only the even-numbered files in the range were used.
Lockdown period tweets: (all files)
Lockdown1.zip: March 25, 2020 - April 02, 2020; corona_tweets_08.csv to corona_tweets_14.csv
Lockdown2.zip: April 14, 2020 - April 21, 2020; corona_tweets_27.csv to corona_tweets_33.csv
Lockdown3.zip: May 01, 2020 - May 07, 2020; corona_tweets_44.csv to corona_tweets_49.csv
Lockdown4.zip: May 18, 2020 - May 23, 2020; corona_tweets_61.csv to corona_tweets_66.csv
Extras: (all files)
extras_june1_june7.zip: corona_tweets_75.csv to corona_tweets_80.csv
Extras: (only even-numbered files)
extras_june24_july1.zip: corona_tweets_96.csv to corona_tweets_104.csv
extras_july2_july15.zip: corona_tweets_106.csv to corona_tweets_118.csv
extras_july16_august4.zip: corona_tweets_120.csv to corona_tweets_138.csv
extras_august5_august18.zip: corona_tweets_140.csv to corona_tweets_152.csv
extras_august19_september1.zip: corona_tweets_154.csv to corona_tweets_166.csv
extras_september2_september15.zip: corona_tweets_168.csv to corona_tweets_180.csv
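To illustrate the (all files) and (only even-numbered files) conventions above, the hypothetical helper below (not part of the dataset) lists the CSV file names covered by a given range:

```python
def covered_files(start, end, even_only=False):
    """List corona_tweets_NN.csv names in [start, end], optionally even-numbered only."""
    nums = range(start, end + 1)
    if even_only:
        nums = [n for n in nums if n % 2 == 0]
    # File numbers are zero-padded to at least two digits (e.g. corona_tweets_08.csv).
    return [f"corona_tweets_{n:02d}.csv" for n in nums]

# extras_june24_july1.zip: corona_tweets_96.csv to corona_tweets_104.csv, even-numbered only
print(covered_files(96, 104, even_only=True))
# → ['corona_tweets_96.csv', 'corona_tweets_98.csv', 'corona_tweets_100.csv',
#    'corona_tweets_102.csv', 'corona_tweets_104.csv']
```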
The zipped files contain .db (SQLite database) files. Each .db file has a table named 'geo'. To hydrate the IDs, you can load a .db file into a pandas DataFrame and then export it to .csv or .txt for hydration. For more details on hydrating the IDs, please visit the primary dataset page.
import sqlite3
import pandas as pd

# Load the tweet IDs from the 'geo' table into a DataFrame
conn = sqlite3.connect('/path/to/the/db/file')
data = pd.read_sql("SELECT tweet_id FROM geo", conn)
conn.close()
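Building on the snippet above, one way to produce a plain ID list for a hydration tool is sketched below (the function name and output path are illustrative, not part of the dataset):

```python
import sqlite3

import pandas as pd


def export_ids(db_path, out_path):
    """Dump the tweet IDs from a .db file's 'geo' table, one ID per line."""
    with sqlite3.connect(db_path) as conn:
        ids = pd.read_sql("SELECT tweet_id FROM geo", conn)
    ids.to_csv(out_path, index=False, header=False)
    return len(ids)
```

The resulting text file can then be fed to a hydrator such as twarc, e.g. `twarc hydrate lockdown1_ids.txt > tweets.jsonl`.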
Dataset Files
- lockdown1.zip (4.17 MB)
- lockdown2.zip (2.74 MB)
- lockdown3.zip (2.77 MB)
- lockdown4.zip (3.12 MB)
- extras_june1_june7.zip (3.11 MB)
- extras_june24_july1.zip (2.50 MB)
- extras_july2_july15.zip (3.70 MB)
- extras_july16_august4.zip (4.65 MB)
- extras_august5_august18.zip (2.63 MB)
- extras_august19_september1.zip (2.83 MB)
- extras_september2_september15.zip (2.54 MB)
Open Access dataset files are accessible to all logged-in users; a free IEEE account is sufficient, and IEEE Membership is not required.
Comments
Please also develop a Pakistan-specific COVID-19 tweets dataset.
Hello Saghir.
You can hydrate the IDs present in the primary dataset to create country-specific datasets. If you closely follow the instructions that I'd emailed you earlier, you can easily extract Pakistan-specific tweets based on the geotagged info and/or the Twitter place info.
Sir, how were the scores generated?
Hello Aswin.
The scores are generated by TextBlob's sentiment analysis module. For more info please visit the primary dataset's page.
Sir, are these tweets unique, or do they include retweets?
Tweets in this dataset are unique because the retweets have NULL geo and place objects.
Sir, how can I get tweets from India daily, especially from states/cities?
I am using the geocode option in Python, but it is not responding.
Also, in R I get this error:
Warning message in doRppAPICall("search/tweets", n, params = params, retryOnRateLimit = retryOnRateLimit): "100 tweets were requested but the API can only return 0"
Please refer to my comment posted below.
How can I get tweets about COVID daily from cities like Delhi, Mumbai, Kolkata, Hyderabad, and Bangalore, directly from R or Python?
Once the IDs are hydrated, you can filter out tweets as per your preference (using any spreadsheet software).
Or, if you are comfortable with some level of programming, you can apply conditions to the "place" Twitter geo object:
e.g., tweet["place"]["full_name"] == "New Delhi, India" or tweet["place"]["full_name"] == "Mumbai, India", and so on.
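A minimal sketch of that check, assuming each hydrated tweet is a dict carrying Twitter's "place" object (the function name and city set are illustrative):

```python
# Keep only tweets whose place full_name matches one of the target cities.
TARGET_CITIES = {"New Delhi, India", "Mumbai, India"}


def filter_by_city(tweets, cities=TARGET_CITIES):
    return [
        t for t in tweets
        if t.get("place") and t["place"].get("full_name") in cities
    ]
```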
I hope this helps.