The ASU/UNSW-CICMOD01 dataset was developed to support the novel Cyber Influence Campaign (CIC) model and ontology. It contains full captures of specific tags (hashtags) regarding individual cyber influence campaigns scraped from Twitter and Instagram.


The dataset is composed of 595,460 users, 14,273,311 links, 1,345,913 diffusion cascades, and 1,311,498 tags from Mar 24 to Apr 25, 2012. In order to capture more information cascades, Weng et al. tracked a group of users who are connected by mutual following. Thus, the follower network is an undirected network made up of a number of disconnected components.
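A minimal sketch of how the undirected follower network and its disconnected components could be inspected with networkx; the file name follower_edges.csv and its two-column layout are assumptions for illustration, not part of the documented release.

    # Sketch: build the undirected mutual-follower network and count its
    # disconnected components. "follower_edges.csv" is a hypothetical
    # two-column edge list (user_a,user_b).
    import csv
    import networkx as nx

    G = nx.Graph()  # undirected, as described above
    with open("follower_edges.csv", newline="") as f:
        for user_a, user_b in csv.reader(f):
            G.add_edge(user_a, user_b)

    components = list(nx.connected_components(G))
    print(f"{G.number_of_nodes()} users, {G.number_of_edges()} links, "
          f"{len(components)} disconnected components")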


An optimized circuit design for an electronic choke for a tube light.


A new circuit design I invented for an electronic fan regulator that is inexpensive and effective.


This dataset is provided for the fake news detection task. In addition to its use in other fake news detection approaches, it is specifically suited to detecting fake news using Natural Language Inference (NLI).

Instructions: 

This dataset is designed and stored to be compatible with both the LIAR test dataset and the FakeNewsNet (PolitiFact) dataset as evaluation data. There are two folders, each containing three CSV files.

1. 15,212 training samples, 1,058 validation samples, and 1,054 test samples, matching the FakeNewsNet (PolitiFact) data. The classes in this data are "real" and "fake".

2. 15,052 training samples, 1,265 validation samples, and 1,266 test samples, matching the LIAR test data. The classes in this data are "pants-fire", "false", "barely true", "half-true", "mostly-true", and "true".

The dataset columns:

id: matches the id in the PolitiFact website API (unique for each sample)

date: the time each article was published on the PolitiFact website

speaker: the person or organization to whom the statement relates

statement: a claim published in the media by a person or an organization that has been investigated in the PolitiFact article

sources: the sources used to analyze each statement

paragraph_based_content: the content stored paragraph by paragraph in a list

fullText_based_content: the full text formed by joining the paragraphs

label: the class of each sample
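A minimal sketch of how one of the CSV files could be loaded and inspected with pandas; the path "FakeNewsNet_PolitiFact/train.csv" is an assumed example name, not part of the documented folder layout.

    # Sketch: load one split of the fake news dataset and inspect the
    # documented columns. The file path below is a placeholder; use the
    # actual file names inside the two folders.
    import pandas as pd

    df = pd.read_csv("FakeNewsNet_PolitiFact/train.csv")

    # Columns described above: id, date, speaker, statement, sources,
    # paragraph_based_content, fullText_based_content, label
    print(df.columns.tolist())
    print(df["label"].value_counts())          # e.g. "real" vs "fake" for folder 1
    print(df[["speaker", "statement"]].head())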


This dataset includes 24,201,654 tweets related to the US Presidential Election held on November 3, 2020, collected between July 1, 2020, and November 11, 2020. The related party name, the sentiment score of each tweet, and the words that contribute to the score were added to the dataset.

Instructions: 

The dataset contains more than 20 million tweets with 11 attributes for each of them. The data file is in comma-separated values (CSV) format and its size is 3.48 GB. It is compressed with WinRAR for easier upload and download; the zipped file size is 766 MB. The data file contains the following information (11 columns) for each tweet:

Created-At: Exact creation time of the tweet [Jul 1, 2020 7:44:48 PM – Nov 12, 2020 5:47:59 PM]
From-User-Id: Unique ID of the user that sent the tweet
To-User-Id: Unique ID of the user that the tweet was sent to
Language: Language of the tweet, coded in ISO 639-1 [90% of tweets en: English; 3.8% und: Unidentified; 2.5% es: Spanish]
Retweet-Count: number of retweets
PartyName: The label showing which party the tweet is about. The label is [Democrats] or [Republicans] if the tweet contains any keyword (given above) related to the Democratic or Republican party. If it contains keywords about both parties, the label is [Both]. If it does not contain any keyword about the two major parties (Democratic or Republican), the label is [Neither].
Id: Unique ID of the tweet
Score: The sentiment score of the tweets. A positive (negative) score means positive (negative) emotion.
Scoring String: Nominal attribute with all words taking part in the scoring
Negativity: The sum of negative components
Positivity: The sum of positive components
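A minimal sketch of how the file could be read and summarised with pandas, assuming the column names match the list above; the file name "USElection2020Tweets.csv" is only an illustrative placeholder for the CSV extracted from the archive.

    # Sketch: load the tweet CSV and summarise sentiment per party label.
    import pandas as pd

    tweets = pd.read_csv("USElection2020Tweets.csv")

    # Keep only English tweets (Language codes are ISO 639-1, see above).
    english = tweets[tweets["Language"] == "en"]

    # Mean sentiment Score and total retweets per PartyName
    # (Democrats, Republicans, Both, Neither).
    print(english.groupby("PartyName")["Score"].mean())
    print(english.groupby("PartyName")["Retweet-Count"].sum())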

The VADER algorithm is used for the sentiment analysis of the tweets. VADER (Valence Aware Dictionary and sEntiment Reasoner) is a lexicon- and rule-based sentiment algorithm for scoring a text. It is specifically attuned to sentiments expressed in social media and produces scores based on a dictionary of words. The scoring operator calculates and then exposes the sum of all sentiment word scores in the text. For more details about this algorithm: https://github.com/cjhutto/vaderSentiment
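For reference, a minimal usage sketch of the VADER implementation linked above (the vaderSentiment Python package). Note that the Score column described here stores the sum of word scores, which is related to, but not the same as, the standard compound output shown below.

    # Sketch: score a sample tweet with VADER. pos/neg/neu are the
    # proportions of positive/negative/neutral words; compound is the
    # normalised overall score.
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

    analyzer = SentimentIntensityAnalyzer()
    scores = analyzer.polarity_scores("The debate was great, but the moderation was terrible.")
    print(scores)  # e.g. {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}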

This data can be used to develop election result prediction methods based on social media. It can also be used in text mining studies, such as understanding how feelings about the parties change over time in tweets, determining the topics that cause positive or negative feelings about the candidates, and understanding the main issues that concern Twitter users about the US election.


Modern science is built on systematic experimentation and observation. The reproducibility and replicability of experiments and observations are central to science. However, reproducibility and replicability are not always guaranteed, a situation sometimes referred to as the 'crisis of reproducibility'. To analyze the extent of the crisis, we conducted an online survey on the state of reproducibility in remote sensing. The respondents' answers are saved in this dataset in full-text CSV format.

Instructions: 

The file contains the answers to our online survey on reproducibility in remote sensing. The format is comma-separated values (CSV) in full text, i.e., the answers are saved as full text instead of numeric codes, making them easy to understand and analyse.

 

The dataset also includes the report generated by the website on which the survey was hosted (kwiksurveys.com). This can be used for a quick overview of the results, and also to see the original questions and the possible answers.


Reddit is one of the largest social media websites in the world, and it contains valuable data about its users and their perspectives, organized into virtual communities or subreddits based on common areas of interest. Substance use issues are particularly salient within this online community due to the burgeoning substance use (opioid) crisis within the United States, among other countries. A particularly important location for understanding user perceptions of opioids is the Philadelphia, Pennsylvania, USA region, due to the prevalence of opioid-associated overdose deaths. To collect user sen

Instructions: 

Included are the dataset in a CSV file, a data dictionary for all variables (column key) in a text file, the keyword list used to query the Reddit API in a text file, and the targeted subreddit list in a text file. The dataset comprises entries (submissions and comments) that matched the keyword queries within the targeted subreddits. The data dictionary includes designations for submissions and comments; a submission denotes a first-order entry within a subreddit, while a comment denotes an entry posted in response to a submission or another comment. Rows include all matching entries within the targeted subreddits from January 1, 2005 to May 14, 2020.

 

There are 56,979 rows of data in the CSV file.
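A minimal sketch of how the CSV could be loaded and the submission/comment split tallied per subreddit; the file name "reddit_opioid_dataset.csv" and the column names "subreddit" and "entry_type" are hypothetical, and the actual names are listed in the accompanying data dictionary text file.

    # Sketch: count submissions vs. comments per targeted subreddit.
    import pandas as pd

    reddit = pd.read_csv("reddit_opioid_dataset.csv")   # placeholder file name

    print(len(reddit))                                   # expected: 56,979 rows
    print(reddit.groupby(["subreddit", "entry_type"]).size().unstack(fill_value=0))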


This dataset is offered as .csv files and consists of 3 files:

- File 1: contains all 1,699 Arabic news headlines collected, with the corresponding emotion classification that 3 annotators agreed on with no bias

- File 2: contains the dataset with bag-of-words (BOW) features extracted

- File 3: contains the dataset with n-gram features extracted (a feature-extraction sketch follows this list)
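A minimal sketch of how BOW and n-gram features of the kind stored in Files 2 and 3 could be produced with scikit-learn; the file name "headlines.csv" and the column name "headline" are assumed placeholders for File 1, and the exact vectorizer settings used for the released files are not documented here.

    # Sketch: extract bag-of-words and word n-gram features for the headlines.
    import pandas as pd
    from sklearn.feature_extraction.text import CountVectorizer

    data = pd.read_csv("headlines.csv")            # placeholder for File 1

    bow = CountVectorizer()                        # unigram bag-of-words (File 2)
    ngrams = CountVectorizer(ngram_range=(1, 3))   # word n-grams up to trigrams (File 3)

    X_bow = bow.fit_transform(data["headline"])
    X_ngram = ngrams.fit_transform(data["headline"])
    print(X_bow.shape, X_ngram.shape)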

