Machine Learning

Data preprocessing is a fundamental stage in deep learning and serves as the cornerstone of reliable data analytics. Deep learning models require large amounts of training data to be effective; small datasets often lead to overfitting and poor generalization to unseen data. One way to cope with large-scale training data is parallelization of the data processing and model fitting, which allows the model to fit the training data more efficiently, yielding higher accuracy on large datasets and better overall performance.
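
Parallel data loading and model replication are among the more common forms of parallelization in practice. The sketch below is illustrative only, assuming PyTorch, a generic tabular dataset, and a small feed-forward network; none of these choices come from the description above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Illustrative stand-in for a preprocessed training set.
features = torch.randn(10_000, 32)
labels = torch.randint(0, 2, (10_000,))
loader = DataLoader(TensorDataset(features, labels),
                    batch_size=256, shuffle=True, num_workers=4)  # parallel data loading

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # replicate the model across available GPUs
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```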

eLearning, or online learning, has reached every corner of the globe in this era of digitization, and the COVID-19 pandemic has substantially increased its value. In eLearning recommendation systems, information overload, personalised suggestions, sparsity, and accuracy are all major problems. A well-designed eLearning recommendation system is needed to tailor course recommendations to each user's needs. To build such a model, a dataset of User Profiles and User Ratings is required.
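
As a rough sketch of how such a model could use the two tables, the snippet below builds a user-based collaborative-filtering recommender from a ratings matrix. The column names, toy ratings, and cosine-similarity scoring are assumptions for illustration, not the actual schema of the dataset.

```python
import numpy as np
import pandas as pd

# Hypothetical ratings table: one row per (user, course, rating).
ratings = pd.DataFrame({
    "user_id":   [1, 1, 2, 2, 3, 3],
    "course_id": [10, 11, 10, 12, 11, 12],
    "rating":    [5, 3, 4, 2, 5, 4],
})

# Pivot to a user x course matrix; missing ratings become 0 (the sparsity problem).
matrix = ratings.pivot_table(index="user_id", columns="course_id",
                             values="rating", fill_value=0)

# Cosine similarity between users.
norms = np.linalg.norm(matrix.values, axis=1, keepdims=True)
sim = (matrix.values @ matrix.values.T) / (norms @ norms.T)

def recommend(user_id, top_n=2):
    """Score courses the user has not rated by similarity-weighted ratings of other users."""
    i = matrix.index.get_loc(user_id)
    scores = pd.Series(sim[i] @ matrix.values, index=matrix.columns)
    unseen = matrix.loc[user_id] == 0
    return scores[unseen].sort_values(ascending=False).head(top_n)

print(recommend(1))
```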

This dataset contains 3D models of five objects and numerous scenes in which the objects are placed randomly, creating occlusions and clutter. The 3D models are meant to be used to find the objects in the scenes (object recognition) and to segment them as well.
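
One plausible starting point for segmenting the objects out of a cluttered scene is plane removal followed by clustering of the scene point cloud, for example with Open3D. The file name and all parameter values below are assumptions, not part of the dataset.

```python
import open3d as o3d

# Hypothetical scene file; substitute an actual scene from the dataset.
scene = o3d.io.read_point_cloud("scene_01.pcd")

# Remove the dominant support plane (e.g. a tabletop) with RANSAC.
plane_model, inliers = scene.segment_plane(distance_threshold=0.01,
                                           ransac_n=3,
                                           num_iterations=1000)
objects = scene.select_by_index(inliers, invert=True)

# Cluster the remaining points; each cluster is a candidate object segment
# that could then be matched against one of the five 3D models.
labels = objects.cluster_dbscan(eps=0.02, min_points=50)
print("candidate object segments:", max(labels) + 1 if len(labels) else 0)
```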

Each application has up to two files: one for the memory dataset and another for the control-flow dataset. Each dataset is composed of JSON objects, with each instruction represented as a JSON object.
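
A minimal way to read such a file, assuming one JSON object per line (JSON Lines); the file name is hypothetical, and since the instruction fields are not described here, none are assumed.

```python
import json

instructions = []
# Hypothetical file name; each non-empty line is assumed to hold one instruction object.
with open("app1_memory.json") as f:
    for line in f:
        line = line.strip()
        if line:
            instructions.append(json.loads(line))

print(f"loaded {len(instructions)} instruction objects")
```

If a file instead holds a single JSON array of instruction objects, `json.load(f)` on the whole file would be the simpler choice.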

BillionCOV is a global billion-scale English-language COVID-19 tweets dataset with more than 1.4 billion tweets originating from 240 countries and territories between October 2019 and April 2022. This dataset has been curated by hydrating the 2 billion tweets present in COV19Tweets.
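
Hydration here means re-fetching the full tweet objects from their identifiers through the Twitter/X API, since the dataset itself can only redistribute tweet IDs. A rough sketch against the v2 tweet-lookup endpoint follows; the bearer token, file name, and 100-ID batches (the endpoint's per-request limit) are assumptions, and tools such as twarc automate the same process.

```python
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # assumes you have your own API access
HEADERS = {"Authorization": f"Bearer {BEARER_TOKEN}"}
LOOKUP_URL = "https://api.twitter.com/2/tweets"

def hydrate(tweet_ids):
    """Fetch full tweet objects, up to 100 IDs per request."""
    tweets = []
    for i in range(0, len(tweet_ids), 100):
        batch = tweet_ids[i:i + 100]
        resp = requests.get(LOOKUP_URL, headers=HEADERS,
                            params={"ids": ",".join(batch)})
        resp.raise_for_status()
        tweets.extend(resp.json().get("data", []))
    return tweets

# Hypothetical ID file with one tweet ID per line.
with open("billioncov_ids.txt") as f:
    ids = [line.strip() for line in f if line.strip()]
print(len(hydrate(ids[:1000])), "tweets hydrated")
```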

The CAD-EdgeTune dataset was acquired using a Husarion ROSbot 2.0 and a ROSbot 2.0 Pro, with the collection rate set to 5 frames per second, in a suburban university environment. The data can be split into noon, dusk, and dawn subsets to capture the surroundings under various lighting conditions. We assembled 17 sequences totaling 8,080 frames, of which 1,619 were manually annotated using an open-source pixel annotation tool. Since neighbouring images are highly similar to one another, we annotated every fifth image.
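
As a small illustration of that sampling strategy, selecting every fifth frame of a sequence for annotation could look like the following; the directory layout and file extension are hypothetical.

```python
from pathlib import Path

# Hypothetical sequence directory containing frames named in capture order.
frames = sorted(Path("sequence_01").glob("*.png"))

# Keep every fifth frame, since neighbouring frames are nearly identical.
to_annotate = frames[::5]
print(f"{len(to_annotate)} of {len(frames)} frames selected for annotation")
```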

This dataset aims to identify the polarity of tweets—whether they are supportive, oppositional, or neutral—towards the current government. It comprises a total of 26,000 tweets: 15,000 in English and 11,000 in Urdu. These tweets were collected from 80 different political users' accounts to ensure a diverse and comprehensive representation of opinions.
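
A simple baseline for the intended polarity task could be TF-IDF features with a logistic-regression classifier over the three classes. The CSV file and column names below are assumptions about how the tweets are stored, and the Urdu subset would likely need its own language-specific preprocessing.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical layout: one tweet per row with 'text' and 'polarity'
# (supportive / oppositional / neutral) columns.
df = pd.read_csv("government_tweets.csv")

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["polarity"], test_size=0.2, random_state=42)

clf = make_pipeline(TfidfVectorizer(max_features=20_000),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```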

We collected data to train the ML module that determines the location of the user's device based on beacon-frame characteristics and RSSI values from Wi-Fi APs. To collect the data, we defined a threshold distance of 7 feet as the maximum allowable distance between the user's devices. We then collected two datasets: one, named "authentic", gathered while the two Raspberry Pis were within 7 feet of each other, and another, named "unauthorized", gathered while the distance between the two Raspberry Pis exceeded 7 feet.
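
A sketch of how the two labelled sets could train a proximity classifier is given below; the CSV files and feature columns are assumptions about what was extracted from the beacon frames and RSSI measurements.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical CSVs: one row per observation with numeric beacon-frame and RSSI features.
authentic = pd.read_csv("authentic.csv")
unauthorized = pd.read_csv("unauthorized.csv")
authentic["label"] = 1       # devices within 7 feet of each other
unauthorized["label"] = 0    # devices farther than 7 feet apart

data = pd.concat([authentic, unauthorized], ignore_index=True)
X, y = data.drop(columns="label"), data["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```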
