Due to the high speed of information generation and the need to convert data into knowledge, there is a growing demand for data mining algorithms. Clustering is one such technique, and its development leads to a deeper understanding of the surrounding environment. In this paper, a dynamic and scalable solution for clustering mixed big data with missing values is presented. The solution integrates common distance metrics with the concept of the closest neighborhood, together with a form of geometric coding, and includes a method for recovering missing values in the dataset. By employing parallelization and distribution techniques across multiple nodes, it can be scaled and accelerated. The solution is evaluated against existing methods in terms of speed, precision, and memory usage.
Outlier detection is a core task in data mining and machine learning and an important step in data preprocessing. In this paper, a non-parametric method for proximity-based outlier detection, called NPOD, is proposed. The proposed method combines distance-based and density-based approaches and can detect outliers in both local and global scenarios. It does not require setting a neighborhood radius, a threshold on the number of points within that radius, or a nearest-neighbor parameter. A new scoring method is presented for identifying outliers. Experimental results on UCI datasets show that, despite being non-parametric, the algorithm achieves results comparable to previous methods, and in some cases the best performance.
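The abstract does not specify NPOD's scoring formula, but the two families it combines can be illustrated with a minimal sketch: a distance-based component (mean distance to the k nearest neighbors) scaled by a density-based component (the ratio of that distance to the same quantity for the point's own neighbors, in the spirit of LOF). The function name, the parameter `k`, and the scoring formula below are illustrative assumptions, not the paper's parameter-free NPOD method.

```python
from math import dist

def outlier_scores(X, k=3):
    """Toy proximity-based outlier score: mean k-NN distance (global,
    distance-based) times a local density ratio (local, density-based).
    Illustrative only -- NOT the paper's NPOD scoring."""
    n = len(X)
    neigh, mean_d = [], []
    for i in range(n):
        # indices of the k nearest neighbors of point i
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j, i=i: dist(X[i], X[j]))[:k]
        neigh.append(order)
        # distance-based component: mean distance to those neighbors
        mean_d.append(sum(dist(X[i], X[j]) for j in order) / k)
    # density-based component: compare each point's mean k-NN distance to
    # that of its own neighbors, so locally sparse points also score high
    return [mean_d[i] * k / (sum(mean_d[j] for j in neigh[i]) or 1e-12)
            for i in range(n)]
```

Points deep inside a cluster get scores near 1, while both global and local outliers score well above 1.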
Due to the increasing speed of information production and the need to convert information into knowledge, traditional machine learning methods are no longer adequate. In particular, when classifying large datasets with inherently lazy classifiers such as the k-nearest neighbor (KNN) method, the classification operation is very slow.
Nearest-neighbor classification remains a popular method due to its simplicity and practical accuracy. The proposed method sorts the training feature vectors into a binary search tree to expedite nearest-neighbor classification of big data. This is done by finding two approximately farthest local points among the data at each tree node; these two points serve as the criterion for dividing the node's data into two groups. The data in each node are assigned to the left or right child of the current node based on their similarity to the two points. The results of several experiments on datasets from the UCI repository show that the proposed method achieves a good degree of accuracy together with low execution time.