Variations in light intensity and direction are among the main challenges in face recognition systems, as they produce different normal and abnormal shadows. Many existing methods for face recognition under varying lighting conditions require prior knowledge of the light source and the angle of radiation. In this paper, a new approach is proposed to extract knowledge about the lighting direction in face images based on learning techniques. First, coefficients sensitive to lighting variation are extracted in the DCT domain; after normalization, they are used to determine lighting classes. Then three learning algorithms, decision tree, SVM, and WAODE (Weightily Averaged One-Dependence Estimators), are trained on the lighting classes. The algorithms were tested on the well-known YaleB and Extended Yale face databases. The comparative results indicate that the SVM achieves the best average classification accuracy, while the WAODE Bayesian approach attains better accuracy for classes with large lighting angles because of its resistance to data loss.
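The feature-extraction step described in the abstract can be sketched as follows. The 2-D DCT is computed explicitly via an orthonormal DCT-II matrix; the choice of a 4×4 low-frequency block as the illumination descriptor and the L2 normalization are illustrative assumptions, not the paper's exact coefficient selection.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] *= np.sqrt(1.0 / n)   # DC row scaling
    c[1:, :] *= np.sqrt(2.0 / n)  # AC row scaling
    return c

def lighting_features(img, block=4):
    """Low-frequency DCT coefficients as an illumination descriptor.

    `block` (the size of the retained low-frequency corner) is a
    hypothetical choice for illustration only.
    """
    img = np.asarray(img, dtype=float)
    C = dct_matrix(img.shape[0])
    R = dct_matrix(img.shape[1])
    coeffs = C @ img @ R.T                 # 2-D DCT-II
    feats = coeffs[:block, :block].ravel() # keep the low-frequency corner
    return feats / (np.linalg.norm(feats) + 1e-12)  # normalize before classification

# Toy example: a horizontal illumination gradient across a 32x32 "face"
img = np.tile(np.linspace(0, 255, 32), (32, 1))
f = lighting_features(img)
print(f.shape)  # (16,)
```

The resulting normalized vectors would then serve as inputs to the three classifiers compared in the paper.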
Manuscript profile
Classification is one of the most important tasks in data mining and machine learning, and the decision tree, one of the most widely used classification algorithms, has the advantages of simplicity and easily interpretable results. But when dealing with huge amounts of data, the resulting decision tree grows in size and complexity and therefore requires excessive running time. Almost all tree-construction algorithms need to store all or part of the training data set; algorithms that avoid memory shortages by selecting a subset of the data instead spend extra time on data selection. Selecting the best feature for each branch of the tree also requires many calculations. In this paper we present an incremental, scalable approach based on fast partitioning and pruning. The proposed algorithm builds the decision tree using the entire training data set but does not require storing the whole data set in main memory. A pre-pruning method is also used to reduce the complexity of the tree. Experimental results on UCI data sets show that the proposed algorithm overcomes the mentioned disadvantages of former methods while preserving competitive accuracy and construction time.
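The pre-pruning idea can be illustrated with a minimal tree builder that stops splitting early instead of growing the full tree and pruning afterwards. The thresholds (`max_depth`, `min_samples`) and the Gini criterion are illustrative choices, and this sketch omits the paper's incremental, out-of-core partitioning.

```python
import numpy as np

def gini(y):
    """Gini impurity of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def build_tree(X, y, depth=0, max_depth=3, min_samples=5):
    """Recursive induction with pre-pruning: a node becomes a leaf as soon
    as it is pure, too small, or too deep. Thresholds are illustrative."""
    if gini(y) == 0.0 or len(y) < min_samples or depth >= max_depth:
        values, counts = np.unique(y, return_counts=True)
        return {"leaf": values[np.argmax(counts)]}   # majority-class leaf
    best = None
    for f in range(X.shape[1]):                      # exhaustive split search
        for t in np.unique(X[:, f]):
            left = X[:, f] <= t
            if left.all() or not left.any():
                continue
            score = (left.sum() * gini(y[left])
                     + (~left).sum() * gini(y[~left])) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t, left)
    if best is None:                                 # no valid split found
        values, counts = np.unique(y, return_counts=True)
        return {"leaf": values[np.argmax(counts)]}
    _, f, t, left = best
    return {"feature": f, "thresh": t,
            "left": build_tree(X[left], y[left], depth + 1, max_depth, min_samples),
            "right": build_tree(X[~left], y[~left], depth + 1, max_depth, min_samples)}

def predict(node, x):
    """Descend from the root to a leaf and return its class."""
    while "leaf" not in node:
        node = node["left"] if x[node["feature"]] <= node["thresh"] else node["right"]
    return node["leaf"]

# Toy example: two well-separated groups on one feature
X = np.array([[0.0], [1.0], [2.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 0, 1, 1, 1])
tree = build_tree(X, y, min_samples=2)
print(predict(tree, np.array([11.0])))  # 1
```

Pre-pruning trades a little accuracy at the leaves for a much smaller tree, which is the complexity reduction the abstract refers to.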
Manuscript profile
Due to the increasing speed of information production and the need to convert information into knowledge, traditional machine learning methods are no longer adequate. In particular, inherently lazy classifiers such as the k-nearest neighbor (KNN) method are very slow when classifying large data sets.
The nearest neighbor method is a popular data classification technique due to its simplicity and practical accuracy. The proposed method sorts the training feature vectors into a binary search tree to speed up nearest-neighbor classification of big data. This is done by finding the two approximately farthest local data points in each tree node; these two points serve as the criterion for dividing the node's data into two groups, with each point assigned to the left or right child according to which of the two it is more similar to. The results of several experiments on different data sets from the UCI repository show that the proposed method achieves good accuracy with low execution time.
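The tree construction and lookup described above can be sketched as follows. The `leaf_size` parameter, the two-hop heuristic for finding approximately farthest points, and the greedy single-leaf descent at query time are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def two_farthest(X):
    """Approximate the two farthest points: start anywhere, jump to the
    farthest point, then to the point farthest from that one."""
    d = np.linalg.norm(X - X[0], axis=1)
    p1 = X[np.argmax(d)]
    d = np.linalg.norm(X - p1, axis=1)
    p2 = X[np.argmax(d)]
    return p1, p2

def build(X, y, leaf_size=4):
    """Partition the data into a binary tree using the two pivots."""
    if len(X) <= leaf_size:
        return {"X": X, "y": y}                      # leaf stores its points
    p1, p2 = two_farthest(X)
    to_left = np.linalg.norm(X - p1, axis=1) <= np.linalg.norm(X - p2, axis=1)
    if to_left.all() or not to_left.any():           # degenerate split -> leaf
        return {"X": X, "y": y}
    return {"p1": p1, "p2": p2,
            "left": build(X[to_left], y[to_left], leaf_size),
            "right": build(X[~to_left], y[~to_left], leaf_size)}

def classify(node, q):
    """Greedy descent to one leaf, then an exact 1-NN scan inside it."""
    while "X" not in node:
        closer_p1 = np.linalg.norm(q - node["p1"]) <= np.linalg.norm(q - node["p2"])
        node = node["left"] if closer_p1 else node["right"]
    d = np.linalg.norm(node["X"] - q, axis=1)
    return node["y"][np.argmin(d)]

# Toy example: two clusters of 2-D points
X0 = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
X = np.vstack([X0, X0 + 10.0])
y = np.array([0] * 5 + [1] * 5)
tree = build(X, y)
print(classify(tree, np.array([9.5, 9.5])))  # 1
```

Descending to a single leaf makes each query logarithmic in the number of training points on average, at the cost of returning an approximate rather than exact nearest neighbor.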
Manuscript profile
This paper presents a new method for out-of-step detection in synchronous generators based on decision tree theory. To distinguish between power-swing and out-of-step conditions, a series of input features is introduced and used for decision tree training. To generate the training samples, measurements are taken under various faults, including operational and topological disturbances. The proposed method is simulated on the 10-machine, 39-bus IEEE test system, and the simulation results are prepared as input-output pairs for decision tree induction and deduction. The merit of the proposed out-of-step protection scheme lies in the adaptivity and robustness of its input features under different input scenarios.
Manuscript profile