• List of Articles


      • Open Access Article

        1 - Feature Selection and Cancer Classification Based on Microarray Data Using Multi-Objective Cuckoo Search Algorithm
        Kh. Kamari, F. Rashidi, A. Khalili
        Microarray datasets play an important role in the identification and classification of cancer tissues. One of the main concerns in cancer research is the small number of microarray samples, which complicates classifier design. Moreover, owing to the large number of features in microarrays, feature selection and classification are even more challenging for such datasets. Not all of these numerous features contribute to the classification task, and some even impede performance. Hence, an appropriate gene selection method can significantly improve the performance of cancer classification. In this paper, a modified multi-objective cuckoo search algorithm is used for feature selection and sample selection to find the best available solutions. To accelerate the optimization process and prevent trapping in local optima, new heuristic approaches are added to the original algorithm. The proposed algorithm is applied to six cancer datasets and its results are compared with those of existing methods. The results show that the proposed method achieves higher accuracy and validity than existing approaches and is able to select a small subset of informative genes that increases classification accuracy.
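The search step described in this abstract can be sketched as a binary cuckoo-search-style loop: candidate feature masks are perturbed, improved candidates replace their nests, and the worst nest is abandoned each iteration. This is a minimal illustrative sketch, not the authors' multi-objective formulation; the flip probability, nest count, and toy `fitness` interface are assumptions.

```python
import random

def select_features(n_features, fitness, n_nests=10, iters=50, seed=0):
    """Binary cuckoo-search-style feature selection (illustrative sketch).

    `fitness` maps a 0/1 mask to a score to maximize, e.g. cross-validated
    accuracy penalized by subset size.
    """
    rng = random.Random(seed)
    nests = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(n_nests)]
    best = max(nests, key=fitness)
    for _ in range(iters):
        for i, nest in enumerate(nests):
            # Levy-flight stand-in: flip each bit with small probability
            cand = [b ^ (rng.random() < 0.1) for b in nest]
            if fitness(cand) > fitness(nest):
                nests[i] = cand
        nests.sort(key=fitness)
        nests[0] = [rng.randint(0, 1) for _ in range(n_features)]  # abandon worst nest
        if fitness(nests[-1]) > fitness(best):
            best = nests[-1]
    return best
```

With a linear toy fitness that rewards two informative genes and penalizes the rest, the loop converges to a small informative subset.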
      • Open Access Article

        2 - Improving the Architecture of Convolutional Neural Network for Classification of Images Corrupted by Impulse Noise
        Mohammad Momeny, M. Agha Sarram, A. M. Latif, R. Sheikhpour
        Impulse noise is one of the common types of noise that reduce the performance of convolutional neural networks (CNNs) in image classification. Preprocessing to remove impulse noise is a costly process that may harm the training and validation of CNNs when the noisy images are insufficiently restored. In this paper, a convolutional neural network that is robust to impulse noise is proposed. The proposed CNN classifies images corrupted by impulse noise without any preprocessing for noise removal: a noise detection layer is placed at the beginning of the network to keep noisy values out of the computation. The ILSVRC-2012 database is used to train the proposed CNN. Experimental results show that preventing impulse noise from affecting the training and classification process increases both the accuracy and the speed of network training. With an error of 0.24, the proposed CNN outperforms other methods in classifying images corrupted by impulse noise at 10% density. The O(1) time complexity of its noise-robustness mechanism further indicates the superiority of the proposed CNN.
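The idea of a noise detection stage placed before the convolutional stack can be illustrated with a simple salt-and-pepper detector: pixels at the extreme values are flagged and replaced by the median of their valid neighbors. This is a stand-in sketch for intuition only; the paper's layer operates inside the network, and the 3x3 window and 0/255 thresholds are assumptions.

```python
def detect_and_mask_impulse(img, low=0, high=255):
    """Flag salt-and-pepper extremes and replace each with the median of its
    non-noisy 3x3 neighbors (illustrative stand-in for a noise detection layer)."""
    h, w = len(img), len(img[0])
    noisy = [[img[r][c] in (low, high) for c in range(w)] for r in range(h)]
    out = [row[:] for row in img]
    for r in range(h):
        for c in range(w):
            if noisy[r][c]:
                neigh = [img[rr][cc]
                         for rr in range(max(0, r - 1), min(h, r + 2))
                         for cc in range(max(0, c - 1), min(w, c + 2))
                         if not noisy[rr][cc]]
                if neigh:
                    neigh.sort()
                    out[r][c] = neigh[len(neigh) // 2]
    return out
```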
      • Open Access Article

        3 - Using Evolutionary Clustering for Topic Detection in Microblogging Considering Social Network Information
        E. Alavi, H. Mashayekhi, H. Hassanpour, B. Rahimpour Kami
        Short texts on social media such as Twitter provide a great deal of information about hot topics and public opinion. To make better use of this information, topic detection and tracking is essential. In many existing studies in this field, the number of topics must be specified beforehand and cannot change over time; such methods are therefore unsuitable for growing, dynamic data. In addition, non-parametric topic evolution models perform poorly on short texts because of data sparsity. In this paper, we present a new evolutionary clustering algorithm implicitly inspired by the distance-dependent Chinese Restaurant Process (dd-CRP). To mitigate the data sparsity problem, the proposed method uses social network information along with textual similarity to improve the similarity evaluation between tweets. Moreover, unlike most methods in this field, the number of clusters is determined automatically: tweets are connected with a probability proportional to their similarity, and a set of such connections constitutes a topic. To speed up the algorithm, we use a cluster-based summarization method. The method is evaluated on a real dataset collected over two and a half months from Twitter by clustering the texts and comparing the clusters. The results show that the proposed method achieves better coherence than other methods and can be used effectively for topic detection from short social media texts.
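The similarity-proportional linking mechanism described above can be sketched with a dd-CRP-style sampling step: each item links to another with probability proportional to their similarity (or to itself with a small concentration weight), and connected components of the links form the topics, so the number of clusters emerges from the data. This is a toy sketch; the similarity matrix and concentration value `alpha` are assumptions, and the paper's full model is not reproduced.

```python
import random

def ddcrp_assign(sim, alpha=1.0, seed=0):
    """Sample one 'follows' link per item (dd-CRP-style); the connected
    components of the links are the clusters."""
    rng = random.Random(seed)
    n = len(sim)
    link = []
    for i in range(n):
        weights = [sim[i][j] for j in range(n)]
        weights[i] = alpha  # self-link starts a new topic
        total = sum(weights)
        r = rng.random() * total
        acc = 0.0
        for j, w in enumerate(weights):
            acc += w
            if r <= acc:
                link.append(j)
                break
    # merge linked items into clusters with union-find
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in enumerate(link):
        parent[find(i)] = find(j)
    return [find(i) for i in range(n)]
```

With two well-separated similarity blocks, two clusters emerge without the cluster count ever being specified.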
      • Open Access Article

        4 - Bug Detection and Assignment for Mobile Apps via Mining Users' Reviews
        Maryam Younesi, Abbas Heydarnoori, F. Ghanadi
        The increasing popularity of smartphones and the enthusiastic reception of mobile apps by users have turned app stores into massive software repositories, which can be mined to improve program quality. Since the comments that users write in app stores are the bridge between users and developers of mobile apps, careful attention to these comments by developers can dramatically improve the final quality of mobile apps. Hence, numerous studies in recent years have focused on opinion mining, aiming to extract and exploit important information from users' reviews. One shortcoming of these studies is that the information contained in user comments is not used to speed up and improve the bug-fixing process. This paper therefore provides an approach based on users' feedback for assigning program bugs to developers. The approach builds on the history of a program, using its commit data, as well as each developer's ability to fix errors, inferred from the bugs that the developer has already resolved in the app. By combining these two criteria, each developer receives a score indicating her suitability for handling each review, and a list of suitable developers is produced for each bug. The evaluations show that the proposed method can identify the right developer to address a comment with a precision of 74%.
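The score combination described above can be sketched as a weighted blend of two normalized per-developer signals: familiarity from commit history and expertise from previously fixed bugs. The equal 0.5 weighting, the score dictionaries, and the developer names are illustrative assumptions; the paper's exact scoring is not reproduced here.

```python
def rank_developers(commit_score, bugfix_score, w=0.5):
    """Rank developers by a weighted combination of commit-history familiarity
    and bug-fixing expertise (both assumed normalized to [0, 1])."""
    devs = set(commit_score) | set(bugfix_score)
    combined = {d: w * commit_score.get(d, 0.0) + (1 - w) * bugfix_score.get(d, 0.0)
                for d in devs}
    return sorted(combined, key=combined.get, reverse=True)
```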
      • Open Access Article

        5 - Optimal Resource Allocation in Multi-Task Software-Defined Sensor Networks
        S. A. Mostafavi, M. Agha Sarram, T. Salimian
        Unlike conventional wireless sensor networks, which are designed for a specific application, Software-Defined Wireless Sensor Networks (SDSNs) can embed multiple sensors on each node and define multiple tasks simultaneously. Each sensor node runs a virtualization program that serves as a common communication infrastructure for several different applications. Different sensor applications in the network can have different objective functions and decision parameters. Owing to the resource constraints of sensor nodes, the multiplicity and variety of tasks in each application, the requirements for different levels of quality of service, and the different objective functions of different applications, allocating resources to the tasks on the sensors is a complicated problem. In this paper, we formulate resource allocation to the sensors in an SDSN with different objective functions as a multi-objective optimization problem and provide an effective method to solve it.
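One common way to handle multiple objective functions, shown here purely for intuition, is weighted-sum scalarization: each task's utility under its application's objective is weighted, and the node's limited capacity is allocated greedily by weighted utility. The unit allocations, utility values, and weights below are assumptions, not the paper's formulation or solution method.

```python
def scalarize_allocation(utilities, capacity, weights):
    """Greedy weighted-sum scalarization of a multi-objective allocation:
    give one capacity unit to each task in order of weighted utility."""
    order = sorted(range(len(utilities)),
                   key=lambda t: weights[t] * utilities[t], reverse=True)
    alloc = [0] * len(utilities)
    for t in order:
        if capacity == 0:
            break
        alloc[t] = 1
        capacity -= 1
    return alloc
```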
      • Open Access Article

        6 - DRSS-Based Localization Using Convex Optimization in Wireless Sensor Networks
        Hassan Nazari, M. R. Danaee, M. Sepahvand
        Localization from differential received signal strength (DRSS) measurements has received considerable attention in recent years. Since the probability density function of the observations is known, the maximum likelihood (ML) estimator is used; it asymptotically attains the optimal location estimate. Once this estimator is formed, however, its cost function turns out to be highly nonlinear and non-convex, with many local minima, so Newton-type methods cannot reach the global minimum and the localization error is high. No analytical solution exists for this cost function. Two approaches exist to overcome this problem. The first approximates the cost function with a linear estimator, but its accuracy is poor. The second replaces the non-convex cost function with a convex one using convex optimization methods, in which case the global minimum can be obtained. In this paper, we propose a new convex estimator for the ML cost function. Simulation results show that the proposed estimator improves performance by up to 20 percent compared with existing estimators, and its execution time is 30 percent faster than that of other convex estimators.
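The non-convex cost in question can be written down concretely under a standard log-distance path-loss model: each DRSS measurement is the RSS difference between anchor i and a reference anchor, which is proportional to 10*n*log10(d_ref/d_i). The sketch below evaluates that least-squares cost; the path-loss exponent n = 2 and the anchor geometry are assumptions, and this is the cost the paper's convex estimator relaxes, not the estimator itself.

```python
import math

def drss_cost(x, y, anchors, drss, ref=0):
    """Least-squares DRSS cost at candidate location (x, y).

    drss[i] is the measured RSS difference between anchor i and the
    reference anchor; path-loss exponent 2 is assumed.
    """
    n_pl = 2.0
    xr, yr = anchors[ref]
    d_ref = math.hypot(x - xr, y - yr)
    cost = 0.0
    for i, (xi, yi) in enumerate(anchors):
        if i == ref:
            continue
        d_i = math.hypot(x - xi, y - yi)
        pred = 10 * n_pl * math.log10(d_ref / d_i)
        cost += (drss[i] - pred) ** 2
    return cost
```

With noiseless measurements the cost is zero at the true source and positive elsewhere, which is what a global minimizer must recover.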
      • Open Access Article

        7 - The Extraction of Fetal ECG from Abdominal Recordings Using Sparse Representation of ECG Signals
        Parya Tavoosi, Ghasem Azemi, Pegah Zarjam
        One of the most prevalent causes of infant mortality is cardiac failure. Recording the heart's electrical activity with an electrocardiogram (ECG) is a safe way to detect abnormal arrhythmia in time and reduce cardiac failure in newborns. However, the non-invasive extraction of fetal ECG (fECG) from the maternal abdomen is quite challenging, since fECG signals are often corrupted by electrical noise from other sources such as maternal heart activity, uterine contractions, and respiration, in addition to instrumental noise. Among these signals, the maternal heart signal, due to its high amplitude, has the most disruptive effect, while the fetal brain signal, due to its low amplitude, distorts the fetal heart signal the least. In this paper, a new method for extracting fECG signals from multichannel abdominal recordings is proposed. The proposed method uses Compressive Sensing (CS) to reduce the computational complexity and the fast Independent Component Analysis (fICA) algorithm to estimate the sources. To find sparse representations of the acquired ECG signals, two dictionaries are deployed: the discrete cosine transform and the discrete wavelet transform. The method is implemented and tested on the well-known, publicly available database of the 2013 PhysioNet Challenge, and its performance is compared with that of the best existing methods. The results show that the proposed method based on CS and ICA outperforms existing detection methods with a mean minimum square error (MMSE) of 171.65, and can therefore be used for non-invasive and reliable extraction of fECG from abdominal recordings.
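The sparsity assumption behind the DCT dictionary mentioned above can be demonstrated directly: a signal made of a few cosine modes has only a few large coefficients in an orthonormal DCT-II basis, which is what makes compressive sensing recovery possible. The sketch below is a plain stdlib DCT-II for illustration, not the paper's pipeline.

```python
import math

def dct(signal):
    """Orthonormal DCT-II. Quasi-periodic signals like ECG beats are
    approximately sparse in this basis."""
    n = len(signal)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (i + 0.5) * k / n)
                for i, x in enumerate(signal))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out
```

A single cosine mode transforms to a single nonzero coefficient, so a signal built from a few modes needs only a few dictionary atoms.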
      • Open Access Article

        8 - Write Error Rate Reduction Based on Thermal Effect and Dual-Vdd
        Hamidreza Zarandi, Sh. Jalilian
        Write error rate (WER) is one of the main drawbacks of STT-MRAM-based memories. The problem usually arises from thermal instability and process variation. Although several methods have been proposed to reduce the WER, they often do not consider the thermal effect of the MTJ and incur significant overhead. Therefore, a new method at a lower layer of abstraction with minimal penalty is needed. In this regard, a write driver core is proposed that writes data in two distinct ways depending on the state being written, exploiting the thermal characteristics of the MTJ cell together with a dual-Vdd scheme. Simulation results show an 11.38% reduction in write latency without area or power penalty.
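The dual-Vdd idea of choosing the write path from the data being written can be sketched as a simple selection rule: MTJ switching is asymmetric between the two directions, so the harder transition gets the higher supply rail and an unchanged bit skips the write pulse entirely. The voltage values and the convention that the 0-to-1 transition is the harder one are illustrative assumptions, not the paper's circuit parameters.

```python
def select_write_voltage(old_bit, new_bit, vdd_low=0.9, vdd_high=1.2):
    """Pick the write-driver supply per transition (toy model of a
    dual-Vdd write driver); return None when no write pulse is needed."""
    if old_bit == new_bit:
        return None  # state unchanged: skip the pulse, saving energy
    return vdd_high if new_bit == 1 else vdd_low
```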