
Journal articles on the topic 'MCS selection algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'MCS selection algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Li, Zhuo, Zecheng Li, and Wei Zhang. "Quality-Aware Task Allocation for Mobile Crowd Sensing Based on Edge Computing." Electronics 12, no. 4 (February 15, 2023): 960. http://dx.doi.org/10.3390/electronics12040960.

Abstract:
In the field of mobile crowd sensing (MCS), the traditional client–cloud architecture faces increasing challenges in communication and computation overhead. To address these issues, this paper introduces edge computing into the MCS system and proposes a two-stage task allocation optimization method under the constraint of limited computing resources. The method utilizes deep reinforcement learning for the selection of optimal edge servers for task deployment, followed by a greedy self-adaptive stochastic algorithm for the recruitment of sensing participants. In simulations, the proposed method demonstrated a 20% improvement in spatial coverage compared with the existing RBR algorithm and outperformed the LCBPA, SMA, and MOTA algorithms in 41, 42, and 48 tasks, respectively. This research contributes to the optimization of task allocation in MCS and advances the integration of edge computing in MCS systems.
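The abstract above describes a two-stage pipeline: learned edge-server selection followed by greedy participant recruitment. As a rough, hedged illustration of the second stage only, the Python sketch below recruits participants by marginal spatial coverage under a budget; the grid-cell coverage model, the cost field, and the stopping rule are illustrative assumptions, not the paper's actual greedy self-adaptive stochastic algorithm.

```python
# Hedged sketch: budget-constrained greedy recruitment by marginal coverage.
# The (pid, cells, cost) layout is an assumed toy model of a sensing task.

def recruit(participants, budget):
    """participants: iterable of (pid, cells, cost), cells a set of grid cells."""
    covered, chosen, spent = set(), set(), 0.0
    improved = True
    while improved:
        improved = False
        best, best_gain = None, 0
        for pid, cells, cost in participants:
            if pid in chosen or spent + cost > budget:
                continue
            gain = len(cells - covered)  # marginal coverage gain of this user
            if gain > best_gain:
                best, best_gain = (pid, cells, cost), gain
        if best is not None:
            pid, cells, cost = best
            chosen.add(pid)
            covered |= cells
            spent += cost
            improved = True
    return chosen, covered

users = [("u1", {1, 2, 3}, 2.0), ("u2", {3, 4}, 1.0), ("u3", {5}, 1.5)]
print(recruit(users, budget=3.0))  # picks u1 then u2 within the budget
```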
2

Meng, Qing Min, Xiong Gu, Feng Tian, and Bao Yu Zheng. "k-NN Based MCS Selection in Distributed OFDM Wireless Networks." Advanced Materials Research 225-226 (April 2011): 974–77. http://dx.doi.org/10.4028/www.scientific.net/amr.225-226.974.

Abstract:
Cognitive radio is seen as an intelligent wireless communication system that can learn from and adapt to its surrounding environment. The cognitive engine is the core component of a cognitive radio implementation, and the information in its knowledge base can be obtained using machine learning. In this work, we consider wireless networks with clustered nodes and an OFDM physical layer and present a combined sub-channel selection and modulation-and-coding-rate selection based on the k-Nearest Neighbor classification algorithm. Computer simulation results show that, in a frequency-selective fading channel, the scheme makes it easy for a new network node to choose an appropriate modulation and coding rate.
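As a hedged sketch of the idea in this abstract, the snippet below trains a k-Nearest Neighbor classifier to map channel measurements to a modulation-and-coding-scheme (MCS) index; the per-subchannel SNR feature layout, the toy label construction, and k = 5 are illustrative assumptions rather than the paper's exact setup.

```python
# Hedged sketch: k-NN selection of an MCS index from channel features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Training rows: per-subchannel SNRs (dB) from past links, labeled with the
# MCS index that worked best on that link (toy labels here).
X_train = rng.uniform(0, 30, size=(200, 8))
y_train = np.clip((X_train.mean(axis=1) // 5).astype(int), 0, 5)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

new_node_snrs = rng.uniform(0, 30, size=(1, 8))  # a node joining the network
print("suggested MCS index:", knn.predict(new_node_snrs)[0])
```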
3

Cuevas, Erik, and Adolfo Reyna-Orta. "A Cuckoo Search Algorithm for Multimodal Optimization." Scientific World Journal 2014 (2014): 1–20. http://dx.doi.org/10.1155/2014/497514.

Abstract:
Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and their distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms on a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy provides better and more consistent performance than existing well-known multimodal algorithms for the majority of test problems, while avoiding any serious computational deterioration.
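Of the three modifications listed above, the memory mechanism (1) is the easiest to isolate. The hedged sketch below registers a candidate as a potential local optimum only if it is far enough from every stored solution, otherwise keeping the better of the two; the Euclidean distance test, the threshold, and the minimization convention are illustrative assumptions.

```python
# Hedged sketch of a fitness-and-distance memory for multimodal search.
import numpy as np

def update_memory(memory, candidate, fitness, min_dist=0.1):
    """memory: list of (solution, fitness) pairs; candidate: 1-D array."""
    for i, (stored, stored_fit) in enumerate(memory):
        if np.linalg.norm(candidate - stored) < min_dist:
            # Same basin of attraction: keep only the better solution
            # (minimization assumed).
            if fitness < stored_fit:
                memory[i] = (candidate, fitness)
            return memory
    memory.append((candidate, fitness))  # a new potential local optimum
    return memory
```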
4

Kalaiarasu, M., and J. Anitha. "Modified Cuckoo Search-Support Vector Machine (MCS-SVM) Gene Selection and Classification for Autism Spectrum Disorder (ASD) Gene Expression." NeuroQuantology 18, no. 11 (September 30, 2020): 01–13. http://dx.doi.org/10.14704/nq.2020.18.11.nq20228.

Abstract:
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder characterized by weakened social skills, impaired verbal and non-verbal interaction, and repeated behavior. ASD has increased in the past few years, and the root cause of the symptoms cannot yet be determined. In ASD, gene expression is analyzed by classification methods. For the selection of genes in ASD, statistical filters and a wrapper-based Geometric Binary Particle Swarm Optimization-Support Vector Machine (GBPSO-SVM) algorithm have recently been implemented. However, GBPSO provides lower accuracy when the dataset samples are large, and it cannot be applied directly to multiple-output systems. To overcome this issue, a Modified Cuckoo Search-Support Vector Machine (MCS-SVM) wrapper-based feature selection algorithm is proposed, which improves the accuracy of the classifier in ASD. This work consists of three major steps: (i) preprocessing, (ii) gene selection, and (iii) classification. Firstly, in preprocessing, genes with mean or median ratios close to unity were removed from the original gene dataset; this reduced the dataset from 54,613 to 9,454 genes. Secondly, gene selection is performed using statistical filters and a wrapper algorithm. Statistical filter methods like the Wilcoxon Rank Sum test (WRS), the Class Correlation (COR) function, and the Two-sample T-test (TT) were applied in parallel with ten-fold cross validation to rank the most discriminatory genes. In the wrapper algorithm, Modified Cuckoo Search (MCS) is proposed for gene selection. This step decreases the number of genes in the dataset by removing redundant genes. Finally, an SVM classifier grades the combined gene subsets. The autism microarray dataset used in the analysis was downloaded from the benchmark public repository Gene Expression Omnibus (GEO) of the National Center for Biotechnology Information (NCBI). The classification methods are measured in terms of metrics like precision, recall, f-measure, and accuracy. The proposed MCS-SVM classifier achieves the highest accuracy when compared with Linear Regression (LR) and GBPSO-SVM classifiers.
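The core of any wrapper method of this kind is a subset-scoring step that the search procedure calls repeatedly. The hedged sketch below scores a candidate gene subset with ten-fold cross-validated SVM accuracy; the data shapes, the linear kernel, and the toy data are assumptions, and the modified cuckoo search that proposes subsets is omitted.

```python
# Hedged sketch of the wrapper objective: score a gene subset via CV accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def subset_fitness(X, y, gene_mask):
    """gene_mask: boolean vector marking which genes the subset keeps."""
    if not gene_mask.any():
        return 0.0
    scores = cross_val_score(SVC(kernel="linear"), X[:, gene_mask], y, cv=10)
    return scores.mean()

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 100))   # 60 samples x 100 genes (toy data)
y = rng.integers(0, 2, size=60)  # binary class labels
mask = np.zeros(100, dtype=bool)
mask[:10] = True                 # a candidate subset of 10 genes
print("10-fold CV accuracy:", subset_fitness(X, y, mask))
```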
5

Wang, Yanan, Guodong Sun, and Xingjian Ding. "Coverage-Balancing User Selection in Mobile Crowd Sensing with Budget Constraint." Sensors 19, no. 10 (May 23, 2019): 2371. http://dx.doi.org/10.3390/s19102371.

Abstract:
Mobile crowd sensing (MCS) is a new computing paradigm for the internet of things, and it is widely accepted as a powerful means to achieve urban-scale sensing and data collection. In an MCS campaign, smartphone users can sense their surrounding environments with their on-phone sensors and return the sensing data to the MCS organizer. In this paper, we focus on the coverage-balancing user selection (CBUS) problem with a budget constraint. Solving the CBUS problem aims to select a proper subset of users such that their sensing coverage is as large and balanced as possible, yet without violating the budget specified by the MCS campaign. We first propose a novel coverage balance-based sensing utility model, which effectively captures the MCS requester's joint requirement for coverage area and coverage balance. We then formally define the CBUS problem under the proposed sensing utility model. Because of the NP-hardness of the CBUS problem, we design a heuristic-based algorithm, called MIA, which tactfully employs the maximum independent set model to determine a preliminary subset of users from all the available users and then adjusts this subset to improve the budget utilization. MIA also includes a fast approach to calculating the area of the union coverage with arbitrarily complicated boundaries, which is applicable to any MCS scenario set up with a coverage area-based sensing utility. Extensive numerical experiments show the efficacy of our designs both in coverage balance and in total coverage area.
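To make the maximum-independent-set idea concrete, the hedged sketch below connects users whose coverage overlaps too much in a conflict graph, takes a maximal independent set as the preliminary, well-spread subset, and trims it to the budget; the overlap predicate, the cheapest-first trim, and the use of networkx's randomized routine are illustrative assumptions, not MIA's exact rules.

```python
# Hedged sketch of an independent-set-based preliminary user selection.
import networkx as nx

def preliminary_users(users, overlap, budget):
    """users: {uid: cost}; overlap(u, v) -> True if coverages overlap heavily."""
    g = nx.Graph()
    g.add_nodes_from(users)
    ids = list(users)
    for i, u in enumerate(ids):
        for v in ids[i + 1:]:
            if overlap(u, v):
                g.add_edge(u, v)   # conflicting pair: never pick both
    candidate = nx.maximal_independent_set(g)  # mutually non-overlapping users
    candidate.sort(key=lambda u: users[u])     # cheapest first (assumed rule)
    chosen, spent = [], 0.0
    for u in candidate:
        if spent + users[u] <= budget:
            chosen.append(u)
            spent += users[u]
    return chosen
```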
6

Pošík, Petr, Waltraud Huyer, and László Pál. "A Comparison of Global Search Algorithms for Continuous Black Box Optimization." Evolutionary Computation 20, no. 4 (December 2012): 509–41. http://dx.doi.org/10.1162/evco_a_00084.

Abstract:
Four methods for global numerical black-box optimization with origins in the mathematical programming community are described and experimentally compared with the state-of-the-art evolutionary method, BIPOP-CMA-ES. The methods chosen for the comparison exhibit various features that are potentially interesting for the evolutionary computation community: systematic sampling of the search space (DIRECT, MCS), possibly combined with a local search method (MCS), or a multi-start approach (NEWUOA, GLOBAL), possibly equipped with a careful selection of points from which to run a local optimizer (GLOBAL). The recently proposed “comparing continuous optimizers” (COCO) methodology was adopted as the basis for the comparison. Based on the results, we draw suggestions about which algorithm should be used depending on the available budget of function evaluations, and we propose several possibilities for hybridizing evolutionary algorithms (EAs) with features of the other compared algorithms.
7

Gu, Zheng Gang, and Kun Hong Liu. "Microarray Data Classification Based on Evolutionary Multiple Classifier System." Applied Mechanics and Materials 130-134 (October 2011): 2077–80. http://dx.doi.org/10.4028/www.scientific.net/amm.130-134.2077.

Abstract:
Designing an evolutionary multiple classifier system (MCS) is a relatively new research area. In this paper, we propose a genetic algorithm (GA) based MCS for microarray data classification. We first construct a feature pool with different feature selection methods, and then a multi-objective GA is applied to implement an ensemble feature selection process so as to generate a set of classifiers. When this GA stops, a set of base classifiers has been generated. Here we use all the nondominated individuals in the last generation to build an ensemble system, and we compare the proposed ensemble method with a method that applies a classifier selection process to select proper classifiers from all the individuals in the last generation. The experimental results show that the proposed ensemble method is robust and can lead to promising results.
8

Alizadeh Moghaddam, S. H., M. Mokhtarzade, and S. A. Alizadeh Moghaddam. "A NEW MULTIPLE CLASSIFIER SYSTEM BASED ON A PSO ALGORITHM FOR THE CLASSIFICATION OF HYPERSPECTRAL IMAGES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (October 18, 2019): 71–75. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-71-2019.

Abstract:
Multiple classifier systems (MCSs) have shown great performance for the classification of hyperspectral images. The requirements for a successful MCS are (1) diversity between ensembles and (2) good classification accuracy of each ensemble. In this paper, we develop a new MCS method based on a particle swarm optimization (PSO) algorithm. Firstly, in each ensemble of the proposed method, called PSO-MCS, PSO identifies a subset of the spectral bands with a high J2 value, which is a measure of class separability. Then, an SVM classifier is used to classify the input image, applying the selected features in each ensemble. Finally, the classification results of all the ensembles are integrated using a majority voting strategy. Benefiting from the PSO algorithm, PSO-MCS selects appropriate features. In addition, because different features are selected in different runs of PSO, diversity between the ensembles is provided. Experimental results on an AVIRIS Indian Pines image show the superiority of the proposed method over its competitor, the random feature selection method.
9

Abououf, Menatalla, Shakti Singh, Hadi Otrok, Rabeb Mizouni, and Ernesto Damiani. "Machine Learning in Mobile Crowd Sourcing: A Behavior-Based Recruitment Model." ACM Transactions on Internet Technology 22, no. 1 (February 28, 2022): 1–28. http://dx.doi.org/10.1145/3451163.

Abstract:
With the advent of mobile crowd sourcing (MCS) systems and their applications, the selection of the right crowd is gaining utmost importance. The increasing variability in the context of MCS tasks makes the selection of not only capable but also willing workers crucial for a high task completion rate. Most of the existing MCS selection frameworks rely primarily on reputation-based feedback mechanisms to assess the level of commitment of potential workers. Such frameworks select workers with high reputation scores but without any contextual awareness of the workers or the task at the time of selection. This may lead to an unfair selection of workers who will not perform the task. Hence, reputation on its own only gives an approximation of workers’ behaviors, since it assumes that workers always behave consistently regardless of the situational context. However, following the concept of cross-situational consistency, where people tend to show similar behavior in similar situations and behave differently in disparate ones, this work proposes a novel recruitment system in MCS based on behavioral profiling. The proposed approach uses machine learning to predict the probability of workers performing a given task, based on their learned behavioral models. Subsequently, a group-based selection mechanism, based on the genetic algorithm, uses these behavioral models in combination with a reputation-based model to recruit a group of workers that maximizes the quality of recruitment of the tasks. Simulations based on a real-life dataset show that considering human behavior in varying situations improves the quality of recruitment achieved by the tasks and their completion confidence when compared with a benchmark that relies solely on reputation.
10

Krishnaveni, P., and S. R. Balasundaram. "Automatic Text Summarization by Providing Coverage, Non-Redundancy, and Novelty Using Sentence Graph." Journal of Information Technology Research 15, no. 1 (January 2022): 1–18. http://dx.doi.org/10.4018/jitr.2022010108.

Abstract:
The day-to-day growth of online information necessitates intensive research in automatic text summarization (ATS). ATS software produces a summary by extracting important information from the original text. With the help of summaries, users can easily read and understand the documents of interest. Most approaches to ATS use only local properties of the text. Moreover, the numerous properties make sentence selection difficult and complicated. So this article uses graph-based summarization to exploit the structural and global properties of the text. It introduces the maximal clique based sentence selection (MCBSS) algorithm to select important and non-redundant sentences that cover all concepts of the input text for the summary. The MCBSS algorithm finds novel information using maximal cliques (MCs). Experimental results of Recall-Oriented Understudy for Gisting Evaluation (ROUGE) on the Timeline dataset show that the proposed work outperforms the existing graph algorithms Bushy Path (BP), Aggregate Similarity (AS), and TextRank (TR).
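As a hedged sketch of the maximal-clique idea, the snippet below links sentences whose TF-IDF cosine similarity crosses a threshold, treats each maximal clique as one concept, and keeps one representative sentence per clique; the similarity threshold and the earliest-sentence representative rule are illustrative assumptions, not MCBSS itself.

```python
# Hedged sketch: maximal cliques over a sentence-similarity graph.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, sim_threshold=0.3):
    sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if sim[i, j] >= sim_threshold:
                g.add_edge(i, j)
    # One representative per maximal clique (earliest sentence, assumed rule).
    reps = sorted({min(clique) for clique in nx.find_cliques(g)})
    return [sentences[i] for i in reps]
```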
11

Banov, Reni, and Zdenko Šimić. "On Minimal Cut Sets Representation with Binary Decision Diagrams." Journal of Energy - Energija 71, no. 4 (July 10, 2023): 12–15. http://dx.doi.org/10.37798/2022714420.

Abstract:
Since their introduction as a canonical representation of logical functions, Binary Decision Diagrams (BDDs) have gained wide acceptance in numerous industrial applications. This paper summarizes the properties of the BDD representation of the Minimal Cut Sets (MCS) of Fault Tree (FT) models most typically encountered in nuclear energetics. Cut sets in the MCS are defined as paths from the top BDD node to terminal nodes in the BDD, on which quantitative and qualitative FT analysis (FTA) is performed. The core of FTA on BDDs is performed with the help of two fundamental algorithms, one for conditional probability evaluation and another for the selection of cut sets. The accuracy of conditional probability evaluation is essential for an unbiased quantitative analysis, such as the top event probability or the determination of event importance measures. The cut set selection algorithm is shown in a generic version that introduces logical predicates for its selection criteria. As is known, the efficiency of the depicted algorithms depends only on the number of BDD nodes used for the FT representation. In order to appraise the compactness of the BDD representation of FT models, their characteristics have been evaluated on several real-life models from the Nuclear Power Plant Krško. The extraordinary compactness of the BDD representation is reflected in its ability to support advanced dynamic (i.e., what-if) analysis of FT models. The efficiency of such an approach is recognized by commercial vendors upgrading their FT tools to new versions that implement BDD-based algorithms.
12

Zhang, Jing, Xiaoxiao Yang, Xin Feng, Hongwei Yang, and An Ren. "A Joint Constraint Incentive Mechanism Algorithm Utilizing Coverage and Reputation for Mobile Crowdsensing." Sensors 20, no. 16 (August 11, 2020): 4478. http://dx.doi.org/10.3390/s20164478.

Abstract:
Selection of the optimal users to maximize the quality of the collected sensing data within a certain budget range is a crucial issue that affects the effectiveness of mobile crowdsensing (MCS). The coverage of mobile users (MUs) in a target area is relevant to the accuracy of sensing data. Furthermore, the historical reputation of MUs can reflect their previous behavior. Therefore, this study proposes a coverage and reputation joint constraint incentive mechanism algorithm (CRJC-IMA) based on Stackelberg game theory for MCS. First, the location information and the historical reputation of mobile users are used to select the optimal users, and the information quality requirement will be satisfied consequently. Second, a two-stage Stackelberg game is applied to analyze the sensing level of the mobile users and obtain the optimal incentive mechanism of the server center (SC). The existence of the Nash equilibrium is analyzed and verified on the basis of the optimal response strategy of mobile users. In addition, mobile users will adjust the priority of the tasks in time series to enable the total utility of all their tasks to reach a maximum. Finally, the EM algorithm is used to evaluate the data quality of the task, and the historical reputation of each user will be updated accordingly. Simulation experiments show that the coverage of the CRJC-IMA is higher than that of the CTSIA. The utility of mobile users and SC is higher than that in STD algorithms. Furthermore, the utility of mobile users with the adjusted task priority is greater than that without a priority order.
13

Michaud, Langis, Patrick Simard, Remy Marcotte-Collard, Mhamed Ouzzani, and Loraine Sinnott. "The Montreal Experience: A Retrospective Study Part I—Basic Principles and Treatment Algorithm." Applied Sciences 11, no. 16 (August 13, 2021): 7455. http://dx.doi.org/10.3390/app11167455.

Abstract:
CONTEXT: The authors have refined myopia control strategies (MCS) from their experience treating more than 800 children who were followed at the Montreal School of Optometry Clinic (CUV). They developed a treatment algorithm known as the Montreal Experience (ME). Contrary to many other MCS, treatment modalities are selected after careful evaluation of a patient’s parameters (rate of progression, age of myopia onset, corneal parameters, pupil area) and the risk factors for ocular pathology (growth charts), taking into account the patient’s lifestyle and potential compliance. This represents a customized approach for each patient. PURPOSE: To evaluate the efficacy of MCS used following the ME algorithm; the primary outcome relates to axial length progression over 24 months. METHODS: This is a retrospective study, conducted after approval by the University IRB. Data were extracted from the file of each patient who (1) consulted CUV between January 2017 and December 2018 and (2) was kept under the same MCS (same design/concentration). The clinical population is composed of 298 patients (35% Caucasian; 45% Asian; 20% others), with a median age of 11 (range 5–18). The treatment options were orthokeratology (OK; 4 designs; N = 140), multifocal soft contact lenses (SMCL; 5 designs; N = 128), and low-dose atropine (LDA 0.01% to 0.25%; N = 42). RESULTS: Results are analyzed through statistical models designed for this purpose. At the end of a stepwise selection process that sequentially removed model terms that were not statistically significant, nine model terms remained: month, modality, the interaction of month and modality, refraction (SEQ), the interaction of SEQ and modality, gender, age, the interaction of age and month, and the interaction of age and modality. A total of 298 files were kept for analysis. Participant age varied from 9.7 to 12.5 years old. Baseline AL varied from 24.9 to 25.3 mm, and SE refraction was −3.7 ± 1.7 D on average. This study population was divided between Caucasian (34%), Asian (44%), and other ethnic origins (22%). Overall, the results indicate that outcomes vary according to modality and month only. There is no statistical difference based on age, gender, or SEQ. All methods used were effective in slowing natural AL growth. Evolution was lowest when using smaller-treatment-zone OK lenses (0.249 mm) and highest (0.376 mm) for those treated with LDA. This OK advantage was statistically significant versus other modalities at 1 and 2 years. CONCLUSION: The Montreal Experience reveals that personalized MCS may be effective in managing myopia efficiently. It shows AL evolution comparable to the documented natural evolution of emmetropes, especially when using customized or smaller-treatment-zone OK lens designs. Future work on other populations will confirm this tendency.
14

Shao, Zihao, Huiqiang Wang, and Guangsheng Feng. "PUEGM: A Method of User Revenue Selection Based on a Publisher-User Evolutionary Game Model for Mobile Crowdsensing." Sensors 19, no. 13 (July 2, 2019): 2927. http://dx.doi.org/10.3390/s19132927.

Abstract:
Mobile crowdsensing (MCS) is a way to use social resources to solve high-precision environmental awareness problems in real time. Publishers hope to collect as much sensed data as possible at a relatively low cost, while users want to earn more revenue at a low cost. Low-quality data will reduce the efficiency of MCS and lead to a loss of revenue. However, existing work lacks research on the selection of user revenue under the premise of ensuring data quality. In this paper, we propose a Publisher-User Evolutionary Game Model (PUEGM) and a revenue selection method to solve the evolutionary stable equilibrium problem based on non-cooperative evolutionary game theory. Firstly, the choice of user revenue is modeled as a Publisher-User Evolutionary Game Model. Secondly, based on the error-elimination decision theory, we combine a data quality assessment algorithm in the PUEGM, which aims to remove low-quality data and improve the overall quality of user data. Finally, the optimal user revenue strategy under different conditions is obtained from the evolutionary stability strategy (ESS) solution and stability analysis. In order to verify the efficiency of the proposed solutions, extensive experiments using some real data sets are conducted. The experimental results demonstrate that our proposed method has high accuracy of data quality assessment and a reasonable selection of user revenue.
15

Garcia-Villegas, Eduard, Alejandro Lopez-Garcia, and Elena Lopez-Aguilera. "Genetic Algorithm-Based Grouping Strategy for IEEE 802.11ah Networks." Sensors 23, no. 2 (January 12, 2023): 862. http://dx.doi.org/10.3390/s23020862.

Abstract:
The IEEE 802.11ah standard is intended to adapt the specifications of IEEE 802.11 to the Internet of Things (IoT) scenario. One of the main features of IEEE 802.11ah consists of the Restricted Access Window (RAW) mechanism, designed for scheduling transmissions of groups of stations within certain periods of time or windows. With an appropriate configuration, the RAW feature reduces contention and improves energy efficiency. However, the standard specification does not provide mechanisms for the optimal setting of RAW parameters. In this way, this paper presents a grouping strategy based on a genetic algorithm (GA) for IEEE 802.11ah networks operating under the RAW mechanism and considering heterogeneous stations, that is, stations using different modulation and coding schemes (MCS). We define a fitness function from the combination of the predicted system throughput and fairness, and provide the tuning of the GA parameters to obtain the best result in a short time. The paper also includes a comparison of different alternatives with regard to the stages of the GA, i.e., parent selection, crossover, and mutation methods. As a proof of concept, the proposed GA-based RAW grouping is tested on a more constrained device, a Raspberry Pi 3B+, where the grouping method converges in around 5 s. The evaluation concludes with a comparison of the GA-based grouping strategy with other grouping approaches, thus showing that the proposed mechanism provides a good trade-off between throughput and fairness performance.
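A GA of this kind needs a scalar fitness that trades throughput against fairness. The hedged sketch below blends total predicted throughput with Jain's fairness index over per-station throughputs; the weighting scheme and alpha value are illustrative assumptions, not the paper's fitness function.

```python
# Hedged sketch: fitness mixing aggregate throughput and Jain's fairness.

def jain_fairness(xs):
    """Jain's index: between 1/n and 1, with 1 meaning perfectly fair."""
    n, s = len(xs), sum(xs)
    return (s * s) / (n * sum(x * x for x in xs)) if s > 0 else 0.0

def fitness(per_station_throughput, alpha=0.5):
    """Weighted blend of total throughput and fairness (alpha is assumed)."""
    total = sum(per_station_throughput)
    return alpha * total + (1 - alpha) * total * jain_fairness(per_station_throughput)

print(fitness([10.0, 10.0, 10.0]))  # fair allocation keeps its full total
print(fitness([28.0, 1.0, 1.0]))    # same total, penalized for unfairness
```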
16

Lim, Kai-Zheong, Christopher Daly, Jessica Brown, and Tony Goldschlager. "Dynamic Posture-Related Preoperative Pain as a Single Clinical Criterion in Patient Selection for Extreme Lateral Interbody Fusion Without Direct Decompression." Global Spine Journal 9, no. 6 (November 15, 2018): 575–82. http://dx.doi.org/10.1177/2192568218811317.

Abstract:
Study Design: Prospective cohort study. Objectives: Evidence on predicting the success of indirect decompression via extreme lateral interbody fusion (XLIF) is scarce. The authors investigated if patients who could achieve a pain-free position preoperatively would derive clinical benefit from XLIF without direct decompression. Methods: Data from 50 consecutive patients who underwent XLIF with and without direct decompression by a single surgeon from January 2014 to August 2017 were collected. The primary outcome is the rate of failure of patients who underwent XLIF without direct decompression, characterized by persistence of pain postoperatively that required reoperation within 6 months postoperatively. Secondary outcomes are clinical outcomes and patient-reported quality of life outcome data, including visual analogue scale for leg (VASL) and back (VASB) pain, Oswestry Disability Index (ODI), and Physical Component Score (PCS) and Mental Component Score (MCS) of the SF-12, for up to 2 years postoperatively. Results: One patient with preoperative dynamic posture-related pain who underwent XLIF without direct decompression subsequently had a reoperation due to persisting pain. Statistically significant improvement was achieved across all patient-reported outcomes (P < .05): improvement of 68% for VASL, 61% for VASB, 50% for ODI, 33% for PCS, and 11% for MCS of the SF-12 at last follow-up. Six patients had thigh symptoms that resolved. Conclusion: The simple clinical criterion based on preoperative postural pain status may help clinicians in patient selection for indirect decompression via XLIF without the need for direct decompression. Further studies with larger cohorts are warranted to establish the validity of the algorithm.
17

Chu, Weng-Ming, Koan-Yuh Chang, Chien-Yu Lu, Chang-Hung Hsu, Chien-Hung Liu, and Yung-Chia Hsiao. "A New Approach to Determine the Critical Path in Stochastic Activity Network." Mathematical Problems in Engineering 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/547627.

Abstract:
The determination of the critical path (CP) in stochastic networks is difficult. This is partly due to the randomness of path durations and partly due to the probability issue of the selection of the critical path in the network. What we are confronted with is not only the complexity among random variables but also the problem of path dependence in the network. Besides, we found that the CP is not necessarily the longest (or shortest) path in the network, which is the conventional assumption in use. The Program Evaluation and Review Technique (PERT) and Critical Path Index (CPI) approaches are not able to deal with this problem efficiently. In this study, we give a new definition of the CP in stochastic networks and propose a modified label-correcting tracing algorithm (M-LCTA) to solve it. Based on the numerical results, compared with Monte Carlo simulation (MCS), the proposed approach can accurately determine the CP in stochastic networks.
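For context on the Monte Carlo side of this comparison, the hedged sketch below implements a classical criticality-index estimate, the kind of baseline the paper improves on: sample activity durations, trace the longest path per sample, and count how often each activity lies on it. The toy activity-on-node network and truncated normal durations are assumptions, and, as the abstract notes, the true CP need not coincide with the longest path.

```python
# Hedged sketch: Monte Carlo criticality indices in a toy activity network.
import random
from collections import defaultdict

preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}  # toy DAG
dist = {a: (2.0, 1.0) for a in preds}                       # (mean, st. dev.)
order = ["A", "B", "C", "D"]                                # topological order

def critical_path(duration):
    finish, via = {}, {}
    for a in order:
        start = max((finish[p] for p in preds[a]), default=0.0)
        via[a] = max(preds[a], key=lambda p: finish[p], default=None)
        finish[a] = start + duration[a]
    path, node = [], max(finish, key=finish.get)
    while node is not None:        # walk back along critical predecessors
        path.append(node)
        node = via[node]
    return path

counts, runs = defaultdict(int), 10_000
for _ in range(runs):
    d = {a: max(0.1, random.gauss(*dist[a])) for a in preds}
    for a in critical_path(d):
        counts[a] += 1
print({a: counts[a] / runs for a in order})  # criticality index per activity
```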
18

Mishra, Bharat, Ajay Kumar, Jacek Zaburko, Barbara Sadowska-Buraczewska, and Danuta Barnat-Hunek. "Dynamic Response of Angle Ply Laminates with Uncertainties Using MARS, ANN-PSO, GPR and ANFIS." Materials 14, no. 2 (January 14, 2021): 395. http://dx.doi.org/10.3390/ma14020395.

Abstract:
In the present work, for the first time, the free vibration response of angle-ply laminates with uncertainties is attempted using Multivariate Adaptive Regression Splines (MARS), Artificial Neural Network-Particle Swarm Optimization (ANN-PSO), Gaussian Process Regression (GPR), and an Adaptive Network Fuzzy Inference System (ANFIS). The present approach employs a 2D C0 stochastic finite element (FE) model based on the Third Order Shear Deformation Theory (TSDT) in conjunction with MARS, ANN-PSO, GPR, and ANFIS. The TSDT model eliminates the requirement of a shear correction factor owing to its consideration of the actual parabolic distribution of transverse shear stress. Zero transverse shear stress at the top and bottom of the plate is enforced to compute the higher-order unknowns. The C0 FE model makes it commercially viable. Stochastic FE analysis is done with a Monte Carlo Simulation (MCS) FORTRAN in-house code; the selection of design points using a random variable framework and the soft computing with MARS, ANN-PSO, GPR, and ANFIS are implemented using MATLAB in-house code. Following the random variable framework, design points were selected from the input data generated through Monte Carlo Simulation. A total of four mode shapes are analyzed in the present study. A comparison of the present work with results in the literature found good agreement. The stochastic parameters are Young's elastic modulus, the shear modulus, and the Poisson ratio. A lognormal distribution of properties is assumed in the present work. The current soft computing models shrink the number of trials and were found as computationally efficient as the MCS-based FE modelling. The paper presents a comparison of the performance of the MARS, ANN-PSO, GPR, and ANFIS algorithms with the stochastic FE model based on TSDT.
19

Zajdel, R., K. Sośnica, M. Drożdżewski, G. Bury, and D. Strugarek. "Impact of network constraining on the terrestrial reference frame realization based on SLR observations to LAGEOS." Journal of Geodesy 93, no. 11 (October 17, 2019): 2293–313. http://dx.doi.org/10.1007/s00190-019-01307-0.

Abstract:
Abstract The Satellite Laser Ranging (SLR) network struggles with some major limitations including an inhomogeneous global station distribution and uneven performance of SLR sites. The International Laser Ranging Service (ILRS) prepares the time-variable list of the most well-performing stations denoted as ‘core sites’ and recommends using them for the terrestrial reference frame (TRF) datum realization in SLR processing. Here, we check how different approaches of the TRF datum realization using minimum constraint conditions (MCs) and the selection of datum-defining stations affect the estimated SLR station coordinates, the terrestrial scale, Earth rotation parameters (ERPs), and geocenter coordinates (GCC). The analyses are based on the processing of the SLR observations to LAGEOS-1/-2 collected between 2010 and 2018. We show that it is essential to reject outlying stations from the reference frame realization to maintain a high quality of SLR-based products. We test station selection criteria based on the Helmert transformation of the network w.r.t. the a priori SLRF2014 coordinates to reject misbehaving stations from the list of datum-defining stations. The 25 mm threshold is optimal to eliminate the epoch-wise temporal deviations and to provide a proper number of datum-defining stations. According to the station selection algorithm, we found that some of the stations that are not included in the list of ILRS core sites could be taken into account as potential core stations in the TRF datum realization. When using a robust station selection for the datum definition, we can improve the station coordinate repeatability by 8%, 4%, and 6%, for the North, East and Up components, respectively. The global distribution of datum-defining stations is also crucial for the estimation of ERPs and GCC. When excluding just two core stations from the SLR network, the amplitude of the annual signal in the GCC estimates is changed by up to 2.2 mm, and the noise of the estimated pole coordinates is substantially increased.
20

De Beelde, Brecht, Andrés Almarcha, David Plets, and Wout Joseph. "V-Band Channel Modeling, Throughput Measurements, and Coverage Prediction for Indoor Residential Environments." Electronics 11, no. 4 (February 20, 2022): 659. http://dx.doi.org/10.3390/electronics11040659.

Abstract:
With the increased resolution and frame rates of video recordings, in combination with the current evolution towards video-on-demand streaming services and users expecting ubiquitous wireless connectivity, it is necessary to design wireless communication systems that allow high-rate data transfer. The large bandwidths that are available in the mmWave frequency band allow such high data rates. In this paper, we provide an experimental and simulated indoor residential radio channel model at V-band frequencies and perform packet error rate and throughput measurements at 60 GHz using IEEE 802.11ad transceivers. We compare the path loss and throughput measurements to simulations using a network performance prediction tool. The path loss measurement results using an omnidirectional transmit antenna correspond well to generic indoor mmWave channel models. Double-directional path loss measurements show that generic models underestimate the path loss of non-Line-of-Sight (NLOS) links. A ray-launching algorithm is designed, validated, and used for IEEE 802.11ad throughput estimation based on link budget calculations. The link budget underestimates the achieved throughput when compared to adaptive-rate MCS selection in a commercial transceiver based on the measured signal-to-noise ratio. Packet error rate measurements confirm that, even for NLOS links, throughputs exceeding 1 Gbps are possible.
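Adaptive-rate MCS selection of the kind mentioned above is, at its simplest, a threshold lookup on the measured SNR. The sketch below is a hedged illustration only: the SNR thresholds, MCS indices, and rate values are invented placeholders, not IEEE 802.11ad specification numbers.

```python
# Hedged sketch: pick the highest-rate MCS whose SNR requirement is met.
MCS_TABLE = [  # (min SNR in dB, MCS index, PHY rate in Gbps) - assumed values
    (3.0, 1, 0.4),
    (6.0, 3, 1.0),
    (9.0, 6, 1.9),
    (12.0, 9, 2.5),
]

def select_mcs(snr_db):
    chosen = None
    for min_snr, mcs, rate in MCS_TABLE:
        if snr_db >= min_snr:
            chosen = (mcs, rate)   # keep upgrading while the SNR allows it
    return chosen                  # None if even the lowest MCS is unreachable

print(select_mcs(10.5))  # -> (6, 1.9) under these assumed thresholds
```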
21

Wu, Xingyu, Bingbing Jiang, Kui Yu, Huanhuan Chen, and Chunyan Miao. "Multi-Label Causal Feature Selection." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6430–37. http://dx.doi.org/10.1609/aaai.v34i04.6114.

Abstract:
Multi-label feature selection has received considerable attention during the past decade. However, existing algorithms do not attempt to uncover the underlying causal mechanism; they individually solve different types of variable relationships, ignoring the mutual effects between them. Furthermore, these algorithms lack interpretability: they can only select features for all labels, but cannot explain the correlation between a selected feature and a certain label. To address these problems, in this paper, we theoretically study the causal relationships in multi-label data and propose a novel Markov blanket based multi-label causal feature selection (MB-MCF) algorithm. MB-MCF first mines the causal mechanism of labels and features to obtain a complete representation of the information about labels. Based on the causal relationships, MB-MCF then selects predictive features and simultaneously distinguishes common features shared by multiple labels from label-specific features owned by single labels. Experiments on real-world data sets validate that MB-MCF can automatically determine the number of selected features and simultaneously achieve the best performance compared with state-of-the-art methods. An experiment on the Emotions data set further demonstrates the interpretability of MB-MCF.
22

Zhang, Xindi, Bohan Li, Shaowei Cai, and Yiyuan Wang. "Efficient Local Search based on Dynamic Connectivity Maintenance for Minimum Connected Dominating Set." Journal of Artificial Intelligence Research 71 (May 18, 2021): 89–119. http://dx.doi.org/10.1613/jair.1.12618.

Abstract:
The minimum connected dominating set (MCDS) problem is an important extension of the minimum dominating set problem, with wide applications, especially in wireless networks. Most previous works focused on solving the MCDS problem in graphs of relatively small size, mainly due to the complexity of maintaining connectivity. This paper explores techniques for solving the MCDS problem in massive real-world graphs of wide practical importance. Firstly, we propose a local greedy construction method with a reasoning rule called 1hopReason. Secondly, and most importantly, a hybrid dynamic connectivity maintenance method (HDC+) is designed to switch alternately between a novel fast connectivity maintenance method based on spanning trees and its previous counterpart. Thirdly, we adopt a two-level vertex selection heuristic with a newly proposed scoring function called chronosafety to make the algorithm more considerate when selecting vertices. We design a new local search algorithm called FastCDS based on these three ideas. Experiments show that FastCDS significantly outperforms five state-of-the-art MCDS algorithms on both massive graphs and classic benchmarks.
23

Niaz, Rizwan, Ibrahim M. Almanjahie, Zulfiqar Ali, Muhammad Faisal, and Ijaz Hussain. "A Novel Framework for Selecting Informative Meteorological Stations Using Monte Carlo Feature Selection (MCFS) Algorithm." Advances in Meteorology 2020 (February 17, 2020): 1–13. http://dx.doi.org/10.1155/2020/5014280.

Abstract:
The spatial distribution of meteorological stations has a significant role in hydrological research. Meteorological data play a significant role in drought monitoring; in this regard, the accurate and suitable provision of meteorological stations is becoming crucial to improve and strengthen the skill of drought prediction. In this perspective, the choice of meteorological stations in a specific region has substantial importance for accurate estimation and continuous monitoring of drought hazards at the regional level. However, installation of, and data mining on, a large number of meteorological stations requires high cost and resources. Therefore, it is necessary to rank and find dependencies among existing meteorological stations in a particular region for further climatological analysis and reanalysis of databases. In this paper, a Monte Carlo feature selection and interdependency discovery (MCFS-ID) algorithm-based framework is proposed to identify the important meteorological stations in a particular region. We applied the proposed framework to 12 meteorological stations situated in varying climatological regions of Punjab (Pakistan). We employed the drought index SPTI on 1-, 3-, 6-, 9-, 12-, 24-, and 48-month time-scale data to find the interdependencies among meteorological stations at various locations. We found that Sialkot has significant regional importance for studying the SPTI-3, SPTI-6, and SPTI-48 indices. This regional importance is based on scores of relative importance (RI); for example, the RI values for the SPTI-3, SPTI-6, and SPTI-48 indices are 0.1570, 0.1080, and 0.0270, respectively. Furthermore, the Jhelum station has more relative importance (RI = 0.1410 and 0.1030) for the SPTI-1 and SPTI-9 indices, while varying concentration behaviour is observed for the remaining time scales.
24

Reis, Cecília, and J. A. Tenreiro Machado. "Computational Intelligence in Circuit Synthesis." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 9 (November 20, 2007): 1122–27. http://dx.doi.org/10.20965/jaciii.2007.p1122.

Abstract:
This paper is devoted to the synthesis of combinational logic circuits through computational intelligence or, more precisely, using evolutionary computation techniques. Two evolutionary algorithms are studied, the Genetic Algorithm and the Memetic Algorithm (GAs, MAs), together with one swarm intelligence algorithm, Particle Swarm Optimization (PSO). GAs are optimization and search techniques based on the principles of genetics and natural selection. MAs are evolutionary algorithms that include a stage of individual optimization as part of their search strategy, with the individual optimization taking the form of a local search. PSO is a population-based search algorithm that starts with a population of random solutions called particles. This paper presents the results of digital circuit design using the three algorithms above. The results show the statistical characteristics of these algorithms with respect to the number of generations required to reach a solution. The article also analyzes a new fitness function that includes an error discontinuity measure, which was shown to improve the performance of the algorithms significantly.
25

Bekhouche, Safia, and Yamina Mohamed Ben Ali. "Feature Selection in GPCR Classification Using BAT Algorithm." International Journal of Computational Intelligence and Applications 19, no. 01 (March 2020): 2050006. http://dx.doi.org/10.1142/s1469026820500066.

Abstract:
G-Protein-Coupled Receptors (GPCRs) are a large family of membrane proteins, and until now some of them still remain orphans. Predicting GPCR functions is a challenging task; it depends closely on their classification, which requires a digital representation of each protein chain as an attribute vector. A major problem of GPCR databases is their great number of features, which can produce combinatorial explosion and increase the complexity of classification algorithms. Feature selection techniques are used to deal with this problem by minimizing the dimension of the feature space while keeping the most relevant features. In this paper, we propose to use the BAT algorithm for extracting the pertinent features and improving the classification results. We compared the results obtained by our system with two other bio-inspired algorithms, an Evolutionary Algorithm and PSO search. The quality metrics used for comparison are Error Rate, Accuracy, MCC, and F-measure. Experimental results indicate that our system is more efficient.
26

Hessen, Shrouk H., Hatem M. Abdul-kader, Ayman E. Khedr, and Rashed K. Salem. "Developing Multiagent E-Learning System-Based Machine Learning and Feature Selection Techniques." Computational Intelligence and Neuroscience 2022 (January 30, 2022): 1–8. http://dx.doi.org/10.1155/2022/2941840.

Abstract:
Recently, the artificial intelligence (AI) domain has expanded to include finance, education, health, and mining. Artificial intelligence controls the performance of systems that use new technologies, especially in the education environment. The multiagent system (MAS) is considered an intelligent system to facilitate the e-learning process in the educational environment. MAS is used to make interaction among agents easy, which supports the use of feature selection. Feature selection methods are used to select the important and relevant features from the database that can help machine learning algorithms produce high performance. This paper aims to propose an effective and suitable multiagent system based on machine learning algorithms and feature selection methods to enhance the e-learning process in the educational environment by predicting pass or fail results. The univariate and Extra Trees feature selection methods are used to select the essential attributes from the database. Five machine learning algorithms, named Decision Tree (DT), Logistic Regression (LR), Random Forest (RF), Naive Bayes (NB), and the K-nearest neighbors algorithm (KNN), are applied to all features and to the selected features. The results showed that the learning algorithms evaluated on the features selected by the Extra Trees method achieved the highest performance under cross-validation and testing evaluation.
27

Sun, Chunxiao, Zhiyong Zhang, Jinglei Tang, and Shuai Liu. "A Selection-based MCL Clustering Algorithm for Motif Discovery." American Journal of Biochemistry and Biotechnology 14, no. 4 (April 1, 2018): 298–306. http://dx.doi.org/10.3844/ajbbsp.2018.298.306.

28

Somula, Ramasubbareddy, and R. Sasikala. "A Load and Distance Aware Cloudlet Selection Strategy in Multi-Cloudlet Environment." International Journal of Grid and High Performance Computing 11, no. 2 (April 2019): 85–102. http://dx.doi.org/10.4018/ijghpc.2019040105.

Abstract:
Day by day, the usage of mobile devices (MDs) is growing in people's lives. However, MDs are still limited in terms of memory, battery lifetime, and processing capacity. In order to overcome these issues, the new emerging technology named mobile cloud computing (MCC) has been introduced. The offloading mechanism executes resource-intensive applications on the remote cloud to save both battery utilization and execution time. But the high-latency challenge in MCC still needs to be addressed by executing resource-intensive tasks at a nearby cloudlet server. The key challenge is to find the optimal cloudlet to execute a task so as to save computation time. In this article, the authors propose a Round Robin algorithm based on cloudlet selection in a heterogeneous MCC system. This article considers both the load and the distance of servers to find the optimal cloudlet and minimize the waiting time of user requests in the server queue. Additionally, the authors provide a mathematical evaluation of the algorithm and compare it with existing load balancing algorithms.
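As a hedged illustration of load-and-distance-aware selection, the sketch below scores each cloudlet by a weighted sum of normalized queue load and normalized distance and picks the minimum; the equal weights and max-normalization are assumptions, not the article's exact strategy.

```python
# Hedged sketch: score cloudlets by load and distance, lowest score wins.

def select_cloudlet(cloudlets, w_load=0.5, w_dist=0.5):
    """cloudlets: list of dicts with 'id', 'queue_len', 'distance_km'."""
    max_q = max(c["queue_len"] for c in cloudlets) or 1
    max_d = max(c["distance_km"] for c in cloudlets) or 1

    def score(c):
        return w_load * c["queue_len"] / max_q + w_dist * c["distance_km"] / max_d

    return min(cloudlets, key=score)

cloudlets = [
    {"id": "c1", "queue_len": 8, "distance_km": 1.0},   # close but loaded
    {"id": "c2", "queue_len": 2, "distance_km": 2.0},   # farther but idle
]
print(select_cloudlet(cloudlets)["id"])  # -> "c2" with these weights
```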
29

Yadav, Rahul, and Weizhe Zhang. "MeReg: Managing Energy-SLA Tradeoff for Green Mobile Cloud Computing." Wireless Communications and Mobile Computing 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/6741972.

Abstract:
Mobile cloud computing (MCC) provides various cloud computing services to mobile users. The rapid growth of MCC users requires large-scale MCC data centers to provide them with data processing and storage services. The growth of these data centers directly impacts electrical energy consumption, which affects businesses as well as the environment through carbon dioxide (CO2) emissions. Moreover, a large amount of energy is wasted keeping servers running during low workload. To reduce the energy consumption of mobile cloud data centers, an energy-aware host overload detection algorithm and virtual machine (VM) selection algorithms for VM consolidation are required when host underload and overload are detected. After allocating resources to all VMs, underloaded hosts are put into energy-saving mode in order to minimize power consumption. To address this issue, we propose an adaptive heuristic energy-aware algorithm, which creates an upper CPU utilization threshold from recent CPU utilization history to detect overloaded hosts, together with dynamic VM selection algorithms to consolidate the VMs from overloaded or underloaded hosts. The goal is to minimize total energy consumption and maximize Quality of Service, including the reduction of service level agreement (SLA) violations. The CloudSim simulator is used to validate the algorithm, and simulations are conducted on real workload traces from 10 different days, as provided by PlanetLab.
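The two decisions the abstract describes, an adaptive upper utilization threshold and a VM pick from an overloaded host, can be sketched as follows; the mean-plus-k-standard-deviations rule and the smallest-sufficient-VM pick are assumed stand-ins for the paper's heuristics.

```python
# Hedged sketch: adaptive overload threshold and a simple VM selection.
import statistics

def upper_threshold(history, k=1.5):
    """history: recent CPU utilization samples in [0, 1]; rule is assumed."""
    return min(1.0, statistics.mean(history) + k * statistics.pstdev(history))

def vm_to_migrate(host_util, vm_utils, threshold):
    """Pick the smallest VM whose removal drops the host below threshold."""
    for util in sorted(vm_utils):
        if host_util - util <= threshold:
            return util
    return max(vm_utils)  # fall back to the largest contributor

history = [0.62, 0.70, 0.81, 0.88, 0.90]
thr = upper_threshold(history)
print(thr, vm_to_migrate(0.95, [0.05, 0.12, 0.30], thr))
```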
30

Ai, Hu. "GSEA–SDBE: A gene selection method for breast cancer classification based on GSEA and analyzing differences in performance metrics." PLOS ONE 17, no. 4 (April 26, 2022): e0263171. http://dx.doi.org/10.1371/journal.pone.0263171.

Abstract:
Motivation: Selecting the most relevant genes for sample classification is a common process in gene expression studies. Moreover, determining the smallest set of relevant genes that can achieve the required classification performance is particularly important in diagnosing cancer and improving treatment. Results: In this study, I propose a novel method to eliminate irrelevant and redundant genes, and thus determine the smallest set of relevant genes for breast cancer diagnosis. The method is based on random forest models, gene set enrichment analysis (GSEA), and my developed Sort Difference Backward Elimination (SDBE) algorithm; hence, the method is named GSEA–SDBE. Using this method, genes are filtered according to their importance following random forest training, and GSEA is used to select genes by core enrichment of Kyoto Encyclopedia of Genes and Genomes pathways that are strongly related to breast cancer. Subsequently, the SDBE algorithm is applied to eliminate redundant genes and identify the most relevant genes for breast cancer diagnosis. In the SDBE algorithm, the differences in the Matthews correlation coefficients (MCCs) of the random forest models are computed before and after the deletion of each gene to indicate the degree of redundancy of the corresponding deleted gene with respect to the remaining genes during backward elimination. Next, the obtained MCC difference list is divided into two parts from a set position, and each part is sorted separately. By continuously iterating and changing the set position, the most relevant genes are stably assembled on the left side of the gene list, facilitating their identification, and the redundant genes are gathered on the right side of the gene list for easy elimination. A cross-comparison of the SDBE algorithm was performed by computing differences between MCCs and ROC_AUC_scores and then using 10-fold classification models, e.g., random forest (RF), support vector machine (SVM), k-nearest neighbor (KNN), extreme gradient boosting (XGBoost), and extremely randomized trees (ExtraTrees). Finally, the classification performance of the proposed method was compared with that of three advanced algorithms on five cancer datasets. Results showed that analyzing MCC differences and using random forest models was the optimal solution for the SDBE algorithm. Accordingly, three consistently relevant genes (i.e., VEGFD, TSLP, and PKMYT1) were selected for the diagnosis of breast cancer. The performance metrics (MCC and ROC_AUC_score, respectively) of the random forest models based on 10-fold verification reached 95.28% and 98.75%. In addition, survival analysis showed that VEGFD and TSLP could be used to predict the prognosis of patients with breast cancer. Moreover, the proposed method significantly outperformed the other methods tested as it allowed selecting a smaller number of genes while maintaining the required classification accuracy.
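The MCC-difference computation at the heart of SDBE can be sketched as below: for each remaining gene, compare the cross-validated MCC of a random forest with and without that gene. The sorting, split position, and iteration of the full algorithm are omitted, and the data shapes and forest size are illustrative assumptions.

```python
# Hedged sketch: per-gene MCC differences for backward elimination.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer, matthews_corrcoef
from sklearn.model_selection import cross_val_score

def mcc_differences(X, y, cv=10):
    mcc = make_scorer(matthews_corrcoef)
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    base = cross_val_score(rf, X, y, cv=cv, scoring=mcc).mean()
    diffs = []
    for g in range(X.shape[1]):            # expensive: one CV run per gene
        X_minus = np.delete(X, g, axis=1)
        score = cross_val_score(rf, X_minus, y, cv=cv, scoring=mcc).mean()
        diffs.append(base - score)          # large positive => gene g matters
    return np.array(diffs)
```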
31

Prasetyo, Septian Eko, Pulung Hendro Prastyo, and Shindy Arti. "A Cardiotocographic Classification using Feature Selection: A comparative Study." JITCE (Journal of Information Technology and Computer Engineering) 5, no. 01 (March 31, 2021): 25–32. http://dx.doi.org/10.25077/jitce.5.01.25-32.2021.

Full text
Abstract:
Cardiotocography is a series of inspections to determine the health of the fetus during pregnancy. The inspection process records the baby's heart rate to determine whether it is in a healthy condition or not. In addition, uterine contractions are used to determine the health condition of the fetus. Fetal health is classified into three conditions: normal, suspect, and pathological. This paper compares classification algorithms for diagnosing the result of a cardiotocographic inspection. Experiments are run both with and without feature selection. CFS Subset Evaluation, Info Gain, and Chi-Square are used to select the best feature subsets. The dataset was obtained from the freely available UCI Machine Learning Repository. To assess the performance of the classification algorithms, this study uses the evaluation metrics Precision, Recall, F-Measure, MCC, ROC, PRC, and Accuracy. The results show that all algorithms provide fairly good classification; however, the combination of the Random Forest algorithm and Info Gain feature selection gives the best results, with an accuracy of 93.74%.
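A minimal sketch of that winning combination, with scikit-learn's mutual information standing in for Info Gain; the random stand-in data only mimic the shape of the 21-feature CTG table:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Stand-in for the UCI CTG data: 21 features, labels in
# {normal, suspect, pathological}; loading the real file is omitted.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 21))
y = rng.integers(0, 3, size=200)

pipe = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),  # info-gain-style filter
    RandomForestClassifier(random_state=0),
)
print(cross_val_score(pipe, X, y, cv=10).mean())
```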
APA, Harvard, Vancouver, ISO, and other styles
33

Solanki, Yogendra Singh, Prasun Chakrabarti, Michal Jasinski, Zbigniew Leonowicz, Vadim Bolshev, Alexander Vinogradov, Elzbieta Jasinska, Radomir Gono, and Mohammad Nami. "A Hybrid Supervised Machine Learning Classifier System for Breast Cancer Prognosis Using Feature Selection and Data Imbalance Handling Approaches." Electronics 10, no. 6 (March 16, 2021): 699. http://dx.doi.org/10.3390/electronics10060699.

Full text
Abstract:
Nowadays, breast cancer is the most frequent cancer among women. Early detection is a critical issue that can be effectively achieved by machine learning (ML) techniques. Thus, in this article, methods to improve the accuracy of ML classification models for the prognosis of breast cancer are investigated. A wrapper-based feature selection approach, together with nature-inspired algorithms such as Particle Swarm Optimization, Genetic Search, and Greedy Stepwise, has been used to identify the important features. On the selected features, the popular machine learning classifiers Support Vector Machine, J48 (the C4.5 decision tree algorithm), and Multilayer Perceptron (a feed-forward ANN) were used in the system. The methodology of the proposed system is structured into five stages: (1) data pre-processing; (2) data imbalance handling; (3) feature selection; (4) machine learning classification; (5) classifier performance evaluation. The dataset used in this research is the Breast Cancer Wisconsin (Diagnostic) Data Set from the UCI Machine Learning Repository. This article indicates that the J48 decision tree classifier is the appropriate machine learning-based classifier for optimum breast cancer prognosis. Support Vector Machine with the Particle Swarm Optimization algorithm for feature selection achieves an accuracy of 98.24%, MCC = 0.961, Sensitivity = 99.11%, Specificity = 96.54%, and Kappa statistic of 0.9606. It is also observed that the J48 Decision Tree classifier with the Genetic Search algorithm for feature selection achieves an accuracy of 98.83%, MCC = 0.974, Sensitivity = 98.95%, Specificity = 98.58%, and Kappa statistic of 0.9735. Furthermore, the Multilayer Perceptron ANN classifier with the Genetic Search algorithm for feature selection achieves an accuracy of 98.59%, MCC = 0.968, Sensitivity = 98.6%, Specificity = 98.57%, and Kappa statistic of 0.9682.
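The wrapper idea, with Greedy Stepwise approximated by scikit-learn's SequentialFeatureSelector around an SVM, can be sketched as follows; sklearn ships the same Breast Cancer Wisconsin (Diagnostic) data, though this is only a sketch of the approach, not the paper's exact setup:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # Wisconsin (Diagnostic) set
svm = SVC(kernel="rbf")

# Greedy forward search: the wrapper scores each candidate subset by
# cross-validated classifier performance.
selector = SequentialFeatureSelector(svm, n_features_to_select=10,
                                     direction="forward", cv=5)
pipe = make_pipeline(StandardScaler(), selector, svm)
print(cross_val_score(pipe, X, y, cv=10).mean())
```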
APA, Harvard, Vancouver, ISO, and other styles
34

Gutierrez, Amado, Victor Rangel, Javier Gomez, Robert M. Edwards, and David H. Covarrubias. "A Joint Modulation-Coding Scheme and Resource Allocation in LTE Uplink." Elektronika ir Elektrotechnika 26, no. 5 (October 27, 2020): 50–58. http://dx.doi.org/10.5755/j01.eie.26.5.22313.

Full text
Abstract:
In Long Term Evolution (LTE), Resource Allocation Algorithms (RAAs) are an area of work in which researchers seek to optimize the use of scarce radio resources. The selection of an optimal Modulation and Coding Scheme (MCS) that allows LTE to adapt to channel conditions is a second area of ongoing work. In the wireless part of LTE, these two factors, RAA and MCS selection, are the most critical for optimization. In this paper, the performance of three resource allocation schemes is compared, and a new allocation scheme, Average MCS (AMCS) allocation, is proposed. AMCS is seen to outperform both “Minimum MCS (MMCS)” and “Average Signal to Interference and Noise Ratio MCS (SINR AMCS)” in terms of improvements to LTE Uplink (UL) performance. The three algorithms were implemented in the Vienna LTE-A Uplink Simulator v1.5.
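The contrast between the baseline minimum-MCS rule and the proposed averaging rule fits in a few lines; the SINR-to-MCS thresholds below are placeholders, not 3GPP tables:

```python
import numpy as np

# Placeholder SINR (dB) thresholds: MCS index i requires SINR_THRESH[i].
SINR_THRESH = np.array([-6, -4, -2, 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22])

def mcs_per_rb(sinr_db):
    """Highest MCS index whose threshold each RB's SINR still meets."""
    return np.searchsorted(SINR_THRESH, sinr_db, side="right") - 1

rb_sinr = np.array([11.0, 3.5, 7.2, 14.8])  # SINR of the user's allocated RBs
per_rb = np.clip(mcs_per_rb(rb_sinr), 0, None)

mmcs = per_rb.min()               # conservative: limited by the worst RB
amcs = int(round(per_rb.mean()))  # proposed: average across allocated RBs
print(mmcs, amcs)
```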
APA, Harvard, Vancouver, ISO, and other styles
35

Habib, Sami J., and Paulvanna N. Marimuthu. "Management Scheme for Data Collection within Wireless Sensor Networks." International Journal of Adaptive, Resilient and Autonomic Systems 3, no. 2 (April 2012): 59–76. http://dx.doi.org/10.4018/jaras.2012040104.

Full text
Abstract:
This paper proposes a data management scheme that employs an energy-constrained algorithm to select autonomously between direct and multi-hop transmission based on the residual energy levels of the individual sensors. The scheme rules out selecting hotspot sensors, i.e., the sensors located closest to the base stations, as intermediate sensors, to prevent these sensors from dying. In each data transmission, the scheme selects from the neighborhood set one sensor with minimal Euclidean distance and maximum energy level as the intermediate node, without repeating the selection. The proposed scheme manages the data collection using two scheduling algorithms: as soon as possible (ASAP) and as late as possible (ALAP). As a measure of performance, the simulation results of the data management scheme have been compared with those of the minimum connected dominating set (MCDS) algorithm. The simulation results demonstrate that the data management scheme consumes less energy; moreover, it achieves a shorter overall waiting time for the selected sensors than direct transmission when delivering data to the base station. The robustness of the proposed scheme is tested by varying the network sizes and the sensing radii.
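A sketch of the intermediate-node choice, assuming each sensor is represented as a dict with id, pos, and energy fields; the exact priority between energy level and distance is an assumption here:

```python
import math

def pick_relay(current, neighbours, used, base_pos, hotspot_radius):
    """Choose the next intermediate sensor, or None for direct transmission."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    eligible = [n for n in neighbours
                if n["id"] not in used                          # no repeats
                and dist(n["pos"], base_pos) > hotspot_radius]  # skip hotspots
    if not eligible:
        return None  # fall back to direct transmission
    # Assumed priority: highest residual energy, then shortest distance.
    return max(eligible,
               key=lambda n: (n["energy"], -dist(n["pos"], current["pos"])))
```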
APA, Harvard, Vancouver, ISO, and other styles
36

EL-GHAMRAWY, SALLY M., and ALI I. ELDESOUKY. "AN AGENT DECISION SUPPORT MODULE BASED ON GRANULAR ROUGH MODEL." International Journal of Information Technology & Decision Making 11, no. 04 (July 2012): 793–820. http://dx.doi.org/10.1142/s0219622012500216.

Full text
Abstract:
A multi-agent system (MAS) is a branch of distributed artificial intelligence, composed of a number of distributed and autonomous agents. In a MAS, effective coordination is essential for autonomous agents to achieve their goals. Any decision based on a foundation of knowledge and reasoning can lead agents into successful cooperation; to achieve the necessary degree of flexibility in coordination, an agent must decide when to coordinate and which coordination mechanism to use. The performance of any MAS depends directly on the decisions made by the agents, so the agents must be able to make correct decisions. This paper proposes a decision support module in a distributed MAS that is concerned with two main decisions: the decision needed to allocate a task to specific agent(s) and the decision needed to select the appropriate coordination mechanism when agents must coordinate with other agent(s) to accomplish a specific task. Algorithms for a task allocation decision maker (TADM) and a coordination mechanism selection decision maker (CMSDM) are proposed, both based on the granular rough model (GRM). Furthermore, a number of experiments were performed to validate the effectiveness of the proposed algorithms, and their efficiency is compared with that of recent works. The preliminary results demonstrate the efficiency of our algorithms.
APA, Harvard, Vancouver, ISO, and other styles
37

Sabeena, B., S. Sivakumari, and Dawit Mamru Teressa. "Optimization-Based Ensemble Feature Selection Algorithm and Deep Learning Classifier for Parkinson’s Disease." Journal of Healthcare Engineering 2022 (April 13, 2022): 1–12. http://dx.doi.org/10.1155/2022/1487212.

Full text
Abstract:
PD (Parkinson’s Disease) is a severe, painful, and incurable malady that affects older human beings. Identifying PD early and precisely is critical for the lengthened survival of patients, and DMTs (data mining techniques) and MLTs (machine learning techniques) can be advantageous here. Studies have examined DMTs for their accuracy using Parkinson’s datasets and analyzing feature relevance. Recent studies have used FMBOAs for feature selection and relevance analysis, where feature selection aims to find the optimal subset of features for classification tasks. EFSs (ensemble feature selections) are viable solutions for combining the benefits of multiple algorithms while balancing their drawbacks. This work uses OBEFSs (optimization-based ensemble feature selections) to select appropriate features based on agreements. Ensembles can combine results from multiple feature selection approaches, including FMBOAs, LFCSAs (Lévy flight cuckoo search algorithms), and AFAs (adaptive firefly algorithms). These approaches select optimized feature subsets, resulting in three feature subsets, which are subsequently matched for correlations by the ensembles. The optimum features generated by the OBEFSs are then used to train FCBi-LSTMs (fuzzy convolution bi-directional long short-term memories) for classification. The suggested model uses the UCI (University of California, Irvine) learning repository, and the methods are evaluated using LOPO-CVs (Leave-One-Person-Out Cross-Validations) in terms of accuracy, F-measure values, and MCCs (Matthews correlation coefficients).
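The agreement step can be illustrated with a simple vote count across the three optimizers' subsets; the keep-if-two-of-three rule and the feature names are assumptions, since the paper describes matching subsets for correlations without fixing a rule:

```python
from collections import Counter

def ensemble_select(subsets, min_votes=2):
    """Keep features proposed by at least min_votes of the subsets."""
    votes = Counter(f for s in subsets for f in set(s))
    return sorted(f for f, v in votes.items() if v >= min_votes)

# Hypothetical voice-feature subsets from the three optimizers.
fmboa = {"jitter", "shimmer", "hnr", "rpde"}
lfcsa = {"jitter", "hnr", "dfa", "ppe"}
afa   = {"jitter", "shimmer", "hnr", "ppe"}
print(ensemble_select([fmboa, lfcsa, afa]))  # consensus features
```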
APA, Harvard, Vancouver, ISO, and other styles
38

R. Muhsen, Atheer, Ghazwh G. Jumaa, Nadia F. AL Bakri, and Ahmed T. Sadiq. "Feature Selection Strategy for Network Intrusion Detection System (NIDS) Using Meerkat Clan Algorithm." International Journal of Interactive Mobile Technologies (iJIM) 15, no. 16 (August 23, 2021): 158. http://dx.doi.org/10.3991/ijim.v15i16.24173.

Full text
Abstract:
The task of network security is to keep services available at all times by dealing with hacker attacks. One of the mechanisms available is the Intrusion Detection System (IDS), which is used to sense and classify any abnormal actions. Therefore, the IDS should always be up to date with the latest hacker attack signatures to keep services confidential, safe, and available. IDS speed is a very important issue in addition to learning new attacks. This paper proposes a modified feature-based selection strategy built on the Meerkat Clan Algorithm (MCA), one of the important swarm intelligence algorithms. The MCA produces diverse solutions through its neighbor-generation behavior and has been used to solve several problems. The proposed strategy benefits from mutual information to increase performance and decrease the time consumed. Two datasets (NSL-KDD & UNSW-NB15) for Network Intrusion Detection Systems (NIDS) have been used to verify the performance of the proposed algorithm. The experimental findings indicate that, compared with other approaches, the proposed algorithm produces good results in minimal time.
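The mutual-information step the strategy relies on can be sketched as a pre-filter that hands a reduced candidate set to the swarm search; the random stand-in data merely mimic the 41 NSL-KDD features, and k = 15 is arbitrary:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 41))  # stand-in for NSL-KDD's 41 features
y = rng.integers(0, 2, size=500)  # normal vs attack labels

mi = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(mi)[::-1][:15]  # candidate set handed to the MCA search
print(top_k)
```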
APA, Harvard, Vancouver, ISO, and other styles
39

Laishram, Ricky, Jeremy D. Wendt, and Sucheta Soundarajan. "Crawling the Community Structure of Multiplex Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 168–75. http://dx.doi.org/10.1609/aaai.v33i01.3301168.

Full text
Abstract:
We examine the problem of crawling the community structure of a multiplex network containing multiple layers of edge relationships. While there has been a great deal of work examining community structure in general, and some work on the problem of sampling a network to preserve its community structure, to the best of our knowledge, this is the first work to consider this problem on multiplex networks. We consider the specific case in which the layers of a multiplex network have different query (collection) costs and reliabilities; and a data collector is interested in identifying the community structure of the most expensive layer. We propose MultiComSample (MCS), a novel algorithm for crawling a multiplex network. MCS uses multiple levels of multi-armed bandits to determine the best layers, communities and node roles for selecting nodes to query. We test MCS against six baseline algorithms on real-world multiplex networks, and achieve large gains in performance. For example, after consuming a budget equivalent to sampling 20% of the nodes in the expensive layer, we observe that MCS outperforms the best baseline by up to 49%.
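A toy epsilon-greedy bandit over layers conveys the flavor of MCS's top level (the actual algorithm stacks bandits over layers, communities, and node roles); the per-layer rewards here are simulated:

```python
import random

def choose_layer(counts, values, eps=0.1):
    """Pick an arm (layer): explore with prob. eps or if any arm is untried."""
    if random.random() < eps or not all(counts):
        return random.randrange(len(counts))              # explore
    return max(range(len(counts)), key=lambda a: values[a])  # exploit

def update(counts, values, arm, reward):
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running mean

counts, values = [0, 0, 0], [0.0, 0.0, 0.0]
for _ in range(1000):
    arm = choose_layer(counts, values)
    # Simulated payoff, e.g. new target-layer nodes found per unit cost.
    reward = random.gauss([0.2, 0.5, 0.3][arm], 0.1)
    update(counts, values, arm, reward)
print(values)
```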
APA, Harvard, Vancouver, ISO, and other styles
40

Sun, Yang, and Ma. "Minimum Connected Dominating Set Algorithms for Ad Hoc Sensor Networks." Sensors 19, no. 8 (April 23, 2019): 1919. http://dx.doi.org/10.3390/s19081919.

Full text
Abstract:
To achieve effective communication in ad hoc sensor networks, researchers have been working on finding a minimum connected dominating set (MCDS) as a virtual backbone network in practice. Presently, many approximate algorithms have been proposed to construct an MCDS, the best of which adopt the two-stage idea: first construct a maximum independent set (MIS) and then realize connectivity through a Steiner tree construction algorithm. For the first stage, this paper proposes an improved collaborative coverage algorithm for solving the maximum independent set (IC-MIS), which expands the selection of the dominating point from two-hop neighbors to three-hop neighbors. The coverage efficiency is improved while complete coverage is maintained. For the second stage, this paper proposes an improved Kruskal–Steiner tree construction algorithm (IK-ST) and a maximum-leaf-nodes Steiner tree construction algorithm (ML-ST), both of which bring the result closer to the optimal solution. Finally, the simulation results show that the proposed algorithms are a great improvement over previous algorithms in optimizing the size of the connected dominating set (CDS).
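For readers unfamiliar with the two-stage construction, a plain greedy MIS baseline (not the paper's improved IC-MIS) can be sketched in a few lines, assuming the networkx library:

```python
import networkx as nx

def greedy_mis(g: nx.Graph):
    """Greedy maximal independent set; returned nodes act as dominators."""
    mis, banned = set(), set()
    # Highest-degree-first is one common heuristic; IC-MIS instead refines
    # candidate selection using multi-hop neighbourhood coverage.
    for v in sorted(g.nodes, key=g.degree, reverse=True):
        if v not in banned:
            mis.add(v)
            banned.add(v)
            banned.update(g.neighbors(v))
    return mis

g = nx.random_geometric_graph(50, 0.25, seed=1)  # toy sensor field
print(len(greedy_mis(g)))
```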
APA, Harvard, Vancouver, ISO, and other styles
41

Jiang, Weijin, Junpeng Chen, Xiaoliang Liu, Yuehua Liu, and Sijian Lv. "Participant Recruitment Method Aiming at Service Quality in Mobile Crowd Sensing." Wireless Communications and Mobile Computing 2021 (April 17, 2021): 1–14. http://dx.doi.org/10.1155/2021/6621659.

Full text
Abstract:
With the rapid popularization and application of smart sensing devices, mobile crowd sensing (MCS) has developed rapidly. MCS mobilizes people with various sensing devices to collect data. Task distribution, a key and difficult problem in the field of MCS, has attracted wide attention from scholars. However, current research on participant selection methods whose main goal is data quality is not deep enough. Different from most previous studies, this paper studies the participant selection scheme under multitask conditions in MCS. Based on the tasks completed by participants in the past, the accumulated reputation and willingness of participants are used to construct a quality-of-service (QoS) model. On the basis of maximizing QoS, two heuristic greedy algorithms are used to solve participant selection, and two schemes are proposed: task-centric and user-centric. A distance constraint factor, an integrity constraint factor, and a reputation constraint factor are introduced into our algorithms. The purpose is to select the most suitable set of participants while ensuring QoS, so as to improve the platform’s final revenue and the participants’ benefits as far as possible. We used a real dataset and a generated simulation dataset to evaluate the feasibility and effectiveness of the two algorithms, and compared them in detail with existing algorithms in terms of the number of participants selected, moving distance, and data quality. During the experiments, we established a stepped data pricing model to quantitatively compare the quality of the data uploaded by participants. Experimental results show that the two algorithms proposed in this paper achieve better task quality than existing algorithms.
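Under the stated constraint factors, a task-centric greedy pass might look like the sketch below; the field names and the QoS score of reputation times willingness are assumptions consistent with the abstract:

```python
def recruit(task, participants, quota):
    """Greedy task-centric recruitment under the three constraint factors."""
    pool = [p for p in participants
            if p["dist_to_task"] <= task["max_dist"]        # distance factor
            and p["integrity"] >= task["min_integrity"]     # integrity factor
            and p["reputation"] >= task["min_reputation"]]  # reputation factor
    # Assumed QoS score: accumulated reputation weighted by willingness.
    pool.sort(key=lambda p: p["reputation"] * p["willingness"], reverse=True)
    return [p["id"] for p in pool[:quota]]
```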
APA, Harvard, Vancouver, ISO, and other styles
42

Peng, C. R., L. Liu, B. Niu, Y. L. Lv, M. J. Li, Y. L. Yuan, Y. B. Zhu, W. C. Lu, and Y. D. Cai. "Prediction of RNA-Binding Proteins by Voting Systems." Journal of Biomedicine and Biotechnology 2011 (2011): 1–8. http://dx.doi.org/10.1155/2011/506205.

Full text
Abstract:
It is important to identify which proteins can interact with RNA for the purpose of protein annotation, since interactions between RNA and proteins influence the structure of the ribosome and play important roles in gene expression. This paper identifies proteins that can interact with RNA using voting systems. Firstly, 34 learning algorithms are chosen through Weka for investigation. A simple majority voting system (SMVS) is then used for the prediction of RNA-binding proteins, achieving an average ACC (overall prediction accuracy) of 79.72% and an MCC (Matthews correlation coefficient) of 59.77% on the independent testing dataset. The mRMR (minimum redundancy maximum relevance) strategy is then used, transferred here to algorithm selection. In addition, the MCC value of each classifier is assigned as the weight of that classifier’s vote. As a result, the best average MCC value, 64.70% on the independent testing dataset, is attained when 22 algorithms are selected and integrated through weighted votes, with a corresponding ACC of 82.04%.
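The MCC-weighted vote itself is simple to state; the sketch below assumes binary 0/1 predictions and illustrative numbers:

```python
import numpy as np

def weighted_vote(predictions, mcc_weights):
    """predictions: (n_classifiers, n_samples) 0/1 matrix.
    Each classifier's vote counts +w for class 1 and -w for class 0,
    where w is that classifier's MCC on validation data."""
    w = np.asarray(mcc_weights)[:, None]
    score = (np.where(np.asarray(predictions) == 1, 1, -1) * w).sum(axis=0)
    return (score > 0).astype(int)

preds = [[1, 0, 1, 1], [1, 1, 0, 1], [0, 0, 1, 1]]
mccs = [0.62, 0.55, 0.48]  # illustrative per-classifier MCC weights
print(weighted_vote(preds, mccs))
```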
APA, Harvard, Vancouver, ISO, and other styles
43

Pramanik, Pijush Kanti Dutta, Sanjib Biswas, Saurabh Pal, Dragan Marinković, and Prasenjit Choudhury. "A Comparative Analysis of Multi-Criteria Decision-Making Methods for Resource Selection in Mobile Crowd Computing." Symmetry 13, no. 9 (September 16, 2021): 1713. http://dx.doi.org/10.3390/sym13091713.

Full text
Abstract:
In mobile crowd computing (MCC), smart mobile devices (SMDs) are utilized as computing resources. To achieve satisfactory performance and quality of service, selecting the most suitable resources (SMDs) is crucial. The selection is generally made based on the computing capability of an SMD, which is defined by its various fixed and variable resource parameters. As the selection is made on different criteria of varying significance, the resource selection problem can be duly represented as an MCDM problem. However, for the real-time implementation of MCC, and considering its dynamicity, the resource selection algorithm should be time-efficient. In this paper, we aim to identify a suitable MCDM method for resource selection in such a dynamic and time-constrained environment. For this, we present a comparative analysis of various MCDM methods under asymmetric conditions with varying selection criteria and alternative sets. Various datasets of different sizes are used for evaluation. We execute each program on a Windows-based laptop and on an Android-based smartphone to assess average runtimes. Besides time complexity analysis, we perform sensitivity analysis and ranking order comparison to check the correctness, stability, and reliability of the rankings generated by each method.
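For the shape of the problem only, a simple weighted-sum (SAW) ranking of SMDs is sketched below; the paper compares several MCDM methods, and the criteria, weights, and values here are made up:

```python
import numpy as np

benefit = np.array([1, 1, 1, 0])              # 1 = higher is better
weights = np.array([0.35, 0.25, 0.25, 0.15])  # assumed importances

# Rows: candidate SMDs; cols: cpu_ghz, free_ram_gb, battery_pct, load.
smds = np.array([[2.4, 3.0, 80.0, 0.3],
                 [1.8, 2.0, 95.0, 0.1],
                 [2.8, 4.0, 40.0, 0.6]])

norm = smds / np.linalg.norm(smds, axis=0)  # column-wise normalisation
cost = benefit == 0
norm[:, cost] = norm[:, cost].max(axis=0) - norm[:, cost]  # invert costs
scores = norm @ weights
print(np.argsort(scores)[::-1])  # device indices, best SMD first
```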
APA, Harvard, Vancouver, ISO, and other styles
44

Wang, Hui, Youming Li, Liliang Zhang, Yexian Fan, and Zhiliang Li. "A Self-Deployment Algorithm for Maintaining Maximum Coverage and Connectivity in Underwater Acoustic Sensor Networks Based on an Ant Colony Optimization." Applied Sciences 9, no. 7 (April 9, 2019): 1479. http://dx.doi.org/10.3390/app9071479.

Full text
Abstract:
The self-deployment of nodes with non-uniform coverage in underwater acoustic sensor networks (UASNs) is challenging because the three-dimensional underwater environment is difficult to access. The problem is further complicated if network connectivity needs to be considered. To solve this node deployment optimization problem, we propose a maximum coverage and connectivity self-deployment algorithm based on ant colony optimization (MCC-ACO). We apply a greedy strategy, improve the path-selection probability and the pheromone-update system, and build the self-deployment algorithm on the foundation of standard ant colony optimization, so as to achieve energy-saving coverage of target events. The main characteristic of the MCC-ACO algorithm is that it fully considers the effects of changes in event quantities and of the random distribution of nodes on the deployment result, ensures that every deployed node can be connected to the sink, and matches node distribution density to event distribution. The MCC-ACO algorithm therefore has great practical value. A large number of comparative simulation experiments show that the algorithm can effectively solve the self-deployment problem of underwater sensor nodes. In addition, the paper analyzes how changes in the number of events in the network affect the deployment.
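The standard ACO ingredients the algorithm builds on, the transition probability and the evaporation-plus-deposit pheromone update, can be written compactly; the improved MCC-ACO rules themselves are not reproduced here, and alpha, beta, rho are the usual ACO parameters:

```python
import numpy as np

def transition_probs(tau, eta, alpha=1.0, beta=2.0):
    """tau: pheromone per option, eta: heuristic desirability per option."""
    w = (tau ** alpha) * (eta ** beta)
    return w / w.sum()

def evaporate_and_deposit(tau, chosen, reward, rho=0.1):
    tau *= (1.0 - rho)     # evaporation on every option
    tau[chosen] += reward  # deposit on the option actually taken
    return tau

tau = np.ones(5)
eta = np.array([0.2, 0.5, 0.9, 0.4, 0.7])
print(transition_probs(tau, eta))
```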
APA, Harvard, Vancouver, ISO, and other styles
45

Padikkapparambil, Jinesh, Cornelius Ncube, Firoz Khan, Lakshmana Kumar Ramasamy, and Yomiyu Reta Gashu. "Novel Stacking Classification and Prediction Algorithm Based Ambient Assisted Living for Elderly." Wireless Communications and Mobile Computing 2022 (June 18, 2022): 1–19. http://dx.doi.org/10.1155/2022/5728880.

Full text
Abstract:
Recognition of human activity is a significant area of research with numerous uses. In developed countries, the rising age of citizens requires improvement of the medical service structure, which raises the cost of resources, both financial and human. In that sense, ambient assisted living (AAL) is a relatively novel information and communication technology (ICT) that provides services and products enabling older people and the disabled to live autonomously and improve their quality of life, and it further helps reduce the cost of hospital services. In the AAL environment, various sensors and devices are installed to gather a broad range of data. Moreover, AAL will be the motivating technology for the latest care models by acting as an adjunct. This is thought-provoking research in a fast-growing world, but exploring different activities of daily living (ADLs) and classifying them remains a major challenge. This paper proposes a Novel Stacking Classification and Prediction (NSCP) algorithm-based AAL system for the elderly, together with Multi-strategy Combination based Feature Selection (MCFS) and Novel Clustering Aggregation (NCA) algorithms. The main aim of this paper is to recognize the activities of older people, such as standing, walking, sitting, falling, cramps, and running. The dataset, derived from the Kaggle repository, contains data collected from wearable IoT devices. The experimental outcomes demonstrate that the MCFS, NCA, and NSCP algorithms work more efficiently than existing feature selection, clustering, and classification algorithms, respectively, in terms of accuracy, sensitivity, specificity, precision, recall, F-measure, and execution time across dataset sizes and numbers of features. Furthermore, the NSCP algorithm achieved high accuracy, precision, recall, and F-measure of 98%, 0.96, 0.95, and 0.98, respectively.
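A stacking ensemble of the general kind NSCP builds can be sketched with scikit-learn's StackingClassifier; the base learners, meta-learner, and stand-in data below are assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier()),
                ("dt", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
    cv=5,
)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))    # stand-in wearable-sensor features
y = rng.integers(0, 6, size=300)  # six activities, e.g. walk/sit/fall
print(stack.fit(X, y).score(X, y))
```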
APA, Harvard, Vancouver, ISO, and other styles
46

Guan, Jiahui, Lantian Yao, Chia-Ru Chung, Ying-Chih Chiang, and Tzong-Yi Lee. "StackTHPred: Identifying Tumor-Homing Peptides through GBDT-Based Feature Selection with Stacking Ensemble Architecture." International Journal of Molecular Sciences 24, no. 12 (June 19, 2023): 10348. http://dx.doi.org/10.3390/ijms241210348.

Full text
Abstract:
One of the major challenges in cancer therapy lies in the limited targeting specificity exhibited by existing anti-cancer drugs. Tumor-homing peptides (THPs) have emerged as a promising solution to this issue, due to their capability to specifically bind to and accumulate in tumor tissues while minimally impacting healthy tissues. THPs are short oligopeptides that offer a superior biological safety profile, with minimal antigenicity, and faster incorporation rates into target cells/tissues. However, identifying THPs experimentally, using methods such as phage display or in vivo screening, is a complex, time-consuming task, hence the need for computational methods. In this study, we proposed StackTHPred, a novel machine learning-based framework that predicts THPs using optimal features and a stacking architecture. With an effective feature selection algorithm and three tree-based machine learning algorithms, StackTHPred has demonstrated advanced performance, surpassing existing THP prediction methods. It achieved an accuracy of 0.915 and a 0.831 Matthews Correlation Coefficient (MCC) score on the main dataset, and an accuracy of 0.883 and a 0.767 MCC score on the small dataset. StackTHPred also offers favorable interpretability, enabling researchers to better understand the intrinsic characteristics of THPs. Overall, StackTHPred is beneficial for both the exploration and identification of THPs and facilitates the development of innovative cancer therapies.
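The GBDT-based selection step can be approximated with scikit-learn's SelectFromModel around a gradient-boosting model; the median threshold and the stand-in peptide descriptors are assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))    # stand-in peptide descriptors
y = rng.integers(0, 2, size=120)  # THP vs non-THP labels

# Keep features whose GBDT importance clears the (assumed) median cutoff.
selector = SelectFromModel(GradientBoostingClassifier(random_state=0),
                           threshold="median")
X_sel = selector.fit_transform(X, y)
print(X_sel.shape)
```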
APA, Harvard, Vancouver, ISO, and other styles
47

Yang, Ting, Fei Luo, Joel Fuentes, Weichao Ding, and Chunhua Gu. "A Flexible Reinforced Bin Packing Framework with Automatic Slack Selection." Mathematical Problems in Engineering 2021 (May 19, 2021): 1–15. http://dx.doi.org/10.1155/2021/6653586.

Full text
Abstract:
Slack-based algorithms are popular bin-focused heuristics for the bin packing problem (BPP). The selection of slacks in existing methods considers only predetermined policies, ignoring dynamic exploration of the global data structure, which leaves the information in the data space underutilized. In this paper, we propose a novel slack-based flexible bin packing framework called the reinforced bin packing framework (RBF) for the one-dimensional BPP. RBF considers the RL system, the instance-eigenvalue mapping process, and the reinforced-MBS strategy simultaneously. In our work, the slack is generated with a reinforcement learning strategy in which performance-driven rewards capture the intuition of learning the current state of the container space, the action is the choice of the packing container, and the state is the remaining capacity after packing. During the construction of the slack, an instance-eigenvalue mapping process is designed and utilized to generate a representative and classified validation set. Furthermore, the resulting slack coefficient is integrated into the MBS-based packing process. Experimental results show that, in comparison with fit algorithms, MBS and MBS’, RBF achieves state-of-the-art performance on the BINDATA and SCH_WAE datasets. In particular, it outperforms its baselines MBS and MBS’, increasing the average number of optimal solutions by 189.05% and 27.41%, respectively.
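A toy first-fit packing loop with an explicit slack parameter illustrates the knob that RBF learns to set (a bin is treated as closed once its residual capacity falls to the slack or below); this is not the MBS search itself:

```python
def pack_with_slack(items, capacity, slack):
    """First-fit-decreasing where bins with residual <= slack stop accepting."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            residual = capacity - sum(b)
            if residual - item >= 0 and residual > slack:
                b.append(item)
                break
        else:
            bins.append([item])  # open a fresh bin
    return bins

print(len(pack_with_slack([4, 8, 1, 4, 2, 1, 7, 3], capacity=10, slack=1)))
```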
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Xiu Fang, You Long Yang, and Xing Jia Tang. "Research on Multi-Dimensional Bayesian Network Classifiers Based on ICA Dimension Reduction." Applied Mechanics and Materials 380-384 (August 2013): 2593–96. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.2593.

Full text
Abstract:
Multi-dimensional Bayesian network classifiers (MBCs) are probabilistic graphical models proposed to solve classification problems. However, in data analysis and preprocessing tasks, one is often confronted with the problem of selecting features from very high-dimensional data. To resolve this problem, covariance analysis and the FastICA algorithm are applied to decrease the dimension and remove redundant information. Then, since the new feature variables satisfy the independence assumption, we only need to construct the class subgraph and the bridge subgraph of the MBC model, using the construction algorithm and mutual information on the processed data. The experiments were run on three benchmark datasets. The theoretical and experimental results show that our method outperforms other state-of-the-art algorithms for multi-dimensional classification in accuracy.
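The preprocessing step can be sketched with scikit-learn's FastICA: mixed observations are transformed into near-independent components, after which independence-assuming model structures become reasonable. The synthetic mixing below is purely illustrative:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
S = rng.laplace(size=(300, 8))  # latent, non-Gaussian independent sources
A = rng.normal(size=(8, 50))    # mixing into 50 observed features
X = S @ A                       # high-dimensional, redundant data

ica = FastICA(n_components=8, random_state=0)
Z = ica.fit_transform(X)        # near-independent new feature variables
print(Z.shape)                  # (300, 8)
```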
APA, Harvard, Vancouver, ISO, and other styles
49

Inam, Muhammad, Li Zhuo, Masood Ahmad, and Zulfiar Ali Zardari. "An IRGA-MACS Based Cluster-Head Selection Protocol for Wireless Sensor Networks." Cybernetics and Information Technologies 21, no. 2 (June 1, 2021): 166–82. http://dx.doi.org/10.2478/cait-2021-0025.

Full text
Abstract:
In a volatile environment, a substantial number of sensor nodes are extensively dispatched to track and detect changes in the physical environment. Sensor nodes have limited energy resources, so energy-efficient routing is a major concern in Wireless Sensor Networks (WSNs) for extending the network’s lifespan. Recent research shows that existing schemes suffer from reduced throughput, increased delay, and high execution time together with high energy usage. A new mechanism called IRGA-MACS is proposed to overcome these inherent problems. Firstly, the Improved Resampling Genetic Algorithm (IRGA) is used for the best Cluster Head (CH) selection. Secondly, to find the shortest path among CHs and nodes, Modified Ant Colony Optimization based Simulated Annealing (MACS) is applied to minimize the time consumed during transmission. The results show that the proposed approach attains the supreme goal of increasing the network lifetime compared with existing methods.
APA, Harvard, Vancouver, ISO, and other styles
50

Al-Shourbaji, Ibrahim, Na Helian, Yi Sun, Samah Alshathri, and Mohamed Abd Elaziz. "Boosting Ant Colony Optimization with Reptile Search Algorithm for Churn Prediction." Mathematics 10, no. 7 (March 23, 2022): 1031. http://dx.doi.org/10.3390/math10071031.

Full text
Abstract:
The telecommunications industry is greatly concerned about customer churn due to dissatisfaction with service. This industry has started investing in the development of machine learning (ML) models for churn prediction to extract, examine, and visualize customers’ historical information from vast amounts of big data, which will assist in further understanding customer needs and taking appropriate actions to control churn. However, the high dimensionality of the data has a large influence on the performance of ML models, so feature selection (FS) is applied as a primary preprocessing step. It improves an ML model’s performance by selecting salient features while reducing computational time, which can help this sector build effective prediction models. This paper proposes a new FS approach, ACO-RSA, that combines two metaheuristic algorithms (MAs), namely ant colony optimization (ACO) and the reptile search algorithm (RSA). In the developed ACO-RSA approach, ACO and RSA are integrated to choose an important subset of features for churn prediction. The ACO-RSA approach is evaluated on seven open-source customer churn prediction datasets and ten CEC 2019 test functions, and its performance is compared with particle swarm optimization (PSO), the multi-verse optimizer (MVO), the grey wolf optimizer (GWO), standard ACO, and standard RSA. According to the results and the accompanying statistical analysis, ACO-RSA is an effective and superior approach compared with the competitor algorithms on most datasets.
APA, Harvard, Vancouver, ISO, and other styles