
Journal articles on the topic "Genetic algorithms – Statistical methods"



Consult the top 50 journal articles for research on the topic "Genetic algorithms – Statistical methods."

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Dharani Pragada, Venkata Aditya, Akanistha Banerjee, and Srinivasan Venkataraman. "OPTIMISATION OF NAVAL SHIP COMPARTMENT LAYOUT DESIGN USING GENETIC ALGORITHM." Proceedings of the Design Society 1 (July 27, 2021): 2339–48. http://dx.doi.org/10.1017/pds.2021.495.

Abstract:
An efficient general arrangement is a cornerstone of a good ship design. A big part of the whole general arrangement process is finding an optimized compartment layout. This task is especially tricky since the multiple needs often conflict, and it becomes a serious challenge for ship designers. To aid ship designers, improved and reliable statistical and computational methods have come to the fore; genetic algorithms are among the most widely used. Islier's algorithm for the multi-facility layout problem and an improved genetic algorithm for the ship layout design problem are discussed. A new hybrid genetic algorithm that incorporates a local search technique is proposed to further the improved genetic algorithm's practicality. Comparisons are then drawn between these algorithms on a test-case layout. Finally, the developed hybrid algorithm is implemented on a section of an actual ship, and the findings are presented.
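A genetic algorithm combined with a local search of this kind is often called a memetic algorithm. The sketch below is a minimal illustration of that pattern, not the paper's implementation: it evolves a permutation of compartments with a simplified order crossover and swap mutation, then hill-climbs each offspring by pairwise swaps; the interaction matrix and cost function are hypothetical placeholders.

```python
import random

random.seed(1)
N = 8                                                            # number of compartments (hypothetical)
flow = [[random.random() for _ in range(N)] for _ in range(N)]   # hypothetical interaction matrix

def cost(layout):
    # QAP-style cost: strongly interacting compartments should sit close together.
    pos = {c: i for i, c in enumerate(layout)}
    return sum(flow[i][j] * abs(pos[i] - pos[j]) for i in range(N) for j in range(i + 1, N))

def order_crossover(p1, p2):
    # Simplified order crossover: keep a slice of p1, fill around it in p2's order.
    a, b = sorted(random.sample(range(N), 2))
    middle = p1[a:b]
    rest = [g for g in p2 if g not in middle]
    return rest[:a] + middle + rest[a:]

def swap_local_search(layout):
    # Hill-climb over pairwise swaps until no swap improves the cost (the memetic step).
    best, best_c = layout[:], cost(layout)
    improved = True
    while improved:
        improved = False
        for i in range(N):
            for j in range(i + 1, N):
                cand = best[:]
                cand[i], cand[j] = cand[j], cand[i]
                c = cost(cand)
                if c < best_c:
                    best, best_c, improved = cand, c, True
    return best

pop = [random.sample(range(N), N) for _ in range(20)]
for gen in range(30):
    pop.sort(key=cost)
    parents = pop[:10]
    children = []
    for _ in range(10):
        p1, p2 = random.sample(parents, 2)
        child = order_crossover(p1, p2)
        if random.random() < 0.2:                    # swap mutation
            i, j = random.sample(range(N), 2)
            child[i], child[j] = child[j], child[i]
        children.append(swap_local_search(child))
    pop = parents + children
best = min(pop, key=cost)
print("best layout:", best, "cost:", round(cost(best), 3))
```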
2

Salimi, Amir Hossein, Jafar Masoompour Samakosh, Ehsan Sharifi, Mohammad Reza Hassanvand, Amir Noori, and Hary von Rautenkranz. "Optimized Artificial Neural Networks-Based Methods for Statistical Downscaling of Gridded Precipitation Data." Water 11, no. 8 (August 10, 2019): 1653. http://dx.doi.org/10.3390/w11081653.

Abstract:
Precipitation, as a key parameter in hydrometeorology and other water-related applications, always needs precise methods for assessing and predicting precipitation data. In this study, an effort was made to downscale and evaluate a satellite precipitation estimation (SPE) product using artificial neural networks (ANN), and to apply a residual correction method, for five separate daily heavy precipitation events localized over northeast Austria. For the ANN model, precipitation was the chosen output, and the inputs were temperature and MODIS cloud optical and microphysical variables. The particle swarm optimization (PSO), imperialist competitive algorithm (ICA), and genetic algorithm (GA) were utilized to improve the performance of the ANN. Moreover, to examine the efficiency of the networks, the downscaled product was evaluated using 54 rain gauges at a daily timescale. In addition, a sensitivity analysis was conducted to identify the most and least influential input parameters. Among the optimization algorithms used for network training in this study, ICA slightly outperformed the other algorithms. The best recorded performance for ICA was on 17 April 2015, with root mean square error (RMSE) = 5.26 mm, mean absolute error (MAE) = 6.06 mm, R2 = 0.67, and bias = 0.07 mm. The results showed that the prediction of precipitation was more sensitive to cloud optical thickness (COT). Moreover, the accuracy of the final downscaled satellite precipitation was improved significantly through residual correction algorithms.
3

Hatjimihail, A. T. "Genetic algorithms-based design and optimization of statistical quality-control procedures." Clinical Chemistry 39, no. 9 (September 1, 1993): 1972–78. http://dx.doi.org/10.1093/clinchem/39.9.1972.

Abstract:
In general, one cannot use algebraic or enumerative methods to optimize a quality-control (QC) procedure for detecting the total allowable analytical error with a stated probability and with the minimum probability for false rejection. Genetic algorithms (GAs) offer an alternative, as they do not require knowledge of the objective function to be optimized and can search through large parameter spaces quickly. To explore the application of GAs in statistical QC, I developed two interactive computer programs based on the deterministic crowding genetic algorithm. Given an analytical process, the program "Optimize" optimizes a user-defined QC procedure, whereas the program "Design" designs a novel optimized QC procedure. The programs search through the parameter space and find the optimal or near-optimal solution. The possible solutions of the optimization problem are evaluated with computer simulation.
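Deterministic crowding, the niching scheme these programs are based on, pairs each offspring with its more similar parent and lets the offspring replace that parent only if it is at least as fit, which preserves multiple optima. The toy sketch below assumes a simple multimodal objective rather than a QC-procedure model and only illustrates the mechanism.

```python
import math
import random

def fitness(x):
    # Toy multimodal objective (not a QC model): several equal peaks on [0, 1].
    return math.sin(5 * math.pi * x) ** 2

def crossover(a, b):
    w = random.random()
    return w * a + (1 - w) * b, (1 - w) * a + w * b

def mutate(x, sigma=0.05):
    return min(1.0, max(0.0, x + random.gauss(0, sigma)))

random.seed(0)
pop = [random.random() for _ in range(40)]
for gen in range(200):
    random.shuffle(pop)
    for i in range(0, len(pop), 2):
        p1, p2 = pop[i], pop[i + 1]
        c1, c2 = (mutate(c) for c in crossover(p1, p2))
        # Deterministic crowding: each child competes with the more similar parent.
        if abs(c1 - p1) + abs(c2 - p2) <= abs(c1 - p2) + abs(c2 - p1):
            pairs = [(p1, c1), (p2, c2)]
        else:
            pairs = [(p1, c2), (p2, c1)]
        pop[i], pop[i + 1] = [c if fitness(c) >= fitness(p) else p for p, c in pairs]

print(sorted(round(x, 2) for x in pop))  # individuals remain spread across several peaks
```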
4

Cotfas, Daniel T., Petru A. Cotfas, Mihai P. Oproiu, and Paul A. Ostafe. "Analytical versus Metaheuristic Methods to Extract the Photovoltaic Cells and Panel Parameters." International Journal of Photoenergy 2021 (September 17, 2021): 1–17. http://dx.doi.org/10.1155/2021/3608138.

Abstract:
The parameters of photovoltaic cells and panels are very important for forecasting the power generated. There are many methods to extract the parameters using analytical, metaheuristic, and hybrid algorithms. A comparison between the widely used analytical method and some of the best metaheuristic algorithms from the main algorithm families is made for datasets from the specialized literature, using the following statistical tests: absolute error, root mean square error, and the coefficient of determination. The equivalent circuit and mathematical model considered is the single diode model. The comparison shows that the metaheuristic algorithms have the best performance in almost all cases; only the genetic algorithm gives poorer results for one of the chosen photovoltaic cells. The parameters of the photovoltaic cells and panels, and also the current-voltage characteristic for real outdoor weather conditions, are forecast using the parameters calculated with the best method of each kind: the five-parameter analytical method for the analytical approach and the hybrid successive discretization algorithm for the metaheuristics. Additionally, the genetic algorithm is used. The forecast current-voltage characteristic is compared with the one measured in real sunlight conditions, and the best results are obtained with the hybrid successive discretization algorithm. The maximum power forecast using the parameters calculated with the five-parameter method is the best, with an error of 0.48% relative to the measured values.
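For context, the single diode model referred to above expresses the cell current implicitly as I = Iph − I0(exp((V + I·Rs)/(n·Vt)) − 1) − (V + I·Rs)/Rsh, and the statistical comparison then reduces to computing error metrics between a modelled and a measured I–V curve. The sketch below only illustrates that evaluation step, with hypothetical parameter values standing in for extracted and reference parameter sets (not the datasets or algorithms of the paper).

```python
import numpy as np

def diode_current(V, Iph, I0, Rs, Rsh, n, Vt=0.0257, steps=60):
    """Solve the implicit single-diode equation for I at a given V by bisection:
    I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    The bracket below is assumed valid for voltages below open circuit."""
    def f(I):
        return Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1) - (V + I * Rs) / Rsh - I
    lo, hi = -0.1, 1.5 * Iph
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative parameter sets (hypothetical "reference" vs. "extracted" values).
ref = dict(Iph=0.76, I0=3.2e-7, Rs=0.036, Rsh=54.0, n=1.48)
fitted = dict(Iph=0.758, I0=3.5e-7, Rs=0.037, Rsh=50.0, n=1.49)

V = np.linspace(0.0, 0.55, 30)
I_meas = np.array([diode_current(v, **ref) for v in V])      # stand-in for measured data
I_model = np.array([diode_current(v, **fitted) for v in V])  # curve from candidate parameters

rmse = np.sqrt(np.mean((I_meas - I_model) ** 2))
r2 = 1 - np.sum((I_meas - I_model) ** 2) / np.sum((I_meas - I_meas.mean()) ** 2)
print(f"RMSE = {rmse:.4f} A, R^2 = {r2:.4f}")
```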
5

Tucker, Allan, Jason Crampton, and Stephen Swift. "RGFGA: An Efficient Representation and Crossover for Grouping Genetic Algorithms." Evolutionary Computation 13, no. 4 (December 2005): 477–99. http://dx.doi.org/10.1162/106365605774666903.

Abstract:
There is substantial research into genetic algorithms that are used to group large numbers of objects into mutually exclusive subsets based upon some fitness function. However, nearly all methods involve degeneracy to some degree. We introduce a new representation for grouping genetic algorithms, the restricted growth function genetic algorithm, that effectively removes all degeneracy, resulting in a more efficient search. A new crossover operator is also described that exploits a measure of similarity between chromosomes in a population. Using several synthetic datasets, we compare the performance of our representation and crossover with another well known state-of-the-art GA method, a strawman optimisation method and a well-established statistical clustering algorithm, with encouraging results.
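A restricted growth function encodes a grouping as a label string that starts at 0 and never jumps more than one above the largest label seen so far, so each partition has exactly one encoding. The sketch below is an assumption-level illustration of the idea, not the authors' code.

```python
def is_rgf(chrom):
    """A restricted growth function starts at 0 and never jumps
    more than one above the maximum label seen so far."""
    m = -1
    for g in chrom:
        if g > m + 1:
            return False
        m = max(m, g)
    return True

def to_rgf(chrom):
    """Canonicalize an arbitrary group-label string into RGF form by
    renaming groups in order of first appearance (removes degeneracy)."""
    mapping, out = {}, []
    for g in chrom:
        if g not in mapping:
            mapping[g] = len(mapping)
        out.append(mapping[g])
    return out

# Two degenerate encodings of the same grouping collapse to one RGF string.
print(to_rgf([2, 2, 0, 1, 0]))   # -> [0, 0, 1, 2, 1]
print(to_rgf([1, 1, 2, 0, 2]))   # -> [0, 0, 1, 2, 1]
print(is_rgf([0, 1, 1, 2, 0]))   # -> True
```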
6

Tarnaris, Konstantinos, Ioanna Preka, Dionisis Kandris, and Alex Alexandridis. "Coverage and k-Coverage Optimization in Wireless Sensor Networks Using Computational Intelligence Methods: A Comparative Study." Electronics 9, no. 4 (April 21, 2020): 675. http://dx.doi.org/10.3390/electronics9040675.

Abstract:
The domain of wireless sensor networks is considered to be among the most significant research areas thanks to the numerous benefits that their usage provides. Optimizing the performance of wireless sensor networks in terms of area coverage is a critical issue for the successful operation of every wireless sensor network. This article pursues the maximization of area coverage and area k-coverage by using computational intelligence algorithms, namely a genetic algorithm and a particle swarm optimization algorithm. Their performance was evaluated via comparative simulation tests, made not only against each other but also against two other well-known algorithms, and the appraisal was supported by statistical testing. The test results, which confirmed the efficacy of the proposed algorithms, were analyzed and concluding remarks were drawn.
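Area coverage and k-coverage are typically estimated by sampling the region on a grid and counting how many sensors reach each sample point; a GA or PSO then treats the node coordinates as the decision variables and this fraction as the fitness. A generic sketch with hypothetical field size, node count and sensing radius:

```python
import numpy as np

def coverage_fraction(nodes, radius, k=1, side=100.0, grid=50):
    """Fraction of grid points inside the square [0, side]^2 that lie
    within the sensing radius of at least k sensor nodes."""
    xs = np.linspace(0, side, grid)
    gx, gy = np.meshgrid(xs, xs)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)                   # (grid^2, 2) sample points
    d = np.linalg.norm(pts[:, None, :] - nodes[None, :, :], axis=2)    # point-to-node distances
    covered_by = (d <= radius).sum(axis=1)                             # sensors reaching each point
    return np.mean(covered_by >= k)

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(20, 2))     # 20 randomly placed sensors (hypothetical)
print("1-coverage:", round(coverage_fraction(nodes, radius=15), 3))
print("2-coverage:", round(coverage_fraction(nodes, radius=15, k=2), 3))
# A GA/PSO would treat the flattened node coordinates as the chromosome/particle
# and use coverage_fraction as the fitness to maximize.
```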
7

Huang, Chien-Feng, Chi-Jen Hsu, Chi-Chung Chen, Bao Rong Chang, and Chen-An Li. "An Intelligent Model for Pairs Trading Using Genetic Algorithms." Computational Intelligence and Neuroscience 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/939606.

Abstract:
Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.
8

Aizawa, Akiko N., and Benjamin W. Wah. "Scheduling of Genetic Algorithms in a Noisy Environment." Evolutionary Computation 2, no. 2 (June 1994): 97–122. http://dx.doi.org/10.1162/evco.1994.2.2.97.

Abstract:
In this paper, we develop new methods for adjusting configuration parameters of genetic algorithms operating in a noisy environment. Such methods are related to the scheduling of resources for tests performed in genetic algorithms. Assuming that the population size is given, we address two problems related to the design of efficient scheduling algorithms that are specifically important in noisy environments. First, we study the duration-scheduling problem, which is related to dynamically setting the duration of each generation. Second, we study the sample-allocation problem, which entails the adaptive determination of the number of evaluations taken from each candidate in a generation. In our approach, we model the search process as a statistical selection process and derive equations useful for these problems. Our results show that our adaptive procedures improve the performance of genetic algorithms over that of commonly used static ones.
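The sample-allocation idea can be illustrated very simply: average several noisy evaluations per candidate and spend additional evaluations only on candidates whose estimated fitness is statistically indistinguishable from the current best. The sketch below uses a 2-SEM overlap rule as a stand-in for the statistical selection model of the paper; the objective and noise level are hypothetical.

```python
import math
import random
import statistics

def noisy_fitness(x):
    # True objective plus Gaussian observation noise (toy example).
    return -(x - 0.3) ** 2 + random.gauss(0, 0.05)

def evaluate(x, n):
    samples = [noisy_fitness(x) for _ in range(n)]
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / math.sqrt(n)
    return mean, sem, samples

random.seed(3)
candidates = [random.random() for _ in range(8)]
stats = {x: evaluate(x, 5) for x in candidates}     # base allocation: 5 samples each
best = max(stats, key=lambda x: stats[x][0])

for x in candidates:
    if x is best:
        continue
    mean, sem, samples = stats[x]
    b_mean, b_sem, _ = stats[best]
    # Allocate extra samples only where the candidate is statistically
    # indistinguishable from the current best (overlapping ~2-SEM intervals).
    if mean + 2 * sem >= b_mean - 2 * b_sem:
        samples += [noisy_fitness(x) for _ in range(20)]
        stats[x] = (statistics.mean(samples),
                    statistics.stdev(samples) / math.sqrt(len(samples)), samples)

for x, (mean, sem, s) in sorted(stats.items(), key=lambda kv: -kv[1][0]):
    print(f"x={x:.2f}  mean={mean:+.3f}  sem={sem:.3f}  n={len(s)}")
```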
9

Kang, Jae Youn, Byung Ik Choi, Hak Joo Lee, Sang Rok Lee, Joo Sung Kim, and Kee Joo Kim. "Genetic Algorithm Application in Multiaxial Fatigue Criteria Computation." International Journal of Modern Physics B 17, no. 08n09 (April 10, 2003): 1678–83. http://dx.doi.org/10.1142/s0217979203019502.

Abstract:
Both critical plane and stress invariant approaches are used to evaluate fatigue limit criteria for machine components subjected to non-proportional cyclic loading. Critical plane methods require finding the smallest circle enclosing all the tips of the shear stress vectors acting on the critical plane. In stress invariant methods, the maximum amplitude of the second invariant of the stress deviator should be determined. In this paper, previous algorithms for constructing the minimum circumscribed circle or hyper-sphere are briefly reviewed and a method using a genetic algorithm is proposed.
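The minimum circumscribed circle problem behind the critical plane approach is: find the centre that minimizes the maximum distance to the shear stress tips; that maximum distance is the radius. Below is a toy sketch of solving it with a simple evolutionary search over candidate centres, using illustrative 2-D data rather than the paper's stress histories.

```python
import math
import random

random.seed(7)
# Stress-vector tips on the critical plane (toy data).
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(40)]

def radius(center):
    # Objective: radius of the circle centred at `center` enclosing all points.
    cx, cy = center
    return max(math.hypot(px - cx, py - cy) for px, py in points)

def mutate(center, sigma):
    return (center[0] + random.gauss(0, sigma), center[1] + random.gauss(0, sigma))

pop = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
for gen in range(150):
    pop.sort(key=radius)
    parents = pop[:10]
    sigma = 0.5 * 0.97 ** gen                         # slowly shrinking mutation step
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)  # arithmetic crossover
        children.append(mutate(mid, sigma))
    pop = parents + children

best = min(pop, key=radius)
print(f"center=({best[0]:.3f}, {best[1]:.3f})  radius={radius(best):.3f}")
```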
10

Marcek, Dusan. "Some statistical and CI models to predict chaotic high-frequency financial data." Journal of Intelligent & Fuzzy Systems 39, no. 5 (November 19, 2020): 6419–30. http://dx.doi.org/10.3233/jifs-189107.

Abstract:
To forecast time series data, two methodological frameworks, statistical and computational intelligence modelling, are considered. The statistical approach is based on the theory of invertible ARIMA (Auto-Regressive Integrated Moving Average) models with the Maximum Likelihood (ML) estimation method. As a competitive tool to the statistical forecasting models, we use the popular classic neural network (NN) of perceptron type. To train the NN, the Back-Propagation (BP) algorithm and heuristics such as the genetic and micro-genetic algorithms (GA and MGA) are implemented on a large data set. A comparative analysis of the selected learning methods is performed and evaluated. From the experiments performed we find that the optimal population size is likely to be 20, giving the lowest training time of all NNs trained by the evolutionary algorithms, while the prediction accuracy is lower but still acceptable to managers.
11

Silva Arantes, Jesimar da, Márcio da Silva Arantes, Claudio Fabiano Motta Toledo, Onofre Trindade Júnior, and Brian Charles Williams. "Heuristic and Genetic Algorithm Approaches for UAV Path Planning under Critical Situation." International Journal on Artificial Intelligence Tools 26, no. 01 (February 2017): 1760008. http://dx.doi.org/10.1142/s0218213017600089.

Abstract:
The present paper applies heuristic and genetic algorithm approaches to the path planning problem for Unmanned Aerial Vehicles (UAVs) during an emergency landing, without putting people and property at risk. Path re-planning can be caused by critical situations such as equipment failures or extreme environmental events, which lead the current UAV mission to be aborted by executing an emergency landing. This path planning problem is introduced through a mathematical formulation in which all problem constraints are properly described. Planner algorithms must define a new path to land the UAV subject to the problem constraints. Three path planning approaches are introduced: a greedy heuristic, a genetic algorithm and a multi-population genetic algorithm. The greedy heuristic aims to find feasible paths quickly, while the genetic algorithms are able to return better quality solutions within a reasonable computational time. These methods are evaluated over a large set of scenarios with different levels of difficulty. Simulations are also conducted using the FlightGear simulator, where the UAV's behaviour is evaluated for different wind velocities and wind directions. Statistical analysis reveals that combining the greedy heuristic with the genetic algorithms is a good strategy for this problem.
12

BILLHARDT, HOLGER, DANIEL BORRAJO, and VICTOR MAOJO. "LEARNING RETRIEVAL EXPERT COMBINATIONS WITH GENETIC ALGORITHMS." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 11, no. 01 (February 2003): 87–113. http://dx.doi.org/10.1142/s0218488503001965.

Abstract:
The goal of information retrieval (IR) is to provide models and systems that help users to identify the documents relevant to their information needs. Extensive research has been carried out to develop retrieval methods that achieve this goal. These IR techniques range from purely syntax-based, considering only frequencies of words, to more semantics-aware approaches. However, it seems clear that there is no single method that works equally well on all collections and for all queries. Prior work suggests that combining the evidence from multiple retrieval experts can achieve significant improvements in retrieval effectiveness. A common problem of expert combination approaches is the selection of both the experts to be combined and the combination function. In most studies the experts are selected from a rather small set of candidates using some heuristics. Thus, only a reduced number of possible combinations is considered and other possibly better solutions are left out. In this paper we propose the use of genetic algorithms to find a suboptimal combination of experts for the document collection at hand. Our approach automatically determines both the experts to be combined and the parameters of the combination function. Because we learn this combination for each specific document collection, this approach allows us to automatically adjust the IR system to specific user needs. To learn retrieval strategies that generalize well to new queries, we propose a fitness function that is based on the statistical significance of the average precision obtained on a set of training queries. We test and evaluate the approach on four classical text collections. The results show that the learned combination strategies perform better than any of the individual methods and that genetic algorithms provide a viable method to learn expert combinations. The experiments also evaluate the use of a semantic indexing approach, the context vector model, in combination with classical word matching techniques.
13

Mramba, Lazarus, and Salvador Gezan. "Evaluating Algorithm Efficiency for Optimizing Experimental Designs with Correlated Data." Algorithms 11, no. 12 (December 18, 2018): 212. http://dx.doi.org/10.3390/a11120212.

Abstract:
The search for efficient methods and procedures to optimize experimental designs is a vital process in field trials that is often challenged by computational bottlenecks. Most existing methods ignore the presence of some form of correlations in the data to simplify the optimization process at the design stage. This study explores several algorithms for improving field experimental designs using a linear mixed models statistical framework adjusting for both spatial and genetic correlations based on A- and D-optimality criteria. Relative design efficiencies are estimated for an array of algorithms including pairwise swap, genetic neighborhood, and simulated annealing and evaluated with varying levels of heritabilities, spatial and genetic correlations. Initial randomized complete block designs were generated using a stochastic procedure and can also be imported directly from other design software. Results showed that at a spatial correlation of 0.6 and a heritability of 0.3, under the A-optimality criterion, both simulated annealing and simple pairwise algorithms achieved the highest design efficiencies of 7.4% among genetically unrelated individuals, implying a reduction in average variance of the random treatment effects by 7.4% when the algorithm was iterated 5000 times. In contrast, results under the D-optimality criterion indicated that simulated annealing had the lowest design efficiency. The simple pairwise algorithm consistently maintained the highest design efficiencies in all evaluated conditions. Design efficiencies for experiments with full-sib families decreased with increasing heritability. The number of successful swaps appeared to decrease with increasing heritability and was highest for both simulated annealing and simple pairwise algorithms, and lowest for the genetic neighborhood algorithm.
14

Jiang, Zhenni, and Xiyu Liu. "A Novel Consensus Fuzzy K-Modes Clustering Using Coupling DNA-Chain-Hypergraph P System for Categorical Data." Processes 8, no. 10 (October 21, 2020): 1326. http://dx.doi.org/10.3390/pr8101326.

Abstract:
In this paper, a data clustering method named consensus fuzzy k-modes clustering is proposed to improve the performance of clustering for categorical data. At the same time, a coupling DNA-chain-hypergraph P system is constructed to realize the clustering process. This P system can prevent the clustering algorithm from falling into a local optimum and realizes the clustering process with implicit parallelism. The consensus fuzzy k-modes algorithm combines the advantages of the fuzzy k-modes algorithm, the weight fuzzy k-modes algorithm and the genetic fuzzy k-modes algorithm. The fuzzy k-modes algorithm can realize a soft partition, which is closer to reality, but treats all variables equally. The weight fuzzy k-modes algorithm introduced a weight vector, which strengthens basic k-modes clustering by associating higher weights with features useful in the analysis. These two methods are only improvements of the k-modes algorithm itself, so the genetic k-modes algorithm was proposed, which uses genetic operations in the clustering process. In this paper, we examine these three kinds of k-modes algorithms and further introduce DNA genetic optimization operations in the final consensus process. Finally, we conduct experiments on seven UCI datasets and compare the clustering results with another four categorical clustering algorithms. The experimental results and statistical tests show that our method obtains better clustering results than the compared clustering algorithms.
15

Marcek, Dusan. "Time Series Analysis and Data Prediction of Large Databases: An Application to Electricity Demand Prediction." Advanced Materials Research 811 (September 2013): 401–6. http://dx.doi.org/10.4028/www.scientific.net/amr.811.401.

Abstract:
We evaluate statistical and machine learning methods for half-hourly 1-step-ahead electricity demand prediction using Australian electricity data. We show that the machine learning methods, which use autocorrelation feature selection with Back-Propagation Neural Networks and Linear Regression as prediction algorithms, outperform the statistical method Exponential Smoothing and also a number of baselines. We analyze the effect of time of day on the prediction error and show that there are time intervals associated with higher and lower errors, and that the prediction methods also differ in their accuracy during the different time intervals. This analysis provides the foundation for constructing a hybrid prediction model that achieves a lower prediction error. We also show that an RBF neural network trained by a genetic algorithm can achieve better prediction results than the classic one. The increased transparency of networks through genetic evolution development features and granular computation is another essential topic promoted by knowledge discovery in large databases.
16

LIN, C. Y., and A. J. LEE. "ESTIMATION OF ADDITIVE AND NONADDITIVE GENETIC VARIANCES IN NONINBRED POPULATIONS UNDER SIRE OR FULLSIB MODEL." Canadian Journal of Animal Science 69, no. 1 (March 1, 1989): 61–68. http://dx.doi.org/10.4141/cjas89-009.

Abstract:
The separation of additive and nonadditive genetic variances has been a problem for animal breeding researchers because conventional methods of statistical analyses (least squares or ANOVA type) were not able to accomplish this task. Henderson presented computing algorithms for restricted maximum likelihood (REML) estimation of additive and nonadditive genetic variances from an animal model for noninbred populations. Unfortunately, application of this algorithm in practice requires extensive computing. This study extends Henderson's methodology to estimate additive genetic variance independently of nonadditive genetic variances under halfsib (sire), fullsib nested and fullsib cross-classified models. A numerical example illustrates the REML estimation of additive and additive-by-additive genetic variances using a sire model. Key words: Genetic variance, additive, nonadditive, dairy
17

GONZALEZ-MONROY, LUIS I., and A. CORDOBA. "OPTIMIZATION OF ENERGY SUPPLY SYSTEMS: SIMULATED ANNEALING VERSUS GENETIC ALGORITHM." International Journal of Modern Physics C 11, no. 04 (June 2000): 675–90. http://dx.doi.org/10.1142/s0129183100000638.

Abstract:
We have applied two methods (simulated annealing and genetic algorithms) to search the solution of a problem of optimization with constraints in order to determine the best way to fulfill different energy demands using a set of facilities of energy transformation and storage. We have introduced a computational efficiency factor that measures the efficiency of the optimization algorithm and, as a result, we can conclude that for short computation times, genetic algorithms are more efficient than simulated annealing when demand profiles are not very long, whereas the latter is more efficient than the former for long computation time or for big demand profiles.
18

Popelka, Ondřej, and Jiří Šťastný. "WWW portal usage analysis using genetic algorithms." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 57, no. 6 (2009): 201–8. http://dx.doi.org/10.11118/actaun200957060201.

Abstract:
The article proposes a new method suitable for advanced analysis of web portal visits. This is part of retrieving information and knowledge from web usage data (web usage mining). Such information is necessary in order to gain better insight into visitors' needs and consumer behaviour in general. By leveraging this information a company can optimize the organization of its internet presentations and offer a better end-user experience. The proposed approach uses Grammatical Evolution, which is a computational method based on genetic algorithms. Grammatical Evolution uses a context-free grammar to generate the solution in an arbitrary reusable form. This allows us to describe visitors' behaviour in different manners depending on the desired further processing. In this article we use a description in a procedural programming language. Web server access log files are used as source data. The extraction of behaviour patterns can currently be solved using statistical analysis, specifically methods based on sequential analysis. Our objective is to develop an alternative algorithm. The article further describes the basic algorithms of two-level grammatical evolution; this involves basic Grammatical Evolution and Differential Evolution, which forms the second phase of the computation. Grammatical Evolution is used to generate the basic structure of the solution, in the form of a part of application code. Differential Evolution is used to find optimal parameters for this solution, namely the specific pages visited by a random visitor. The grammar used to conduct the experiments is described along with explanations of the links to the actual implementation of the algorithm. Furthermore, the fitness function is described, together with the reasons that led to its current shape. Finally, the process of analyzing and filtering the raw input data is described, as it is a vital part of obtaining reasonable results.
19

Horton, Pascal, Michel Jaboyedoff, and Charles Obled. "Global Optimization of an Analog Method by Means of Genetic Algorithms." Monthly Weather Review 145, no. 4 (March 13, 2017): 1275–94. http://dx.doi.org/10.1175/mwr-d-16-0093.1.

Abstract:
Analog methods are based on a statistical relationship between synoptic meteorological variables (predictors) and local weather (predictand, to be predicted). This relationship is defined by several parameters, which are often calibrated by means of a semiautomatic sequential procedure. This calibration approach is fast, but has strong limitations. It proceeds through successive steps, and thus cannot handle all parameter dependencies. Furthermore, it cannot automatically optimize some parameters, such as the selection of pressure levels and temporal windows (hours of the day) at which the predictors are compared. To overcome these limitations, the global optimization technique of genetic algorithms is considered, which can jointly optimize all parameters of the method, and get closer to a global optimum, by taking into account the dependencies of the parameters. Moreover, it can objectively calibrate parameters that were previously assessed manually and can take into account new degrees of freedom. However, genetic algorithms must be tailored to the problem under consideration. Multiple combinations of algorithms were assessed, and new algorithms were developed (e.g., the chromosome of adaptive search radius, which is found to be very robust), in order to provide recommendations regarding the use of genetic algorithms for optimizing several variants of analog methods. A global optimization approach provides new perspectives for the improvement of analog methods, and for their application to new regions or new predictands.
20

DE LOS CAMPOS, GUSTAVO, DANIEL GIANOLA, GUILHERME J. M. ROSA, KENT A. WEIGEL, and JOSÉ CROSSA. "Semi-parametric genomic-enabled prediction of genetic values using reproducing kernel Hilbert spaces methods." Genetics Research 92, no. 4 (August 2010): 295–308. http://dx.doi.org/10.1017/s0016672310000285.

Abstract:
Prediction of genetic values is a central problem in quantitative genetics. Over many decades, such predictions have been successfully accomplished using information on phenotypic records and family structure usually represented with a pedigree. Dense molecular markers are now available in the genome of humans, plants and animals, and this information can be used to enhance the prediction of genetic values. However, the incorporation of dense molecular marker data into models poses many statistical and computational challenges, such as how models can cope with the genetic complexity of multi-factorial traits and with the curse of dimensionality that arises when the number of markers exceeds the number of data points. Reproducing kernel Hilbert spaces regressions can be used to address some of these challenges. The methodology allows regressions on almost any type of prediction sets (covariates, graphs, strings, images, etc.) and has important computational advantages relative to many parametric approaches. Moreover, some parametric models appear as special cases. This article provides an overview of the methodology, a discussion of the problem of kernel choice with a focus on genetic applications, algorithms for kernel selection and an assessment of the proposed methods using a collection of 599 wheat lines evaluated for grain yield in four mega environments.
21

D'Angelo, Donna J., Judy L. Meyer, Leslie M. Howard, Stanley V. Gregory, and Linda R. Ashkenas. "Ecological uses for genetic algorithms: predicting fish distributions in complex physical habitats." Canadian Journal of Fisheries and Aquatic Sciences 52, no. 9 (September 1, 1995): 1893–908. http://dx.doi.org/10.1139/f95-782.

Abstract:
Genetic algorithms (GA) are artificial intelligence techniques based on the theory of evolution that through the process of natural selection evolve formulae to solve problems or develop control strategies. We designed a GA to examine relationships between stream physical characteristics and trout distribution data for 3rd-, 5th-, and 7th-order stream sites in the Cascade Mountains, Oregon. Although traditional multivariate statistical techniques can perform this particular task, GAs are not constrained by assumptions of independence and linearity and therefore provide a useful alternative. To help gauge the effectiveness of the GA, we compared GA results with results from proportional trout distributions and multiple linear regression equations. The GA was a more effective predictor of trout distributions (paired t test, P < 0.05) than other methods and also provided new insights into relationships between stream geomorphology and trout distributions. Most importantly, GA equations emphasized the nonindependence of stream channel units by revealing that (i) the factors that influence trout distributions change along a downstream continuum, and (ii) channel unit sequence can be critical. Superior performance of the GA, along with the new information it provided, indicates that genetic algorithms may provide a useful alternative or supportive method to statistical techniques.
22

Sun, Hong Tao, Yong Shou Dai, Fang Wang, and Xing Peng. "Seismic Wavelet Estimation Using High-Order Statistics and Chaos-Genetic Algorithm." Advanced Materials Research 433-440 (January 2012): 4241–47. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.4241.

Abstract:
Accurate and effective seismic wavelet estimation is of extreme significance in high-resolution, high signal-to-noise-ratio and high-fidelity seismic data processing. Emerging non-linear optimization methods enhance the applied potential of the statistical approach to seismic wavelet extraction. Because non-linear optimization algorithms for seismic wavelet estimation suffer from low computational efficiency and low precision, a Chaos-Genetic Algorithm (CGA) based on the cat map is proposed for multi-dimensional and multi-modal non-linear optimization. The performance of the CGA is first verified on four test functions and then applied to seismic wavelet estimation. Theoretical analysis and numerical simulation demonstrate that the CGA has better convergence speed and convergence performance.
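The cat map mentioned above is the area-preserving chaotic map (x, y) → ((x + y) mod 1, (x + 2y) mod 1); chaos-genetic algorithms commonly use such sequences in place of pseudo-random numbers, for example to spread the initial population over the search box. The sketch below only illustrates that initialization step, with generic parameter bounds assumed rather than the paper's wavelet model.

```python
def cat_map_sequence(x0, y0, n):
    """Arnold's cat map: (x, y) -> ((x + y) mod 1, (x + 2y) mod 1)."""
    x, y = x0, y0
    seq = []
    for _ in range(n):
        x, y = (x + y) % 1.0, (x + 2 * y) % 1.0
        seq.append(x)
    return seq

def chaotic_population(pop_size, dim, lower, upper, x0=0.123, y0=0.456):
    """Map a chaotic sequence in [0, 1) onto the search box [lower, upper]^dim."""
    flat = cat_map_sequence(x0, y0, pop_size * dim)
    scaled = [lower + v * (upper - lower) for v in flat]
    return [scaled[i * dim:(i + 1) * dim] for i in range(pop_size)]

# e.g., 10 candidate parameter vectors of dimension 4 in [-1, 1] (hypothetical bounds)
for ind in chaotic_population(10, 4, -1.0, 1.0)[:3]:
    print([round(v, 3) for v in ind])
```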
23

Sen, G., and E. Akyol. "A genetic-algorithm approach for assessing the liquefaction potential of sandy soils." Natural Hazards and Earth System Sciences 10, no. 4 (April 9, 2010): 685–98. http://dx.doi.org/10.5194/nhess-10-685-2010.

Abstract:
The determination of liquefaction potential requires taking into account a large number of parameters, which creates a complex nonlinear structure of the liquefaction phenomenon. Conventional methods rely on simple statistical and empirical relations or charts; however, they cannot characterise these complexities. Genetic algorithms are suited to solving these types of problems. A genetic algorithm-based model has been developed to determine liquefaction potential using Cone Penetration Test datasets derived from case studies of sandy soils. Software has been developed that uses genetic algorithms for parameter selection and assessment of liquefaction potential. Several estimation functions for the assessment of a Liquefaction Index were then generated from the dataset. The generated Liquefaction Index estimation functions were evaluated by assessing the training and test data. The suggested formulation estimates liquefaction occurrence with significant accuracy. In addition, a parametric study on the liquefaction index curves shows a good relation with the physical behaviour. The total number of misestimated cases was only 7.8% for the proposed method, which is quite low when compared to another commonly used method.
24

Mahdavi, Ali, Mohsen Najarchi, Emadoddin Hazaveie, Seyed Mohammad Mirhosayni Hazave, and Seyed Mohammad Mahdai Najafizadeh. "Comparison of neural networks and genetic algorithms to determine missing precipitation data (Case study: the city of Sari)." Revista de la Universidad del Zulia 11, no. 29 (February 8, 2020): 114–28. http://dx.doi.org/10.46925//rdluz.29.08.

Abstract:
Neural networks and genetic programming are investigated as new methods for predicting rainfall in the catchment area of the city of Sari. Various methods are used for prediction, such as time series models, artificial neural networks, fuzzy logic, neuro-fuzzy systems, and genetic programming. Results were evaluated based on the statistical indicators of root mean square error and correlation coefficient. For the optimal genetic programming model, the correlation coefficient and root mean square error were 0.973 and 0.034, respectively, for training, compared with 0.964 and 0.057 for the optimal neural network model. Genetic programming was more accurate than the artificial neural networks and is recommended as a good way to make accurate predictions.
25

Drachal, Krzysztof, and Michał Pawłowski. "A Review of the Applications of Genetic Algorithms to Forecasting Prices of Commodities." Economies 9, no. 1 (January 19, 2021): 6. http://dx.doi.org/10.3390/economies9010006.

Abstract:
This paper is focused on the concise review of the specific applications of genetic algorithms in forecasting commodity prices. Genetic algorithms seem relevant in this field for many reasons. For instance, they lack the necessity to assume a certain statistical distribution, and they are efficient in dealing with non-stationary data. Indeed, the latter case is very frequent while forecasting the commodity prices of, for example, crude oil. Moreover, growing interest in their application has been observed recently. In parallel, researchers are also interested in constructing hybrid genetic algorithms (i.e., joining them with other econometric methods). Such an approach helps to reduce each of the individual method flaws and yields promising results. In this article, three groups of commodities are discussed: energy commodities, metals, and agricultural products. The advantages and disadvantages of genetic algorithms and their hybrids are presented, and further conclusions concerning their possible improvements and other future applications are discussed. This article fills a significant literature gap, focusing on particular financial and economic applications. In particular, it combines three important—yet not often jointly discussed—topics: genetic algorithms, their hybrids with other tools, and commodity price forecasting issues.
26

Juhola, M., S. Lammi, K. Viikki, and J. Laurikkala. "Comparison of Genetic Algorithms and Other Classification Methods in the Diagnosis of Female Urinary Incontinence." Methods of Information in Medicine 38, no. 02 (1999): 125–31. http://dx.doi.org/10.1055/s-0038-1634175.

Abstract:
Galactica, a newly developed machine-learning system that utilizes a genetic algorithm for learning, was compared with discriminant analysis, logistic regression, k-means cluster analysis, a C4.5 decision-tree generator and a random bit climber hill-climbing algorithm. The methods were evaluated in the diagnosis of female urinary incontinence in terms of prediction accuracy of classifiers, on the basis of patient data. The best methods were discriminant analysis, logistic regression, C4.5 and Galactica. Practically no statistically significant differences existed between the prediction accuracy of these classification methods. We consider that machine-learning systems C4.5 and Galactica are preferable for automatic construction of medical decision aids, because they can cope with missing data values directly and can present a classifier in a comprehensible form. Galactica performed nearly as well as C4.5. The results are in agreement with the results of earlier research, indicating that genetic algorithms are a competitive method for constructing classifiers from medical data.
27

Vayenas, Nick, and Sihong Peng. "Reliability analysis of underground mining equipment using genetic algorithms." Journal of Quality in Maintenance Engineering 20, no. 1 (March 4, 2014): 32–50. http://dx.doi.org/10.1108/jqme-02-2013-0006.

Abstract:
Purpose – While increased mechanization and automation make considerable contributions to mine productivity, unexpected equipment failures and imperfect planned or routine maintenance prohibit the maximum possible utilization of sophisticated mining equipment and require a significant amount of extra capital investment. Traditional preventive/planned maintenance is usually scheduled at a fixed interval based on maintenance personnel's experience, and it can result in decreasing reliability. This paper deals with reliability analysis and prediction for mining machinery. A software tool called GenRel is discussed with its theoretical background, applied algorithms and its current improvements. In GenRel, it is assumed that failures of mining equipment caused by an array of factors (e.g. age of equipment, operating environment) follow the biological evolution theory. GenRel then simulates the failure occurrences during a time period of interest based on Genetic Algorithms (GAs) combined with a number of statistical procedures. The paper also discusses a case study of two mine hoists. The purpose of this paper is to investigate whether or not GenRel can be applied for reliability analysis of mine hoists in real life.
Design/methodology/approach – Statistical testing methods are applied to examine the similarity between the predicted data set and the real-life data set in the same time period. The data employed in this case study are compiled from two mine hoists from the Sudbury area in Ontario, Canada. Potential applications of the reliability assessment results yielded by GenRel include reliability-centered maintenance planning and production simulation.
Findings – The case studies shown in this paper demonstrate successful applications of the GA-based software tool GenRel to analyze and predict dynamic reliability characteristics of two hoist systems. Two separate case studies in Mine A and Mine B at a time interval of three months both present acceptable prediction results at a given level of confidence, 5 percent.
Practical implications – Potential applications of the reliability assessment results yielded by GenRel include reliability-centered maintenance planning and production simulation.
Originality/value – Compared to conventional mathematical models, GAs offer several key advantages. To the best of the authors' knowledge, there has not been a wide application of GAs in hoist reliability assessment and prediction. In addition, the authors bring discrete distribution functions to the software tool (GenRel) for the first time and significantly improve computing efficiency. The results of the case studies demonstrate successful application of GenRel in assessing and predicting hoist reliability, and this may lead to better preventative maintenance management in the industry.
28

XIAO, HANGUANG, CONGZHONG CAI, and YUZONG CHEN. "MILITARY VEHICLE CLASSIFICATION VIA ACOUSTIC AND SEISMIC SIGNALS USING STATISTICAL LEARNING METHODS." International Journal of Modern Physics C 17, no. 02 (February 2006): 197–212. http://dx.doi.org/10.1142/s0129183106008789.

Abstract:
It is a difficult and important task to classify the types of military vehicles using the acoustic and seismic signals they generate. To improve classification accuracy and reduce computing time and memory size, we investigated different pre-processing technologies and feature extraction and selection methods. The Short Time Fourier Transform (STFT) was employed for feature extraction. Genetic Algorithms (GA) and Principal Component Analysis (PCA) were used for further feature selection and extraction. A new feature vector construction method is proposed that unites PCA with another feature selection method. A K-Nearest Neighbor classifier (KNN) and Support Vector Machines (SVM) were used for classification. The experimental results showed that the accuracies of KNN and SVM were noticeably affected by the window size used to frame the time series of the acoustic and seismic signals. The classification results indicated that the performance of SVM was superior to that of KNN. The comparison of the four feature selection and extraction methods showed that the proposed method is a simple, non-time-consuming, and reliable technique for feature selection and helps the SVM classifier achieve better results than solely using PCA, GA, or their combination.
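GA-based feature selection of the kind described usually encodes a binary mask over the candidate features and uses cross-validated classifier accuracy as the fitness. The sketch below illustrates this on synthetic data with scikit-learn's SVM; the dataset, population size and per-feature penalty are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=6, random_state=0)

def fitness(mask):
    # Cross-validated SVM accuracy on the selected feature subset,
    # minus a small penalty per feature to favour compact subsets.
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.002 * mask.sum()

pop = rng.integers(0, 2, size=(20, X.shape[1]))          # binary masks
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]               # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])                  # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.03               # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```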
29

Lee, Michael, and Ting Hu. "Computational Methods for the Discovery of Metabolic Markers of Complex Traits." Metabolites 9, no. 4 (April 4, 2019): 66. http://dx.doi.org/10.3390/metabo9040066.

Abstract:
Metabolomics uses quantitative analyses of metabolites from tissues or bodily fluids to acquire a functional readout of the physiological state. Complex diseases arise from the influence of multiple factors, such as genetics, environment and lifestyle. Since genes, RNAs and proteins converge onto the terminal downstream metabolome, metabolomics datasets offer a rich source of information in a complex and convoluted presentation. Thus, powerful computational methods capable of deciphering the effects of many upstream influences have become increasingly necessary. In this review, the workflow of metabolic marker discovery is outlined from metabolite extraction to model interpretation and validation. Additionally, current metabolomics research in various complex disease areas is examined to identify gaps and trends in the use of several statistical and computational algorithms. Then, we highlight and discuss three advanced machine-learning algorithms, specifically ensemble learning, artificial neural networks, and genetic programming, that are currently less visible, but are budding with high potential for utility in metabolomics research. With an upward trend in the use of highly-accurate, multivariate models in the metabolomics literature, diagnostic biomarker panels of complex diseases are more recently achieving accuracies approaching or exceeding traditional diagnostic procedures. This review aims to provide an overview of computational methods in metabolomics and promote the use of up-to-date machine-learning and computational methods by metabolomics researchers.
30

LI, JING, and TAO JIANG. "A SURVEY ON HAPLOTYPING ALGORITHMS FOR TIGHTLY LINKED MARKERS." Journal of Bioinformatics and Computational Biology 06, no. 01 (February 2008): 241–59. http://dx.doi.org/10.1142/s0219720008003369.

Abstract:
Two grand challenges in the postgenomic era are to develop a detailed understanding of heritable variation in the human genome, and to develop robust strategies for identifying the genetic contribution to diseases and drug responses. Haplotypes of single nucleotide polymorphisms (SNPs) have been suggested as an effective representation of human variation, and various haplotype-based association mapping methods for complex traits have been proposed in the literature. However, humans are diploid and, in practice, genotype data instead of haplotype data are collected directly. Therefore, efficient and accurate computational methods for haplotype reconstruction are needed and have recently been investigated intensively, especially for tightly linked markers such as SNPs. This paper reviews statistical and combinatorial haplotyping algorithms using pedigree data, unrelated individuals, or pooled samples.
31

Nezhadhosein, Saeed, Aghileh Heydari, and Reza Ghanbari. "A Modified Hybrid Genetic Algorithm for Solving Nonlinear Optimal Control Problems." Mathematical Problems in Engineering 2015 (2015): 1–21. http://dx.doi.org/10.1155/2015/139036.

Abstract:
Here, a two-phase algorithm is proposed for solving bounded continuous-time nonlinear optimal control problems (NOCP). In each phase of the algorithm, a modified hybrid genetic algorithm (MHGA) is applied, which performs a local search on offspring. In the first phase, a random initial population of control input values at the time nodes is constructed. Next, MHGA starts with this population. After phase 1, to achieve more accurate solutions, the number of time nodes is increased. The values of the associated new control inputs are estimated by linear interpolation (LI) or spline interpolation (SI), using the curves obtained from phase 1. In addition, to maintain diversity in the population, some additional individuals are added randomly. Next, in the second phase, MHGA restarts with the new population constructed by the above procedure and tries to improve the solutions obtained at the end of phase 1. We implement our proposed algorithm on 20 well-known benchmark and real-world problems, and the results are compared with some recently proposed algorithms. Moreover, two statistical approaches are considered for the comparison of the LI and SI methods and for a sensitivity analysis of the MHGA parameters.
32

Kingsley, Mark T., Timothy M. Straub, Douglas R. Call, Don S. Daly, Sharon C. Wunschel, and Darrell P. Chandler. "Fingerprinting Closely Related Xanthomonas Pathovars with Random Nonamer Oligonucleotide Microarrays." Applied and Environmental Microbiology 68, no. 12 (December 2002): 6361–70. http://dx.doi.org/10.1128/aem.68.12.6361-6370.2002.

Abstract:
Current bacterial DNA-typing methods are typically based on gel-based fingerprinting methods. As such, they access a limited complement of genetic information and many independent restriction enzymes or probes are required to achieve statistical rigor and confidence in the resulting pattern of DNA fragments. Furthermore, statistical comparison of gel-based fingerprints is complex and nonstandardized. To overcome these limitations of gel-based microbial DNA fingerprinting, we developed a prototype, 47-probe microarray consisting of randomly selected nonamer oligonucleotides. Custom image analysis algorithms and statistical tools were developed to automatically extract fingerprint profiles from microarray images. The prototype array and new image analysis algorithms were used to analyze 14 closely related Xanthomonas pathovars. Of the 47 probes on the prototype array, 10 had diagnostic value (based on a chi-squared test) and were used to construct statistically robust microarray fingerprints. Analysis of the microarray fingerprints showed clear differences between the 14 test organisms, including the separation of X. oryzae strains 43836 and 49072, which could not be resolved by traditional gel electrophoresis of REP-PCR amplification products. The proof-of-application study described here represents an important first step to high-resolution bacterial DNA fingerprinting with microarrays. The universal nature of the nonamer fingerprinting microarray and data analysis methods developed here also forms a basis for method standardization and application to the forensic identification of other closely related bacteria.
33

NOROUZZADEH, P., B. RAHMANI, and M. S. NOROUZZADEH. "FORECASTING SMOOTHED NON-STATIONARY TIME SERIES USING GENETIC ALGORITHMS." International Journal of Modern Physics C 18, no. 06 (June 2007): 1071–86. http://dx.doi.org/10.1142/s0129183107011133.

Abstract:
We introduce a kernel smoothing method to extract the global trend of a time series and remove short-time-scale variations and fluctuations from it. A multifractal detrended fluctuation analysis (MF-DFA) shows that the multifractal nature of the TEPIX returns time series is due to both the fatness of the probability density function of returns and long-range correlations between them. The MF-DFA results help us to understand how the genetic algorithm and kernel smoothing methods act. We then utilize a recently developed genetic algorithm to carry out successful forecasts of the trend in financial time series and to derive a functional form of the Tehran price index (TEPIX) that best approximates its time variability. The final model is mainly dominated by a linear relationship with the most recent past value, while contributions from nonlinear terms to the total forecasting performance are rather small.
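Kernel smoothing of a series, as used above to separate the global trend from short-time-scale fluctuations, can be written as a Nadaraya-Watson weighted average with a Gaussian kernel whose bandwidth sets the time scale that is removed. The sketch below runs on synthetic data (not TEPIX) and the bandwidth value is an arbitrary assumption.

```python
import numpy as np

def kernel_smooth(y, h):
    """Nadaraya-Watson smoother with a Gaussian kernel of bandwidth h (in samples)."""
    t = np.arange(len(y))
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)   # weight of sample j for time i
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(1)
n = 500
trend = np.cumsum(rng.normal(0, 0.2, n))     # slow stochastic trend
noise = rng.normal(0, 1.0, n)                # fast fluctuations
series = trend + noise                       # toy stand-in for a financial index series
smooth = kernel_smooth(series, h=20)         # estimated global trend
residual = series - smooth                   # short-time-scale part removed by smoothing
print("std of series  :", round(series.std(), 2))
print("std of residual:", round(residual.std(), 2))
```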
34

Yang, Yao Wen, and Ai Wei Miao. "Structural Parameters Identification Using PZT Sensors and Genetic Algorithms." Advanced Materials Research 79-82 (August 2009): 63–66. http://dx.doi.org/10.4028/www.scientific.net/amr.79-82.63.

Abstract:
The piezoelectric ceramic lead zirconate titanate (PZT) based electro-mechanical impedance (EMI) technique for structural health monitoring (SHM) has been successfully applied to various engineering systems [1-5]. In the traditional EMI method, statistical analysis methods such as root mean square deviation indices of the PZT electromechanical (EM) admittance are used as damage indicators, which makes it difficult to specify the effect of damage on structural properties. This paper proposes to use genetic algorithms (GAs) to identify the structural parameters according to the changes in the PZT admittance signature. The basic principle is that structural damage, especially local damage, is typically related to changes in the structural physical parameters. Therefore, recognizing the changes of structural parameters is an effective way to assess structural damage. Towards this goal, a model of driven point PZT EM admittance is established. In this model, the dynamic behavior of the structure is represented by a multiple degree of freedom (DOF) system. The EM admittance is formulated as a function of excitation frequency and the unknown structural parameters, i.e., the mass, stiffness and the damping coefficient of many single DOF elements. Using the GAs, the optimal values of structural parameters in the model can be back-calculated such that the EM admittance matches the target value. In practice, the target admittance is measured from experiments. In this paper, we use the calculated one as the target. For damage assessment, these optimal values obtained before and after the appearance of structural damage can be compared to study the effects of damage on the structural properties, which are specified to be stiffness and damping in this study. Furthermore, the identified structural parameters could be used to predict the remaining loading capacity of the structure, which serves the purpose of damage prognosis.
35

Alam, Tanweer, Shamimul Qamar, Amit Dixit, and Mohamed Benaida. "Genetic Algorithm: Reviews, Implementations, and Applications." International Journal of Engineering Pedagogy (iJEP) 10, no. 6 (December 8, 2020): 57. http://dx.doi.org/10.3991/ijep.v10i6.14567.

Abstract:
Nowadays the genetic algorithm (GA) is widely used in engineering pedagogy as an adaptive technology to learn and solve complex problems and issues. It is a meta-heuristic approach that is used to solve hybrid computation challenges. GA utilizes selection, crossover, and mutation operators to manage the search strategy effectively. This algorithm is derived from natural selection and genetics concepts. GA is an intelligent exploitation of random search, supported by historical data, to direct the search into regions of improved outcomes within a coverage framework. Such algorithms are widely used for maintaining high-quality solutions to optimization problems and search investigations. These techniques are recognized to be somewhat of a statistical investigation process used to search for a suitable solution or present an accurate strategy for challenges in optimization or searches. These techniques have been derived from natural selection and genetics principles. For random testing, historical information is used with intelligent exploitation to continue moving the search toward the area of improved features for processing of the outcomes. It is a category of evolutionary heuristics using behavioural-science-influenced methods such as an annuity, gene, preference, or combination (sometimes referred to as hybridization). This method has proved to be a valuable tool for finding solutions to optimization problems. In this paper, the author explores GAs, their role in engineering pedagogy, the emerging areas where they are used, and their implementation.
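The selection-crossover-mutation loop described above can be stated very compactly. The sketch below is a minimal binary GA on the standard OneMax toy objective (count the 1-bits), intended only to illustrate the operators, not any particular application from the paper.

```python
import random

GENES, POP, GENERATIONS = 40, 30, 60

def fitness(bits):
    return sum(bits)                                   # OneMax: count the 1s

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)     # selection

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for gen in range(GENERATIONS):
    new_pop = []
    while len(new_pop) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, GENES)               # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [1 - b if random.random() < 1 / GENES else b for b in child]  # bit-flip mutation
        new_pop.append(child)
    pop = new_pop

print("best fitness:", fitness(max(pop, key=fitness)), "of", GENES)
```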
36

Abdellahoum, Hamza, and Abdelmajid Boukra. "A Fuzzy Cooperative Approach to Resolve the Image Segmentation Problem." International Journal of Swarm Intelligence Research 12, no. 3 (July 2021): 188–214. http://dx.doi.org/10.4018/ijsir.2021070109.

Abstract:
The image segmentation problem is one of the most studied problems because it helps in several areas. In this paper, the authors propose new algorithms to resolve two problems, namely cluster detection and center initialization. The authors opt to use statistical methods to automatically determine the number of clusters and fuzzy set theory to start the algorithm with a near-optimal configuration. They use the image histogram information to determine the number of clusters and a cooperative approach involving three metaheuristics, the genetic algorithm (GA), the firefly algorithm (FA), and the biogeography-based optimization algorithm (BBO), to detect the cluster centers in the initialization step. The experimental study shows that, first, the proposed solution determines a near-optimal set of initial cluster centers, leading to good image segmentation compared to well-known methods; second, the number of clusters determined automatically by the proposed approach contributes to improving the image segmentation quality.
37

Mühlenbein, Heinz, and Robin Höns. "The Estimation of Distributions and the Minimum Relative Entropy Principle." Evolutionary Computation 13, no. 1 (March 2005): 1–27. http://dx.doi.org/10.1162/1063656053583469.

Abstract:
Estimation of Distribution Algorithms (EDA) have been proposed as an extension of genetic algorithms. In this paper, we explain the relationship of EDA to algorithms developed in statistics, artificial intelligence, and statistical physics. The major design issues are discussed within a general interdisciplinary framework. It is shown that maximum entropy approximations play a crucial role. All proposed algorithms try to minimize the Kullback-Leibler divergence KLD between the unknown distribution p(x) and a class q(x) of approximations. However, the Kullback-Leibler divergence is not symmetric. Approximations which suppose that the function to be optimized is additively decomposed (ADF) minimize KLD(q||p), while the methods which learn the approximate model from data minimize KLD(p||q). This minimization is identical to maximizing the log-likelihood. In the paper, three classes of algorithms are discussed. FDA uses the ADF to compute an approximate factorization of the unknown distribution. The factors are marginal distributions, whose values are computed from samples. The second class is represented by the Bethe-Kikuchi approach which has recently been rediscovered in statistical physics. Here the values of the marginals are computed from a difficult constrained minimization problem. The third class learns the factorization from the data. We analyze our learning algorithm LFDA in detail. It is shown that learning is faced with two problems: first, to detect the important dependencies between the variables, and second, to create an acyclic Bayesian network of bounded clique size.
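The asymmetry the authors exploit can be written out explicitly. In the discrete case (the notation here is chosen for illustration, not taken from the paper):

```latex
% Standard discrete-case definitions; the two directions differ in general.
\[
  \mathrm{KLD}(p \,\|\, q) = \sum_{x} p(x)\,\ln\frac{p(x)}{q(x)},
  \qquad
  \mathrm{KLD}(q \,\|\, p) = \sum_{x} q(x)\,\ln\frac{q(x)}{p(x)} .
\]
```

Minimizing KLD(p||q) over q amounts to maximizing the expected log-likelihood E_p[ln q(x)], which is the identity the abstract refers to.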
38

Ay, Ahmet, Dihong Gong, and Tamer Kahveci. "Network-based Prediction of Cancer under Genetic Storm." Cancer Informatics 13s3 (January 2014): CIN.S14025. http://dx.doi.org/10.4137/cin.s14025.

Abstract:
Classification of cancer patients using traditional methods is a challenging task in the medical practice. Owing to rapid advances in microarray technologies, currently expression levels of thousands of genes from individual cancer patients can be measured. The classification of cancer patients by supervised statistical learning algorithms using the gene expression datasets provides an alternative to the traditional methods. Here we present a new network-based supervised classification technique, namely the NBC method. We compare NBC to five traditional classification techniques (support vector machines (SVM), k-nearest neighbor (kNN), naïve Bayes (NB), C4.5, and random forest (RF)) using 50–300 genes selected by five feature selection methods. Our results on five large cancer datasets demonstrate that NBC method outperforms traditional classification techniques. Our analysis suggests that using symmetrical uncertainty (SU) feature selection method with NBC method provides the most accurate classification strategy. Finally, in-depth analysis of the correlation-based co-expression networks chosen by our network-based classifier in different cancer classes shows that there are drastic changes in the network models of different cancer types.
39

Huang, Chien-Feng, Tsung-Nan Hsieh, Bao Rong Chang, and Chih-Hsiang Chang. "A study of risk-adjusted stock selection models using genetic algorithms." Engineering Computations 31, no. 8 (October 28, 2014): 1720–31. http://dx.doi.org/10.1108/ec-11-2012-0293.

Abstract:
Purpose – Stock selection has long been identified as a challenging task. This line of research is highly contingent upon reliable stock ranking for successful portfolio construction. The purpose of this paper is to employ methods from computational intelligence (CI) to solve this problem more effectively. Design/methodology/approach – The authors develop a risk-adjusted strategy that improves upon previous stock selection models using two main risk measures – downside risk and variation in returns. Moreover, the authors employ the genetic algorithm to optimize the model parameters and select the input variables simultaneously. Findings – It is found that the proposed risk-adjusted methodology via maximum drawdown significantly outperforms the benchmark and improves the previous model in the performance of stock selection. Research limitations/implications – Future work considers an extensive study for the risk-adjusted model using other risk measures such as Value at Risk, Block Maxima, etc. The authors also intend to use financial data from other countries, if available, in order to assess whether the method is generally applicable and robust across different environments. Practical implications – The authors expect this risk-adjusted model to advance CI research for financial engineering and provide a promising solution to stock selection in practice. Originality/value – The originality of this work is that maximum drawdown is successfully incorporated into the CI-based stock selection model, whose effectiveness is validated with strong statistical evidence.
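For readers unfamiliar with the risk measure emphasized above, the sketch below computes the maximum drawdown of a return series; the return data are synthetic and the computation is the standard definition rather than anything taken from the paper.

```python
# Sketch: maximum drawdown of a return series, the risk measure the study
# incorporates into its GA-based stock selection model. The return series
# below is synthetic.
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.02, size=1000)        # hypothetical daily returns

equity = np.cumprod(1.0 + returns)                   # equity curve
running_peak = np.maximum.accumulate(equity)
drawdowns = equity / running_peak - 1.0              # <= 0 at every point
max_drawdown = drawdowns.min()

print(f"maximum drawdown: {max_drawdown:.2%}")
```

In a GA-based stock selection model, a fitness function could, for example, penalize a portfolio's return by a multiple of |maximum drawdown|.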
40

Camacho, Francy Liliana, Rodrigo Torres-Sáez, and Raúl Ramos-Pollán. "Assessing the behavior of machine learning methods to predict the activity of antimicrobial peptides." Revista Facultad de Ingeniería 26, no. 44 (December 31, 2016): 167. http://dx.doi.org/10.19053/01211129.v26.n44.2017.5834.

Abstract:
This study demonstrates the importance of obtaining statistically stable results when using machine learning methods to predict the activity of antimicrobial peptides, due to the cost and complexity of the chemical processes involved in cases where datasets are particularly small (less than a few hundred instances). As in other fields with similar problems, this results in large variability in the performance of predictive models, hindering any attempt to transfer them to lab practice. Rather than targeting good peak performance obtained from very particular experimental setups, as reported in related literature, we focused on characterizing the behavior of the machine learning methods, as a preliminary step to obtain reproducible results across experimental setups, and, ultimately, good performance. We propose a methodology that integrates feature learning (autoencoders) and selection methods (genetic algorithms) through the exhaustive use of performance metrics (permutation tests and bootstrapping), which provide stronger statistical evidence to support investment decisions with the lab resources at hand. We show evidence for the usefulness of 1) the extensive use of computational resources, and 2) adopting a wider range of metrics than those reported in the literature to assess method performance. This approach allowed us to guide our quest for finding suitable machine learning methods, and to obtain results comparable to those in the literature with strong statistical stability.
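As an illustration of the kind of resampled evidence the authors rely on instead of single peak scores, here is a minimal bootstrap confidence interval for a classifier's accuracy; the labels and predictions are synthetic placeholders.

```python
# Sketch: bootstrap confidence interval for a classifier's accuracy, the kind
# of resampled evidence used instead of a single peak score. Labels and
# predictions here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(7)
y_true = rng.integers(0, 2, size=120)                          # small dataset, as in the paper's setting
y_pred = np.where(rng.random(120) < 0.8, y_true, 1 - y_true)   # roughly 80%-accurate predictor

def accuracy(t, p):
    return np.mean(t == p)

boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))       # resample with replacement
    boot.append(accuracy(y_true[idx], y_pred[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"accuracy = {accuracy(y_true, y_pred):.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```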
41

Leal, José, and Teresa Costa. "Tuning a semantic relatedness algorithm using a multiscale approach." Computer Science and Information Systems 12, no. 2 (2015): 635–54. http://dx.doi.org/10.2298/csis140905020l.

Abstract:
The research presented in this paper builds on previous work that led to the definition of a family of semantic relatedness algorithms. These algorithms depend on a semantic graph and on a set of weights assigned to each type of arc in the graph. The current objective of this research is to automatically tune the weights for a given graph in order to increase the proximity quality. The quality of a semantic relatedness method is usually measured against a benchmark data set. The results produced by a method are compared with those on the benchmark using a nonparametric measure of statistical dependence, such as Spearman's rank correlation coefficient. The presented methodology works the other way round and uses this correlation coefficient to tune the proximity weights. The tuning process is controlled by a genetic algorithm using Spearman's rank correlation coefficient as the fitness function. This algorithm has its own set of parameters which also need to be tuned. Bootstrapping is a statistical method for generating samples that is used in this methodology to enable a large number of repetitions of the genetic algorithm, exploring the results of alternative parameter settings. This approach raises several technical challenges due to its computational complexity. This paper provides details on the techniques used to speed up the process. The proposed approach was validated with WordNet 2.1 and the WordSim-353 data set. Several ranges of parameter values were tested, and the obtained results are better than those of state-of-the-art methods for computing semantic relatedness using WordNet 2.1, with the advantage of not requiring any domain knowledge of the semantic graph.
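The fitness evaluation described above can be sketched as follows: a candidate weight vector is scored by Spearman's rank correlation between the computed relatedness values and the benchmark scores. The relatedness function and the data below are stand-ins, not the WordNet-based algorithm.

```python
# Sketch: using Spearman's rank correlation against a benchmark as the GA
# fitness for a weighted relatedness measure. The "relatedness" function and
# the benchmark values below are stand-ins, not the WordNet-based algorithm.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_pairs, n_arc_types = 353, 5                      # sized after WordSim-353
features = rng.random((n_pairs, n_arc_types))      # per-pair contribution of each arc type (assumed)
benchmark = rng.random(n_pairs)                    # human similarity judgements (placeholder)

def relatedness(weights):
    return features @ weights                      # weighted combination of arc-type contributions

def fitness(weights):
    rho, _ = spearmanr(relatedness(weights), benchmark)
    return rho                                     # the GA maximizes this

print("fitness of uniform weights:", fitness(np.ones(n_arc_types) / n_arc_types))
```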
42

Zhang, Zuoquan, Fan Lang, and Qin Zhao. "Research of Financial Early-Warning Model on Evolutionary Support Vector Machines Based on Genetic Algorithms." Discrete Dynamics in Nature and Society 2009 (2009): 1–8. http://dx.doi.org/10.1155/2009/830572.

Abstract:
The support vector machine (SVM) is a relatively new learning machine based on statistical learning theory that has attracted considerable attention from researchers. Recently, SVMs have been applied to the problem of financial early-warning prediction (Rose, 1999). The SVM-based method has been compared with other statistical methods and has shown good results. However, the parameters of the kernel function, which strongly influence the result and performance of the SVM, are difficult to determine. Based on genetic algorithms, this paper proposes a new, systematic method to automatically select the SVM parameters for a financial early-warning model. The results demonstrate that the method is a powerful and flexible way to solve the financial early-warning problem.
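A hedged sketch of the fitness evaluation such a GA would use: each chromosome encodes the kernel parameters (C, gamma) and is scored by cross-validated accuracy. The dataset is synthetic and scikit-learn is assumed to be available; this is not the authors' financial model.

```python
# Sketch: the fitness evaluation a GA would use when searching SVM kernel
# parameters (C, gamma), scored here by cross-validated accuracy on a
# synthetic two-class dataset; all settings are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def fitness(chromosome):
    """chromosome = (log10 C, log10 gamma); higher CV accuracy means fitter."""
    C, gamma = 10.0 ** chromosome[0], 10.0 ** chromosome[1]
    return cross_val_score(SVC(C=C, gamma=gamma, kernel="rbf"), X, y, cv=5).mean()

rng = np.random.default_rng(0)
population = rng.uniform(low=[-2, -4], high=[3, 1], size=(20, 2))   # log-scaled search space
scores = np.array([fitness(ind) for ind in population])
print("best of initial population:", population[scores.argmax()], scores.max())
```

A full GA would then evolve this population with selection, crossover, and mutation, much like the OneMax sketch earlier in this list.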
43

Kumar, Adarsh, Saurabh Jain, and Divakar Yadav. "A novel simulation-annealing enabled ranking and scaling statistical simulation constrained optimization algorithm for Internet-of-things (IoTs)." Smart and Sustainable Built Environment 9, no. 4 (March 6, 2020): 675–93. http://dx.doi.org/10.1108/sasbe-06-2019-0073.

Abstract:
Purpose – Simulation-based optimization is a decision-making tool for identifying an optimal design of a system. Here, an optimal design means a smart system with sensing, computing, and control capabilities and improved efficiency. Compared to testing a physical prototype, computer-based simulation provides much cheaper, faster, and less time- and resource-consuming solutions. In this work, a comparative analysis of heuristic simulation optimization methods (genetic algorithms, evolutionary strategies, simulated annealing, tabu search, and simplex search) is performed. Design/methodology/approach – In this work, a comparative analysis of heuristic simulation optimization methods (genetic algorithms, evolutionary strategies, simulated annealing, tabu search, and simplex search) is performed. Further, a novel simulated-annealing-based heuristic approach is proposed for critical infrastructure. Findings – A small-scale network of 50–100 nodes shows that genetic simulation optimization with multi-criteria and multi-dimensional features performs better than the other simulation optimization approaches. Further, a minimum of 3.4 percent and a maximum of 16.2 percent improvement is observed in faster route identification for small-scale Internet-of-Things (IoT) networks with the integrated simulation optimization constraints model, as compared to the traditional method. Originality/value – In this work, simulation optimization techniques are applied to identify optimized quality of service (QoS) parameters for critical infrastructure, which in turn helps in improving the network performance. In order to identify optimized parameters, tabu search and ant-inspired heuristic optimization techniques are applied over the QoS parameters. These optimized values are compared with every monitoring sensor point in the network. This comparative analysis helps in identifying underperforming and outperforming monitoring points. Further, the QoS of these points can be improved by identifying their local optimum values, which in turn increases the performance of the overall network. In continuation, a simulation model of bus transport is taken for analysis; the bus transport system is critical infrastructure for Dehradun. In this work, the feasibility of electric recharging units alongside roads under different traffic conditions is checked using simulation. The simulation study is performed over five bus routes in a small-scale IoT network.
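Of the heuristics compared, simulated annealing is the simplest to sketch; the loop below minimizes a toy objective with a random-neighbour move and geometric cooling. The objective, step size, and cooling schedule are illustrative assumptions, not the paper's routing model.

```python
# Minimal simulated-annealing sketch of the kind compared in the study:
# the objective (sphere function), neighbourhood move, and geometric cooling
# schedule are all illustrative assumptions.
import math
import random

random.seed(0)

def cost(x):                                              # toy objective to minimize
    return sum(v * v for v in x)

x = [random.uniform(-5, 5) for _ in range(10)]
best, best_cost = x[:], cost(x)
T = 1.0
while T > 1e-3:
    cand = [v + random.gauss(0, 0.5) for v in x]          # random neighbour
    delta = cost(cand) - cost(x)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand                                          # accept better or, occasionally, worse moves
        if cost(x) < best_cost:
            best, best_cost = x[:], cost(x)
    T *= 0.995                                            # geometric cooling

print("best cost found:", round(best_cost, 4))
```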
44

Kolesnikov, A. V., and O. P. Fedorov. "SYSTEM OF THE COMPLICATED PRACTICAL PROBLEMS ANALYSIS." Mathematical Modelling and Analysis 7, no. 1 (June 30, 2002): 83–92. http://dx.doi.org/10.3846/13926292.2002.9637181.

Abstract:
An original methodology for the system analysis of inhomogeneous problems is offered, including the stages of reducing a problem to homogeneous parts and selecting appropriate toolkits for them: methods and models. The system applies accumulated knowledge and expert skills to assign each homogeneous problem to one or several alternative classes of modelling methods: analytical methods, statistical methods, artificial neural networks, knowledge-based systems, fuzzy systems, and genetic algorithms. Testing of the knowledge base has shown the sufficiency and consistency of the knowledge for analysing inhomogeneous problems, even under conditions of low and average distortion in the problem descriptions.
45

Korkmaz, Nimet, İsmail Öztürk, Adem Kalinli, and Recai Kiliç. "A Comparative Study on Determining Nonlinear Function Parameters of the Izhikevich Neuron Model." Journal of Circuits, Systems and Computers 27, no. 10 (May 24, 2018): 1850164. http://dx.doi.org/10.1142/s0218126618501645.

Abstract:
In the literature, the parabolic function of the Izhikevich Neuron Model (IzNM) is transformed into piecewise linear (PWL) functions in order to make digital hardware implementations easier. The coefficients of these PWL functions are usually identified with the error-prone classical step-size method. In this paper, we aim to determine the coefficients of the PWL functions in the modified IzNM by using stochastic optimization methods. In order to obtain more accurate results, the Genetic Algorithm and the Artificial Bee Colony algorithm (GA and ABC) are used as alternative estimation methods, and the amplitude and phase errors between the original and the modified IzNMs are quantified with a newly introduced error-minimization algorithm based on the exponential form of complex numbers. For this purpose, the GA and ABC algorithms are run 30 times for each of the 20 behaviors of a neuron. The statistical results of these runs are given in tables in order to compare the performance of the three parameter-search methods and, in particular, to show the effectiveness of the newly introduced error-minimization algorithm. Additionally, two basic dynamical neuronal behaviors of the original and the modified IzNMs are realized on a digital programmable device, namely an FPGA, using the new coefficients identified by the GA and ABC algorithms. Thus, the efficiency of the GA and ABC algorithms for determining the nonlinear function parameters of the modified IzNM is also verified experimentally.
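As a rough illustration of the approximation problem being optimized, the sketch below fits a two-segment piecewise-linear function to the parabolic term 0.04v^2 + 5v + 140 of the Izhikevich model and reports the maximum error for each candidate breakpoint; a GA or ABC search would explore this space instead of the coarse grid used here. The segment count and ranges are assumptions, not the paper's coefficients.

```python
# Sketch: fitting a two-segment piecewise-linear (PWL) approximation to the
# parabolic term 0.04*v^2 + 5*v + 140 of the Izhikevich model and measuring
# the error that the optimization methods above try to minimize. The
# breakpoint grid and segment count are arbitrary choices, not the paper's.
import numpy as np

v = np.linspace(-80.0, 30.0, 500)                 # membrane-potential range (mV)
f = 0.04 * v**2 + 5.0 * v + 140.0                 # original parabolic function

def pwl_max_error(breakpoint):
    """Least-squares line fit on each side of the breakpoint; returns max |error|."""
    approx = np.empty_like(f)
    for mask in (v <= breakpoint, v > breakpoint):
        A = np.vstack([v[mask], np.ones(mask.sum())]).T
        slope, intercept = np.linalg.lstsq(A, f[mask], rcond=None)[0]
        approx[mask] = slope * v[mask] + intercept
    return np.max(np.abs(approx - f))

# Coarse search over candidate breakpoints (a GA/ABC would search this space instead).
bps = np.linspace(-70, 20, 91)
errors = [pwl_max_error(b) for b in bps]
print("best breakpoint:", bps[int(np.argmin(errors))], "max error:", round(min(errors), 2))
```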
46

Kumari, Madhulata, Neeraj Tiwari, and Naidu Subbarao. "A genetic programming-based approach to identify potential inhibitors of serine protease of Mycobacterium tuberculosis." Future Medicinal Chemistry 12, no. 2 (January 2020): 147–59. http://dx.doi.org/10.4155/fmc-2018-0560.

Abstract:
Aim: We applied genetic programming approaches to understand the impact of descriptors on the inhibitory effects of serine protease inhibitors of Mycobacterium tuberculosis (Mtb) and to discover new inhibitors as drug candidates. Materials & methods: The descriptors of the experimental dataset of serine protease inhibitors of Mtb were optimized by a genetic algorithm (GA) together with correlation-based feature selection (CFS) in order to develop predictive models using machine-learning algorithms. The best model was deployed on a library of 918 phytochemical compounds to screen for potential serine protease inhibitors of Mtb. The quality and performance of the predictive models were evaluated using various standard statistical parameters. Result: The best random forest model with CFS-GA screened 126 anti-tubercular agents out of the 918 phytochemical compounds. In addition, the genetic programming symbolic classification method optimized the descriptors and developed an equation for the mathematical model. Conclusion: The use of CFS-GA with random forest enhanced classification accuracy and predicted new serine protease inhibitors of Mtb, which can be used for better drug development against tuberculosis.
47

Wang, Chun, Zhicheng Ji, and Yan Wang. "Many-objective flexible job shop scheduling using NSGA-III combined with multi-attribute decision making." Modern Physics Letters B 32, no. 34n36 (December 30, 2018): 1840110. http://dx.doi.org/10.1142/s0217984918401103.

Abstract:
This paper considers the many-objective flexible job shop scheduling problem (MaOFJSP), in which the number of optimization objectives is larger than three. An integrated multi-objective optimization method is proposed that covers both optimization and decision making. The non-dominated sorting genetic algorithm III (NSGA-III) is utilized to find a trade-off solution set by simultaneously optimizing six objectives: makespan, workload balance, mean earliness and tardiness, cost, quality, and energy consumption. Then, an integrated multi-attribute decision-making method is introduced to select the one solution that best fits the decision maker's preference. NSGA-III is compared with three scheduling methods based on multi-objective evolutionary algorithms (MOEAs), and the simulation results show that NSGA-III performs better in generating the Pareto solutions. In addition, the impacts of using different reference points and decoding methods are investigated.
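The core filtering step behind NSGA-III, extracting the non-dominated (Pareto) set from a population of candidate schedules, can be sketched as follows; the objective vectors are random stand-ins for (makespan, workload balance, cost, ...) values, not the scheduling model itself.

```python
# Sketch: identifying the non-dominated (Pareto) front among candidate
# schedules, the core filtering step behind NSGA-III; the objective values
# below are random stand-ins for six minimized objectives per candidate.
import numpy as np

rng = np.random.default_rng(11)
F = rng.random((50, 6))          # 50 candidate schedules x 6 minimized objectives

def non_dominated(F):
    keep = []
    for i, fi in enumerate(F):
        # fi is dominated if some fj is no worse in every objective and better in at least one
        dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return keep

front = non_dominated(F)
print(f"{len(front)} of {len(F)} candidates are Pareto-optimal")
```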
48

Elston, Robert C. "An Accidental Genetic Epidemiologist." Annual Review of Genomics and Human Genetics 21, no. 1 (August 31, 2020): 15–36. http://dx.doi.org/10.1146/annurev-genom-103119-125052.

Abstract:
I briefly describe my early life and how, through a series of serendipitous events, I became a genetic epidemiologist. I discuss how the Elston–Stewart algorithm was discovered and its contribution to segregation, linkage, and association analysis. New linkage findings and paternity testing resulted from having a genotyping lab. The different meanings of interaction—statistical and biological—are clarified. The computer package S.A.G.E. (Statistical Analysis for Genetic Epidemiology), based on extensive method development over two decades, was conceived in 1986, flourished for 20 years, and is now freely available for use and further development. Finally, I describe methods to estimate and test hypotheses about familial correlations, and point out that the liability model often used to estimate disease heritability estimates the heritability of that liability, rather than of the disease itself, and so can be highly dependent on the assumed distribution of that liability.
49

Schaffer, Jesse D., Paul J. Roebber, and Clark Evans. "Development and Evaluation of an Evolutionary Programming-Based Tropical Cyclone Intensity Model." Monthly Weather Review 148, no. 5 (April 15, 2020): 1951–70. http://dx.doi.org/10.1175/mwr-d-19-0346.1.

Abstract:
Abstract A statistical–dynamical tropical cyclone (TC) intensity model is developed from a large ensemble of algorithms through evolutionary programming (EP). EP mimics the evolutionary principles of genetic information, reproduction, and mutation to develop a population of algorithms with skillful predictor combinations. From this evolutionary process the 100 most skillful algorithms as determined by root-mean square error on validation data are kept and bias corrected. Bayesian model combination is used to assign weights to a subset of 10 skillful yet diverse algorithms from this list. The resulting algorithm combination produces a forecast superior in skill to that from any individual algorithm. Using these methods, two models are developed to give deterministic and probabilistic forecasts for TC intensity every 12 h out to 120 h: one each for the North Atlantic and eastern and central North Pacific basins. Deterministic performance, as defined by MAE, exceeds that of a “no skill” forecast in the North Atlantic to 96 h and is competitive with the operational Statistical Hurricane Intensity Prediction Scheme and Logistic Growth Equation Model at these times. In the eastern and central North Pacific, deterministic skill is comparable to the blended 5-day climatology and persistence (CLP5) track and decay-SHIFOR (DSHF) intensity forecast (OCD5) only to 24 h, after which time it is generally less skillful than OCD5 and all operational guidance. Probabilistic rapid intensification forecasts at the 25–30 kt (24 h)−1 thresholds, particularly in the Atlantic, are skillful relative to climatology and competitive with operational guidance when subjectively calibrated; however, probabilistic rapid weakening forecasts are not skillful relative to climatology at any threshold in either basin. Case studies are analyzed to give more insight into model behavior and performance.
50

Frost, Volker J., and Karl Molt. "Use of a Genetic Algorithm for Factor Selection in Principal Component Regression." Journal of Near Infrared Spectroscopy 6, A (January 1998): A185—A190. http://dx.doi.org/10.1255/jnirs.192.

Abstract:
The critical point in the development of principal component regression (PCR) calibration programs is the automatic factor selection step. In classical methods this is based on a distinction between primary and secondary factors and on other statistical assumptions and criteria. In contrast, the Genetic Algorithm (GA) used for factor selection in this paper finds the optimal combination of factors without statistical constraints beyond an appropriately chosen fitness function.
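A possible fitness evaluation for such a GA is sketched below: a binary chromosome selects which principal components enter the regression, and the chromosome is scored by the calibration RMSE. The spectra and reference values are synthetic placeholders, and the scoring choice is an assumption rather than the paper's fitness function.

```python
# Sketch of a GA fitness for PCR factor selection: a binary mask picks which
# principal components enter the regression, scored by calibration RMSE.
# Spectra and reference values below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 200))                   # 60 "spectra", 200 wavelengths
y = X[:, :3] @ np.array([0.5, -0.2, 0.1]) + rng.normal(0, 0.05, 60)

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                                   # principal-component scores

def rmse_for_mask(mask):
    """mask: boolean vector over the first 20 factors; lower RMSE means fitter."""
    T = scores[:, :20][:, mask]
    coef, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
    resid = (y - y.mean()) - T @ coef
    return np.sqrt(np.mean(resid ** 2))

mask = rng.random(20) < 0.5                      # one random chromosome
print("factors used:", int(mask.sum()), "RMSE:", round(rmse_for_mask(mask), 4))
```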