Journal articles on the topic 'Algorithms'

Consult the top 50 journal articles for your research on the topic 'Algorithms.'
1

Sun, Yuqin, Songlei Wang, Dongmei Huang, Yuan Sun, Anduo Hu, and Jinzhong Sun. "A multiple hierarchical clustering ensemble algorithm to recognize clusters arbitrarily shaped." Intelligent Data Analysis 26, no. 5 (September 5, 2022): 1211–28. http://dx.doi.org/10.3233/ida-216112.

Abstract:
As a research hotspot in ensemble learning, clustering ensemble obtains robust and highly accurate algorithms by integrating multiple basic clustering algorithms. Most existing clustering ensemble algorithms use linear clustering algorithms as the base clusterings. Because clustering is a typical unsupervised learning technique, it is difficult to properly define the accuracy of its findings, which in turn makes it difficult to significantly enhance the performance of the final algorithm. In this article, the AGglomerative NESting method is used to build the base clusterings, and an integration strategy for combining multiple AGglomerative NESting clusterings is proposed. The algorithm has three main steps: evaluating the credibility of labels, producing multiple base clusterings, and constructing the relations among clusters. The proposed algorithm retains the original advantages of AGglomerative NESting and compensates for its inability to identify arbitrarily shaped clusters. Comparisons with existing clustering algorithms on different datasets establish the proposed algorithm's superiority in terms of clustering performance.
2

Gościniak, Ireneusz, and Krzysztof Gdawiec. "Visual Analysis of Dynamics Behaviour of an Iterative Method Depending on Selected Parameters and Modifications." Entropy 22, no. 7 (July 2, 2020): 734. http://dx.doi.org/10.3390/e22070734.

Abstract:
The literature describes a huge group of algorithms that iteratively find solutions of a given equation, and most of them require tuning. This article presents root-finding algorithms based on the Newton–Raphson method, which iteratively finds the solutions and likewise requires tuning. The proposed modification incorporates the best position of a particle, similarly to particle swarm optimisation algorithms. This approach allows visualising the impact of the algorithm's elements on its complex behaviour. Moreover, instead of the standard Picard iteration, various feedback iteration processes are used in this research. The presented examples and the discussion of the algorithm's operation make it possible to understand the influence of the proposed modifications on the algorithm's behaviour, which can be helpful when applying them in other algorithms. The obtained images also have potential artistic applications.
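As an illustrative aside (our own sketch, not code from the cited paper), the classic Newton–Raphson iteration that the authors modify and visualise can be written in a few lines of Python; the starting point, tolerance, and iteration cap below are arbitrary choices:

```python
def newton_raphson(f, df, z0, tol=1e-12, max_iter=100):
    """Classic Newton-Raphson (Picard) iteration: z <- z - f(z)/df(z)."""
    z = z0
    for _ in range(max_iter):
        step = f(z) / df(z)
        z = z - step
        if abs(step) < tol:  # stop once the update is negligible
            break
    return z

# Find a complex cube root of unity for f(z) = z**3 - 1.
root = newton_raphson(lambda z: z**3 - 1, lambda z: 3 * z**2, complex(0.5, 0.5))
```

Which of the three roots the iteration lands on depends on the starting point, which is exactly the basin structure the paper's visualisations explore.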
3

Gangavane, Ms H. N. "A Comparison of ABK-Means Algorithm with Traditional Algorithms." International Journal of Trend in Scientific Research and Development 1, no. 4 (June 30, 2017): 614–21. http://dx.doi.org/10.31142/ijtsrd2197.

4

Nico, Nico, Novrido Charibaldi, and Yuli Fauziah. "Comparison of Memetic Algorithm and Genetic Algorithm on Nurse Picket Scheduling at Public Health Center." International Journal of Artificial Intelligence & Robotics (IJAIR) 4, no. 1 (May 30, 2022): 9–23. http://dx.doi.org/10.25139/ijair.v4i1.4323.

Abstract:
A picket (duty) schedule is one of the most significant aspects of the working world, and it is difficult for the scheduler to produce one, since picket schedules frequently suffer from issues such as schedule clashes, requests for leave, and schedule trading. Evolutionary algorithms have been successful in solving a wide variety of scheduling problems, but they are very susceptible to data convergence, and previous work has not examined at what point the data begin to converge when building schedules with them. Among evolutionary algorithms, the best candidates for scheduling are genetic algorithms and memetic algorithms, yet neither is guaranteed to give the optimum outcome in every situation. It is therefore necessary to compare the genetic algorithm and the memetic algorithm to determine which one is suitable for the nurse picket schedule. The results of this study show that the memetic algorithm is better than the genetic algorithm at making picket schedules. The memetic algorithm with a population of 10,000 and 5,000 generations does not produce convergent data, while the genetic algorithm's data begin to converge at a population of 5,000 and 50 generations. For accuracy, the memetic algorithm violates only 24 of the 124 existing constraints (80.645%), whereas the genetic algorithm violates 27 of the 124 constraints (78.225%). The average runtime needed to generate optimal data is 20.935592 seconds for the memetic algorithm; the genetic algorithm takes longer, at 53.951508 seconds.
5

Omar, Hoger K., Kamal H. Jihad, and Shalau F. Hussein. "Comparative analysis of the essential CPU scheduling algorithms." Bulletin of Electrical Engineering and Informatics 10, no. 5 (October 1, 2021): 2742–50. http://dx.doi.org/10.11591/eei.v10i5.2812.

Abstract:
CPU scheduling algorithms have a significant function in multiprogramming operating systems. When CPU scheduling is effective, a high rate of computation can be carried out correctly and the system remains stable. Moreover, CPU scheduling algorithms are the main operating-system service for achieving maximum utilization of the CPU. This paper compares the characteristics of CPU scheduling algorithms to determine which algorithm is best for attaining higher CPU utilization. The comparison covers ten scheduling algorithms and presents different parameters, such as performance, algorithm complexity, algorithm problems, average waiting times, advantages and disadvantages, allocation method, etc. The main purpose of the article is to analyze the CPU scheduler in a way that suits the scheduling goals, and to show which algorithm type is most suitable for a particular situation by presenting its full properties.
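To make one of the compared parameters concrete, here is a small self-written sketch (not from the paper) of how average waiting time differs between first-come-first-served (FCFS) and shortest-job-first (SJF) ordering for non-preemptive jobs that all arrive at time zero; the burst times are the classic textbook example:

```python
def avg_waiting_time(burst_times):
    """Average waiting time when jobs run in the given order (non-preemptive)."""
    waiting, elapsed = 0, 0
    for burst in burst_times:
        waiting += elapsed   # this job waited for everything scheduled before it
        elapsed += burst
    return waiting / len(burst_times)

bursts = [24, 3, 3]                      # all jobs arrive at t = 0
fcfs = avg_waiting_time(bursts)          # first-come, first-served order
sjf = avg_waiting_time(sorted(bursts))   # shortest-job-first order
```

Running the long job first makes every short job wait for it, which is why SJF minimises average waiting time for this workload.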
6

Hairol Anuar, Siti Haryanti, Zuraida Abal Abas, Norhazwani Mohd Yunos, Nurul Hafizah Mohd Zaki, Nurul Akmal Hashim, Mohd Fariddudin Mokhtar, Siti Azirah Asmai, Zaheera Zainal Abidin, and Ahmad Fadzli Nizam. "Comparison between Louvain and Leiden Algorithm for Network Structure: A Review." Journal of Physics: Conference Series 2129, no. 1 (December 1, 2021): 012028. http://dx.doi.org/10.1088/1742-6596/2129/1/012028.

Abstract:
Real-world networks can be large and complex, and community detection algorithms are the key to understanding such network structures. Many community detection algorithms exist, each with its own advantages and disadvantages for different types and scales of complex networks. Experiments have shown that the Louvain algorithm can yield badly connected communities, and even disconnected ones, when it is run iteratively. In this paper, two algorithms based on the agglomerative method, Louvain and Leiden, are introduced and reviewed, and their concepts and benefits are summarized in detail through comparison. The Leiden algorithm proves to be the more recent and faster of the two. In the future, this comparison can help in choosing the best community detection algorithm, even though these algorithms use different definitions of community.
7

Belazi, Akram, Héctor Migallón, Daniel Gónzalez-Sánchez, Jorge Gónzalez-García, Antonio Jimeno-Morenilla, and José-Luis Sánchez-Romero. "Enhanced Parallel Sine Cosine Algorithm for Constrained and Unconstrained Optimization." Mathematics 10, no. 7 (April 3, 2022): 1166. http://dx.doi.org/10.3390/math10071166.

Abstract:
The sine cosine algorithm's (SCA's) main idea is a sine- and cosine-based vacillation outwards from, or towards, the best solution. The first main contribution of this paper is an enhanced version of the SCA called the ESCA algorithm. Experimental tests demonstrate the supremacy of the proposed algorithm over a set of state-of-the-art algorithms in terms of solution accuracy and convergence speed. When such algorithms are transferred to the business sector, they must meet time requirements that depend on the industrial process; if these temporal requirements are not met, an efficient solution is to speed the algorithms up by designing parallel versions. The second major contribution of this work is therefore the design of several parallel algorithms for efficiently exploiting current multicore processor architectures. First, one-level synchronous and asynchronous parallel ESCA algorithms are designed. They have two merits: they retain the proposed algorithm's behaviour, and they provide excellent parallel performance by combining coarse-grained with fine-grained parallelism. The parallel scalability of the proposed algorithms is further improved by employing a two-level parallel strategy. The experimental results show that the one-level parallel ESCA algorithms reduce the computing time, on average, by 87.4% and 90.8%, respectively, using 12 physical processing cores, and that the two-level parallel algorithms provide further reductions of 91.4%, 93.1%, and 94.5% with 16, 20, and 24 processing cores, including physical and logical cores. The comparison analysis is carried out on 30 unconstrained benchmark functions and three challenging engineering design problems. The outcomes show that the proposed ESCA algorithm behaves outstandingly well in terms of exploration and exploitation, local-optima avoidance, and convergence speed toward the optimum. The overall performance of the proposed algorithm is statistically validated using three non-parametric statistical tests, namely the Friedman, Friedman aligned, and Quade tests.
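For readers unfamiliar with the method, the following is a minimal sequential sketch of the standard sine cosine update rule, written by us from the textbook SCA description; it is not the authors' ESCA or their parallel code, and the population size, iteration budget, and test function are arbitrary choices:

```python
import math
import random

def sca_minimize(f, dim, bounds, pop=20, iters=200, a=2.0, seed=1):
    """Minimal sine cosine algorithm: agents oscillate around the best-so-far."""
    random.seed(seed)
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=f)[:]
    for t in range(iters):
        r1 = a - t * a / iters            # amplitude shrinks: exploration -> exploitation
        for x in X:
            for j in range(dim):
                r2 = random.uniform(0, 2 * math.pi)
                r3 = random.uniform(0, 2)
                osc = math.sin(r2) if random.random() < 0.5 else math.cos(r2)
                x[j] += r1 * osc * abs(r3 * best[j] - x[j])
                x[j] = min(hi, max(lo, x[j]))   # clamp to the search box
            if f(x) < f(best):
                best = x[:]
    return best

sphere = lambda v: sum(c * c for c in v)
best = sca_minimize(sphere, dim=3, bounds=(-5.0, 5.0))
```

The linearly decaying `r1` is the knob the paper's analysis of exploration versus exploitation refers to.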
8

Agapie, Alexandru. "Theoretical Analysis of Mutation-Adaptive Evolutionary Algorithms." Evolutionary Computation 9, no. 2 (June 2001): 127–46. http://dx.doi.org/10.1162/106365601750190370.

Abstract:
Adaptive evolutionary algorithms require more sophisticated modeling than their static-parameter counterparts. Taking into account only the current population is not enough when implementing parameter-adaptation rules based on success rates (evolution strategies) or on premature convergence (genetic algorithms). Instead of Markov chains, we use random systems with complete connections, which account for the complete, rather than only the recent, history of the algorithm's evolution. Under the new paradigm, we analyze the convergence of several mutation-adaptive algorithms: a binary genetic algorithm, the 1/5 success rule evolution strategy, and a continuous as well as a dynamic (1+1) evolutionary algorithm.
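As a concrete reference point (our own sketch, not the paper's model), a (1+1) evolution strategy with the 1/5 success rule mentioned in the abstract can be implemented as follows; the 20-trial adaptation window and the expansion/shrink factors are conventional choices, not values from the paper:

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=500, seed=0):
    """(1+1)-ES with the 1/5 success rule: adapt step size from the success rate."""
    random.seed(seed)
    x, fx, successes = list(x0), f(x0), 0
    for t in range(1, iters + 1):
        y = [xi + random.gauss(0, sigma) for xi in x]   # mutate the single parent
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:                                  # every 20 trials, adapt sigma
            rate = successes / 20
            sigma *= 1.22 if rate > 0.2 else 0.82        # grow if >1/5 succeed, else shrink
            successes = 0
    return x, fx

x, fx = one_plus_one_es(lambda v: sum(c * c for c in v), [3.0, -2.0])
```

The rule's dependence on a window of past outcomes, not just the current state, is exactly why the paper models such algorithms with complete-connection systems rather than Markov chains.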
9

Luan, Yuxuan, Junjiang He, Jingmin Yang, Xiaolong Lan, and Geying Yang. "Uniformity-Comprehensive Multiobjective Optimization Evolutionary Algorithm Based on Machine Learning." International Journal of Intelligent Systems 2023 (November 10, 2023): 1–21. http://dx.doi.org/10.1155/2023/1666735.

Abstract:
When solving real-world optimization problems, ensuring the uniformity of the Pareto front is an essential strategy in multiobjective optimization problems (MOPs). However, this remains a common challenge for many existing multiobjective optimization algorithms, owing to the skewed distribution of solutions and biases towards specific objective functions. This paper proposes a uniformity-comprehensive multiobjective optimization evolutionary algorithm based on machine learning to address this limitation. Our algorithm utilizes uniform initialization and a self-organizing map (SOM) to enhance population diversity and uniformity. We track the IGD value and use K-means and CNN refinement together with crossover and mutation during the evolutionary stages. The superiority of our algorithm in uniformity and objective-function balance was verified through comparative analysis with 13 other algorithms: eight traditional multiobjective optimization algorithms, three machine-learning-based enhanced multiobjective optimization algorithms, and two algorithms with objective initialization improvements. These comprehensive experiments demonstrate that our algorithm outperforms the existing algorithms in these areas.
10

SHAH, I. "DIRECT ALGORITHMS FOR FINDING MINIMAL UNSATISFIABLE SUBSETS IN OVER-CONSTRAINED CSPs." International Journal on Artificial Intelligence Tools 20, no. 01 (February 2011): 53–91. http://dx.doi.org/10.1142/s0218213011000036.

Abstract:
In many situations, an explanation of the reasons behind inconsistency in an overconstrained CSP is required. This explanation can be given in terms of minimal unsatisfiable subsets (MUSes) of constraints. This paper presents algorithms for finding MUSes of constraints in overconstrained CSPs with finite domains and binary constraints. The approach followed is to generate subsets in the subset space, test them for consistency, and record the inconsistent subsets found. We present three algorithms as variations of this basic approach; each generates subsets of the subset space in a different order and curtails search by employing various pruning mechanisms. The proposed algorithms are anytime algorithms: a time limit can be set on an algorithm's search, and the algorithm can be made to find a subset of the MUSes. Experimental evaluation demonstrates that the proposed algorithms perform two to three orders of magnitude better than the existing indirect algorithms, and that they are able to find MUSes in large CSP benchmarks.
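The basic approach the abstract describes, generating subsets, testing them for consistency, and recording the inconsistent ones, can be illustrated on a toy single-variable CSP. This is our own sketch of the idea, not the paper's algorithms (which handle binary CSPs and use stronger pruning); enumerating smallest-first means any recorded unsatisfiable subset is automatically minimal:

```python
from itertools import combinations

def find_muses(constraints, domain):
    """Enumerate subsets smallest-first and record minimal unsatisfiable subsets.

    constraints: dict name -> predicate over a value from `domain`.
    A subset is unsatisfiable if no domain value satisfies all its constraints.
    """
    def unsat(names):
        return not any(all(constraints[n](v) for n in names) for v in domain)

    muses = []
    names = sorted(constraints)
    for k in range(1, len(names) + 1):
        for subset in combinations(names, k):
            if any(set(m) <= set(subset) for m in muses):
                continue                     # contains a smaller MUS: not minimal
            if unsat(subset):
                muses.append(subset)
    return muses

# c1 and c2 conflict; c3 is satisfiable with either.
cons = {"c1": lambda x: x == 0, "c2": lambda x: x == 1, "c3": lambda x: x < 2}
muses = find_muses(cons, domain=range(2))
```
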
11

Wadhai, Prajwal Ashok. "Algolizer Using ReactJS." International Journal of Scientific Research in Engineering and Management 8, no. 4 (April 17, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem30733.

Abstract:
The Algorithm Visualizer Project is an interactive and educational tool designed to illustrate various algorithms' functionality and efficiency through visual representations. Algorithms are fundamental to computer science, but their abstract nature can be challenging to comprehend. This project aims to bridge that gap by providing a user-friendly interface that visually demonstrates algorithms in action. The visualizer offers a platform where users can select from a range of algorithms, such as sorting (e.g., Bubble Sort, Merge Sort). Each algorithm is showcased step-by-step, allowing users to observe how data structures evolve and how the algorithms operate on them. Through dynamic visualizations, users can track the algorithm's progress, see how data is manipulated, and understand the underlying logic behind each step. Additionally, the tool provides options for adjusting parameters, such as input size or speed, enabling users to experiment with different scenarios and grasp the impact on algorithm performance. This project not only serves as a learning resource for students studying computer science and programming but also appeals to enthusiasts seeking a deeper understanding of algorithms. By offering an intuitive and engaging visual representation, the Algorithm Visualizer Project aims to make complex algorithms accessible and comprehensible to a wider audience.
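The step-by-step visualisation idea can be sketched in a few lines: a sorting routine that yields one frame per swap, which a front end (such as the ReactJS one described) could animate. This is our own illustrative Python, not the project's JavaScript code:

```python
def bubble_sort_states(data):
    """Yield a snapshot of the list after every swap, as a visualizer would animate."""
    a = list(data)
    yield tuple(a)                      # initial frame
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                yield tuple(a)          # one frame per swap

frames = list(bubble_sort_states([3, 1, 2]))
```

Driving the animation from a generator like this keeps the algorithm logic separate from rendering speed, which is how a speed-adjustment control can be supported.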
12

Edam, Salaheldin, Doaa Abubakr, Roaa Rahma, and Roaa Yagoub. "Comparative evaluation of localization range-free algorithms in wireless sensor networks." International Journal of Engineering, Science and Technology 16, no. 2 (May 23, 2024): 1–10. http://dx.doi.org/10.4314/ijest.v16i2.1.

Abstract:
Localization in wireless sensor networks is essential not only for determining node locations but also for routing, density management, tracking, and a wide range of other communication network functions. Localization algorithms for wireless sensor networks fall into two main categories: range-based and range-free techniques. Range-free localization has the benefit of requiring less hardware and energy, making it cost-efficient. This paper examines the impact of the number of beacon nodes on range-free localization algorithms; the findings indicate that ADLA has intermediate localization errors and the best node detection. It also addresses the effect of the number of locators on the algorithms' efficiency: as the number of locators increases, the number of detected nodes in Centroid also increases, while ADLA has the second-best node detection but a better average error. The paper further considers the impact of the number of static nodes on range-free localization algorithms, for which ADLA achieves the best node detection. Based on these results, the paper proposes a hybrid algorithm combining the Centroid algorithm and the Active Distributed Localization Algorithm (ADLA); combining the two results in a lower localization error.
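For reference, the Centroid algorithm evaluated in the paper estimates an unknown node's position as the mean of the positions of the beacons it can hear. A minimal sketch, with coordinates and radio range invented for illustration (not taken from the paper's simulations):

```python
def centroid_localize(node_pos, beacons, radio_range):
    """Centroid algorithm: estimate a node's position as the mean of in-range beacons."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    heard = [b for b in beacons if dist(node_pos, b) <= radio_range]
    if not heard:
        return None                     # node hears no beacon: not localizable
    x = sum(b[0] for b in heard) / len(heard)
    y = sum(b[1] for b in heard) / len(heard)
    return (x, y)

beacons = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (9.0, 9.0)]
estimate = centroid_localize((1.0, 1.0), beacons, radio_range=3.0)
```

The `None` branch is what "detected nodes" measures in the comparison: a node out of range of every beacon simply cannot be localized by this scheme.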
13

Michael, James Bret, and Jeffrey Voas. "Algorithms, Algorithms, Algorithms." Computer 53, no. 11 (November 2020): 13–15. http://dx.doi.org/10.1109/mc.2020.3016534.

14

A. Baker, Shatha, and Ahmed S. Nori. "Comparison of the Randomness Analysis of the Modified Rectangle Block Cipher and Original algorithm." NTU Journal of Pure Sciences 1, no. 2 (May 31, 2022): 10–21. http://dx.doi.org/10.56286/ntujps.v1i2.185.

Abstract:
In recent years, different lightweight encryption algorithms have been suggested to protect the security of data transferred across IoT networks. Symmetric-key ciphers, in particular block ciphers, play a significant role in device security, and the RECTANGLE algorithm is among the current lightweight algorithms. RECTANGLE has good encryption efficacy, but it lacks the confusion and diffusion characteristics that a cipher needs. Therefore, to improve the algorithm's confusion and diffusion properties, we expanded RECTANGLE using a 3D cipher and modified its key-scheduling algorithm. To assess whether these two algorithms are random, a randomness analysis was performed using the NIST Statistical Test Suite. Nine distinct data categories were used to create 100 samples for each algorithm; the ciphertext blocks each algorithm produced were concatenated into a binary sequence, and the NIST tests were carried out at a 1% significance level. According to the comparison study, the proposed algorithm's randomness results are 27.48% better than those of the original algorithm.
15

Toleushova, A. T., D. M. Uypalakova, and A. B. Imansakipova. "SIGNATURE RECOGNITION ALGORITHMS. BEZIER ALGORITHM." Bulletin of Shakarim University. Technical Sciences, no. 3(7) (February 10, 2023): 47–53. http://dx.doi.org/10.53360/2788-7995-2022-1(5)-7.

Abstract:
This article focuses on improving the human-machine interface, which should ensure efficient processing of data and knowledge in simple, fast, and accessible ways. One way to organize such an interface is handwriting input (entering text, drawings, sketches, etc.). Handwritten signatures can be considered handwritten words, but they are closer to drawings, because the signer tries to make his or her signature unique using not only the first and last names but also additional graphic elements. Creating a signature is quite simple, although reproducing its writing speed is impossible. The signature has long been used to certify the authenticity of documents and to verify (authenticate) an individual; signature examination, in particular, is used in forensics. Signature recognition can be carried out by sequentially verifying a signature against each known person, and the recognition methodology includes a verification procedure and the processing of verification results. One modern direction of interface improvement is the development and study of software for signature recognition and visualization. The advent of modern computer input devices has led to a new type of online signature that describes the signature creation process rather than its result: it captures not only the coordinates of points along the stroke but also a sequence of parameter-value vectors for the pressure, direction, and speed of movement, the pen tilt angle, and the signature time.
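Since the title highlights the Bézier algorithm, here is a minimal sketch of de Casteljau's evaluation scheme for Bézier curves, the standard way to trace the smooth strokes such methods fit to signature points. This is our own illustrative code; the article does not supply an implementation:

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear interpolation."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        # Replace each adjacent pair by its interpolation at t.
        pts = [
            ((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

# Quadratic Bezier with a peak at the middle control point.
mid = de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5)
```
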
16

Siva Shankar, S., K. Maithili, K. Madhavi, and Yashwant Singh Bisht. "Evaluating Clustering Algorithms: An Analysis using the EDAS Method." E3S Web of Conferences 430 (2023): 01161. http://dx.doi.org/10.1051/e3sconf/202343001161.

Abstract:
Data clustering is frequently utilized in the early stages of analyzing big data. It enables the examination of massive datasets encompassing diverse types of data, with the aim of revealing undiscovered correlations, concealed patterns, and other valuable information that can be leveraged. The assessment of algorithms designed for handling large-scale data poses a significant research challenge across various fields: evaluating the performance of different algorithms on massive data can yield diverse or even contradictory results, a phenomenon that remains insufficiently explored. This paper seeks to address this issue by proposing a framework for evaluating clustering algorithms that reconciles divergent or conflicting evaluation outcomes. A multicriteria decision-making (MCDM) method is used to assess the clustering algorithms: using the EDAS rating system, the paper examines six alternative clustering algorithms, namely the K-means (KM) algorithm, the expectation maximization (EM) algorithm, filtered clustering (FC), the farthest-first (FF) algorithm, make-density-based clustering (MD), and hierarchical clustering (HC), against six external clustering measures. The EM algorithm has an ASi value of 0.048021 and is ranked 5th; the FF algorithm has an ASi value of 0.753745 and is ranked 2nd; the FC algorithm has an ASi value of 0.055173 and is ranked 4th; the HC algorithm has the highest ASi value, 0.929506, and is ranked 1st; the MD algorithm has an ASi value of 0.011219 and is ranked 6th; lastly, the KM algorithm has an ASi value of 0.055376 and is ranked 3rd. These ASi values assess each algorithm's overall performance, and the rankings offer a comparative analysis: the hierarchical clustering algorithm achieves the highest ASi value and is ranked first, indicating its superior performance compared to the other algorithms.
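For readers unfamiliar with EDAS (Evaluation based on Distance from Average Solution), the appraisal score (ASi) can be sketched as follows for benefit criteria. This is our own toy illustration with invented alternatives and equal weights, not the paper's decision matrix:

```python
def edas_scores(matrix, weights):
    """EDAS for benefit criteria: appraisal score from distances to the average solution."""
    n_crit = len(matrix[0])
    avg = [sum(row[j] for row in matrix) / len(matrix) for j in range(n_crit)]
    sp, sn = [], []
    for row in matrix:
        # Positive / negative distance from the average, per criterion.
        pda = [max(0.0, row[j] - avg[j]) / avg[j] for j in range(n_crit)]
        nda = [max(0.0, avg[j] - row[j]) / avg[j] for j in range(n_crit)]
        sp.append(sum(w * d for w, d in zip(weights, pda)))
        sn.append(sum(w * d for w, d in zip(weights, nda)))
    nsp = [s / max(sp) if max(sp) else 1.0 for s in sp]
    nsn = [1 - s / max(sn) if max(sn) else 1.0 for s in sn]
    return [(p + q) / 2 for p, q in zip(nsp, nsn)]   # ASi in [0, 1]

# Three toy alternatives scored on two equally weighted benefit criteria.
scores = edas_scores([[0.9, 0.8], [0.5, 0.5], [0.1, 0.2]], [0.5, 0.5])
```

An alternative sitting exactly at the average solution gets an ASi of 0.5, which is why ASi values near 1 (like HC's 0.929506 in the paper) signal dominance over the field.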
17

Birnie, Claire, Kit Chambers, Doug Angus, and Anna L. Stork. "On the importance of benchmarking algorithms under realistic noise conditions." Geophysical Journal International 221, no. 1 (January 15, 2020): 504–20. http://dx.doi.org/10.1093/gji/ggaa025.

Abstract:
Testing with synthetic data sets is a vital stage in an algorithm's development for benchmarking the algorithm's performance. A common addition to synthetic data sets is white Gaussian noise (WGN), which is used to mimic the noise that would be present in recorded data sets. The first section of this paper compares the effects of WGN and realistically modelled noise on standard microseismic event detection and imaging algorithms, using synthetic data sets with recorded noise as a benchmark. The data sets with WGN underperform on the trace-by-trace algorithm while overperforming on algorithms utilizing the full array; throughout, the data sets with realistically modelled noise perform near-identically to the recorded-noise data sets. The study concludes by testing an algorithm that simultaneously solves for the source location and moment tensor of a microseismic event. Not only does the algorithm fail to perform at the signal-to-noise ratios indicated by the WGN results, but the results with realistically modelled noise highlight pitfalls of the algorithm not previously identified. The misleading results from the WGN data sets highlight the need to test algorithms under realistic noise conditions, both to understand the conditions under which an algorithm can perform and to minimize the risk of misinterpreting the results.
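The WGN addition discussed above is commonly implemented by scaling the noise power to a target signal-to-noise ratio. A minimal sketch (our own, with an arbitrary sinusoidal test signal, not the paper's microseismic traces):

```python
import math
import random

def add_wgn(signal, snr_db, seed=0):
    """Add white Gaussian noise scaled so the result has the requested SNR in dB."""
    random.seed(seed)
    power = sum(s * s for s in signal) / len(signal)      # mean signal power
    noise_power = power / (10 ** (snr_db / 10))           # SNR = P_signal / P_noise
    sigma = math.sqrt(noise_power)
    return [s + random.gauss(0, sigma) for s in signal]

clean = [math.sin(2 * math.pi * 5 * t / 500) for t in range(500)]
noisy = add_wgn(clean, snr_db=10)
```

The paper's point is that noise generated this way is spatially and temporally uncorrelated, unlike real recorded noise, which is what skews the benchmark results.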
18

Benbouzid, Bilel. "Unfolding Algorithms." Science & Technology Studies 32, no. 4 (December 13, 2019): 119–36. http://dx.doi.org/10.23987/sts.66156.

Abstract:
Predictive policing is a research field whose principal aim is to develop machines for predicting crimes, drawing on machine learning algorithms and the growing availability of a diversity of data. This paper deals with the case of the algorithm of PredPol, the best-known startup in predictive policing. The mathematicians behind it took their inspiration from an algorithm created by a French seismologist, a professor in earth sciences at the University of Savoie. As the source code of the PredPol platform is kept inaccessible as a trade secret, the author contacted the seismologist directly in order to try to understand the predictions of the company’s algorithm. Using the same method of calculation on the same data, the seismologist arrived at a different, more cautious interpretation of the algorithm's capacity to predict crime. How were these predictive analyses formed on the two sides of the Atlantic? How do predictive algorithms come to exist differently in these different contexts? How and why is it that predictive machines can foretell a crime that is yet to be committed in a California laboratory, and yet no longer work in another laboratory in Chambéry? In answering these questions, I found that machine learning researchers have a moral vision of their own activity that can be understood by analyzing the values and material consequences involved in the evaluation tests that are used to create the predictions.
19

Wang, Yi, and Kangshun Li. "A Lévy Flight-Inspired Random Walk Algorithm for Continuous Fitness Landscape Analysis." International Journal of Cognitive Informatics and Natural Intelligence 17, no. 1 (September 21, 2023): 1–18. http://dx.doi.org/10.4018/ijcini.330535.

Abstract:
Heuristic algorithms are effective methods for solving complex optimization problems, but selecting the optimal algorithm for a specific problem is a challenging task. Fitness landscape analysis (FLA) is used to understand an optimization problem's characteristics and to help select the optimal algorithm. A random walk algorithm is an essential technique for FLA in continuous search spaces; however, most currently proposed random walk algorithms suffer from unbalanced sampling points. This article proposes a Lévy flight-based random walk (LRW) algorithm to address this problem: the Lévy flight is used to generate the proposed random walk's variable step size and direction. Tests show that the proposed LRW algorithm achieves better uniformity of sampling points. The authors also analyze the fitness landscapes of the CEC2017 benchmark functions using the proposed LRW algorithm. The experimental results indicate that the proposed LRW algorithm captures the structural features of the landscape better, and with better stability, than several other RW algorithms.
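A Lévy-flight step of the kind the LRW algorithm relies on is commonly drawn with Mantegna's algorithm for a given stable index beta. This is our own sketch of that standard recipe, not the authors' code; beta = 1.5 is a conventional choice:

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Levy-flight step length via Mantegna's algorithm (stable index beta)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)   # scale for the numerator Gaussian
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)       # heavy-tailed: mostly small, rarely huge

random.seed(42)
steps = [levy_step() for _ in range(2000)]
```

The mix of many short steps with occasional very long jumps is what lets a Lévy walk cover a landscape more uniformly than fixed-step random walks.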
20

Xu, Chenyang, and Benjamin Moseley. "Learning-Augmented Algorithms for Online Steiner Tree." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8744–52. http://dx.doi.org/10.1609/aaai.v36i8.20854.

Abstract:
This paper considers the recently popular beyond-worst-case algorithm analysis model which integrates machine-learned predictions with online algorithm design. We consider the online Steiner tree problem in this model for both directed and undirected graphs. Steiner tree is known to have strong lower bounds in the online setting and any algorithm’s worst-case guarantee is far from desirable. This paper considers algorithms that predict which terminal arrives online. The predictions may be incorrect and the algorithms’ performance is parameterized by the number of incorrectly predicted terminals. These guarantees ensure that algorithms break through the online lower bounds with good predictions and the competitive ratio gracefully degrades as the prediction error grows. We then observe that the theory is predictive of what will occur empirically. We show on graphs where terminals are drawn from a distribution, the new online algorithms have strong performance even with modestly correct predictions.
21

Saeed, Ayesha, Ali Husnain, Anam Zahoor, and Mehmood Gondal. "A Comparative Study of Cat Swarm Algorithm for Graph Coloring Problem: Convergence Analysis and Performance Evaluation." International Journal of Innovative Research in Computer Science and Technology 12, no. 4 (July 2024): 1–9. http://dx.doi.org/10.55524/ijircst.2024.12.4.1.

Abstract:
The Graph Coloring Problem (GCP) is a significant optimization challenge that is widely applicable to scheduling problems. Its goal is to determine the minimum number of colors (k) required to color a graph properly. Because of its NP-completeness, exact algorithms become impractical for graphs exceeding 100 vertices, so approximation algorithms have gained prominence for tackling large-scale instances. In this context, the Cat Swarm algorithm, a novel population-based metaheuristic in the domain of swarm intelligence, has demonstrated promising convergence properties compared to other population-based algorithms. This research focuses on designing and implementing the Cat Swarm algorithm to address the GCP. Through a comparative study with established algorithms, our investigation quantifies the minimum value of k the Cat Swarm algorithm requires for each graph instance. The evaluation metrics include the algorithm's running time in seconds, its success rate, and the mean number of iterations or assessments required to reach the goal.
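As a baseline for what any GCP metaheuristic must beat, a simple greedy coloring gives an upper bound on k. This sketch is our own illustration of the problem, not the Cat Swarm algorithm from the paper:

```python
def greedy_coloring(adjacency):
    """Greedy graph coloring: give each vertex the smallest color unused by neighbors.

    Returns a dict vertex -> color; the number of distinct colors used is an
    upper bound on the chromatic number that metaheuristics try to minimize.
    """
    colors = {}
    for v in sorted(adjacency):
        taken = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in taken:
            c += 1
        colors[v] = c
    return colors

# A 5-cycle: its chromatic number is 3, and greedy in this order also uses 3.
cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = greedy_coloring(cycle5)
```

Greedy's result depends on the vertex order, which is precisely the slack that swarm-based searches exploit to shrink k on hard instances.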
APA, Harvard, Vancouver, ISO, and other styles
22

Barbulescu, L., A. E. Howe, L. D. Whitley, and M. Roberts. "Understanding Algorithm Performance on an Oversubscribed Scheduling Application." Journal of Artificial Intelligence Research 27 (December 28, 2006): 577–615. http://dx.doi.org/10.1613/jair.2038.

Full text
Abstract:
The best performing algorithms for a particular oversubscribed scheduling application, Air Force Satellite Control Network (AFSCN) scheduling, appear to have little in common. Yet, through careful experimentation and modeling of performance in real problem instances, we can relate characteristics of the best algorithms to characteristics of the application. In particular, we find that plateaus dominate the search spaces (thus favoring algorithms that make larger changes to solutions) and that some randomization in exploration is critical to good performance (due to the lack of gradient information on the plateaus). Based on our explanations of algorithm performance, we develop a new algorithm that combines characteristics of the best performers; the new algorithm's performance is better than the previous best. We show how hypothesis driven experimentation and search modeling can both explain algorithm performance and motivate the design of a new algorithm.
APA, Harvard, Vancouver, ISO, and other styles
23

Yang, Jianwei, Lingmei Jiang, Shengli Wu, Gongxue Wang, Jian Wang, and Xiaojing Liu. "Development of a Snow Depth Estimation Algorithm over China for the FY-3D/MWRI." Remote Sensing 11, no. 8 (April 24, 2019): 977. http://dx.doi.org/10.3390/rs11080977.

Full text
Abstract:
Launched on 15 November 2017, China’s FengYun-3D (FY-3D) has taken over prime operational weather service from the aging FengYun-3B (FY-3B). Rather than directly implementing an FY-3B operational snow depth retrieval algorithm on FY-3D, we investigated this and four other well-known snow depth algorithms with respect to regional uncertainties in China. Applicable to various passive microwave sensors, these four snow depth algorithms are the Environmental and Ecological Science Data Centre of Western China (WESTDC) algorithm, the Advanced Microwave Scanning Radiometer for Earth Observing System (AMSR-E) algorithm, the Chang algorithm, and the Foster algorithm. Among these algorithms, validation results indicate that FY-3B and WESTDC perform better than the others. However, these two algorithms often result in considerable underestimation for deep snowpack (greater than 20 cm), while the other three persistently overestimate snow depth, probably because of their poor representation of snowpack characteristics in China. To overcome the retrieval errors that occur under deep snowpack conditions without sacrificing performance under relatively thin snowpack conditions, we developed an empirical snow depth retrieval algorithm suite for the FY-3D satellite. Independent evaluation using weather station observations in 2014 and 2015 demonstrates that the FY-3D snow depth algorithm’s root mean square error (RMSE) and bias are 6.6 cm and 0.2 cm, respectively, and it has advantages over other similar algorithms.
APA, Harvard, Vancouver, ISO, and other styles
24

Kang, Tae-Won, Jin-Gu Kang, and Jin-Woo Jung. "A Bidirectional Interpolation Method for Post-Processing in Sampling-Based Robot Path Planning." Sensors 21, no. 21 (November 8, 2021): 7425. http://dx.doi.org/10.3390/s21217425.

Full text
Abstract:
This paper proposes a post-processing method called the bidirectional interpolation method for sampling-based path planning algorithms, such as the rapidly-exploring random tree (RRT). The proposed algorithm applies interpolation to the path generated by the sampling-based path planning algorithm. In this study, the proposed algorithm is applied to the path created by RRT-connect, and six environmental maps were used for verification. It was visually and quantitatively confirmed that, in all maps, both the path length and the piecewise-linear jaggedness were reduced compared to the path generated by RRT-connect. To evaluate the proposed algorithm's performance, the visibility graph, the RRT-connect algorithm, the Triangular-RRT-connect algorithm, and post triangular processing of midpoint interpolation (PTPMI) were compared in various environmental maps through simulation. Based on these experimental results, the proposed algorithm shows similar planning time but shorter path length than previous RRT-like algorithms, as well as RRT-like algorithms with PTPMI having a similar number of samples.
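To illustrate what post-processing of a sampling-based planner's waypoint path does, here is a generic greedy shortcut pass; this is not the paper's bidirectional interpolation method, and the collision check is a placeholder the caller supplies:

```python
import math

def length(path):
    # Total Euclidean length of a piecewise-linear path of 2D points.
    return sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))

def shortcut(path, collision_free):
    # Greedily connect each waypoint to the farthest later waypoint
    # reachable by a straight, collision-free segment, removing the
    # intermediate zigzag vertices produced by the sampling planner.
    out = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not collision_free(path[i], path[j]):
            j -= 1
        out.append(path[j])
        i = j
    return out
```

On an obstacle-free map the zigzag collapses to a straight segment; with obstacles, the supplied `collision_free` predicate limits which shortcuts are allowed.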
APA, Harvard, Vancouver, ISO, and other styles
25

Zhang, Chuang, Yue-Han Pei, Xiao-Xue Wang, Hong-Yu Hou, and Li-Hua Fu. "Symmetric cross-entropy multi-threshold color image segmentation based on improved pelican optimization algorithm." PLOS ONE 18, no. 6 (June 29, 2023): e0287573. http://dx.doi.org/10.1371/journal.pone.0287573.

Full text
Abstract:
To address the problems of low accuracy and slow convergence of traditional multilevel image segmentation methods, a symmetric cross-entropy multilevel thresholding image segmentation method based on a multi-strategy improved pelican optimization algorithm (MSIPOA) is proposed for global optimization and image segmentation tasks. First, Sine chaotic mapping is used to improve the quality and distribution uniformity of the initial population. A spiral search mechanism incorporating a sine cosine optimization algorithm improves the algorithm's search diversity, local pioneering ability, and convergence accuracy. A Lévy flight strategy further improves the algorithm's ability to escape local minima. In this paper, 12 benchmark test functions and 8 other newer swarm intelligence algorithms are compared in terms of convergence speed and convergence accuracy to evaluate the performance of the MSIPOA algorithm. Non-parametric statistical analysis shows that MSIPOA has a clear advantage over the other optimization algorithms. The MSIPOA algorithm is then tested on symmetric cross-entropy multilevel threshold image segmentation, with eight images from BSDS300 selected as the test set. According to different performance metrics and the Friedman test, the MSIPOA algorithm outperforms similar algorithms in global optimization and image segmentation, and its symmetric cross-entropy multilevel thresholding method can be effectively applied to multilevel thresholding image segmentation tasks.
APA, Harvard, Vancouver, ISO, and other styles
26

Ahmad, Yasir, Mohib Ullah, Rafiullah Khan, Bushra Shafi, Atif Khan, Mahdi Zareei, Abdallah Aldosary, and Ehab Mahmoud Mohamed. "SiFSO: Fish Swarm Optimization-Based Technique for Efficient Community Detection in Complex Networks." Complexity 2020 (December 12, 2020): 1–9. http://dx.doi.org/10.1155/2020/6695032.

Full text
Abstract:
Efficient community detection in a complex network is considered an interesting issue due to its vast applications in many prevailing areas such as biology, chemistry, linguistics, social sciences, and others. There are several algorithms available for network community detection. This study proposed the Sigmoid Fish Swarm Optimization (SiFSO) algorithm to discover efficient network communities. Our proposed algorithm uses the sigmoid function for various fish moves in a swarm, including Prey, Follow, Swarm, and Free Move, for better movement and community detection. The proposed SiFSO algorithm’s performance is tested against state-of-the-art particle swarm optimization (PSO) algorithms in Q-modularity and normalized mutual information (NMI). The results showed that the proposed SiFSO algorithm is 0.0014% better in terms of Q-modularity and 0.1187% better in terms of NMI than the other selected algorithms.
APA, Harvard, Vancouver, ISO, and other styles
27

Trofymenko, Olena, Yuliia Prokop, Olena Chepurna, and Mykola Korniichuk. "A PERFORMANCE COMPARISON OF SORTING ALGORITHMS IN DIFFERENT PROGRAMMING LANGUAGES." Cybersecurity: Education, Science, Technique 1, no. 21 (2023): 86–98. http://dx.doi.org/10.28925/2663-4023.2023.21.8698.

Full text
Abstract:
Sorting, as one of the basic algorithms, has a wide range of applications in software development. As the amount of processed data grows, the need for fast and efficient sorting increases significantly. There are many sorting algorithms and their extensions, but no single one is the best or most versatile: each has specifics that determine the scope of its effective use. Therefore, the problem of choosing the optimal algorithm for particular conditions is relevant. This choice is often non-trivial, and an unsuccessful choice of algorithm can cause data-processing performance problems. To determine which algorithm will be best in a particular situation, one must analyse all the factors that affect how algorithms operate: the size and structure of the data set, the range of element values, the form of access (random or sequential), the degree of order, the amount of additional memory required to execute the algorithm, etc. In addition, the same algorithm performs differently in different programming languages. The study analyses the advantages and disadvantages of nine popular sorting algorithms (Bubble, Insertion, Selection, Shell, Merge, Quick, Counting, Radix, and Heap), along with their specifics and the limitations on their possible use. The performance of these algorithms implemented in four popular programming languages (C++, C#, Java, and JavaScript) is tested. We experimentally discovered that the performance of sorting algorithms differs depending on the programming language. The applied value of the study is that its conclusions and results will allow developers to choose the best algorithm for a particular programming language, depending on the size, range, structure, etc., of the data set to be sorted. This matters when sorting large amounts of data in search engines and in scientific and engineering applications. After all, the sorting algorithm's efficiency significantly affects the system's overall performance.
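A minimal harness of the kind the study describes can be sketched with two of the nine algorithms; absolute timings will of course differ across the four languages tested, and the O(n²) vs O(n log n) gap shows up even within one runtime:

```python
import random
import time

def insertion_sort(a):
    # O(n^2) in-place insertion sort on a copy of the input.
    a = a[:]
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    # O(n log n) top-down merge sort; returns a new sorted list.
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

def bench(sort_fn, data):
    # Time one run of sort_fn; returns (elapsed seconds, sorted result).
    t0 = time.perf_counter()
    result = sort_fn(data)
    return time.perf_counter() - t0, result
```

Running `bench` over data sets of varying size, order, and value range reproduces the kind of comparison table the study builds per language.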
APA, Harvard, Vancouver, ISO, and other styles
28

Rahman, Chnoor M., and Tarik A. Rashid. "Dragonfly Algorithm and Its Applications in Applied Science Survey." Computational Intelligence and Neuroscience 2019 (December 6, 2019): 1–21. http://dx.doi.org/10.1155/2019/9293617.

Full text
Abstract:
One of the most recently developed heuristic optimization algorithms is the dragonfly algorithm, proposed by Mirjalili. The dragonfly algorithm has shown its ability to optimize different real-world problems, and it has three variants. In this work, an overview of the algorithm and its variants is presented, and the hybridized versions of the algorithm are discussed. Furthermore, results of applications that utilized the dragonfly algorithm in applied science are presented for the following areas: machine learning, image processing, wireless, and networking. The algorithm is then compared with some other metaheuristic algorithms and tested on the CEC-C06 2019 benchmark functions. The results show that the algorithm has great exploration ability and that its convergence rate is better than other algorithms in the literature, such as PSO and GA. In general, this survey discusses the strong and weak points of the algorithm and recommends future work that would help improve its weak points. This study is conducted in the hope of offering beneficial information about the dragonfly algorithm to researchers who want to study it.
APA, Harvard, Vancouver, ISO, and other styles
29

Shultz Colby, Rebekah. "Theorycrafting Algorithms: Teaching Algorithmic Literacy." Literacy in Composition Studies 11, no. 1 (February 8, 2024): 21–41. http://dx.doi.org/10.21623/1.11.1.3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Bhagya Sri, Mukku, Rachita Bhavsar, and Preeti Narooka. "String Matching Algorithms." International Journal Of Engineering And Computer Science 7, no. 03 (March 23, 2018): 23769–72. http://dx.doi.org/10.18535/ijecs/v7i3.19.

Full text
Abstract:
To analyze the content of the documents, the various pattern matching algorithms are used to find all the occurrences of a limited set of patterns within an input text or input document. In order to perform this task, this research work used four existing string matching algorithms; they are Brute Force algorithm, Knuth-Morris-Pratt algorithm (KMP), Boyer Moore algorithm and Rabin Karp algorithm. This work also proposes three new string matching algorithms. They are Enhanced Boyer Moore algorithm, Enhanced Rabin Karp algorithm and Enhanced Knuth-Morris-Pratt algorithm. Findings: For experimentation, this work has used two types of documents, i.e. .txt and .docx. Performance measures used are search time, number of iterations and accuracy. From the experimental results, it is realized that the enhanced KMP algorithm gives better accuracy compared to other string matching algorithms. Application/Improvements: Normally, these algorithms are used in the field of text mining, document classification, content analysis and plagiarism detection. In future, these algorithms have to be enhanced to improve their performance and the various types of documents will be used for experimentation.
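Of the four baseline algorithms, KMP is the one whose enhanced form the study found most accurate. A standard (unenhanced) KMP sketch, using the usual failure-function formulation, looks like this:

```python
def kmp_search(text, pattern):
    # Return the start indices of all (possibly overlapping) occurrences
    # of pattern in text, in O(len(text) + len(pattern)) time.
    if not pattern:
        return []
    # Failure function: length of the longest proper prefix of
    # pattern[:i+1] that is also a suffix of it.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]          # fall back instead of re-scanning text
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]          # continue to find overlapping matches
    return hits
```

Brute force re-scans the text after each mismatch; the failure table is exactly what removes that re-scanning.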
APA, Harvard, Vancouver, ISO, and other styles
31

Hewawasam, Hasitha, Yousef Ibrahim, and Gayan Kahandawa. "A Novel Optimistic Local Path Planner: Agoraphilic Navigation Algorithm in Dynamic Environment." Machines 10, no. 11 (November 16, 2022): 1085. http://dx.doi.org/10.3390/machines10111085.

Full text
Abstract:
This paper presents a novel local path planning algorithm developed based on the new free space attraction (Agoraphilic) concept. The proposed algorithm is capable of navigating robots in unknown static, as well as dynamically cluttered environments. Unlike the other navigation algorithms, the proposed algorithm takes the optimistic approach of the navigation problem. It does not look for problems to avoid, but rather for solutions to follow. This human-like decision-making behaviour distinguishes the new algorithm from all the other navigation algorithms. Furthermore, the new algorithm utilises newly developed tracking and prediction algorithms, to safely navigate mobile robots. This is further supported by a fuzzy logic controller designed to efficiently account for the inherent high uncertainties in the robot’s operational environment at a reduced computational cost. This paper also includes physical experimental results combined with bench-marking against other recent methods. The reported results verify the algorithm’s successful advantages in navigating robots in both static and dynamic environments.
APA, Harvard, Vancouver, ISO, and other styles
32

Vasyluk, Andrii, and Taras Basyuk. "Synthesis System Of Algebra Algorithms Formulas." Vìsnik Nacìonalʹnogo unìversitetu "Lʹvìvsʹka polìtehnìka". Serìâ Ìnformacìjnì sistemi ta merežì 9 (June 10, 2021): 11–22. http://dx.doi.org/10.23939/sisn2021.09.011.

Full text
Abstract:
In this article the authors develop mathematical support for the process of generating subject unitherms in formulas of the algebra of algorithms. An analysis of how formulas of the algebra of algorithms are constructed revealed that no known system currently implements subsystems that generate subject unitherms from abstract unitherms with subsequent adaptation of the formulas; this motivated an intelligent analysis of formulas of the algebra of algorithms. Synthesizing formulas of the algebra of algorithms, and especially generating subject unitherms from abstract ones, is an extremely complex and laborious process: because all elements of a formula are interconnected, any change in an algorithm's formula affects its structure, and this is the main source of the complexity of the described processes. One aspect of the synthesis of formulas of the algebra of algorithms is the process of generating subject unitherms from abstract unitherms. The signs of operations of the algebra of algorithms are briefly described. Mathematical support for the synthesis of algorithm algebra formulas is developed, which takes into account vertical and horizontal orientation and the type of formula: text unitherm, sequencing operation, elimination operation, parallelization operation and the corresponding cyclic operations of sequencing, elimination and parallelization, as well as geometric parameters. The process of generating subject unitherms from abstract ones is described first, and the eliminations and sequences needed to synthesize the corresponding formulas are determined. Using the properties of the signs of operations of the algebra of algorithms, the synthesized formulas are minimized by the number of unitherms. In accordance with the properties of formulas of the algebra of algorithms, the corresponding unitherms are factored out of the operation signs, yielding a formula of the algorithm for synthesizing algorithm formulas that accounts for the generation of subject unitherms from abstract unitherms.
APA, Harvard, Vancouver, ISO, and other styles
33

Castelo, Noah, Maarten W. Bos, and Donald Lehmann. "Let the Machine Decide: When Consumers Trust or Distrust Algorithms." NIM Marketing Intelligence Review 11, no. 2 (November 1, 2019): 24–29. http://dx.doi.org/10.2478/nimmir-2019-0012.

Full text
Abstract:
Thanks to rapid progress in the field of artificial intelligence, algorithms are able to accomplish an increasingly comprehensive list of tasks, and often they achieve better results than human experts. Nevertheless, many consumers have ambivalent feelings towards algorithms and tend to trust humans more than they trust machines. Especially when tasks are perceived as subjective, consumers often assume that algorithms will be less effective, even though this belief is becoming more and more inaccurate. To encourage algorithm adoption, managers should provide empirical evidence of the algorithm's superior performance relative to humans. Given that consumers trust the cognitive capabilities of algorithms, another way to increase trust is to demonstrate that these capabilities are relevant for the task in question. Further, explaining that algorithms can detect and understand human emotions can enhance adoption of algorithms for subjective tasks.
APA, Harvard, Vancouver, ISO, and other styles
34

BARMAK, OLEXANDER, PAVLO RADIUK, MARYNA MOLCHANOVA, and OLENA SOBKO. "APPROACHES TO PRACTICAL ANALYSIS OF COMPUTING ALGORITHMS." Herald of Khmelnytskyi National University 303, no. 6 (December 2021): 102–5. http://dx.doi.org/10.31891/2307-5732-2021-303-6-102-105.

Full text
Abstract:
The present work proposes a practical approach to determining the main types of algorithms, ranked by efficiency, from the appearance of the software code. Examples of analyzing software code for computational complexity are given in order of decreasing efficiency (in asymptotic notation): O(1), O(log N), O(N), O(N log N), O(N²), O(N³). The research task was to analyze software code and the specific conditions under which an algorithm belongs to a particular class of computational complexity. The aim of analyzing the complexity of algorithms is to find the optimal algorithm for solving a specific problem. The optimality criterion chosen is the complexity of the algorithm, i.e., the number of elementary operations that must be performed to solve the problem using the algorithm. The complexity function is the relation that connects the algorithm's input data with the number of elementary operations. The paper describes the classes of computational complexity that can be revealed by visual analysis of program code. The main types of computational complexity are (listed in descending order of efficiency): constant, logarithmic, linear, linear-logarithmic, quadratic, and cubic. Methods for determining computational complexity are also described. It is established that the main factors for assessing an algorithm's computational complexity through visual analysis of software code are the presence of cycles, especially nested ones, the reversibility of the algorithm, etc. Further research could usefully explore a method of semantic analysis of program code to predict its computational complexity.
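The complexity-function idea can be made concrete by counting elementary operations for loops of different shapes; these illustrative functions match the classes listed above (cubic follows the same pattern with a third nested loop):

```python
def ops_constant(n):
    # O(1): the work done does not depend on n at all.
    return 1

def ops_logarithmic(n):
    # O(log N): the problem size halves on every iteration.
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

def ops_linear(items):
    # O(N): a single pass over the data.
    count = 0
    for _ in items:
        count += 1
    return count

def ops_quadratic(items):
    # O(N^2): two nested loops over the same data, the visual cue
    # the paper highlights for quadratic complexity.
    count = 0
    for _ in items:
        for _ in items:
            count += 1
    return count
```

Counting operations this way is exactly evaluating the complexity function at a particular input size.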
APA, Harvard, Vancouver, ISO, and other styles
35

Abdul Qader, Raghad Abdul Hadi, and Auday H. Saeed AL-Wattar. "A Review of the Blowfish Algorithm Modifications in Terms of Execution Time and Security." Technium: Romanian Journal of Applied Sciences and Technology 4, no. 9 (October 10, 2022): 89–101. http://dx.doi.org/10.47577/technium.v4i9.7452.

Full text
Abstract:
Data transmission has become central to modern digital content delivery, and researchers are increasingly concerned with protecting it. Transmitting digital data over a network exposes multimedia data to various threats, including unauthorized access and network hacking. As a result, data must be protected with encryption methods based on symmetric encryption algorithms, which ensure data security. The Blowfish encryption algorithm is one of the most well-known cryptographic algorithms. Each current algorithm has its own set of advantages and disadvantages, and Blowfish in particular has several drawbacks, including complex computational operations, a fixed S-Box, and pattern issues that can arise when dealing with more complex data, including texts. Many academics have sought to increase the algorithm's efficiency. This publication summarizes the modifications to the Blowfish algorithm proposed by researchers in prior works.
APA, Harvard, Vancouver, ISO, and other styles
36

Yan, Shaoqiang, Weidong Liu, Xinqi Li, Ping Yang, Fengxuan Wu, and Zhe Yan. "Comparative Study and Improvement Analysis of Sparrow Search Algorithm." Wireless Communications and Mobile Computing 2022 (August 31, 2022): 1–15. http://dx.doi.org/10.1155/2022/4882521.

Full text
Abstract:
To address the fact that the emerging sparrow search algorithm (SSA) lacks systematic comparison and analysis against other classical algorithms, this paper first introduces the principle of the sparrow search algorithm and then describes its mathematical model and algorithm description. By comparing SSA with several classical intelligent algorithms, namely particle swarm optimization (PSO), differential evolution (DE), and the grey wolf optimizer (GWO), the sparrow search algorithm's theory and model are systematically compared and analyzed, and the advantages and disadvantages of SSA are summarized. Finally, based on the above research and previous work, the limitations of SSA and of current improved SSA variants are analyzed, providing ideas for further improvement of the algorithm.
APA, Harvard, Vancouver, ISO, and other styles
37

Priyadarshini, Ishaani. "Dendritic Growth Optimization: A Novel Nature-Inspired Algorithm for Real-World Optimization Problems." Biomimetics 9, no. 3 (February 21, 2024): 130. http://dx.doi.org/10.3390/biomimetics9030130.

Full text
Abstract:
In numerous scientific disciplines and practical applications, addressing optimization challenges is a common imperative. Nature-inspired optimization algorithms represent a highly valuable and pragmatic approach to tackling these complexities. This paper introduces Dendritic Growth Optimization (DGO), a novel algorithm inspired by natural branching patterns. DGO offers a novel solution for intricate optimization problems and demonstrates its efficiency in exploring diverse solution spaces. The algorithm has been extensively tested with a suite of machine learning algorithms, deep learning algorithms, and metaheuristic algorithms, and the results, both before and after optimization, unequivocally support the proposed algorithm’s feasibility, effectiveness, and generalizability. Through empirical validation using established datasets like diabetes and breast cancer, the algorithm consistently enhances model performance across various domains. Beyond its working and experimental analysis, DGO’s wide-ranging applications in machine learning, logistics, and engineering for solving real-world problems have been highlighted. The study also considers the challenges and practical implications of implementing DGO in multiple scenarios. As optimization remains crucial in research and industry, DGO emerges as a promising avenue for innovation and problem solving.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhang, Jun. "Efficiency of large integer multiplication algorithms: A comparative study of traditional methods and Karatsuba's algorithm." Applied and Computational Engineering 69, no. 1 (June 21, 2024): 30–36. http://dx.doi.org/10.54254/2755-2721/69/20241470.

Full text
Abstract:
Large integer multiplication underpins many computer science algorithms, from cryptography to complex calculations in various scientific fields. As contemporary society increasingly depends on complex computing tasks, the need for good algorithms becomes ever more apparent. This text gives the reader an in-depth understanding of large integer multiplication algorithms by contrasting traditional algorithms with Karatsuba's algorithm. The research methodology involves a comparative analysis using an analysis framework that focuses primarily on execution times, efficiency metrics, and resource utilization. The experimental results confirm the Karatsuba algorithm's clear speed advantage over the conventional approaches. This study extends our grasp of the evolution of algorithms in computational optimization, yielding findings relevant to the many areas where large integer multiplications are involved. In addition, the study highlights the importance of algorithm selection in ensuring computational efficiency and accuracy in large integer multiplication across various applications.
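The divide-and-conquer idea being compared can be sketched directly; this is the textbook three-multiplication recurrence for non-negative integers, not the paper's benchmarked implementation:

```python
def karatsuba(x, y):
    # Multiply non-negative integers by splitting each into high/low halves:
    # x*y = a*B^2 + b*B + c, where a = hi1*hi2, c = lo1*lo2, and
    # b = (hi1+lo1)*(hi2+lo2) - a - c, so only THREE recursive products
    # are needed instead of the four of the schoolbook split.
    if x < 10 or y < 10:               # base case: single-digit operand
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    B = 10 ** m
    hi1, lo1 = divmod(x, B)
    hi2, lo2 = divmod(y, B)
    a = karatsuba(hi1, hi2)
    c = karatsuba(lo1, lo2)
    b = karatsuba(hi1 + lo1, hi2 + lo2) - a - c   # the single extra product
    return a * B * B + b * B + c
```

Saving one of four recursive multiplications per level is what drives the cost from O(n²) down to O(n^1.585).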
APA, Harvard, Vancouver, ISO, and other styles
39

Jubair, Mohammed Ahmed, Salama A. Mostafa, Aida Mustapha, Zirawani Baharum, Mohamad Aizi Salamat, and Aldo Erianda. "A Multi-Agent K-Means Algorithm for Improved Parallel Data Clustering." JOIV : International Journal on Informatics Visualization 6, no. 1-2 (May 31, 2022): 145. http://dx.doi.org/10.30630/joiv.6.1-2.934.

Full text
Abstract:
Due to the rapid increase in data volumes, clustering algorithms now find applications in a variety of fields. However, existing clustering techniques handle large data volumes poorly due to issues of accuracy and high computational cost. This work therefore offers a parallel clustering technique based on a combination of the K-means and Multi-Agent System (MAS) algorithms, known as Multi-K-means (MK-means). The main goal is to keep the dataset intact while boosting the accuracy of the clustering procedure. The cluster centers of each partition are calculated, combined, and then clustered. The statistical significance of the proposed method's performance was confirmed on five datasets used to evaluate the algorithm's efficacy. The proposed MK-means algorithm is compared with the Clustering-based Genetic Algorithm (CGA), the Adaptive Biogeography Clustering-based Genetic Algorithm (ABCGA), and the standard K-means algorithm. The results show that MK-means outperforms the other algorithms because it activates agents separately for the clustering processes, with each agent handling a separate group of features.
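The partition-then-recluster pipeline described (cluster each partition, pool the centers, cluster the pool) can be sketched as follows. This is an illustrative reading only: plain functions stand in for the agents, and `partitioned_kmeans` and its parameters are my own names, not the paper's API:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    # Plain Lloyd's algorithm on a list of coordinate tuples.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        centers = [
            tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers

def partitioned_kmeans(points, k, parts=4):
    # Cluster each partition independently (the "agents"), then
    # cluster the pooled local centres to get the final k centres.
    chunk = max(1, len(points) // parts)
    local = []
    for s in range(0, len(points), chunk):
        part = points[s:s + chunk]
        local.extend(kmeans(part, min(k, len(part)), seed=s))
    return kmeans(local, k, seed=0)
```

The second-stage clustering runs on only `parts * k` points, which is where the parallel scheme saves work on large datasets.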
APA, Harvard, Vancouver, ISO, and other styles
40

Beth, T., and D. Gollman. "Algorithm engineering for public key algorithms." IEEE Journal on Selected Areas in Communications 7, no. 4 (May 1989): 458–66. http://dx.doi.org/10.1109/49.17708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Muzakir, Ari, Hadi Syaputra, and Febriyanti Panjaitan. "A Comparative Analysis of Classification Algorithms for Cyberbullying Crime Detection: An Experimental Study of Twitter Social Media in Indonesia." Scientific Journal of Informatics 9, no. 2 (October 17, 2022): 133–38. http://dx.doi.org/10.15294/sji.v9i2.35149.

Full text
Abstract:
Purpose: This research aims to identify content containing cyberbullying on Twitter. We also conduct a comparative study of several classification algorithms, namely NB, DT, LR, and SVM. The dataset comes from Twitter data that was manually labeled and validated by language experts. This study used 1065 items: 638 labeled non-bullying and 427 labeled bullying. Methods: Word weighting uses the bag-of-words (BoW) method with three word-vector weighting features: unigram, bigram, and trigram. The experiment was conducted in two scenarios: first, testing for the best accuracy value with the three features; second, comparing the algorithms' overall performance across all features used. Result: The experimental results show that, for accuracy weighted by features and algorithms, the SVM classification algorithm outperformed the other algorithms with 76%. For weighting based on average recall, the DT classification algorithm outperformed the other algorithms with an average of 76%. In the overall performance measurement (F-measure), based on accuracy and precision, the SVM classification algorithm again outperformed the other algorithms with an F-measure of 82%. Value: Based on the experiments conducted, the SVM classification algorithm can detect words containing cyberbullying on social media.
APA, Harvard, Vancouver, ISO, and other styles
42

Abbas, Basim K. "Genetic Algorithms for Quadratic Equations." Aug-Sept 2023, no. 35 (August 26, 2023): 36–42. http://dx.doi.org/10.55529/jecnam.35.36.42.

Full text
Abstract:
Employing genetic algorithms is a common technique for finding accurate solutions to quadratic equations. The authors propose using a genetic algorithm to find the complex roots of a quadratic equation. The technique begins by generating a collection of candidate solutions, then assesses the fitness of each solution, selects parents for the next generation, and applies crossover and mutation to the offspring. The process is repeated for a predetermined number of generations. Comparing the evolutionary algorithm's output to the quadratic formula demonstrates its validity. Furthermore, the utility of the evolutionary algorithm has been demonstrated by implementing it in Python code and comparing the outcomes to the conventional solutions.
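The generate, evaluate, select, crossover, mutate loop the abstract describes can be sketched for a real root. The paper targets complex roots; this simplified sketch assumes a real root exists in a bounded interval, and all parameter choices (population size, tournament size, mutation rate) are illustrative, not the paper's:

```python
import random

def ga_real_root(a, b, c, pop_size=60, gens=120, seed=1):
    # Minimise the residual |a*x^2 + b*x + c| with a simple GA:
    # elitism + tournament selection + blend crossover + Gaussian mutation.
    rng = random.Random(seed)
    f = lambda x: abs(a * x * x + b * x + c)       # fitness: lower is better
    pop = [rng.uniform(-50.0, 50.0) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [min(pop, key=f)]                    # elitism: keep the best
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=f)    # tournament parent 1
            p2 = min(rng.sample(pop, 3), key=f)    # tournament parent 2
            t = rng.random()
            child = t * p1 + (1 - t) * p2          # blend crossover
            if rng.random() < 0.2:
                child += rng.gauss(0.0, 1.0)       # Gaussian mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=f)
```

Validity is checked the way the abstract suggests: compare the evolved root against the quadratic formula's output.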
APA, Harvard, Vancouver, ISO, and other styles
43

Wei, Wei, Liang Liu, Zhong Qin Hu, and Yu Jing Zhou. "Rigid Medical Image Registration Based on Genetic Algorithms and Mutual Information." Applied Mechanics and Materials 665 (October 2014): 712–17. http://dx.doi.org/10.4028/www.scientific.net/amm.665.712.

Full text
Abstract:
As a variety of medical imaging equipment is applied in the medical process, medical image registration becomes particularly important in the field of medical image processing, with significant clinical diagnostic and therapeutic value. This article describes the matrix conversion method of the rigid registration model, the basic concepts and principles of the mutual information algorithm, the basic idea and flow of genetic algorithms, and the application of improved genetic algorithms in practice. The rigid registration of two CT brain bone images uses mutual information as the similarity measure, a genetic algorithm as the search strategy, and Matlab as the programming environment. By using a three-point crossover technique to exchange the three parameters of the rigid transformation respectively to produce new individuals, the genetic algorithm's local search ability is enhanced and premature convergence is reduced, building on a deep study of the basic genetic algorithm. The experiments show that the registration has high stability and accuracy.
APA, Harvard, Vancouver, ISO, and other styles
44

Singh, Surya Partap, Amitesh Srivastava, Suryansh Dwivedi, and Mr Anil Kumar Pandey. "AI Based Recruitment Tool." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (May 31, 2023): 2815–19. http://dx.doi.org/10.22214/ijraset.2023.52193.

Full text
Abstract:
In this study, the researchers narrowed their focus to the application of algorithmic decision-making in ranking job applicants. Instead of comparing algorithms to human decision-makers, the study examined participants' perceptions of different types of algorithms. The researchers varied the complexity and transparency of the algorithm to understand how these factors influenced participants' perceptions. The study explored participants' trust in the algorithm's decision-making abilities, fairness of the decisions, and emotional responses to the situation. Unlike previous work, the study emphasized the impact of algorithm design and presentation on perceptions. The findings are important for algorithm designers, especially employers subject to public scrutiny for their hiring practices.
APA, Harvard, Vancouver, ISO, and other styles
45

Wen, Xiaodong, Xiangdong Liu, Cunhui Yu, Haoning Gao, Jing Wang, Yongji Liang, Jiangli Yu, and Yan Bai. "IOOA: A multi-strategy fusion improved Osprey Optimization Algorithm for global optimization." Electronic Research Archive 32, no. 3 (2024): 2033–74. http://dx.doi.org/10.3934/era.2024093.

Full text
Abstract:
With the widespread application of metaheuristic algorithms in engineering and scientific research, finding algorithms with efficient global search capabilities and precise local search performance has become a hot topic in research. The osprey optimization algorithm (OOA) was first proposed in 2023, characterized by its simple structure and strong optimization capability. However, practical tests have revealed that the OOA inevitably encounters common issues faced by metaheuristic algorithms, such as the tendency to fall into local optima and reduced population diversity in the later stages of the algorithm's iterations. To address these issues, a multi-strategy fusion improved osprey optimization algorithm (IOOA) is proposed. First, the characteristics of various chaotic mappings were thoroughly explored, and Circle chaotic mapping was adopted in place of pseudo-random numbers for population initialization, increasing initial population diversity and improving the quality of initial solutions. Second, a dynamically adjustable elite guidance mechanism was proposed to adjust the position updating method according to different stages of the algorithm's iteration, ensuring that the algorithm maintains good global search capability while significantly increasing its convergence speed. Lastly, a dynamic chaotic weight factor was designed and applied in the exploitation stage of the original algorithm to enhance its local search capability and improve its convergence accuracy. To fully verify the effectiveness and practical engineering applicability of the IOOA algorithm, simulation experiments were conducted using 21 benchmark test functions and the CEC-2022 benchmark functions, and the IOOA algorithm was applied to the LSTM power load forecasting problem as well as two engineering design problems. The experimental results show that the IOOA algorithm possesses outstanding global optimization performance in handling complex optimization problems and broad applicability in practical engineering applications.
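The Circle chaotic mapping initialization described above can be sketched as follows. The map constants (0.2 and 0.5/2π) follow the form commonly used in the metaheuristics literature, and the search bounds are illustrative; the IOOA paper's exact parameterization may differ.

```python
import math
import random

def circle_map_population(pop_size, dim, lower, upper):
    """Initialize a population with the Circle chaotic map instead of
    pseudo-random numbers, spreading individuals over [lower, upper]."""
    pop = []
    x = random.random()                      # chaotic seed in (0, 1)
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            # Circle map: x_{k+1} = (x_k + 0.2 - (0.5/2pi) sin(2pi x_k)) mod 1
            x = (x + 0.2 - (0.5 / (2 * math.pi)) * math.sin(2 * math.pi * x)) % 1.0
            individual.append(lower + x * (upper - lower))
        pop.append(individual)
    return pop
```

Because the chaotic sequence is ergodic over (0, 1), successive values cover the search interval more evenly than independent pseudo-random draws, which is the motivation the abstract gives for the substitution.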
APA, Harvard, Vancouver, ISO, and other styles
46

Hornyák, Olivér. "An approach to classify algorithms by complexity." Production Systems and Information Engineering 10, no. 3 (2022): 86–91. http://dx.doi.org/10.32968/psaie.2022.3.8.

Full text
Abstract:
This paper investigates the complexity of computer algorithms, which plays an important role in software design. An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. It is not language specific; any language and symbols can represent instructions. While complexity is usually expressed in terms of time, it is sometimes also analyzed in terms of space, which translates to the algorithm's memory requirements. The paper gives an overview of the most widely used O notation. Experienced programmers can evaluate the time and memory complexity of a block of source code; however, this is not possible when the algorithm is available only in the form of an executable. In this paper a method is proposed to evaluate algorithms without having the source code. The potential drawbacks of the proposal are also considered.
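The idea of classifying an algorithm without its source code can be illustrated by treating it as a black box, timing it at increasing input sizes, and fitting the log-log growth rate. This is a simplified illustration of empirical complexity estimation, not the paper's proposed method, and it shares the usual caveats (timing noise, caching effects, constant factors at small n).

```python
import math
import time

def estimate_growth(func, make_input, sizes):
    """Time a black-box callable at increasing input sizes and report the
    log-log slope: ~1 suggests O(n), ~2 suggests O(n^2), and so on."""
    times = []
    for n in sizes:
        data = make_input(n)
        start = time.perf_counter()
        func(data)
        times.append(time.perf_counter() - start)
    # slope of log(time) vs. log(n) between first and last measurement
    return (math.log(times[-1]) - math.log(times[0])) / (
        math.log(sizes[-1]) - math.log(sizes[0])
    )
```

Running it on a deliberately quadratic routine yields a slope near 2, while a linear scan yields a slope near 1, which is the kind of signal a source-free classifier could use.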
APA, Harvard, Vancouver, ISO, and other styles
47

Pan, Xirui, Zhuyuan Cheng, and Yonggang Zhang. "Two Improved Constraint-Solving Algorithms Based on lmaxRPC3rm." Symmetry 15, no. 12 (December 3, 2023): 2151. http://dx.doi.org/10.3390/sym15122151.

Full text
Abstract:
The Constraint Satisfaction Problem (CSP) is a significant research area in artificial intelligence, and includes a large number of symmetric or asymmetric structures. A backtracking search combined with constraint propagation is considered to be the best CSP-solving algorithm, and the consistency algorithm is the main algorithm used in the process of constraint propagation, which is the key factor in constraint-solving efficiency. Max-restricted path consistency (maxRPC) is a well-known and efficient consistency algorithm, whereas the lmaxRPC3rm algorithm is a classic lightweight algorithm for maxRPC. In this paper, we leverage the properties of symmetry to devise an improved pruning strategy aimed at efficiently diminishing the problem’s search space, thus enhancing the overall solving efficiency. Firstly, we propose the maxRPC3sim algorithm, which abandons the two complex data structures used by lmaxRPC3rm. We can render the algorithm to be more concise and competitive compared to the original algorithm while ensuring that it maintains the same average performance. Secondly, inspired by the RCP3 algorithm, we propose the maxRPC3simR algorithm, which uses the idea of residual support to cut down the redundant operation of the lmaxRPC3rm algorithm. Finally, combining the domain/weighted degree (dom/wdeg) heuristic with the activity-based search (ABS) heuristic, a new variable ordering heuristic, ADW, is proposed. Our heuristic prioritizes the selection of variables with symmetry for pruning, further enhancing the algorithm’s pruning capabilities. Experiments were conducted on both random and structural problems separately. The results indicate that our two algorithms generally outperform other algorithms in terms of performance on both problem classes. Moreover, the new heuristic algorithm demonstrates enhanced robustness across different problem types when compared to various existing algorithms.
APA, Harvard, Vancouver, ISO, and other styles
48

Mansouri, Taha, Ahad Zare Ravasan, and Mohammad Reza Gholamian. "A Novel Hybrid Algorithm Based on K-Means and Evolutionary Computations for Real Time Clustering." International Journal of Data Warehousing and Mining 10, no. 3 (July 2014): 1–14. http://dx.doi.org/10.4018/ijdwm.2014070101.

Full text
Abstract:
One of the most widely used algorithms to solve clustering problems is K-means. Despite the algorithm's timely performance in finding a fairly good solution, it shows some drawbacks, such as its dependence on initial conditions and trapping in local minima. This paper proposes a novel hybrid algorithm, comprised of K-means and a variation operator inspired by mutation in evolutionary algorithms, called the Noisy K-means Algorithm (NKA). Previous research used K-means as one of the genetic operators in genetic algorithms. However, the proposed NKA is an individual-based algorithm that combines the advantages of both K-means and mutation. As a result, the proposed NKA has the advantage of faster convergence time while escaping from local optima. In this algorithm, a probability function is utilized which adaptively tunes the rate of mutation. Furthermore, a special mutation operator is used to guide the search process according to the algorithm's performance. Finally, the proposed algorithm is compared with classical K-means, SOM neural networks, tabu search, and a genetic algorithm on a given set of data. Simulation results statistically demonstrate that NKA outperforms all the others and is well suited to real-time clustering.
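The core idea of combining K-means updates with a mutation-style perturbation can be sketched in one dimension. This is not the paper's exact NKA (which adaptively tunes the mutation rate with a probability function); the linearly decaying noise schedule and sigma below are illustrative stand-ins.

```python
import random

def noisy_kmeans(points, k, iters=50, sigma=0.5):
    """K-means with a mutation-style perturbation: after each centroid
    update, Gaussian noise nudges centroids to help escape local minima.
    The noise decays over the run so the final assignment stabilizes."""
    centroids = random.sample(points, k)
    for t in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                                  # assignment step
            i = min(range(k), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        for j in range(k):                                # update step
            if clusters[j]:
                centroids[j] = sum(clusters[j]) / len(clusters[j])
            # mutation: noise whose scale shrinks as iterations progress
            centroids[j] += random.gauss(0, sigma * (1 - t / iters))
    return sorted(centroids)
```

Early in the run the noise lets a poorly placed centroid jump out of a bad basin; late in the run it vanishes, so the procedure ends with an essentially standard K-means refinement.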
APA, Harvard, Vancouver, ISO, and other styles
49

Kareem, Abbas Abdulrazzaq, Mohamed Jasim Mohamed, and Bashra Kadhim Oleiwi. "Unmanned aerial vehicle path planning in a 3D environment using a hybrid algorithm." Bulletin of Electrical Engineering and Informatics 13, no. 2 (April 1, 2024): 905–15. http://dx.doi.org/10.11591/eei.v13i2.6020.

Full text
Abstract:
Optimal unmanned aerial vehicle (UAV) path planning using bio-inspired algorithms suffers from high computational cost and slow convergence in a complex 3D environment. To solve this problem, a hybrid A*-FPA algorithm is proposed that combines the A* algorithm with a flower pollination algorithm (FPA). The main idea of this algorithm is to balance the high speed of A* exploration with the exploitation ability of the FPA to find an optimal 3D UAV path. First, the algorithm finds a locally optimal path based on a grid map, producing a set of path nodes. The algorithm then selects three discovered nodes to seed the FPA's initial population. Finally, the FPA is applied to obtain the optimal path. The proposed algorithm's performance was compared with the A*, FPA, genetic algorithm (GA), and particle swarm optimization (PSO) algorithms, based on four factors: best path length, mean path length, standard deviation, and worst path length. The simulation results showed that the proposed algorithm outperformed all the previously mentioned algorithms in finding the optimal path in all scenarios, improving the best path length and mean path length by 79.3% and 147.8%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
50

Muñoz, Mario A., and Kate A. Smith-Miles. "Performance Analysis of Continuous Black-Box Optimization Algorithms via Footprints in Instance Space." Evolutionary Computation 25, no. 4 (December 2017): 529–54. http://dx.doi.org/10.1162/evco_a_00194.

Full text
Abstract:
This article presents a method for the objective assessment of an algorithm’s strengths and weaknesses. Instead of examining the performance of only one or more algorithms on a benchmark set, or generating custom problems that maximize the performance difference between two algorithms, our method quantifies both the nature of the test instances and the algorithm performance. Our aim is to gather information about possible phase transitions in performance, that is, the points in which a small change in problem structure produces algorithm failure. The method is based on the accurate estimation and characterization of the algorithm footprints, that is, the regions of instance space in which good or exceptional performance is expected from an algorithm. A footprint can be estimated for each algorithm and for the overall portfolio. Therefore, we select a set of features to generate a common instance space, which we validate by constructing a sufficiently accurate prediction model. We characterize the footprints by their area and density. Our method identifies complementary performance between algorithms, quantifies the common features of hard problems, and locates regions where a phase transition may lie.
APA, Harvard, Vancouver, ISO, and other styles