Journal articles on the topic 'Embedded Systems, Algorithms, Optimization Techniques'

Consult the top 50 journal articles for your research on the topic 'Embedded Systems, Algorithms, Optimization Techniques.'

1

Ajani, Taiwo Samuel, Agbotiname Lucky Imoize, and Aderemi A. Atayero. "An Overview of Machine Learning within Embedded and Mobile Devices–Optimizations and Applications." Sensors 21, no. 13 (June 28, 2021): 4412. http://dx.doi.org/10.3390/s21134412.

Abstract:
Embedded systems technology is undergoing a phase of transformation owing to the novel advancements in computer architecture and the breakthroughs in machine learning applications. The areas of applications of embedded machine learning (EML) include accurate computer vision schemes, reliable speech recognition, innovative healthcare, robotics, and more. However, there exists a critical drawback in the efficient implementation of ML algorithms targeting embedded applications. Machine learning algorithms are generally computationally and memory intensive, making them unsuitable for resource-constrained environments such as embedded and mobile devices. In order to efficiently implement these compute and memory-intensive algorithms within the embedded and mobile computing space, innovative optimization techniques are required at the algorithm and hardware levels. To this end, this survey aims at exploring current research trends within this circumference. First, we present a brief overview of compute intensive machine learning algorithms such as hidden Markov models (HMM), k-nearest neighbors (k-NNs), support vector machines (SVMs), Gaussian mixture models (GMMs), and deep neural networks (DNNs). Furthermore, we consider different optimization techniques currently adopted to squeeze these computational and memory-intensive algorithms within resource-limited embedded and mobile environments. Additionally, we discuss the implementation of these algorithms in microcontroller units, mobile devices, and hardware accelerators. Conclusively, we give a comprehensive overview of key application areas of EML technology, point out key research directions and highlight key take-away lessons for future research exploration in the embedded machine learning domain.
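One of the optimization techniques surveys like this cover is post-training quantization. As a minimal illustrative sketch (not taken from the paper), the following applies symmetric per-tensor int8 quantization to a weight matrix, cutting storage to a quarter of float32 at the cost of a rounding error bounded by half a quantization step:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# int8 storage is 4x smaller than float32; the reconstruction error is
# bounded by half a quantization step (scale / 2).
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
max_error = float(np.max(np.abs(w - dequantize(q, scale))))
```

Real deployments (e.g., on microcontrollers) typically pair this with per-channel scales and quantized activations, which this sketch omits.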
2

Stojanovic, Radovan, Sasa Knezevic, Dejan Karadaglic, and Goran Devedzic. "Optimization and implementation of the wavelet based algorithms for embedded biomedical signal processing." Computer Science and Information Systems 10, no. 1 (2013): 503–23. http://dx.doi.org/10.2298/csis120517013s.

Abstract:
Existing biomedical wavelet-based applications exceed the computational, memory and power resources of low-complexity embedded systems. In order to make such systems capable of using wavelet transforms, optimization and implementation techniques are proposed. The Real Time QRS Detector and 'De-noising' Filter are developed and implemented on a 16-bit fixed-point microcontroller, achieving an 800 Hz sampling rate, occupation of less than 500 bytes of data memory, 99.06% detection accuracy, and 1 mW power consumption. Evaluation of the obtained results shows that the proposed techniques cause negligible degradation in detection accuracy (-0.41%) and SNR (-2.8%), in exchange for 2-4 times faster calculation, 2 times less memory usage and 5% energy saving. The same approach can be applied to other signals where an embedded implementation of wavelets can be beneficial.
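For readers unfamiliar with how wavelet transforms can run on integer-only hardware, here is a minimal sketch (not the authors' implementation, which targets specific wavelets on a 16-bit fixed-point microcontroller): one level of the Haar transform via integer lifting, which uses only adds and shifts and is exactly invertible.

```python
def haar_lifting_int(x):
    """One decomposition level of the integer Haar transform via lifting.

    Uses only integer adds, subtracts and shifts, so it fits a
    fixed-point MCU with no floating-point unit.
    """
    assert len(x) % 2 == 0
    approx, detail = [], []
    for i in range(0, len(x), 2):
        d = x[i + 1] - x[i]          # detail (high-pass) coefficient
        a = x[i] + (d >> 1)          # approximation (low-pass) coefficient
        approx.append(a)
        detail.append(d)
    return approx, detail

def haar_lifting_int_inverse(approx, detail):
    """Exact integer reconstruction of the original samples."""
    x = []
    for a, d in zip(approx, detail):
        s = a - (d >> 1)
        x.extend([s, s + d])
    return x

x = [3, 7, -2, 5, 10, 10, 0, -8]
approx, detail = haar_lifting_int(x)
restored = haar_lifting_int_inverse(approx, detail)
```

The lifting structure is what makes the transform both cheap and losslessly invertible in integer arithmetic; QRS detection would then threshold the detail coefficients, which is not shown here.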
3

Mhadhbi, Imene, Slim Ben Othman, and Slim Ben Saoud. "An Efficient Technique for Hardware/Software Partitioning Process in Codesign." Scientific Programming 2016 (2016): 1–11. http://dx.doi.org/10.1155/2016/6382765.

Abstract:
Codesign methodology deals with the problem of designing complex embedded systems, where automatic hardware/software partitioning is one key issue. The research efforts in this issue are focused on exploring new automatic partitioning methods which consider only binary or extended partitioning problems. The main contribution of this paper is to propose a hybrid FCMPSO partitioning technique, based on Fuzzy C-Means (FCM) and Particle Swarm Optimization (PSO) algorithms suitable for mapping embedded applications for both binary and multicores target architecture. Our FCMPSO optimization technique has been compared using different graphical models with a large number of instances. Performance analysis reveals that FCMPSO outperforms PSO algorithm as well as the Genetic Algorithm (GA), Simulated Annealing (SA), Ant Colony Optimization (ACO), and FCM standard metaheuristic based techniques and also hybrid solutions including PSO then GA, GA then SA, GA then ACO, ACO then SA, FCM then GA, FCM then SA, and finally ACO followed by FCM.
4

Ramadurgam, Srikanth, and Darshika G. Perera. "An Efficient FPGA-Based Hardware Accelerator for Convex Optimization-Based SVM Classifier for Machine Learning on Embedded Platforms." Electronics 10, no. 11 (May 31, 2021): 1323. http://dx.doi.org/10.3390/electronics10111323.

Abstract:
Machine learning is becoming the cornerstone of smart and autonomous systems. Machine learning algorithms can be categorized into supervised learning (classification) and unsupervised learning (clustering). Among many classification algorithms, the Support Vector Machine (SVM) classifier is one of the most commonly used machine learning algorithms. By incorporating convex optimization techniques into the SVM classifier, we can further enhance the accuracy and classification process of the SVM by finding the optimal solution. Many machine learning algorithms, including SVM classification, are compute-intensive and data-intensive, requiring significant processing power. Furthermore, many machine learning algorithms have found their way into portable and embedded devices, which have stringent requirements. In this research work, we introduce a novel, unique, and efficient Field Programmable Gate Array (FPGA)-based hardware accelerator for a convex optimization-based SVM classifier for embedded platforms, considering the constraints associated with these platforms and the requirements of the applications running on these devices. We incorporate suitable mathematical kernels and decomposition methods to systematically solve the convex optimization for machine learning applications with a large volume of data. Our proposed architectures are generic, parameterized, and scalable; hence, without changing internal architectures, our designs can be used to process different datasets with varying sizes, can be executed on different platforms, and can be utilized for various machine learning applications. We also introduce system-level architectures and techniques to facilitate real-time processing. Experiments are performed using two different benchmark datasets to evaluate the feasibility and efficiency of our hardware architecture, in terms of timing, speedup, area, and accuracy. Our embedded hardware design achieves up to 79 times speedup compared to its embedded software counterpart, and can also achieve up to 100% classification accuracy.
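The convex problem such an accelerator solves can be illustrated in software. The sketch below trains a linear SVM by subgradient descent on the regularized hinge loss, on hypothetical toy data; the paper itself solves the dual with decomposition methods on an FPGA, so this is only a minimal software reference point, not the authors' method.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Subgradient descent on the regularized hinge loss
        min_w  lam/2 * ||w||^2 + mean(max(0, 1 - y * (X @ w))),
    the convex problem at the heart of SVM training."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, epochs + 1):
        margins = y * (X @ w)
        active = margins < 1                      # samples violating the margin
        grad = lam * w - (X[active].T @ y[active]) / n
        w -= (lr / t) * grad                      # decaying step size
    return w

# Toy, linearly separable data: the label is the sign of the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0])
w = train_linear_svm(X, y)
accuracy = float(np.mean(np.sign(X @ w) == y))
```

A hardware implementation would fix the arithmetic precision and unroll the dot products; the optimization problem itself is unchanged.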
5

Merone, Mario, Alessandro Graziosi, Valerio Lapadula, Lorenzo Petrosino, Onorato d’Angelis, and Luca Vollero. "A Practical Approach to the Analysis and Optimization of Neural Networks on Embedded Systems." Sensors 22, no. 20 (October 14, 2022): 7807. http://dx.doi.org/10.3390/s22207807.

Abstract:
The exponential increase in internet data poses several challenges to cloud systems and data centers, such as scalability, power overheads, network load, and data security. To overcome these limitations, research is focusing on the development of edge computing systems, i.e., based on a distributed computing model in which data processing occurs as close as possible to where the data are collected. Edge computing, indeed, mitigates the limitations of cloud computing, implementing artificial intelligence algorithms directly on the embedded devices enabling low latency responses without network overhead or high costs, and improving solution scalability. Today, the hardware improvements of the edge devices make them capable of performing, even if with some constraints, complex computations, such as those required by Deep Neural Networks. Nevertheless, to efficiently implement deep learning algorithms on devices with limited computing power, it is necessary to minimize the production time and to quickly identify, deploy, and, if necessary, optimize the best Neural Network solution. This study focuses on developing a universal method to identify and port the best Neural Network on an edge system, valid regardless of the device, Neural Network, and task typology. The method is based on three steps: a trade-off step to obtain the best Neural Network within different solutions under investigation; an optimization step to find the best configurations of parameters under different acceleration techniques; eventually, an explainability step using local interpretable model-agnostic explanations (LIME), which provides a global approach to quantify the goodness of the classifier decision criteria. We evaluated several MobileNets on the Fudan-ShanghaiTech dataset to test the proposed approach.
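The trade-off step described above can be reduced to a simple selection rule. The sketch below, with hypothetical candidate names and measurements, keeps the networks that meet a latency budget on the target edge device and returns the most accurate one:

```python
def pick_best_network(candidates, latency_budget_ms):
    """Trade-off step (sketch): among measured candidates, keep those
    meeting the latency budget on the target edge device, then pick
    the most accurate one."""
    feasible = [c for c in candidates if c["latency_ms"] <= latency_budget_ms]
    if not feasible:
        raise ValueError("no candidate meets the latency budget")
    return max(feasible, key=lambda c: c["accuracy"])

# Hypothetical measurements for three MobileNet variants on one device.
candidates = [
    {"name": "mobilenet_v2_0.35",  "accuracy": 0.89, "latency_ms": 18.0},
    {"name": "mobilenet_v2_1.0",   "accuracy": 0.94, "latency_ms": 55.0},
    {"name": "mobilenet_v3_small", "accuracy": 0.92, "latency_ms": 18.0},
]
best = pick_best_network(candidates, latency_budget_ms=30.0)
```

The paper's method additionally searches acceleration-technique parameters per candidate before this comparison, which the sketch leaves out.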
6

Ahmed, O., S. Areibi, R. Collier, and G. Grewal. "An Impulse-C Hardware Accelerator for Packet Classification Based on Fine/Coarse Grain Optimization." International Journal of Reconfigurable Computing 2013 (2013): 1–23. http://dx.doi.org/10.1155/2013/130765.

Abstract:
Current software-based packet classification algorithms exhibit relatively poor performance, prompting many researchers to concentrate on novel frameworks and architectures that employ both hardware and software components. The Packet Classification with Incremental Update (PCIU) algorithm, Ahmed et al. (2010), is a novel and efficient packet classification algorithm with a unique incremental update capability that demonstrated excellent results and was shown to be scalable for many different tasks and clients. While a pure software implementation can generate powerful results on a server machine, an embedded solution may be more desirable for some applications and clients. Embedded, specialized hardware accelerator based solutions are typically much more efficient in speed, cost, and size than solutions that are implemented on general-purpose processor systems. This paper seeks to explore the design space of translating the PCIU algorithm into hardware by utilizing several optimization techniques, ranging from fine grain to coarse grain and parallel coarse grain approaches. The paper presents a detailed implementation of a hardware accelerator of the PCIU based on an Electronic System Level (ESL) approach. Results obtained indicate that the hardware accelerator achieves on average 27x speedup over a state-of-the-art Xeon processor.
7

Elhossini, Ahmed, Shawki Areibi, and Robert Dony. "Architecture Exploration Based on GA-PSO Optimization, ANN Modeling, and Static Scheduling." VLSI Design 2013 (September 26, 2013): 1–22. http://dx.doi.org/10.1155/2013/624369.

Abstract:
Embedded systems are widely used today in different digital signal processing (DSP) applications that usually require high computation power and tight constraints. The design space to be explored depends on the application domain and the target platform. A tool that helps explore different architectures is required to design such an efficient system. This paper proposes an architecture exploration framework for DSP applications based on Particle Swarm Optimization (PSO) and genetic algorithms (GA) techniques that can handle multiobjective optimization problems with several hybrid forms. A novel approach for performance evaluation of embedded systems is also presented. Several cycle-accurate simulations are performed for commercial embedded processors. These simulation results are used to build an artificial neural network (ANN) model that can predict performance/power of newly generated architectures with an accuracy of 90% compared to cycle-accurate simulations with a very significant time saving. These models are combined with an analytical model and static scheduler to further increase the accuracy of the estimation process. The functionality of the framework is verified based on benchmarks provided by our industrial partner ON Semiconductor to illustrate the ability of the framework to investigate the design space.
8

Guardado, J. L., F. Rivas-Davalos, J. Torres, S. Maximov, and E. Melgoza. "An Encoding Technique for Multiobjective Evolutionary Algorithms Applied to Power Distribution System Reconfiguration." Scientific World Journal 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/506769.

Abstract:
Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD) technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize.
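Both SPEA2 and NSGA-II rank candidate reconfigurations by Pareto dominance. As a minimal illustration (the objective values here are hypothetical, e.g. power losses versus number of switching operations, both minimized), the following extracts the first nondominated front from a set of objective vectors:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_front(points):
    """First nondominated front, the core ranking step in NSGA-II / SPEA2."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (losses, switching-operations) pairs for five configurations.
pts = [(3.0, 1.0), (1.0, 3.0), (2.0, 2.0), (3.0, 3.0), (2.5, 2.5)]
front = nondominated_front(pts)
```

A full evolutionary run repeats this ranking each generation over decoded candidate networks; the EWD encoding determines how a genome maps to a radial configuration, which is not shown here.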
9

Branco, Sérgio, André G. Ferreira, and Jorge Cabral. "Machine Learning in Resource-Scarce Embedded Systems, FPGAs, and End-Devices: A Survey." Electronics 8, no. 11 (November 5, 2019): 1289. http://dx.doi.org/10.3390/electronics8111289.

Abstract:
The number of devices connected to the Internet is increasing, exchanging large amounts of data, and turning the Internet into the 21st-century silk road for data. This road has taken machine learning to new areas of applications. However, machine learning models are no longer seen as complex systems that must run on powerful computers (i.e., the Cloud). As technology, techniques, and algorithms advance, these models are implemented into more computationally constrained devices. The following paper presents a study about the optimizations, algorithms, and platforms used to implement such models into the network's end, where highly resource-scarce microcontroller units (MCUs) are found. The paper aims to provide guidelines, taxonomies, concepts, and future directions to help decentralize the network's intelligence.
10

Matusiak, Mariusz. "Optimization for Software Implementation of Fractional Calculus Numerical Methods in an Embedded System." Entropy 22, no. 5 (May 18, 2020): 566. http://dx.doi.org/10.3390/e22050566.

Abstract:
In this article, some practical software optimization methods for implementations of fractional order backward difference, sum, and differintegral operator based on Grünwald–Letnikov definition are presented. These numerical algorithms are of great interest in the context of the evaluation of fractional-order differential equations in embedded systems, due to their more convenient form compared to Caputo and Riemann–Liouville definitions or Laplace transforms, based on the discrete convolution operation. A well-known difficulty relates to the non-locality of the operator, implying continually increasing numbers of processed samples, which may reach the limits of available memory or lead to exceeding the desired computation time. In the study presented here, several promising software optimization techniques were analyzed and tested in the evaluation of the variable fractional-order backward difference and derivative on two different Arm® Cortex®-M architectures. Reductions in computation times of up to 75% and 87% were achieved compared to the initial implementation, depending on the type of Arm® core.
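The Grünwald–Letnikov backward difference underlying these methods is straightforward to sketch. Its binomial coefficients satisfy the recurrence c_j = c_{j-1} * (1 - (alpha + 1)/j), and the short-memory principle bounds per-step cost by truncating the sum; the code below is an illustrative implementation, not the authors' optimized Arm Cortex-M version.

```python
def gl_coefficients(alpha, n):
    """Coefficients c_j = (-1)^j * C(alpha, j), via the recurrence
    c_j = c_{j-1} * (1 - (alpha + 1) / j), avoiding factorials."""
    c = [1.0]
    for j in range(1, n):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def gl_fractional_derivative(samples, alpha, h, memory=None):
    """Grünwald–Letnikov fractional derivative of order alpha at the
    last sample, with optional short-memory truncation so the per-step
    cost stays bounded on a microcontroller."""
    n = len(samples) if memory is None else min(memory, len(samples))
    c = gl_coefficients(alpha, n)
    acc = sum(c[j] * samples[-1 - j] for j in range(n))
    return acc / h ** alpha

# Sanity check: alpha = 1 reduces to the first-order backward difference.
x = [0.0, 1.0, 4.0, 9.0]
d1 = gl_fractional_derivative(x, 1.0, 1.0)
```

On a real target, precomputing the coefficient table and keeping a fixed-length circular sample buffer (the short-memory window) gives the constant-time update the paper's optimizations build on.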
11

Jude Hemanth, Duraisamy, Subramaniyan Umamaheswari, Daniela Elena Popescu, and Antoanela Naaji. "Application of Genetic Algorithm and Particle Swarm Optimization techniques for improved image steganography systems." Open Physics 14, no. 1 (January 1, 2016): 452–62. http://dx.doi.org/10.1515/phys-2016-0052.

Abstract:
Image steganography is one of the ever-growing computational approaches which has found its application in many fields. The frequency domain techniques are highly preferred for image steganography applications. However, there are significant drawbacks associated with these techniques. In transform-based approaches, the secret data is embedded in a random manner in the transform coefficients of the cover image. These transform coefficients may not be optimal in terms of the stego image quality and embedding capacity. In this work, the application of the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) has been explored in the context of determining the optimal coefficients in these transforms. Frequency domain transforms such as the Bandelet Transform (BT) and Finite Ridgelet Transform (FRIT) are used in combination with GA and PSO to improve the efficiency of the image steganography system.
12

BONZO, DANIEL C., and AUGUSTO Y. HERMOSILLA. "CLUSTERING PANEL DATA VIA PERTURBED ADAPTIVE SIMULATED ANNEALING AND GENETIC ALGORITHMS." Advances in Complex Systems 05, no. 04 (December 2002): 339–60. http://dx.doi.org/10.1142/s0219525902000559.

Abstract:
Non-hierarchical cluster analysis for panel data is known to be hampered by structural preservation, computational complexity and efficiency, and dependency problems. Resolving these issues becomes increasingly important as efficient collection and maintenance of panel data make application more conducive. To address some computational issues and structural preservation, Bonzo [3] presented a stochastic version of Kosmelj and Batagelj's approach [16] to clustering panel data. The method used a probability link function (instead of the usual distance functions) in defining cluster inertias with the aim of preserving the clusters' probabilistic structure. Formulating clustering as an optimization problem, the objective function allows the application of heuristic and stochastic optimization techniques. In this paper, we present a modified heuristic for adaptive simulated annealing (ASA) by perturbing the state vector's sampling distribution, specifically, by perturbing the drift of a diffusion process. Such an approach has been used to hasten convergence towards global optimum at equilibrium for diversely complex, combinatorial, and large-scale systems. The perturbed ASA (PASA) heuristic is then embedded in a genetic algorithm (GA) procedure to hasten and improve the stochastic local search process. The PASA-GA hybrid can be further modified and improved such as by explicit parallel implementation.
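As background for the PASA-GA hybrid, a plain (unperturbed) simulated annealing loop looks like the sketch below; the paper's contributions, perturbing the sampling distribution and embedding the result in a GA, are not reproduced here, and the objective and neighborhood are toy choices.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=1):
    """Plain simulated annealing: accept downhill moves always, uphill
    moves with Boltzmann probability exp(-delta / t), cooling geometrically."""
    rng = random.Random(seed)
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fy < fbest:
                best, fbest = y, fy
        t *= cooling                 # geometric cooling schedule
    return best, fbest

# Toy 1-D objective with many shallow local minima; global minimum at x = 0.
cost = lambda x: x * x + 10.0 * (1.0 - math.cos(x))
neighbor = lambda x, rng: x + rng.uniform(-0.5, 0.5)
best, fbest = simulated_annealing(cost, neighbor, x0=8.0)
```

The clustering formulation in the paper replaces this toy cost with the probability-link cluster inertia and the neighborhood with cluster reassignments.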
13

Kim, Minjeong, and Jimin Koo. "Embedded System Performance Analysis for Implementing a Portable Drowsiness Detection System for Drivers." Technologies 11, no. 1 (December 30, 2022): 8. http://dx.doi.org/10.3390/technologies11010008.

Abstract:
Drowsiness on the road is a widespread problem with fatal consequences; thus, a multitude of systems and techniques have been proposed. Among existing methods, Ghoddoosian et al. utilized temporal blinking patterns to detect early signs of drowsiness, but their algorithm was tested only on a powerful desktop computer, which is not practical to apply in a moving vehicle setting. In this paper, we propose an efficient platform to run Ghoddoosian's algorithm, detail the performance tests we ran to determine this platform, and explain our threshold optimization logic. After considering the Jetson Nano and Beelink (Mini PC), we concluded that the Mini PC is most efficient and practical to run our embedded system in a vehicle. To determine this, we ran communication speed tests and evaluated total processing times for inference operations. Based on our experiments, the average total processing time to run the drowsiness detection model was 94.27 ms for the Jetson Nano and 22.73 ms for the Beelink (Mini PC). Considering the portability and power efficiency of each device, along with the processing time results, the Beelink (Mini PC) was determined to be most suitable. Additionally, we propose a threshold optimization algorithm, which determines whether the driver is drowsy or alert, based on the trade-off between the sensitivity and specificity of the drowsiness detection model. Our study will serve as a crucial next step for drowsiness detection research and its application in vehicles. Through our experiments, we have determined a favorable platform that can run drowsiness detection algorithms in real time and can be used as a foundation to further advance drowsiness detection research. In doing so, we have bridged the gap between an existing embedded system and its actual implementation in vehicles to bring drowsiness technology a step closer to prevalent real-life implementation.
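One standard way to formalize a sensitivity/specificity trade-off is Youden's J statistic; the paper's exact criterion may differ, so the following is only an illustrative threshold search over hypothetical drowsiness scores.

```python
def best_threshold(scores, labels):
    """Pick the decision threshold maximizing Youden's J =
    sensitivity + specificity - 1 (predict positive when score >= t)."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, l in zip(scores, labels) if s >= t and l == 1)
        fn = sum(1 for s, l in zip(scores, labels) if s < t and l == 1)
        tn = sum(1 for s, l in zip(scores, labels) if s < t and l == 0)
        fp = sum(1 for s, l in zip(scores, labels) if s >= t and l == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Hypothetical scores (higher means drowsier) with ground-truth labels.
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1,   1]
t, j = best_threshold(scores, labels)
```

In practice the threshold would be chosen on a validation set, possibly weighting sensitivity more heavily since missing a drowsy driver is the costlier error.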
14

Antolak, Ernest, and Andrzej Pułka. "An Analysis of the Impact of Gating Techniques on the Optimization of the Energy Dissipated in Real-Time Systems." Applied Sciences 12, no. 3 (February 4, 2022): 1630. http://dx.doi.org/10.3390/app12031630.

Abstract:
The paper concerns research on electronics-embedded safety systems. The authors focus on the optimization of the energy consumed by multitasking real-time systems. A new flexible and reconfigurable multi-core architecture based on pipeline processing is proposed. The presented solution uses thread-interleaving mechanisms that allow avoiding hazards and minimizing unpredictability. The proposed architecture is compared with the classical solutions consisting of many processors and based on the scheme using one processor per single task. Energy-efficient task mapping is analyzed and a design methodology, based on minimizing the number of active and utilized resources, is proposed. New techniques for energy optimization are proposed, mainly, clock gating and switching-resources blocking. The authors investigate two main factors of the system: setting the processing frequency, and gating techniques; the latter are used under the assumption that the system meets the requirements of time predictability. The energy consumed by the system is reduced. Theoretical considerations are verified by many experiments of the system’s implementation in an FPGA structure. The set of tasks tested consists of programs that implement Mälardalen WCET benchmark algorithms. The tested scenarios are divided into periodic and non-periodic execution schemes. The obtained results show that it is possible to reduce the dynamic energy consumed by real-time applications’ meeting their other requirements.
15

Gong, Xuan, Zichun Le, Hui Wang, and Yukun Wu. "Study on the Moving Target Tracking Based on Vision DSP." Sensors 20, no. 22 (November 13, 2020): 6494. http://dx.doi.org/10.3390/s20226494.

Abstract:
The embedded visual tracking system has higher requirements for real-time performance and system resources, and this is a challenge for visual tracking systems with available hardware resources. The major focus of this study is evaluating the results of hardware optimization methods. These optimization techniques provide efficient utilization based on limited hardware resources. This paper also uses a pragmatic approach to investigate the real-time performance effect by implementing and optimizing a kernel correlation filter (KCF) tracking algorithm based on a vision digital signal processor (vision DSP). We examine and analyze the impact factors of the tracking system, which include DP (data parallelism), IP (instruction parallelism), and the characteristics of parallel processing of the DSP core and iDMA (integrated direct memory access). Moreover, we utilize a time-sharing strategy to increase the system runtime speed. These research results are also applicable to other machine vision algorithms. In addition, we introduced a scale filter to overcome the disadvantages of KCF for scale transformation. The experimental results demonstrate that the use of system resources and real-time tracking speed also satisfies the expected requirements, and the tracking algorithm with a scale filter can realize almost the same accuracy as the DSST (discriminative scale space tracking) algorithm under a vision DSP environment.
16

Fanariotis, Anastasios, Theofanis Orphanoudakis, Konstantinos Kotrotsios, Vassilis Fotopoulos, George Keramidas, and Panagiotis Karkazis. "Power Efficient Machine Learning Models Deployment on Edge IoT Devices." Sensors 23, no. 3 (February 1, 2023): 1595. http://dx.doi.org/10.3390/s23031595.

Abstract:
Computing has undergone a significant transformation over the past two decades, shifting from a machine-based approach to a human-centric, virtually invisible service known as ubiquitous or pervasive computing. This change has been achieved by incorporating small embedded devices into a larger computational system, connected through networking and referred to as edge devices. When these devices are also connected to the Internet, they are generally named Internet-of-Things (IoT) devices. Developing Machine Learning (ML) algorithms on these types of devices allows them to provide Artificial Intelligence (AI) inference functions such as computer vision, pattern recognition, etc. However, this capability is severely limited by the device's resource scarcity. Embedded devices have limited computational and power resources available while they must maintain a high degree of autonomy. While there are several published studies that address the computational weakness of these small systems (mostly through optimization and compression of neural networks), they often neglect the power consumption and efficiency implications of these techniques. This study presents power efficiency experimental results from the application of well-known and proven optimization methods using a set of well-known ML models. The results are presented in a meaningful manner considering the "real world" functionality of devices and the provided results are compared with the basic "idle" power consumption of each of the selected systems. Two different systems with completely different architectures and capabilities were used, providing us with results that led to interesting conclusions related to the power efficiency of each architecture.
17

Shafiq, Muhammad, Zhaoquan Gu, Shah Nazir, and Rahul Yadav. "Analyzing IoT Attack Feature Association with Threat Actors." Wireless Communications and Mobile Computing 2022 (May 5, 2022): 1–11. http://dx.doi.org/10.1155/2022/7143054.

Abstract:
Internet of Things (IoT) refers to the interconnection via the Internet of computing devices embedded in everyday objects, enabling them to send and receive data. These devices can be controlled remotely, which makes them susceptible to exploitation or even takeover by an attacker. The lack of security features on many IoT devices makes it easy for an attacker to access confidential information, issue commands from a distance, or even use the compromised device as part of a DDoS attack against another network. Feature selection is an important part of problem formulation in machine learning. To overcome the above problems, this paper proposes a novel feature selection framework, RFS, for IoT attack detection using machine learning (ML) techniques. The RFS is based on the concept of effective feature selection and consists of three main stages: feature selection, modeling, and attack detection. For feature selection, three different models are proposed, and based on these approaches, three different algorithms are proposed. A set of 40 features was included in the model, derived from combinatorial optimization and statistical analysis methods. Our experimental study shows that the proposed framework significantly improves over state-of-the-art cyberattack detection techniques for time series data with outliers.
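As a generic illustration of a filter-style feature selection stage (the paper's RFS scoring statistics are not reproduced here), the sketch below ranks features by absolute Pearson correlation with a binary label and keeps the top k, on synthetic data:

```python
import numpy as np

def filter_select(X, y, k):
    """Filter-style feature selection (sketch): score each feature by the
    absolute Pearson correlation with the binary label; keep the top k."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12
    scores = np.abs(Xc.T @ yc) / denom
    return np.argsort(scores)[::-1][:k]

# Synthetic data: feature 0 tracks the label, features 1-3 are pure noise.
rng = np.random.default_rng(42)
y = rng.integers(0, 2, size=300).astype(float)
X = rng.normal(size=(300, 4))
X[:, 0] += 3.0 * y                    # the single informative feature
selected = filter_select(X, y, k=2)
```

Filter methods like this score features independently of any classifier, which keeps them cheap; wrapper and embedded methods, as the detection literature uses, trade that cost for classifier-aware selection.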
18

Kavitha, C., Saravanan M., Thippa Reddy Gadekallu, Nimala K., Balasubramanian Prabhu Kavin, and Wen-Cheng Lai. "Filter-Based Ensemble Feature Selection and Deep Learning Model for Intrusion Detection in Cloud Computing." Electronics 12, no. 3 (January 21, 2023): 556. http://dx.doi.org/10.3390/electronics12030556.

Abstract:
In recent years, rapid advances in communication, the Internet of Things (IoT) and cloud computing have raised complex security questions. Cyberattacks can increase as a result, since present security techniques do not give optimal solutions. In response, the authors of this paper created filter-based ensemble feature selection (FEFS) and employed a deep learning model (DLM) for cloud computing intrusion detection. Initially, the intrusion data were collected from the global datasets of KDDCup-99 and NSL-KDD. The data were utilized for validation of the proposed methodology. The collected database was utilized for feature selection to empower the intrusion prediction. The FEFS is a combination of three feature extraction processes: filter, wrapper and embedded algorithms. Based on the above feature extraction process, the essential features were selected for enabling the training process in the DLM. Finally, the classifier received the chosen features. The DLM is a combination of a recurrent neural network (RNN) and Tasmanian devil optimization (TDO). In the RNN, the optimal weighting parameter is selected with the assistance of the TDO. The proposed technique was implemented in MATLAB, and its effectiveness was assessed using performance metrics including sensitivity, F-measure, precision, recall and accuracy. The proposed method was compared with conventional techniques such as an RNN, a deep neural network (DNN) and an RNN-genetic algorithm (RNN-GA), respectively.
19

Manju, V. N., and A. Lenin Fred. "Sparse Decomposition Technique for Segmentation and Compression of Compound Images." Journal of Intelligent Systems 29, no. 1 (April 25, 2018): 515–28. http://dx.doi.org/10.1515/jisys-2017-0360.

Abstract:
Compression of compound records and images can be more cumbersome than for ordinary data, since they can be a mix of text, pictures and graphics. The principal requirement for a compound record or image is the quality of the compressed data. In this paper, diverse procedures are used under block-based classification to distinguish the compound image segments. The segmentation process starts with separation of the entire image into blocks by a sparse decomposition technique, into smooth blocks and non-smooth blocks. A grey wolf optimization-based fuzzy C-means (FCM) algorithm is employed to segment background, text, graphics, images and overlap, which are then individually compressed using adaptive Huffman coding, embedded zerotree wavelet and H.264 coding techniques. Experimental results demonstrate that the proposed scheme increases the compression ratio, enhances image quality and limits computational complexity. The proposed method is implemented on the working platform of MATLAB.
APA, Harvard, Vancouver, ISO, and other styles
20

Roy, Debayan, Licong Zhang, Wanli Chang, Dip Goswami, Birgit Vogel-Heuser, and Samarjit Chakraborty. "Tool Integration for Automated Synthesis of Distributed Embedded Controllers." ACM Transactions on Cyber-Physical Systems 6, no. 1 (January 31, 2022): 1–31. http://dx.doi.org/10.1145/3477499.

Full text
Abstract:
Controller design and the corresponding software implementation are usually done in isolated design spaces using respective COTS design tools. However, this separation of concerns can lead to long debugging and integration phases. This is because assumptions made about the implementation platform during the design phase—e.g., related to timing—might not hold in practice, thereby leading to unacceptable control performance. In order to address this, several control/architecture co-design techniques have been proposed in the literature. However, their adoption in practice has been hampered by the lack of design flows using commercial tools. To the best of our knowledge, this is the first article that implements such a co-design method using commercially available design tools in an automotive setting, with the aim of minimally disrupting existing design flows practiced in the industry. The goal of such co-design is to jointly determine controller and platform parameters in order to avoid any design–implementation gap, thereby minimizing implementation-time testing and debugging. Our setting involves distributed implementations of control algorithms on automotive electronic control units (ECUs) communicating via a FlexRay bus. The co-design and the associated toolchain Co-Flex jointly determine controller and FlexRay parameters (which impact signal delays) in order to optimize specified design metrics. Co-Flex seamlessly integrates the modeling and analysis of control systems in MATLAB/Simulink with platform modeling and configuration in SIMTOOLS/SIMTARGET, which is used for configuring FlexRay bus parameters. It automates the generation of multiple Pareto-optimal design options with respect to quality of control and resource usage, from which an engineer can choose. In this article, we outline a step-by-step software development process based on Co-Flex tools for distributed control applications. While our exposition is automotive-specific, this design flow can easily be extended to other domains.
APA, Harvard, Vancouver, ISO, and other styles
21

Swaminathan, Dhivya, and Arul Rajagopalan. "Optimized Network Reconfiguration with Integrated Generation Using Tangent Golden Flower Algorithm." Energies 15, no. 21 (November 1, 2022): 8158. http://dx.doi.org/10.3390/en15218158.

Full text
Abstract:
The importance of integrating distributed generation (DG) units into the distribution network (DN) has grown recently. To decrease power losses (PL), this article presents a meta-heuristic, population-based tangent golden flower pollination algorithm (TGFPA) as an optimization technique for selecting the ideal site for DG. Furthermore, the proposed algorithm also finds the optimal routing configuration for power flow. TGFPA requires very few tuning parameters and comprises a golden-section step and a tangent flight algorithm (TFA). Hence, it is easy to update these parameters to obtain the best values, which provides highly reliable results compared to other existing techniques. In different case studies, the TGFPA's performance was assessed on four test bus systems: IEEE 33-bus, IEEE 69-bus, IEEE 119-bus, and Indian 52-bus. According to simulation results, TGFPA computes the optimal reconfigured DN embedded with DG, achieving the goal of minimal power loss.
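One of TGFPA's two ingredients, the golden-section step, can be sketched in isolation. This is a generic one-dimensional golden-section minimizer under stated assumptions; the tangent-flight move and the actual DG-siting objective from the paper are not reproduced here.

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Golden-section search: shrink [a, b] around the minimum of a
    unimodal function f by reusing golden-ratio interior points."""
    phi = (math.sqrt(5) - 1) / 2  # inverse golden ratio, ~0.618
    c, d = b - phi * (b - a), a + phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            # Minimum lies in [a, d]; old c becomes the new d.
            b, d = d, c
            c = b - phi * (b - a)
        else:
            # Minimum lies in [c, b]; old d becomes the new c.
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2
```

In a hybrid like TGFPA, a step of this kind refines candidate solutions locally while the flight moves explore globally.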
APA, Harvard, Vancouver, ISO, and other styles
22

Tyagi, Shweta, and Kamal K. Bharadwaj. "A Particle Swarm Optimization Approach to Fuzzy Case-based Reasoning in the Framework of Collaborative Filtering." International Journal of Rough Sets and Data Analysis 1, no. 1 (January 2014): 48–64. http://dx.doi.org/10.4018/ijrsda.2014010104.

Full text
Abstract:
The Particle Swarm Optimization (PSO) algorithm, one of the most effective search algorithms inspired by nature, has been successfully applied in a variety of fields and demonstrates fairly immense potential for development. Recently, researchers have been investigating the use of the PSO algorithm in the realm of personalized recommendation systems for providing tailored suggestions to users. Collaborative filtering (CF) is the most promising technique in recommender systems, providing personalized recommendations to users based on their previously expressed preferences and those of other similar users. However, data sparsity and prediction accuracy are the major concerns related to CF techniques. In order to handle these problems, this paper proposes a novel approach to the CF technique employing fuzzy case-based reasoning (FCBR) augmented with the PSO algorithm, called the PSO/FCBR/CF technique. In this method, the PSO algorithm is utilized to estimate feature importance and assign weights accordingly in the process of fuzzy case-based reasoning (FCBR) for the computation of similarity between users and items. In this way, the PSO-embedded FCBR algorithm is applied for the prediction of missing values in the user–item rating matrix, and then the CF technique is employed to generate recommendations for an active user. The experimental results clearly reveal that the proposed scheme, PSO/FCBR/CF, deals with the problem of sparsity as well as improves prediction accuracy when compared with other state-of-the-art CF schemes.
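The core idea — PSO-learned feature weights feeding a similarity measure — can be sketched as follows. Both the weighted cosine similarity and the PSO loop below are illustrative stand-ins, not the paper's FCBR formulation; the fitness function a real system would use (prediction error on held-out ratings) is left as a parameter.

```python
import random

def weighted_similarity(u, v, w):
    """Feature-weighted cosine similarity between two vectors."""
    num = sum(wi * a * b for wi, a, b in zip(w, u, v))
    du = sum(wi * a * a for wi, a in zip(w, u)) ** 0.5
    dv = sum(wi * b * b for wi, b in zip(w, v)) ** 0.5
    return num / (du * dv) if du and dv else 0.0

def pso(fitness, dim, n_particles=20, iters=60, seed=1):
    """Minimal PSO minimizing `fitness` over the box [0, 1]^dim.
    Returns (best position, best fitness)."""
    rng = random.Random(seed)
    pos = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive + social terms (w=0.7, c1=c2=1.4).
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.4 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In the paper's setting, each particle position would be a feature-weight vector and `fitness` would score how well `weighted_similarity` predicts known ratings.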
APA, Harvard, Vancouver, ISO, and other styles
23

Abdul Rahim, Siti Rafidah, Ismail Musirin, Muhammad Murtadha Othman, and Muhamad Hatta Hussain. "Effect of Load Model Using Ranking Identification Technique for Multi Type DG Incorporating Embedded Meta EP-Firefly Algorithm." MATEC Web of Conferences 150 (2018): 01014. http://dx.doi.org/10.1051/matecconf/201815001014.

Full text
Abstract:
This paper presents the effect of the load model on distributed generation (DG) planning in a distribution system. To achieve optimal allocation and placement of DG, a ranking identification technique was proposed for studying DG planning using a pre-developed Embedded Meta Evolutionary Programming–Firefly Algorithm. The aim of this study is to analyze the effect of different types of DG in order to reduce the total losses considering the load factor. To verify the effectiveness of the proposed technique, the IEEE 33-bus test system was utilized as the test specimen. In this study, the proposed techniques were used to determine the DG sizing and the suitable location for DG planning. The results produced serve the DG optimization process for the benefit of power system operators and planners in the utility. The power system planner can choose the suitable size and location from the results obtained in this study within the appropriate company budget. The modeling of voltage-dependent loads has been presented, and the results show that voltage-dependent load models have a significant effect on the total losses of a distribution system for different DG types.
APA, Harvard, Vancouver, ISO, and other styles
24

Maharana, Himanshu Shekhar, and Saroj Kumar Dash. "Dual objective multiconstraint swarm optimization based advanced economic load dispatch." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 3 (June 1, 2021): 1924. http://dx.doi.org/10.11591/ijece.v11i3.pp1924-1932.

Full text
Abstract:
In electric power systems, a vital topic to be mooted is economic load dispatch (ELD). It is a non-linear problem with unavoidable constraints such as valve-point loading and ramp-rate limits. For solving the ELD problem, distinct methods were devised and tried on different electric supply systems, yielding slow convergence rates. To achieve fast convergence, a dual-objective multi-constraint swarm optimization based advanced economic load dispatch (DOMSOBAELD) algorithm is proposed in this article, making use of simulated values of real power outputs of a thermal power plant as initial estimates for the PSO technique embedded in it, and it is used for optimizing the economic dispatch problem. The DOMSOBAELD method was developed as an amalgam of these techniques. Power line losses, multiple valves in steam turbines, droop constraints and prohibited zones were taken into account to make the ELD optimization as genuinely approximate as possible. The results obtained from DOMSOBAELD are compared with particle swarm optimization (PSO), PSOIW and differential particle swarm optimization (DPSO) techniques. It is quite conspicuous that DOMSOBAELD yielded minimum cost values with the most favourable values of real unit outputs. Thus, the proposed method proves advantageous over other heuristic methods and yields the best solution for ELD by selecting incremental fuel cost as the decision variable and the cost function as the fitness function.
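For context, the classical baseline that swarm methods such as DOMSOBAELD improve upon is lambda iteration, which dispatches units at equal incremental fuel cost. This sketch assumes smooth quadratic cost curves and neglects losses, valve-point effects and prohibited zones — exactly the non-smooth features that motivate heuristic methods.

```python
def lambda_dispatch(units, demand, tol=1e-6):
    """Lambda-iteration economic dispatch for quadratic cost units
    Ci(P) = a + b*P + c*P^2. Each unit is (a, b, c, pmin, pmax).
    Bisect on the incremental cost lambda until outputs meet demand."""
    lo, hi = 0.0, 1e4
    while hi - lo > tol:
        lam = (lo + hi) / 2
        total = 0.0
        for a, b, c, pmin, pmax in units:
            # Output where marginal cost b + 2cP equals lambda, clipped.
            p = (lam - b) / (2 * c)
            total += min(pmax, max(pmin, p))
        if total < demand:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [min(pmax, max(pmin, (lam - b) / (2 * c)))
            for a, b, c, pmin, pmax in units]
```

With valve-point terms the cost becomes non-convex and this bisection no longer applies, which is where PSO-family methods earn their keep.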
APA, Harvard, Vancouver, ISO, and other styles
25

Kim, Youngbeom, Jingyo Song, Taek-Young Youn, and Seog Chung Seo. "Crystals-Dilithium on ARMv8." Security and Communication Networks 2022 (February 27, 2022): 1–12. http://dx.doi.org/10.1155/2022/5226390.

Full text
Abstract:
Crystals-Dilithium is one of the digital-signature algorithms in NIST’s ongoing post-quantum cryptography (PQC) standardization final round. Security and computational efficiency concerning software and hardware implementations are the primary criteria for PQC standardization. Many studies have been conducted to efficiently apply Dilithium in various environments; however, they focused on traditionally used PCs and 32-bit Advanced RISC Machine (ARM) processors (Cortex-M4). ARMv8-based processors are more advanced embedded microcontrollers (MCUs) and have been widely used for various IoT devices, edge computing devices, and On-Board Units in autonomous driving cars. In this study, we present an efficient Crystals-Dilithium implementation on an ARMv8-based MCU. To enhance Dilithium’s performance, we optimize number theoretic transform (NTT)-based polynomial multiplication, the core operation of Dilithium, by leveraging ARMv8’s architectural properties such as large register sets and the NEON engine. We apply task parallelism to NTT-based polynomial multiplication using the NEON engine. In addition, we reduce the number of memory accesses during NTT-based polynomial multiplication with the proposed merging and register-holding techniques. Finally, we present an interleaved NTT-based multiplication executed simultaneously on the ARM processor and the NEON engine. This implementation can further optimize performance by overlapping ARM processor latency with NEON operations. Through the proposed optimization methods, for Dilithium 3, we achieved a performance improvement of about 43.83% in key pair generation, 113.25% in signing, and 41.92% in verification compared to the reference implementation submitted to the final round of the NIST PQC competition.
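NTT-based polynomial multiplication, the operation being optimized, can be illustrated with a compact reference sketch. Two hedges: this uses the NTT-friendly prime 998244353 with primitive root 3 for readability rather than Dilithium's modulus q = 8380417, and it performs plain cyclic convolution with none of the paper's NEON/register-level optimizations.

```python
P, ROOT = 998244353, 3  # NTT-friendly prime (not Dilithium's q) and its primitive root

def ntt(a, invert=False):
    """In-place iterative Cooley-Tukey NTT over Z_P; len(a) must be a power of 2."""
    n = len(a)
    # Bit-reversal permutation.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w = pow(ROOT, (P - 1) // length, P)
        if invert:
            w = pow(w, P - 2, P)  # modular inverse root for the inverse transform
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * wn % P
                a[k], a[k + length // 2] = (u + v) % P, (u - v) % P
                wn = wn * w % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)
        for i in range(n):
            a[i] = a[i] * n_inv % P
    return a

def poly_mul(f, g):
    """Multiply polynomials in O(n log n): forward NTT, pointwise product, inverse NTT."""
    n = 1
    while n < len(f) + len(g) - 1:
        n <<= 1
    fa, ga = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    ntt(fa); ntt(ga)
    prod = [x * y % P for x, y in zip(fa, ga)]
    return ntt(prod, invert=True)[:len(f) + len(g) - 1]
```

Dilithium additionally works with negacyclic convolution modulo x^256 + 1; the transform structure targeted by the merging and register-holding techniques is the same butterfly network shown here.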
APA, Harvard, Vancouver, ISO, and other styles
26

Biswas, Arnab Kumar. "Cryptographic Software IP Protection without Compromising Performance or Timing Side-channel Leakage." ACM Transactions on Architecture and Code Optimization 18, no. 2 (March 2021): 1–20. http://dx.doi.org/10.1145/3443707.

Full text
Abstract:
Program obfuscation is a widely used cryptographic software intellectual property (IP) protection technique against reverse engineering attacks in embedded systems. However, very few works have studied the impact of combining various obfuscation techniques on the obscurity (difficulty of reverse engineering) and performance (execution time) of obfuscated programs. In this article, we propose a Genetic Algorithm (GA)-based framework that not only optimizes obscurity and performance of obfuscated cryptographic programs, but also ensures very low timing side-channel leakage. Our proposed Timing Side-Channel Sensitive Program Obfuscation Optimization Framework (TSC-SPOOF) determines the combination of obfuscation transformation functions that produce optimized obfuscated programs with preferred optimization parameters. In particular, TSC-SPOOF employs normalized compression distance (NCD) and channel capacity to measure obscurity and timing side-channel leakage, respectively. We also use a RISC-V rocket core running on a Xilinx Zynq FPGA device as part of our framework to obtain realistic results. The experimental results clearly show that our proposed solution leads to cryptographic programs with lower execution time, higher obscurity, and lower timing side-channel leakage than unguided obfuscation.
APA, Harvard, Vancouver, ISO, and other styles
27

Lau, Henry, C. K. M. Lee, Dilupa Nakandala, and Paul Shum. "An outcome-based process optimization model using fuzzy-based association rules." Industrial Management & Data Systems 118, no. 6 (July 9, 2018): 1138–52. http://dx.doi.org/10.1108/imds-08-2017-0347.

Full text
Abstract:
Purpose The purpose of this paper is to propose an outcome-based process optimization model that can be deployed in companies to enhance their business operations, strengthening their competitiveness in the current industrial environment. To validate the approach, a case example has been included to assess the practicality and validity of the approach when applied in an actual environment. Design/methodology/approach This model embraces two approaches: fuzzy logic, for mimicking the human thinking and decision-making mechanism; and a data-mining association-rules approach, for optimizing the analyzed knowledge for future decision-making as well as providing a mechanism to apply the obtained knowledge to support the improvement of different types of processes. Findings The methodology of the proposed algorithm has been evaluated in a case study, and the algorithm shows its potential to determine the primary factors that have a great effect upon the final result of an entire operation comprising a number of processes. In this case example, relevant process parameters have been identified as the important factors causing significant impact on the final outcome. Research limitations/implications The proposed methodology depends on human knowledge and personal experience to determine the various fuzzy regions of the processes. This can be fairly subjective and even biased. As such, it is advisable to promote the development of artificial intelligence techniques supporting automatic machine learning to derive the fuzzy sets, which should provide more reliable results. Originality/value Recent study of the relevant topics indicates that an intelligent process optimization approach, able to interact seamlessly with the knowledge-based system and extract useful information for process improvement, is still an area that requires more study and investigation. In this research, a process optimization system with an effective process mining algorithm embedded for supporting knowledge discovery is proposed to achieve better quality control.
APA, Harvard, Vancouver, ISO, and other styles
28

Rahman, Md Lizur, Ahmed Wasif Reza, and Shaiful Islam Shabuj. "An internet of things-based automatic brain tumor detection system." Indonesian Journal of Electrical Engineering and Computer Science 25, no. 1 (January 1, 2022): 214. http://dx.doi.org/10.11591/ijeecs.v25.i1.pp214-222.

Full text
Abstract:
Due to advances in information and communication technologies, the usage of the internet of things (IoT) has reached an evolutionary stage in the development of the modern health care environment. In recent human health care analysis, the number of brain tumor patients has increased severely, placing brain tumors in the 10th position among the leading causes of death. Previous state-of-the-art techniques based on magnetic resonance imaging (MRI) face challenges in brain tumor detection, as they require accurate image segmentation. A wide variety of algorithms were developed earlier to classify MRI images, which are computationally very complex and expensive. In this paper, a cost-effective stochastic method for the automatic detection of brain tumors using the IoT is proposed. The proposed system uses the physical activities of the brain to detect brain tumors. To track daily brain activities, a portable wrist band named Mi Band 2 and temperature and blood pressure monitoring sensors embedded with an Arduino Uno are used, and the system achieved an accuracy of 99.3%. Experimental results show the effectiveness of the designed method in detecting brain tumors automatically, producing better accuracy in comparison to previous approaches.
APA, Harvard, Vancouver, ISO, and other styles
29

Ahmed, Ismail Taha, Norziana Jamil, and Baraa Tareq Hammad. "Low feature dimension in image steganographic recognition." Indonesian Journal of Electrical Engineering and Computer Science 27, no. 2 (August 1, 2022): 885. http://dx.doi.org/10.11591/ijeecs.v27.i2.pp885-891.

Full text
Abstract:
Steganalysis aids in the detection of steganographic data without the need to know the embedding algorithm or the "cover" image. The researchers' major goal was to develop a steganalysis technique that might improve recognition accuracy while utilizing a minimal feature vector dimension. A number of steganalysis techniques have been developed to detect steganography in images. However, steganalysis performance is still limited by large feature vector dimensions, which take a long time to compute. The variations in texture and properties of an embedded image are clearly visible. Therefore, in this paper, we propose steganographic recognition based on a texture feature, the gray level co-occurrence matrix (GLCM). As classifiers, Ada-Boost and Gaussian discriminant analysis (GDA) are used. In order to evaluate the performance of the proposed method, we use a public database and apply the method to the IStego100K dataset. The results of the experiment show that the proposed method can greatly improve accuracy. They also indicate that, in terms of accuracy, the Ada-Boost classifier surpassed the GDA. The comparative findings show that the proposed method outperforms other current techniques, especially in terms of feature size and recognition accuracy.
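A GLCM and texture features derived from it fit in a few lines of Python. This sketch computes the co-occurrence matrix for a single pixel offset plus two classic Haralick-style features (contrast and energy); which offsets and features the paper actually feeds the classifiers is an assumption here.

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for offset (dx, dy) over a 2-D list
    of quantized gray levels, plus contrast and energy features."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[img[y][x]][img[ny][nx]] += 1  # count the level pair
    total = sum(sum(r) for r in m) or 1
    p = [[v / total for v in r] for r in m]  # normalize to probabilities
    contrast = sum(p[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    energy = sum(v * v for r in p for v in r)
    return p, contrast, energy
```

A low-dimensional vector of such features per image is what keeps the classifier cheap compared with high-dimensional steganalysis feature sets.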
APA, Harvard, Vancouver, ISO, and other styles
30

Hassan, Nidaa Flaih, Akbas Ezaldeen Ali, Teaba Wala Aldeen, and Ayad Al-Adhami. "Video mosaic watermarking using plasma key." Indonesian Journal of Electrical Engineering and Computer Science 22, no. 2 (May 1, 2021): 619. http://dx.doi.org/10.11591/ijeecs.v22.i2.pp619-628.

Full text
Abstract:
Video watermarking is one of the most widespread of the many watermarking techniques presently in use; this is because copyright abuse and misappropriation are especially common for video content. In this paper, a new watermarking algorithm is proposed to embed a logo in digital video for copyright protection. To make the watermarks more robust to attack, the host frames and host embedding indices must be changeable. A new algorithm is proposed to determine host frames by a plasma function; host location indices in frames are also determined by another plasma function. The logo is divided using the mosaic principle, and the size of the mosaic blocks is determined initially according to the degree of protection: the smaller the mosaic blocks, the safer the embedding, and vice versa. Digital watermarks are embedded easily without any degradation of video quality. On the other side, the watermark is retrieved by applying the reverse of the proposed embedding algorithm, and the extracted watermark is still recognizable. The experimental results confirm that the watermark is robust against three types of attacks: addition of Gaussian noise, JPEG compression, and rotation.
APA, Harvard, Vancouver, ISO, and other styles
31

Rauh, Andreas, Robert Dehnert, Swantje Romig, Sabine Lerch, and Bernd Tibken. "Iterative Solution of Linear Matrix Inequalities for the Combined Control and Observer Design of Systems with Polytopic Parameter Uncertainty and Stochastic Noise." Algorithms 14, no. 7 (July 7, 2021): 205. http://dx.doi.org/10.3390/a14070205.

Full text
Abstract:
Most research activities that utilize linear matrix inequality (LMI) techniques are based on the assumption that the separation principle of control and observer synthesis holds. This principle states that the combination of separately designed linear state feedback controllers and linear state observers, which are independently proven to be stable, results in overall stable system dynamics. However, even for linear systems, this property does not necessarily hold if polytopic parameter uncertainty and stochastic noise influence the system’s state and output equations. In this case, the control and observer design needs to be performed simultaneously to guarantee stabilization. However, the loss of the validity of the separation principle leads to nonlinear matrix inequalities instead of LMIs. For those nonlinear inequalities, the current paper proposes an iterative LMI solution procedure. If this algorithm produces a feasible solution, the resulting controller and observer gains ensure robust stability of the closed-loop control system for all possible parameter values. In addition, the proposed optimization criterion leads to a minimization of the sensitivity to stochastic noise so that the actual state trajectories converge as closely as possible to the desired operating point. The efficiency of the proposed solution approach is demonstrated by stabilizing the Zeeman catastrophe machine along the unstable branch of its bifurcation diagram. Additionally, an observer-based tracking control task is embedded into an iterative learning-type control framework.
APA, Harvard, Vancouver, ISO, and other styles
32

Gaber, Tarek, Chin-Shiuh Shieh, Yuh-Chung Lin, and Fatma Masmoudi. "Modified Flower Pollination Algorithm based Resource Management Model for Clustered IoT Network." International Journal of Wireless and Ad Hoc Communication 4, no. 2 (2022): 97–106. http://dx.doi.org/10.54216/ijwac.040205.

Full text
Abstract:
The Internet of Things (IoT) is a technological innovation that has defined interaction and computation in the latest period. IoT objects are empowered by embedded gadgets whose limited resources have to be managed effectively. IoT usually means a network of devices connected through a wireless network that interact through the internet. Resource management, particularly energy management, becomes a serious problem while devising IoT gadgets. Numerous researchers have stated that routing and clustering are energy-effectual solutions for optimum resource management in IoT settings. This study introduces a Modified Flower Pollination Algorithm based Resource Management (MFPA-RMM) model for a Clustered IoT Environment. The presented MFPA-RMM model majorly focuses on clustering the IoT devices in such a way that the resources are proficiently managed. The MFPA-RMM model is derived from fuzzy c-means (FCM) combined with FPA. The FPA is a heuristic algorithm with the benefits of global optimization and faster convergence; it was therefore incorporated into the FCM system, resolving the disadvantages of the FCM method, in a mechanism termed FCM-FPA. The result analysis of the MFPA-RMM model reported enhanced performance over other well-known techniques like LEACH and TEEN.
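The FCM half of the FCM-FPA mechanism follows directly from the standard update equations. This is a plain fuzzy c-means sketch with random initialization; the FPA step the paper adds precisely to reduce FCM's sensitivity to initialization is not reproduced here.

```python
import random

def fcm(points, c, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: alternate weighted-centroid and membership
    updates. Returns (cluster centers, membership matrix)."""
    rng = random.Random(seed)
    n, d = len(points), len(points[0])
    # Random fuzzy membership matrix, each row normalized to sum to 1.
    u = [[rng.random() for _ in range(c)] for _ in range(n)]
    u = [[v / sum(row) for v in row] for row in u]
    centers = [[0.0] * d for _ in range(c)]
    for _ in range(iters):
        # Center update: membership-weighted mean with weights u^m.
        for j in range(c):
            w = [u[i][j] ** m for i in range(n)]
            tw = sum(w)
            centers[j] = [sum(w[i] * points[i][k] for i in range(n)) / tw
                          for k in range(d)]
        # Membership update from relative distances to all centers.
        for i in range(n):
            dists = [max(1e-12, sum((points[i][k] - centers[j][k]) ** 2
                                    for k in range(d)) ** 0.5)
                     for j in range(c)]
            for j in range(c):
                u[i][j] = 1.0 / sum((dists[j] / dk) ** (2 / (m - 1))
                                    for dk in dists)
    return centers, u
```

In the clustered-IoT setting, `points` would be node attributes (e.g., position and residual energy) and the resulting centers would seed cluster heads.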
APA, Harvard, Vancouver, ISO, and other styles
33

Kumari, Gorli L. Aruna, Poosapati Padmaja, and Jaya G. Suma. "A novel method for prediction of diabetes mellitus using deep convolutional neural network and long short-term memory." Indonesian Journal of Electrical Engineering and Computer Science 26, no. 1 (April 1, 2022): 404. http://dx.doi.org/10.11591/ijeecs.v26.i1.pp404-413.

Full text
Abstract:
Hyperglycemia arises due to diabetes mellitus, which is a persistent and life-threatening ailment. In this paper, a deep convolutional neural network is embedded into long short-term memory networks to recognize diabetes early and to decrease the complications that can occur through diabetes, irrespective of age. The diabetes problem is gradually growing and is presently reported as a leading cause of death. According to recent studies, 48% of the overall world population will be affected by diabetes by 2045. If diabetes is unidentified in its early stages, it may cause additional cardiac problems. In the proposed work, a deep learning framework combining a convolutional neural network and long short-term memory is proposed, embedding the two to leverage their respective advantages for diabetes recognition and to allow early prediction of diabetes so as to avoid other complications. The experimental evaluation on the benchmark diabetes data set demonstrates that the proposed embedded deep long short-term memory model outperforms other machine learning and conventional deep learning approaches. The proposed algorithm outperforms existing techniques in total effectiveness and accuracy when predicting whether a person will suffer from diabetes.
APA, Harvard, Vancouver, ISO, and other styles
34

Mata, Carlos. "Technology Focus: Production Monitoring (March 2022)." Journal of Petroleum Technology 74, no. 03 (March 1, 2022): 68–69. http://dx.doi.org/10.2118/0322-0068-jpt.

Full text
Abstract:
We are seeing an uptrend in the instrumentation of legacy wells across the world, as costs lower and business cases become obvious. Several interesting technology applications have been showcased lately about retrofitting instrumentation and control technology in legacy wells. Advances on the digital front, where data science and engineering analytics are becoming more embedded in regular production monitoring and optimization processes, have been widespread. On the gas lift side, developments in surface controllable gas lift valves, which can be deployed during the completion phase, or retrofitted in existing mandrels through well intervention, have proceeded apace. This technology can bring significant improvement in the monitoring and management of wells because the operating envelope can be significantly expanded. Other technology developments involve the use of alternative data. As the saying goes, somebody’s noise is somebody else’s data. A use case of alternative data for rod pumps uses edge computing to process electric measurements and extract features using signal processing and machine learning. This can be used to create synthetic dynamometer cards used to optimize the wells in real time and predict failures ahead of time. Similar techniques could be applicable for electric submersible pumps. Distributed acoustic sensing (DAS) has many applications, but it is challenging to fully exploit the capability because of the sheer amount of raw data coming from these sensors. The data need to be compressed using feature-extraction algorithms. Each use case may require data over different regions of the frequency spectrum covered by the sensors, so how can feature extraction be set to account for all present and future use cases? Significant advances in inflow profiling have been achieved over the past few years, correlating DAS signals against flow-loop measurements and transient simulations. 
These technologies are very promising, especially now that wet connect and pumpdown technology for fiber optics is gaining more attention. Digital technology for orchestrating production-optimization and reservoir-management work flows has been increasingly embedding machine-learning functionality. Despite these advances, it remains a challenge to maintain these digital solutions over the long term. The most-successful companies in this area ensure the systems are tightly integrated with business-critical work flows, such as integrated activity planning, loss management, locked-in potential management, reservoir-surveillance planning, well-work-opportunity identification, production forecasting, and production back allocation. This involves a significant management of change process. It takes time and effort, but the rewards are worth it. Recommended additional reading at OnePetro: www.onepetro.org. SPE 207879 - Expert Advisory System for Production Surveillance and Optimization Assisted by Artificial Intelligence by Carlos Mata, ADNOC Upstream, et al. SPE 201313 - Production Rate Measurement Optimization Using Test Separator and In-Well Sound Speed by Ö. Haldun Ünalmis, Weatherford SPE 203119 - Wireless Completion Monitoring and Flow Control: A Hybrid Solution for Extended Capabilities by Marcel Bouman, Emerson Automation Solutions
APA, Harvard, Vancouver, ISO, and other styles
35

Majumder, Jayeeta, and Chittaranjan Pradhan. "An interpolation based steganographic technique with least-significant-bit and pixel value differencing in a pixel block." Indonesian Journal of Electrical Engineering and Computer Science 27, no. 2 (August 1, 2022): 1074. http://dx.doi.org/10.11591/ijeecs.v27.i2.pp1074-1082.

Full text
Abstract:
Over the past few years, in order to improve the hiding capacity and the peak signal-to-noise ratio (PSNR) value, several steganographic techniques have been developed. Steganography has become a popular technique to transmit secret data through any medium. In image steganography, the human eye cannot easily identify the hidden data embedded into the image; small changes are also not detected by the human eye. High hiding capacity along with high visual quality is provided by the pixel value differencing (PVD) method. This paper first proposes a method of interpolation between the pixel blocks and then applies the least-significant-bit (LSB) substitution technique with the PVD method. At the starting phase, the original image is fixed to 2x2 blocks, then the nearest neighbor interpolation (NNI) technique is implemented. In the next phase, the upper left pixel is embedded with hidden data by the k-bit LSB substitution method. The newly generated neighbouring pixel value is measured; thus, data is hidden from three directions. In this paper, using two different range tables, the new algorithm is proposed. We observed that in both cases, the PSNR and the hiding capacity are improved.
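The k-bit LSB substitution half of the scheme is easy to sketch in isolation (the interpolation and PVD stages are omitted, and `k=2` is an illustrative choice, not the paper's parameter).

```python
def embed_lsb(pixels, bits, k=2):
    """Replace the k least-significant bits of each pixel with message bits.
    Pixels beyond the message length keep their original LSBs."""
    out, i = [], 0
    mask = (1 << k) - 1
    for p in pixels:
        chunk = bits[i:i + k]
        i += k
        # Pad a short final chunk with zeros; keep original LSBs if no bits left.
        val = int(chunk.ljust(k, '0'), 2) if chunk else p & mask
        out.append((p & ~mask) | val)
    return out

def extract_lsb(pixels, n_bits, k=2):
    """Read back the first n_bits message bits from the k LSBs of each pixel."""
    mask = (1 << k) - 1
    bits = ''.join(format(p & mask, f'0{k}b') for p in pixels)
    return bits[:n_bits]
```

PVD would additionally modulate how many bits each pixel pair carries based on the local difference, concentrating payload in busy regions where changes are less visible.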
APA, Harvard, Vancouver, ISO, and other styles
36

Nurmaini, Siti, Ahmad Zarkasi, Deris Stiawan, Bhakti Yudho Suprapto, Sri Desy Siswanti, and Huda Ubaya. "Robot movement controller based on dynamic facial pattern recognition." Indonesian Journal of Electrical Engineering and Computer Science 22, no. 2 (May 1, 2021): 733. http://dx.doi.org/10.11591/ijeecs.v22.i2.pp733-743.

Full text
Abstract:
In terms of movement, mobile robots are equipped with various navigation techniques. One of the navigation techniques used is facial pattern recognition, but mobile robot hardware usually uses embedded platforms, which have limited resources. In this study, a new navigation technique is proposed by combining a face detection system with a RAM-based artificial neural network. In this technique, the face detection area is divided into five frame areas, namely top, bottom, right, left, and neutral. The value of each detection area is grouped into the RAM discriminator. A training and testing process is then carried out to determine which detection value is closest to the true value, comparing values with the output pattern so that the winning discriminator, used as the navigation value, is obtained. In testing 63 face samples, the upper and lower frame areas yield an accuracy rate of 95%, while the right and left frame areas yield an accuracy rate of 93%. In testing the RAM-based neural network algorithm pattern, the memory capacity of the RAM discriminator is used 50% more efficiently, with the 16-bit input pattern reduced to 8 bits, while the execution time from input vector to class winner is under a millisecond (ms).
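A RAM-based (weightless) discriminator of the kind described can be sketched in a few lines: the binary input is split into fixed-size tuples, each addressing its own RAM, and the response counts how many RAMs recognize the input. This WiSARD-style sketch uses Python sets as sparse RAMs; the paper's five-region framing and exact tuple sizes are not reproduced.

```python
class RamDiscriminator:
    """Weightless discriminator: training writes 1s into addressed RAM
    cells; recognition counts matching RAMs (no arithmetic weights)."""

    def __init__(self, input_bits, tuple_size):
        self.tuple_size = tuple_size
        self.n_rams = input_bits // tuple_size
        self.rams = [set() for _ in range(self.n_rams)]

    def _addresses(self, bits):
        # Split the bit string into one address per RAM.
        t = self.tuple_size
        return [bits[i * t:(i + 1) * t] for i in range(self.n_rams)]

    def train(self, bits):
        for ram, addr in zip(self.rams, self._addresses(bits)):
            ram.add(addr)

    def response(self, bits):
        return sum(addr in ram
                   for ram, addr in zip(self.rams, self._addresses(bits)))
```

One discriminator is trained per class (here, per frame area); at run time the class whose discriminator gives the highest response wins, which maps well to set/bit lookups on resource-limited embedded platforms.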
APA, Harvard, Vancouver, ISO, and other styles
37

Boutekkouk, Fateh. "Real-Time Embedded Systems Scheduling Optimization." International Journal of Applied Evolutionary Computation 12, no. 1 (January 2021): 43–73. http://dx.doi.org/10.4018/ijaec.2021010104.

Full text
Abstract:
The embedded real-time scheduling problem is a hard multi-objective optimization problem under constraints, since it must compromise between three conflicting key objectives: guaranteeing task deadlines, reducing energy consumption, and enhancing reliability. Consequently, conventional approaches can easily fail to find a good tradeoff, in particular when the design space is too vast. Bio-inspired meta-heuristics, on the other hand, have proved their efficiency even when the design space is very large. In this framework, the authors review the most pertinent works of literature applying bio-inspired methods to the real-time scheduling problem for embedded systems, notably artificial immune systems, machine learning, cellular automata, evolutionary algorithms, and swarm intelligence. A deep discussion is conducted, highlighting the main challenges of using bio-inspired methods in the context of embedded systems. At the end of this review, the authors highlight some future directions.
APA, Harvard, Vancouver, ISO, and other styles
38

Panda, P. R., F. Catthoor, N. D. Dutt, K. Danckaert, E. Brockmeyer, C. Kulkarni, A. Vandercappelle, and P. G. Kjeldsberg. "Data and memory optimization techniques for embedded systems." ACM Transactions on Design Automation of Electronic Systems 6, no. 2 (April 2001): 149–206. http://dx.doi.org/10.1145/375977.375978.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Sotelo, David, Antonio Favela-Contreras, Alfonso Avila, Arturo Pinto, Francisco Beltran-Carbajal, and Carlos Sotelo. "A New Software-Based Optimization Technique for Embedded Latency Improvement of a Constrained MIMO MPC." Mathematics 10, no. 15 (July 24, 2022): 2571. http://dx.doi.org/10.3390/math10152571.

Full text
Abstract:
Embedded controllers for multivariable processes have become a powerful tool in industrial implementations, where Model Predictive Control offers higher performance than standard control methods. However, such controllers face low computational resources, which reduce their processing capabilities. Based on the pipelining concept, this paper presents a new embedded software-based implementation of a constrained Multi-Input-Multi-Output predictive control algorithm. The main goal of this work is to improve the timing performance and resource usage of the control algorithm. A profiling study of the baseline algorithm is therefore developed, and the performance bottlenecks are identified. The functionality and effectiveness of the proposed implementation are validated on the NI myRIO 1900 platform using simulations of a jet transport aircraft during cruise flight and a tape transport system. Numerical results for the study cases show that latency and processor usage are substantially reduced compared with the baseline algorithm, by 4.6× and 3.17× respectively. The resulting efficient program execution makes the proposed software-based implementation well suited for embedded control systems.
APA, Harvard, Vancouver, ISO, and other styles
40

Ang, Li-Minn, Kah Phooi Seng, and Christopher Wing Hong Ngau. "Biologically Inspired Components in Embedded Vision Systems." International Journal of Systems Biology and Biomedical Technologies 3, no. 1 (January 2015): 39–72. http://dx.doi.org/10.4018/ijsbbt.2015010103.

Full text
Abstract:
Biological vision components like visual attention (VA) algorithms aim to mimic the mechanism of the human vision system. VA algorithms are often complex and require high computational and memory resources to be realized. In biologically-inspired vision and embedded systems, computational capacity and memory resources are of primary concern. This paper presents a discussion of implementing VA algorithms in embedded vision systems in a resource-constrained environment. The authors survey various types of VA algorithms and identify potential techniques which can be implemented in embedded vision systems. They then propose a low-complexity, low-memory VA model based on a well-established mainstream VA model. The proposed model addresses critical factors in terms of algorithm complexity, memory requirements, computational speed, and salience prediction performance to ensure the reliability of the VA in a resource-constrained environment. Finally, a custom softcore microprocessor-based hardware implementation on a Field-Programmable Gate Array (FPGA) is used to verify the implementation feasibility of the presented model.
APA, Harvard, Vancouver, ISO, and other styles
41

Jiang, Yu, Hehua Zhang, Zonghui Li, Yangdong Deng, Xiaoyu Song, Ming Gu, and Jiaguang Sun. "Design and Optimization of Multiclocked Embedded Systems Using Formal Techniques." IEEE Transactions on Industrial Electronics 62, no. 2 (February 2015): 1270–78. http://dx.doi.org/10.1109/tie.2014.2316234.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Boutekkouk, Fateh. "Real Time Scheduling Optimization." Journal of Information Technology Research 12, no. 4 (October 2019): 132–52. http://dx.doi.org/10.4018/jitr.2019100107.

Full text
Abstract:
This article deals with real-time embedded multiprocessor systems scheduling optimization using conventional and quantum-inspired genetic algorithms. Real-time scheduling problems are known to be NP-hard, so researchers have resorted to meta-heuristics instead of exact methods. Genetic algorithms are a good choice for solving complex, non-linear, multi-objective, and multi-modal problems, but conventional genetic algorithms may consume much time to find good solutions. For this reason, the authors use quantum-inspired genetic algorithms to minimize the mean response time and the number of tasks missing their deadlines on multiprocessor architectures. The proposed approach takes advantage of both static and dynamic preemptive scheduling. The developed algorithms are evaluated on a typical example, showing a big improvement in the search time for good solutions with quantum genetic algorithms compared to conventional ones.
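A minimal sketch of the quantum-inspired encoding such algorithms build on: each gene is a Q-bit (alpha, beta) with alpha² + beta² = 1, which is "observed" into a classical bit and rotated toward the best solution found so far. The function names and rotation angle below are illustrative assumptions, not the article's exact operators:

```python
import math, random

def observe(qchrom):
    """Collapse a Q-bit chromosome to a classical bit string:
    bit i is 1 with probability beta_i**2."""
    return [1 if random.random() < beta ** 2 else 0 for _, beta in qchrom]

def rotate(qchrom, best_bits, delta=0.05 * math.pi):
    """Rotate each Q-bit toward the corresponding bit of the best
    solution found so far (simplified rotation-gate update)."""
    out = []
    for (a, b), bit in zip(qchrom, best_bits):
        theta = delta if bit == 1 else -delta
        out.append((a * math.cos(theta) - b * math.sin(theta),
                    a * math.sin(theta) + b * math.cos(theta)))
    return out

# start from a uniform superposition: alpha = beta = 1/sqrt(2)
q = [(1 / math.sqrt(2), 1 / math.sqrt(2))] * 8
```

Because each chromosome is a superposition rather than a fixed bit string, a small population can cover a large part of the search space, which is the source of the reported speed-up.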
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Biaobiao, Yue Wu, Jiabin Lu, and K. L. Du. "Evolutionary Computation and Its Applications in Neural and Fuzzy Systems." Applied Computational Intelligence and Soft Computing 2011 (2011): 1–20. http://dx.doi.org/10.1155/2011/938240.

Full text
Abstract:
Neural networks and fuzzy systems are two soft-computing paradigms for system modelling. Adapting a neural or fuzzy system requires solving two optimization problems: structural optimization and parametric optimization. Structural optimization is a discrete optimization problem which is very hard to solve using conventional optimization techniques. Parametric optimization can be solved using conventional optimization techniques, but the solution may be easily trapped at a bad local optimum. Evolutionary computation is a general-purpose stochastic global optimization approach under the universally accepted neo-Darwinian paradigm, which is a combination of the classical Darwinian evolutionary theory, the selectionism of Weismann, and the genetics of Mendel. Evolutionary algorithms are a major approach to adaptation and optimization. In this paper, we first introduce evolutionary algorithms with emphasis on genetic algorithms and evolutionary strategies. Other evolutionary algorithms such as genetic programming, evolutionary programming, particle swarm optimization, immune algorithm, and ant colony optimization are also described. Some topics pertaining to evolutionary algorithms are also discussed, and a comparison between evolutionary algorithms and simulated annealing is made. Finally, the application of EAs to the learning of neural networks as well as to the structural and parametric adaptations of fuzzy systems is also detailed.
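The selection / crossover / mutation loop common to the genetic algorithms surveyed here can be sketched as follows; the OneMax fitness and all parameter values are placeholder choices, not from the paper:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, gens=100,
                      p_mut=0.02, seed=0):
    """Generic bit-string GA: binary tournament selection,
    one-point crossover, per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(gens):
        def pick():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)            # one-point crossover
            child = [bit ^ (rng.random() < p_mut)     # bit-flip mutation
                     for bit in p1[:cut] + p2[cut:]]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(sum)   # OneMax: fitness = number of 1-bits
```

Swapping the placeholder fitness for a structural or parametric cost function is exactly how the paper's neural and fuzzy adaptation applications are set up.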
APA, Harvard, Vancouver, ISO, and other styles
44

Wang, Yiyang, Risheng Liu, Long Ma, and Xiaoliang Song. "Task Embedded Coordinate Update: A Realizable Framework for Multivariate Non-Convex Optimization." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1650–57. http://dx.doi.org/10.1609/aaai.v33i01.33011650.

Full text
Abstract:
In this paper we propose a realizable framework, TECU, which embeds task-specific strategies into the update schemes of coordinate descent for optimizing multivariate non-convex problems with coupled objective functions. On one hand, TECU improves algorithm efficiency by embedding productive numerical algorithms for optimizing univariate sub-problems with nice properties. On the other hand, it increases the probability of obtaining desired results by embedding advanced techniques from optimizations of realistic tasks. Integrating both numerical algorithms and advanced techniques, TECU is proposed as a unified framework for solving a class of non-convex problems. Although the task-embedded strategies introduce inaccuracies into the sub-problem optimizations, we provide a realizable criterion to control the errors while ensuring robust performance with rigorous theoretical analysis. By respectively embedding ADMM and a residual-type CNN in our algorithm framework, the experimental results verify both the efficiency and the effectiveness of embedding task-oriented strategies in coordinate descent for solving practical problems.
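A toy illustration of the coordinate-descent backbone that TECU builds on: each block of variables is minimized while the other is held fixed. The simple coupled quadratic objective is our stand-in, not one of the paper's non-convex problems:

```python
def coordinate_descent(x, y, iters=100):
    """Alternating exact minimisation of
    f(x, y) = (x - 1)**2 + (y + 2)**2 + 0.5*x*y  (coupled, convex)."""
    for _ in range(iters):
        x = (2 - 0.5 * y) / 2    # argmin over x with y fixed: 2(x-1)+0.5y=0
        y = (-4 - 0.5 * x) / 2   # argmin over y with x fixed: 2(y+2)+0.5x=0
    return x, y
```

TECU's contribution is to replace these exact sub-problem solves with task-specific solvers (e.g., ADMM or a CNN) and to bound the error this substitution introduces.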
APA, Harvard, Vancouver, ISO, and other styles
45

Ali Abttan, Rana, Adnan Hasan Tawafan, and Samar Jaafar Ismael. "Economic dispatch by optimization techniques." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 3 (June 1, 2022): 2228. http://dx.doi.org/10.11591/ijece.v12i3.pp2228-2241.

Full text
Abstract:
This paper offers a solution strategy for the economic dispatch problem in electric power systems using the ant lion optimization algorithm (ALOA) and the bat algorithm (BA). In a power network, economic dispatch (ED) is the short-term calculation of the optimal outputs of all available power generation units to meet the required demand, while equality and inequality constraints must be satisfied at minimal fuel and carbon pollution costs. Two recent meta-heuristic approaches are introduced, the BA and the ALOA. The ALOA is a rigorous stochastic evolutionary computing strategy based on the behavior and intelligence of ant lions: it imitates the ant lions' hunting process, and a numerical description of this biological behavior is introduced for solving ED in the power framework. The algorithms are applied to two systems: a small-scale three-generator system and a large-scale six-generator system. Results, compared on the metrics of convergence rate, cost, and average run time, show that the ALOA and BA are suitable for economic dispatch studies, as the comparison with other algorithms makes clear. Both algorithms are tested on the IEEE 30-bus reliability test system.
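For contrast with the meta-heuristics, the classical equal-incremental-cost (lambda-iteration) solution of the quadratic-cost ED problem, ignoring generator limits, can be sketched as follows; the cost coefficients are illustrative, not the paper's test data:

```python
def economic_dispatch(units, demand, tol=1e-6):
    """units: list of (a, b, c) with cost C(P) = a + b*P + c*P**2.
    Bisect on the system incremental cost lambda until the outputs
    balance demand; at the optimum every unit runs at b + 2cP = lambda."""
    lo, hi = 0.0, 1000.0                  # bracket for lambda ($/MWh)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        total = sum((lam - b) / (2 * c) for _, b, c in units)
        if total < demand:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [(lam - b) / (2 * c) for _, b, c in units]

units = [(100, 7.0, 0.008), (120, 7.5, 0.010), (80, 8.0, 0.009)]
outputs = economic_dispatch(units, demand=600.0)   # MW per unit
```

Meta-heuristics such as ALOA and BA become attractive once valve-point effects, prohibited zones, or other non-convexities break this closed-form condition.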
APA, Harvard, Vancouver, ISO, and other styles
46

Hwang, Dong Hyun, Chang Yeop Han, Hyun Woo Oh, and Seung Eun Lee. "ASimOV: A Framework for Simulation and Optimization of an Embedded AI Accelerator." Micromachines 12, no. 7 (July 19, 2021): 838. http://dx.doi.org/10.3390/mi12070838.

Full text
Abstract:
Due to their computational complexity, artificial intelligence algorithms need an external computing device such as a graphics processing unit (GPU). To run artificial intelligence algorithms on embedded devices, many studies have proposed lightweight artificial intelligence algorithms and artificial intelligence accelerators. In this paper, we propose the ASimOV framework, which optimizes artificial intelligence algorithms and generates Verilog hardware description language (HDL) code for executing them on a field-programmable gate array (FPGA). To verify ASimOV, we explore the performance space of k-NN algorithms and generate Verilog HDL code to demonstrate a k-NN accelerator in FPGA. Our contribution is to provide the artificial intelligence algorithm as an end-to-end pipeline, ensure that it is optimized for a specific dataset through simulation, and generate an artificial intelligence accelerator in the end.
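A pure-Python reference of the k-NN classifier explored by the framework, of the kind one might use as a golden model when validating an FPGA design; the dataset and function names are illustrative:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); query: feature_vector.
    Returns the majority label among the k nearest training samples
    (squared Euclidean distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda s: dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), 'a'), ((0, 1), 'a'), ((5, 5), 'b'), ((6, 5), 'b')]
```

The performance space the paper explores (e.g., choice of k, distance metric, data width) maps directly onto the parameters of a sketch like this.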
APA, Harvard, Vancouver, ISO, and other styles
47

Xu, Cheng, and Tao Li. "Chemical Reaction Optimization for Task Mapping in Heterogeneous Embedded Multiprocessor Systems." Advanced Materials Research 712-715 (June 2013): 2604–10. http://dx.doi.org/10.4028/www.scientific.net/amr.712-715.2604.

Full text
Abstract:
Since different task mappings and schedules lead to different time and energy consumption on heterogeneous multiprocessor systems, appropriate task mapping and scheduling algorithms can save energy. In this paper, we propose a new method to solve the task mapping problem. The algorithm consists of two elements: an intelligent approach that assigns the execution order of tasks by task level, and an allocation algorithm based on the chemical-reaction-inspired metaheuristic Chemical Reaction Optimization (CRO) that maps processors to tasks. The results show that the method reduces more energy consumption in less time.
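A heavily simplified sketch of the CRO idea for task mapping: a "molecule" is a task-to-processor assignment, its potential energy is the cost to minimize, and kinetic energy allows temporarily accepting worse assignments. The move set (only on-wall collisions) and all parameters are our simplifications, not the paper's algorithm:

```python
import random

def cro_map(n_tasks, n_procs, cost, iters=2000, ke0=10.0, loss=0.9, seed=0):
    """Return the best task-to-processor mapping found and its cost."""
    rng = random.Random(seed)
    mol = [rng.randrange(n_procs) for _ in range(n_tasks)]
    pe, ke = cost(mol), ke0                 # potential / kinetic energy
    best, best_pe = mol[:], pe
    for _ in range(iters):
        new = mol[:]
        new[rng.randrange(n_tasks)] = rng.randrange(n_procs)  # on-wall collision
        new_pe = cost(new)
        if new_pe <= pe + ke:               # KE buffer allows uphill moves
            ke = (pe + ke - new_pe) * loss  # part of the surplus is lost
            mol, pe = new, new_pe
            if pe < best_pe:
                best, best_pe = mol[:], pe
    return best, best_pe
```

The full CRO also includes decomposition, synthesis, and inter-molecular collisions operating on a population of molecules; this sketch keeps only the energy-conservation mechanism that distinguishes CRO from plain local search.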
APA, Harvard, Vancouver, ISO, and other styles
48

Nogueira, Bruno, Paulo Maciel, Eduardo Tavares, Ricardo M. A. Silva, and Ermeson Andrade. "Multi-objective optimization of multimedia embedded systems using genetic algorithms and stochastic simulation." Soft Computing 21, no. 14 (February 11, 2016): 4141–58. http://dx.doi.org/10.1007/s00500-016-2061-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Anupriya, Koneru, Kurakula Harini, Kethe Balaji, and Karnati Geetha Sudha. "Spam Mail Detection Using Optimization Techniques." Ingénierie des systèmes d information 27, no. 1 (February 28, 2022): 157–63. http://dx.doi.org/10.18280/isi.270119.

Full text
Abstract:
Owing to the widespread availability of internet access, email correspondence is one of the most well-known, cost-effective, and convenient communication methods for users in the office and in business. Many people abuse this convenient mode of communication by spamming others with unsolicited bulk emails, using them to collect users' personal information for financial gain. A literature review is conducted to investigate the most effective strategies for achieving successful outcomes while working with various spam mail datasets. K-Nearest Neighbor, Support Vector Machine, Naive Bayes, Decision Tree, Random Forest, and Logistic Regression are all employed in the implementation of machine learning techniques. To make the classifiers more efficient, bio-inspired algorithms such as Bat and PSO are used. The accuracy of every classification algorithm, with and without optimization, is observed, and the results are compared on accuracy, f1-score, precision, and recall. This work is implemented in Python with a Tkinter GUI interface.
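One of the baseline classifiers compared in the paper, multinomial Naive Bayes, can be sketched from scratch with Laplace smoothing; the tiny corpus and function names are illustrative:

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (list_of_words, label). Returns a model for classify()."""
    labels = Counter(label for _, label in docs)   # class priors (counts)
    words = {lbl: Counter() for lbl in labels}     # per-class word counts
    for doc, lbl in docs:
        words[lbl].update(doc)
    vocab = {w for c in words.values() for w in c}
    return labels, words, vocab

def classify(model, doc):
    labels, words, vocab = model
    total = sum(labels.values())
    def log_post(lbl):
        denom = sum(words[lbl].values()) + len(vocab)
        return (math.log(labels[lbl] / total) +
                sum(math.log((words[lbl][w] + 1) / denom)  # Laplace smoothing
                    for w in doc if w in vocab))
    return max(labels, key=log_post)

docs = [("win cash prize now".split(), "spam"),
        ("free prize claim now".split(), "spam"),
        ("meeting agenda attached".split(), "ham"),
        ("project meeting notes".split(), "ham")]
model = train_nb(docs)
```

The bio-inspired optimizers mentioned above (Bat, PSO) are typically layered on top of such classifiers, e.g. for feature selection or hyperparameter tuning.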
APA, Harvard, Vancouver, ISO, and other styles
50

Shah, Abdul, Haidawati Nasir, Muhammad Fayaz, Adidah Lajis, and Asadullah Shah. "A Review on Energy Consumption Optimization Techniques in IoT Based Smart Building Environments." Information 10, no. 3 (March 8, 2019): 108. http://dx.doi.org/10.3390/info10030108.

Full text
Abstract:
In recent years, due to the unnecessary wastage of electrical energy in residential buildings, energy optimization and user comfort have gained vital importance. In the literature, various techniques have been proposed to address the energy optimization problem. The goal of each technique is to maintain a balance between user comfort and energy requirements, such that the user can achieve the desired comfort level with the minimum amount of energy consumption. Researchers have addressed the issue with the help of different optimization algorithms and variations in their parameters to reduce energy consumption. To the best of our knowledge, this problem remains unsolved due to its challenging nature. The gaps in the literature are due to advancements in technology, the drawbacks of optimization algorithms, and the introduction of new optimization algorithms. Further, many newly proposed optimization algorithms have produced better accuracy on benchmark instances but have not yet been applied to the optimization of energy consumption in smart homes. In this paper, we carry out a detailed literature review of the techniques used for the optimization of energy consumption and scheduling in smart homes. Detailed discussion is carried out on the different factors contributing towards thermal comfort, visual comfort, and air quality comfort. We also review the fog and edge computing techniques used in smart homes.
APA, Harvard, Vancouver, ISO, and other styles
