
Journal articles on the topic 'ZERO INITIALIZATION'



Consult the top 50 journal articles for your research on the topic 'ZERO INITIALIZATION.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Shiratsuchi, Hiroshi, Hiromu Gotanda, Katsuhiro Inoue, and Kousuke Kumamaru. "Studies on Effects of Initialization on Structure Formation and Generalization of Structural Learning with Forgetting." Journal of Advanced Computational Intelligence and Intelligent Informatics 8, no. 6 (November 20, 2004): 621–26. http://dx.doi.org/10.20965/jaciii.2004.p0621.

Full text
Abstract:
In this paper, our proposed initialization for multilayer neural networks (NN) is applied to structural learning with forgetting. The initialization consists of two steps: the weights of hidden units are initialized so that their hyperplanes pass through the center of gravity of the input pattern set, and the weights of output units are initialized to zero. Several simulations were performed to study how the initialization affects the structure formation of the NN. From the simulation results, it was confirmed that the initialization gives a better network structure and higher generalization ability.
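A minimal NumPy sketch of the two-step scheme described above, assuming the hidden biases are simply chosen so that each separating hyperplane passes through the centroid of the input patterns; the function name and random weight scale are illustrative, not taken from the paper:

```python
import numpy as np

def two_step_init(X, n_hidden, n_out, rng=np.random.default_rng(0)):
    """Hypothetical sketch of the two-step initialization:
    hidden hyperplanes pass through the input centroid, output weights start at zero."""
    n_in = X.shape[1]
    centroid = X.mean(axis=0)                         # center of gravity of the input pattern set
    W_hid = rng.normal(0.0, 0.5, size=(n_hidden, n_in))
    b_hid = -W_hid @ centroid                         # w . x + b = 0 at the centroid
    W_out = np.zeros((n_out, n_hidden))               # output-unit weights initialized to zero
    b_out = np.zeros(n_out)
    return W_hid, b_hid, W_out, b_out

X = np.array([[0.2, 1.0], [0.8, 0.4], [0.5, 0.7]])
W_hid, b_hid, W_out, b_out = two_step_init(X, n_hidden=4, n_out=1)
print(np.allclose(W_hid @ X.mean(axis=0) + b_hid, 0.0))  # True: hyperplanes pass through the centroid
```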
APA, Harvard, Vancouver, ISO, and other styles
2

Shiratsuchi, Hiroshi, Hiromu Gotanda, Katsuhiro Inoue, and Kousuke Kumamaru. "Effects of Initialization on Rule Extraction in Structural Learning." Journal of Advanced Computational Intelligence and Intelligent Informatics 12, no. 1 (January 20, 2008): 57–62. http://dx.doi.org/10.20965/jaciii.2008.p0057.

Full text
Abstract:
This paper studies how our previously proposed initialization affects the rule extraction of neural networks by structural learning with forgetting. The proposed initialization consists of two steps: (1) initializing the weights of hidden units so that their separation hyperplanes pass through the center of the input pattern set and (2) initializing those of output units to zero. From simulation results on Boolean function discovery problems with 5 and 7 inputs, it has been confirmed that the proposed initialization yields a simpler network structure and higher rule extraction ability than the conventional initialization, which assigns uniform random numbers to all the initial weights of the network.
APA, Harvard, Vancouver, ISO, and other styles
3

Danesh, Mohamad H. "Reducing Neural Network Parameter Initialization Into an SMT Problem (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15775–76. http://dx.doi.org/10.1609/aaai.v35i18.17884.

Full text
Abstract:
Training a neural network (NN) depends on multiple factors, including but not limited to the initial weights. In this paper, we focus on initializing deep NN parameters such that the network performs better compared to random or zero initialization. We do this by reducing the initialization process to an SMT problem. Previous works consider certain activation functions on small NNs; in contrast, the studied NN is a deep network with different activation functions. Our experiments show that the proposed approach for parameter initialization achieves better performance compared to randomly initialized networks.
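As a rough illustration of casting initialization as a satisfiability-modulo-theories problem, the sketch below asks Z3 for weights of a tiny 2-2-1 ReLU network that already classify a few toy samples; the network size, constraints, and data are invented for illustration and are not the reduction used in the paper:

```python
from z3 import Real, If, Solver, sat

def relu(e):
    return If(e > 0, e, 0)

# Toy labelled samples (XOR-like) that the initial weights should already separate.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w1 = [[Real(f"w1_{i}{j}") for j in range(2)] for i in range(2)]  # hidden-layer weights
b1 = [Real(f"b1_{i}") for i in range(2)]
w2 = [Real(f"w2_{i}") for i in range(2)]                          # output-layer weights
b2 = Real("b2")

solver = Solver()
for (x1, x2), label in samples:
    h = [relu(w1[i][0] * x1 + w1[i][1] * x2 + b1[i]) for i in range(2)]
    out = w2[0] * h[0] + w2[1] * h[1] + b2
    solver.add(out > 0.5 if label == 1 else out < 0.5)            # output on the correct side

if solver.check() == sat:
    model = solver.model()
    init_weights = {d.name(): model[d] for d in model.decls()}    # candidate initial parameters
    print(init_weights)
```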
APA, Harvard, Vancouver, ISO, and other styles
4

Xu, Xiaozeng, and Chuanjiang He. "Implicit Active Contour Model with Local and Global Intensity Fitting Energies." Mathematical Problems in Engineering 2013 (2013): 1–13. http://dx.doi.org/10.1155/2013/367086.

Full text
Abstract:
We propose a new active contour model which integrates a local intensity fitting (LIF) energy with an auxiliary global intensity fitting (GIF) energy. The LIF energy is responsible for attracting the contour toward object boundaries and is dominant near object boundaries, while the GIF energy incorporates global image information to improve the robustness to initialization of the contours. The proposed model not only can provide desirable segmentation results in the presence of intensity inhomogeneity but also allows for more flexible initialization of the contour compared to the RSF and LIF models, and we give a theoretical proof to compute a unique steady state regardless of the initialization; that is, the convergence of the zero-level line is irrespective of the initial function. This means that we can obtain the same zero-level line in the steady state, if we choose the initial function as a bounded function. In particular, our proposed model has the capability of detecting multiple objects or objects with interior holes or blurred edges.
APA, Harvard, Vancouver, ISO, and other styles
5

Ahn, Jaemyung, Jun Bang, and Sang-Il Lee. "Acceleration of Zero-Revolution Lambert’s Algorithms Using Table-Based Initialization." Journal of Guidance, Control, and Dynamics 38, no. 2 (February 2015): 335–42. http://dx.doi.org/10.2514/1.g000764.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yang, Fuxin, Chuanlei Zheng, Hui Li, Liang Li, Jie Zhang, and Lin Zhao. "Continuity Enhancement Method for Real-Time PPP Based on Zero-Baseline Constraint of Multi-Receiver." Remote Sensing 13, no. 4 (February 8, 2021): 605. http://dx.doi.org/10.3390/rs13040605.

Full text
Abstract:
Continuity is one of the metrics that characterize the required navigation performance of global navigation satellite system (GNSS)-based applications. Data outage due to receiver failure is one of the reasons for continuity loss. Although a multi-receiver configuration can maintain position solutions in case a receiver has data outage, the initialization of the receiver will also cause continuous high-precision positioning performance loss. To maintain continuous high-precision positioning performance of real-time precise point positioning (RT-PPP), we proposed a continuity enhancement method for RT-PPP based on zero-baseline constraint of multi-receiver. On the one hand, the mean time to repair (MTTR) of the multi-receiver configuration is improved to maintain continuous position solutions. On the other hand, the zero-baseline constraint of multi-receiver including between-satellite single-differenced (BSSD) ambiguities, zenith troposphere wet delay (ZWD), and their suitable stochastic models are constructed to achieve instantaneous initialization of back-up receiver. Through static and kinematic experiments based on real data, the effectiveness and robustness of proposed method are evaluated comprehensively. The experiment results show that the relationship including BSSD ambiguities and ZWD between receivers can be determined reliably based on zero-baseline constraint, and the instantaneous initialization can be achieved without high-precision positioning continuity loss in the multi-receiver RT-PPP processing.
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Shuiping, Shun Li, Tianrui Luan, Yixing Chen, and Suirong Li. "A Zero-injection Initialization Numerical Method for Ill-conditioned Power Flow." IOP Conference Series: Earth and Environmental Science 1050, no. 1 (July 1, 2022): 012009. http://dx.doi.org/10.1088/1755-1315/1050/1/012009.

Full text
Abstract:
Regarding the initial value sensitivity issues commonly found in Newton-like power flow calculation algorithms, a zero-injection initialization numerical method for ill-conditioned power flow is proposed in this paper, which improves the convergence of power flow calculation in ill-conditioned power systems. Based on the initial power flow provided by this method, the unbalanced active power of each bus (the slack bus excluded) relates to the net injection of active power of that bus, and the unbalanced reactive power of each PQ bus relates to the net injection of reactive power of that bus, thus avoiding significant power unbalance. The method put forward herein can provide a reasonable initial value for the power flow calculation so that difficulty in power flow convergence caused by an unreasonable initial power flow is effectively avoided. Newton's method and the optimal multiplier method are implemented on a small-scale ill-conditioned test system to verify the feasibility of the proposed zero-injection initialization numerical method.
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Chi-Chung, and Yi-Ting Liu. "Enhanced Ant Colony Optimization with Dynamic Mutation and Ad Hoc Initialization for Improving the Design of TSK-Type Fuzzy System." Computational Intelligence and Neuroscience 2018 (2018): 1–15. http://dx.doi.org/10.1155/2018/9485478.

Full text
Abstract:
This paper proposes an enhanced ant colony optimization with dynamic mutation and ad hoc initialization, ACODM-I, for improving the accuracy of Takagi-Sugeno-Kang- (TSK-) type fuzzy systems design. Instead of the generic initialization usually used in most population-based algorithms, ACODM-I proposes an ad hoc application-specific initialization for generating the initial ant solutions to improve the accuracy of fuzzy system design. The generated initial ant solutions are iteratively improved by a new approach incorporating the dynamic mutation into the existing continuous ACO (ACOR). The introduced dynamic mutation balances the exploration ability and convergence rate by providing more diverse search directions in the early stage of optimization process. Application examples of two zero-order TSK-type fuzzy systems for dynamic plant tracking control and one first-order TSK-type fuzzy system for the prediction of the chaotic time series have been simulated to validate the proposed algorithm. Performance comparisons with ACOR and different advanced algorithms or neural-fuzzy models verify the superiority of the proposed algorithm. The effects on the design accuracy and convergence rate yielded by the proposed initialization and introduced dynamic mutation have also been discussed and verified in the simulations.
APA, Harvard, Vancouver, ISO, and other styles
9

Eng, Wei Yong, Yang Lang Chang, Tien Sze Lim, and Voon Chet Koo. "A Dense Optical Flow Field Estimation with Variational Refinement." Journal of Engineering Technology and Applied Physics 1, no. 2 (December 17, 2019): 10–13. http://dx.doi.org/10.33093/jetap.2019.1.2.3.

Full text
Abstract:
Optical flow has long been a focus of research in the computer vision community, and researchers have produced an extensive body of work on optical flow estimation. Among the published works, a notable approach using variational energy minimization has been a baseline of optical flow estimation for a long time. Variational optical flow optimization solves for an approximate global minimum of a well-defined nonlinear Markov energy formulation. It works by first linearizing the energy model and then using a numerical method, specifically the successive over-relaxation (SOR) method, to solve the resulting linear model. An initialization scheme is required for the optical flow field in this iterative optimization method. In the original work, a zero initialization is proposed, and it works well in various environments with photometric and geometric distortion. In this work, we have experimented with different flow field initialization schemes under various environment settings. We found that variational refinement with a good initial flow estimate from state-of-the-art optical flow algorithms can further improve its accuracy.
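The effect of the flow-field initialization can be tried out with OpenCV, whose Farneback solver accepts a user-supplied initial flow through the OPTFLOW_USE_INITIAL_FLOW flag; this is a generic stand-in for the variational SOR refinement in the paper, and the frame file names are placeholders:

```python
import cv2
import numpy as np

# Two consecutive grayscale frames (placeholder file names).
prev_img = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Zero initialization of the dense flow field, as in the original variational scheme.
# Positional arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
zero_flow = np.zeros((*prev_img.shape, 2), dtype=np.float32)
flow_from_zero = cv2.calcOpticalFlowFarneback(
    prev_img, next_img, zero_flow, 0.5, 3, 15, 3, 5, 1.2, cv2.OPTFLOW_USE_INITIAL_FLOW)

# Refinement started from a better initial estimate (here simply a first Farneback pass,
# standing in for a state-of-the-art flow estimate).
initial_guess = cv2.calcOpticalFlowFarneback(
    prev_img, next_img, None, 0.5, 3, 15, 3, 5, 1.2, 0)
flow_refined = cv2.calcOpticalFlowFarneback(
    prev_img, next_img, initial_guess, 0.5, 3, 15, 3, 5, 1.2, cv2.OPTFLOW_USE_INITIAL_FLOW)
```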
APA, Harvard, Vancouver, ISO, and other styles
10

Yong Eng, Wei, Yang Lang Chang, Tien Sze Lim, and Voon Chet Koo. "A Dense Optical Flow Field Estimation with Variational Refinement." Journal of Engineering Technology and Applied Physics 1, no. 2 (December 17, 2019): 10–13. http://dx.doi.org/10.33093/jetap.2019.1.2.30.

Full text
Abstract:
Optical flow has long been a focus of research in the computer vision community, and researchers have produced an extensive body of work on optical flow estimation. Among the published works, a notable approach using variational energy minimization has been a baseline of optical flow estimation for a long time. Variational optical flow optimization solves for an approximate global minimum of a well-defined non-linear Markov energy formulation. It works by first linearizing the energy model and then using a numerical method, specifically the successive over-relaxation (SOR) method, to solve the resulting linear model. An initialization scheme is required for the optical flow field in this iterative optimization method. In the original work, a zero initialization is proposed, and it works well in various environments with photometric and geometric distortion. In this work, we have experimented with different flow field initialization schemes under various environment settings. We found that variational refinement with a good initial flow estimate from state-of-the-art optical flow algorithms can further improve its accuracy.
APA, Harvard, Vancouver, ISO, and other styles
11

Edström, K., and S. T. Glad. "Algorithmic, physically based mode initialization when simulating hybrid systems." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 216, no. 1 (February 1, 2002): 65–72. http://dx.doi.org/10.1243/0959651021541435.

Full text
Abstract:
In the simulation of hybrid systems, discontinuities can appear at mode changes. An algorithm is presented that gives initial values for the continuous state variables in a new mode. The algorithm is based on a switched bond graph representation of the system, and it handles discontinuities introduced by a changed number of state variables at a mode change. The algorithm is obtained by integrating the bond graph relations over the mode change and assuming that the physical variables are bounded. This gives a relation between the variables before and after the mode change. It is proved here that the equations for the new initial conditions are solvable. The algorithm is related to a singular perturbation theory by replacing the discontinuity by a fast continuous change. The action is considered of a single switch and the corresponding continuous change, tuned by a single parameter. By letting this parameter tend to zero, the same initial state values are achieved as those derived by the presented algorithm. The algorithm is also related to physical principles such as charge conservation.
APA, Harvard, Vancouver, ISO, and other styles
12

Raca, Dejan, Michael C. Harke, and Robert D. Lorenz. "Robust Magnet Polarity Estimation for Initialization of PM Synchronous Machines With Near-Zero Saliency." IEEE Transactions on Industry Applications 44, no. 4 (2008): 1199–209. http://dx.doi.org/10.1109/tia.2008.926195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

AL-AHMAD, HUSSIAN, and GORDON B. LOCKHART. "Design and analysis of recursive filters with optimum clutter rejection using zero frequency initialization." International Journal of Electronics 75, no. 6 (December 1993): 1099–109. http://dx.doi.org/10.1080/00207219308907185.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

SUN, MING BO, XUE SONG BAI, WEI DONG LIU, JIAN HAN LIANG, and ZHEN GUO WANG. "A MODIFIED SUB-CELL-FIX METHOD FOR RE-INITIALIZATION OF LEVEL-SET DISTANCE FUNCTION AND ITS DEPENDENCE TESTS ON GRID STRETCHING." Modern Physics Letters B 24, no. 15 (June 20, 2010): 1615–29. http://dx.doi.org/10.1142/s0217984910024018.

Full text
Abstract:
The sub-cell-fix (SCF) method proposed by Russo and Smereka computes the distance function of the cells adjacent to the zero level-set without disturbing the original zero level-set. A modified sub-cell-fix scheme independent of local curvature is developed in this paper, which makes use of a combination of the points adjacent to zero level-set surfaces and preserves the interface with second-order accuracy. The new sub-cell-fix scheme is capable of handling large local curvature, and as a result it demonstrates satisfactory performance on several challenging test cases. The limitations of the modified scheme on stretched grids are tested, and it is found that a highly stretched grid causes large numerical errors and needs further assessment and modification.
APA, Harvard, Vancouver, ISO, and other styles
15

Hsu, Jerry, Tongtong Wang, Zherong Pan, Xifeng Gao, Cem Yuksel, and Kui Wu. "Sag-Free Initialization for Strand-Based Hybrid Hair Simulation." ACM Transactions on Graphics 42, no. 4 (July 26, 2023): 1–14. http://dx.doi.org/10.1145/3592143.

Full text
Abstract:
Lagrangian/Eulerian hybrid strand-based hair simulation techniques have quickly become a popular approach in VFX and real-time graphics applications. With Lagrangian hair dynamics, the inter-hair contacts are resolved in the Eulerian grid using the continuum method, i.e., the MPM scheme with the granular Drucker-Prager rheology, to avoid expensive collision detection and handling. This fuzzy collision handling makes the authoring process significantly easier. However, although current hair grooming tools provide a wide range of strand-based modeling tools for this simulation approach, the crucial sag-free initialization functionality remains often ignored. Thus, when the simulation starts, gravity would cause any artistic hairstyle to sag and deform into unintended and undesirable shapes. This paper proposes a novel four-stage sag-free initialization framework to solve stable quasistatic configurations for hybrid strand-based hair dynamic systems. These four stages are split into two global-local pairs. The first one ensures static equilibrium at every Eulerian grid node with additional inequality constraints to prevent stress from exiting the yielding surface. We then derive several associated closed-form solutions in the local stage to compute segment rest lengths, orientations, and particle deformation gradients in parallel. The second global-local step solves along each hair strand to ensure all the bend and twist constraints produce zero net torque on every hair segment, followed by a local step to adjust the rest Darboux vectors to a unit quaternion. We also introduce an essential modification for the Darboux vector to eliminate the ambiguity of the Cosserat rod rest pose in both initialization and simulation. We evaluate our method on a wide range of hairstyles, and our approach can only take a few seconds to minutes to get the rest quasistatic configurations for hundreds of hair strands. Our results show that our method successfully prevents sagging and has minimal impact on the hair motion during simulation.
APA, Harvard, Vancouver, ISO, and other styles
16

Cadiz, Fabian, Abdelhak Djeffal, Delphine Lagarde, Andrea Balocchi, Bingshan Tao, Bo Xu, Shiheng Liang, et al. "Electrical Initialization of Electron and Nuclear Spins in a Single Quantum Dot at Zero Magnetic Field." Nano Letters 18, no. 4 (March 8, 2018): 2381–86. http://dx.doi.org/10.1021/acs.nanolett.7b05351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Labiod, Salim, Hamid Boubertakh, and Thierry Marie Guerra. "Indirect Adaptive Fuzzy Control for a Class of Uncertain Nonlinear Systems with Unknown Control Direction." International Journal of Fuzzy System Applications 1, no. 4 (October 2011): 1–17. http://dx.doi.org/10.4018/ijfsa.2011100101.

Full text
Abstract:
In this paper, the authors propose two indirect adaptive fuzzy control schemes for a class of uncertain continuous-time single-input single-output (SISO) nonlinear dynamic systems with known and unknown control direction. Within these schemes, fuzzy systems are used to approximate unknown nonlinear functions and the Nussbaum gain technique is used to deal with the unknown control direction. This paper first presents a singularity-free indirect adaptive control algorithm for nonlinear systems with known control direction, and then this control algorithm is generalized for the case of unknown control direction. The proposed adaptive controllers are free from singularity, allow initialization to zero of all adjustable parameters of the used fuzzy systems, and guarantee asymptotic convergence of the tracking error to zero. Simulations performed on a nonlinear system are given to show the feasibility of the proposed adaptive control schemes.
APA, Harvard, Vancouver, ISO, and other styles
18

Casas, Raúl A., C. Richard Johnson, Rodney A. Kennedy, Zhi Ding, and Roberto Malamut. "Blind adaptive decision feedback equalization: a class of channels resulting in ill-convergence from a zero initialization." International Journal of Adaptive Control and Signal Processing 12, no. 2 (March 1998): 173–93. http://dx.doi.org/10.1002/(sici)1099-1115(199803)12:2<173::aid-acs486>3.0.co;2-v.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Viúdez, Álvaro, and David G. Dritschel. "Dynamic Potential Vorticity Initialization and the Diagnosis of Mesoscale Motion." Journal of Physical Oceanography 34, no. 12 (December 1, 2004): 2761–73. http://dx.doi.org/10.1175/jpo2648.1.

Full text
Abstract:
A new method for diagnosing the balanced three-dimensional velocity from a given density field in mesoscale oceanic flows is described. The method is referred to as dynamic potential vorticity initialization (PVI) and is based on the idea of letting the inertia–gravity waves produced by the initially imbalanced mass density and velocity fields develop and evolve in time while the balanced components of these fields adjust during the diagnostic period to a prescribed initial potential vorticity (PV) field. Technically this is achieved first by calculating the prescribed PV field from given density and geostrophic velocity fields; then the PV anomaly is multiplied by a simple time-dependent ramp function, initially zero but tending to unity over the diagnostic period. In this way, the PV anomaly builds up to the prescribed anomaly. During this time, the full three-dimensional primitive equations—except for the PV equation—are integrated for several inertial periods. At the end of the diagnostic period the density and velocity fields are found to adjust to the prescribed PV field and the approximate balanced vortical motion is obtained. This adjustment involves the generation and propagation of fast, small-amplitude inertia–gravity waves, which appear to have negligible impact on the final near-balanced motion. Several practical applications of this method are illustrated. The highly nonlinear, complex breakup of baroclinically unstable currents into eddies, fronts, and filamentary structures is examined. The capability of the method to generate the balanced three-dimensional motion is measured by analyzing the ageostrophic horizontal and vertical velocity—the latter is the velocity component most sensitive to initialization, and one for which a quasigeostrophic diagnostic solution is available for comparison purposes. The authors find that the diagnosed fields are closer to the actual fields than are either the geostrophic or the quasigeostrophic approximations. Dynamic PV initialization thus appears to be a promising way of improving the diagnosis of balanced mesoscale motions.
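A minimal sketch of the time-dependent ramp described above, assuming a simple linear ramp over the diagnostic period; the authors' actual functional form and period are not specified here:

```python
import numpy as np

def ramp(t, t_diag):
    """Ramp function: zero at t = 0, tending to unity by the end of the diagnostic period."""
    return np.clip(t / t_diag, 0.0, 1.0)

t_diag = 5.0                                   # diagnostic period (illustrative units)
t = np.linspace(0.0, 8.0, 9)
pv_anomaly_prescribed = 2.5e-5                 # placeholder prescribed PV anomaly
pv_anomaly_applied = ramp(t, t_diag) * pv_anomaly_prescribed
print(pv_anomaly_applied)                      # builds up gradually to the prescribed anomaly
```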
APA, Harvard, Vancouver, ISO, and other styles
20

Kanza, Rogeany, Yu Zhao, Zhilin Huang, Chenyu Huang, and Zhuoming Li. "Enhancement: SiamFC Tracker Algorithm Performance Based on Convolutional Hyperparameters Optimization and Low Pass Filter." Mathematics 10, no. 9 (May 3, 2022): 1527. http://dx.doi.org/10.3390/math10091527.

Full text
Abstract:
Over the past few decades, convolutional neural networks (CNNs) have achieved outstanding results in addressing a broad scope of computer vision problems. Despite these improvements, fully convolutional Siamese neural networks (FCSNN) still hardly adapt to complex scenes, such as appearance change, scale change, similar object interference, etc. The present study focuses on an enhanced FCSNN based on convolutional block hyperparameters optimization, a new activation function (ModReLU) and a Gaussian low pass filter. The optimization of hyperparameters is an important task, as it has a crucial influence on tracking performance, especially when it comes to the initialization of weights and bias. They have to work efficiently with the following activation function layer. Inadequate initialization can result in vanishing or exploding gradients. In the first method, we propose an optimization strategy for initializing weights and bias in the convolutional block to ameliorate the learning of features so that each neuron learns as much as possible. Next, the activation function normalizes the output. We implement the convolutional block hyperparameters optimization by setting the convolutional weights initialization to constant, the bias initialization to zero and the Leaky ReLU activation function at the output. In the second method, we propose a new activation, ModReLU, in the activation layer of the CNN. Additionally, we introduce a Gaussian low pass filter to minimize image noise and improve the structures of images at distinct scales. Moreover, we add a pixel-domain-based color adjustment implementation to enhance the capacity of the proposed strategies. The proposed implementations better handle rotation, motion, occlusion and appearance change problems and improve tracking speed. Our experimental results clearly show a significant improvement in the overall performance compared to the original SiamFC tracker. The first proposed technique of this work surpasses the original fully convolutional Siamese networks (SiamFC) on the VOT 2016 dataset with an increase of 15.42% in precision, 16.79% in AUPC and 15.93% in IOU compared to the original SiamFC. Our second proposed technique also reveals remarkable advances over the original SiamFC with an 18.07% precision increment, a 17.01% AUPC improvement and an increase of 15.87% in IOU. We evaluate our methods on the Visual Object Tracking (VOT) Challenge 2016 dataset, and they both outperform the original SiamFC tracker and many other top performers.
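The convolutional-block setup described above (constant weight initialization, zero bias, Leaky ReLU at the output) can be sketched in PyTorch; the constant value, channel counts, slope and input size below are illustrative rather than the ones used in the paper:

```python
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.LeakyReLU(negative_slope=0.01),      # Leaky ReLU at the block output
)

conv = block[0]
nn.init.constant_(conv.weight, 0.1)         # constant weight initialization (value illustrative)
nn.init.zeros_(conv.bias)                   # bias initialized to zero

x = torch.randn(1, 3, 127, 127)             # illustrative exemplar-sized input
print(block(x).shape)                       # torch.Size([1, 64, 127, 127])
```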
APA, Harvard, Vancouver, ISO, and other styles
21

Hinze, Matthias, André Schmidt, and Remco I. Leine. "Numerical solution of fractional-order ordinary differential equations using the reformulated infinite state representation." Fractional Calculus and Applied Analysis 22, no. 5 (October 25, 2019): 1321–50. http://dx.doi.org/10.1515/fca-2019-0070.

Full text
Abstract:
In this paper, we propose a novel approach for the numerical solution of fractional-order ordinary differential equations. The method is based on the infinite state representation of the Caputo fractional differential operator, in which the entire history of the state of the system is considered for correct initialization. The infinite state representation contains an improper integral with respect to frequency, expressing the history dependence of the fractional derivative. The integral generally has a weakly singular kernel, which may lead to problems in numerical computations. A reformulation of the integral generates a kernel that decays to zero at both ends of the integration interval leading to better convergence properties of the related numerical scheme. We compare our method to other schemes by considering several benchmark problems.
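For orientation, the unreformulated infinite-state (diffusive) representation of a Caputo derivative of order α in (0, 1) has the standard form sketched below; the weakly singular frequency kernel in this form is what the authors' reformulation addresses, and their modified kernel is not reproduced here:

```latex
{}^{C}\!D^{\alpha}x(t) = \int_{0}^{\infty} \mu_{\alpha}(\omega)\, z(\omega,t)\, \mathrm{d}\omega ,
\qquad
\frac{\partial z(\omega,t)}{\partial t} = -\omega\, z(\omega,t) + \dot{x}(t) ,
\qquad z(\omega,0) = 0 ,
\qquad
\mu_{\alpha}(\omega) = \frac{\sin(\alpha\pi)}{\pi}\, \omega^{\alpha-1} .
```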
APA, Harvard, Vancouver, ISO, and other styles
22

Chen, Chi-Chung. "Optimization of Zero-Order TSK-Type Fuzzy System Using Enhanced Particle Swarm Optimizer with Dynamic Mutation and Special Initialization." International Journal of Fuzzy Systems 20, no. 5 (January 27, 2018): 1685–98. http://dx.doi.org/10.1007/s40815-018-0453-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Almaleh, Abdulaziz, Reem Almushabb, and Rahaf Ogran. "Malware API Calls Detection Using Hybrid Logistic Regression and RNN Model." Applied Sciences 13, no. 9 (April 27, 2023): 5439. http://dx.doi.org/10.3390/app13095439.

Full text
Abstract:
Behavioral malware analysis is a powerful technique used against zero-day and obfuscated malware. Additionally referred to as dynamic malware analysis, this approach employs various methods to achieve enhanced detection. One such method involves using machine learning and deep learning algorithms to learn from the behavior of malware. However, the task of weight initialization in neural networks remains an active area of research. In this paper, we present a novel hybrid model that utilizes both machine learning and deep learning algorithms to detect malware across various categories. The proposed model achieves this by recognizing the malicious functions performed by the malware, which can be inferred from its API call sequences. Failure to detect these malware instances can result in severe cyberattacks, which pose a significant threat to the confidentiality, privacy, and availability of systems. We rely on a secondary dataset containing API call sequences, and we apply logistic regression to obtain the initial weight that serves as input to the neural network. By utilizing this hybrid approach, our research aims to address the challenges associated with traditional weight initialization techniques and to improve the accuracy and efficiency of malware detection based on API calls. The integration of both machine learning and deep learning algorithms allows the proposed model to capitalize on the strengths of each approach, potentially leading to a more robust and versatile solution to malware detection. Moreover, our research contributes to the ongoing efforts in the field of neural networks, by offering a novel perspective on weight initialization techniques and their impact on the performance of neural networks in the context of behavioral malware analysis. Experimental results using a balanced dataset showed 83% accuracy and a 0.44 loss, which outperformed the baseline model in terms of the minimum loss. The imbalanced dataset’s accuracy was 98%, and the loss was 0.10, which exceeded the state-of-the-art model’s accuracy. This demonstrates how well the suggested model can handle malware classification.
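A minimal sketch of the hybrid initialization idea, assuming the fitted logistic-regression coefficients are copied into the first linear layer of a small network; the feature matrix and layer sizes are placeholders, and the paper's actual RNN architecture is not reproduced:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

# Placeholder features: e.g. bag-of-API-call counts per sample (1000 samples, 300 API features).
rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(1000, 300)).astype(np.float32)
y = rng.integers(0, 2, size=1000)

# Step 1: logistic regression provides the initial weights.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Step 2: copy its coefficients and intercept into the network's first layer.
model = nn.Sequential(nn.Linear(300, 1), nn.Sigmoid())
with torch.no_grad():
    model[0].weight.copy_(torch.from_numpy(clf.coef_.astype(np.float32)))     # shape (1, 300)
    model[0].bias.copy_(torch.from_numpy(clf.intercept_.astype(np.float32)))  # shape (1,)
```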
APA, Harvard, Vancouver, ISO, and other styles
24

Gao, Hongyang, Lei Cai, and Shuiwang Ji. "Adaptive Convolutional ReLUs." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3914–21. http://dx.doi.org/10.1609/aaai.v34i04.5805.

Full text
Abstract:
Rectified linear units (ReLUs) are currently the most popular activation function used in neural networks. Although ReLUs can solve the gradient vanishing problem and accelerate training convergence, they suffer from the dying ReLU problem, in which some neurons are never activated if the weights are not updated properly. In this work, we propose a novel activation function, known as the adaptive convolutional ReLU (ConvReLU), that can better mimic brain neuron activation behaviors and overcome the dying ReLU problem. With our novel parameter sharing scheme, ConvReLUs can be applied to convolution layers, allowing each input neuron to be activated by different trainable thresholds without involving a large number of extra parameters. We employ the zero initialization scheme in ConvReLU to encourage trainable thresholds to be close to zero. Finally, we develop a partial replacement strategy that only replaces the ReLUs in the early layers of the network. This resolves the dying ReLU problem and retains sparse representations for linear classifiers. Experimental results demonstrate that our proposed ConvReLU has consistently better performance compared to ReLU, LeakyReLU, and PReLU. In addition, the partial replacement strategy is shown to be effective not only for our ConvReLU but also for LeakyReLU and PReLU.
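One plausible reading of the zero-initialized, per-channel trainable thresholds is sketched below; the exact activation form and parameter-sharing scheme of ConvReLU in the paper may differ from this simplified module:

```python
import torch
import torch.nn as nn

class ThresholdReLU(nn.Module):
    """Simplified ConvReLU-style activation: one trainable threshold per channel,
    zero-initialized so training starts out identical to a plain ReLU."""
    def __init__(self, channels):
        super().__init__()
        self.threshold = nn.Parameter(torch.zeros(channels))   # zero initialization scheme

    def forward(self, x):                                       # x: (N, C, H, W)
        t = self.threshold.view(1, -1, 1, 1)
        return torch.where(x > t, x, torch.zeros_like(x))       # pass values above the threshold

act = ThresholdReLU(channels=16)
y = act(torch.randn(2, 16, 8, 8))
print(y.shape)   # torch.Size([2, 16, 8, 8])
```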
APA, Harvard, Vancouver, ISO, and other styles
25

Cao, C., and N. Hovakimyan. "ℒ1 adaptive controller for systems with unknown time-varying parameters and disturbances in the presence of non-zero trajectory initialization error." International Journal of Control 81, no. 7 (July 2008): 1147–61. http://dx.doi.org/10.1080/00207170701670939.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Yu, Hai Ying, Bao Feng Zhou, Wen Xiang Jiang, Cheng Yang, Quan Cai Xie, Ping Yu Tan, Si Ming Xu, and Xiao Fen Zhao. "Data Processing and Preliminary Analysis of Strong Motion Records from the Ms7.0 Lushan, China Earthquake." Applied Mechanics and Materials 580-583 (July 2014): 1528–32. http://dx.doi.org/10.4028/www.scientific.net/amm.580-583.1528.

Full text
Abstract:
At 8:02 a.m. Beijing time on April 20, 2013, an Ms 7.0 earthquake shook the city of Ya’an, China. Up to 24:00 on 20 July 2013, a total of 339 strong-motion records from 113 free-field stations, 21 records from 1 strong-motion structure array, and 1123 groups of aftershock records had been collected, processed, and analyzed preliminarily by the China Strong Motion Network Center (CSMNC) in this paper. More than 26 stations are located within the Longmenshan fault zone, and the peak ground acceleration (PGA) of all their records is larger than 50 gal. The PGA of the Baoxingdiban station reaches 1005.4 gal after zero-line initialization, the first record to break through 1 g in mainland China. Based on the mainshock records, shakemaps of those PGAs are plotted and attenuation relationships of the PGAs are presented in this article.
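Zero-line initialization here amounts to baseline correction of the raw acceleration trace before computing PGA; a minimal sketch, assuming the mean of the pre-event portion is taken as the zero line (the actual CSMNC processing chain is considerably more elaborate):

```python
import numpy as np

def zero_line_correct(acc, pre_event_samples):
    """Shift the record so the pre-event portion averages to zero (the 'zero line')."""
    return acc - acc[:pre_event_samples].mean()

# Synthetic record in gal: a spurious 3 gal offset plus a burst of strong shaking.
rng = np.random.default_rng(1)
acc = 3.0 + rng.normal(0.0, 0.5, 2000)
acc[500:1500] += 100.0 * np.sin(np.linspace(0.0, 60.0, 1000))

corrected = zero_line_correct(acc, pre_event_samples=400)
print(round(corrected[:400].mean(), 3))    # ~0.0: baseline removed
print(round(np.abs(corrected).max(), 1))   # PGA of the corrected record
```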
APA, Harvard, Vancouver, ISO, and other styles
27

Vecchi, Gabriel A., Rym Msadek, Whit Anderson, You-Soon Chang, Thomas Delworth, Keith Dixon, Rich Gudgel, et al. "Multiyear Predictions of North Atlantic Hurricane Frequency: Promise and Limitations." Journal of Climate 26, no. 15 (July 26, 2013): 5337–57. http://dx.doi.org/10.1175/jcli-d-12-00464.1.

Full text
Abstract:
Retrospective predictions of multiyear North Atlantic Ocean hurricane frequency are explored by applying a hybrid statistical–dynamical forecast system to initialized and noninitialized multiyear forecasts of tropical Atlantic and tropical-mean sea surface temperatures (SSTs) from two global climate model forecast systems. By accounting for impacts of initialization and radiative forcing, retrospective predictions of 5- and 9-yr mean tropical Atlantic hurricane frequency show significant correlations relative to a null hypothesis of zero correlation. The retrospective correlations are increased in a two-model average forecast and by using a lagged-ensemble approach, with the two-model ensemble decadal forecasts of hurricane frequency over 1961–2011 yielding correlation coefficients that approach 0.9. These encouraging retrospective multiyear hurricane predictions, however, should be interpreted with care: although initialized forecasts have higher nominal skill than uninitialized ones, the relatively short record and large autocorrelation of the time series limit confidence in distinguishing between the skill caused by external forcing and that added by initialization. The nominal increase in correlation in the initialized forecasts relative to the uninitialized experiments is caused by improved representation of the multiyear tropical Atlantic SST anomalies. The skill in the initialized forecasts comes in large part from the persistence of a mid-1990s shift by the initialized forecasts, rather than from predicting its evolution. Predicting shifts like that observed in 1994/95 remains a critical issue for the success of multiyear forecasts of Atlantic hurricane frequency. The retrospective forecasts highlight the possibility that changes in the observing system impact forecast performance.
APA, Harvard, Vancouver, ISO, and other styles
28

Suzuki, Yuta, Michael E. Hahn, and Yasushi Enomoto. "Estimation of Foot Trajectory and Stride Length during Level Ground Running Using Foot-Mounted Inertial Measurement Units." Sensors 22, no. 19 (September 20, 2022): 7129. http://dx.doi.org/10.3390/s22197129.

Full text
Abstract:
Zero-velocity assumption has been used for estimation of foot trajectory and stride length during running from the data of foot-mounted inertial measurement units (IMUs). Although the assumption provides a reasonable initialization for foot trajectory and stride length estimation, the other source of errors related to the IMU’s orientation still remains. The purpose of this study was to develop an improved foot trajectory and stride length estimation method for the level ground running based on the displacement of the foot. Seventy-nine runners performed running trials at 5 different paces and their running motions were captured using a motion capture system. The accelerations and angular velocities of left and right feet were measured with two IMUs mounted on the dorsum of each foot. In this study, foot trajectory and stride length were estimated using zero-velocity assumption with IMU data, and the orientation of IMU was estimated to calculate the mediolateral and vertical distance of the foot between two consecutive midstance events. Calculated foot trajectory and stride length were compared with motion capture data. The results show that the method used in this study can provide accurate estimation of foot trajectory and stride length for level ground running across a range of running speeds.
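A one-dimensional sketch of how the zero-velocity assumption is typically applied: integrate acceleration over one stride and remove the linear velocity drift so velocity is zero at the two bounding midstance events; orientation estimation, gravity removal, and the 3-D case handled in the paper are omitted, and the synthetic signal is invented:

```python
import numpy as np

def stride_displacement(acc, dt):
    """Integrate acceleration over one stride, forcing zero velocity at both
    bounding midstance events (zero-velocity update) before integrating again."""
    vel = np.cumsum(acc) * dt
    vel -= np.linspace(0.0, vel[-1], len(vel))    # remove linear drift: velocity ~ 0 at both ends
    pos = np.cumsum(vel) * dt
    return pos[-1]                                # 1-D stride length estimate

dt = 0.01                                         # 100 Hz IMU sampling (illustrative)
t = np.arange(0.0, 0.7, dt)                       # one ~0.7 s stride
acc = 18.0 * np.sin(2.0 * np.pi * t / 0.7) + 0.3  # synthetic forward acceleration plus sensor bias
print(round(stride_displacement(acc, dt), 3))     # the constant bias is largely removed by the ZUPT
```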
APA, Harvard, Vancouver, ISO, and other styles
29

HINDMARSH, RICHARD C. A. "Normal modes of an ice sheet." Journal of Fluid Mechanics 335 (March 25, 1997): 393–413. http://dx.doi.org/10.1017/s0022112096004612.

Full text
Abstract:
A linearized perturbation about the Vialov–Nye fixed-span solution for a steady-state ice sheet yields a Sturm-Liouville problem. The numerical eigenvalue problem is solved and the resulting normal modes are used to compute Green's and influence functions for perturbations to the accumulation rate, the rate factor and for long-wavelength basal topography. The eigenvalue for the slowest mode is approximately the same as that predicted by the zero-dimensional theory. It is found that the sensitivity of the steady profile to accumulation is greatest in the central area of the ice sheet, while the sensitivity to rate factor is greatest near the margin. The antisymmetric perturbation provides information about the relaxation time for divide motion and spatial variation in the sensitivity of divide deviation from the ice-sheet centre to accumulation rate variations. The use of the method for model initialization is considered. Forcing deviations of 30% give relative errors in the perturbation of about 10%.
APA, Harvard, Vancouver, ISO, and other styles
30

Zhang, J., A. D. Liu, C. Zhou, G. Zhuang, W. X. Ding, G. H. Hu, G. S. Xu, et al. "Determination of the radial position of zero density for profile reflectometry on experimental advanced superconducting tokamak." Journal of Instrumentation 18, no. 01 (January 1, 2023): T01001. http://dx.doi.org/10.1088/1748-0221/18/01/t01001.

Full text
Abstract:
The Experimental Advanced Superconducting Tokamak (EAST) reflectometry is a Frequency-Modulated Continuous-Wave monostatic system with transmission lines similar to the ITER reflectometer design. One of the most significant and common problems for reflectometry to reconstruct the density profile is the determination of initialization, i.e. the zero density position (R_start), which could be determined by the extraordinary mode (X-mode) reflectometry. The main source of noise still comes from the wave scattering of plasmas despite the sweeping period of around 10 microseconds. It is found that to reduce the random noise of R_start, averaging over 25 periods is the optimal solution. During L-mode discharges with only lower hybrid wave (LHW) heating, R_start would move outwards and its fluctuation and the high frequency components of turbulence around R_start would be obviously increased when the 2.45 GHz system is switched on, while no obvious change is observed in the 4.6 GHz case. These phenomena are consistent with the fact that 2.45 GHz LHW has less current drive ability than 4.6 GHz LHW on EAST. During ELMy H-mode, the peak of R_start is consistent with the deuterium signal and the maximum displacement is about 3 cm. The comparison of density profiles from reflectometry and Lithium beam emission spectroscopy (Li-BES) suggests that setting the density at R_start to (4 ± 2) × 10^17 m^-3 is much better than zero, while this value is somewhat empirical and closely related to the amplitude threshold for determining the first probing frequency corresponding to R_start.
APA, Harvard, Vancouver, ISO, and other styles
31

Xu, Lian Hu, Yi Bao Yuan, and Wei Ying Piao. "A Novel Method of Establishing “Datum Reference” for the Coaxiality Measurements Using the Multi-Section and Multi-Probe Gauges." Advanced Materials Research 317-319 (August 2011): 1759–68. http://dx.doi.org/10.4028/www.scientific.net/amr.317-319.1759.

Full text
Abstract:
The electronic or pneumatic multi-section and multi-probe gauges are widely used for diameter and coaxiality measurements due to their high measurement efficiency in workshops. However, their measurement accuracy is determined mainly by the manufacturing errors of the assembled coaxiality master; therefore, how to establish the coaxiality measurement datum reference is the key technology. Physical coaxiality masters cannot realistically serve as an ideal "zero error datum reference": the higher the manufacturing accuracy of the masters, the higher their manufacturing costs. A novel mathematical method based on the error separation principle was proposed in order to separate the manufacturing errors of the master. The basic principle is that the eccentric error of the coaxiality master can be expressed as a first harmonic function, and an ideal zero-error datum reference can be established mathematically from two sampling operations with a phase difference of 180° of the coaxiality master during gauge initialization. This method can be called a "mathematical datum reference" for coaxiality measurement. Experimental results indicate that the coaxiality measurement results of the multi-section and multi-probe gauge by the novel mathematical method coincide with those of the three-coordinate measuring machine, and the maximum difference between the two is about 0.0014 mm. The new coaxiality measurement principle can separate the datum error of the coaxiality master theoretically and can greatly improve the coaxiality measurement accuracy with a common-accuracy coaxiality master.
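The central claim, that the master's eccentricity enters as a first harmonic and cancels when two samplings taken 180° apart are averaged, can be checked numerically; a minimal sketch under that first-harmonic assumption, with invented amplitudes:

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
part_profile = 0.0005 * np.cos(3.0 * theta)            # genuine form error of the measured part (mm)
master_ecc = 0.0020 * np.cos(theta + 0.4)              # first-harmonic eccentricity of the master (mm)

reading_0 = part_profile + master_ecc                               # sampling at 0 deg
reading_180 = part_profile + 0.0020 * np.cos(theta + np.pi + 0.4)   # master turned by 180 deg

datum = 0.5 * (reading_0 + reading_180)                # averaging cancels the first harmonic
print(np.abs(datum - part_profile).max())              # ~0: the master's eccentricity is separated out
```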
APA, Harvard, Vancouver, ISO, and other styles
32

Cao, Rui. "High-Precision Timing System Based on Microcontroller." Advanced Materials Research 546-547 (July 2012): 131–36. http://dx.doi.org/10.4028/www.scientific.net/amr.546-547.131.

Full text
Abstract:
The task of a timing system is to count down to a set date and show the time remaining from the current date to the set date, to enhance people’s attention and sense of urgency. A high-precision timing system is introduced. The system is controlled by a single-chip microcontroller, and a highly integrated clock chip is used as the clock module. The hardware design and software design of the system are introduced. The hardware design includes the design of the clock module, its connection with the single-chip microcomputer, and the design of the display module. The software design includes the settings of the single-chip serial port, its initialization, the reading and writing of the clock module, the countdown program, and the display program. Compared with other timing systems based on frequency division with a multivibrator, this system is more stable and reliable: the accumulated error is zero during long-term continuous operation. Because the system control module is simple, it is easy to develop and use. Practice shows that this high-precision timing system can meet practical needs well.
APA, Harvard, Vancouver, ISO, and other styles
33

Ghosh, Jayati, and Brad Paden. "Iterative Learning Control for Nonlinear Nonminimum Phase Plants1." Journal of Dynamic Systems, Measurement, and Control 123, no. 1 (December 1, 1998): 21–30. http://dx.doi.org/10.1115/1.1341200.

Full text
Abstract:
Learning control is a very effective approach for tracking control in processes occurring repetitively over a fixed interval of time. In this paper a robust learning algorithm is proposed for a generic family of nonlinear, nonminimum phase plants with disturbances and initialization error. The “stable-inversion” method of Devasia, Chen and Paden is applied to develop a learning controller for linear nonminimum phase plants. This is adapted to accommodate a more general class of nonlinear plants. The bounds on the asymptotic error for the learned input are exhibited via a concise proof. Simulation studies demonstrate that in the absence of input disturbances, perfect tracking of the desired trajectory is achieved for nonlinear nonminimum phase plants. Further, in the presence of random disturbances, the tracking error converges to a neighborhood of zero. A bound on the tracking error is derived which is a continuous function of the bound on the disturbance. It is also observed that perfect tracking of the desired trajectory is achieved if the input disturbance is the same at every iteration.
APA, Harvard, Vancouver, ISO, and other styles
34

Ren, Zemin. "Variational Level Set Method for Two-Stage Image Segmentation Based on Morphological Gradients." Mathematical Problems in Engineering 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/145343.

Full text
Abstract:
We use the variational level set method and transition region extraction techniques to achieve the image segmentation task. The proposed scheme consists of two steps. We first develop a novel algorithm to extract the transition region based on the morphological gradient. After this, we integrate the transition region into a variational level set framework and develop a novel geometric active contour model, which includes an external energy based on the transition region and a fractional order edge indicator function. The external energy is used to drive the zero level set toward the desired image features, such as object boundaries. Due to this external energy, the proposed model allows for more flexible initialization. The fractional order edge indicator function is incorporated into the length regularization term to diminish the influence of noise. Moreover, an internal energy is added into the proposed model to penalize the deviation of the level set function from a signed distance function. The resulting evolution of the level set function is the gradient flow that minimizes the overall energy functional. The proposed model has been applied to both synthetic and real images with promising results.
APA, Harvard, Vancouver, ISO, and other styles
35

Quintela Rodriguez, Frank Ernesto, and Filippo Troiani. "Vibrational response functions for multidimensional electronic spectroscopy in the adiabatic regime: A coherent-state approach." Journal of Chemical Physics 157, no. 3 (July 21, 2022): 034107. http://dx.doi.org/10.1063/5.0094512.

Full text
Abstract:
Multi-dimensional spectroscopy represents a particularly insightful tool for investigating the interplay of nuclear and electronic dynamics, which plays an important role in a number of photophysical processes and photochemical reactions. Here, we present a coherent state representation of the vibronic dynamics and of the resulting response functions for the widely used linearly displaced harmonic oscillator model. Analytical expressions are initially derived for the case of third-order response functions in an N-level system, with ground state initialization of the oscillator (zero-temperature limit). The results are then generalized to the case of Mth order response functions, with arbitrary M. The formal derivation is translated into a simple recipe, whereby the explicit analytical expressions of the response functions can be derived directly from the Feynman diagrams. We further generalize to the whole set of initial coherent states, which form an overcomplete basis. This allows one, in principle, to derive the dependence of the response functions on arbitrary initial states of the vibrational modes and is here applied to the case of thermal states. Finally, a non-Hermitian Hamiltonian approach is used to include in the above expressions the effect of vibrational relaxation.
APA, Harvard, Vancouver, ISO, and other styles
36

SHAO, FAN, KECK VOON LING, and WAN SING NG. "AUTOMATIC 3D PROSTATE SURFACE DETECTION FROM TRUS WITH LEVEL SETS." International Journal of Image and Graphics 04, no. 03 (July 2004): 385–403. http://dx.doi.org/10.1142/s0219467804001476.

Full text
Abstract:
Prostate boundary detection from ultrasound images plays an important role in prostate disease diagnoses and treatments. However, due to the low contrast, speckle noise and shadowing in ultrasound images, this still remains a difficult task. Currently, prostate boundary detection is performed manually, which is arduous and heavily user dependent. A possible solution is to improve the efficiency by automating the boundary detection process with minimal manual involvement. This paper presents a new approach based on the level set method to automatically detect the prostate surface from 3D transrectal ultrasound images. The user interaction in the initialization procedure is relieved by automatically putting the centroid of the initial zero level sets close to the image center. Region information, instead of the image gradient, is integrated into the level set method to remedy the "boundary leaking" problem caused by gaps or weak boundaries. Moreover, to increase the accuracy and robustness, knowledge-based features, such as expected shape (kidney-like) and ultrasound appearance of the prostate (looking from within the gland, the intensities are transitions from dark to light), are also incorporated into the model. The proposed method is applied to eight 3D TRUS images and the results have shown its effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
37

Eiben, Agoston E., and Thomas Bäck. "Empirical Investigation of Multiparent Recombination Operators in Evolution Strategies." Evolutionary Computation 5, no. 3 (September 1997): 347–65. http://dx.doi.org/10.1162/evco.1997.5.3.347.

Full text
Abstract:
An extension of evolution strategies to multiparent recombination involving a variable number ϱ of parents to create an offspring individual is proposed. The extension is experimentally evaluated on a test suite of functions differing in their modality and separability and the regular/irregular arrangement of their local optima. Multiparent diagonal crossover and uniform scanning crossover and a multiparent version of intermediary recombination are considered in the experiments. The performance of the algorithm is observed to depend on the particular combination of recombination operator and objective function. In most of the cases a significant increase in performance is observed as the number of parents increases. However, there might also be no significant impact of recombination at all, and for one of the unimodal objective functions, the performance is observed to deteriorate over the course of evolution for certain choices of the recombination operator and the number of parents. Additional experiments with a skewed initialization of the population clarify that intermediary recombination does not cause a search bias toward the origin of the coordinate system in the case of domains of variables that are symmetric around zero.
APA, Harvard, Vancouver, ISO, and other styles
38

Ragul, R., N. Shanmugasundaram, Mariaraja Paramasivam, Suresh Seetharaman, and Sheela L. Mary Immaculate. "PV Based Standalone DC -Micro Grid System for EV Charging Station with New GWO-ANFIS MPPTs under Partial Shading Conditions." International Transactions on Electrical Energy Systems 2023 (March 15, 2023): 1–14. http://dx.doi.org/10.1155/2023/2073742.

Full text
Abstract:
The goal of this article is to use MPPTs (maximum power point trackers) to extract maximum power from the best configuration or combination of renewable resources and energy storage systems that all work together off-grid for electric vehicle charging. The grey wolf optimization (GWO) algorithm searches for the MPP under partial shading conditions (PSC) with two drawbacks: one is high oscillations around GMPPs, and the other is that it is unable to track a new GMPP after it has changed position, because the seeking agents remain busy around the previously captured GMPP. Hence, the research objective proposed in this paper is to find solutions to these two difficulties. The issue of oscillations around GMPPs is handled by combining GWO with ANFISs (adaptive neuro-fuzzy inference systems) to gently tune the output power at GMPPs. ANFISs are distinguished by their near-zero oscillations and precise GMPP capturing. The second issue, the inability to track a new GMPP after it has changed position, is addressed in this work by using a novel initialization by GWO. MATLAB-Simulink simulations and experiments demonstrate the effectiveness of the suggested GWO-ANFIS MPPT-based off-grid station for EV (electric vehicle) battery charging.
APA, Harvard, Vancouver, ISO, and other styles
39

De Wolf, E. D., and L. J. Franel. "Neural Networks That Distinguish Infection Periods of Wheat Tan Spot in an Outdoor Environment." Phytopathology® 87, no. 1 (January 1997): 83–87. http://dx.doi.org/10.1094/phyto.1997.87.1.83.

Full text
Abstract:
Tan spot of wheat, caused by Pyrenophora tritici-repentis, provided a model system for testing disease forecasts based on an artificial neural network. Infection periods for P. tritici-repentis on susceptible wheat cultivars were identified from a bioassay system that correlated tan spot incidence with crop growth stage and 24-h summaries of environmental data, including temperature, relative humidity, wind speed, wind direction, solar radiation, precipitation, and flat-plate resistance-type wetness sensors. The resulting data set consisted of 97 discrete periods, of which 32 were reserved for validation analysis. Neural networks with zero to nine processing elements were evaluated 20 times each to identify the model that most accurately predicted an infection event. The 200 models averaged 74 to 77% accuracy, depending on the number of processing elements and random initialization of coefficients. The most accurate model had five processing elements and correctly predicted 87% of the infection periods in the validation set. In comparison, stepwise logistic regression correctly predicted 69% of the validation cases, and multivariate discriminant analysis distinguished 50% of the validation cases. When wetness-sensor inputs were withheld from the models, both the neural network and logistic regression models declined 6% in prediction accuracy. Thus, neural networks were more accurate than statistical procedures, both with and without wetness-sensor inputs. These results demonstrate the applicability of neural networks to plant disease forecasting.
APA, Harvard, Vancouver, ISO, and other styles
40

Melhauser, Christopher, and Fuqing Zhang. "Development and Application of a Simplified Coplane Wind Retrieval Algorithm Using Dual-Beam Airborne Doppler Radar Observations for Tropical Cyclone Prediction." Monthly Weather Review 144, no. 7 (June 23, 2016): 2645–66. http://dx.doi.org/10.1175/mwr-d-15-0323.1.

Full text
Abstract:
Based on established coplane methodology, a simplified three-dimensional wind retrieval algorithm is proposed to derive two-dimensional wind vectors from radial velocity observations by the tail Doppler radars on board the NOAA P3 hurricane reconnaissance aircraft. Validated against independent in situ flight-level and dropsonde observations before and after genesis of Hurricane Karl (2010), each component of the retrieved wind vectors near the aircraft track has an average error of approximately 1.5 m s^-1, which increases with the scanning angle and distance away from the aircraft track. Simulated radial velocities derived from a convection-permitting simulation of Karl are further used to systematically quantify errors of the simplified coplane algorithm. The accuracy of the algorithm is strongly dependent on the time between forward and backward radar scans and to a lesser extent, the zero vertical velocity assumption at large angles relative to a plane parallel with the aircraft wings. A proof-of-concept experiment assimilating the retrieved wind vectors with an ensemble Kalman filter shows improvements in track and intensity forecasts similar to assimilating radial velocity super observations or the horizontal wind vectors from the analysis retrievals provided by the Hurricane Research Division of NOAA. Future work is needed to systematically evaluate this simplified coplane algorithm with proper error characteristics for TC initialization and prediction through a large number of events to establish statistical significance.
APA, Harvard, Vancouver, ISO, and other styles
41

Allahdadi, Mohammad Nabi, Chunyan Li, and Nazanin Chaichitehrani. "Stratification Breakdown by Fall Cold Front Winds over the Louisiana Shelf in the Northern Gulf of Mexico: A Numerical Experiment." Journal of Marine Science and Engineering 11, no. 3 (March 22, 2023): 673. http://dx.doi.org/10.3390/jmse11030673.

Full text
Abstract:
Cold fronts are meteorological phenomena that impact the northern Gulf of Mexico, mostly between the fall and spring seasons. On average, they pass the region every 3–7 days, with a duration ranging between 24 and 74 h. In the present study, a high-resolution FVCOM model with an unstructured mesh was used to simulate the effect of the fall cold front winds on water column mixing over the Louisiana shelf, which is often stratified in the summer, leading to hypoxia. Numerical experiments were conducted for October 2009, a period with five consecutive cold front events. Winds from an offshore station forced the model, while climatological temperature/salinity profiles prepared by NOAA for September were used for model initialization. The model performance was evaluated by comparing it with the surface current measurements at two offshore stations, and the results showed a good agreement between the model results and observations. Shelf mixing and stratification were investigated by examining the simulated sea surface temperature as well as the longitudinal and cross-shelf vertical sections. Simulation results showed a significant effect on shelf mixing, with the mixed layer depth increasing from the initial value of 5 m to 25 m at the end of the simulation in different parts of the shelf, with maximum mixed layer depths corresponding to the peaks of the cold fronts. The buoyancy frequency, Richardson number, and the average potential energy demand (APED) for mixing the water column were used to quantify the stratification at two selected locations over the shelf. Results showed that all these parameters decreased almost continuously due to mixing induced by the cold front wind events during this time. At the station off Terrebonne Bay, with a water depth of 20 m, the water column became fully mixed after three of the cold front events, with Richardson numbers smaller than 0.25 and approaching zero. This continued mixing trend was also confirmed by a decrease of APED from 100 to 5 kg/(m·s²), with several values of the energy demand close to zero.
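The stratification metrics named in this abstract can be computed directly from discrete profiles. The sketch below evaluates the buoyancy frequency squared and the gradient Richardson number on made-up density and current profiles; it is a generic illustration of the formulas, not the authors' diagnostics, and all profile values are placeholders.

```python
# Illustrative (not the authors') computation of two stratification metrics:
# buoyancy frequency N^2 and gradient Richardson number Ri from discrete
# density and velocity profiles.
import numpy as np

g, rho0 = 9.81, 1025.0                    # gravity (m/s^2), reference density (kg/m^3)
z = np.linspace(0.0, -20.0, 21)           # depth levels (m), surface to 20 m
rho = 1020.0 - 0.25 * z                   # stand-in density profile (denser with depth)
u = 0.30 * np.exp(z / 10.0)               # stand-in along-shelf current (m/s)
v = 0.05 * np.exp(z / 10.0)               # stand-in cross-shelf current (m/s)

drho_dz = np.gradient(rho, z)
N2 = -(g / rho0) * drho_dz                # buoyancy frequency squared (s^-2)
shear2 = np.gradient(u, z) ** 2 + np.gradient(v, z) ** 2
Ri = N2 / np.maximum(shear2, 1e-12)       # gradient Richardson number

# Ri < 0.25 is the usual threshold for shear instability / full mixing
print("fraction of levels with Ri < 0.25:", np.mean(Ri < 0.25))
```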
APA, Harvard, Vancouver, ISO, and other styles
42

Grooß, Jens-Uwe, Paul Konopka, and Rolf Müller. "Ozone Chemistry during the 2002 Antarctic Vortex Split." Journal of the Atmospheric Sciences 62, no. 3 (March 1, 2005): 860–70. http://dx.doi.org/10.1175/jas-3330.1.

Full text
Abstract:
In September 2002, the Antarctic polar vortex was disturbed, and it split into two parts caused by an unusually early stratospheric major warming. This study discusses the chemical consequences of this event using the Chemical Lagrangian Model of the Stratosphere (CLaMS). The chemical initialization of the simulation is based on Halogen Occultation Experiment (HALOE) measurements. Because of its Lagrangian nature, CLaMS is well suited for simulating the small-scale filaments that evolve during this period. Filaments of vortex origin in the midlatitudes were observed by HALOE several times in October 2002. The results of the simulation agree well with these HALOE observations. The simulation further indicates a very rapid chlorine deactivation that is triggered by the warming associated with the split of the vortex. Correspondingly, the ozone depletion rates in the polar vortex parts rapidly decrease to zero. Outside the polar vortex, where air masses of midlatitude origin were transported to the polar region, the simulation shows high ozone depletion rates at the 700-K level caused mainly by NOx chemistry. Owing to the major warming in September 2002, ozone-poor air masses were transported into the midlatitudes and caused a decrease of midlatitude ozone by 5%–15%, depending on altitude. Besides this dilution effect, there was no significant additional chemical effect. The net chemical ozone depletion in air masses of vortex origin was low and did not differ significantly from that of midlatitude air, in spite of the different chemical composition of the two types of air masses.
APA, Harvard, Vancouver, ISO, and other styles
43

Ren, Xiangyang, Shuai Chen, Kunyuan Wang, and Juan Tan. "Design and application of improved sparrow search algorithm based on sine cosine and firefly perturbation." Mathematical Biosciences and Engineering 19, no. 11 (2022): 11422–52. http://dx.doi.org/10.3934/mbe.2022533.

Full text
Abstract:
Swarm intelligence algorithms are relatively simple and highly applicable algorithms, especially for solving optimization problems with high reentrancy, high stochasticity, large scale, multi-objective and multi-constraint characteristics. The sparrow search algorithm (SSA) is a kind of swarm intelligence algorithm with strong search capability, but SSA has the drawback of easily falling into local optima in the iterative process. Therefore, a sine cosine and firefly perturbed sparrow search algorithm (SFSSA) is proposed to address this deficiency. Firstly, Tent chaos mapping is invoked in the population initialization stage to improve population diversity; secondly, the sine cosine algorithm incorporating random inertia weights is introduced in the discoverer position update, so as to improve the probability of the algorithm jumping out of a local optimum and to speed up convergence; finally, all sparrows are updated toward the optimal sparrow using a firefly perturbation to improve their search ability. Thirteen benchmark test functions were chosen to evaluate SFSSA, the results were compared to those computed by existing swarm intelligence algorithms, and the proposed method was subjected to the Wilcoxon rank sum test. Furthermore, the aforesaid methods were evaluated on the CEC 2017 test functions to further validate the optimization efficiency of the algorithm when the optimal solution is not zero. The findings show that SFSSA is more favorable in terms of algorithm performance, and the method's search ability is boosted. Finally, the suggested algorithm is applied to the location problem of emergency material distribution centers to further validate the feasibility and efficacy of SFSSA.
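Of the three improvements listed in this abstract, the Tent chaotic map initialization is the simplest to sketch. The code below is a generic illustration of that initialization step, assuming the standard Tent map; it is not the SFSSA reference implementation, and the population size, dimension, and bounds are placeholders.

```python
# Sketch of Tent chaotic map population initialization, as mentioned in the
# abstract. Generic illustration only, not the SFSSA reference code.
import numpy as np

def tent_map_init(pop_size, dim, lower, upper, mu=2.0, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.01, 0.99, size=dim)        # avoid the fixed points 0 and 1
    population = np.empty((pop_size, dim))
    for i in range(pop_size):
        # Tent map: x <- mu*x for x < 0.5, mu*(1 - x) otherwise
        x = np.where(x < 0.5, mu * x, mu * (1.0 - x))
        population[i] = lower + x * (upper - lower)  # scale into the search box
    return population

pop = tent_map_init(pop_size=30, dim=13, lower=-100.0, upper=100.0)
print(pop.shape, float(pop.min()), float(pop.max()))
```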
APA, Harvard, Vancouver, ISO, and other styles
44

Taeib, Adel, Moêz Soltani, and Abdelkader Chaari. "Model predictive control based on chaos particle swarm optimization for nonlinear processes with constraints." Kybernetes 43, no. 9/10 (November 3, 2014): 1469–82. http://dx.doi.org/10.1108/k-06-2013-0103.

Full text
Abstract:
Purpose – The purpose of this paper is to propose a new type of predictive fuzzy controller. The desired nonlinear system behavior is described by a set of Takagi-Sugeno (T-S) models. However, due to the complexity of real processes, obtaining high-quality control with a short settling time, a well-behaved step response and zero steady-state error is often a difficult task. Indeed, conventional model predictive control (MPC) attempts to minimize a quadratic cost over an extended control horizon, and is insufficient to adapt to changes in system dynamics subject to complex constraints. In addition, the clustering algorithm is sensitive to random initialization, which may affect the quality of the obtained predictive fuzzy controller. In order to overcome these problems, chaos particle swarm optimization (CPSO) is used to design a model predictive controller for nonlinear processes with constraints. The practicality and effectiveness of the identification and control scheme are demonstrated by simulation results for a continuous stirred-tank reactor. Design/methodology/approach – A new type of predictive fuzzy controller; the proposed CPSO-based algorithm is used to design a model predictive controller for nonlinear processes with constraints. Findings – The results obtained using this approach were comparable with other modeling approaches reported in the literature. The proposed control scheme has shown favorable results both in the absence and in the presence of disturbances compared with the other techniques, which confirms the usefulness and robustness of the proposed controller. Originality/value – This paper presents an intelligent model predictive controller based on CPSO (MPC-CPSO) for T-S fuzzy modeling with constraints.
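To make the idea of solving a constrained MPC problem with a chaos-perturbed particle swarm concrete, the sketch below minimizes a quadratic tracking cost for a toy first-order plant over a short control horizon, with box constraints enforced by clipping and a logistic chaotic sequence scaling the step. This is a minimal sketch under those assumptions, not the paper's T-S fuzzy CSTR formulation; the plant, weights, and swarm parameters are placeholders.

```python
# Minimal sketch: chaos-perturbed particle swarm minimizing a quadratic
# MPC-style cost over a short control horizon with box constraints.
import numpy as np

def mpc_cost(u_seq, x0=1.0, ref=0.0, a=0.9, b=0.5, q=1.0, r=0.1):
    """Quadratic tracking cost for a toy first-order plant x+ = a*x + b*u."""
    x, cost = x0, 0.0
    for u in u_seq:
        x = a * x + b * u
        cost += q * (x - ref) ** 2 + r * u ** 2
    return cost

def chaos_pso(horizon=5, n_particles=20, iters=50, u_min=-2.0, u_max=2.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(u_min, u_max, (n_particles, horizon))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([mpc_cost(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    chaos = rng.uniform(0.1, 0.9)                    # logistic-map state
    for _ in range(iters):
        chaos = 4.0 * chaos * (1.0 - chaos)          # chaotic sequence in (0, 1)
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.6 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + chaos * vel, u_min, u_max)   # chaos-scaled step + constraints
        vals = np.array([mpc_cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

u_opt, cost = chaos_pso()
print("first control move:", u_opt[0], "cost:", cost)
```

In a receding-horizon setting, only the first element of the optimized sequence would be applied before re-solving at the next sample.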
APA, Harvard, Vancouver, ISO, and other styles
45

Eltamaly, Ali M., Zeyad A. Almutairi, and Mohamed A. Abdelhamid. "Modern Optimization Algorithm for Improved Performance of Maximum Power Point Tracker of Partially Shaded PV Systems." Energies 16, no. 13 (July 7, 2023): 5228. http://dx.doi.org/10.3390/en16135228.

Full text
Abstract:
Due to the rapid advancement in the use of photovoltaic (PV) energy systems, it has become critical to look for ways to improve the energy generated by them. The extracted power from the PV modules is proportional to the output voltage. The relationship between output power and array voltage has only one peak under uniform irradiance, whereas it has multiple peaks under partial shade conditions (PSCs). There is only one global peak (GP) and many local peaks (LPs), where the typical maximum power point trackers (MPPTs) may become locked in one of the LPs, significantly reducing the PV system’s generated power and efficiency. The metaheuristic optimization algorithms (MOAs) solved this problem, albeit at the expense of the convergence time, which is one of these algorithms’ key shortcomings. Most MOAs attempt to lower the convergence time at the cost of the failure rate and the accuracy of the findings because these two factors are interdependent. To address these issues, this work introduces the dandelion optimization algorithm (DOA), a novel optimization algorithm. The DOA’s convergence time and failure rate are compared to other modern MOAs in critical scenarios of partial shade PV systems to demonstrate the DOA’s superiority. The results obtained from this study showed substantial performance improvement compared to other MOAs, where the convergence time was reduced to 0.4 s with zero failure rate compared to 0.9 s, 1.25 s, and 0.43 s for other MOAs under study. The optimal number of search agents in the swarm, the best initialization of search agents, and the optimal design of the dc–dc converter are introduced for optimal MPPT performance.
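The core difficulty this abstract describes, a P-V curve with several local peaks and a single global peak under partial shading, can be illustrated with a toy curve and a simple population search. The sketch below is not the dandelion optimization algorithm itself; it only shows the global-peak-tracking task that such metaheuristic MPPTs solve, and the curve shape, agent count, and step rule are assumptions.

```python
# Illustrative stand-in for metaheuristic MPPT under partial shading: a toy
# multi-peak P-V curve and a simple population search that keeps the best
# sampled voltage. NOT the dandelion optimization algorithm, just a sketch.
import numpy as np

def pv_power(v):
    """Toy P-V curve with several local peaks and one global peak (near v = 95)."""
    return (40 * np.exp(-((v - 35) / 12.0) ** 2)
            + 55 * np.exp(-((v - 65) / 10.0) ** 2)
            + 80 * np.exp(-((v - 95) / 8.0) ** 2))

rng = np.random.default_rng(3)
agents = rng.uniform(0.0, 120.0, size=12)        # initial voltage guesses
best_v, best_p = agents[0], pv_power(agents[0])
for step in range(40):
    # move each agent toward the best seen so far, with shrinking random jitter
    jitter = (1.0 - step / 40.0) * rng.normal(0.0, 10.0, size=agents.size)
    agents = np.clip(agents + 0.3 * (best_v - agents) + jitter, 0.0, 120.0)
    powers = pv_power(agents)
    if powers.max() > best_p:
        best_p, best_v = powers.max(), agents[powers.argmax()]

print(f"global peak found near V = {best_v:.1f}, P = {best_p:.1f} (true GP near 95)")
```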
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Meihua. "Study on Hopf Branch of Stability of Time-Delay Unified System and Performance Evaluation Based on Deep Learning." Mathematical Problems in Engineering 2022 (September 12, 2022): 1–11. http://dx.doi.org/10.1155/2022/2173900.

Full text
Abstract:
Deep learning is a breakthrough in machine learning research. It aims to establish a deep network structure that can simulate the human brain for analysis and learning, interprets data through the mechanism of layer-by-layer abstract feature representation, and has excellent feature learning capabilities. Using the input-output performance evaluation data of colleges and universities, three experiments are performed. First, the feature expression ability of the RBM, the basic building block of deep learning, is studied and compared with PCA; the results show that a classifier built on fine-tuned RBM features performs better than one built on PCA features, and that the reconstruction error can be used to judge the hidden layer size. As the number of RBM layers increases, the classification accuracy gradually increases, indicating the feasibility of the RBMs feature extractor. Second, the model in this study has a higher prediction accuracy than other classification models, and the effectiveness of the modular deep learning model based on RBMs is clarified from the perspectives of network convergence analysis and network output analysis; its ability is stronger than that of a DBN, and the obtained abstract feature representation is more conducive to classification. Although the classification accuracy of the model in this study has been improved, the model has certain limitations: the network initialization is still set based on experiments and experience, and the prediction accuracy is only 88.3%, which needs to be improved. The parameter training algorithm of RBMs can be further studied to provide a more accurate reference basis for the performance evaluation of colleges and universities. Third, in the research of dynamical systems, the stability of the time-delay unified system at the zero equilibrium and the positive equilibrium is studied, and the conditions for generating the Hopf branch are given. At the same time, some conclusions are obtained through theoretical analysis. Numerical simulations further verify the validity of the theoretical results.
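The idea of using reconstruction error to judge the hidden layer, mentioned in the first experiment, can be sketched with scikit-learn's BernoulliRBM. The snippet below trains RBMs of several hidden sizes on random binary data and reports a mean-field reconstruction error; the data and sizes are placeholders, and this is a generic illustration of the technique rather than the study's actual evaluation pipeline.

```python
# Hedged sketch of "reconstruction error to choose the hidden layer size"
# using scikit-learn's BernoulliRBM on random binary stand-in data.
import numpy as np
from scipy.special import expit
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = (rng.random((200, 30)) < 0.3).astype(float)   # stand-in binary feature matrix

for n_hidden in (5, 10, 20, 40):
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       n_iter=30, random_state=0)
    rbm.fit(X)
    h = rbm.transform(X)                                  # hidden unit probabilities
    v_recon = expit(h @ rbm.components_ + rbm.intercept_visible_)
    err = np.mean((X - v_recon) ** 2)                     # mean reconstruction error
    print(f"hidden units = {n_hidden:3d}, reconstruction error = {err:.4f}")
```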
APA, Harvard, Vancouver, ISO, and other styles
47

Thuy Pham, Ngoc, Diep Phu Nguyen, Khuong Huu Nguyen, and Nho Van Nguyen. "New Version of Adaptive Speed Observer based on Neural Network for SPIM." International Journal of Power Electronics and Drive Systems (IJPEDS) 9, no. 4 (December 1, 2018): 1486. http://dx.doi.org/10.11591/ijpeds.v9.i4.pp1486-1502.

Full text
Abstract:
This paper presents a novel Stator Current based Model Reference Adaptive System (SC_MRAS) speed observer for high-performance Six Phase Induction Motor (SPIM) drives using a linear neural network. The aim of the article is to improve the performance of the SC_MRAS observers presented in the literature. In the proposed scheme, the measured stator current components are used as the reference model of the MRAS observer to avoid the use of a pure integrator and to reduce the influence of motor parameter variation. The adaptive model uses a two-layer neural network (NN) to estimate the stator current, trained online by means of a Least Squares (LS) algorithm instead of a nonlinear Back Propagation Network (BPN) algorithm, to reduce the complexity and computational burden; this also helps to mitigate some disadvantages caused by the inherent nonlinearity of the BPN algorithm, such as local minima, two heuristically chosen parameters, initialization and convergence problems, and paralysis of the neural network. The adaptive model of the proposed scheme is employed in prediction mode, not in simulation mode as is usually the case in the literature, which gives the proposed observer better accuracy and stability. In the proposed observer, the stator and rotor resistance values are estimated online, and these values are then updated in the current observer and rotor flux identifier to enhance the accuracy, robustness and insensitivity to parameter variation of the proposed observer. The proposed LS SC_MRAS observer has been verified through simulation and compared with the BPN MRAS observer. The simulation results show that the speed estimate converges considerably faster, does not need a filter on the estimated speed, has lower estimation errors both in transient and steady-state operation, and behaves better in low- and zero-speed operation.
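The online least-squares training of a linear adaptive model, as opposed to backpropagation, is commonly implemented as recursive least squares (RLS). The sketch below shows a generic RLS update for a linear (ADALINE-style) predictor on a toy signal; it is a stand-in for the kind of online LS training mentioned in the abstract, not the SPIM observer itself, and the regressor contents, forgetting factor, and signals are placeholders.

```python
# Minimal recursive least squares (RLS) update for a linear predictor,
# as a generic stand-in for online least-squares training of an adaptive model.
import numpy as np

class RLSPredictor:
    def __init__(self, n_inputs, lam=0.98, delta=1e3):
        self.w = np.zeros(n_inputs)          # linear weights
        self.P = delta * np.eye(n_inputs)    # inverse correlation matrix
        self.lam = lam                       # forgetting factor

    def update(self, phi, y):
        """One RLS step: phi is the regressor vector, y the measured output."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)           # gain vector
        err = y - self.w @ phi                       # prediction error
        self.w = self.w + k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.w @ phi, err

# Toy usage: learn y = 2*x1 - 0.5*x2 online from noisy samples
rng = np.random.default_rng(0)
rls = RLSPredictor(n_inputs=2)
for _ in range(200):
    phi = rng.normal(size=2)
    y = 2.0 * phi[0] - 0.5 * phi[1] + 0.01 * rng.normal()
    rls.update(phi, y)
print("estimated weights:", rls.w)   # should approach [2.0, -0.5]
```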
APA, Harvard, Vancouver, ISO, and other styles
48

Marpaung, Faridawaty, Arnita Arnita, and Nila Sari. "Maximal Flow of Transportation Network in Medan City Using Ford-Fulkerson Algorithm." International Journal of Science, Technology & Management 4, no. 1 (January 9, 2023): 100–106. http://dx.doi.org/10.46729/ijstm.v4i1.724.

Full text
Abstract:
Transportation is a major component of living, government, and social systems, and socio-demographic conditions influence its performance. This study investigates the maximal capacity of the road network as a way to overcome congestion. The maximal flow is searched for on the graph formed, using the Ford-Fulkerson algorithm in several stages, namely determining possible paths from the source point to the sink point over numerous iterations by analyzing the graph. The initialization assigns zero flow to each edge of the graph, and the maximal flow is then obtained after 16 iterations. The residual flow is obtained by subtracting the flow at each iteration from the total flow. The iteration stops when no more augmenting paths are found, and a cut occurs when the amount of flow equals the amount of capacity. The research data consist of public transportation routes in Medan, obtained directly from the archives of the Transportation Service of Medan City. The search for the maximal flow of the transportation network was carried out by observing the public transportation routes in the city of Medan, namely the routes through Jalan Jamin Ginting to Jl. Williem Iskandar, which various types of public transportation traverse. Based on the research, the maximal capacity can be exceeded in the transportation network. There are 5 types of public transportation. The path through City Hall - Jl. Putri Hijau and the path through Jl. Perintis Kemerdekaan–Jl. HM. Yamin have exceeded capacity, because 15 and 9 types of public transportation, respectively, pass through these two roads. However, the loads on the other lanes are still low, so some of the overloaded public transportation types should be diverted to the emptier roads.
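The procedure described in this abstract, zero-flow initialization followed by repeated augmenting-path searches until none remain, is the textbook Ford-Fulkerson method. The sketch below implements it in its breadth-first (Edmonds-Karp) form on a small toy capacity matrix; the graph is an illustrative example, not the Medan route data.

```python
# Ford-Fulkerson (BFS / Edmonds-Karp form): zero-flow initialization, repeated
# augmenting paths from source to sink, stop when no augmenting path remains.
from collections import deque

def max_flow(capacity, source, sink):
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]           # zero-flow initialization
    total, iterations = 0, 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:                    # no augmenting path: done
            break
        # bottleneck residual capacity along the path
        bottleneck, v = float("inf"), sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # augment the flow along the path
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
        iterations += 1
    return total, iterations

# Toy network: node 0 is the source, node 5 the sink
cap = [[0, 16, 13, 0, 0, 0],
       [0, 0, 10, 12, 0, 0],
       [0, 4, 0, 0, 14, 0],
       [0, 0, 9, 0, 0, 20],
       [0, 0, 0, 7, 0, 4],
       [0, 0, 0, 0, 0, 0]]
print(max_flow(cap, 0, 5))   # (maximal flow value, number of augmenting iterations)
```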
APA, Harvard, Vancouver, ISO, and other styles
49

Tyshchuk, Kirill, Polina Karpikova, Andrew Spiridonov, Anastasiia Prutianova, Anton Razzhigaev, and Alexander Panchenko. "On Isotropy of Multimodal Embeddings." Information 14, no. 7 (July 10, 2023): 392. http://dx.doi.org/10.3390/info14070392.

Full text
Abstract:
Embeddings, i.e., vector representations of objects, such as texts, images, or graphs, play a key role in deep learning methodologies nowadays. Prior research has shown the importance of analyzing the isotropy of textual embeddings for transformer-based text encoders, such as the BERT model. Anisotropic word embeddings do not use the entire space, instead concentrating on a narrow cone in such a pretrained vector space, negatively affecting the performance of applications, such as textual semantic similarity. Transforming a vector space to optimize isotropy has been shown to be beneficial for improving performance in text processing tasks. This paper is the first comprehensive investigation of the distribution of multimodal embeddings using the example of OpenAI’s CLIP pretrained model. We aimed to deepen the understanding of the embedding space of multimodal embeddings, which has previously been unexplored in this respect, and study the impact on various end tasks. Our initial efforts were focused on measuring the alignment of image and text embedding distributions, with an emphasis on their isotropic properties. In addition, we evaluated several gradient-free approaches to enhance these properties, establishing their efficiency in improving the isotropy/alignment of the embeddings and, in certain cases, the zero-shot classification accuracy. Significantly, our analysis revealed that both CLIP and BERT models yielded embeddings situated within a cone immediately after initialization and preceding training. However, they were mostly isotropic in the local sense. We further extended our investigation to the structure of multilingual CLIP text embeddings, confirming that the observed characteristics were language-independent. By computing the few-shot classification accuracy and point-cloud metrics, we provide evidence of a strong correlation among multilingual embeddings. Embeddings transformation using the methods described in this article makes it easier to visualize embeddings. At the same time, multiple experiments that we conducted showed that, in regard to the transformed embeddings, the downstream tasks performance does not drop substantially (and sometimes is even improved). This means that one could obtain an easily visualizable embedding space, without substantially losing the quality of downstream tasks.
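A quick way to see the "narrow cone" effect described here is to measure the average pairwise cosine similarity of a set of vectors before and after mean-centering, one of the simplest gradient-free transforms. The sketch below does this on random anisotropic vectors that stand in for CLIP or BERT embeddings; it is an illustration of the measurement, not the authors' evaluation code.

```python
# Illustrative anisotropy check: mean pairwise cosine similarity of vectors
# sharing a common offset, before and after mean-centering.
import numpy as np

def mean_cosine(X):
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = Xn @ Xn.T
    iu = np.triu_indices(len(X), k=1)
    return sims[iu].mean()

rng = np.random.default_rng(0)
common_direction = rng.normal(size=512)
# anisotropic "embeddings": a shared offset plus smaller individual variation
X = common_direction + 0.3 * rng.normal(size=(1000, 512))

print("mean pairwise cosine before centering:", round(mean_cosine(X), 3))   # close to 1
print("mean pairwise cosine after centering: ",
      round(mean_cosine(X - X.mean(axis=0)), 3))                            # close to 0
```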
APA, Harvard, Vancouver, ISO, and other styles
50

Manthiramoorthy, Chinnadurai, K. Mohamed Sayeed Khan, and Noorul Ameen A. "Comparing Several Encrypted Cloud Storage Platforms." International Journal of Mathematics, Statistics, and Computer Science 2 (August 6, 2023): 44–62. http://dx.doi.org/10.59543/ijmscs.v2i.7971.

Full text
Abstract:
Cloud services and cryptographic cloud storage systems have gained popularity in recent years due to their availability and accessibility. The present systems are nonetheless still imperfect: at best, they demand a lot of trust from the user in the provider. To ensure they are not violating any End-User License Agreement (EULA) clauses, providers typically keep the ability to examine the files that have been saved, and some even keep the ability to share the data. It is simple to create a copy of every piece of data when a provider has access to go through it, which is considered abuse. A typical user would have a very difficult time proving such abuse because they have no method of finding any evidence supporting such claims. Due to the growing quantity of Machine Learning (ML) performed on personal user data, for either tailoring advertisements or, in more severe cases, manipulating public opinion, this issue has only gotten worse in modern times. Due to the volume of users and files kept, cloud storage services are the ideal location for getting such information, whether personal or not. To retain complete confidentiality, the user could take the simple step of adding a local layer of encryption, which would prevent the cloud provider from being able to decrypt the data. The requirement for ongoing key management, which becomes more challenging as the number of keys rises, is a drawback of this approach. To better understand normal behaviors and pinpoint potential weaknesses, this study aims to explore and assess the security of a few well-known existing cryptographic cloud storage options. Among the vendors investigated are Microsoft Azure, Tresorit, Amazon S3, and Google Cloud. The comparison was done based on documentation specific to each service. However, the majority of providers frequently provide only a limited amount of information or don't go into great detail about specific ideas or procedures (for instance, security in Google Cloud), leaving room for interpretation. The authors conclude by outlining a unique approach for encrypted cloud storage that employs Cocks Identity Based Encryption (IBE) and Advanced Encryption Standard (AES)-256 Cipher Block Chaining (CBC) to limit potential abuse by alerting the user anytime a file inspection takes place. Cocks IBE will be utilized as an alternative cryptographic method for access control, and AES-256 with Initialization Vector (IV) features will be used for encryption. Additionally, Fiat-Shamir zero-knowledge authentication will be used. A system like this might be used by companies that offer services in the real world, because it would boost customer confidence.
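The symmetric building block named in this abstract, AES-256 in CBC mode with a fresh initialization vector per file, can be sketched with the Python "cryptography" package. The snippet below is a generic illustration of that building block only; it does not implement the proposed scheme (Cocks IBE key handling and Fiat-Shamir zero-knowledge authentication are not shown), and a real deployment would also authenticate the ciphertext.

```python
# Generic AES-256-CBC sketch with a fresh random IV per file, using the
# "cryptography" package. Illustrates the symmetric building block only.
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_file_bytes(key: bytes, plaintext: bytes) -> bytes:
    iv = os.urandom(16)                                    # fresh IV per file
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()        # store IV with ciphertext

def decrypt_file_bytes(key: bytes, blob: bytes) -> bytes:
    iv, ct = blob[:16], blob[16:]
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(ct) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

key = os.urandom(32)                                       # 256-bit key
blob = encrypt_file_bytes(key, b"example file contents")
assert decrypt_file_bytes(key, blob) == b"example file contents"
```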
APA, Harvard, Vancouver, ISO, and other styles
