Journal articles on the topic 'Deep parameter optimisation'

To see the other types of publications on this topic, follow the link: Deep parameter optimisation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Deep parameter optimisation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Bhattacharjee, Rohan, Debjyoti Ghosh, and Abhirup Mazumder. "A REVIEW ON HYPER-PARAMETER OPTIMISATION BY DEEP LEARNING EXPERIMENTS." Journal of Mathematical Sciences & Computational Mathematics 2, no. 4 (July 5, 2021): 532–41. http://dx.doi.org/10.15864/jmscm.2407.

Full text
Abstract:
It has been found that during the runtime of a deep learning experiment, the intermediate resultant values get removed while the processes carry forward. This removal of data forces the interim experiment to roll back to a certain initial point, after which the hyper-parameters or results become difficult to obtain (mostly for a vast set of experimental data). Hyper-parameters are the various constraints/measures that a learning model requires to generalise distinct data patterns and control the learning process. A proper choice and optimisation of these hyper-parameters must be made so that the learning model is capable of resolving the given machine learning problem and, during training, a specific performance objective for an algorithm on a dataset is optimised. This review paper aims to present a Parameter Optimisation for Learning (POL) model highlighting the all-round features of a deep learning experiment via an application programming interface (API). This provides the means of storing, retrieving and examining parameter settings and intermediate values. To further ease the optimisation of hyper-parameters, the model involves the application of optimisation functions, analysis and data management. Moreover, the prescribed model is highly interactive and is in circulation among a number of machine learning experts, adding further utility in data management.
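As a loose illustration of the kind of API such a model would expose (the class and method names below are hypothetical, not taken from the paper), a minimal in-memory store for parameter settings and intermediate values might look like this:

```python
import time

class ParamStore:
    """Minimal sketch of a store for hyper-parameters and intermediate results,
    so an interrupted experiment can resume instead of rolling back to the start."""

    def __init__(self):
        self._runs = {}  # run_id -> list of logged records

    def log(self, run_id, step, params, metrics):
        """Record the parameter settings and metrics observed at one step."""
        self._runs.setdefault(run_id, []).append(
            {"step": step, "params": dict(params),
             "metrics": dict(metrics), "t": time.time()})

    def latest(self, run_id):
        """Most recent record, e.g. to resume an interrupted run."""
        return self._runs[run_id][-1]

    def best(self, run_id, metric):
        """Record with the lowest value of the given metric."""
        return min(self._runs[run_id], key=lambda r: r["metrics"][metric])

store = ParamStore()
store.log("exp1", 1, {"lr": 0.1}, {"val_loss": 0.9})
store.log("exp1", 2, {"lr": 0.01}, {"val_loss": 0.4})
best = store.best("exp1", "val_loss")
```

A real system would persist these records to disk or a database; the in-memory dict is only a sketch of the interface.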
APA, Harvard, Vancouver, ISO, and other styles
2

Tourki, Z., and K. Sai. "Design and loading parameter optimisation in deep drawing process." International Journal of Vehicle Design 39, no. 1/2 (2005): 25. http://dx.doi.org/10.1504/ijvd.2005.007216.

Full text
3

Abebe, H., V. Tyree, H. Morris, and P. T. Vernier. "SPICE BSIM3 Model Parameter Extraction and Optimisation: Practical Considerations." International Journal of Electrical Engineering & Education 44, no. 3 (July 2007): 249–62. http://dx.doi.org/10.7227/ijeee.44.3.5.

Full text
Abstract:
This tutorial paper discusses the SPICE BSIM3v3.1 model parameter extraction and optimisation strategies that show consistency and very good accuracy in circuit simulation, less than 10% error, for practical IC design application in deep submicron processes. This paper describes an approach to BSIM3v3.1 model parameter extraction that mitigates or eliminates many of the unstable circuit behaviours observed during SPICE simulations with BSIM3v3. We present here a strategy applicable to 0.18 micron CMOS technology, in which the accuracy of the final extracted model parameters is evaluated by comparing simulations of inverter gain and a 31-stage ring oscillator with measured data.
4

Mihail, Laurentiu A. "ROBUST ENGINEERING CASE STUDY FOR SIMULTANEOUS PARAMETER OPTIMIZATION OF A DEEP DRILLING PROCESS." International Journal "Advanced Quality" 45, no. 1 (June 19, 2017): 27. http://dx.doi.org/10.25137/ijaq.n1.v45.y2017.p27-34.

Full text
Abstract:
The paper reflects the overall results of an experiment developed for optimising a deep peck drilling process using an extra-long flute drill. The problem stated was the dimensional, geometrical and orientation accuracy of the holes machined by the previously mentioned machining method. The target was to improve the quality of the machined parts and to reach maximum productivity at the same time. The optimisation method used was the Taguchi method, with an L4 (2³) fractional factorial array. Another important issue was to optimise several quality characteristics simultaneously. After machining the test part on a high-speed machining flexible system, the parts were measured on a coordinate measuring machine. Finally, the data was processed with the aid of advanced quality software. The simultaneous optimisation was achieved by a validated method, through several iterations based on advanced process and design-of-experiments knowledge. Lastly, the conclusions were compared with other results from the same research programme, validating the method.
5

Treloar, Neythen J., Nathan Braniff, Brian Ingalls, and Chris P. Barnes. "Deep reinforcement learning for optimal experimental design in biology." PLOS Computational Biology 18, no. 11 (November 21, 2022): e1010695. http://dx.doi.org/10.1371/journal.pcbi.1010695.

Full text
Abstract:
The field of optimal experimental design uses mathematical techniques to determine experiments that are maximally informative from a given experimental setup. Here we apply a technique from artificial intelligence—reinforcement learning—to the optimal experimental design task of maximizing confidence in estimates of model parameter values. We show that a reinforcement learning approach performs favourably in comparison with a one-step ahead optimisation algorithm and a model predictive controller for the inference of bacterial growth parameters in a simulated chemostat. Further, we demonstrate the ability of reinforcement learning to train over a distribution of parameters, indicating that this approach is robust to parametric uncertainty.
6

Damavandi, Esmaeil, Amin Kolahdooz, Yousef Shokoohi, Seyyed Ali Latifi Rostami, and Sayed Mohamadbagher Tabatabaei. "Multi-objective parameter optimisation to improve machining performance on deep drilling process." International Journal of Machining and Machinability of Materials 23, no. 5/6 (2021): 500. http://dx.doi.org/10.1504/ijmmm.2021.121196.

Full text
7

Brodzicki, Andrzej, Michał Piekarski, and Joanna Jaworek-Korjakowska. "The Whale Optimization Algorithm Approach for Deep Neural Networks." Sensors 21, no. 23 (November 30, 2021): 8003. http://dx.doi.org/10.3390/s21238003.

Full text
Abstract:
One of the biggest challenges in the field of deep learning is the parameter selection and optimisation process. In recent years different algorithms, including bio-inspired solutions, have been proposed to solve this problem; however, many challenges remain, including local minima, saddle points, and vanishing gradients. In this paper, we introduce the Whale Optimisation Algorithm (WOA), based on the swarm foraging behaviour of humpback whales, to optimise neural network hyperparameters. We wish to stress that, to the best of our knowledge, this is the first attempt to use the Whale Optimisation Algorithm for the task of hyperparameter optimisation. After a detailed description of the WOA algorithm, we formulate and explain its application in deep learning, present the implementation, and compare the proposed algorithm with other well-known algorithms, including the widely used Grid and Random Search methods. Additionally, we have added a third-dimension feature analysis to the original WOA algorithm to utilise a 3D search space (3D-WOA). Simulations show that the proposed algorithm can be successfully used for hyperparameter optimisation, achieving accuracies of 89.85% and 80.60% for the Fashion MNIST and Reuters datasets, respectively.
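For readers unfamiliar with the method, here is a minimal sketch of WOA-style search over a continuous space. A toy quadratic stands in for a validation-loss surface over two hyperparameters; all names, bounds and constants are illustrative assumptions, not taken from the paper:

```python
import math
import random

def woa_minimise(f, bounds, n_whales=20, n_iter=100, seed=0):
    """Minimise f over a box with a simplified Whale Optimisation Algorithm."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialise whale positions uniformly inside the search box.
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_whales)]
    best = min(pop, key=f)[:]
    for t in range(n_iter):
        a = 2.0 * (1 - t / n_iter)            # 'a' decreases linearly from 2 to 0
        for w in pop:
            A = 2 * a * rng.random() - a      # controls exploration vs exploitation
            C = 2 * rng.random()
            if rng.random() < 0.5:
                # Encircle the best whale when |A| < 1, else search near a random whale.
                ref = best if abs(A) < 1 else pop[rng.randrange(n_whales)]
                for d in range(dim):
                    w[d] = ref[d] - A * abs(C * ref[d] - w[d])
            else:
                # Spiral update towards the best whale (spiral constant b = 1).
                l = rng.uniform(-1, 1)
                for d in range(dim):
                    w[d] = (abs(best[d] - w[d]) * math.exp(l)
                            * math.cos(2 * math.pi * l) + best[d])
            for d in range(dim):              # clamp back into the box
                lo, hi = bounds[d]
                w[d] = min(max(w[d], lo), hi)
            if f(w) < f(best):
                best = w[:]
    return best, f(best)

# Toy stand-in for validation loss over (log10 learning rate, dropout rate).
loss = lambda x: (x[0] + 3) ** 2 + (x[1] - 0.5) ** 2
best, val = woa_minimise(loss, [(-6, 0), (0, 1)])
```

In practice `f` would train a network with the candidate hyperparameters and return a validation metric, which makes each evaluation expensive; the population size and iteration count would be chosen accordingly.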
8

Guzzi, Francesco, George Kourousias, Alessandra Gianoncelli, Fulvio Billè, and Sergio Carrato. "A Parameter Refinement Method for Ptychography Based on Deep Learning Concepts." Condensed Matter 6, no. 4 (October 14, 2021): 36. http://dx.doi.org/10.3390/condmat6040036.

Full text
Abstract:
X-ray ptychography is an advanced computational microscopy technique, which is delivering exceptionally detailed quantitative imaging of biological and nanotechnology specimens, which can be used for high-precision X-ray measurements. However, coarse parametrisation in propagation distance, position errors and partial coherence frequently threaten the experimental viability. In this work, we formally introduce these actors, solving the whole reconstruction as an optimisation problem. A modern deep learning framework was used to autonomously correct the setup incoherences, thus improving the quality of a ptychography reconstruction. Automatic procedures are indeed crucial to reduce the time for a reliable analysis, which has a significant impact on all the fields that use this kind of microscopy. We implemented our algorithm in our software framework, SciComPty, releasing it as open-source. We tested our system on both synthetic datasets, as well as on real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
9

Xiong, Fansheng, Heng Yong, Hua Chen, Han Wang, and Weidong Shen. "Biot's equations-based reservoir parameter inversion using deep neural networks." Journal of Geophysics and Engineering 18, no. 6 (December 2021): 862–74. http://dx.doi.org/10.1093/jge/gxab057.

Full text
Abstract:
Reservoir parameter inversion from seismic data is an important issue in rock physics. The traditional optimisation-based inversion method requires high computational expense, and the process exhibits subjectivity due to the nonuniqueness of generated solutions. This study proposes a deep neural network (DNN)-based approach as a new means to analyse the sensitivity of seismic attributes to basic rock-physics parameters and then realise fast parameter inversion. First, synthetic data of inputs (reservoir properties) and outputs (seismic attributes) are generated using Biot's equations. Then, a forward DNN model is trained to carry out a sensitivity analysis. One can in turn investigate the influence of each rock-physics parameter on the seismic attributes calculated by Biot's equations, and the method can also be used to estimate and evaluate the accuracy of parameter inversion. Finally, DNNs are applied to parameter inversion. Different scenarios are designed to study the inversion accuracy of porosity, bulk and shear moduli of a rock matrix considering that the input quantities are different. It is found that the inversion of porosity is relatively easy and accurate, while more information is needed to make the inversion more accurate for bulk and shear moduli. From the presented results, the new approach makes it possible to realise accurate and pointwise inverse modelling with high efficiency for actual data interpretation and analysis.
10

Rolek, Jaroslaw, and Grzegorz Utrata. "Optimisation of the FE Model Based on the No-Load Test Measurement for Estimating Electromagnetic Parameters of an Induction Motor Equivalent Circuit Including the Rotor Deep-Bar Effect." Energies 14, no. 22 (November 12, 2021): 7562. http://dx.doi.org/10.3390/en14227562.

Full text
Abstract:
Various measurement procedures for determining the electromagnetic parameters of induction motor (IM) equivalent circuits including the rotor deep-bar effect have been proposed in the literature. One of them is based on the load curve test (LCT). Since executing the LCT can pose difficulties, especially in industrial conditions, the finite element method (FEM) can be employed as an alternative to simulate IM operation under the LCT. In this work we developed an optimisation technique for the finite element (FE) model. The technique uses the stator current space-vector components, which determine the IM input active and reactive power consumption during no-load operation. Relying on the LCT simulation carried out with the optimised FE model, the inductance frequency characteristic can be determined and then used as the reference characteristic in electromagnetic parameter estimation for the IM equivalent circuit including the rotor deep-bar effect. The presented research results demonstrate proper conformity between the inductance frequency characteristics obtained from the LCT performed experimentally and those determined by means of the optimised FE model. Satisfactory conformity is also achieved for the torque-versus-slip frequency curves acquired from measurement and calculated by the IM space-vector model with estimated electromagnetic parameters. All of this validates the effectiveness of the proposed FE-model optimisation technique and the usefulness of the FEM-based approach to electromagnetic parameter estimation for the IM equivalent circuit including the rotor deep-bar effect.
11

Kriest, Iris, Paul Kähler, Wolfgang Koeve, Karin Kvale, Volkmar Sauerland, and Andreas Oschlies. "One size fits all? Calibrating an ocean biogeochemistry model for different circulations." Biogeosciences 17, no. 12 (June 18, 2020): 3057–82. http://dx.doi.org/10.5194/bg-17-3057-2020.

Full text
Abstract:
Abstract. Global biogeochemical ocean models are often tuned to match the observed distributions and fluxes of inorganic and organic quantities. This tuning is typically carried out “by hand”. However, this rather subjective approach might not yield the best fit to observations, is closely linked to the circulation employed and is thus influenced by its specific features and even its faults. We here investigate the effect of model tuning, via objective optimisation, of one biogeochemical model of intermediate complexity when simulated in five different offline circulations. For each circulation, three of six model parameters have been adjusted to characteristic features of the respective circulation. The values of these three parameters – namely, the oxygen utilisation of remineralisation, the particle flux parameter and potential nitrogen fixation rate – correlate significantly with deep mixing and ideal age of North Atlantic Deep Water (NADW) and the outcrop area of Antarctic Intermediate Waters (AAIW) and Subantarctic Mode Water (SAMW) in the Southern Ocean. The clear relationship between these parameters and circulation characteristics, which can be easily diagnosed from global models, can provide guidance when tuning global biogeochemistry within any new circulation model. The results from 20 global cross-validation experiments show that parameter sets optimised for a specific circulation can be transferred between similar circulations without losing too much of the model's fit to observed quantities. When compared to model intercomparisons of subjectively tuned, global coupled biogeochemistry–circulation models, each with different circulation and/or biogeochemistry, our results show a much lower range of oxygen inventory, oxygen minimum zone (OMZ) volume and global biogeochemical fluxes. Export production depends to a large extent on the circulation applied, while deep particle flux is mostly determined by the particle flux parameter. 
Oxygen inventory, OMZ volume, primary production and fixed-nitrogen turnover depend more or less equally on both factors, with OMZ volume showing the highest sensitivity, and residual variability. These results show a beneficial effect of optimisation, even when a biogeochemical model is first optimised in a relatively coarse circulation and then transferred to a different finer-resolution circulation model.
12

Motz, C., T. Paireder, and M. Huemer. "Deep unfolding based hyper‐parameter optimisation for self‐interference cancellation in LTE‐A/5G‐transceivers." Electronics Letters 57, no. 18 (May 29, 2021): 711–13. http://dx.doi.org/10.1049/ell2.12230.

Full text
13

Ni, Ni, and Shaoshao Xu. "Model optimization strategies based on deep neural networks Learning and application of pruning optimization algorithms." Journal of Physics: Conference Series 2303, no. 1 (July 1, 2022): 012033. http://dx.doi.org/10.1088/1742-6596/2303/1/012033.

Full text
Abstract:
Deep learning is becoming increasingly important in the context of machine learning, with notable performance achieved in many experiments and attempts. However, if one wants to port a deep learning network from a computationally powerful CPU platform to a small embedded mobile device, it will be constrained in various respects, such as power consumption and storage. This paper addresses this problem by simplifying the model, i.e. by model compression. Specifically, by collating work on compression and pruning techniques from recent years, the importance of the parameter weights of the data model in the training phase is analysed, and the model is then tested with new data in the inference phase to reduce the parameter weights and avoid over-parameterisation, thus achieving model optimisation. Ultimately, an effective idea for model pruning was derived: using the absolute magnitude of the parameters and feature outputs to measure their importance, while analysing the effect of parameter clipping on the loss function. This paper provides a summary distillation of relevant model compression work in recent years to provide a reference for research enthusiasts in the field of pruning and to pave the way for further research.
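The core idea surveyed here, ranking weights by absolute magnitude and clipping the smallest, can be sketched in a few lines. The function below is an illustrative one-shot magnitude pruner over a plain nested list, not code from the paper:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (one-shot magnitude pruning).

    weights  : 2D list representing one layer's weight matrix
    sparsity : fraction of weights to remove, in [0, 1]
    """
    # Sort all magnitudes to find the cut-off for the requested sparsity.
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)                  # number of weights to drop
    threshold = flat[k - 1] if k > 0 else float("-inf")
    # Keep a weight only if its magnitude exceeds the threshold.
    return [[w if abs(w) > threshold else 0.0 for w in row] for row in weights]

layer = [[0.8, -0.05, 0.3], [-0.01, 0.6, -0.2]]
pruned = magnitude_prune(layer, 0.5)   # drop the 3 smallest of 6 weights
```

Real pruning pipelines operate on framework tensors, prune iteratively with fine-tuning between rounds, and may rank by feature-output magnitude instead; this sketch only shows the ranking-and-clipping step.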
14

Fei, Zhenyu, Zengxi Pan, Dominic Cuiuri, Huijun Li, Bintao Wu, Donghong Ding, and Lihong Su. "Effect of Heat Input on Weld Formation and Tensile Properties in Keyhole Mode TIG Welding Process." Metals 9, no. 12 (December 7, 2019): 1327. http://dx.doi.org/10.3390/met9121327.

Full text
Abstract:
Keyhole mode Tungsten Inert Gas (K-TIG) welding is a novel advanced deep-penetration welding technology which provides an alternative to high-power-density welding for achieving keyhole-mode welding. In order to facilitate welding procedure optimisation in this newly developed welding technology, the relationship among welding parameters, weld formation and tensile properties during K-TIG welding was investigated in detail. Results show that, apart from travel speed, the heat input level also plays an important role in forming the undercut defect by changing the plasma jet trajectory inside the keyhole channel, leading to the formation of a hump in the weld centre and exacerbation of undercut formation. Both the undercut defect and the root-side fusion boundary can act as stress concentration points, which affect the fracture mode and tensile properties considerably. The research results provide practical guidance for process parameter optimisation and quality assurance in the K-TIG welding process.
15

Ledent, Antoine, Waleed Mustafa, Yunwen Lei, and Marius Kloft. "Norm-Based Generalisation Bounds for Deep Multi-Class Convolutional Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 8279–87. http://dx.doi.org/10.1609/aaai.v35i9.17007.

Full text
Abstract:
We show generalisation error bounds for deep learning with two main improvements over the state of the art. (1) Our bounds have no explicit dependence on the number of classes except for logarithmic factors. This holds even when formulating the bounds in terms of the Frobenius-norm of the weight matrices, where previous bounds exhibit at least a square-root dependence on the number of classes. (2) We adapt the classic Rademacher analysis of DNNs to incorporate weight sharing---a task of fundamental theoretical importance which was previously attempted only under very restrictive assumptions. In our results, each convolutional filter contributes only once to the bound, regardless of how many times it is applied. Further improvements exploiting pooling and sparse connections are provided. The presented bounds scale as the norms of the parameter matrices, rather than the number of parameters. In particular, contrary to bounds based on parameter counting, they are asymptotically tight (up to log factors) when the weights approach initialisation, making them suitable as a basic ingredient in bounds sensitive to the optimisation procedure. We also show how to adapt the recent technique of loss function augmentation to replace spectral norms by empirical analogues whilst maintaining the advantages of our approach.
16

Krauße, T., and J. Cullmann. "Towards a more representative parametrisation of hydrologic models via synthesizing the strengths of Particle Swarm Optimisation and Robust Parameter Estimation." Hydrology and Earth System Sciences 16, no. 2 (February 27, 2012): 603–29. http://dx.doi.org/10.5194/hess-16-603-2012.

Full text
Abstract:
Abstract. The development of methods for estimating the parameters of hydrologic models considering uncertainties has been of high interest in hydrologic research over the last years. In particular methods which understand the estimation of hydrologic model parameters as a geometric search of a set of robust performing parameter vectors by application of the concept of data depth found growing research interest. Bárdossy and Singh (2008) presented a first Robust Parameter Estimation Method (ROPE) and applied it for the calibration of a conceptual rainfall-runoff model with daily time step. The basic idea of this algorithm is to identify a set of model parameter vectors with high model performance called good parameters and subsequently generate a set of parameter vectors with high data depth with respect to the first set. Both steps are repeated iteratively until a stopping criterion is met. The results estimated in this case study show the high potential of the principle of data depth to be used for the estimation of hydrologic model parameters. In this paper we present some further developments that address the most important shortcomings of the original ROPE approach. We developed a stratified depth based sampling approach that improves the sampling from non-elliptic and multi-modal distributions. It provides a higher efficiency for the sampling of deep points in parameter spaces with higher dimensionality. Another modification addresses the problem of a too strong shrinking of the estimated set of robust parameter vectors that might lead to overfitting for model calibration with a small amount of calibration data. This contradicts the principle of robustness. Therefore, we suggest to split the available calibration data into two sets and use one set to control the overfitting. All modifications were implemented into a further developed ROPE approach that is called Advanced Robust Parameter Estimation (AROPE). 
However, in this approach the estimation of the good parameters is still based on an ineffective Monte Carlo approach. Therefore we developed another approach called ROPE with Particle Swarm Optimisation (ROPE-PSO) that substitutes the Monte Carlo approach with a more effective and efficient approach based on Particle Swarm Optimisation. Two case studies demonstrate the improvements of the developed algorithms when compared with the first ROPE approach and two other classical optimisation approaches calibrating a process oriented hydrologic model with hourly time step. The focus of both case studies is on modelling flood events in a small catchment characterised by extreme process dynamics. The calibration problem was repeated with higher dimensionality considering the uncertainty in the soil hydraulic parameters and another conceptual parameter of the soil module. We discuss the estimated results and propose further possibilities in order to apply ROPE as a well-founded parameter estimation and uncertainty analysis tool.
17

Lin, Wen-Hui, Ping Wang, Kuo-Ming Chao, Hsiao-Chung Lin, Zong-Yu Yang, and Yu-Huang Lai. "Deep-Learning Model Selection and Parameter Estimation from a Wind Power Farm in Taiwan." Applied Sciences 12, no. 14 (July 13, 2022): 7067. http://dx.doi.org/10.3390/app12147067.

Full text
Abstract:
Deep learning networks (DLNs) use multilayer neural networks for multiclass classification and exhibit good results in wind-power forecasting applications. However, improving the training process through proper hyper-parameterisations and techniques such as regularisation and Adam-based optimisation remains a challenge in the design of DLNs for processing time-series data. Moreover, the most appropriate parameters for a DLN model to solve the wind-power forecasting problem must be chosen with the training settings, such as the optimiser, activation function, batch size, and dropout, taken into account. Reinforcement learning (RL) schemes constitute a smart approach to exploring proper initial parameters for the developed DLN model, considering a balance between exploration and exploitation processes. Therefore, the present study focuses on determining proper hyperparameters for DLN models using a Q-learning scheme for four developed models. To verify the effectiveness of the developed temporal convolution network (TCN) models, experiments with five different sets of initial parameters for the TCN model, determined by the output of the Q-learning computation, were performed. The experimental results showed that the TCN accuracy for 168 h wind-power prediction reached a mean absolute percentage error of 1.41%. To evaluate the effectiveness of hyperparameter selection for the proposed model, the performance of four DLN-based prediction models for power forecasting (the TCN, long short-term memory (LSTM), recurrent neural network (RNN), and gated recurrent unit (GRU) models) was compared. The TCN model exhibited higher overall prediction accuracy than the canonical recurrent networks (i.e., the GRU, LSTM, and RNN models).
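As a hedged sketch of how a Q-learning scheme can select hyperparameters, the example below reduces the problem to a single-state bandit over a hypothetical grid of settings; the grid, reward function and learning constants are invented for illustration and are not the paper's:

```python
import random

def q_learn_hyperparams(candidates, reward_fn, episodes=300, alpha=0.1, eps=0.2, seed=1):
    """Select a hyperparameter setting via epsilon-greedy Q-learning
    (single state, so each action is simply one candidate setting)."""
    rng = random.Random(seed)
    q = {i: 0.0 for i in range(len(candidates))}   # Q-value per candidate
    for _ in range(episodes):
        if rng.random() < eps:                     # explore: try a random setting
            i = rng.randrange(len(candidates))
        else:                                      # exploit: best Q so far
            i = max(q, key=q.get)
        r = reward_fn(candidates[i])               # e.g. negative validation error
        q[i] += alpha * (r - q[i])                 # incremental Q update
    return candidates[max(q, key=q.get)]

# Hypothetical grid of (learning rate, batch size); the noisy reward peaks at (0.01, 32).
random.seed(0)  # make the illustrative noisy reward reproducible
grid = [(lr, bs) for lr in (0.1, 0.01, 0.001) for bs in (16, 32, 64)]
noisy_reward = lambda h: -abs(h[0] - 0.01) * 10 - abs(h[1] - 32) / 64 + random.gauss(0, 0.01)
best = q_learn_hyperparams(grid, noisy_reward)
```

In a full setup the reward would come from training and validating a model with the candidate settings, and a richer state (e.g. the current configuration) would turn this into proper multi-state Q-learning.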
18

Krauße, T., and J. Cullmann. "Towards a more representative parametrisation of hydrological models via synthesizing the strengths of particle swarm optimisation and robust parameter estimation." Hydrology and Earth System Sciences Discussions 8, no. 2 (March 7, 2011): 2373–422. http://dx.doi.org/10.5194/hessd-8-2373-2011.

Full text
Abstract:
Abstract. The development of methods for estimating the parameters of hydrological models considering uncertainties has been of high interest in hydrological research over the last years. In particular methods which understand the estimation of hydrological model parameters as a geometric search of a set of robust performing parameter vectors by application of the concept of data depth found growing research interest. Bárdossy and Singh (2008) presented a first proposal and applied it for the calibration of a conceptual rainfall-runoff model with daily time step. Krauße and Cullmann (2011) further developed this method and applied it in a case study to calibrate a process oriented hydrological model with hourly time step focussing on flood events in a fast responding catchment. The results of both studies showed the potential of the application of the principle of data depth. However, also the weak point of the presented approach got obvious. The algorithm identifies a set of model parameter vectors with high model performance and subsequently generates a set of parameter vectors with high data depth with respect to the first set. These both steps are repeated iteratively until a stopping criterion is met. In the first step the estimation of the good parameter vectors is based on the Monte Carlo method. The major shortcoming of this method is that it is strongly dependent on a high number of samples exponentially growing with the dimensionality of the problem. In this paper we present another robust parameter estimation strategy which applies an approved search strategy for high-dimensional parameter spaces, the particle swarm optimisation in order to identify a set of good parameter vectors with given uncertainty bounds. The generation of deep parameters is according to Krauße and Cullmann (2011). 
The method was compared to the Monte Carlo based robust parameter estimation algorithm on the example of a case study in Krauße and Cullmann (2011) to calibrate the process-oriented distributed hydrological model focussing for flood forecasting in a small catchment characterised by extreme process dynamics. In a second case study the comparison is repeated on a problem with higher dimensionality considering further parameters of the soil module.
19

van der Kamp, L., D. M. van Dam, M. A. Kenworthy, E. E. Mamajek, and G. Pojmański. "K2 discovery of a circumsecondary disk transiting EPIC 220208795." Astronomy & Astrophysics 658 (January 28, 2022): A38. http://dx.doi.org/10.1051/0004-6361/202142063.

Full text
Abstract:
Context. Observations of the star EPIC 220208795 (2MASS J01105556+0018507) reveal a single, deep and asymmetric eclipse, which we hypothesise is due to an eclipsing companion surrounded by a tilted and inclined opaque disk, similar to those seen around V928 Tau and EPIC 204376071. Aims. We aim to derive physical parameters of the disk and orbital parameters for the companion around the primary star. Methods. The modelling is carried out using a modified version of the python package pyPplusS, and optimisation is done using emcee. The period analysis makes use of photometry from ground-based surveys, where we performed a period folding search for other possible eclipses by the disk. Parameters obtained by the best model fits are used to obtain the parameter space of the orbital parameters, while the most likely period obtained is used to constrain these parameters. Results. The best model has an opaque disk with a radius of 1.14 ± 0.03 R⊙, an impact parameter of 0.61 ± 0.02 R⊙, an inclination of 77.01° ± 0.03°, a tilt of 36.81° ± 0.05°, and a transverse velocity of 77.45 ± 0.05 km s−1. The two most likely periods are ~290 days and ~236 days, corresponding to an eccentricity of ~0.7, allowing us to make predictions for the epochs of the next eclipses. All models with tilted and inclined disks result in a minimum derived eccentricity of 0.3, which in combination with the two other known small transiting disk candidates V928 Tau and EPIC 204376071, suggest that there may be a common origin for their eccentric orbits.
20

Sun, Hongming, Wei Guo, Yanjun Lan, Zhenzhuo Wei, Sen Gao, Yu Sun, and Yifan Fu. "Black-Box Modelling and Prediction of Deep-Sea Landing Vehicles Based on Optimised Support Vector Regression." Journal of Marine Science and Engineering 10, no. 5 (April 24, 2022): 575. http://dx.doi.org/10.3390/jmse10050575.

Full text
Abstract:
Due to the nonlinearity of the deep seafloor and the complexity of the hydrodynamic forces on novel structure platforms, accurately modelling the motion of a deep-sea landing vehicle (DSLV) is difficult. A support vector regression (SVR) model optimised through particle swarm optimisation (PSO) was used to complete the black-box motion modelling and prediction of the vehicle. In this study, the prototype and system composition of the DSLV were first proposed, and subsequently the high-dimensional nonlinear mapping relationship between the motion state and the driving forces was constructed using SVR with a radial basis function kernel. The high-precision model parameter combination was obtained using PSO, and the black-box modelling and prediction of the vehicle were then realised. Finally, the effectiveness of the method was verified through multi-body dynamics simulation and scaled test prototype data. The experimental results confirmed that the proposed PSO–SVR model could establish an accurate motion model of the vehicle and provided high-precision motion state prediction. Furthermore, with less calculation, the proposed method can reliably apply the model prediction results to the intelligent behaviour control and planning of the vehicle, accelerate the development of the prototype, and minimise the economic cost of the research and development process.
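A minimal sketch of the PSO component (the global-best velocity/position update) applied to a toy stand-in for an SVR cross-validation error surface; the bounds, coefficients and error function are illustrative assumptions, not values from the paper:

```python
import random

def pso_minimise(f, bounds, n_particles=15, n_iter=80, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f with a basic particle swarm (gbest topology)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best-known position
    gbest = min(pbest, key=f)[:]           # swarm's best-known position
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + pull towards personal best + pull towards global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d], lo), hi)   # clamp into the box
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest, f(gbest)

# Toy stand-in for SVR cross-validation error over (log10 C, log10 gamma).
err = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
best, e = pso_minimise(err, [(-3, 3), (-5, 1)])
```

In the paper's setting, `f` would run cross-validation of an SVR model at the candidate hyperparameters, which makes each evaluation far more expensive than this quadratic.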
APA, Harvard, Vancouver, ISO, and other styles
22

Barakat, Chadi, Marcel Aach, Andreas Schuppert, Sigurður Brynjólfsson, Sebastian Fritsch, and Morris Riedel. "Analysis of Chest X-ray for COVID-19 Diagnosis as a Use Case for an HPC-Enabled Data Analysis and Machine Learning Platform for Medical Diagnosis Support." Diagnostics 13, no. 3 (January 20, 2023): 391. http://dx.doi.org/10.3390/diagnostics13030391.

Full text
Abstract:
The COVID-19 pandemic shed light on the need for quick diagnosis tools in healthcare, leading to the development of several algorithmic models for disease detection. Though these models are relatively easy to build, their training requires a lot of data, storage, and resources, which may not be available for use by medical institutions or could be beyond the skillset of the people who most need these tools. This paper describes a data analysis and machine learning platform that takes advantage of high-performance computing infrastructure for medical diagnosis support applications. This platform is validated by re-training a previously published deep learning model (COVID-Net) on new data, where it is shown that the performance of the model is improved through large-scale hyperparameter optimisation that uncovered optimal training parameter combinations. The per-class accuracy of the model, especially for COVID-19 and pneumonia, is higher when using the tuned hyperparameters (healthy: 96.5%; pneumonia: 61.5%; COVID-19: 78.9%) as opposed to parameters chosen through traditional methods (healthy: 93.6%; pneumonia: 46.1%; COVID-19: 76.3%). Furthermore, training speed-up analysis shows a major decrease in training time as resources increase, from 207 min using 1 node to 54 min when distributed over 32 nodes, but highlights the presence of a cut-off point where the communication overhead begins to affect performance. The developed platform is intended to provide the medical field with a technical environment for developing novel portable artificial-intelligence-based tools for diagnosis support.
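The large-scale hyperparameter optimisation step can be illustrated as a plain random search over a configuration space; the scoring function below is a stand-in for a real validation run, and all parameter names and values are hypothetical:

```python
import math
import random

def random_search(score, space, n_trials=500, seed=0):
    """Random hyperparameter search: sample each parameter independently
    from its candidate list and keep the best-scoring configuration."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        s = score(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

# Stand-in for a validation metric, peaked at lr = 1e-3 and batch size 32.
def fake_val_accuracy(cfg):
    return 0.9 - 0.1 * abs(math.log10(cfg["lr"]) + 3) - 0.001 * abs(cfg["batch"] - 32)

space = {"lr": [1e-4, 3e-4, 1e-3, 3e-3, 1e-2], "batch": [16, 32, 64, 128]}
best_cfg, best_acc = random_search(fake_val_accuracy, space)
```

In practice each trial is an expensive training run, which is exactly why the paper distributes them over HPC nodes.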
APA, Harvard, Vancouver, ISO, and other styles
23

Abduh, Setya Budi Muhammad, Sze Ying Leong, Chun Zhao, Samantha Baldwin, David J. Burritt, Dominic Agyei, and Indrawati Oey. "Kinetics of Colour Development during Frying of Potato Pre-Treated with Pulsed Electric Fields and Blanching: Effect of Cultivar." Foods 10, no. 10 (September 28, 2021): 2307. http://dx.doi.org/10.3390/foods10102307.

Full text
Abstract:
The current research aimed to investigate the effect of pulsed electric fields (PEF; 1 kV/cm; 50 and 150 kJ/kg) followed by blanching (3 min, 100 °C) on the colour development of potato slices during frying on a kinetic basis. Four potato cultivars, ‘Crop77’, ‘Moonlight’, ‘Nadine’ and ‘Russet Burbank’, with different contents of glucose and amino acids were used. Lightness (L* values from colorimeter measurement) was used as a parameter to assess the colour development during frying. The implementation of PEF and blanching as a sequential pre-treatment prior to frying was found effective in improving the lightness of the fried products for all potato cultivars. PEF pre-treatment did not change the kinetics of L* reduction during frying (between 150 and 190 °C), which followed first-order reaction kinetics. The estimated reaction rate constant (k) and activation energy (Ea, based on the Arrhenius equation) for non-PEF and PEF-treated samples were cultivar dependent. The estimated Ea values during the frying of PEF-treated ‘Russet Burbank’ and ‘Crop77’ were significantly (p < 0.05) lower (by up to 30%) than those of their non-PEF counterparts, indicating that the change in the k value of L* became less temperature dependent during frying. This kinetic study is valuable to aid the optimisation of frying conditions in the deep-fried potato industry when PEF technology is implemented.
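The kinetic analysis rests on two standard fits: a first-order decay for L* at each temperature, and an Arrhenius fit of the resulting rate constants. A minimal sketch with made-up numbers (not the paper's data):

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def fit_rate_constant(t_min, L):
    """First-order kinetics: ln L* = ln L*_0 - k t, so k is minus the slope."""
    slope, _ = np.polyfit(t_min, np.log(L), 1)
    return -slope

def fit_activation_energy(temps_K, ks):
    """Arrhenius: ln k = ln A - Ea/(R T); Ea comes from the slope of ln k vs 1/T."""
    slope, _ = np.polyfit(1.0 / np.array(temps_K), np.log(ks), 1)
    return -slope * R

# Illustrative data: simulated L* decay at three frying temperatures.
temps = [423.15, 443.15, 463.15]          # 150, 170, 190 degC in kelvin
Ea_true, A = 40_000.0, 50.0               # J/mol and 1/min (made up)
t = np.linspace(0.5, 8.0, 10)             # frying time, minutes
ks = []
for T in temps:
    k = A * np.exp(-Ea_true / (R * T))
    ks.append(fit_rate_constant(t, 80.0 * np.exp(-k * t)))
Ea = fit_activation_energy(temps, ks)
```

With noiseless synthetic data the fitted Ea recovers the assumed 40 kJ/mol; real measurements would of course scatter around the line.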
APA, Harvard, Vancouver, ISO, and other styles
24

Bellotti, Renato, Romana Boiger, and Andreas Adelmann. "Fast, Efficient and Flexible Particle Accelerator Optimisation Using Densely Connected and Invertible Neural Networks." Information 12, no. 9 (August 28, 2021): 351. http://dx.doi.org/10.3390/info12090351.

Full text
Abstract:
Particle accelerators are enabling tools for scientific exploration and discovery in various disciplines. However, finding optimised operation points for these complex machines is a challenging task due to the large number of parameters involved and the underlying non-linear dynamics. Here, we introduce two families of data-driven surrogate models, based on deep and invertible neural networks, that can replace the expensive physics computer models. These models are employed in multi-objective optimisations to find Pareto-optimal operation points for two fundamentally different types of particle accelerators. Our approach reduces the time-to-solution for a multi-objective accelerator optimisation by up to a factor of 640 and the computational cost by up to 98%. The framework established here should pave the way for future online and real-time multi-objective optimisation of particle accelerators.
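Whatever surrogate supplies the objective values, the multi-objective step ends with extracting the Pareto-optimal points. A minimal non-dominated filter for two objectives, both minimised (illustrative, not the paper's framework):

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of (f1, f2) objective
    pairs, both objectives to be minimised."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# Toy objective values, e.g. (emittance, energy spread) from surrogate runs.
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
front = pareto_front(pts)
```

This O(n²) filter is fine for small candidate sets; surrogate-driven optimisers typically embed the same dominance test inside an evolutionary loop.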
APA, Harvard, Vancouver, ISO, and other styles
25

Brotchie, James, Wei Shao, Wenchao Li, and Allison Kealy. "Leveraging Self-Attention Mechanism for Attitude Estimation in Smartphones." Sensors 22, no. 22 (November 21, 2022): 9011. http://dx.doi.org/10.3390/s22229011.

Full text
Abstract:
Inertial attitude estimation is a crucial component of many modern systems and applications. Attitude estimation from commercial-grade inertial sensors has been the subject of an abundance of research in recent years due to the proliferation of Inertial Measurement Units (IMUs) in mobile devices such as the smartphone. Traditional methodologies involve probabilistic, iterative state estimation; however, these approaches do not generalise well over changing motion dynamics and environmental conditions, as they require context-specific parameter tuning. In this work, we explore novel methods for attitude estimation from low-cost inertial sensors using a self-attention-based neural network, the Attformer. This paper proposes to part ways with the traditional cycle of continuous-integration algorithms and to formulate attitude estimation as an optimisation problem. This approach separates itself by leveraging attention operations to learn the complex patterns and dynamics associated with inertial data, allowing for linear complexity in the dimension of the feature vector to account for these patterns. Additionally, we look at combining traditional state-of-the-art approaches with our self-attention method. These models were evaluated on entirely unseen sequences, over a range of different activities, users and devices, and compared with a recent alternate deep learning approach, the unscented Kalman filter (UKF) and the iOS CoreMotion API. The inbuilt iOS CoreMotion API had a mean angular distance from the true attitude of 117.31°, the GRU 21.90°, the UKF 16.38°, the Attformer 16.28° and, finally, the UKF–Attformer had a mean angular distance of 10.86°. We show that this plug-and-play solution outperforms previous approaches and generalises well across different users, devices and activities.
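The core building block of a self-attention network like the Attformer is scaled dot-product attention. A minimal NumPy sketch with illustrative shapes (not the paper's architecture or weights):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape
    (seq_len, d_model): softmax(Q K^T / sqrt(d_k)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d_k))   # (seq_len, seq_len) attention weights
    return A @ V, A

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 6, 8, 4
X = rng.normal(size=(seq_len, d_model))        # e.g. a window of IMU features
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```

Each output timestep is a weighted mix of all timesteps, which is what lets attention capture long-range motion patterns that iterative filters handle with hand-tuned dynamics.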
APA, Harvard, Vancouver, ISO, and other styles
26

Degueldre, Claude. "Zirconia Inert Matrix for Plutonium Utilisation and Minor Actinides Disposition in Thermal Reactors." Advances in Science and Technology 45 (October 2006): 1907–14. http://dx.doi.org/10.4028/www.scientific.net/ast.45.1907.

Full text
Abstract:
The toxicity of UO2 spent fuel is dominated by plutonium and the minor actinides (MA) Np, Am and Cm after decay of the short-lived fission products. Zirconia ceramics containing Pu and MA in the form of an Inert Matrix Fuel (IMF) could be used to burn these actinides in Light Water Reactors. Optimisation of the fuel designs, dictated by thermal, mechanical, chemical and physical properties, must be performed with attention to their behaviour under irradiation. Zirconia must be stabilised by yttria to form a solid solution such as AnzYyPuxZr1-yO2-y, in which minor actinide oxides are also soluble. Burnable poisons such as Gd, Ho, Er, Eu, or Np and Am themselves, may be added if necessary. These cubic solid solutions are stable under heavy ion irradiation. The retention of fission products in zirconia under similar thermodynamic conditions is a priori stronger than in UO2, the lattice parameter being larger for UO2 than for (Y,Zr)O2-x. (Er,Y,Pu,Zr)O2-x in which the Pu contains 5% Am was successfully irradiated in the Proteus reactor at PSI, in the HFR facility at Petten, as well as in the Halden Reactor. These irradiations make the Swiss scientists confident to irradiate such IMF in a commercial reactor, which would later allow commercial deployment of such a fuel for Pu and MA utilisation in a last cycle. The fuel forms, namely pellets of solid solution, cercer or cermet fuel, are discussed considering the once-through strategy. For this strategy, low solubility of the inert matrix is required for geological disposal. As spent fuels, these IMFs are demanding materials from the solubility point of view; this parameter was therefore studied in detail for a range of solutions corresponding to groundwater under near-field conditions. Under these conditions the IMF solubility is 10^6 times smaller than that of glass, which makes the zirconia material very attractive for deep geological disposal. The desired objective would be to use IMF to produce energy in reactors, opting for an economical and ecological solution.
APA, Harvard, Vancouver, ISO, and other styles
27

Fielding, Ben, and Li Zhang. "Evolving Deep DenseBlock Architecture Ensembles for Image Classification." Electronics 9, no. 11 (November 9, 2020): 1880. http://dx.doi.org/10.3390/electronics9111880.

Full text
Abstract:
Automatic deep architecture generation is a challenging task, owing to the large number of controlling parameters inherent in the construction of deep networks. The combination of these parameters leads to the creation of large, complex search spaces that are practically impossible to navigate properly without a huge amount of resources for parallelisation. To deal with such challenges, in this research we propose a Swarm Optimised DenseBlock Architecture Ensemble (SODBAE) method, a joint optimisation and training process that explores a constrained search space over a skeleton DenseBlock Convolutional Neural Network (CNN) architecture. Specifically, we employ novel weight inheritance learning mechanisms, a DenseBlock skeleton architecture, as well as adaptive Particle Swarm Optimisation (PSO) with cosine search coefficients to devise networks whilst maintaining practical computational costs. Moreover, the architecture design takes advantage of recent advancements of the concepts of residual connections and dense connectivity, in order to yield CNN models with a much wider variety of structural variations. The proposed weight inheritance learning schemes perform joint optimisation and training of the architectures to reduce the computational costs. Being evaluated using the CIFAR-10 dataset, the proposed model shows great superiority in classification performance over other state-of-the-art methods while illustrating a greater versatility in architecture generation.
APA, Harvard, Vancouver, ISO, and other styles
28

Soon, Foo Chong, Hui Ying Khaw, Joon Huang Chuah, and Jeevan Kanesan. "Hyper-parameters optimisation of deep CNN architecture for vehicle logo recognition." IET Intelligent Transport Systems 12, no. 8 (October 1, 2018): 939–46. http://dx.doi.org/10.1049/iet-its.2018.5127.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Barber, R. L., M. K. Ghantasala, R. Divan, K. D. Vora, E. C. Harvey, and D. C. Mancini. "Optimisation of SU-8 processing parameters for deep X-ray lithography." Microsystem Technologies 11, no. 4-5 (April 2005): 303–10. http://dx.doi.org/10.1007/s00542-004-0442-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Pfrommer, Julius, Clemens Zimmerling, Jinzhao Liu, Luise Kärger, Frank Henning, and Jürgen Beyerer. "Optimisation of manufacturing process parameters using deep neural networks as surrogate models." Procedia CIRP 72 (2018): 426–31. http://dx.doi.org/10.1016/j.procir.2018.03.046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Karabağ, Cefa, Jo Verhoeven, Naomi Rachel Miller, and Constantino Carlos Reyes-Aldasoro. "Texture Segmentation: An Objective Comparison between Five Traditional Algorithms and a Deep-Learning U-Net Architecture." Applied Sciences 9, no. 18 (September 17, 2019): 3900. http://dx.doi.org/10.3390/app9183900.

Full text
Abstract:
This paper compares a series of traditional and deep learning methodologies for the segmentation of textures. Six well-known texture composites first published by Randen and Husøy were used to compare traditional segmentation techniques (co-occurrence, filtering, local binary patterns, watershed, multiresolution sub-band filtering) against a deep-learning approach based on the U-Net architecture. For the latter, the effects of the depth of the network, the number of epochs and different optimisation algorithms were investigated. Overall, the best results were provided by the deep-learning approach. However, the best results were spread across different parameter configurations, and many configurations produced results well below those of the traditional techniques.
APA, Harvard, Vancouver, ISO, and other styles
32

Pepelnjak, Tomaž, Tomaž Bren, Bojan Železnik, and Mitja Kuštra. "Computer-Assisted Design of Sheet Metal Component Formed from Stainless Steel." Materials and Geoenvironment 65, no. 2 (September 1, 2018): 71–78. http://dx.doi.org/10.2478/rmzmag-2018-0012.

Full text
Abstract:
The development of a product from stainless steel, which is produced for the client in large series, is presented. Technological optimisation was mainly focussed on the design of the deep drawing process in a single operation, which proved to be technologically unstable and therefore unfeasible for the prescribed shape of the product. Testing of prototype products showed unacceptable wrinkling due to the cone-shaped geometry of the workpiece. For this purpose, the research work was oriented towards technological optimisation of the forming operations and the set-up of a proper phase plan in order to eliminate the wrinkling of the material. Testing of several different materials of the same quality was performed to determine the appropriate input parameters used for digital analyses. The analyses were focussed on the set-up of an optimal forming process and the appropriate geometry of the corresponding tool, which allowed deep drawing of the workpiece without tearing and/or wrinkling of the material. The analyses of the forming process performed in the digital environment were tested with experiments, which showed a good correlation between the results of both development concepts.
APA, Harvard, Vancouver, ISO, and other styles
33

Wadas, Maciej, and Jakub Smołka. "Performance analysis of the TensorFlow library with different optimisation algorithms." Journal of Computer Sciences Institute 21 (December 30, 2021): 330–35. http://dx.doi.org/10.35784/jcsi.2738.

Full text
Abstract:
This paper presents the results of a performance analysis of the TensorFlow library used in machine learning and deep neural networks. The analysis focuses on comparing the parameters obtained when training the neural network model with the optimisation algorithms Adam, Nadam, AdaMax, AdaDelta and AdaGrad. Special attention has been paid to the differences between training efficiency on tasks using the microprocessor and the graphics card. For the study, neural network models were created to recognise Polish handwritten characters. The results obtained showed that the most efficient algorithm is AdaMax, while the computer component used during the research only affects the training time of the neural network model.
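The optimisers compared here differ only in their parameter-update rules. Writing two of them out in NumPy on a toy quadratic makes the difference concrete; this is a sketch of the textbook update equations, not of TensorFlow internals:

```python
import numpy as np

def adagrad_step(theta, grad, state, lr=0.1, eps=1e-8):
    """AdaGrad: per-parameter rate shrinks with accumulated squared gradients."""
    state["G"] = state.get("G", 0.0) + grad ** 2
    return theta - lr * grad / (np.sqrt(state["G"]) + eps)

def adam_step(theta, grad, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """Adam: bias-corrected first and second moment estimates of the gradient."""
    state["t"] = state.get("t", 0) + 1
    state["m"] = b1 * state.get("m", 0.0) + (1 - b1) * grad
    state["v"] = b2 * state.get("v", 0.0) + (1 - b2) * grad ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps)

# Minimise f(x) = (x - 3)^2 with each rule; the gradient is 2 (x - 3).
results = {}
for name, step in [("adagrad", adagrad_step), ("adam", adam_step)]:
    x, state = 0.0, {}
    for _ in range(2000):
        x = step(x, 2 * (x - 3.0), state)
    results[name] = x
```

AdaMax and Nadam, the other algorithms in the comparison, are small variants of the Adam rule (an infinity-norm second moment and Nesterov momentum, respectively).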
APA, Harvard, Vancouver, ISO, and other styles
34

Wu, Pengcheng, Yan He, Yufeng Li, Jingsen He, Xueqian Liu, and Yulin Wang. "Multi-objective optimisation of machining process parameters using deep learning-based data-driven genetic algorithm and TOPSIS." Journal of Manufacturing Systems 64 (July 2022): 40–52. http://dx.doi.org/10.1016/j.jmsy.2022.05.016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Lu, Fengyi, Guanghui Zhou, Chao Zhang, Yang Liu, Fengtian Chang, and Zhongdong Xiao. "Energy-efficient multi-pass cutting parameters optimisation for aviation parts in flank milling with deep reinforcement learning." Robotics and Computer-Integrated Manufacturing 81 (June 2023): 102488. http://dx.doi.org/10.1016/j.rcim.2022.102488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Jalil Jassim Ghrabat, Mudhafar, Zaid Alaa Hussien, Mustafa S. Khalefa, Zaid Ameen Abduljabbar, Vincent Omollo Nyangaresi, Mustafa A. Al Sibahee, and Enas Wahab Abood. "Fully automated model on breast cancer classification using deep learning classifiers." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 1 (October 1, 2022): 183. http://dx.doi.org/10.11591/ijeecs.v28.i1.pp183-191.

Full text
Abstract:
Deep learning models on the same database have varied accuracy ratings; as such, additional parameters, such as pre-processing, data augmentation and transfer learning, can influence the models’ capacity to obtain higher accuracy. In this paper, a fully automated model is designed using a deep learning algorithm to capture images from patients and to pre-process, segment and classify the intensity of cancer spread. In the first pre-processing step, pectoral muscles are removed from the input images, which are then downsized. The removal of pectoral muscles after identification may become crucial in classification systems. Finally, the pectoral muscles are deleted from the picture by using an area expanding segmentation. All mammograms are downsized to reduce processing time. Each stage of the fully automated model uses an optimisation approach to obtain high-accuracy results at the respective stages. Simulation is conducted to test the efficacy of the model against state-of-the-art models, and the proposed fully automated model is thoroughly investigated. For a more accurate comparison, we include the model in our analysis. In a nutshell, this work offers a wealth of information as well as a review and discussion of the experimental conditions used by studies on classifying breast cancer images.
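The "area expanding segmentation" used to remove the pectoral muscle is a form of region growing; a minimal sketch on a toy image (illustrative data, not the authors' pipeline):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed`, accepting 4-connected pixels whose
    intensity differs from the seed pixel by at most `tol`."""
    h, w = img.shape
    seed_val = img[seed]
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc] \
                    and abs(int(img[rr, cc]) - int(seed_val)) <= tol:
                mask[rr, cc] = True
                queue.append((rr, cc))
    return mask

# Toy mammogram-like patch: a bright wedge (the "pectoral muscle") in one corner.
img = np.full((8, 8), 30, dtype=np.uint8)
for r in range(8):
    img[r, :max(0, 5 - r)] = 200   # bright triangular wedge, top-left
mask = region_grow(img, seed=(0, 0), tol=20)
```

Pixels inside the grown mask would then be blanked before the downsized image is passed to the classifier.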
APA, Harvard, Vancouver, ISO, and other styles
37

Delport, D. J., J. H. Potgieter, W. G. Augustyn, and J. H. Prior. "Die ontwikkeling van ’n korrosiebestande langpadlengte-infrarooisel Deel I: Konstruksieparameters en -oorwegings." Suid-Afrikaanse Tydskrif vir Natuurwetenskap en Tegnologie 22, no. 2/3 (September 26, 2003): 89–92. http://dx.doi.org/10.4102/satnt.v22i2/3.216.

Full text
Abstract:
The development of a corrosion resistant long pathlength infrared cell, Part I: Construction parameters and considerations. This paper describes the construction of a long pathlength infrared cell suitable for the qualitative and quantitative analysis of corrosive gas mixtures. The requirements of such a cell are discussed briefly, and its design is described with specific reference to suitable materials for the mirror, the optimisation of the pathlength and the determination of the maximum gain. After the necessary design parameters had been established, the cell was constructed from stainless steel and commissioned for practical applications.
APA, Harvard, Vancouver, ISO, and other styles
38

Molina García, Víctor, Sruthy Sasi, Dmitry S. Efremenko, and Diego Loyola. "Improvement of EPIC/DSCOVR Image Registration by Means of Automatic Coastline Detection." Remote Sensing 11, no. 15 (July 25, 2019): 1747. http://dx.doi.org/10.3390/rs11151747.

Full text
Abstract:
In this work, we address the image geolocation issue that is present in the imagery of EPIC/DSCOVR (Earth Polychromatic Imaging Camera/Deep Space Climate Observatory) Level 1B version 2. To solve it, we develop an algorithm that automatically computes a registration correction consisting of a motion (translation plus rotation) and a radial distortion. The correction parameters are retrieved for every image by means of a regularised non-linear optimisation process, in which the spatial distances between the theoretical and actual locations of chosen features are minimised. The actual features are found along the coastlines automatically by using computer vision techniques. The retrieved correction parameters show a behaviour that is related to the period of DSCOVR orbiting around the Lagrangian point L1. With this procedure, the EPIC coastlines are collocated with an accuracy of about 1.5 pixels, thus significantly improving the original registration of about 5 pixels from the imagery of EPIC/DSCOVR Level 1B version 2.
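Fitting a motion (translation plus rotation) that minimises distances between theoretical and detected features is, for the rigid part, a closed-form 2-D Procrustes/Kabsch problem; the radial distortion term is omitted here for brevity. An illustrative sketch with synthetic points, not the paper's algorithm:

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst
    (2-D Procrustes / Kabsch via SVD of the cross-covariance)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic "coastline features": rotate by 1.2 deg and shift by (3.0, -1.5) px.
rng = np.random.default_rng(1)
theory = rng.uniform(0, 2048, (40, 2))         # theoretical feature pixels
ang = np.deg2rad(1.2)
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
detected = theory @ R_true.T + np.array([3.0, -1.5])
R_fit, t_fit = fit_rigid_2d(theory, detected)
```

Adding the radial distortion makes the problem non-linear, which is why the paper resorts to regularised iterative optimisation instead of a closed form.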
APA, Harvard, Vancouver, ISO, and other styles
39

Long, David, and Stephen O'Shaughnessy. "Shallow Deep Learning using Space-filling Curves for Malware Classification." International Conference on Cyber Warfare and Security 17, no. 1 (March 2, 2022): 145–54. http://dx.doi.org/10.34190/iccws.17.1.13.

Full text
Abstract:
The incidents of malware attacks are continually increasing at a rapid rate, thanks to the lucrative potential in schemes such as ransomware, credential-stealing Trojans and cryptominers. Their explosive growth is compounded by the ease with which variants can be created from original strains. As a result, anti-virus organisations are struggling to keep up, with some reporting upwards of 14 million samples processed per month. These sheer volumes have caused a shift towards machine learning and artificial intelligence in an effort to alleviate the manual burden of analysis and classification. This research presents a novel framework for the classification of malware into distinct family classes through computer vision and deep learning. In the proposed framework, malware binaries are represented in an abstract form as images mapped through mathematical constructs known as space-filling curves. Convolutional neural networks were constructed and applied to the malware images to build predictive models for classification. The models were optimised using an auto-tuning function for the hyperparameters, which included Bayesian optimisation, random search and Hyperband, providing an extensive search over the hyperparameters. On a training dataset of 13k malware samples from 23 distinct families, the models yielded an average score of 95% for precision, recall and f1-score. The final deep learning model was validated for robustness against a dataset of more recent variants, comprising 12,816 samples from 16 malware families, returning classification scores of 95%, 86% and 90% for precision, recall and f1-score. The final model was demonstrated to outperform a similar benchmark model considerably. The results show the potential of the deep learning framework as a viable solution to the classification of malware, without the need for manually intensive feature generation or invasive processing techniques.
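Mapping a binary to an image via a space-filling curve is commonly done with a Hilbert curve, so that bytes close together in the file stay close together in the 2-D image. A minimal sketch of the standard distance-to-coordinate mapping (illustrative, not the authors' framework):

```python
def d2xy(order, d):
    """Map distance d along a Hilbert curve of side 2**order to (x, y)."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def bytes_to_hilbert_image(data, order):
    """Lay a byte sequence along a Hilbert curve: locality in the file
    becomes locality in the image fed to the CNN."""
    side = 1 << order
    img = [[0] * side for _ in range(side)]
    for d, b in enumerate(data[: side * side]):
        x, y = d2xy(order, d)
        img[y][x] = b
    return img

img = bytes_to_hilbert_image(bytes(range(256)), order=4)
```

Real binaries are padded or truncated to the curve length before mapping, and the resulting grids are stacked into the CNN's input tensor.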
APA, Harvard, Vancouver, ISO, and other styles
40

Zeybek, Sultan, Duc Truong Pham, Ebubekir Koç, and Aydın Seçer. "An Improved Bees Algorithm for Training Deep Recurrent Networks for Sentiment Classification." Symmetry 13, no. 8 (July 26, 2021): 1347. http://dx.doi.org/10.3390/sym13081347.

Full text
Abstract:
Recurrent neural networks (RNNs) are powerful tools for learning information from temporal sequences. Designing an optimum deep RNN is difficult due to configuration and training issues, such as vanishing and exploding gradients. In this paper, a novel metaheuristic optimisation approach is proposed for training deep RNNs for the sentiment classification task. The approach employs an enhanced Ternary Bees Algorithm (BA-3+), which operates on large dataset classification problems by considering only three individual solutions in each iteration. BA-3+ combines the collaborative search of three bees to find the optimal set of trainable parameters of the proposed deep recurrent learning architecture. Local learning with exploitative search utilises the greedy selection strategy. Stochastic gradient descent (SGD) learning with singular value decomposition (SVD) aims to handle the vanishing and exploding gradients of the decision parameters through the stabilisation strategy of SVD. Global learning with explorative search achieves faster convergence without getting trapped at local optima. BA-3+ has been tested on the sentiment classification task to classify symmetric and asymmetric distributions of datasets from different domains, including Twitter, product reviews, and movie reviews. Comparative results were obtained against advanced deep language models and the Differential Evolution (DE) and Particle Swarm Optimization (PSO) algorithms. BA-3+ converged to the global minimum faster than the DE and PSO algorithms, and it outperformed the SGD, DE, and PSO algorithms for the Turkish and English datasets. Accuracy and F1 measure improved by at least 30–40% over the standard SGD algorithm for all classification datasets. Accuracy rates in the RNN model trained with BA-3+ ranged from 80% to 90%, while the RNN trained with SGD was only able to achieve between 50% and 60% for most datasets. The performance of the RNN model trained with BA-3+ was as good as that of the Tree-LSTM and Recursive Neural Tensor Network (RNTN) language models, which achieved accuracies of up to 90% for some datasets. The improved accuracy and convergence results show that BA-3+ is an efficient, stable algorithm for this complex classification task, and it can handle the vanishing and exploding gradients problem of deep RNNs.
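The three-individual structure of BA-3+ can be caricatured on a toy continuous objective: one exploitative bee perturbs the current best, one refines it with a shrinking step, and one scouts globally. This is a schematic sketch of the idea, not the published algorithm:

```python
import random

def ternary_bee_search(objective, bounds, n_iter=300, step=0.5, seed=0):
    """Toy three-bee search: keep the best of (a) a local perturbation of the
    current best, (b) a shrinking-step refinement, and (c) a random scout."""
    rng = random.Random(seed)
    lo, hi = bounds
    best = rng.uniform(lo, hi)
    best_f = objective(best)
    for i in range(n_iter):
        decay = step * (1 - i / n_iter)
        candidates = [
            best + rng.uniform(-step, step),     # exploitative local bee
            best + rng.uniform(-decay, decay),   # refining bee, shrinking step
            rng.uniform(lo, hi),                 # explorative scout bee
        ]
        for c in candidates:
            c = min(max(c, lo), hi)              # keep inside the bounds
            fc = objective(c)
            if fc < best_f:                      # greedy selection
                best, best_f = c, fc
    return best, best_f

# Stand-in for a training loss over one trainable parameter.
f = lambda x: (x - 1.7) ** 2 + 0.3
best, best_f = ternary_bee_search(f, bounds=(-10.0, 10.0))
```

In the actual BA-3+, the "refining" move is an SGD step stabilised with SVD, and the search runs over the full RNN weight vector rather than a scalar.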
APA, Harvard, Vancouver, ISO, and other styles
41

Sierant, Z. "Optimisation of the Deoxidation Degree and Modification on the Cast Steel Properties Improvement at Low Temperature." Archives of Metallurgy and Materials 58, no. 3 (September 1, 2013): 763–67. http://dx.doi.org/10.2478/amm-2013-0068.

Full text
Abstract:
Abstract Industrial conditions of obtaining thick-walled and shaped castings intended for operations at temperatures: minus 40÷60°C are presented in the paper. The selection of a cast steel grade is based on known studies [1,10,14], however due to castings wall thickness (70-240mm), the way of preparing liquid metal (deep deoxidation and modification, argon stirring in a ladle) and conditions of filling mould cavities (gating system assuring quiet filling, directional solidification and avoiding a formation of inessential heat centres), were developed to assure the required impact strength. Maintaining these parameters as well as the selection of heat treatments for the produced massive castings allowed to achieve the impact strength over 50 J/cm2 at minus 40°. This value was obtained for walls of various thickness
APA, Harvard, Vancouver, ISO, and other styles
42

Shruthi, G., Monica R. Mundada, B. J. Sowmya, and S. Supreeth. "Mayfly Taylor Optimisation-Based Scheduling Algorithm with Deep Reinforcement Learning for Dynamic Scheduling in Fog-Cloud Computing." Applied Computational Intelligence and Soft Computing 2022 (August 28, 2022): 1–17. http://dx.doi.org/10.1155/2022/2131699.

Full text
Abstract:
The fog computing domain plays a prominent role in supporting time-sensitive applications associated with smart Internet of Things (IoT) services, such as smart healthcare and smart cities. Although cloud computing is a capable standard for IoT data processing, the high latency of the cloud leaves it incapable of satisfying the needs of time-sensitive applications. The resource provisioning and allocation process in a fog-cloud structure must consider dynamic alterations in user requirements, and the restricted resources of fog devices make this even more challenging. The global adoption of IoT-driven applications has led to the rise of the fog computing structure, which permits seamless connection between mobile edge and cloud resources. The effectual scheduling of application tasks in fog environments is a challenging task because of resource heterogeneity, stochastic behaviours, network hierarchy, limited resource capabilities, and mobility elements in IoT. Meeting deadlines is the most significant challenge in the fog computing structure due to the dynamic variations in user requirement parameters. In this paper, the Mayfly Taylor Optimisation Algorithm (MTOA) is developed for dynamic scheduling in the fog-cloud computing model. The developed MTOA-based Deep Q-Network (DQN) showed better performance, with energy consumption, service level agreement (SLA), and computation cost of 0.0162, 0.0114, and 0.0855, respectively.
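A DQN replaces the Q-table of classical Q-learning with a neural network, but the update target is the same. A tabular sketch on a toy "place the task on fog or cloud" MDP (entirely illustrative rewards, not the paper's scheduler) shows the mechanics:

```python
import random

# Toy MDP: state = task size (0 small, 1 large); action 0 = fog, 1 = cloud.
# Fog is rewarding for small tasks; the cloud handles large tasks better.
REWARD = {(0, 0): 1.0, (0, 1): 0.2, (1, 0): -0.5, (1, 1): 0.8}

rng = random.Random(0)
Q = [[0.0, 0.0], [0.0, 0.0]]          # Q[state][action]
alpha, gamma, eps = 0.1, 0.9, 0.2     # learning rate, discount, exploration
state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if rng.random() < eps:
        action = rng.randrange(2)
    else:
        action = max((0, 1), key=lambda a: Q[state][a])
    r = REWARD[(state, action)]
    nxt = rng.randrange(2)            # the next task's size arrives at random
    # temporal-difference update; a DQN fits this same target with a network
    Q[state][action] += alpha * (r + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt

policy = [max((0, 1), key=lambda a: Q[s][a]) for s in (0, 1)]
```

The learned policy sends small tasks to the fog and large ones to the cloud; the paper's contribution is using MTOA to tune the deep variant of this learner for deadline-aware scheduling.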
APA, Harvard, Vancouver, ISO, and other styles
43

Gani, Adel, William Ion, and Erfu Yang. "Optimisation of cutting parameters and surface deformation during thin steel sheets plasma processing using Taguchi approach." Advances in Mechanical Engineering 13, no. 7 (July 2021): 168781402110304. http://dx.doi.org/10.1177/16878140211030401.

Full text
Abstract:
Plasma machining was investigated for its suitability for cutting thin sheets, and the resulting quality was assessed. The aim of this research was to analyse the impact of the heat generated during plasma cutting on thin-sheet surface deformation and heat-affected zones. Plasma cutting was assumed to be ineffective for processing thin material; the technology was known primarily for cutting medium to thick plates. Considerable work was previously performed to improve the quality; however, no scientific publications have assessed the effect of plasma heat on thin material under 1 mm or optimised the process to minimise the resulting phenomena. Traditional cutting methods were found to be limited and time consuming. This research can benefit vehicle convertors, such as producers of wheelchair-accessible vehicles, or similar industries where conventional cutting methods are still predominant. An automated plasma cutter, if properly optimised, would therefore be an effective solution. A CNC plasma cutter with automated torch-height adjustment was used in this experiment to process 0.6 mm thick deep-drawing cold-rolled DC01-grade steel. Tests were carried out varying the plasma parameters (speed, pressure and intensity), and the quality obtained was assessed. A three-dimensional ATOS Triple Scan system was used to measure the maximum surface deformation. Samples were sectioned, mounted, polished, mirrored and etched with 5% Nital acid to expose the material grains. Results were tabulated, and then the Taguchi method and analysis of variance were used to optimise and analyse the parameters. Results showed an effective cut for thin materials. The optimal values for sheet deformation were a cutting speed of 8000 mm/min, pressure of 70 psi and intensity of 25 A, whereas for heat-affected zones they were a cutting speed of 8500 mm/min, pressure of 80 psi and intensity of 30 A.
Cutting speed and intensity had the highest impact on deformation, whereas the heat-affected zones were influenced mainly by the cutting speed. The study also suggested that a mathematical model can be constructed to predict the response for both phenomena and assess the strength of the relationship; the method used was least-squares error, and the tests showed that the models fit adequately and can be trusted to predict new values. Lastly, a vehicle chassis was used for the test; the results showed effective cutting on painted material, and no secondary processing was required. Sealant beneath the material had to be removed along the plasma pathway to avoid excess smoke or fire generation. An estimate of the underbody chassis cut-out processing time showed that automated plasma cutting can reduce the cycle time to just over 40 min.
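The Taguchi analysis described above ranks parameter levels by their signal-to-noise (S/N) ratios; for a defect quantity such as deformation, the smaller-is-better form applies. A minimal sketch of that computation follows; the deformation values and the third speed level are hypothetical placeholders, not data from the study:

```python
import math

def sn_smaller_is_better(responses):
    """Taguchi smaller-is-better signal-to-noise ratio (dB):
    S/N = -10 * log10(mean of squared responses)."""
    mean_sq = sum(r * r for r in responses) / len(responses)
    return -10.0 * math.log10(mean_sq)

# Hypothetical deformation measurements (mm) at three cutting-speed levels,
# each tested twice; the real study's measurements are not reproduced here.
levels = {
    8000: [0.42, 0.45],
    8500: [0.55, 0.60],
    9000: [0.70, 0.66],
}

sn_by_level = {speed: sn_smaller_is_better(r) for speed, r in levels.items()}
# The highest S/N ratio marks the level with the least deformation.
best_speed = max(sn_by_level, key=sn_by_level.get)
```

In a full Taguchi study the same S/N ranking is repeated for every factor (speed, pressure, intensity) across the orthogonal-array runs, and ANOVA then apportions each factor's contribution.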
APA, Harvard, Vancouver, ISO, and other styles
44

Kai, Priscila Marques, Bruna Mendes de Oliveira, and Ronaldo Martins da Costa. "Deep Learning-Based Method for Classification of Sugarcane Varieties." Agronomy 12, no. 11 (November 2, 2022): 2722. http://dx.doi.org/10.3390/agronomy12112722.

Full text
Abstract:
The classification of sugarcane varieties using products derived from remote sensing allows plants with different profiles to be monitored without physical contact with the study objects. However, differentiating between varieties can be challenging due to the similarity of the spectral characteristics of each crop. Thus, this study aimed to classify four sugarcane varieties with deep neural networks and subsequently compare the results with traditional machine learning techniques. To provide more data as input for the classification models, along with the multi-band pixel values and vegetation indices, further information can be obtained from the sensor bands through RGB combinations, reconciling different bands so as to capture the characteristics of crop varieties. The methodology created to discriminate sugarcane varieties consisted of a dense neural network, with the number of hidden layers determined by the greedy layer-wise method and multiples of four neurons in each layer; additionally, a 5-fold evaluation was performed on training data composed of Sentinel-2 band data, vegetation indices, and RGB combinations. Comparing the results from each model with hyperparameters selected by Bayesian optimisation (except for the neural network, whose parameters were defined manually), the highest precision of 99.55% was observed for the SVM model, followed by the neural network developed in the study, random forests, and kNN. However, the final neural network model, with six hidden layers, achieved an accuracy of 99.48%, demonstrating the potential of neural networks for classification. Among the characteristics that contributed most, the chlorophyll-sensitive bands, especially B6, B7 and B11, and some RGB combinations had the greatest impact on the correct classification of samples by the neural network model.
Thus, the near-infrared and shortwave-infrared regions proved suitable for discriminating sugarcane varieties.
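The 5-fold evaluation and the multiples-of-four layer widths mentioned in the abstract can be sketched as follows; the sample count and the range of candidate widths are illustrative assumptions, not values from the study:

```python
def k_fold_indices(n_samples, k=5):
    """Split sample indices into k contiguous folds; each fold serves once
    as the validation set while the remaining folds form the training set."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    splits = []
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((train, val))
    return splits

# Candidate hidden-layer widths as multiples of four neurons, as described above.
candidate_widths = [4 * m for m in range(1, 9)]   # 4, 8, ..., 32

# Five train/validation splits over a hypothetical set of 20 samples.
splits = k_fold_indices(20, k=5)
```

Each candidate architecture would then be trained on every `train` index set and scored on the matching `val` set, with the scores averaged across the five folds.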
APA, Harvard, Vancouver, ISO, and other styles
45

Wu, Jie, Decao Yin, Halvor Lie, Signe Riemer-Sørensen, Svein Sævik, and Michael Triantafyllou. "Improved VIV Response Prediction Using Adaptive Parameters and Data Clustering." Journal of Marine Science and Engineering 8, no. 2 (February 17, 2020): 127. http://dx.doi.org/10.3390/jmse8020127.

Full text
Abstract:
Slender marine structures such as deep-water riser systems are continuously exposed to currents, leading to vortex-induced vibrations (VIV) of the structure. This may result in amplified drag loads and fast accumulation of fatigue damage. Consequently, accurate prediction of VIV responses is of great importance for the safe design and operation of marine risers. Model tests with elastic pipes have shown that VIV responses are influenced by many structural and hydrodynamic parameters, which have not been fully modelled in present frequency-domain VIV prediction tools. Traditionally, predictions have been computed using a single set of hydrodynamic parameters, often leading to inconsistent prediction accuracy when compared with observed field measurements and experimental data. Hence, it is necessary to apply a high safety factor of 10–20 in riser design, which increases development costs and adds extra constraints to field operation. One way to compensate for the simplifications in the mathematical prediction model is to apply adaptive parameters to describe different riser responses. The objective of this work is to demonstrate a new method to improve prediction consistency and accuracy by applying adaptive hydrodynamic parameters. In the present work, a four-step approach is proposed. First, the measured VIV response is analysed to identify key parameters representing the response characteristics, and these parameters are grouped using data clustering algorithms. Second, optimal hydrodynamic parameters are identified for each data group by optimisation against measured data. Third, the VIV response is calculated using the obtained parameters and the prediction accuracy is evaluated. Finally, classification algorithms are applied to determine the correct hydrodynamic parameters to be used for new cases.
An iteration of the previous steps may be needed if the prediction accuracy for a new case is not satisfactory. The concept has been demonstrated with examples from experimental data.
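The first step of the four-step approach, grouping measured response parameters with a clustering algorithm, can be sketched with a plain k-means over hypothetical (dominant frequency, amplitude) pairs. The paper does not specify which clustering algorithm or which response features were used, so everything below is an assumption for illustration:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 2-D points: returns cluster labels and centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (p[0] - centroids[c][0]) ** 2 + (p[1] - centroids[c][1]) ** 2,
            )
        # Recompute each centroid as the mean of its assigned points.
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centroids[c] = (
                    sum(p[0] for p in members) / len(members),
                    sum(p[1] for p in members) / len(members),
                )
    return labels, centroids

# Hypothetical (dominant frequency in Hz, RMS amplitude) pairs for measured VIV cases.
cases = [(0.8, 0.10), (0.9, 0.12), (2.1, 0.30), (2.0, 0.28)]
labels, _ = kmeans(cases, k=2)
```

Each resulting group would then receive its own optimised set of hydrodynamic parameters in step two.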
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Le. "Dance Art Scene Classification Based on Convolutional Neural Networks." Scientific Programming 2022 (July 8, 2022): 1–11. http://dx.doi.org/10.1155/2022/6355959.

Full text
Abstract:
Digital multimedia resources have become an important part of people’s daily cultural life. Automatic scene classification of large numbers of dance art videos is the basis for scene-semantic video content retrieval. To improve the accuracy of scene classification, dance art videos are identified using a deep convolutional neural network optimised by differential evolution. First, the Canny operator is used in the YCbCr colour space to detect the human silhouette in the key frames of the video. Then, a cascade-structured AdaBoost algorithm is used to implement human target tracking and labelling, and the construction and updating of the weak classifiers are analysed. Next, a differential evolution algorithm is used to optimise the structural parameters of the convolutional neural network, with an adaptive strategy for the scaling factor of the differential evolution algorithm to improve the accuracy of the optimisation. Finally, the improved deep convolutional neural network is trained to classify the labelled videos and obtain stable scene classification results. The experimental results show that by reasonably setting the crossover rate of the differential evolution and the convolutional kernel size of the network, high scene classification performance can be obtained. The high accuracy and low root-mean-square error validate the applicability of the proposed method to dance art scene classification.
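The adaptive scaling-factor strategy described above can be sketched with a plain DE/rand/1/bin loop whose factor F decays over generations; the decay schedule, the toy objective (standing in for network validation error as a function of two hyper-parameters) and all constants are assumptions, not the paper's actual settings:

```python
import random

def adaptive_de(fitness, bounds, pop_size=20, gens=60, cr=0.9, seed=1):
    """Differential evolution (DE/rand/1/bin) with a scaling factor F that
    decays linearly over generations, a simple adaptive strategy."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [fitness(ind) for ind in pop]
    for g in range(gens):
        f = 0.9 - 0.5 * g / gens          # adaptive F: 0.9 -> 0.4
        for i in range(pop_size):
            # Pick three distinct population members other than i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)    # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == j_rand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # clip to the search bounds
            s = fitness(trial)
            if s <= scores[i]:             # greedy one-to-one selection
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

# Toy sphere objective as a stand-in for classification error over two
# hypothetical network hyper-parameters.
sphere = lambda x: sum(v * v for v in x)
best, best_score = adaptive_de(sphere, [(-5, 5), (-5, 5)])
```

Starting with a large F favours exploration of the parameter space and shrinking it later favours fine local refinement, which is the usual motivation for adapting the scaling factor.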
APA, Harvard, Vancouver, ISO, and other styles
47

Romanenko, Pavel S., Ilya G. Alekhin, Kanat B. Ashimov, and Sarsenbay A. Gubashev. "Application of geomechanics in the construction of horizontal wells at the fields of Embamunaigas JSC." Kazakhstan journal for oil & gas industry 4, no. 2 (July 20, 2022): 35–46. http://dx.doi.org/10.54859/kjogi108196.

Full text
Abstract:
Horizontal drilling is currently being actively developed internationally to increase oil production, and Embamunaigas JSC is a prime example of this type of drilling in Kazakhstan. At the same time, horizontal drilling, like the construction of deep exploration wells, is a fairly complex and costly process that requires a comprehensive approach to the planning and execution of work. Geomechanical modelling plays a crucial role in selecting drilling process parameters and determining the optimum well design and other characteristics required to ensure wellbore integrity and the overall success of well construction. The mechanical properties model takes into account rock strength characteristics, the directions of horizontal stresses in the geological environment and many other aspects that help make drilling safer, reduce the risk of complications and maintain borehole integrity. This study describes the process of creating geomechanical models for the construction of complex horizontal wells in the fields of Embamunaigas JSC, based on the geomechanical core studies carried out at the S. Nurzhanov field and an analysis of the hydraulic fracturing database for the fields of Embamunaigas JSC. As a result, a safe drilling-mud density range (safe drilling window) was determined and recommendations were given for the optimisation of drilling parameters and well design.
APA, Harvard, Vancouver, ISO, and other styles
48

Garcia, Jean Rodrigo, Paulo José Rocha de Albuquerque, and Rodrigo Álvares de Araújo Melo. "Experimental and numerical analysis of foundation pilings partially embedded in rock." Rem: Revista Escola de Minas 66, no. 4 (December 2013): 439–46. http://dx.doi.org/10.1590/s0370-44672013000400006.

Full text
Abstract:
The behaviour of four foundation pilings (ϕ = 0.41 m) constructed in Foz do Iguaçu, Paraná (PR), Brazil, and subjected to slow loading tests was analysed. The results were compared with results from three-dimensional numerical modelling using the finite element method, which facilitates simulation of the elasto-plastic behaviour of soil. The local subsoil comprises varied stratigraphies: a residual soil surface layer followed by weathered rock and bedrock a few metres deep. The geotechnical parameters of the massif were determined through correlations obtained from field tests, from which the cohesion, angle of friction, modulus of deformability and uniaxial compressive strength of the different subsoil layers were estimated. The load tests were interrupted at 3000 kN, with displacements of less than 5 mm at the working load (1500 kN). The pilings developed lateral friction with an average stress of approximately 70 kPa along the surface portion (residual soil) and greater than 150 kPa along the weathered rock portions. The estimated geotechnical parameters provided values matching the numerical analyses. Thus, given the analyses and the load transfer method, the piling lengths can be reduced, which will facilitate the optimisation of the geotechnical design.
APA, Harvard, Vancouver, ISO, and other styles
49

Haider, Adnan, Musleh Ud Din Musleh Ud Din, and Ejaz Ghani. "Monetary Policy, Informality and Business Cycle Fluctuations in a Developing Economy Vulnerable to External Shocks." Pakistan Development Review 51, no. 4II (December 1, 2012): 609–81. http://dx.doi.org/10.30541/v51i4iipp.609-681.

Full text
Abstract:
Modelling the sources of Business Cycle Fluctuations (BCF) in an open-economy Dynamic Stochastic General Equilibrium (DSGE) framework is a fascinating area of research. The main advantage of this framework over the traditional modelling approach is its additional feature of micro-foundations in terms of welfare optimisation. This feature allows a structural interpretation of deep parameters in a way that is less vulnerable to the Lucas critique [Lucas (1976)]. In the DSGE modelling context, the sources of BCF are normally viewed as exogenous shocks, which have the potential to propagate through the key endogenous variables within the system. This requires careful identification, as these shocks may emanate from the internal side, such as political instability or weak institutional quality in terms of poor governance, or from the external side, such as natural disasters (e.g., earthquakes and floods), international oil and commodity prices, sudden stops in foreign capital inflows, or changes in the terms of trade and the exchange rate, or from any combination of shocks on both sides. The nature and magnitude of these shocks may also vary, depending on their variances and persistence levels.
APA, Harvard, Vancouver, ISO, and other styles
50

Zhang, Tong, Suryakanta Biswal, and Ying Wang. "SHMnet: Condition assessment of bolted connection with beyond human-level performance." Structural Health Monitoring 19, no. 4 (October 17, 2019): 1188–201. http://dx.doi.org/10.1177/1475921719881237.

Full text
Abstract:
Deep learning algorithms are transforming a variety of research areas with accuracy levels that traditional methods cannot compete with. Recently, increasing research effort has been put into the structural health monitoring domain. In this work, we propose a new deep convolutional neural network, namely SHMnet, for a challenging structural condition identification case: a steel frame with bolted-connection damage. We perform systematic studies on the optimisation of the network architecture and the preparation of the training data. In the laboratory, repeated impact hammer tests are conducted on a steel frame with different bolted-connection damage scenarios, as small as one loosened bolt. The time-domain monitoring data from a single accelerometer are used for training. We conduct parametric studies on different layer numbers, sensor locations, quantities of training data and noise levels. The results show that the proposed SHMnet is effective and reliable given at least four independent training datasets and sensor locations that avoid vibration node points. Under up to 60% additive Gaussian noise, the average identification accuracy is over 98%. In comparison, traditional methods based on identified modal parameters inevitably fail due to the unnoticeable changes in the identified natural frequencies and mode shapes. The results provide confidence in using the developed method as an effective structural condition identification framework, which has the potential to transform structural health monitoring practice. The code and relevant information can be found at https://github.com/capepoint/SHMnet .
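The robustness test under additive Gaussian noise can be sketched as follows, scaling the noise standard deviation to a fraction of the clean signal's RMS (e.g. 0.6 for the 60% level mentioned above). The decaying-sinusoid stand-in for an accelerometer record and the RMS scaling convention are assumptions for illustration, not details from the paper:

```python
import math
import random

def add_gaussian_noise(signal, noise_level, seed=0):
    """Add zero-mean Gaussian noise whose standard deviation equals
    `noise_level` times the RMS of the clean signal."""
    rng = random.Random(seed)
    rms = math.sqrt(sum(s * s for s in signal) / len(signal))
    sigma = noise_level * rms
    return [s + rng.gauss(0.0, sigma) for s in signal]

# Hypothetical acceleration record: a decaying sinusoid standing in for
# one impact-hammer response channel.
clean = [math.exp(-0.01 * t) * math.sin(0.2 * t) for t in range(500)]
noisy = add_gaussian_noise(clean, noise_level=0.6)
```

Training on such noise-augmented copies of the measured records is a common way to probe how gracefully a classifier's accuracy degrades with measurement noise.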
APA, Harvard, Vancouver, ISO, and other styles
