Journal articles on the topic "Non-smooth optimisation"

To see the other types of publications on this topic, follow the link: Non-smooth optimisation.

Create a correct reference in APA, MLA, Chicago, Harvard, and other styles

Choose a source:

Consult the top 47 journal articles for your research on the topic "Non-smooth optimisation."

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Dao, Minh Ngoc, Dominikus Noll, and Pierre Apkarian. "Robust eigenstructure clustering by non-smooth optimisation." International Journal of Control 88, no. 8 (March 3, 2015): 1441–55. http://dx.doi.org/10.1080/00207179.2015.1007393.

2

Yao, Zhiqiang, Jinfeng Huang, Shiguo Wang, and Rukhsana Ruby. "Efficient local optimisation-based approach for non-convex and non-smooth source localisation problems." IET Radar, Sonar & Navigation 11, no. 7 (July 2017): 1051–54. http://dx.doi.org/10.1049/iet-rsn.2016.0433.

3

Pothiya, Saravuth, Issarachai Ngamroo, and Waree Kongprawechnon. "Ant colony optimisation for economic dispatch problem with non-smooth cost functions." International Journal of Electrical Power & Energy Systems 32, no. 5 (June 2010): 478–87. http://dx.doi.org/10.1016/j.ijepes.2009.09.016.

4

Sach, Pham Huu, Gue Myung Lee, and Do Sang Kim. "Efficiency and generalised convexity in vector optimisation problems." ANZIAM Journal 45, no. 4 (April 2004): 523–46. http://dx.doi.org/10.1017/s1446181100013547.

Abstract:
This paper gives a necessary and sufficient condition for a Kuhn–Tucker point of a non-smooth vector optimisation problem subject to inequality and equality constraints to be an efficient solution. The main tool we use is an alternative theorem which is quite different from a corresponding result by Xu.
5

Pecci, Filippo, Edo Abraham, and Ivan Stoianov. "Quadratic head loss approximations for optimisation problems in water supply networks." Journal of Hydroinformatics 19, no. 4 (April 17, 2017): 493–506. http://dx.doi.org/10.2166/hydro.2017.080.

Abstract:
This paper presents a novel analysis of the accuracy of quadratic approximations for the Hazen–Williams (HW) head loss formula, which enables the control of constraint violations in optimisation problems for water supply networks. The two smooth polynomial approximations considered here minimise the absolute and relative errors, respectively, from the original non-smooth HW head loss function over a range of flows. Since quadratic approximations are used to formulate head loss constraints for different optimisation problems, we are interested in quantifying and controlling their absolute errors, which affect the degree of constraint violations of feasible candidate solutions. We derive new exact analytical formulae for the absolute errors as a function of the approximation domain, pipe roughness and relative error tolerance. We investigate the efficacy of the proposed quadratic approximations in mathematical optimisation problems for advanced pressure control in an operational water supply network. We propose a strategy on how to choose the approximation domain for each pipe such that the optimisation results are sufficiently close to the exact hydraulically feasible solution space. By using simulations with multiple parameters, the approximation errors are shown to be consistent with our analytical predictions.
6

Zhu, Yuteng. "Designing a physically-feasible colour filter to make a camera more colorimetric." London Imaging Meeting 2020, no. 1 (September 29, 2020): 96–99. http://dx.doi.org/10.2352/issn.2694-118x.2020.lim-16.

Abstract:
Previously, a method was developed to find the best colour filter for a given camera, resulting in new effective camera sensitivities that best meet the Luther condition. That is, the new sensitivities are approximately linearly related to the XYZ colour matching functions. However, with no constraint, the filter derived from this Luther-condition-based optimisation can be rather non-smooth and transmit very little light, both of which are impractical for fabrication. In this paper, we extend the Luther-condition filter optimisation method to incorporate both the smoothness and transmittance bounds of the recovered filter, which are key practical concerns. Experiments demonstrate that we can find physically realisable filters which are smooth and reasonably transmissive, with which the effective 'camera+filter' becomes significantly more colorimetric.
7

Wang, Wei-Xiang, You-Lin Shang, and Ying Zhang. "Finding global minima with a novel filled function for non-smooth unconstrained optimisation." International Journal of Systems Science 43, no. 4 (April 2012): 707–14. http://dx.doi.org/10.1080/00207721.2010.520094.

8

Chen, Shuming, Zhenyu Zhou, and Jixiu Zhang. "Multi-objective optimisation of automobile sound package with non-smooth surface based on grey theory and particle swarm optimisation." International Journal of Vehicle Design 88, no. 2/3/4 (2022): 238. http://dx.doi.org/10.1504/ijvd.2022.127018.

9

Chen, Shuming, Jixiu Zhang, and Zhenyu Zhou. "Multi-objective optimisation of automobile sound package with non-smooth surface based on grey theory and particle swarm optimisation." International Journal of Vehicle Design 88, no. 2/3/4 (2022): 238. http://dx.doi.org/10.1504/ijvd.2022.10052010.

10

Ribeiro, Tiago, Yun-Fei Fu, Luís Bernardo, and Bernard Rolfe. "Topology Optimisation of Structural Steel with Non-Penalisation SEMDOT: Optimisation, Physical Nonlinear Analysis, and Benchmarking." Applied Sciences 13, no. 20 (October 17, 2023): 11370. http://dx.doi.org/10.3390/app132011370.

Abstract:
In this work, the Non-penalisation Smooth-Edged Material Distribution for Optimising Topology (np-SEMDOT) algorithm was developed as an alternative to well-established Topology Optimisation (TO) methods based on the solid/void approach. Its novelty lies in its smoother edges and enhanced manufacturability, but it requires validation in a real case study rather than on simplified benchmark problems. To that end, the tension cover plate of a Sheikh-Ibrahim steel girder joint was optimised with np-SEMDOT, following a methodology designed to ensure compliance with the European design standards. The optimisation was assessed with Physical Nonlinear Finite Element Analyses (PhNLFEA), following recent findings that linear analyses cannot accurately model the highly nonlinear ultimate behaviour of topologically optimised steel construction joint parts. The results prove, on the one hand, that the quality of np-SEMDOT solutions strongly depends on the chosen optimisation parameters, and on the other hand, that the optimal np-SEMDOT solution can match the ultimate capacity and slightly outperform the ultimate displacement of a benchmark solution using a Solid Isotropic Material with Penalisation (SIMP)-based approach. It can be concluded that np-SEMDOT does not fall short of the prevalent methods. These findings validate the use of np-SEMDOT for professional applications.
11

Rao, Mallavolu Malleswara, and Geetha Ramadas. "Multiobjective Improved Particle Swarm Optimisation for Transmission Congestion and Voltage Profile Management using Multilevel UPFC." Power Electronics and Drives 4, no. 1 (June 1, 2019): 79–93. http://dx.doi.org/10.2478/pead-2019-0005.

Abstract:
This paper proposes a multiobjective improved particle swarm optimisation (IPSO) for placing and sizing series modular multilevel converter-based unified power flow controller (MMC-UPFC) FACTS devices to manage transmission congestion and the voltage profile in deregulated electricity markets. The proposed multiobjective IPSO algorithm attains near-optimal distributed generation (DG) sizes while exhibiting smooth convergence characteristics compared with other existing algorithms. It can be concluded that the voltage profile and real power losses improve substantially under optimal investment in DGs in both test systems. The proposed approach relieves congestion, and the method can readily be used to solve complex, non-linear power-system optimisation problems in real time.
12

Nayak, Gopal Krishna, Tapas Kumar Panigrahi, and Arun Kumar Sahoo. "A novel modified random walk grey wolf optimisation approach for non-smooth and non-convex economic load dispatch." International Journal of Innovative Computing and Applications 13, no. 2 (2022): 59. http://dx.doi.org/10.1504/ijica.2022.10047889.

13

Sahoo, Arun Kumar, Tapas Kumar Panigrahi, and Gopal Krishna Nayak. "A novel modified random walk grey wolf optimisation approach for non-smooth and non-convex economic load dispatch." International Journal of Innovative Computing and Applications 13, no. 2 (2022): 59. http://dx.doi.org/10.1504/ijica.2022.123222.

14

Koch, Michael W., and Sigrid Leyendecker. "Structure Preserving Simulation of Monopedal Jumping." Archive of Mechanical Engineering 60, no. 1 (March 1, 2013): 127–46. http://dx.doi.org/10.2478/meceng-2013-0008.

Abstract:
The human environment consists of a large variety of mechanical and biomechanical systems in which different types of contact can occur. In this work, we consider a monopedal jumper modelled as a three-dimensional rigid multibody system with contact and simulate its dynamics using a structure preserving method. The applied mechanical integrator is based on a constrained version of the Lagrange–d'Alembert principle. The resulting variational integrator preserves the symplecticity and momentum maps of the multibody dynamics. To ensure the structure preservation and the geometric correctness, we solve the non-smooth problem, including the computation of the contact configuration, time and force, instead of relying on a smooth approximation of the contact problem via a penalty potential. In addition to the formulation of non-smooth problems in forward dynamic simulations, we are interested in the optimal control of the monopedal high jump. The optimal control problem is solved using a direct transcription method transforming it into a constrained optimisation problem, see [14].
15

Chambon, Emmanuel, Pierre Apkarian, and Laurent Burlion. "Overview of linear time-invariant interval observer design: towards a non-smooth optimisation-based approach." IET Control Theory & Applications 10, no. 11 (July 18, 2016): 1258–68. http://dx.doi.org/10.1049/iet-cta.2015.0742.

16

Yang, Zhijing, Wei-Chao Kuang, Bingo Wing-Kuen Ling, and Qingyun Dai. "Instantaneous magnitudes and instantaneous frequencies of signals with their positivity constraints via non-smooth non-convex functional constrained optimisation." IET Signal Processing 10, no. 3 (May 2016): 247–53. http://dx.doi.org/10.1049/iet-spr.2014.0234.

17

Qi, Mingfeng, Lihua Dou, and Bin Xin. "3D Smooth Trajectory Planning for UAVs under Navigation Relayed by Multiple Stations Using Bézier Curves." Electronics 12, no. 11 (May 23, 2023): 2358. http://dx.doi.org/10.3390/electronics12112358.

Abstract:
Navigation relayed by multiple stations (NRMS) is a promising technique that can significantly extend the operational range of unmanned aerial vehicles (UAVs) and hence facilitate the execution of long-range tasks. However, NRMS employs multiple external stations in sequence to guide a UAV to its destination, introducing additional variables and constraints for UAV trajectory planning. This paper investigates the trajectory planning problem for a UAV under NRMS from its initial location to a pre-determined destination while maintaining a connection with one of the stations for safety reasons. Instead of line segments used in prior studies, a piecewise Bézier curve is applied to represent a smooth trajectory in three-dimensional (3D) continuous space, which brings both benefits and complexity. This problem is a bi-level optimisation problem consisting of upper-level station routing and lower-level UAV trajectory planning. A station sequence must be obtained first to construct a flight corridor for UAV trajectory planning while the planned trajectory evaluates it. To tackle this challenging bi-level optimisation problem, a novel efficient decoupling framework is proposed. First, the upper-level sub-problem is solved by leveraging techniques from graph theory to obtain an approximate station sequence. Then, an alternative minimisation-based algorithm is presented to address the non-linear and non-convex UAV trajectory planning sub-problem by optimising the spatial and temporal parameters of the piecewise Bézier curve iteratively. Computational experiments demonstrate the efficiency of the proposed decoupling framework and the quality of the obtained approximate station sequence. Additionally, the alternative minimisation-based algorithm is shown to outperform other non-linear optimisation methods in finding a better trajectory for the UAV within the given computational time.
18

Oshlakov, Victor G., and Anatoly P. Shcherbakov. "Optimisation of a Polarisation Nephelometer." Light & Engineering, no. 02-2021 (April 2021): 87–95. http://dx.doi.org/10.33383/2020-057.

Abstract:
An analysis of the influence caused by polarization nephelometer parameters on the scattering matrix measurement accuracy in a non-isotropic medium is presented. The approximation errors in the actual scattering volume and radiation beam by an elementary scattering volume and an elementary radiation beam are considered. A formula for calculating the nephelometer base is proposed. It is shown that requirements to an irradiation source of a polarizing nephelometer, i.e. mono-chromaticity and high radiation intensity and directivity in a wide spectral range can be satisfied by a set of high brightness LEDs with a radiating (self-luminous) small size body. A 5-wavelength monochromatic irradiation source, with an emission flux of (0.15–0.6) W required for a polarization nephelometer, is described. The design of small-sized polarizing phase control units is shown. An electronic circuit of a radiator control unit based on an AVR-Atmega 8-bit microcontroller with feedback and drive control realized by means of an incremental angular motion sensor and a software PID controller is presented. Precise and smooth motion of the radiator is ensured by standard servo-driven numerical control mathematics and the use of precision gears. The system allows both autonomous adjustment of the radiator’s reference positions and adjustment by means of commands from a personal computer. Both the computer and microcontroller programs were developed with the use of free software, making it possible to transfer the programs to Windows‑7(10), Linux and embedded Linux operating systems. Communication between the radiator’s position control system and the personal computer is realised by means of a standard noise immune USB-RS485 interface.
19

Pastukhov, S. S., and K. V. Stelmashenko. "New Approaches to Pricing Management of Transport Services." World of Transport and Transportation 19, no. 6 (July 23, 2022): 48–60. http://dx.doi.org/10.30932/1992-3252-2021-19-6-7.

Abstract:
Development of new approaches to the formation of analytics mechanisms for pricing management of services is an important aspect of increasing the efficiency of transport management processes. Research aimed at improving the tools for determining the optimal parameters of the ratio of quality and price of service for the formation of a competitive and efficient tariff policy remains relevant and in demand in modern market conditions. The objective of the study presented in the article is to analyse and evaluate the prospects for implementing improvements to the apparatus for assessing the price elasticity of demand for railway passenger transport services, namely the transition to functions that are non-linear in their parameters for customer behaviour modelling, as well as the introduction of the most effective algorithms from the set of modern global mathematical optimisation tools. The research conclusions are based on the use of system analysis mechanisms, methods of economic and mathematical modelling and optimisation, as well as non-parametric statistics tools. The results, based on an array of data on the demand of passengers of branded trains, include: a comparative assessment of the quality of modelling the price elasticity of demand using 15 functions that are non-linear in their parameters; and identification of the most promising tools for finding the unknown parameters of non-smooth non-linear functions for modelling the behaviour of railway customers, based on a three-stage procedure for comparative analysis of the performance of more than 60 optimisation algorithms (including the calculation of minima and medians for the sums of squares of modelling errors, bootstrap analysis, Kruskal–Wallis and Mann–Whitney tests, and the calculation of a metric specially developed by the authors for assessing the degree of superiority of one algorithm over another within the framework of non-parametric analysis). The findings can be successfully applied to other modes of transport in solving similar problems of developing an effective toolkit for managing the prices of transport services.
20

Zhu, Yingjie, Yongfa Chen, Qiuling Hua, Jie Wang, Yinghui Guo, Zhijuan Li, Jiageng Ma, and Qi Wei. "A Hybrid Model for Carbon Price Forecasting Based on Improved Feature Extraction and Non-Linear Integration." Mathematics 12, no. 10 (May 7, 2024): 1428. http://dx.doi.org/10.3390/math12101428.

Abstract:
Accurately predicting the price of carbon is an effective way of ensuring the stability of the carbon trading market and reducing carbon emissions. Aiming at the non-smooth and non-linear characteristics of carbon price, this paper proposes a novel hybrid prediction model based on improved feature extraction and non-linear integration, which is built on complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), fuzzy entropy (FuzzyEn), improved random forest using particle swarm optimisation (PSORF), extreme learning machine (ELM), long short-term memory (LSTM), non-linear integration based on multiple linear regression (MLR) and random forest (MLRRF), and error correction with the autoregressive integrated moving average model (ARIMA), named CEEMDAN-FuzzyEn-PSORF-ELM-LSTM-MLRRF-ARIMA. Firstly, CEEMDAN is combined with FuzzyEn in the feature selection process to improve extraction efficiency and reliability. Secondly, at the critical prediction stage, PSORF, ELM, and LSTM are selected to predict high, medium, and low complexity sequences, respectively. Thirdly, the reconstructed sequences are assembled by applying MLRRF, which can effectively improve the prediction accuracy and generalisation ability. Finally, error correction is conducted using ARIMA to obtain the final forecasting results, and the Diebold–Mariano test (DM test) is introduced for a comprehensive evaluation of the models. With respect to carbon prices in the pilot regions of Shenzhen and Hubei, the results indicate that the proposed model has higher prediction accuracy and robustness. The main contributions of this paper are the improved feature extraction and the innovative combination of multiple linear regression and random forests into a non-linear integrated framework for carbon price forecasting. However, further optimisation is still a work in progress.
21

Zhao, Zezheng, Chunqiu Xia, Lian Chi, Xiaomin Chang, Wei Li, Ting Yang, and Albert Y. Zomaya. "Short-Term Load Forecasting Based on the Transformer Model." Information 12, no. 12 (December 10, 2021): 516. http://dx.doi.org/10.3390/info12120516.

Abstract:
From the perspective of energy providers, accurate short-term load forecasting plays a significant role in the energy generation plan, efficient energy distribution process and electricity price strategy optimisation. However, it is hard to achieve a satisfactory result because the historical data is irregular, non-smooth, non-linear and noisy. To handle these challenges, in this work, we introduce a novel model based on the Transformer network to provide an accurate day-ahead load forecasting service. Our model contains a similar-day selection approach involving the LightGBM and k-means algorithms. Compared to the traditional RNN-based model, our proposed model can avoid falling into local minima and performs better in the global search. To evaluate the performance of our proposed model, we set up a series of simulation experiments based on energy consumption data in Australia. Our model achieves an average MAPE (mean absolute percentage error) of 1.13, compared with 4.18 for the RNN and 1.93 for the LSTM.
22

Bureika, Gintautas, and Rimantas Subačius. "MATHEMATICAL MODEL OF DYNAMIC INTERACTION BETWEEN WHEEL-SET AND RAIL TRACK." TRANSPORT 17, no. 2 (April 30, 2002): 46–51. http://dx.doi.org/10.3846/16483480.2002.10414010.

Abstract:
The main goal of this paper is to show how the effects on maximum bending tensions at different locations in the track, caused by simultaneous changes of the various parameters, can be estimated in a rational manner. The dynamics of vertical interaction between a moving rigid wheel and a flexible railway track are investigated. A round and smooth wheel tread and an initially straight and non-corrugated rail surface are assumed in the present optimisation study. An asymmetric linear three-dimensional beam structure model of a finite length of the track is suggested, including rail, pads, sleepers and ballast with spatially non-proportional damping. Transient bending tensions in sleepers and rail are calculated. The influence of eight selected track parameters on the dynamic behaviour of the track is investigated. A two-level fractional factorial design method is used in the search for a combination of numerical levels of these parameters that minimises the maximum bending tensions. Finally, the main conclusions are given.
23

Zhao, Dong, and Hao Guo. "A Trajectory Planning Method for Polishing Optical Elements Based on a Non-Uniform Rational B-Spline Curve." Applied Sciences 8, no. 8 (August 12, 2018): 1355. http://dx.doi.org/10.3390/app8081355.

Abstract:
Optical polishing can accurately correct the surface error by controlling the dwell time of the polishing tool on the element surface. Thus, the precision of the trajectory and the dwell time (the runtime of the trajectory) are important factors affecting the polishing quality. This study introduces a systematic interpolation method for optical polishing using a non-uniform rational B-spline (NURBS). A numerical method for solving all the control points of the NURBS was proposed with the help of successive over-relaxation (SOR) iterative theory, to overcome the problem of large computation. Then, an optimisation algorithm was applied to smooth the NURBS by taking the shear jerk as the evaluation index. Finally, a trajectory interpolation scheme was investigated to guarantee the precision of the trajectory runtime. The experiments on a prototype showed that, compared to the linear interpolation method, there was an order-of-magnitude improvement in interpolation and runtime errors. Correspondingly, the convergence rate of the surface error of elements improved from 37.59% to 44.44%.
24

Ross, Snizhana, Arttu Arjas, Ilkka I. Virtanen, Mikko J. Sillanpää, Lassi Roininen, and Andreas Hauptmann. "Hierarchical deconvolution for incoherent scatter radar data." Atmospheric Measurement Techniques 15, no. 12 (June 28, 2022): 3843–57. http://dx.doi.org/10.5194/amt-15-3843-2022.

Abstract:
We propose a novel method for deconvolving incoherent scatter radar data to recover accurate reconstructions of backscattered powers. The problem is modelled as a hierarchical noise-perturbed deconvolution problem, where the lower hierarchy consists of an adaptive length-scale function that allows for a non-stationary prior and as such enables adaptive recovery of smooth and narrow layers in the profiles. The estimation is done in a Bayesian statistical inversion framework as a two-step procedure, where hyperparameters are first estimated by optimisation, followed by an analytical closed-form solution of the deconvolved signal. The proposed optimisation-based method is compared to a fully probabilistic approach using Markov chain Monte Carlo techniques, enabling additional uncertainty quantification. In this paper we examine the potential of the hierarchical deconvolution approach using two different prior models for the length-scale function. We apply the developed methodology to compute the backscattered powers of measured polar mesospheric winter echoes, as well as summer echoes, from the EISCAT VHF radar in Tromsø, Norway. Computational accuracy and performance are tested using a simulated signal corresponding to a typical background ionosphere and a sporadic E layer with known ground truth. The results suggest that the proposed hierarchical deconvolution approach can recover accurate and clean reconstructions of profiles, and has the potential to be successfully applied to similar problems.
25

Yirijor, John, and Nana Asabere Siaw-Mensah. "Design and Optimisation of Horizontal Axis Wind Turbine Blades Using Biomimicry of Whale Tubercles." Journal of Engineering Research and Reports 25, no. 5 (July 5, 2023): 100–112. http://dx.doi.org/10.9734/jerr/2023/v25i5915.

Abstract:
Wind speed is the major factor in generating power in a wind turbine. However, due to the non-optimum and redundant design of wind turbine blades, not nearly enough wind is captured for utilisation. In the present study, modifications were made to the leading edge of the HAWT blade using tubercles, showing their effects on aerodynamic performance. The following results were found concerning the performance of HAWTs with leading-edge tubercles: blades with tubercles on the leading edge have superior performance in the post-stall regime by 27%; tubercles with a smaller amplitude and lower wavelength produce higher lift and lower drag in low wind speed conditions; and tubercle blades have a stable and smooth performance in varying wind speed conditions, producing higher torque and power at low wind speed. Using a small wind turbine model, SolidWorks Motion Analysis Simulation was used for dynamic modelling to evaluate and determine the force and torque of the mechanical structure. These results were compared and examined against standard wind turbine blades, showing an improvement of 30% in efficiency.
26

Thomson, R. J. "Non-Parametric Likelihood Enhancements to Parametric Graduations." British Actuarial Journal 5, no. 1 (April 1, 1999): 197–236. http://dx.doi.org/10.1017/s1357321700000428.

Abstract:
Parametric graduation may fail to achieve satisfactory results without overparameterisation. Whittaker-Henderson graduation tends to constrain the graduated values towards a low-order polynomial. Non-parametric methods do not generally make direct use of true likelihood functions. This paper suggests a method of enhancing the likelihood of a parametric graduation by means of non-parametric methods, thus reducing the disadvantages of both methods. The parametric graduation is taken to be ideally smooth by definition and is adjusted by using constrained maximum likelihood estimation to obtain better fidelity to the experience. The constraint imposes a minimum sacrifice of smoothness, in terms of a quantitative smoothness criterion, from the initial ideal. The method is not entirely objective in that, in some cases, professional judgement is required in order to assess the degree of smoothness that can be imposed. In other cases the method provides an objective optimum. In either case, by quantifying the degrees of departure from perfect fidelity and from ideal smoothness, the suggested method provides useful and theoretically sound criteria for the purposes of the optimisation process. In particular, by inverting the parametric graduation formula for the purposes of defining the smoothness criterion, the method ensures that the smoothness criterion is consistent over the whole age range, thus resolving the main objection to non-parametric graduation. The method is applied to the 1979-82 experience for life office pensioners in the United Kingdom with positive results.
27

Penney, R. W. "Collision avoidance within flight dynamics constraints for UAV applications." Aeronautical Journal 109, no. 1094 (April 2005): 193–99. http://dx.doi.org/10.1017/s0001924000000695.

Abstract:
Avoiding collisions with other aircraft is an absolutely fundamental capability for semi-autonomous UAVs. However, an aircraft avoiding moving obstacles requires an evasive tactic that is simultaneously very quick to compute, compatible with the platform's flight dynamics, and able to deal with the subtle spatio-temporal features of the threat. We give an overview of a novel prototype method for rapidly generating smooth flight-paths constrained to avoid moving obstacles, using an efficient trajectory-optimisation technique. Obstacles are described in terms of simple geometrical shapes, such as ellipsoids, whose centres and shapes can vary with time. The technique generates a spatio-temporal trajectory which offers a high likelihood of avoiding the volume in space-time excluded by the predicted motion of each of the known obstacles. Such a flight-path could then be passed to the aircraft's flight-control systems to negotiate the threat posed by the obstacles. Results from a demonstration implementation of the collision-avoidance technique are discussed, including non-trivial scenarios handled well within 100 ms on a 300 MHz processor.
28

Engwirda, Darren. "JIGSAW-GEO (1.0): locally orthogonal staggered unstructured grid generation for general circulation modelling on the sphere." Geoscientific Model Development 10, no. 6 (June 6, 2017): 2117–40. http://dx.doi.org/10.5194/gmd-10-2117-2017.

Abstract. An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi–Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling is presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.
29

Benreguig, Pierre, James Kelly, Vikram Pakrashi and Jimmy Murphy. "Wave-to-Wire Model Development and Validation for Two OWC Type Wave Energy Converters". Energies 12, no. 20 (18 October 2019): 3977. http://dx.doi.org/10.3390/en12203977.

The Tupperwave device is a closed-circuit oscillating water column (OWC) wave energy converter that uses non-return valves and two large fixed-volume accumulator chambers to create a smooth unidirectional air flow, harnessed by a unidirectional turbine. In this paper, the relevance of the Tupperwave concept against the conventional OWC concept, which uses a self-rectifying turbine, is investigated. For this purpose, wave-to-wire numerical models of the Tupperwave device and a corresponding conventional OWC device are developed and validated against experimental tests. Both devices have the same floating spar buoy structure and a similar turbine technology. The models include wave-structure hydrodynamic interaction, air turbines and generators, along with their control laws, in order to encompass all power conversion stages from wave to electrical power. Hardware-in-the-loop is used to physically emulate the last power conversion stage from mechanical to electrical power and hence validate the control law and the generator numerical model. The dimensioning methodology for turbines and generators for power optimisation is explained. Eventually, the validated wave-to-wire numerical models of the conventional OWC and the Tupperwave device are used to assess and compare the performance of these two OWC-type wave energy device concepts in the same wave climate. The benefits of pneumatic power smoothing by the Tupperwave device are discussed and the required efficiency of the non-return valves is investigated.
30

Skhosana, Sphiwe B., Salomon M. Millard and Frans H. J. Kanfer. "A Novel EM-Type Algorithm to Estimate Semi-Parametric Mixtures of Partially Linear Models". Mathematics 11, no. 5 (22 February 2023): 1087. http://dx.doi.org/10.3390/math11051087.

Semi- and non-parametric mixtures of normal regression models are a flexible class of mixture of regression models. These models assume that the component mixing proportions, regression functions and/or variances are non-parametric functions of the covariates. Among this class of models, the semi-parametric mixture of partially linear models (SPMPLMs) combine the desirable interpretability of a parametric model and the flexibility of a non-parametric model. However, local-likelihood estimation of the non-parametric term poses a computational challenge. Traditional EM optimisation of the local-likelihood functions is not appropriate due to the label-switching problem. Separately applying the EM algorithm on each local-likelihood function will likely result in non-smooth function estimates. This is because the local responsibilities calculated at the E-step of each local EM are not guaranteed to be aligned. To prevent this, the EM algorithm must be modified so that the same (global) responsibilities are used at each local M-step. In this paper, we propose a one-step backfitting EM-type algorithm to estimate the SPMPLMs and effectively address the label-switching problem. The proposed algorithm estimates the non-parametric term using each set of local responsibilities in turn and then incorporates a smoothing step to obtain the smoothest estimate. In addition, to reduce the computational burden imposed by the use of the partial-residuals estimator of the parametric term, we propose a plug-in estimator. The performance and practical usefulness of the proposed methods were tested using a simulated dataset and two real datasets, respectively. Our finite sample analysis revealed that the proposed methods are effective at solving the label-switching problem and producing reasonable and interpretable results in a reasonable amount of time.
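The label-switching issue described in this abstract arises because responsibilities computed separately in each local E-step need not align. The underlying E-step/M-step mechanics, and the role of a single shared set of responsibilities, can be sketched for a plain two-component normal mixture rather than the authors' SPMPLM (data, initial values and iteration count below are invented):

```python
import math, random

# E-step / M-step mechanics for a two-component normal mixture.  The point
# relevant to the abstract: ONE set of global responsibilities from the E-step
# is reused in every weighted M-step update, so component labels cannot swap
# between the individual fits.

def em_two_normals(x, iters=200):
    pi, mu1, mu2, s = 0.5, min(x), max(x), 1.0
    for _ in range(iters):
        # E-step: global responsibilities r_i = P(component 1 | x_i)
        r = []
        for xi in x:
            p1 = pi * math.exp(-0.5 * ((xi - mu1) / s) ** 2)
            p2 = (1.0 - pi) * math.exp(-0.5 * ((xi - mu2) / s) ** 2)
            r.append(p1 / (p1 + p2))
        # M-step: every update below reuses the SAME responsibilities
        n1 = sum(r)
        pi = n1 / len(x)
        mu1 = sum(ri * xi for ri, xi in zip(r, x)) / n1
        mu2 = sum((1.0 - ri) * xi for ri, xi in zip(r, x)) / (len(x) - n1)
        var = sum(ri * (xi - mu1) ** 2 + (1.0 - ri) * (xi - mu2) ** 2
                  for ri, xi in zip(r, x)) / len(x)
        s = math.sqrt(var)
    return pi, mu1, mu2, s

random.seed(0)
data = ([random.gauss(0.0, 0.5) for _ in range(200)]
        + [random.gauss(4.0, 0.5) for _ in range(200)])
pi, mu1, mu2, s = em_two_normals(data)
```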
31

George, Nishkal, and Boppana V. Chowdary. "Mitigation of Design Issues in Development of Anatomical Models Using Rapid Prototyping". West Indian Journal of Engineering 45, no. 1 (July 2022): 13–21. http://dx.doi.org/10.47412/upfp4130.

The literature indicates that the application of rapid prototyping (RP) has become more widespread in the design and development of human anatomy models. Practitioners face challenges in deploying RP tools for the development of cost-effective medical models, because there are no proven decision support systems for the selection of parameters such as speed, accuracy, materials, and customisation of commercial software. This study aims at alleviating some of these issues by exploring the use of a Genetic Algorithm (GA) approach combined with computer-aided design (CAD) and fused deposition modeling (FDM) techniques. Experiments were conducted using response surface methodology (RSM) to facilitate the optimisation process, with build time and model material volume as responses. The study was validated with a patella model, and the results verified the effectiveness of the proposed RSM-GA approach in the design and development of the anatomical model. The results showed a 27% saving in model material compared to a non-refined model, and the model was deemed satisfactory for practical use as there was a reduction in irregularities from the CT data. The study also reveals that the parameter hollow has the largest effect on the responses, followed by the smooth parameter and then the wrap parameter.
32

İvedi, İsmail, Bahadır Güneşoğlu, Sinem Yaprak Karavana, Gülşah Ekin Kartal, Gökhan Erkan and Ayşe Merih Sarıışık. "Using Spraying as an Alternative Method for Transferring Capsules Containing Shea Butter to Denim and Non-Denim Fabrics: Preparation of microcapsules for delivery of active ingredients". Johnson Matthey Technology Review 66, no. 1 (11 January 2022): 90–102. http://dx.doi.org/10.1595/205651322x16376750190432.

The aim of this study was to prepare microcapsules and transfer them to denim and non-denim trousers using different application methods. For this purpose, shea butter as the active agent was encapsulated in an ethyl cellulose shell using the spray dryer method, and capsule optimisation was studied. A morphological assessment showed that the capsules had a smooth surface and were spherical in shape. The homogeneous size distribution of the capsules was supported by laser diffraction analysis. The capsules showed a narrow size distribution, and the mean particle size of optimum formulations of shea butter was 390 nm. Denim fabrics were treated with shea butter capsules using the methods of exhaustion and spraying in order to compare these application methods. The presence of capsules on the fabrics was tested after five wash cycles. The comparison of application methods found similar preferred characteristics for both the exhaustion and spraying methods. However, the spraying method was found to be more sustainable, because it allows working with low liquor ratios in less water, with lower chemical consumption and less waste than the exhaustion method, which requires working with a high liquor ratio. This study showed that the spraying method can be used as an alternative to other application methods in the market for reducing energy consumption, and shea butter capsules can provide moisturising properties to the fabrics.
33

Agarwal, Shweta, Rayasa S. Ramachandra Murthy, Sasidharan Leelakumari Harikumar and Rajeev Garg. "Quality by Design Approach for Development and Characterisation of Solid Lipid Nanoparticles of Quetiapine Fumarate". Current Computer-Aided Drug Design 16, no. 1 (6 January 2020): 73–91. http://dx.doi.org/10.2174/1573409915666190722122827.

Background: Quetiapine fumarate, a 2nd generation anti-psychotic drug, has an oral bioavailability of 9% because of hepatic first pass metabolism. Reports suggest that co-administration of drugs with lipids affects their absorption pathways and enhances lymphatic transport, thus bypassing hepatic first-pass metabolism and resulting in enhanced bioavailability. Objective: The present work aimed at developing and characterising potentially lymphatically absorbable Solid Lipid Nanoparticles (SLN) of quetiapine fumarate by a Quality by Design approach. Method: Hot emulsification followed by ultrasonication was used as the method of preparation. Precirol ATO5, Phospholipon 90G and Poloxamer 188 were used as the lipid, stabilizer and surfactant, respectively. A 3² Central Composite design optimised the 2 independent variables, lipid concentration and stabilizer concentration, and assessed their effect on percent Entrapment Efficiency (%EE: Y1). The lyophilized SLNs were studied for stability at 5 ± 3°C and 25 ± 2°C/60 ± 5% RH for 3 months. Results: The optimised formula derived for the SLN had 270 mg of Precirol ATO5 and 107 mg of Phospholipon 90G, giving a %EE of 76.53%. The mean particle size was 159.8 nm with a polydispersity index of 0.273 and a zeta potential of −6.6 mV. In-vitro drug release followed Korsmeyer-Peppas kinetics (R² = 0.917) with release exponent n = 0.722, indicating non-Fickian diffusion. Transmission electron microscopy images exhibited particles to be spherical and smooth. Fourier-transform infrared spectroscopy, differential scanning calorimetry and X-ray diffraction studies ascertained drug-excipient compatibility. Stability studies suggested 5°C as the appropriate storage temperature for preserving important characteristics within acceptable limits. Conclusion: Development and optimisation by Quality by Design were justified, as they yielded SLN having acceptable characteristics and potential application for intestinal lymphatic transport.
34

Fercoq, Olivier. "A generic coordinate descent solver for non-smooth convex optimisation". Optimization Methods and Software, 27 August 2019, 1–21. http://dx.doi.org/10.1080/10556788.2019.1658758.

35

Fornasier, Massimo, Peter Richtárik, Konstantin Riedl and Lukang Sun. "Consensus-based optimisation with truncated noise". European Journal of Applied Mathematics, 5 April 2024, 1–24. http://dx.doi.org/10.1017/s095679252400007x.

Abstract Consensus-based optimisation (CBO) is a versatile multi-particle metaheuristic optimisation method suitable for performing non-convex and non-smooth global optimisations in high dimensions. It has proven effective in various applications while at the same time being amenable to a theoretical convergence analysis. In this paper, we explore a variant of CBO, which incorporates truncated noise in order to enhance the well-behavedness of the statistics of the law of the dynamics. By introducing this additional truncation in the noise term of the CBO dynamics, we achieve that, in contrast to the original version, higher moments of the law of the particle system can be effectively bounded. As a result, our proposed variant exhibits enhanced convergence performance, allowing in particular for wider flexibility in choosing the noise parameter of the method, as we confirm experimentally. By analysing the time evolution of the Wasserstein-2 distance between the empirical measure of the interacting particle system and the global minimiser of the objective function, we rigorously prove convergence in expectation of the proposed CBO variant, requiring only minimal assumptions on the objective function and on the initialisation. Numerical evidence demonstrates the benefit of truncating the noise in CBO.
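A discrete-time sketch of CBO dynamics with truncated noise, illustrating the structure described above (particle drift towards a Gibbs-weighted consensus point plus noise scaled by the distance to it, with the Gaussian increments clipped). All parameters, the truncation level `M` and the test function are invented; this is not the paper's analysed variant:

```python
import math, random

# Toy consensus-based optimisation with truncated noise on a non-convex,
# non-smooth test function.  As alpha -> inf the consensus point v tends to
# the position of the best particle.

def cbo_minimise(f, dim=2, n=50, steps=400, dt=0.05,
                 lam=1.0, sigma=0.7, alpha=30.0, M=2.0):
    random.seed(1)
    xs = [[random.uniform(-3, 3) for _ in range(dim)] for _ in range(n)]
    v = xs[0]
    for _ in range(steps):
        # Gibbs-weighted consensus point (fmin subtracted for stability)
        fmin = min(f(x) for x in xs)
        w = [math.exp(-alpha * (f(x) - fmin)) for x in xs]
        W = sum(w)
        v = [sum(w[i] * xs[i][d] for i in range(n)) / W for d in range(dim)]
        for i in range(n):
            dist = math.sqrt(sum((xs[i][d] - v[d]) ** 2 for d in range(dim)))
            for d in range(dim):
                z = random.gauss(0.0, 1.0)
                z = max(-M, min(M, z))          # truncate the noise
                drift = -lam * (xs[i][d] - v[d]) * dt
                noise = sigma * dist * math.sqrt(dt) * z
                xs[i][d] += drift + noise
    return v

# non-convex, non-smooth objective with global minimum at the origin
f = lambda x: sum(abs(xi) + 0.5 * (1 - math.cos(3 * xi)) for xi in x)
v = cbo_minimise(f)
```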
36

Zhong, Tianyi, and David Angeli. "A Cutting Plane-Based Distributed Algorithm for Non-Smooth Optimisation With Coupling Constraints". IEEE Control Systems Letters, 2024, 1. http://dx.doi.org/10.1109/lcsys.2024.3408754.

37

Staudigl, Mathias, and Paulin Jacquot. "Random block-coordinate methods for inconsistent convex optimisation problems". Fixed Point Theory and Algorithms for Sciences and Engineering 2023, no. 1 (6 November 2023). http://dx.doi.org/10.1186/s13663-023-00751-0.

We develop a novel randomised block-coordinate primal-dual algorithm for a class of non-smooth ill-posed convex programs. Lying midway between the celebrated Chambolle–Pock primal-dual algorithm and Tseng's accelerated proximal gradient method, we establish global convergence of the last iterate as well as optimal $O(1/k)$ and $O(1/k^{2})$ complexity rates in the convex and strongly convex case, respectively, k being the iteration count. Motivated by the increased complexity in the control of distribution-level electric-power systems, we test the performance of our method on a second-order cone relaxation of an AC-OPF problem. Distributed control is achieved via the distributed locational marginal prices (DLMPs), which are obtained as dual variables in our optimisation framework.
38

Mistri, S. R., C. S. Yerramalli, R. S. Pant and A. Guha. "Methodology for shape prediction and conversion of a conventional aerofoil to an inflatable baffled aerofoil". Aeronautical Journal, 3 November 2023, 1–34. http://dx.doi.org/10.1017/aer.2023.98.

Abstract Inflatable wings for UAVs are useful where storage space is a severe constraint. Literature in the field of inflatable wings often assumes an inflated aerofoil shape for various analyses. However, the flexible inflatable aerofoil fabric might deform to another equilibrium shape upon inflation. Hence accurate shape prediction of the inflated aerofoil is vital. Further, no standardised nomenclature or a process to convert a smooth aerofoil into its corresponding inflatable aerofoil counterpart is available. This paper analytically predicts the equilibrium shape of any inflatable aerofoil and validates the analytical prediction using non-linear finite element methods. Further, a scheme for the generation of two types of inflatable aerofoils is presented. Parameters such as the number and position of compartments and aerofoil length ratio (ALR) are identified as necessary to define the aerofoil’s shape fully. A process to minimise the deviation of the inflatable aerofoil from its original smooth aerofoil using particle swarm optimisation (PSO) is discussed. Research presented in this paper can help in performing various analyses on the actual equilibrium shape of the aerofoil.
39

Antczak, Tadeusz, and Kalpana Shukla. "Optimality and duality results for non-smooth vector optimisation problems with K-V-type I functions via local cone approximations". International Journal of Mathematics in Operational Research 1, no. 1 (2023). http://dx.doi.org/10.1504/ijmor.2023.10064590.

40

Holford, Jacob J., Myoungkyu Lee and Yongyun Hwang. "Optimal white-noise stochastic forcing for linear models of turbulent channel flow". Journal of Fluid Mechanics 961 (24 April 2023). http://dx.doi.org/10.1017/jfm.2023.234.

In the present study an optimisation problem is formulated to determine the forcing of an eddy-viscosity-based linearised Navier–Stokes model in channel flow at $Re_\tau \approx 5200$ ($Re_\tau$ is the friction Reynolds number), where the forcing is white-in-time and spatially decorrelated. The objective functional is prescribed such that the forcing drives a response to best match a set of velocity spectra from direct numerical simulation (DNS), as well as remaining sufficiently smooth. Strong quantitative agreement is obtained between the velocity spectra from the linear model with optimal forcing and from DNS, but only qualitative agreement between the Reynolds shear stress co-spectra from the model and DNS. The forcing spectra exhibit a level of self-similarity, associated with the primary peak in the velocity spectra, but they also reveal a non-negligible amount of energy spent in phenomenologically mimicking the non-self-similar part of the velocity spectra associated with energy cascade. By exploiting linearity, the effect of the individual forcing components is assessed and the contributions from the Orr mechanism and the lift-up effect are also identified. Finally, the effect of the strength of the eddy viscosity on the optimisation performance is investigated. The inclusion of the eddy viscosity diffusion operator is shown to be essential in modelling of the near-wall features, while still allowing the forcing of the self-similar primary peak. In particular, reducing the strength of the eddy viscosity results in a considerable increase in the near-wall forcing of wall-parallel components.
41

Betcke, Marta M., and Carola-Bibiane Schönlieb. "Mathematics of biomedical imaging today - a perspective". Progress in Biomedical Engineering, 26 May 2023. http://dx.doi.org/10.1088/2516-1091/acd973.

Abstract Biomedical imaging is a fascinating, rich and dynamic research area, which has huge importance in biomedical research and clinical practice alike. The key technology behind the processing, and automated analysis and quantification of imaging data is mathematics. Starting with the optimisation of the image acquisition and the reconstruction of an image from indirect tomographic measurement data, all the way to the automated segmentation of tumours in medical images and the design of optimal treatment plans based on image biomarkers, mathematics appears in all of these in different flavours: non-smooth optimisation in the context of sparsity-promoting image priors, partial differential equations for image registration and motion estimation, and deep neural networks for image segmentation, to name just a few. In this article, we present and review mathematical topics that arise within the whole biomedical imaging pipeline, from tomographic measurements to clinical support tools, and highlight some modern topics and open problems. The article is addressed to both biomedical researchers who want to get a taste of where mathematics arises in biomedical imaging as well as mathematicians who are interested in what mathematical challenges biomedical imaging research entails.
42

Latz, Jonas. "Gradient flows and randomised thresholding: sparse inversion and classification". Inverse Problems, 19 October 2022. http://dx.doi.org/10.1088/1361-6420/ac9b84.

Abstract Sparse inversion and classification problems are ubiquitous in modern data science and imaging. They are often formulated as non-smooth minimisation problems. In sparse inversion, we minimise, e.g., the sum of a data fidelity term and an L1/LASSO regulariser. In classification, we consider, e.g., the sum of a data fidelity term and a non-smooth Ginzburg–Landau energy. Standard (sub)gradient descent methods have been shown to be inefficient when approaching such problems. Splitting techniques are much more useful: here, the target function is partitioned into a sum of two subtarget functions, each of which can be efficiently optimised. Splitting proceeds by performing optimisation steps alternately with respect to each of the two subtarget functions. In this work, we study splitting from a stochastic continuous-time perspective. Indeed, we define a differential inclusion that follows the negative subdifferential of one of the two subtarget functions at each point in time. The choice of the subtarget function is controlled by a binary continuous-time Markov process. The resulting dynamical system is a stochastic approximation of the underlying subgradient flow. We investigate this stochastic approximation for an L1-regularised sparse inversion flow and for a discrete Allen–Cahn equation minimising a Ginzburg–Landau energy. In both cases, we study the longtime behaviour of the stochastic dynamical system and its ability to approximate the underlying subgradient flow at any accuracy. We illustrate our theoretical findings in a simple sparse estimation problem and also in low- and high-dimensional classification problems.
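The splitting idea summarised in this abstract can be illustrated with forward-backward splitting (ISTA) for the L1-regularised sparse inversion model: a gradient step on the smooth fidelity term alternates with the L1 proximal (soft-thresholding) step. A small dense example in pure Python (the matrix, data and regularisation weight are invented):

```python
# ISTA for  min_x  0.5 * ||A x - b||^2 + lam * ||x||_1 : gradient step on the
# smooth term, then the L1 prox (soft-thresholding), which produces exact zeros.

def soft(v, t):
    # proximal operator of t*|.|
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def ista(A, b, lam=0.1, steps=2000):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    # crude step-size bound: sum of all |a_ij| >= ||A^T A||_2 here (entries <= 1)
    L = sum(sum(abs(a) for a in row) for row in A)
    for _ in range(steps):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]   # A^T r
        x = [soft(x[j] - g[j] / L, lam / L) for j in range(n)]
    return x

A = [[1.0, 0.2, 0.0],
     [0.0, 1.0, 0.2],
     [0.2, 0.0, 1.0]]
b = [1.0, 0.0, 0.2]     # generated by the sparse signal [1, 0, 0]
x = ista(A, b, lam=0.1)
```

For this data the LASSO solution is sparse with a slightly shrunk first coefficient, and the thresholding step recovers the zero pattern exactly.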
43

Zhang, Hengmin, Jian Yang, Bob Zhang, Yang Tang, Wenli Du and Bihan Wen. "Enhancing generalized spectral clustering with embedding Laplacian graph regularization". CAAI Transactions on Intelligence Technology, 18 March 2024. http://dx.doi.org/10.1049/cit2.12308.

An enhanced generalised spectral clustering framework that addresses the limitations of existing methods by incorporating the Laplacian graph and group effect into a regularisation term is presented. By doing so, the framework significantly enhances discrimination power and proves highly effective in handling noisy data. Its versatility enables its application to various clustering problems, making it a valuable contribution to unsupervised learning tasks. To optimise the proposed model, the authors have developed an efficient algorithm that utilises the standard Sylvester equation to compute the coefficient matrix. By setting the derivatives to zero, computational efficiency is maintained without compromising accuracy. Additionally, the authors have introduced smoothing strategies to make the non-convex and non-smooth terms differentiable. This enables the use of an alternative iteration re-weighted procedure (AIwRP), which distinguishes itself from other first-order optimisation algorithms by introducing auxiliary variables. The authors provide a provable convergence analysis of AIwRP based on the iteration procedures of unconstrained problems to support its effectiveness. Extensive numerical tests have been conducted on synthetic and benchmark databases to validate the superiority of their approaches. The results demonstrate improved clustering performance and computational efficiency compared to several existing spectral clustering methods, further reinforcing the advantages of the proposed framework. The source code is available at https://github.com/ZhangHengMin/LGR_LSRLRR.
44

Leek, Francesca, Cameron Anderson, Andrew P. Robinson, Robert M. Moss, Joanna C. Porter, Helen S. Garthwaite, Ashley M. Groves, Brian F. Hutton and Kris Thielemans. "Optimisation of the air fraction correction for lung PET/CT: addressing resolution mismatch". EJNMMI Physics 10, no. 1 (5 December 2023). http://dx.doi.org/10.1186/s40658-023-00595-y.

Abstract Background Increased pulmonary $^{18}$F-FDG metabolism in patients with idiopathic pulmonary fibrosis, and other forms of diffuse parenchymal lung disease, can predict measurements of health and lung physiology. To improve PET quantification, voxel-wise air fractions (AF) determined from CT can be used to correct for variable air content in lung PET/CT. However, resolution mismatches between PET and CT can cause artefacts in the AF-corrected image. Methods Three methodologies for determining the optimal kernel to smooth the CT are compared with noiseless simulations and non-TOF MLEM reconstructions of a patient-realistic digital phantom: (i) the point source insertion-and-subtraction method, $h_{pts}$; (ii) AF-correcting with varyingly smoothed CT to achieve the lowest RMSE with respect to the ground truth (GT) AF-corrected volume of interest (VOI), $h_{AFC}$; (iii) smoothing the GT image to match the reconstruction within the VOI, $h_{PVC}$. The methods were evaluated both using VOI-specific kernels, and a single global kernel optimised for the six VOIs combined. Furthermore, $h_{PVC}$ was implemented on thorax phantom data measured on two clinical PET/CT scanners with various reconstruction protocols. Results The simulations demonstrated that at <200 iterations (200i), the kernel width was dependent on iteration number and VOI position in the lung. The $h_{pts}$ method estimated a lower, more uniform, kernel width in all parts of the lung investigated. However, all three methods resulted in approximately equivalent AF-corrected VOI RMSEs (<10%) at ≥200i. The insensitivity of AF-corrected quantification to kernel width suggests that a single global kernel could be used. For all three methodologies, the computed global kernel resulted in an AF-corrected lung RMSE <10% at ≥200i, while larger lung RMSEs were observed for the VOI-specific kernels. The global kernel approach was then employed with the $h_{PVC}$ method on measured data. The optimally smoothed GT emission matched the reconstructed image well, both within the VOI and the lung background. VOI RMSE was <10%, pre-AFC, for all reconstructions investigated. Conclusions Simulations for non-TOF PET indicated that around 200 iterations were needed to approach image resolution stability in the lung. In addition, at this iteration number, a single global kernel for AFC, determined from several VOIs, performed well over the whole lung. The $h_{PVC}$ method has the potential to be used to determine the kernel for AFC from scans of phantoms on clinical scanners.
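The basic air-fraction correction that this kernel optimisation serves can be sketched along one image line: the air fraction is estimated from CT Hounsfield units (air ≈ −1000 HU, soft tissue ≈ 0 HU), the CT is smoothed towards the PET resolution, and the PET value is divided by the tissue fraction. The kernel weights and data below are invented; the paper's contribution is how to choose the smoothing kernel, which this sketch takes as given:

```python
# 1-D air-fraction correction (AFC) sketch: AF from smoothed CT, then
# corrected = PET / (1 - AF), with a guard against near-pure-air voxels.

def smooth(values, kernel):
    h = len(kernel) // 2
    out = []
    for i in range(len(values)):
        acc = norm = 0.0
        for k, w in enumerate(kernel):
            j = i + k - h
            if 0 <= j < len(values):
                acc += w * values[j]
                norm += w
        out.append(acc / norm)
    return out

def afc(pet, ct_hu, kernel=(1.0, 2.0, 3.0, 2.0, 1.0)):
    ct_s = smooth(ct_hu, kernel)           # smooth CT towards PET resolution
    corrected = []
    for p, hu in zip(pet, ct_s):
        af = min(max(-hu / 1000.0, 0.0), 1.0)   # air fraction in [0, 1]
        tissue = max(1.0 - af, 0.05)            # avoid division by ~0
        corrected.append(p / tissue)
    return corrected

pet   = [1.0, 0.9, 0.3, 0.25, 0.3, 0.9, 1.0]    # low apparent uptake over lung
ct_hu = [0, -100, -800, -850, -800, -100, 0]    # air-rich lung centre
out = afc(pet, ct_hu)
```

The correction inflates values most where the air fraction is highest, which is exactly why a resolution-mismatched (unsmoothed or oversmoothed) CT produces the artefacts the paper analyses.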
45

Pandey, Varun, Stijn van Dooren, Johannes Ritzmann, Benjamín Pla and Christopher Onder. "Variable smoothing of optimal diesel engine calibration for improved performance and drivability during transient operation". International Journal of Engine Research, 1 June 2020, 146808742091880. http://dx.doi.org/10.1177/1468087420918801.

The model-based method to define the optimal calibration maps for important diesel engine parameters may involve three major steps. First, the engine speed and load domain – in which the engine is operated – are identified. Then, a global engine model is created, which can be used for offline simulations to estimate engine performance. Finally, optimal calibration maps are obtained by formulating and solving an optimisation problem, with the goal of minimising fuel consumption while meeting constraints on pollutant emissions. This last step in the calibration process usually involves smoothing of the maps in order to improve drivability. This article presents a method to trade off map smoothness, brake-specific fuel consumption and nitrogen oxide emissions. After calculating the optimal but potentially non-smooth calibration maps, a variation-based smoothing method is employed to obtain different levels of smoothness by adapting a single tuning parameter. The method was experimentally validated on a heavy-duty diesel engine, and the non-road transient cycle was used as a case study. The error between the reference and actual engine torque was used as a metric for drivability, and the error was found to decrease with increasing map smoothness. After having obtained this trade-off for various fixed levels of smoothness, a time-varying smoothness calibration was generated and tested. Experimental results showed that, with a time-varying smoothness strategy, nitrogen oxide emissions could be reduced by 4%, while achieving the same drivability and fuel consumption as in the case of a fixed smoothing strategy.
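The map-smoothness trade-off described above can be illustrated in one dimension: widening a smoothing kernel lowers a roughness metric but increases the deviation from the optimal, possibly non-smooth, map. The map values and both metrics below are invented; the paper tunes a single smoothing parameter in a comparable way:

```python
# Smoothing a non-smooth "optimal calibration map" with a tunable kernel width:
# wider kernel -> smoother map (better drivability proxy) but larger deviation
# from the optimum (worse fuel/NOx proxy).

def moving_average(m, width):
    h = width // 2
    out = []
    for i in range(len(m)):
        lo, hi = max(0, i - h), min(len(m), i + h + 1)
        out.append(sum(m[lo:hi]) / (hi - lo))
    return out

def roughness(m):
    # sum of squared first differences: a simple smoothness metric
    return sum((m[i + 1] - m[i]) ** 2 for i in range(len(m) - 1))

def deviation(m, ref):
    # squared distance from the optimal map
    return sum((a - b) ** 2 for a, b in zip(m, ref))

optimal = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]   # sharp, non-smooth edge
mild    = moving_average(optimal, 3)                  # light smoothing
strong  = moving_average(optimal, 5)                  # heavy smoothing
```

Sweeping the width traces out the smoothness/performance trade-off curve that the paper explores with its single tuning parameter.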
46

Lu, Zhenguo, Hongbin Wang, Mingyan Wang and Zhiwen Wang. "Improved dark channel priori single image defogging technique using image segmentation and joint filtering". Science Progress 107, no. 1 (January 2024). http://dx.doi.org/10.1177/00368504231221407.

Foggy images affect image analysis and measurement because of their low definition and blurred details. Despite numerous studies on haze in natural images, the recovery effect is not ideal when hazy images contain sky areas. The dark channel prior technique misestimates atmospheric light values and produces halo artefacts on haze images with sky areas, so an improved dark channel prior single-image defogging technique based on image segmentation and joint filtering is proposed. First, an estimation method for the atmospheric illumination value using image segmentation is proposed. The probability density distribution function of the hazy grey image is constructed during image segmentation. This distribution function, the K-means clustering technique and the atmospheric illumination estimation method are combined to improve the image segmentation and to separate sky and non-sky areas in hazy images. Based on the segmentation threshold, the numbers of pixels in the sky and non-sky areas, as well as the normalisation results, are counted to calculate the atmospheric illumination values. Second, to address the halo artefact phenomenon, a method for optimising the image transmittance map using joint filtering is proposed. The image transmittance map is optimised by combining fast guided filtering and weighted least-squares filtering to retain edge information and smooth the gradient changes within each region. Finally, gamma correction and automatic level optimisation are used to improve the brightness and contrast of the defogged images. The experimental results show that the proposed technique can effectively achieve sky segmentation. Compared to the traditional dark channel prior technique, the proposed technique suppresses halo artefacts and improves image detail recovery. Compared to other techniques, the proposed technique exhibits excellent performance in subjective and objective evaluations.
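The dark-channel-prior transmission estimate underlying this technique can be sketched on a tiny image: the dark channel is a local minimum over colour channels and a patch, and the transmission is t = 1 − ω·dark(I/A). The image, patch size, ω and the single atmospheric-light value `A` below are invented; the paper's segmentation-based atmospheric-light estimation is not reproduced:

```python
# Dark channel prior on a 3x3 RGB "image" (nested lists of (r, g, b) tuples
# with values in [0, 1]).  Hazy pixels have a bright dark channel, so their
# estimated transmission is low; dark haze-free pixels keep t near 1.

def dark_channel(img, patch=1):
    h, w = len(img), len(img[0])
    dc = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = []
            for di in range(-patch, patch + 1):
                for dj in range(-patch, patch + 1):
                    if 0 <= i + di < h and 0 <= j + dj < w:
                        vals.append(min(img[i + di][j + dj]))  # min over RGB
            dc[i][j] = min(vals)                               # min over patch
    return dc

def transmission(img, A, omega=0.95, patch=1):
    norm = [[tuple(c / A for c in px) for px in row] for row in img]
    dc = dark_channel(norm, patch)
    return [[1.0 - omega * v for v in row] for row in dc]

img = [[(0.9, 0.9, 0.9), (0.9, 0.85, 0.9), (0.5, 0.4, 0.3)],
       [(0.9, 0.9, 0.85), (0.6, 0.5, 0.4), (0.2, 0.1, 0.1)],
       [(0.5, 0.5, 0.4), (0.2, 0.2, 0.1), (0.05, 0.1, 0.1)]]
t = transmission(img, A=0.95)
```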
47

Wallin, Gabriel, Yunxiao Chen and Irini Moustaki. "DIF Analysis with Unknown Groups and Anchor Items". Psychometrika, 21 February 2024. http://dx.doi.org/10.1007/s11336-024-09948-7.

Ensuring fairness in instruments like survey questionnaires or educational tests is crucial. One way to address this is by a Differential Item Functioning (DIF) analysis, which examines if different subgroups respond differently to a particular item, controlling for their overall latent construct level. DIF analysis is typically conducted to assess measurement invariance at the item level. Traditional DIF analysis methods require knowing the comparison groups (reference and focal groups) and anchor items (a subset of DIF-free items). Such prior knowledge may not always be available, and psychometric methods have been proposed for DIF analysis when one piece of information is unknown. More specifically, when the comparison groups are unknown while anchor items are known, latent DIF analysis methods have been proposed that estimate the unknown groups by latent classes. When anchor items are unknown while comparison groups are known, methods have also been proposed, typically under a sparsity assumption: the number of DIF items is not too large. However, DIF analysis when both pieces of information are unknown has not received much attention. This paper proposes a general statistical framework under this setting. In the proposed framework, we model the unknown groups by latent classes and introduce item-specific DIF parameters to capture the DIF effects. Assuming the number of DIF items is relatively small, an $L_1$-regularised estimator is proposed to simultaneously identify the latent classes and the DIF items. A computationally efficient Expectation-Maximisation (EM) algorithm is developed to solve the non-smooth optimisation problem for the regularised estimator. The performance of the proposed method is evaluated by simulation studies and an application to item response data from a real-world educational test.