Journal articles on the topic "Non-smooth optimisation"

To see other types of publications on this topic, follow the link: Non-smooth optimisation.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 47 journal articles for your research on the topic "Non-smooth optimisation".

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will generate the bibliographic reference to the chosen work automatically, in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, whenever these details are available in the metadata.

Browse journal articles from many different disciplines and organise your bibliography correctly.

1

Dao, Minh Ngoc, Dominikus Noll, and Pierre Apkarian. "Robust eigenstructure clustering by non-smooth optimisation." International Journal of Control 88, no. 8 (March 3, 2015): 1441–55. http://dx.doi.org/10.1080/00207179.2015.1007393.

2

Yao, Zhiqiang, Jinfeng Huang, Shiguo Wang, and Rukhsana Ruby. "Efficient local optimisation‐based approach for non‐convex and non‐smooth source localisation problems." IET Radar, Sonar & Navigation 11, no. 7 (July 2017): 1051–54. http://dx.doi.org/10.1049/iet-rsn.2016.0433.

3

Pothiya, Saravuth, Issarachai Ngamroo, and Waree Kongprawechnon. "Ant colony optimisation for economic dispatch problem with non-smooth cost functions." International Journal of Electrical Power & Energy Systems 32, no. 5 (June 2010): 478–87. http://dx.doi.org/10.1016/j.ijepes.2009.09.016.

4

Sach, Pham Huu, Gue Myung Lee, and Do Sang Kim. "Efficiency and generalised convexity in vector optimisation problems." ANZIAM Journal 45, no. 4 (April 2004): 523–46. http://dx.doi.org/10.1017/s1446181100013547.

Abstract:
This paper gives a necessary and sufficient condition for a Kuhn-Tucker point of a non-smooth vector optimisation problem subject to inequality and equality constraints to be an efficient solution. The main tool we use is an alternative theorem which is quite different to a corresponding result by Xu.
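
For orientation, the following is a generic, hedged sketch of what a Kuhn-Tucker point means for a constrained non-smooth vector problem, written with generalised subdifferentials; the paper's exact formulation, and the generalised-convexity assumptions under which such a point is efficient, should be read from the article itself.

```latex
\[
\begin{aligned}
&\text{Problem: } \min\ f(x) = (f_1(x),\dots,f_p(x))
  \quad \text{s.t. } g_j(x) \le 0,\ j=1,\dots,m, \qquad h_k(x) = 0,\ k=1,\dots,q.\\[4pt]
&\text{A feasible } \bar{x} \text{ is a Kuhn--Tucker point if there exist }
  \lambda \in \mathbb{R}^{p}_{+}\setminus\{0\},\ \mu \in \mathbb{R}^{m}_{+},\ \nu \in \mathbb{R}^{q}
  \text{ with}\\
&\qquad 0 \in \partial\Big(\textstyle\sum_{i}\lambda_i f_i\Big)(\bar{x})
  + \textstyle\sum_{j}\mu_j\,\partial g_j(\bar{x})
  + \textstyle\sum_{k}\nu_k\,\partial h_k(\bar{x}),
  \qquad \mu_j\, g_j(\bar{x}) = 0 \ \ \text{for all } j,
\end{aligned}
\]
% where \partial denotes a suitable generalised (e.g. Clarke) subdifferential.
```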
5

Pecci, Filippo, Edo Abraham, and Ivan Stoianov. "Quadratic head loss approximations for optimisation problems in water supply networks." Journal of Hydroinformatics 19, no. 4 (April 17, 2017): 493–506. http://dx.doi.org/10.2166/hydro.2017.080.

Abstract:
This paper presents a novel analysis of the accuracy of quadratic approximations for the Hazen–Williams (HW) head loss formula, which enables the control of constraint violations in optimisation problems for water supply networks. The two smooth polynomial approximations considered here minimise the absolute and relative errors, respectively, from the original non-smooth HW head loss function over a range of flows. Since quadratic approximations are used to formulate head loss constraints for different optimisation problems, we are interested in quantifying and controlling their absolute errors, which affect the degree of constraint violations of feasible candidate solutions. We derive new exact analytical formulae for the absolute errors as a function of the approximation domain, pipe roughness and relative error tolerance. We investigate the efficacy of the proposed quadratic approximations in mathematical optimisation problems for advanced pressure control in an operational water supply network. We propose a strategy on how to choose the approximation domain for each pipe such that the optimisation results are sufficiently close to the exact hydraulically feasible solution space. By using simulations with multiple parameters, the approximation errors are shown to be consistent with our analytical predictions.
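
As an illustrative aside (not taken from the paper, which derives exact analytical error formulae), here is a minimal least-squares sketch of a quadratic surrogate for the Hazen-Williams head loss curve over an assumed flow range, with the resulting worst-case absolute error. The resistance coefficient and flow domain below are made-up placeholder values.

```python
import numpy as np

# Approximate the Hazen-Williams head loss h(q) = r * q**1.852 by a quadratic
# a*q**2 + b*q over a flow range, then check the worst absolute error.
r = 1.2e-3                        # hypothetical pipe resistance coefficient
q = np.linspace(0.01, 0.10, 200)  # assumed flow range in m^3/s
h = r * q**1.852                  # Hazen-Williams head loss (q >= 0)

# Least-squares fit of h ~ a*q^2 + b*q (no constant term, so the fit passes through 0).
A = np.column_stack([q**2, q])
(a, b), *_ = np.linalg.lstsq(A, h, rcond=None)

max_abs_err = np.max(np.abs(a*q**2 + b*q - h))
print(f"a={a:.4e}, b={b:.4e}, max |error| = {max_abs_err:.3e} m")
```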
6

Zhu, Yuteng. "Designing a physically-feasible colour filter to make a camera more colorimetric." London Imaging Meeting 2020, no. 1 (September 29, 2020): 96–99. http://dx.doi.org/10.2352/issn.2694-118x.2020.lim-16.

Abstract:
Previously, a method has been developed to find the best colour filter for a given camera which results in the new effective camera sensitivities that best meet the Luther condition. That is, the new sensitivities are approximately linearly related to the XYZ colour matching functions. However, with no constraint, the filter derived from this Luther-condition based optimisation can be rather non-smooth and transmit very little light which are impractical for fabrication. In this paper, we extend the Luther-condition filter optimisation method to allow us to incorporate both the smoothness and transmittance bounds of the recovered filter which are key practical concerns. Experiments demonstrate that we can find physically realisable filters which are smooth and reasonably transmissive with which the effective 'camera+filter' becomes significantly more colorimetric.
7

Wang, Wei-Xiang, You-Lin Shang, and Ying Zhang. "Finding global minima with a novel filled function for non-smooth unconstrained optimisation." International Journal of Systems Science 43, no. 4 (April 2012): 707–14. http://dx.doi.org/10.1080/00207721.2010.520094.

8

Chen, Shuming, Zhenyu Zhou, and Jixiu Zhang. "Multi-objective optimisation of automobile sound package with non-smooth surface based on grey theory and particle swarm optimisation." International Journal of Vehicle Design 88, no. 2/3/4 (2022): 238. http://dx.doi.org/10.1504/ijvd.2022.127018.

9

Chen, Shuming, Jixiu Zhang, and Zhenyu Zhou. "Multi-objective optimisation of automobile sound package with non-smooth surface based on grey theory and particle swarm optimisation." International Journal of Vehicle Design 88, no. 2/3/4 (2022): 238. http://dx.doi.org/10.1504/ijvd.2022.10052010.

10

Ribeiro, Tiago, Yun-Fei Fu, Luís Bernardo, and Bernard Rolfe. "Topology Optimisation of Structural Steel with Non-Penalisation SEMDOT: Optimisation, Physical Nonlinear Analysis, and Benchmarking." Applied Sciences 13, no. 20 (October 17, 2023): 11370. http://dx.doi.org/10.3390/app132011370.

Abstract:
In this work, Non-penalisation Smooth-Edged Material Distribution for Optimising Topology (np-SEMDOT) algorithm was developed as an alternative to well-established Topology Optimisation (TO) methods based on the solid/void approach. Its novelty lies in its smoother edges and enhanced manufacturability, but it requires validation in a real case study rather than using simplified benchmark problems. To such an end, a Sheikh-Ibrahim steel girder joint’s tension cover plate was optimised with np-SEMDOT, following a methodology designed to ensure compliance with the European design standards. The optimisation was assessed with Physical Nonlinear Finite Element Analyses (PhNLFEA), after recent findings that topologically optimised steel construction joint parts were not accurately modelled with linear analyses to ensure the required highly nonlinear ultimate behaviour. The results prove, on the one hand, that the quality of np-SEMDOT solutions strongly depends on the chosen optimisation parameters, and on the other hand, that the optimal np-SEMDOT solution can equalise the ultimate capacity and can slightly outperform the ultimate displacement of a benchmarking solution using a Solid Isotropic Material with Penalisation (SIMP)-based approach. It can be concluded that np-SEMDOT does not fall short of the prevalent methods. These findings highlight the novelty in this work by validating the use of np-SEMDOT for professional applications.
11

Rao, Mallavolu Malleswara, and Geetha Ramadas. "Multiobjective Improved Particle Swarm Optimisation for Transmission Congestion and Voltage Profile Management using Multilevel UPFC." Power Electronics and Drives 4, no. 1 (June 1, 2019): 79–93. http://dx.doi.org/10.2478/pead-2019-0005.

Abstract:
This paper proposes a multiobjective improved particle swarm optimisation (IPSO) for placing and sizing series modular multilevel converter-based unified power flow controller (MMC-UPFC) FACTS devices to manage transmission congestion and the voltage profile in deregulated electricity markets. The proposed multiobjective IPSO algorithm is well suited to finding near-optimal distributed generation (DG) sizes while delivering smooth convergence characteristics compared with other existing algorithms. It can be concluded that the voltage profile and real power losses show substantial improvements under optimal investment in DGs in both test systems. The proposed system relieves the congestion, and the approach can readily be used to solve complex, non-linear power-system optimisation problems in real time.
12

Nayak, Gopal Krishna, Tapas Kumar Panigrahi, and Arun Kumar Sahoo. "A novel modified random walk grey wolf optimisation approach for non-smooth and non-convex economic load dispatch." International Journal of Innovative Computing and Applications 13, no. 2 (2022): 59. http://dx.doi.org/10.1504/ijica.2022.10047889.

13

Sahoo, Arun Kumar, Tapas Kumar Panigrahi, and Gopal Krishna Nayak. "A novel modified random walk grey wolf optimisation approach for non-smooth and non-convex economic load dispatch." International Journal of Innovative Computing and Applications 13, no. 2 (2022): 59. http://dx.doi.org/10.1504/ijica.2022.123222.

14

Koch, Michael W., and Sigrid Leyendecker. "Structure Preserving Simulation of Monopedal Jumping." Archive of Mechanical Engineering 60, no. 1 (March 1, 2013): 127–46. http://dx.doi.org/10.2478/meceng-2013-0008.

Abstract:
The human environment consists of a large variety of mechanical and biomechanical systems in which different types of contact can occur. In this work, we consider a monopedal jumper modelled as a three-dimensional rigid multibody system with contact and simulate its dynamics using a structure preserving method. The applied mechanical integrator is based on a constrained version of the Lagrange–d'Alembert principle. The resulting variational integrator preserves the symplecticity and momentum maps of the multibody dynamics. To ensure the structure preservation and the geometric correctness, we solve the non-smooth problem including the computation of the contact configuration, time and force instead of relying on a smooth approximation of the contact problem via a penalty potential. In addition to the formulation of non-smooth problems in forward dynamic simulations, we are interested in the optimal control of the monopedal high jump. The optimal control problem is solved using a direct transcription method transforming it into a constrained optimisation problem, see [14].
15

Chambon, Emmanuel, Pierre Apkarian, and Laurent Burlion. "Overview of linear time-invariant interval observer design: towards a non-smooth optimisation-based approach." IET Control Theory & Applications 10, no. 11 (July 18, 2016): 1258–68. http://dx.doi.org/10.1049/iet-cta.2015.0742.

16

Yang, Zhijing, Wei‐Chao Kuang, Bingo Wing‐Kuen Ling, and Qingyun Dai. "Instantaneous magnitudes and instantaneous frequencies of signals with their positivity constraints via non‐smooth non‐convex functional constrained optimisation." IET Signal Processing 10, no. 3 (May 2016): 247–53. http://dx.doi.org/10.1049/iet-spr.2014.0234.

17

Qi, Mingfeng, Lihua Dou, and Bin Xin. "3D Smooth Trajectory Planning for UAVs under Navigation Relayed by Multiple Stations Using Bézier Curves." Electronics 12, no. 11 (May 23, 2023): 2358. http://dx.doi.org/10.3390/electronics12112358.

Abstract:
Navigation relayed by multiple stations (NRMS) is a promising technique that can significantly extend the operational range of unmanned aerial vehicles (UAVs) and hence facilitate the execution of long-range tasks. However, NRMS employs multiple external stations in sequence to guide a UAV to its destination, introducing additional variables and constraints for UAV trajectory planning. This paper investigates the trajectory planning problem for a UAV under NRMS from its initial location to a pre-determined destination while maintaining a connection with one of the stations for safety reasons. Instead of line segments used in prior studies, a piecewise Bézier curve is applied to represent a smooth trajectory in three-dimensional (3D) continuous space, which brings both benefits and complexity. This problem is a bi-level optimisation problem consisting of upper-level station routing and lower-level UAV trajectory planning. A station sequence must be obtained first to construct a flight corridor for UAV trajectory planning while the planned trajectory evaluates it. To tackle this challenging bi-level optimisation problem, a novel efficient decoupling framework is proposed. First, the upper-level sub-problem is solved by leveraging techniques from graph theory to obtain an approximate station sequence. Then, an alternative minimisation-based algorithm is presented to address the non-linear and non-convex UAV trajectory planning sub-problem by optimising the spatial and temporal parameters of the piecewise Bézier curve iteratively. Computational experiments demonstrate the efficiency of the proposed decoupling framework and the quality of the obtained approximate station sequence. Additionally, the alternative minimisation-based algorithm is shown to outperform other non-linear optimisation methods in finding a better trajectory for the UAV within the given computational time.
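
As a small illustrative sketch related to the building block used above (not the authors' planner): evaluating one cubic Bézier segment from four assumed 3D control points, which is how a piecewise Bézier trajectory yields smooth waypoints.

```python
import numpy as np

def bezier_cubic(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameters t in [0, 1] (Bernstein form)."""
    t = np.asarray(t)[:, None]
    return ((1 - t)**3 * p0 + 3*(1 - t)**2 * t * p1
            + 3*(1 - t) * t**2 * p2 + t**3 * p3)

# Hypothetical control points (start, two shaping points, end) in metres.
p0, p1 = np.array([0.0, 0.0, 10.0]), np.array([20.0, 5.0, 12.0])
p2, p3 = np.array([40.0, -5.0, 15.0]), np.array([60.0, 0.0, 15.0])

t = np.linspace(0.0, 1.0, 50)
path = bezier_cubic(p0, p1, p2, p3, t)   # 50 x 3 array of smooth waypoints
print(path[0], path[-1])                 # endpoints coincide with p0 and p3
```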
18

Oshlakov, Victor G., and Anatoly P. Shcherbakov. "Optimisation of a Polarisation Nephelometer." Light & Engineering, no. 02-2021 (April 2021): 87–95. http://dx.doi.org/10.33383/2020-057.

Abstract:
An analysis of the influence caused by polarization nephelometer parameters on the scattering matrix measurement accuracy in a non-isotropic medium is presented. The approximation errors in the actual scattering volume and radiation beam by an elementary scattering volume and an elementary radiation beam are considered. A formula for calculating the nephelometer base is proposed. It is shown that requirements to an irradiation source of a polarizing nephelometer, i.e. mono-chromaticity and high radiation intensity and directivity in a wide spectral range can be satisfied by a set of high brightness LEDs with a radiating (self-luminous) small size body. A 5-wavelength monochromatic irradiation source, with an emission flux of (0.15–0.6) W required for a polarization nephelometer, is described. The design of small-sized polarizing phase control units is shown. An electronic circuit of a radiator control unit based on an AVR-Atmega 8-bit microcontroller with feedback and drive control realized by means of an incremental angular motion sensor and a software PID controller is presented. Precise and smooth motion of the radiator is ensured by standard servo-driven numerical control mathematics and the use of precision gears. The system allows both autonomous adjustment of the radiator’s reference positions and adjustment by means of commands from a personal computer. Both the computer and microcontroller programs were developed with the use of free software, making it possible to transfer the programs to Windows‑7(10), Linux and embedded Linux operating systems. Communication between the radiator’s position control system and the personal computer is realised by means of a standard noise immune USB-RS485 interface.
19

Pastukhov, S. S., and K. V. Stelmashenko. "New Approaches to Pricing Management of Transport Services." World of Transport and Transportation 19, no. 6 (July 23, 2022): 48–60. http://dx.doi.org/10.30932/1992-3252-2021-19-6-7.

Abstract:
Development of new approaches to the formation of analytics mechanisms for the purpose of pricing management of services is an important aspect of increasing the efficiency of transport management processes. Research aimed at improving the tools for determining the optimal parameters of the ratio of quality and price of service for the formation of a competitive and efficient tariff policy continues to remain relevant and in demand in modern market conditions. The objective of the study presented in the article is to analyse and evaluate the prospects for implementing improvements to the apparatus for assessing the price elasticity of demand for railway passenger transport services, such as the transition to the use of customer behaviour modelling functions that are non-linear in their parameters, as well as the introduction of the most effective algorithms from the set of modern global mathematical optimisation tools. The research conclusions are based on the use of system analysis mechanisms, methods of economic and mathematical modelling and optimisation, as well as non-parametric statistics tools. The results, based on an array of data on the demand of passengers of branded trains, include a comparative assessment of the quality of modelling the price elasticity of demand using 15 functions that are nonlinear in their parameters; the most promising tools for searching for the unknown parameters of non-smooth nonlinear functions for modelling the behaviour of railway customers are identified based on a three-stage procedure for comparative analysis of the performance of more than 60 optimisation algorithms (including the calculation of minima and medians for the sums of squares of modelling errors, bootstrap analysis, Kruskal–Wallis and Mann–Whitney tests, as well as the calculation of a metric specially developed by the authors for assessing the degree of superiority of one algorithm over another within the framework of non-parametric analysis). The findings appear applicable to other modes of transport in solving similar problems of developing an effective toolkit for managing the prices of transport services.
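
To make the modelling step concrete, here is a toy sketch (not from the article, whose 15 candidate functions and 60+ optimisers are far richer): fitting a single constant-elasticity demand function, non-linear in its parameters, to made-up fare/demand observations with non-linear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def demand(p, a, b):
    """Constant-elasticity demand curve q = a * p**(-b); the elasticity is -b."""
    return a * p**(-b)

prices  = np.array([10.0, 12.0, 15.0, 18.0, 22.0, 27.0])        # hypothetical fares
tickets = np.array([980.0, 870.0, 760.0, 640.0, 520.0, 430.0])  # hypothetical demand

(a_hat, b_hat), _ = curve_fit(demand, prices, tickets, p0=(1000.0, 1.0))
print(f"fitted a = {a_hat:.1f}, price elasticity ~ -{b_hat:.2f}")
```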
20

Zhu, Yingjie, Yongfa Chen, Qiuling Hua, Jie Wang, Yinghui Guo, Zhijuan Li, Jiageng Ma, and Qi Wei. "A Hybrid Model for Carbon Price Forecasting Based on Improved Feature Extraction and Non-Linear Integration." Mathematics 12, no. 10 (May 7, 2024): 1428. http://dx.doi.org/10.3390/math12101428.

Abstract:
Accurately predicting the price of carbon is an effective way of ensuring the stability of the carbon trading market and reducing carbon emissions. Aiming at the non-smooth and non-linear characteristics of carbon price, this paper proposes a novel hybrid prediction model based on improved feature extraction and non-linear integration, which is built on complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), fuzzy entropy (FuzzyEn), improved random forest using particle swarm optimisation (PSORF), extreme learning machine (ELM), long short-term memory (LSTM), non-linear integration based on multiple linear regression (MLR) and random forest (MLRRF), and error correction with the autoregressive integrated moving average model (ARIMA), named CEEMDAN-FuzzyEn-PSORF-ELM-LSTM-MLRRF-ARIMA. Firstly, CEEMDAN is combined with FuzzyEn in the feature selection process to improve extraction efficiency and reliability. Secondly, at the critical prediction stage, PSORF, ELM, and LSTM are selected to predict high, medium, and low complexity sequences, respectively. Thirdly, the reconstructed sequences are assembled by applying MLRRF, which can effectively improve the prediction accuracy and generalisation ability. Finally, error correction is conducted using ARIMA to obtain the final forecasting results, and the Diebold–Mariano test (DM test) is introduced for a comprehensive evaluation of the models. With respect to carbon prices in the pilot regions of Shenzhen and Hubei, the results indicate that the proposed model has higher prediction accuracy and robustness. The main contributions of this paper are the improved feature extraction and the innovative combination of multiple linear regression and random forests into a non-linear integrated framework for carbon price forecasting. However, further optimisation is still a work in progress.
21

Zhao, Zezheng, Chunqiu Xia, Lian Chi, Xiaomin Chang, Wei Li, Ting Yang, and Albert Y. Zomaya. "Short-Term Load Forecasting Based on the Transformer Model." Information 12, no. 12 (December 10, 2021): 516. http://dx.doi.org/10.3390/info12120516.

Abstract:
From the perspective of energy providers, accurate short-term load forecasting plays a significant role in the energy generation plan, efficient energy distribution process and electricity price strategy optimisation. However, it is hard to achieve a satisfactory result because the historical data is irregular, non-smooth, non-linear and noisy. To handle these challenges, in this work, we introduce a novel model based on the Transformer network to provide an accurate day-ahead load forecasting service. Our model contains a similar day selection approach involving the LightGBM and k-means algorithms. Compared to the traditional RNN-based model, our proposed model can avoid falling into a local minimum and performs better in the global search. To evaluate the performance of our proposed model, we set up a series of simulation experiments based on energy consumption data in Australia. Our model achieves an average MAPE (mean absolute percentage error) of 1.13, compared with 4.18 for the RNN and 1.93 for the LSTM.
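
For readers unfamiliar with the reported metric, a quick worked example of MAPE on made-up actual versus forecast load values (the figures are illustrative, not the paper's data).

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

actual   = [820.0, 790.0, 860.0, 905.0]   # hypothetical loads (MW)
forecast = [810.0, 802.0, 851.0, 918.0]
print(f"MAPE = {mape(actual, forecast):.2f}%")
```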
22

Bureika, Gintautas, and Rimantas Subačius. "MATHEMATICAL MODEL OF DYNAMIC INTERACTION BETWEEN WHEEL-SET AND RAIL TRACK." TRANSPORT 17, no. 2 (April 30, 2002): 46–51. http://dx.doi.org/10.3846/16483480.2002.10414010.

Abstract:
The main goal of this paper is to show how the effects on maximum bending tensions at different locations in the track, caused by simultaneous changes of the various parameters, can be estimated in a rational manner. The dynamics of vertical interaction between a moving rigid wheel and a flexible railway track is investigated. A round and smooth wheel tread and an initially straight and non-corrugated rail surface are assumed in the present optimisation study. An asymmetric linear three-dimensional beam structure model of a finite length of the track is suggested, including rail, pads, sleepers and ballast with spatially non-proportional damping. Transient bending tensions in sleepers and rail are calculated. The influence of eight selected track parameters on the dynamic behaviour of the track is investigated. A two-level fractional factorial design method is used in the search for a combination of numerical levels of these parameters that minimises the maximum bending tensions. Finally, the main conclusions are given.
23

Zhao, Dong, and Hao Guo. "A Trajectory Planning Method for Polishing Optical Elements Based on a Non-Uniform Rational B-Spline Curve." Applied Sciences 8, no. 8 (August 12, 2018): 1355. http://dx.doi.org/10.3390/app8081355.

Abstract:
Optical polishing can accurately correct the surface error through controlling the dwell time of the polishing tool on the element surface. Thus, the precision of the trajectory and the dwell time (the runtime of the trajectory) are important factors affecting the polishing quality. This study introduces a systematic interpolation method for optical polishing using a non-uniform rational B-spline (NURBS). A numerical method for solving for all the control points of the NURBS was proposed with the help of successive over-relaxation (SOR) iterative theory, to overcome the problem of large computation. Then, an optimisation algorithm was applied to smooth the NURBS by taking the shear jerk as the evaluation index. Finally, a trajectory interpolation scheme was investigated for guaranteeing the precision of the trajectory runtime. The experiments on a prototype showed that, compared to the linear interpolation method, there was an order-of-magnitude improvement in interpolation and runtime errors. Correspondingly, the convergence rate of the surface error of elements improved from 37.59% to 44.44%.
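
The control-point equations mentioned above are solved with successive over-relaxation (SOR); as a generic illustration of what that iteration looks like (on a made-up diagonally dominant system, not the paper's NURBS equations):

```python
import numpy as np

def sor(A, b, omega=1.25, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b, starting from x = 0."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
print(sor(A, b))   # should agree with np.linalg.solve(A, b)
```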
24

Ross, Snizhana, Arttu Arjas, Ilkka I. Virtanen, Mikko J. Sillanpää, Lassi Roininen, and Andreas Hauptmann. "Hierarchical deconvolution for incoherent scatter radar data." Atmospheric Measurement Techniques 15, no. 12 (June 28, 2022): 3843–57. http://dx.doi.org/10.5194/amt-15-3843-2022.

Abstract:
Abstract. We propose a novel method for deconvolving incoherent scatter radar data to recover accurate reconstructions of backscattered powers. The problem is modelled as a hierarchical noise-perturbed deconvolution problem, where the lower hierarchy consists of an adaptive length-scale function that allows for a non-stationary prior and as such enables adaptive recovery of smooth and narrow layers in the profiles. The estimation is done in a Bayesian statistical inversion framework as a two-step procedure, where hyperparameters are first estimated by optimisation and followed by an analytical closed-form solution of the deconvolved signal. The proposed optimisation-based method is compared to a fully probabilistic approach using Markov chain Monte Carlo techniques enabling additional uncertainty quantification. In this paper we examine the potential of the hierarchical deconvolution approach using two different prior models for the length-scale function. We apply the developed methodology to compute the backscattered powers of measured polar mesospheric winter echoes, as well as summer echoes, from the EISCAT VHF radar in Tromsø, Norway. Computational accuracy and performance are tested using a simulated signal corresponding to a typical background ionosphere and a sporadic E layer with known ground truth. The results suggest that the proposed hierarchical deconvolution approach can recover accurate and clean reconstructions of profiles, and the potential to be successfully applied to similar problems.
25

Yirijor, John, and Nana Asabere Siaw-Mensah. "Design and Optimisation of Horizontal Axis Wind Turbine Blades Using Biomimicry of Whale Tubercles." Journal of Engineering Research and Reports 25, no. 5 (July 5, 2023): 100–112. http://dx.doi.org/10.9734/jerr/2023/v25i5915.

Abstract:
Wind speed is the major factor in generating power in a wind turbine. However, due to the non-optimum and redundant design of wind turbine blades, not nearly enough wind is captured for utilization. In the present study, modifications were made to the leading edge of the HAWT blade using tubercles, showing their effects on aerodynamic performance. From this research, the following results were found concerning the performance of HAWTs with leading-edge tubercles: blades with tubercles on the leading edge have superior performance in the post-stall regime by 27%; tubercles with a smaller amplitude and lower wavelength produce higher lift and lower drag in low wind speed conditions; and tubercle blades have a stable and smooth performance in varying wind speed conditions, producing higher torque and power at low wind speed. Using a small wind turbine model, SolidWorks Motion Analysis Simulation was used for dynamic modeling to evaluate and determine the force and torque of the mechanical structure. These results were compared and examined against standard wind turbine blades, showing an improvement of 30% in efficiency.
26

Thomson, R. J. "Non-Parametric Likelihood Enhancements to Parametric Graduations." British Actuarial Journal 5, no. 1 (April 1, 1999): 197–236. http://dx.doi.org/10.1017/s1357321700000428.

Abstract:
Parametric graduation may fail to achieve satisfactory results without overparameterisation. Whittaker-Henderson graduation tends to constrain the graduated values towards a low-order polynomial. Non-parametric methods do not generally make direct use of true likelihood functions. This paper suggests a method of enhancing the likelihood of a parametric graduation by means of non-parametric methods, thus reducing the disadvantages of both methods. The parametric graduation is taken to be ideally smooth by definition and is adjusted by using constrained maximum likelihood estimation to obtain better fidelity to the experience. The constraint imposes a minimum sacrifice of smoothness, in terms of a quantitative smoothness criterion, from the initial ideal. The method is not entirely objective in that, in some cases, professional judgement is required in order to assess the degree of smoothness that can be imposed. In other cases the method provides an objective optimum. In either case, by quantifying the degrees of departure from perfect fidelity and from ideal smoothness, the suggested method provides useful and theoretically sound criteria for the purposes of the optimisation process. In particular, by inverting the parametric graduation formula for the purposes of defining the smoothness criterion, the method ensures that the smoothness criterion is consistent over the whole age range, thus resolving the main objection to non-parametric graduation. The method is applied to the 1979-82 experience for life office pensioners in the United Kingdom with positive results.
27

Penney, R. W. "Collision avoidance within flight dynamics constraints for UAV applications." Aeronautical Journal 109, no. 1094 (April 2005): 193–99. http://dx.doi.org/10.1017/s0001924000000695.

Abstract:
Abstract Avoiding collisions with other aircraft is an absolutely fundamental capability for semi-autonomous UAVs. However, an aircraft avoiding moving obstacles requires an evasive tactic that is simultaneously very quick to compute, compatible with the platform’s flight dynamics, and deals with the subtle spatio-temporal features of the threat. We will give an overview of a novel prototype method of rapidly generating smooth flight-paths constrained to avoid moving obstacles, using an efficient trajectory-optimisation technique. Obstacles are described in terms of simple geometrical shapes, such as ellipsoids, whose centres and shapes can vary with time. The technique generates a spatio-temporal trajectory which offers a high likelihood of avoiding the volume in space-time excluded by the predicted motion of each of the known obstacles. Such a flight-path could then be passed to the aircraft’s flight-control systems to negotiate the threat posed by the obstacles. Results from a demonstration implementation of the collision-avoidance technique will be discussed, including non-trivial scenarios handled well within 100ms on a 300MHz processor.
28

Engwirda, Darren. "JIGSAW-GEO (1.0): locally orthogonal staggered unstructured grid generation for general circulation modelling on the sphere." Geoscientific Model Development 10, no. 6 (June 6, 2017): 2117–40. http://dx.doi.org/10.5194/gmd-10-2117-2017.

Abstract:
Abstract. An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi–Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.
29

Benreguig, Pierre, James Kelly, Vikram Pakrashi, and Jimmy Murphy. "Wave-to-Wire Model Development and Validation for Two OWC Type Wave Energy Converters." Energies 12, no. 20 (October 18, 2019): 3977. http://dx.doi.org/10.3390/en12203977.

Abstract:
The Tupperwave device is a closed-circuit oscillating water column (OWC) wave energy converter that uses non-return valves and two large fixed-volume accumulator chambers to create a smooth unidirectional air flow, harnessed by a unidirectional turbine. In this paper, the relevance of the Tupperwave concept against the conventional OWC concept, that uses a self-rectifying turbine, is investigated. For this purpose, wave-to-wire numerical models of the Tupperwave device and a corresponding conventional OWC device are developed and validated against experimental tests. Both devices have the same floating spar buoy structure and a similar turbine technology. The models include wave-structure hydrodynamic interaction, air turbines and generators, along with their control laws in order to encompass all power conversion stages from wave to electrical power. Hardware-in-the-loop is used to physically emulate the last power conversion stage from mechanic to electrical power and hence validate the control law and the generator numerical model. The dimensioning methodology for turbines and generators for power optimisation is explained. Eventually, the validated wave-to-wire numerical models of the conventional OWC and the Tupperwave device are used to assess and compare the performances of these two OWC type wave energy device concepts in the same wave climate. The benefits of pneumatic power smoothing by the Tupperwave device are discussed and the required efficiency of the non-return valves is investigated.
30

Skhosana, Sphiwe B., Salomon M. Millard, and Frans H. J. Kanfer. "A Novel EM-Type Algorithm to Estimate Semi-Parametric Mixtures of Partially Linear Models." Mathematics 11, no. 5 (February 22, 2023): 1087. http://dx.doi.org/10.3390/math11051087.

Abstract:
Semi- and non-parametric mixture of normal regression models are a flexible class of mixture of regression models. These models assume that the component mixing proportions, regression functions and/or variances are non-parametric functions of the covariates. Among this class of models, the semi-parametric mixture of partially linear models (SPMPLMs) combine the desirable interpretability of a parametric model and the flexibility of a non-parametric model. However, local-likelihood estimation of the non-parametric term poses a computational challenge. Traditional EM optimisation of the local-likelihood functions is not appropriate due to the label-switching problem. Separately applying the EM algorithm on each local-likelihood function will likely result in non-smooth function estimates. This is because the local responsibilities calculated at the E-step of each local EM are not guaranteed to be aligned. To prevent this, the EM algorithm must be modified so that the same (global) responsibilities are used at each local M-step. In this paper, we propose a one-step backfitting EM-type algorithm to estimate the SPMPLMs and effectively address the label-switching problem. The proposed algorithm estimates the non-parametric term using each set of local responsibilities in turn and then incorporates a smoothing step to obtain the smoothest estimate. In addition, to reduce the computational burden imposed by the use of the partial-residuals estimator of the parametric term, we propose a plug-in estimator. The performance and practical usefulness of the proposed methods was tested using a simulated dataset and two real datasets, respectively. Our finite sample analysis revealed that the proposed methods are effective at solving the label-switching problem and producing reasonable and interpretable results in a reasonable amount of time.
31

George, Nishkal, and Boppana V. Chowdary. "Mitigation of Design Issues in Development of Anatomical Models Using Rapid Prototyping." West Indian Journal of Engineering 45, no. 1 (July 2022): 13–21. http://dx.doi.org/10.47412/upfp4130.

Abstract:
Literature indicates rapid prototyping (RP) application has become more widespread in design and development of human anatomy models. Practitioners are facing challenges in deployment of RP tools for development of cost-effective medical models, because there are no proven decision support systems in the selection of parameters such as speed, accuracy, materials, and customisation of commercial software. This study aims at alleviating some of these issues by exploring the use of a Genetic Algorithm (GA) approach combined with computer-aided design (CAD) and fused deposition modeling (FDM) techniques. Experiments were conducted using response surface methodology (RSM) to facilitate the optimisation process with build time and model material volume as responses. The validation of the study has been performed with a patella model and the results verified the effectiveness of the proposed RSM-GA approach in the design and development of the anatomical model. The results showed a 27% savings on model material compared to a non-refined model and was deemed satisfactory for practical use as there was a reduction in irregularities from CT data. The study also reveals that the parameter hollow has the largest effect on the responses, followed by the smooth parameter and then the wrap parameter.
32

İvedi, İsmail, Bahadır Güneşoğlu, Sinem Yaprak Karavana, Gülşah Ekin Kartal, Gökhan Erkan, and Ayşe Merih Sarıışık. "Using Spraying as an Alternative Method for Transferring Capsules Containing Shea Butter to Denim and Non-Denim Fabrics : Preparation of microcapsules for delivery of active ingredients." Johnson Matthey Technology Review 66, no. 1 (January 11, 2022): 90–102. http://dx.doi.org/10.1595/205651322x16376750190432.

Abstract:
The aim of this study was to prepare microcapsules and transfer them to denim and non-denim trousers using different application methods. For this purpose, shea butter as active agent was encapsulated in an ethyl cellulose shell using the spray dryer method, and capsule optimisation was studied. A morphological assessment showed that the capsules had a smooth surface and were spherical in shape. The homogenous size distribution of the capsules was supported by laser diffraction analysis. The capsules showed a narrow size distribution, and the mean particle size of optimum formulations of shea butter was 390 nm. Denim fabrics were treated with shea butter capsules using the methods of exhaustion and spraying in order to compare these application methods. The presence of capsules on the fabrics was tested after five wash cycles. The comparison of application methods found similar preferred characteristics for both the exhaustion and spraying methods. However, the spraying method was found to be more sustainable, because it allows working with low liquor ratios in less water, with lower chemical consumption and less waste than the exhaustion method, which requires working with a high liquor ratio. This study showed that the spraying method can be used as an alternative to other application methods in the market for reducing energy consumption, and shea butter capsules can provide moisturising properties to the fabrics.
33

Agarwal, Shweta, Rayasa S. Ramachandra Murthy, Sasidharan Leelakumari Harikumar, and Rajeev Garg. "Quality by Design Approach for Development and Characterisation of Solid Lipid Nanoparticles of Quetiapine Fumarate." Current Computer-Aided Drug Design 16, no. 1 (January 6, 2020): 73–91. http://dx.doi.org/10.2174/1573409915666190722122827.

Abstract:
Background: Quetiapine fumarate, a 2nd generation anti-psychotic drug, has an oral bioavailability of 9% because of hepatic first-pass metabolism. Reports suggest that co-administration of drugs with lipids affects their absorption pathways, enhances lymphatic transport thus bypassing hepatic first-pass metabolism resulting in enhanced bioavailability. Objective: The present work aimed at developing and characterising potentially lymphatically absorbable Solid Lipid Nanoparticles (SLN) of quetiapine fumarate by a Quality by Design approach. Method: Hot emulsification followed by ultrasonication was used as the method of preparation. Precirol ATO5, Phospholipon 90G and Poloxamer 188 were used as the lipid, stabilizer and surfactant, respectively. A 3² Central Composite design optimised the two independent variables, lipid concentration and stabilizer concentration, and assessed their effect on percent Entrapment Efficiency (%EE: Y1). The lyophilized SLNs were studied for stability at 5 ± 3 °C and 25 ± 2 °C/60 ± 5% RH for 3 months. Results: The optimised formula derived for the SLN had 270 mg Precirol ATO5 and 107 mg of Phospholipon 90G, giving a %EE of 76.53%. Mean particle size was 159.8 nm with a polydispersity index of 0.273 and a zeta potential of −6.6 mV. In-vitro drug release followed Korsmeyer-Peppas kinetics (R² = 0.917) with release exponent n = 0.722, indicating non-Fickian diffusion. Transmission electron microscopy images exhibited particles to be spherical and smooth. Fourier-transform infrared spectroscopy, differential scanning calorimetry and X-ray diffraction studies ascertained drug-excipient compatibility. Stability studies suggested 5 °C as the appropriate temperature for storage, preserving the important characteristics within acceptable limits. Conclusion: Development and optimisation by Quality by Design were justified, as they yielded SLN having acceptable characteristics and potential application for intestinal lymphatic transport.
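
A small worked sketch of the kinetics fit reported above (with made-up release data, not the paper's): the Korsmeyer-Peppas model Mt/M∞ = k·tⁿ fitted by a log-log linear regression to obtain the release exponent n.

```python
import numpy as np

# Hypothetical cumulative release fractions (kept below ~0.6, as usual for this model).
t = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])            # hours
frac = np.array([0.09, 0.15, 0.25, 0.41, 0.52, 0.60])   # Mt / Minf

# Linear fit of log(frac) = n*log(t) + log(k).
slope, intercept = np.polyfit(np.log(t), np.log(frac), 1)
n, k = slope, np.exp(intercept)
print(f"release exponent n = {n:.3f}, rate constant k = {k:.3f}")
# For spherical particles, n between roughly 0.43 and 0.85 is usually read as
# non-Fickian (anomalous) transport.
```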
34

Fercoq, Olivier. "A generic coordinate descent solver for non-smooth convex optimisation." Optimization Methods and Software, August 27, 2019, 1–21. http://dx.doi.org/10.1080/10556788.2019.1658758.

35

Fornasier, Massimo, Peter Richtárik, Konstantin Riedl, and Lukang Sun. "Consensus-based optimisation with truncated noise." European Journal of Applied Mathematics, April 5, 2024, 1–24. http://dx.doi.org/10.1017/s095679252400007x.

Abstract:
Consensus-based optimisation (CBO) is a versatile multi-particle metaheuristic optimisation method suitable for performing non-convex and non-smooth global optimisations in high dimensions. It has proven effective in various applications while at the same time being amenable to a theoretical convergence analysis. In this paper, we explore a variant of CBO, which incorporates truncated noise in order to enhance the well-behavedness of the statistics of the law of the dynamics. By introducing this additional truncation in the noise term of the CBO dynamics, we achieve that, in contrast to the original version, higher moments of the law of the particle system can be effectively bounded. As a result, our proposed variant exhibits enhanced convergence performance, allowing in particular for wider flexibility in choosing the noise parameter of the method as we confirm experimentally. By analysing the time evolution of the Wasserstein-2 distance between the empirical measure of the interacting particle system and the global minimiser of the objective function, we rigorously prove convergence in expectation of the proposed CBO variant requiring only minimal assumptions on the objective function and on the initialisation. Numerical evidence demonstrates the benefit of truncating the noise in CBO.
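
To give a flavour of the kind of dynamics being analysed, here is a minimal editorial CBO sketch in which the diffusion amplitude is simply capped at a constant; this is an assumption-laden illustration of the general scheme, not the authors' exact truncated-noise variant or parameter choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):  # toy non-convex objective (Rastrigin-like), d = 2
    return np.sum(x**2 - 10*np.cos(2*np.pi*x) + 10, axis=-1)

N, d = 200, 2
X = rng.uniform(-5, 5, size=(N, d))          # particle ensemble
alpha, lam, sigma, dt, cap = 30.0, 1.0, 0.8, 0.05, 1.0  # illustrative parameters

for _ in range(400):
    w = np.exp(-alpha * (f(X) - f(X).min()))         # Gibbs weights (shifted for stability)
    v = (w[:, None] * X).sum(0) / w.sum()            # consensus point
    diff = X - v
    amp = np.minimum(np.linalg.norm(diff, axis=1, keepdims=True), cap)  # crude truncation
    X = X - lam * dt * diff + sigma * np.sqrt(dt) * amp * rng.standard_normal((N, d))

print("consensus point ~", v, " f(v) =", f(v))
```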
36

Zhong, Tianyi, and David Angeli. "A Cutting Plane-Based Distributed Algorithm for Non-Smooth Optimisation With Coupling Constraints." IEEE Control Systems Letters, 2024, 1. http://dx.doi.org/10.1109/lcsys.2024.3408754.

37

Staudigl, Mathias, and Paulin Jacquot. "Random block-coordinate methods for inconsistent convex optimisation problems." Fixed Point Theory and Algorithms for Sciences and Engineering 2023, no. 1 (November 6, 2023). http://dx.doi.org/10.1186/s13663-023-00751-0.

Abstract:
We develop a novel randomised block-coordinate primal-dual algorithm for a class of non-smooth ill-posed convex programs. Lying midway between the celebrated Chambolle–Pock primal-dual algorithm and Tseng's accelerated proximal gradient method, we establish global convergence of the last iterate as well as optimal $O(1/k)$ and $O(1/k^{2})$ complexity rates in the convex and strongly convex case, respectively, k being the iteration count. Motivated by the increased complexity in the control of distribution-level electric-power systems, we test the performance of our method on a second-order cone relaxation of an AC-OPF problem. Distributed control is achieved via the distributed locational marginal prices (DLMPs), which are obtained as dual variables in our optimisation framework.
38

Mistri, S. R., C. S. Yerramalli, R. S. Pant, and A. Guha. "Methodology for shape prediction and conversion of a conventional aerofoil to an inflatable baffled aerofoil." Aeronautical Journal, November 3, 2023, 1–34. http://dx.doi.org/10.1017/aer.2023.98.

Abstract:
Abstract Inflatable wings for UAVs are useful where storage space is a severe constraint. Literature in the field of inflatable wings often assumes an inflated aerofoil shape for various analyses. However, the flexible inflatable aerofoil fabric might deform to another equilibrium shape upon inflation. Hence accurate shape prediction of the inflated aerofoil is vital. Further, no standardised nomenclature or a process to convert a smooth aerofoil into its corresponding inflatable aerofoil counterpart is available. This paper analytically predicts the equilibrium shape of any inflatable aerofoil and validates the analytical prediction using non-linear finite element methods. Further, a scheme for the generation of two types of inflatable aerofoils is presented. Parameters such as the number and position of compartments and aerofoil length ratio (ALR) are identified as necessary to define the aerofoil’s shape fully. A process to minimise the deviation of the inflatable aerofoil from its original smooth aerofoil using particle swarm optimisation (PSO) is discussed. Research presented in this paper can help in performing various analyses on the actual equilibrium shape of the aerofoil.
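
As a generic illustration of the optimiser named above (not the authors' shape-deviation objective or settings), a minimal particle swarm optimisation loop on a placeholder objective:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                  # placeholder objective: sphere function
    return np.sum(x**2, axis=-1)

n_particles, dim = 30, 4
x = rng.uniform(-1, 1, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_val)]

w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
for _ in range(200):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)   # velocity update
    x = x + v
    val = objective(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best value found:", pbest_val.min())
```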
39

Antczak, Tadeusz, and Kalpana Shukla. "Optimality and duality results for non-smooth vector optimisation problems with K-V-type I functions via local cone approximations." International Journal of Mathematics in Operational Research 1, no. 1 (2023). http://dx.doi.org/10.1504/ijmor.2023.10064590.

40

Holford, Jacob J., Myoungkyu Lee, and Yongyun Hwang. "Optimal white-noise stochastic forcing for linear models of turbulent channel flow." Journal of Fluid Mechanics 961 (April 24, 2023). http://dx.doi.org/10.1017/jfm.2023.234.

Abstract:
In the present study an optimisation problem is formulated to determine the forcing of an eddy-viscosity-based linearised Navier–Stokes model in channel flow at $Re_\tau \approx 5200$ ( $Re_\tau$ is the friction Reynolds number), where the forcing is white-in-time and spatially decorrelated. The objective functional is prescribed such that the forcing drives a response to best match a set of velocity spectra from direct numerical simulation (DNS), as well as remaining sufficiently smooth. Strong quantitative agreement is obtained between the velocity spectra from the linear model with optimal forcing and from DNS, but only qualitative agreement between the Reynolds shear stress co-spectra from the model and DNS. The forcing spectra exhibit a level of self-similarity, associated with the primary peak in the velocity spectra, but they also reveal a non-negligible amount of energy spent in phenomenologically mimicking the non-self-similar part of the velocity spectra associated with energy cascade. By exploiting linearity, the effect of the individual forcing components is assessed and the contributions from the Orr mechanism and the lift-up effect are also identified. Finally, the effect of the strength of the eddy viscosity on the optimisation performance is investigated. The inclusion of the eddy viscosity diffusion operator is shown to be essential in modelling of the near-wall features, while still allowing the forcing of the self-similar primary peak. In particular, reducing the strength of the eddy viscosity results in a considerable increase in the near-wall forcing of wall-parallel components.
41

Betcke, Marta M., and Carola-Bibiane Schönlieb. "Mathematics of biomedical imaging today - a perspective." Progress in Biomedical Engineering, May 26, 2023. http://dx.doi.org/10.1088/2516-1091/acd973.

Abstract:
Biomedical imaging is a fascinating, rich and dynamic research area, which has huge importance in biomedical research and clinical practice alike. The key technology behind the processing, and automated analysis and quantification of imaging data is mathematics. Starting with the optimisation of the image acquisition and the reconstruction of an image from indirect tomographic measurement data, all the way to the automated segmentation of tumours in medical images and the design of optimal treatment plans based on image biomarkers, mathematics appears in all of these in different flavours. Non-smooth optimisation in the context of sparsity-promoting image priors, partial differential equations for image registration and motion estimation, and deep neural networks for image segmentation, to name just a few. In this article, we present and review mathematical topics that arise within the whole biomedical imaging pipeline, from tomographic measurements to clinical support tools, and highlight some modern topics and open problems. The article is addressed to both biomedical researchers who want to get a taste of where mathematics arises in biomedical imaging as well as mathematicians who are interested in what mathematical challenges biomedical imaging research entails.
42

Latz, Jonas. "Gradient flows and randomised thresholding: sparse inversion and classification." Inverse Problems, October 19, 2022. http://dx.doi.org/10.1088/1361-6420/ac9b84.

Abstract:
Sparse inversion and classification problems are ubiquitous in modern data science and imaging. They are often formulated as non-smooth minimisation problems. In sparse inversion, we minimise, e.g., the sum of a data fidelity term and an L1/LASSO regulariser. In classification, we consider, e.g., the sum of a data fidelity term and a non-smooth Ginzburg–Landau energy. Standard (sub)gradient descent methods have been shown to be inefficient when approaching such problems. Splitting techniques are much more useful: here, the target function is partitioned into a sum of two subtarget functions, each of which can be efficiently optimised. Splitting proceeds by performing optimisation steps alternately with respect to each of the two subtarget functions. In this work, we study splitting from a stochastic continuous-time perspective. Indeed, we define a differential inclusion that follows one of the two subtarget functions' negative subdifferential at each point in time. The choice of the subtarget function is controlled by a binary continuous-time Markov process. The resulting dynamical system is a stochastic approximation of the underlying subgradient flow. We investigate this stochastic approximation for an L1-regularised sparse inversion flow and for a discrete Allen-Cahn equation minimising a Ginzburg–Landau energy. In both cases, we study the longtime behaviour of the stochastic dynamical system and its ability to approximate the underlying subgradient flow at any accuracy. We illustrate our theoretical findings in a simple sparse estimation problem and also in low- and high-dimensional classification problems.
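
A standard textbook instance of the splitting idea described above, given as an editorial illustration rather than the paper's stochastic continuous-time scheme: proximal-gradient (ISTA) iterations for L1-regularised least squares, alternating a gradient step on the smooth fidelity term with the soft-thresholding proximal map of the L1 term.

```python
import numpy as np

# min_x  0.5*||A x - b||^2 + lam*||x||_1  on a synthetic sparse-recovery instance.
rng = np.random.default_rng(0)
m, n, k = 60, 200, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.1
L = np.linalg.norm(A, 2)**2        # Lipschitz constant of the fidelity gradient
step = 1.0 / L

x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b)                                   # smooth part: gradient step
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # non-smooth part: soft-threshold

print("non-zeros recovered:", np.sum(np.abs(x) > 1e-3), "of", k)
```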
43

Zhang, Hengmin, Jian Yang, Bob Zhang, Yang Tang, Wenli Du, and Bihan Wen. "Enhancing generalized spectral clustering with embedding Laplacian graph regularization." CAAI Transactions on Intelligence Technology, March 18, 2024. http://dx.doi.org/10.1049/cit2.12308.

Abstract:
An enhanced generalised spectral clustering framework that addresses the limitations of existing methods by incorporating the Laplacian graph and group effect into a regularisation term is presented. By doing so, the framework significantly enhances discrimination power and proves highly effective in handling noisy data. Its versatility enables its application to various clustering problems, making it a valuable contribution to unsupervised learning tasks. To optimise the proposed model, the authors have developed an efficient algorithm that utilises the standard Sylvester equation to compute the coefficient matrix. By setting the derivatives to zero, computational efficiency is maintained without compromising accuracy. Additionally, the authors have introduced smoothing strategies to make the non-convex and non-smooth terms differentiable. This enables the use of an alternative iteration re-weighted procedure (AIwRP), which distinguishes itself from other first-order optimisation algorithms by introducing auxiliary variables. The authors provide a provable convergence analysis of AIwRP based on the iteration procedures of unconstrained problems to support its effectiveness. Extensive numerical tests have been conducted on synthetic and benchmark databases to validate the superiority of their approaches. The results demonstrate improved clustering performance and computational efficiency compared to several existing spectral clustering methods, further reinforcing the advantages of their proposed framework. The source code is available at https://github.com/ZhangHengMin/LGR_LSRLRR.
44

Leek, Francesca, Cameron Anderson, Andrew P. Robinson, Robert M. Moss, Joanna C. Porter, Helen S. Garthwaite, Ashley M. Groves, Brian F. Hutton, and Kris Thielemans. "Optimisation of the air fraction correction for lung PET/CT: addressing resolution mismatch." EJNMMI Physics 10, no. 1 (December 5, 2023). http://dx.doi.org/10.1186/s40658-023-00595-y.

Анотація:
Abstract Background: Increased pulmonary 18F-FDG metabolism in patients with idiopathic pulmonary fibrosis, and other forms of diffuse parenchymal lung disease, can predict measurements of health and lung physiology. To improve PET quantification, voxel-wise air fractions (AF) determined from CT can be used to correct for variable air content in lung PET/CT. However, resolution mismatches between PET and CT can cause artefacts in the AF-corrected image. Methods: Three methodologies for determining the optimal kernel to smooth the CT are compared with noiseless simulations and non-TOF MLEM reconstructions of a patient-realistic digital phantom: (i) the point source insertion-and-subtraction method, h_pts; (ii) AF-correcting with varyingly smoothed CT to achieve the lowest RMSE with respect to the ground truth (GT) AF-corrected volume of interest (VOI), h_AFC; (iii) smoothing the GT image to match the reconstruction within the VOI, h_PVC. The methods were evaluated using both VOI-specific kernels and a single global kernel optimised for the six VOIs combined. Furthermore, h_PVC was implemented on thorax phantom data measured on two clinical PET/CT scanners with various reconstruction protocols. Results: The simulations demonstrated that below 200 iterations (200i), the kernel width was dependent on the iteration number and the VOI position in the lung. The h_pts method estimated a lower, more uniform kernel width in all parts of the lung investigated. However, all three methods resulted in approximately equivalent AF-corrected VOI RMSEs (<10%) at ≥200i. The insensitivity of AF-corrected quantification to kernel width suggests that a single global kernel could be used. For all three methodologies, the computed global kernel resulted in an AF-corrected lung RMSE <10% at ≥200i, while larger lung RMSEs were observed for the VOI-specific kernels. The global kernel approach was then employed with the h_PVC method on measured data. The optimally smoothed GT emission matched the reconstructed image well, both within the VOI and in the lung background. VOI RMSE was <10%, pre-AFC, for all reconstructions investigated. Conclusions: Simulations for non-TOF PET indicated that around 200i were needed to approach image resolution stability in the lung. In addition, at this iteration number, a single global kernel for AFC, determined from several VOIs, performed well over the whole lung. The h_PVC method has the potential to be used to determine the kernel for AFC from scans of phantoms on clinical scanners.
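A hedged sketch of the air-fraction correction itself (not of the kernel-optimisation procedures h_pts, h_AFC or h_PVC): the CT-derived air-fraction map is smoothed with a Gaussian kernel to approximate the PET resolution before it is divided out. All arrays, the assumed resolution and the voxel size are illustrative placeholders.

# Sketch (illustrative data, not the paper's phantoms): air-fraction correction (AFC)
# of a lung PET volume, with the CT-derived air-fraction map smoothed by a Gaussian
# kernel to approximate the PET resolution before it is divided out.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
pet = rng.gamma(2.0, 1.0, size=(32, 32, 32))         # placeholder PET uptake volume
hu = np.full((32, 32, 32), -700.0)                   # placeholder lung CT in Hounsfield units
air_fraction = np.clip(-hu / 1000.0, 0.0, 1.0)       # AF from HU (air = -1000 HU, water/tissue = 0 HU)

fwhm_mm, voxel_mm = 6.0, 2.0                         # assumed PET resolution and voxel size
sigma_vox = fwhm_mm / (2.355 * voxel_mm)             # FWHM -> Gaussian sigma in voxels
af_smoothed = gaussian_filter(air_fraction, sigma=sigma_vox)

tissue_fraction = np.clip(1.0 - af_smoothed, 0.05, 1.0)   # floor avoids division blow-up
pet_afc = pet / tissue_fraction                           # AF-corrected PET
print("mean uptake before/after AFC:", pet.mean(), pet_afc.mean())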
45

Pandey, Varun, Stijn van Dooren, Johannes Ritzmann, Benjamín Pla, and Christopher Onder. "Variable smoothing of optimal diesel engine calibration for improved performance and drivability during transient operation." International Journal of Engine Research, June 1, 2020, 146808742091880. http://dx.doi.org/10.1177/1468087420918801.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The model-based method to define optimal calibration maps for important diesel engine parameters may involve three major steps. First, the domain of engine speed and load in which the engine is operated is identified. Then, a global engine model is created, which can be used in offline simulations to estimate engine performance. Finally, optimal calibration maps are obtained by formulating and solving an optimisation problem whose goal is to minimise fuel consumption while meeting constraints on pollutant emissions. This last step in the calibration process usually involves smoothing of the maps in order to improve drivability. This article presents a method to trade off map smoothness, brake-specific fuel consumption and nitrogen oxide emissions. After calculating the optimal but potentially non-smooth calibration maps, a variation-based smoothing method is employed to obtain different levels of smoothness by adapting a single tuning parameter. The method was experimentally validated on a heavy-duty diesel engine, with the non-road transient cycle used as a case study. The error between the reference and actual engine torque was used as a metric for drivability, and this error was found to decrease with increasing map smoothness. After obtaining this trade-off for various fixed levels of smoothness, a time-varying smoothness calibration was generated and tested. Experimental results showed that, with a time-varying smoothness strategy, nitrogen oxide emissions could be reduced by 4% while achieving the same drivability and fuel consumption as with a fixed smoothing strategy.
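A hedged sketch of the smoothing step this abstract describes: a penalised least-squares smoother applied to a 2-D calibration map, with a single tuning parameter trading fidelity to the optimal map against the squared differences of neighbouring entries. This is a generic Tikhonov-style stand-in, not necessarily the paper's exact variation-based formulation, and the map data are placeholders.

# Sketch: a generic penalised least-squares smoother for a 2-D calibration map Z
# (speed x load grid), minimising ||S - Z||^2 + w * ||grad S||^2 by gradient descent,
# with a single tuning parameter w controlling smoothness. Tikhonov-style stand-in,
# not necessarily the paper's exact variation-based formulation.
import numpy as np

def smooth_map(Z, w, iters=2000):
    S = Z.copy()
    step = 1.5 / (1.0 + 8.0 * w)                      # safe step size for this objective
    for _ in range(iters):
        lap = (np.roll(S, 1, 0) + np.roll(S, -1, 0) +
               np.roll(S, 1, 1) + np.roll(S, -1, 1) - 4.0 * S)   # discrete Laplacian (periodic edges)
        S += step * ((Z - S) + w * lap)               # descent step on the penalised objective
    return S

Z = np.random.default_rng(3).normal(size=(20, 30))    # placeholder non-smooth optimal map
for w in (0.1, 1.0, 10.0):                            # sweep the smoothness parameter
    S = smooth_map(Z, w)
    print(w, float(np.abs(np.diff(S, axis=0)).mean()))  # roughness decreases as w grows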
46

Lu, Zhenguo, Hongbin Wang, Mingyan Wang, and Zhiwen Wang. "Improved dark channel priori single image defogging technique using image segmentation and joint filtering." Science Progress 107, no. 1 (January 2024). http://dx.doi.org/10.1177/00368504231221407.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Foggy images hinder image analysis and measurement because of their low definition and blurred details. Despite numerous studies on dehazing natural images captured in hazy environments, recovery remains unsatisfactory for hazy images containing sky areas. The dark channel prior technique misestimates atmospheric light values and produces halo artefacts when processing haze images with sky areas; an improved dark-channel-prior single-image defogging technique based on image segmentation and joint filtering is therefore proposed. First, an image-segmentation-based method for estimating the atmospheric illumination value is proposed. The probability density distribution function of the hazy grey-scale image is constructed during segmentation; this distribution function, the K-means clustering technique and the atmospheric illumination estimation method are combined to improve the image segmentation and to separate sky and non-sky areas in hazy images. Based on the segmentation threshold, the numbers of pixels in the sky and non-sky areas, together with the normalisation results, are counted to calculate the atmospheric illumination values. Second, to address the halo artefact phenomenon, a method for optimising the image transmittance map using joint filtering is proposed. The transmittance map is optimised by combining fast guided filtering and weighted least-squares filtering to retain edge information while smoothing gradient changes in the interior regions. Finally, gamma correction and automatic level optimisation are used to improve the brightness and contrast of the defogged images. The experimental results show that the proposed technique effectively achieves sky segmentation. Compared to the traditional dark-channel-prior technique, the proposed technique suppresses halo artefacts and improves the recovery of image detail. Compared to other techniques, it exhibits excellent performance in both subjective and objective evaluations.
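A hedged sketch of the classical dark-channel-prior pipeline that this paper refines (dark channel, atmospheric light, transmittance, scene radiance recovery). The sky/non-sky segmentation, joint filtering, gamma correction and level optimisation from the paper are not reproduced; the input image and parameter values are placeholders.

# Sketch: the classical dark-channel-prior defogging steps that the paper refines.
# The sky/non-sky segmentation, joint (guided + weighted-least-squares) filtering,
# gamma correction and level optimisation from the paper are NOT reproduced here.
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    # dark channel: per-pixel minimum over colour channels, then a local minimum filter
    dark = minimum_filter(img.min(axis=2), size=patch)
    # atmospheric light: mean colour of the brightest 0.1% of dark-channel pixels
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # transmittance estimate and scene radiance recovery
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

hazy = np.random.default_rng(4).uniform(0.4, 0.9, size=(64, 64, 3))   # placeholder hazy image in [0, 1]
print("dehazed shape:", dehaze(hazy).shape)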
47

Wallin, Gabriel, Yunxiao Chen, and Irini Moustaki. "DIF Analysis with Unknown Groups and Anchor Items." Psychometrika, February 21, 2024. http://dx.doi.org/10.1007/s11336-024-09948-7.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Abstract Ensuring fairness in instruments like survey questionnaires or educational tests is crucial. One way to address this is by a Differential Item Functioning (DIF) analysis, which examines if different subgroups respond differently to a particular item, controlling for their overall latent construct level. DIF analysis is typically conducted to assess measurement invariance at the item level. Traditional DIF analysis methods require knowing the comparison groups (reference and focal groups) and anchor items (a subset of DIF-free items). Such prior knowledge may not always be available, and psychometric methods have been proposed for DIF analysis when one piece of information is unknown. More specifically, when the comparison groups are unknown while anchor items are known, latent DIF analysis methods have been proposed that estimate the unknown groups by latent classes. When anchor items are unknown while comparison groups are known, methods have also been proposed, typically under a sparsity assumption – the number of DIF items is not too large. However, DIF analysis when both pieces of information are unknown has not received much attention. This paper proposes a general statistical framework under this setting. In the proposed framework, we model the unknown groups by latent classes and introduce item-specific DIF parameters to capture the DIF effects. Assuming the number of DIF items is relatively small, an L1-regularised estimator is proposed to simultaneously identify the latent classes and the DIF items. A computationally efficient Expectation-Maximisation (EM) algorithm is developed to solve the non-smooth optimisation problem for the regularised estimator. The performance of the proposed method is evaluated by simulation studies and an application to item response data from a real-world educational test.
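A hedged sketch of the generic device usually employed to handle an L1 penalty on item-specific DIF parameters inside an EM M-step: a proximal (soft-thresholding) update on a quadratic surrogate, which drives negligible DIF effects exactly to zero. The surrogate, step size, penalty weight and parameter values below are illustrative assumptions, not the authors' full regularised EM algorithm.

# Sketch: soft-thresholding (proximal) update for L1-penalised DIF parameters inside
# an M-step. The quadratic surrogate and all numbers are illustrative; this is the
# generic device, not the authors' full regularised EM algorithm.
import numpy as np

def soft_threshold(beta, threshold):
    # proximal operator of threshold*||.||_1: shrinks towards zero, giving exact zeros
    return np.sign(beta) * np.maximum(np.abs(beta) - threshold, 0.0)

beta_hat = np.array([0.8, -0.05, 0.3, -1.2, 0.02])   # illustrative unpenalised DIF estimates
beta = np.zeros_like(beta_hat)
step, lam = 0.5, 0.15                                 # step size and L1 weight (assumed values)
for _ in range(200):                                  # proximal-gradient inner loop of an M-step
    grad = beta_hat - beta                            # negative gradient of 0.5*||beta - beta_hat||^2
    beta = soft_threshold(beta + step * grad, step * lam)
print(beta)                                           # items with negligible DIF end up exactly at zero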

To the bibliography