Journal articles on the topic 'Certified Numerical Algorithms'

To see the other types of publications on this topic, follow the link: Certified Numerical Algorithms.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 24 journal articles for your research on the topic 'Certified Numerical Algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Bouazizi, Faïçal. "Algebraic certification of numerical algorithms computing Lagrange resolvents." Journal of Algebra and Its Applications 17, no. 01 (January 2018): 1850007. http://dx.doi.org/10.1142/s021949881850007x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Segonne, Charlotte, and Pierre Duthon. "Qualification of the PAVIN Fog and Rain Platform and Its Digital Twin for the Evaluation of a Pedestrian Detector in Fog." Journal of Imaging 9, no. 10 (October 3, 2023): 211. http://dx.doi.org/10.3390/jimaging9100211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Vehicles featuring partially automated driving can now be certified within a guaranteed operational design domain. Verification in all kinds of scenarios, including fog, cannot be carried out in real conditions (because of the risks involved or the low occurrence of such conditions). Simulation tools for adverse weather conditions (e.g., physical, numerical) must therefore be implemented and validated. The aim of this study is to verify what criteria need to be met to obtain sufficient data to test AI-based pedestrian detection algorithms. It presents analyses of both real and numerically simulated data. A novel method for evaluating the test environment, based on a reference detection algorithm, was set up. The following parameters are taken into account in this study: weather conditions, pedestrian variety, the distance of pedestrians to the camera, fog uncertainty, the number of frames, and artificial fog vs. numerically simulated fog. Across all examined elements, the disparity between results derived from real and simulated data is less than 10%. The results obtained provide a basis for validating and improving standards dedicated to the testing and approval of autonomous vehicles.
3

Mykhailov, O. V. "Results of Comparative Evaluation of Algorithms for Calculating Scaling Factors of Difficult-To-Measure Nuclides in CHNPP Wastes at 241Am Example." Nuclear Power and the Environment 21, no. 2 (2021): 61–70. http://dx.doi.org/10.31717/2311-8253.21.2.6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The criteria for radwaste acceptance in force in the Exclusion Zone of SSE "Chornobyl NPP" require that each batch (package) of solid radioactive waste (SRW) sent for burial be certified. For radionuclides that are difficult to measure (DTM) directly in a package with standard control systems, the IAEA recommends using Scaling Factors (SF). During special studies to determine their quantitative values, difficulties arose when analyzing laboratory data on DTM nuclides in SRW, which largely consisted of so-called nondetectable (ND) results reported as <MDA (less than the minimum detectable activity). The aim of this work was to evaluate the known algorithms for SF determination used in world radwaste-management practice in terms of their applicability to laboratory data samples with different proportions of ND (<MDA) results, using the 241Am content of ChNPP historical waste as an example. Three data samples were formed, with ND results making up 18, 42, and 55 percent. Several methods for SF calculation used in the radwaste management systems of countries with a developed nuclear fuel cycle are addressed. Among the selected algorithms, the most powerful method for handling censored ND results is included: maximum likelihood estimation (MLE), which restores the assumed distribution of the data and thereby adjusts the mean contaminant content most accurately, according to the probability of each measurement result, while accounting for ND results censored at a value that is a multiple of the MDA. The accuracy with which the selected algorithms reproduce the statistical indicators of the initial experimental data samples when SFs are applied was investigated on the "problematic" data samples related to the 241Am content of ChNPP historical waste. The studied algorithms are ranked on a quantitative acceptability (reliability) scale for use in the ChNPP radwaste management system for certifying the DTM activity contained in SRW packages. The data obtained allowed conclusions to be drawn on the most acceptable algorithms that can be recommended for SF calculation, depending on the composition of the experimental data collected after laboratory control. The influence of data censoring on the accuracy with which the original spectrum of experimental data is reproduced is estimated for the different algorithms. The validity of the Mean Activity Method recommended by the IAEA for data samples containing a significant proportion of ND results was confirmed.
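A minimal Python sketch of the censored-data MLE idea described above (hypothetical 241Am activity values and an assumed lognormal activity model, not the authors' data or code):

    import numpy as np
    from scipy import stats, optimize

    # Hypothetical 241Am specific activities (Bq/g); "<MDA" entries are non-detects.
    detected = np.array([0.12, 0.35, 0.08, 0.51, 0.22])   # measured values
    mda = np.array([0.05, 0.10, 0.07])                     # detection limits of ND results

    def neg_log_likelihood(params):
        # Detected values contribute the lognormal density; non-detects contribute
        # the probability of lying below their individual MDA (left-censoring).
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                          # keep sigma positive
        ll_det = stats.norm.logpdf(np.log(detected), mu, sigma) - np.log(detected)
        ll_cen = stats.norm.logcdf(np.log(mda), mu, sigma)
        return -(ll_det.sum() + ll_cen.sum())

    res = optimize.minimize(neg_log_likelihood, x0=[np.log(detected).mean(), 0.0])
    mu, sigma = res.x[0], np.exp(res.x[1])
    print("estimated mean activity:", np.exp(mu + 0.5 * sigma**2), "Bq/g")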
4

Giacomini, Matteo, Olivier Pantz, and Karim Trabelsi. "Certified Descent Algorithm for shape optimization driven by fully-computable a posteriori error estimators." ESAIM: Control, Optimisation and Calculus of Variations 23, no. 3 (April 28, 2017): 977–1001. http://dx.doi.org/10.1051/cocv/2016021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this paper we introduce a novel certified shape optimization strategy – named Certified Descent Algorithm (CDA) – to account for the numerical error introduced by the Finite Element approximation of the shape gradient. We present a goal-oriented procedure to derive a certified upper bound of the error in the shape gradient and we construct a fully-computable, constant-free a posteriori error estimator inspired by the complementary energy principle. The resulting CDA is able to identify a genuine descent direction at each iteration and features a reliable stopping criterion. After validating the error estimator, some numerical simulations of the resulting certified shape optimization strategy are presented for the well-known inverse identification problem of Electrical Impedance Tomography.
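The certification logic can be sketched abstractly as follows (a schematic Python fragment under assumed inputs from a finite element solver; the construction of the actual estimator is the subject of the paper):

    def cda_step(directional_derivative, error_bound, tolerance):
        # directional_derivative: discrete shape-gradient action on a candidate direction
        # error_bound: certified upper bound on the error in that quantity
        if directional_derivative + error_bound < 0:
            return "descend"          # certified genuine descent direction
        if abs(directional_derivative) + error_bound < tolerance:
            return "stop"             # reliable stopping criterion
        return "refine"               # bound too coarse: refine the FE approximation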
5

Imbach, Rémi, Guillaume Moroz, and Marc Pouget. "A certified numerical algorithm for the topology of resultant and discriminant curves." Journal of Symbolic Computation 80 (May 2017): 285–306. http://dx.doi.org/10.1016/j.jsc.2016.03.011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Horai, Mio, Hideo Kobayashi, and Takashi G. Nitta. "A Linearized Relaxing Algorithm for the Specific Nonlinear Optimization Problem." Abstract and Applied Analysis 2016 (2016): 1–8. http://dx.doi.org/10.1155/2016/1304954.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We propose a new method for a specific nonlinear and nonconvex global optimization problem using a linear relaxation technique. To simplify the nonlinear and nonconvex optimization problem, we transform it into a lower linear relaxation form and solve the relaxed optimization problem with a Branch and Bound algorithm. Under some reasonable assumptions, the global convergence of the algorithm is certified for the problem. Numerical results show that this method is more efficient than previous methods.
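A generic relax-and-branch skeleton of this kind can be sketched as follows (a hedged Python illustration using a simple Lipschitz-type linear underestimator on a hypothetical one-dimensional objective, not the authors' problem-specific relaxation):

    import heapq

    def branch_and_bound(f, a, b, L, tol=1e-4):
        # Minimize f on [a, b] given a Lipschitz constant L: each interval gets a
        # linear lower bound f(mid) - L * width / 2, the best evaluated point is the
        # incumbent upper bound, and intervals that cannot improve it are pruned.
        mid = (a + b) / 2
        best_x, best_f = mid, f(mid)
        heap = [(best_f - L * (b - a) / 2, a, b)]
        while heap:
            lb, lo, hi = heapq.heappop(heap)
            if lb > best_f - tol:
                continue                          # prune: cannot improve incumbent
            m = (lo + hi) / 2
            for lo2, hi2 in ((lo, m), (m, hi)):   # branch into two sub-intervals
                m2 = (lo2 + hi2) / 2
                f2 = f(m2)
                if f2 < best_f:
                    best_x, best_f = m2, f2       # tighter upper bound
                lb2 = f2 - L * (hi2 - lo2) / 2    # linear (Lipschitz) lower bound
                if lb2 < best_f - tol:
                    heapq.heappush(heap, (lb2, lo2, hi2))
        return best_x, best_f

    # Nonconvex example: x**4 - 3*x**2 + x on [-2, 3], |f'| <= 100 there.
    print(branch_and_bound(lambda x: x**4 - 3*x**2 + x, -2.0, 3.0, L=100.0))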
7

Krait, George, Sylvain Lazard, Guillaume Moroz, and Marc Pouget. "Certified numerical algorithm for isolating the singularities of the plane projection of generic smooth space curves." Journal of Computational and Applied Mathematics 394 (October 2021): 113553. http://dx.doi.org/10.1016/j.cam.2021.113553.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Arkhipova, O. V., N. N. Dolgikh, S. Yu Dolinger, V. Z. Kovalev, and D. S. Osipov. "Wavelet transform algorithm of daily load graphs for choosing parameters of hybrid energy storage." Omsk Scientific Bulletin, no. 174 (2020): 57–62. http://dx.doi.org/10.25206/1813-8225-2020-174-57-62.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The paper presents an algorithm for the frequency decomposition of daily load graphs based on a discrete wavelet transform. This algorithm makes it possible to choose the optimal type of wavelet function and the optimal level and tree of the wavelet decomposition. The inverse wavelet transform (reconstruction) along the single branch of approximation coefficients yields the low-frequency component of the power graph, used to select the optimal operating mode of the battery in the hybrid energy storage system. The detail branches of the wavelet coefficients determine the operating mode of the supercapacitor. A numerical experiment is built on data obtained using certified equipment.
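The approximation/detail split described above can be sketched with PyWavelets (a minimal illustration on a synthetic load profile; the wavelet type, decomposition level, and sampling are assumptions, not the authors' choices):

    import numpy as np
    import pywt

    # Synthetic daily load profile, one sample per minute (1440 points).
    t = np.arange(1440)
    load = 50 + 20 * np.sin(2 * np.pi * t / 1440) + 5 * np.random.randn(1440)

    wavelet, level = "db4", 5
    coeffs = pywt.wavedec(load, wavelet, level=level)

    # Reconstruct from the approximation branch only -> low-frequency component (battery).
    approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    battery_profile = pywt.waverec(approx_only, wavelet)[:len(load)]

    # The remaining high-frequency detail -> supercapacitor duty.
    supercap_profile = load - battery_profile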
9

Tran, Gia, Ricardo Santos, Hiroaki Ogawa, Makoto Nakamura, Kei Sakaguchi, and Andreas Kassler. "Context-Based Dynamic Meshed Backhaul Construction for 5G Heterogeneous Networks." Journal of Sensor and Actuator Networks 7, no. 4 (October 2, 2018): 43. http://dx.doi.org/10.3390/jsan7040043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A 5G heterogeneous network overlaid with millimeter-wave (mmWave) access employs mmWave meshed backhauling as a promising, cost-efficient backhaul architecture. Because mobile traffic distribution in practice is both time-variant and spatially non-uniform, dynamic construction of the mmWave meshed backhaul is a prerequisite for supporting the varying traffic distribution. Focusing on such a scenario of an outdoor dynamic crowd (ODC), this paper proposes a novel method to control the mmWave meshed backhaul for efficient operation of an mmWave overlay 5G HetNet through Software-Defined Network (SDN) technology. Our algorithm features two functionalities, i.e., backhauling route multiplexing for overloaded mmWave small cell base stations (SC-BSs) and ON/OFF status switching of mmWave SC-BSs for underloaded spots. In this paper, the effectiveness of the proposed meshed network is confirmed by both numerical analyses and experimental results. Simulations are conducted over a practical user distribution modeled from measured data in realistic environments. Numerical results show that the proposed algorithm can cope with locally intensive traffic and reduce energy consumption. Furthermore, a WiGig (Wireless Gigabit Alliance certified) device-based testbed is developed for a Proof-of-Concept (PoC), and preliminary measurement results confirm the efficiency of the proposed dynamic meshed network formation.
10

Ma, Ceyi, Yinghong Wen, and Jinbao Zhang. "A Fast, Hybrid, Time-Domain Discontinuous Galerkin-Physical Optics Method for Composite Electromagnetic Scattering Analysis." Applied Sciences 11, no. 6 (March 17, 2021): 2694. http://dx.doi.org/10.3390/app11062694.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
To accelerate the solution of transient electromagnetic scattering from composite scatterers, a novel hybrid discontinuous Galerkin time domain (DGTD) and time-domain physical optics (TDPO) method is proposed. The DGTD method is used to solve the accurate scattering field of the multi-scale objects region, and a hybrid explicit-implicit time integration method is also used to improve the efficiency of multi-scale problems in the time domain. Meanwhile, the TDPO method is used to accelerate the surface current integration in the electrically large region. In addition, the DGTDPO method considers the mutual coupling between the two regions and effectively reduces the number of numerical calculations for the rest of the composite target, thereby significantly reducing computer memory consumption. Numerical results certified the high efficiency and accuracy of the hybrid DGTDPO. According to the results, in comparison with the DGTD algorithm applied to the entire computational domain, the DGTDPO method can reduce computing time and memory by 90% and 70%, respectively. Meanwhile, the normalized root mean square deviation (NRMSD) of the time-domain, high-frequency approximation method is over 0.2, while that of the DGTDPO method is only 0.0971. That is, compared with the approximation methods, the hybrid method improves the accuracy by more than 64%.
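The NRMSD figure quoted above is a standard signal-comparison metric; a small Python helper illustrating one common convention (normalization by the range of the reference signal, which is an assumption here):

    import numpy as np

    def nrmsd(reference, approximation):
        # Root-mean-square deviation between two sampled signals, normalized by
        # the dynamic range of the reference signal.
        rmsd = np.sqrt(np.mean((np.asarray(reference) - np.asarray(approximation)) ** 2))
        return rmsd / (np.max(reference) - np.min(reference))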
11

Sitarz, M., A. Sladkowski, K. Bizoń, and K. Chruzik. "Designing of railway wheels. Part 2: Comparison of numerical analysis and experimental research." Proceedings of the Institution of Mechanical Engineers, Part F: Journal of Rail and Rapid Transit 219, no. 2 (March 1, 2005): 111–20. http://dx.doi.org/10.1243/095440905x8817.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The current design process can be simplified and accelerated if computer simulation based on the finite element method (FEM) is used. FEM numerical calculations of different wheels of railway wheelsets are investigated in the Department of Railway Engineering. The justification for undertaking this issue is that there is no established methodology for the design of railway wheelsets either in Poland or abroad. In order to assess the calculation model, experimental tests of static loads (due to carriage weight) and thermal loads (due to braking) have been run. The results of these tests have been compared with the results of the numerical analysis. The proposed calculation model has been compared with the results of the test stand investigation. Comparison of the results has confirmed the correctness of the proposed discrete model and calculation algorithm. Within the framework of the joint scientific and didactic projects of the Railway Engineering Department, the Central Rail Research Institute in Moscow, and the railway wheelset producer BONATRANS a.s. in Bohumin (Czech Republic), the method was worked out and the experimental research was conducted on specialist certified stands for railway wheelset research (Sitarz, M., Bizoń, K., and Chruzik, K. Numerical calculations of railway wheel sets. In: Scientific Proceedings of Riga Technical University, 2002, pp. 84-89; Sladkowski, A., Bizoń, K., and Chruzik, K. Porównanie wyników analizy MES z wynikami pomiarów tensometrycznych deformacji kól kolejowych zestawów kolowych, Zeszyty Naukowe Katedry Mechaniki Stosowanej nr 18/2002, Gliwice, 2002, pp. 397-402; Zima, R., Bizoń, K., Chruzik, K. Badania teoretyczne i eksperymentalne kól kolejowych zestawów kolowych. In: XLII Sympozjon 'Modelowanie w mechanice', Wisla, 2003).
12

Pestrenin, V. M., I. V. Pestrenina, L. V. Landik, A. R. Fagalov, and A. G. Pelevin. "REPRESENTATIVE VOLUME AND EFFECTIVE MATERIAL CHARACTERISTICS OF PERIODIC AND STATISTICALLY UNIFORMLY REINFORCED FIBER COMPOSITES." PNRPU Mechanics Bulletin, no. 1 (December 15, 2023): 103–10. http://dx.doi.org/10.15593/perm.mech/2023.1.10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In deformable solid mechanics, there are concepts associated with continuum points (displacements, relative elongations, shears) and with a set of continuum points, i.e., an elementary volume (mass, energy, stresses). The role of such a volume in the mechanics of composite materials is played by the representative volume element (RVE). This concept was first introduced by R. Hill (1963); modern authors use the formulation of W.J. Drugan and J.R. Willis (1996). Based on an analysis of the RVE concept, we formulated its essential features: the RVE is the smallest possible sample for numerical tests to determine the effective material parameters of the composite, and under any RVE loading its macroscopic stress-strain state is uniform. Its significance for the mechanics of composite materials is as follows: the existence of an RVE for a composite is a criterion for applying the effective modulus theory to the analysis of its stress-strain state; the dehomogenization of the stress state of a composite material at a point is the solution of the micromechanics problem of determining the RVE stress-strain state; and the characteristic size of the RVE limits the size of the sampling grid in a numerical study. An iterative algorithm for constructing a representative volume of a composite with a periodic structure and its effective thermoelastic material characteristics is proposed. It is shown that the geometric shape of such a composition is a rectangular parallelepiped. The RVE construction algorithm for periodic compositions is extended to composites statistically uniformly reinforced with continuous fibers. A method for modeling such materials with the following regular structure is suggested: in the section perpendicular to the fibers, the fiber centers are located at the vertices of regular triangles. Examples of constructing the RVE and the thermoelastic material characteristics of specific compositions are given. The calculation results are compared with data obtained using certified software products.
13

Gaudioso, Manlio, Sona Taheri, Adil M. Bagirov, and Napsu Karmitsa. "Bundle Enrichment Method for Nonsmooth Difference of Convex Programming Problems." Algorithms 16, no. 8 (August 21, 2023): 394. http://dx.doi.org/10.3390/a16080394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The Bundle Enrichment Method (BEM-DC) is introduced for solving nonsmooth difference of convex (DC) programming problems. The novelty of the method consists of the dynamic management of the bundle. More specifically, a DC model, being the difference of two convex piecewise affine functions, is formulated. The (global) minimization of the model is tackled by solving a set of convex problems whose cardinality depends on the number of linearizations adopted to approximate the second DC component function. The new bundle management policy distributes the information coming from previous iterations to separately model the DC components of the objective function. Such a distribution is driven by the sign of linearization errors. If the displacement suggested by the model minimization provides no sufficient decrease of the objective function, then the temporary enrichment of the cutting plane approximation of just the first DC component function takes place until either the termination of the algorithm is certified or a sufficient decrease is achieved. The convergence of the BEM-DC method is studied, and computational results on a set of academic test problems with nonsmooth DC objective functions are provided.
14

Gong, Xu, Zhengqi Gu, and Zhenlei Li. "Surrogate model for aerodynamic shape optimization of a tractor-trailer in crosswinds." Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 226, no. 10 (May 9, 2012): 1325–39. http://dx.doi.org/10.1177/0954407012442295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A surrogate model-based aerodynamic shape optimization method applied to the wind deflector of a tractor-trailer is presented in this paper. The aerodynamic drag coefficient of the tractor-trailer with and without the wind deflector subjected to crosswinds is analyzed. The numerical results show that the wind deflector can decrease the drag coefficient. Four parameters are used to describe the wind deflector geometry: width, length, height, and angle. A 30-level design of experiments study using the optimal Latin hypercube method was conducted to analyze the sensitivity of the design variables and build a database for setting up the surrogate model. The surrogate model was constructed based on the Kriging interpolation technique. The fitting precision of the surrogate model was examined using computational fluid dynamics and certified using a surrogate model simulation. Finally, a multi-island genetic algorithm was used to optimize the shape of the wind deflector based on the surrogate model. The tolerance between the results of the computational fluid dynamics simulation and the surrogate model was only 0.92% when using the optimal design variables, and the aerodynamic drag coefficient decreased by 4.65% compared to that of the tractor-trailer fitted with the original wind deflector. The effect of the optimal shape of the wind deflector was validated by computational fluid dynamics and a wind tunnel experiment.
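The surrogate-modelling workflow (Latin hypercube sampling, Kriging fit, optimization on the surrogate) can be sketched as follows; the objective function below is a hypothetical stand-in for the CFD drag evaluation, and a random search stands in for the multi-island genetic algorithm:

    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def cfd_drag(x):
        # Hypothetical stand-in for a CFD drag-coefficient evaluation over the
        # normalized deflector parameters (width, length, height, angle).
        w, l, h, a = x
        return 0.55 - 0.05 * np.sin(w) * np.cos(a) + 0.02 * (l - 0.5)**2 + 0.01 * h

    # 30-sample Latin hypercube design over the unit hypercube.
    X = qmc.LatinHypercube(d=4, seed=0).random(30)
    y = np.array([cfd_drag(x) for x in X])

    # Kriging (Gaussian process) surrogate of the drag coefficient.
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2] * 4),
                                  normalize_y=True).fit(X, y)

    # Cheap optimization on the surrogate instead of on the expensive CFD model.
    candidates = qmc.LatinHypercube(d=4, seed=1).random(5000)
    best = candidates[np.argmin(gp.predict(candidates))]
    print("surrogate optimum:", best, "predicted drag coefficient:", gp.predict([best])[0])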
15

Xu, Wei, Mengyan Zang, Jiro Sakamoto, and Shupei Zhang. "3D Adaptive Combined DE/FE Algorithm for Analyzing Impact Fracture of Laminated Glass." International Journal of Computational Methods 16, no. 04 (May 13, 2019): 1850101. http://dx.doi.org/10.1142/s0219876218501013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Laminated glass has been widely employed in automobile windshields, modern buildings, etc., thanks to its safety and durability. A novel 3D adaptive combined DE/FE algorithm is proposed to study its impact fracture behaviour when the fracture region is small relative to the specimen and the cracks propagate at random positions. The proposed method can automatically convert distorted finite elements into spherical discrete elements while simulating the impact fracture of laminated glass. In this method, the system is completely discretized into finite elements at the initial moment, with no discrete elements existing until some of the finite elements become severely deformed. Subsequently, each finite element whose maximum tensile stress exceeds a user-specified conversion criterion is converted into eight spherical discrete elements. At the same time, the system is split into two subdomains, the finite element (FE) and the discrete element (DE) subdomains. An extrinsic cohesive fracture model is adaptively adopted only in the DE subregion to capture crack propagation when the normal stress between the DEs equals or exceeds the cohesive strength. The impact fracture of a glass beam is simulated by the adaptive algorithm and by the discrete element method, respectively. Apart from the micro-cracks and the cohesive zone, almost the same crack patterns are captured by both numerical methods, and the efficiency of the proposed method is much higher (10 times in this case) than that of the pure DEM. The satisfactory agreement of the simulation results certified the feasibility and effectiveness of such an adaptive algorithm. Finally, the impact fracture simulation is performed with the adaptive algorithm on a laminated glass beam of the same size as the experimental specimens. Apart from differences in crack occurrence and propagation angle, the fracture patterns show a similar agreement with the experimental results. The same conclusions on the role of the PVB interlayer can be drawn from the simulation results as from the experimental ones. The proposed method is promising for analyzing the impact fracture of an automobile windshield subjected to head impact, for pedestrian protection, traffic accident reconstruction, and the structural optimization of windshields.
16

Çavuşluk, Bayram, and Başaran Karademir. "Lojistik Firma Personellerinin Organik Tarım Anlayışı; Iğdır İli Örneği." Turkish Journal of Agriculture - Food Science and Technology 11, no. 1 (January 31, 2023): 43–50. http://dx.doi.org/10.24925/turjaf.v11i1.43-50.5442.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An organic agricultural product is a food whose naturalness is certified and which should not lose its naturalness before reaching the end user. This research aimed to reveal the approach of logistics (transport) sector employees to the transportation of organic agricultural products. The research was conducted as a face-to-face survey of 115 logistics employees in Iğdır. The collected data were evaluated numerically and proportionally and analyzed by statistical methods, including the CART decision tree algorithm. Of the logistics firm employees, 96.5% claimed to know what organic food is, 100% to eat organic food, and 59.1% even claimed to know what organic food certification is, but only 4.3% of the participants answered the question "What do you think organic food is?" with "organically certified food", and 27.8% said that they pay attention to the certificate when purchasing organic food. Moreover, 27.8% of the participants claimed that they could tell whether a food was organic just by looking at the product. In addition, 78.3% of the logistics employees stated that they transport organic food and that transportation time (94.8%), the cooling system (100%), and cleanliness (100%) are more important criteria for organic food transportation than for normal food. Apart from this, they stated that sector employees other than themselves did not have sufficient knowledge (69.6%) about organic food transportation, while claiming that they themselves had sufficient knowledge (61.7%) of the subject. Consequently, although the logistics company employees in the Iğdır region claimed that they generally knew organic food, it was revealed that their level of knowledge was not sufficient. Despite these results, it was observed that the personnel knew what to pay attention to in organic food transportation. It was concluded that TV, radio, and the Internet could be used as tools to eliminate the lack of information on the subject.
17

Aksoy, Mehmet, Başaran Karademir, and İbrahim Hakkı Kadirhanoğulları. "Amasya İlinde Toprak Tarımı Çiftçilerinin Organik Tarım Anlayışı." Turkish Journal of Agriculture - Food Science and Technology 9, no. 11 (December 2, 2021): 2065–72. http://dx.doi.org/10.24925/turjaf.v9i11.2065-2072.4646.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Organic agricultural products are natural food products that are checked and certified by authorized institutions at all stages of cultivation and presented to the consumer. In this study, the knowledge and practices of farmers engaged in soil agriculture in Amasya province regarding organic agricultural production were investigated by a face-to-face survey method. In the survey, the local producers were asked what organic agriculture is in general, whether they produce organically, and whether they fulfill the organic agriculture requirements for such production. The obtained data were presented numerically and proportionally. Data were also analyzed with the decision tree method using the CART (Classification and Regression Tree) algorithm. According to the findings, although the local farmers declared a high level of knowledge of what organic agriculture is, none of the producers fulfilled the requirements of organic agriculture in practice. According to the survey data, the farmers reported that they received information about organic agriculture largely from TV, radio, and the internet (58.5%), while a very small percentage (11%) stated that they had received training. From this result, it was concluded that the local farmers do not have enough information about organic agriculture, but that the lack of information can be easily remedied via TV, radio, and the internet. Overall, it was concluded that Amasya farmers do not currently have sufficient information about organic agricultural production, but that if appropriate training is provided, successful organic production can be achieved.
18

Nichols, James. "Coarse reduced model selection for nonlinear state estimation." ANZIAM Journal 62 (February 7, 2022): C192—C207. http://dx.doi.org/10.21914/anziamj.v62.16169.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
State estimation is the task of approximately reconstructing a solution u of a parametric partial differential equation when the parameter vector y is unknown and the only information is m linear measurements of u. Cohen et al. [arXiv:2009.02687, Nov. 2020] proposed a method to use a family of linear reduced spaces as a generalised nonlinear reduced model for state estimation. A computable surrogate distance is used to evaluate which linear estimate lies closest to a true solution of the pde problem. In this article we propose a strategy of coarse computation of the surrogate distance while maintaining a fine mesh reduced model, as the computational cost of the surrogate distance is large relative to the reduced modelling task. We demonstrate numerically that the error induced by the coarse distance is dominated by other approximation errors. References P. Binev, A. Cohen, W. Dahmen, R. DeVore, G. Petrova, and P. Wojtaszczyk. Convergence rates for greedy algorithms in reduced basis methods. SIAM J. Math. Anal. 43.3 (2011), pp. 1457–1472. doi: 10.1137/100795772. P. Binev, A. Cohen, W. Dahmen, R. DeVore, G. Petrova, and P. Wojtaszczyk. Data assimilation in reduced modeling. SIAM/ASA J. Uncert. Quant. 5.1 (2017), pp. 1–29. doi: 10.1137/15M1025384. A. Cohen, W. Dahmen, O. Mula, and J. Nichols. Nonlinear reduced models for state and parameter estimation. arXiv:2009.02687 [cs, math] (2020). url: http://arxiv.org/abs/2009.02687 (visited on 01/07/2021). A. Cohen, W. Dahmen, R. DeVore, and J. Nichols. Reduced basis greedy selection using random training sets. ESAIM: Math. Model. Num. Anal. 54 (2020), pp. 1509–1524. doi: 10.1051/m2an/2020004. J. S. Hesthaven, G. Rozza, and B. Stamm. Certified reduced basis methods for parametrized partial differential equations. SpringerBriefs in Mathematics. Springer, 2016. doi: 10.1007/978-3-319-22470-1. Y. Maday, A. T. Patera, J. D. Penn, and M. Yano. A parameterized-background data-weak approach to variational data assimilation: formulation, analysis, and application to acoustics. Int. J. Num. Meth. Eng. 102.5 (2015), pp. 933–965. doi: 10.1002/nme.4747.
19

Fyrillas, Andreas, Boris Bourdoncle, Alexandre Maïnos, Pierre-Emmanuel Emeriau, Kayleigh Start, Nico Margaria, Martina Morassi, et al. "Certified Randomness in Tight Space." PRX Quantum 5, no. 2 (May 30, 2024). http://dx.doi.org/10.1103/prxquantum.5.020348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Reliable randomness is a core ingredient in algorithms and applications ranging from numerical simulations to statistical sampling and cryptography. The outcomes of measurements on entangled quantum states can violate Bell inequalities, thus guaranteeing their intrinsic randomness. This constitutes the basis for certified randomness generation. However, this certification requires spacelike separated devices, making it unfit for a compact apparatus. Here we provide a general method for certified randomness generation on a small-scale application-ready device and perform an integrated photonic demonstration combining a solid-state emitter and a glass chip. In contrast to most existing certification protocols, which in the absence of spacelike separation are vulnerable to loopholes inherent to realistic devices, the protocol we implement accounts for information leakage and is thus compatible with emerging compact scalable devices. We demonstrate a two-qubit photonic device that achieves the highest standard in randomness, yet is cut out for real-world applications. The full 94.5-h-long stabilized process harnesses a bright and stable single-photon quantum-dot-based source, feeding into a reconfigurable photonic chip, with stability in the milliradian range on the implemented phases and consistent indistinguishability of the entangled photons above 93%. Using the contextuality framework, we certify private randomness generation and achieve a rate compatible with randomness expansion secure against quantum adversaries. Published by the American Physical Society 2024
20

Zhu, Banghua, Jiantao Jiao, and Jacob Steinhardt. "Robust estimation via generalized quasi-gradients." Information and Inference: A Journal of the IMA, August 4, 2021. http://dx.doi.org/10.1093/imaiai/iaab018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We explore why many recently proposed robust estimation problems are efficiently solvable, even though the underlying optimization problems are non-convex. We study the loss landscape of these robust estimation problems, and identify the existence of 'generalized quasi-gradients'. Whenever these quasi-gradients exist, a large family of no-regret algorithms are guaranteed to approximate the global minimum; this includes the commonly used filtering algorithm. For robust mean estimation of distributions under bounded covariance, we show that any first-order stationary point of the associated optimization problem is an approximate global minimum if and only if the corruption level $\epsilon < 1/3$. Consequently, any optimization algorithm that approaches a stationary point yields an efficient robust estimator with breakdown point $1/3$. With carefully designed initialization and step size, we improve this to $1/2$, which is optimal. For other tasks, including linear regression and joint mean and covariance estimation, the loss landscape is more rugged: there are stationary points arbitrarily far from the global minimum. Nevertheless, we show that generalized quasi-gradients exist and construct efficient algorithms. These algorithms are simpler than previous ones in the literature, and for linear regression we improve the estimation error from $O(\sqrt{\epsilon})$ to the optimal rate of $O(\epsilon)$ for small $\epsilon$ assuming certified hypercontractivity. For mean estimation with near-identity covariance, we show that a simple gradient descent algorithm achieves breakdown point $1/3$ and iteration complexity $\tilde{O}(d/\epsilon^2)$.
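The filtering algorithm mentioned in the abstract can be sketched as follows (a minimal Python illustration for bounded-covariance mean estimation, with an assumed spectral threshold and synthetic corrupted data; not the authors' code):

    import numpy as np

    def filtered_mean(X, sigma2=1.0, rng=np.random.default_rng(0)):
        # While the empirical covariance has an unexpectedly large top eigenvalue,
        # drop points at random with probability proportional to their squared
        # projection onto that eigenvector, so outliers are removed far more often.
        X = np.asarray(X, dtype=float)
        while len(X) > 2:
            mu = X.mean(axis=0)
            eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
            if eigvals[-1] <= 4 * sigma2:                   # spectral norm small enough
                break
            tau = ((X - mu) @ eigvecs[:, -1]) ** 2          # outlier scores
            X = X[rng.random(len(X)) > tau / tau.max()]     # keep low-score points
        return X.mean(axis=0)

    # Synthetic corrupted sample: 90% inliers around 0, 10% far-away outliers.
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(0, 1, (900, 5)), rng.normal(8, 1, (100, 5))])
    print(filtered_mean(data))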
21

Induti, Marco, Carlo Romito, and Luca Scaccabarozzi. "Magnetic crawler for welds Visual Testing, based on 3D profilometry and 2D image processing." e-journal of nondestructive testing 28, no. 8 (August 2023). http://dx.doi.org/10.58286/28172.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
To ensure the satisfactory performance of a welded structure, the quality of the welds must be determined by adequate testing procedures. During Visual Testing (VT), the weld is examined through the eyes of an inspector to determine surface discontinuities. Although this is commonly considered the easiest, quickest, and least expensive type of inspection, it has some limitations: a certified inspector must always be on site to perform the test, making it strongly dependent on his experience, knowledge, and the current environmental conditions. In addition, there is no possibility to record inspection data other than standard pictures and comments added to inspection reports. Modern technology brings novel solutions for quality assurance. This article describes a powerful tool that can improve the reliability of VT by combining precise laser measurements, image processing, and cloud computing. A portable magnetic crawler has been developed using a Raspberry Pi SBC, a smart profile sensor, and a 5 MPix industrial colour camera in order to gather both the weld 3D point cloud and surface pictures. Laser triangulation and processing power integrated directly on board allow easy weld profile measurement. Undercuts, reinforcement excess, spatters, and hi-lo misalignment are precisely detected and sized by the laser sensor, while the integrated 2D camera records and analyses surface features such as contamination, corrosion, and weld discoloration. The point cloud weld reconstruction is realized by stitching together 2D profile data at a fixed interval as fine as a hundredth of a millimetre, with a spatial resolution of up to 50 µm. The inspected weld is fully digitalized in the form of a 3D object together with a set of numerical features; data are securely stored in local flash memory and automatically synchronized with a cloud platform for remote data visualization. Automatic weld assessment can also be applied using cloud computing and artificial intelligence (AI) algorithms together with deep learning tools. The combination of images, 3D models, measurements, and other evaluable features in the same digital platform enriches the weld assessment, making it quicker and more reliable with the support of AI; it gives the possibility of remote visual inspection and ensures a flawless workflow where visual inspection data are stored in a dedicated platform and can be accessed at any moment by stakeholders.
22

Bernreuther, Marco, and Stefan Volkwein. "An adaptive certified space-time reduced basis method for nonsmooth parabolic partial differential equations." Advances in Computational Mathematics 50, no. 3 (May 15, 2024). http://dx.doi.org/10.1007/s10444-024-10137-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this paper, a nonsmooth semilinear parabolic partial differential equation (PDE) is considered. For a reduced basis (RB) approach, a space-time formulation is used to develop a certified a-posteriori error estimator. This error estimator is adapted to the presence of the discrete empirical interpolation method (DEIM) as the approximation technique for the nonsmoothness. The separability of the estimated error into an RB part and a DEIM part then guides the development of an adaptive RB-DEIM algorithm, combining both offline phases into one. Numerical experiments show the capabilities of this novel approach in comparison with classical RB and RB-DEIM approaches.
23

Nannicini, Giacomo. "Fast Quantum Subroutines for the Simplex Method." Operations Research, October 18, 2022. http://dx.doi.org/10.1287/opre.2022.2341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
What would Dantzig do with a quantum computer? It is unlikely we will ever find out the answer to this question. However, we can try to understand if the simplex method can be implemented on a quantum computer, and this might have piqued Dantzig’s interest. The paper “Fast Quantum Subroutines for the Simplex Method” gives a quantum implementation of an iteration of the simplex method, in which the basis inverse is never explicitly computed: the quantum computer takes as input the current basis and certifies optimality or outputs the next basis. Because computing the basis inverse is expensive, this can lead to an asymptotically faster algorithm in terms of the problem size: in the best case, the quantum algorithm can identify pivots in essentially linear time! This, however, comes at the cost of worse dependence on some numerical parameters: all these tradeoffs are discussed in the full article.
24

UYAR, Esra, and Zeynep Aybüke GÜNEKBAY. "Comparison of the Number of Particle History for Monte Carlo Codes in Gamma-Ray Spectroscopy." Gazi University Journal of Science Part A: Engineering and Innovation, June 20, 2023, 176–83. http://dx.doi.org/10.54287/gujsa.1276486.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Monte Carlo is a numerical computation algorithm that is widely used in many fields of science to obtain numerical results from a large number of repeated random samplings. Radiation transport with Monte Carlo simulation continues to grow in popularity in the field of radiation measurement. High-accuracy and high-precision measurement of radionuclide activities in gamma-ray spectrometry depends on the efficiency calibration of the detector. Efficiency calibration is carried out in two ways: experimentally, using certified reference materials, or by the Monte Carlo simulation method. The experimental method is expensive, procedurally complex, and time-consuming because of the need to obtain reference materials. The reliable use of the Monte Carlo technique, which requires no standard radioactive source for determining detector efficiency, is therefore becoming common. The most critical step for accurate and precise results when simulating the response of a detector with the Monte Carlo method is modeling the detector with its realistic dimensions. Another parameter as important as detector modeling is the number of histories in the simulation code, which is examined in this study. The effect of the number of histories on efficiency was examined in detail using the PHITS, GESPECOR, and DETEFF Monte Carlo simulation codes. Since there is no definite figure for this effect, which is important for obtaining meaningful and realistic results, the change in the efficiency value was examined by increasing the number of histories from 10^5 to 10^8. The results obtained in this work showed that at least 10^7 particle histories should be used in all three programs so that the uncertainty stays below 1%. If the available facilities are sufficient, this can be increased to 10^8 with a better-equipped and faster computer. However, going higher than this value makes no appreciable difference, as seen from the study.
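The scaling behind that recommendation can be illustrated with a toy Monte Carlo (hypothetical detection probability; real codes such as PHITS, GESPECOR, or DETEFF transport particles rather than flipping coins):

    import numpy as np

    rng = np.random.default_rng(42)
    true_efficiency = 0.03                     # assumed full-energy-peak efficiency

    for n_histories in (10**5, 10**6, 10**7, 10**8):
        detected = rng.binomial(n_histories, true_efficiency)
        eff = detected / n_histories
        rel_unc = np.sqrt(eff * (1 - eff) / n_histories) / eff   # ~ 1 / sqrt(N)
        print(f"N = {n_histories:>9d}  efficiency = {eff:.5f}  "
              f"relative uncertainty = {100 * rel_unc:.2f}%")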
