Journal articles on the topic 'Tutorat inversé'


Consult the top 47 journal articles for your research on the topic 'Tutorat inversé.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Douze, E. J. "Tutorial Linear Inverse Filters in Orthogonal Coordinates." Geophysical Prospecting 33, no. 8 (December 1985): 1093–102. http://dx.doi.org/10.1111/j.1365-2478.1985.tb01354.x.

2

Givoli, Dan. "A tutorial on the adjoint method for inverse problems." Computer Methods in Applied Mechanics and Engineering 380 (July 2021): 113810. http://dx.doi.org/10.1016/j.cma.2021.113810.

3

Jin, Bangti, and William Rundell. "A tutorial on inverse problems for anomalous diffusion processes." Inverse Problems 31, no. 3 (February 10, 2015): 035003. http://dx.doi.org/10.1088/0266-5611/31/3/035003.

4

Ulrych, Tadeusz J., Mauricio D. Sacchi, and Alan Woodbury. "A Bayes tour of inversion: A tutorial." GEOPHYSICS 66, no. 1 (January 2001): 55–69. http://dx.doi.org/10.1190/1.1444923.

Abstract:
It is unclear whether one can (or should) write a tutorial about Bayes. It is a little like writing a tutorial about the sense of humor. However, this tutorial is about the Bayesian approach to the solution of the ubiquitous inverse problem. Inasmuch as it is a tutorial, it has its own special ingredients. The first is that it is an overview; details are omitted for the sake of the grand picture. In fractal language, it is the progenitor of the complex pattern. As such, it is a vision of the whole. The second is that it does, of necessity, assume some ill‐defined knowledge on the part of the reader. Finally, this tutorial presents our view. It may not appeal to, let alone be agreed to, by all.
5

Scales, John A., and Luis Tenorio. "Prior information and uncertainty in inverse problems." GEOPHYSICS 66, no. 2 (March 2001): 389–97. http://dx.doi.org/10.1190/1.1444930.

Abstract:
Solving any inverse problem requires understanding the uncertainties in the data to know what it means to fit the data. We also need methods to incorporate data‐independent prior information to eliminate unreasonable models that fit the data. Both of these issues involve subtle choices that may significantly influence the results of inverse calculations. The specification of prior information is especially controversial. How does one quantify information? What does it mean to know something about a parameter a priori? In this tutorial we discuss Bayesian and frequentist methodologies that can be used to incorporate information into inverse calculations. In particular we show that apparently conservative Bayesian choices, such as representing interval constraints by uniform probabilities (as is commonly done when using genetic algorithms, for example) may lead to artificially small uncertainties. We also describe tools from statistical decision theory that can be used to characterize the performance of inversion algorithms.
6

Wang, Tan K. "Prestack Inverse-Ray Imaging of A 2D Homogeneous Layer: A Tutorial Study." Terrestrial, Atmospheric and Oceanic Sciences 13, no. 4 (2002): 399. http://dx.doi.org/10.3319/tao.2002.13.4.399(t).

7

Stamnes, Knut, Børge Hamre, Snorre Stamnes, Nan Chen, Yongzhen Fan, Wei Li, Zhenyi Lin, and Jakob Stamnes. "Progress in Forward-Inverse Modeling Based on Radiative Transfer Tools for Coupled Atmosphere-Snow/Ice-Ocean Systems: A Review and Description of the AccuRT Model." Applied Sciences 8, no. 12 (December 19, 2018): 2682. http://dx.doi.org/10.3390/app8122682.

Abstract:
A tutorial review is provided of forward and inverse radiative transfer in coupled atmosphere-snow/ice-water systems. The coupled system is assumed to consist of two adjacent horizontal slabs separated by an interface across which the refractive index changes abruptly from its value in air to that in ice/water. A comprehensive review is provided of the inherent optical properties of air and water (including snow and ice). The radiative transfer equation for unpolarized as well as polarized radiation is described and solutions are outlined. Several examples of how to formulate and solve inverse problems encountered in environmental optics involving coupled atmosphere-water systems are discussed in some detail to illustrate how the solutions to the radiative transfer equation can be used as a forward model to solve practical inverse problems.
8

Luo, Xiangang, Mingbo Pu, Fei Zhang, Mingfeng Xu, Yinghui Guo, Xiong Li, and Xiaoliang Ma. "Vector optical field manipulation via structural functional materials: Tutorial." Journal of Applied Physics 131, no. 18 (May 14, 2022): 181101. http://dx.doi.org/10.1063/5.0089859.

Abstract:
Vector optical field (VOF) manipulation has greatly extended the boundaries of traditional scalar optics over the past decades. Meanwhile, newly emerging techniques enabled by structural functional optical materials have driven the research domain into the subwavelength regime, where abundant new physical phenomena and technologies have been discovered and exploited for practical applications. In this Tutorial, we outline the basic principles, methodologies, and applications of VOF manipulation via structural functional materials. Among various technical routes, we focus on metasurface-based approaches, which show clear advantages in design flexibility, system compactness, and overall performance. Both forward and inverse design methods based on the rigorous solution of Maxwell's equations are presented, which provide a valuable basis for future researchers. Finally, we discuss the generalized optical laws and conventions based on VOF manipulation. Applications in optical imaging, communications, precision measurement, laser fabrication, etc. are highlighted.
9

Mohammad-Djafari, Ali. "Regularization, Bayesian Inference, and Machine Learning Methods for Inverse Problems." Entropy 23, no. 12 (December 13, 2021): 1673. http://dx.doi.org/10.3390/e23121673.

Abstract:
Classical methods for inverse problems are mainly based on regularization theory, in particular those based on optimization of a criterion with two parts: a data-model matching term and a regularization term. Different choices for these two terms, and a great number of optimization algorithms, have been proposed. When these two terms are distance or divergence measures, they can have a Bayesian maximum a posteriori (MAP) interpretation, in which the two terms correspond to the likelihood and prior-probability models, respectively. The Bayesian approach gives more flexibility in choosing these terms and, in particular, the prior term via hierarchical models and hidden variables. However, the Bayesian computations can become computationally very heavy. Machine learning (ML) methods such as classification, clustering, segmentation, and regression, based on neural networks (NN), particularly convolutional NN, deep NN, physics-informed neural networks, etc., can help to obtain approximate practical solutions to inverse problems. In this tutorial article, particular examples of image denoising, image restoration, and computed-tomography (CT) image reconstruction illustrate this cooperation between ML and inversion.
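As a minimal illustration of the two-part criterion described in this abstract (a data-model matching term plus a regularization term), the following Python sketch solves a small Tikhonov-regularized linear inverse problem. The operator, noise level, and regularization weight are invented for the example; under Gaussian likelihood and Gaussian prior this estimate coincides with the Bayesian MAP solution.

```python
import numpy as np

def tikhonov_solve(A, y, lam):
    # Minimize ||A x - y||^2 + lam * ||x||^2: the two-term criterion
    # (data-model matching + regularization). With Gaussian likelihood and
    # a Gaussian prior, this is exactly the Bayesian MAP estimate.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Invented toy problem: a smoothing (ill-conditioned) operator and noisy data.
rng = np.random.default_rng(0)
n = 50
A = np.tril(np.ones((n, n))) / n                  # running-average operator
x_true = np.sin(np.linspace(0.0, np.pi, n))
y = A @ x_true + 1e-3 * rng.standard_normal(n)
x_hat = tikhonov_solve(A, y, lam=1e-4)
```

The `lam` parameter plays exactly the role of the prior-to-likelihood balance the abstract describes; choosing it is the hard part in practice.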
10

Christiansen, Rasmus E., and Ole Sigmund. "Compact 200 line MATLAB code for inverse design in photonics by topology optimization: tutorial: erratum." Journal of the Optical Society of America B 38, no. 6 (May 10, 2021): 1822. http://dx.doi.org/10.1364/josab.427899.

11

Roberts, Ken, and S. R. Valluri. "Tutorial: The quantum finite square well and the Lambert W function." Canadian Journal of Physics 95, no. 2 (February 2017): 105–10. http://dx.doi.org/10.1139/cjp-2016-0602.

Abstract:
We present a solution of the quantum mechanics problem of the allowable energy levels of a bound particle in a one-dimensional finite square well. The method is a geometric-analytic technique utilizing the conformal mapping w → z = we^w between two complex domains. The solution of the finite square well problem can be seen to be described by the images of simple geometric shapes, lines, and circles, under this map and its inverse image. The technique can also be described using the Lambert W function. One can work in either of the complex domains, thereby obtaining additional insight into the finite square well problem and its bound energy states. This suggests interesting possibilities for the design of materials that are sensitive to minute changes in their environment such as nanostructures and the quantum well infrared photodetector.
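The Lambert W function named in this abstract inverts z = we^w. As a rough numerical illustration only (not the authors' geometric construction), here is a minimal Python sketch that evaluates the principal branch by Newton's method; the starting guess and iteration cap are choices made for this example.

```python
import math

def lambert_w(z, tol=1e-12):
    # Principal branch W0: solve w * exp(w) = z for real z >= 0 by Newton's
    # method. Illustrative only; it ignores the other branch and complex z.
    w = math.log(1.0 + z)                 # starting guess on the W0 branch
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - z) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

w1 = lambert_w(1.0)    # the omega constant, W(1) ≈ 0.567143
```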
12

Wampler, C. W., A. P. Morgan, and A. J. Sommese. "Numerical Continuation Methods for Solving Polynomial Systems Arising in Kinematics." Journal of Mechanical Design 112, no. 1 (March 1, 1990): 59–68. http://dx.doi.org/10.1115/1.2912579.

Abstract:
Many problems in mechanism design and theoretical kinematics can be formulated as systems of polynomial equations. Recent developments in numerical continuation have led to algorithms that compute all solutions to polynomial systems of moderate size. Despite the immediate relevance of these methods, they are unfamiliar to most kinematicians. This paper attempts to bridge that gap by presenting a tutorial on the main ideas of polynomial continuation along with a section surveying advanced techniques. A seven position Burmester problem serves to illustrate the basic material and the inverse position problem for general six-axis manipulators shows the usefulness of the advanced techniques.
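A toy version of the polynomial-continuation idea in this abstract can be sketched in Python: start from a system with known roots and track them as the start system is deformed into the target. The step count, the gamma constant, and the univariate target polynomial below are all invented for illustration; real kinematics solvers handle multivariate systems, adaptive steps, and paths diverging to infinity.

```python
import numpy as np

def track_roots(coeffs, steps=200, newton_iters=5):
    # Homotopy continuation for one polynomial f(x) = 0 with complex roots.
    # Start system g(x) = x^d - 1 has the d-th roots of unity as solutions;
    # each is tracked along H(x, t) = (1 - t) * gamma * g(x) + t * f(x)
    # as t goes from 0 to 1, with a few Newton corrections per t step.
    c = np.asarray(coeffs, dtype=complex)   # highest-degree coefficient first
    d = len(c) - 1
    dc = np.polyder(c)
    gamma = np.exp(0.7j)                    # generic constant ("gamma trick")
    roots = [np.exp(2j * np.pi * k / d) for k in range(d)]
    for k in range(d):
        x = roots[k]
        for s in range(1, steps + 1):
            t = s / steps
            for _ in range(newton_iters):
                H = (1 - t) * gamma * (x ** d - 1) + t * np.polyval(c, x)
                dH = (1 - t) * gamma * d * x ** (d - 1) + t * np.polyval(dc, x)
                x -= H / dH
        roots[k] = x
    return roots

roots = track_roots([1, 0, -2])    # f(x) = x^2 - 2
```

The generic complex `gamma` keeps the two tracked paths from colliding mid-homotopy, which is the basic trick that lets continuation find all solutions.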
13

Helsing, Johan. "Solving Integral Equations on Piecewise Smooth Boundaries Using the RCIP Method: A Tutorial." Abstract and Applied Analysis 2013 (2013): 1–20. http://dx.doi.org/10.1155/2013/938167.

Abstract:
Recursively compressed inverse preconditioning (RCIP) is a numerical method for obtaining highly accurate solutions to integral equations on piecewise smooth surfaces. The method originated in 2008 as a technique within a scheme for solving Laplace’s equation in two-dimensional domains with corners. In a series of subsequent papers, the technique was then refined and extended as to apply to integral equation formulations of a broad range of boundary value problems in physics and engineering. The purpose of the present paper is threefold: first, to review the RCIP method in a simple setting; second, to show how easily the method can be implemented in MATLAB; third, to present new applications of RCIP to integral equations of scattering theory on planar curves with corners.
14

Lee, Sangwon, and Woojoo Lee. "Application of Standardization for Causal Inference in Observational Studies: A Step-by-step Tutorial for Analysis Using R Software." Journal of Preventive Medicine and Public Health 55, no. 2 (March 31, 2022): 116–24. http://dx.doi.org/10.3961/jpmph.21.569.

Abstract:
Epidemiological studies typically examine the causal effect of exposure on a health outcome. Standardization is one of the most straightforward methods for estimating causal estimands. However, compared to inverse probability weighting, there is a lack of user-centric explanations for implementing standardization to estimate causal estimands. This paper explains the standardization method using basic R functions only and how it is linked to the R package stdReg, which can be used to implement the same procedure. We provide a step-by-step tutorial for estimating causal risk differences, causal risk ratios, and causal odds ratios based on standardization. We also discuss how to carry out subgroup analysis in detail.
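The tutorial itself works in R with the stdReg package; as a language-neutral sketch of what standardization computes, here is a hypothetical Python example with a single binary confounder, where the g-formula reduces to averaging stratum-specific risks over the marginal confounder distribution. All data-generating numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Invented data: binary confounder L, exposure A, outcome Y.
L = rng.binomial(1, np.full(n, 0.5))
A = rng.binomial(1, np.where(L == 1, 0.7, 0.3))    # exposure depends on L
Y = rng.binomial(1, 0.1 + 0.2 * A + 0.3 * L)       # true causal RD = 0.2

def std_risk(a):
    # Standardization: weight stratum-specific risks by the marginal
    # distribution of the confounder L (the g-formula for one confounder).
    risks = [Y[(A == a) & (L == l)].mean() for l in (0, 1)]
    weights = [(L == l).mean() for l in (0, 1)]
    return sum(r * w for r, w in zip(risks, weights))

rd = std_risk(1) - std_risk(0)    # standardized causal risk difference
```

The crude difference `Y[A==1].mean() - Y[A==0].mean()` is confounded by L here, while the standardized contrast recovers the causal risk difference.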
15

Rudolph, Kara E., Dana E. Goin, and Elizabeth A. Stuart. "The Peril of Power: A Tutorial on Using Simulation to Better Understand When and How We Can Estimate Mediating Effects." American Journal of Epidemiology 189, no. 12 (May 16, 2020): 1559–67. http://dx.doi.org/10.1093/aje/kwaa083.

Abstract:
Mediation analyses are valuable for examining mechanisms underlying an association, investigating possible explanations for nonintuitive results, or identifying interventions that can improve health in the context of nonmanipulable exposures. However, designing a study for the purpose of answering a mediation-related research question remains challenging because sample size and power calculations for mediation analyses are typically not conducted or are crude approximations. Consequently, many studies are probably conducted without first establishing that they have the statistical power required to detect a meaningful effect, potentially resulting in wasted resources. In an effort to advance more accurate power calculations for estimating direct and indirect effects, we present a tutorial demonstrating how to conduct a flexible, simulation-based power analysis. In this tutorial, we compare power to estimate direct and indirect effects across various estimators (the Baron and Kenny estimator (J Pers Soc Psychol. 1986;51(6):1173–1182), inverse odds ratio weighting, and targeted maximum likelihood estimation) using various data structures designed to mimic important features of real data. We include step-by-step commented R code (R Foundation for Statistical Computing, Vienna, Austria) in an effort to lower implementation barriers to ultimately improving power assessment in mediation studies.
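The paper's code is in R; the following hypothetical Python sketch shows the core of a simulation-based power analysis for an indirect effect, using a crude joint-significance rule on the two Baron-Kenny regressions. The effect sizes, sample sizes, and test rule are invented for illustration and are much simpler than the estimators compared in the paper.

```python
import numpy as np

def simulate_power(n, a=0.3, b=0.3, n_sims=500, seed=0):
    # Monte Carlo power for the indirect effect a*b, using a crude
    # joint-significance rule: both the a path (M ~ X) and the b path
    # (Y ~ M + X) must be significant at the two-sided 5% level.
    rng = np.random.default_rng(seed)
    z_crit = 1.96
    hits = 0
    for _ in range(n_sims):
        X = rng.standard_normal(n)
        M = a * X + rng.standard_normal(n)
        Y = b * M + 0.1 * X + rng.standard_normal(n)
        # a path: slope and standard error of the no-intercept fit M ~ X
        ahat = (X @ M) / (X @ X)
        se_a = np.sqrt(np.sum((M - ahat * X) ** 2) / (n - 1) / (X @ X))
        # b path: least-squares fit Y ~ M + X and the standard error of bhat
        D = np.column_stack([M, X])
        coef = np.linalg.lstsq(D, Y, rcond=None)[0]
        resid = Y - D @ coef
        cov = np.sum(resid ** 2) / (n - 2) * np.linalg.inv(D.T @ D)
        if abs(ahat / se_a) > z_crit and abs(coef[0]) / np.sqrt(cov[0, 0]) > z_crit:
            hits += 1
    return hits / n_sims

power = simulate_power(n=200)    # power at an invented effect size and n
```

Sweeping `n` in such a loop, rather than relying on a closed-form approximation, is the design-stage workflow the abstract advocates.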
16

Wang, Hang, Wei Chen, Weilin Huang, Shaohuan Zu, Xingye Liu, Liuqing Yang, and Yangkang Chen. "Nonstationary predictive filtering for seismic random noise suppression — A tutorial." GEOPHYSICS 86, no. 3 (March 19, 2021): W21—W30. http://dx.doi.org/10.1190/geo2020-0368.1.

Abstract:
Predictive filtering (PF) in the frequency domain is one of the most widely used denoising algorithms in seismic data processing. PF is based on the assumption of linear or planar events in the time-space domain. In traditional PF methods, a predictive filter is fixed across the spatial dimension, which cannot deal with spatial variations in seismic data well. To handle the curved events, the predictive filter is either applied in local windows or extended into a nonstationary version. The regularized nonstationary autoregression (RNAR) method can be treated as a nonstationary extension of traditional PF, in which the predictive filter coefficients are variable in different spatial locations. This highly underdetermined inverse problem is solved by shaping regularization with a smoothness constraint in space. We further extend the RNAR method to the more general case, in which we can apply more constraints to the filter coefficients according to the features of seismic data. First, apart from the smoothness in space, we also apply a smoothing constraint in frequency, considering the coherency of the coefficients in the frequency dimension. Second, we apply a frequency-dependent smoothing radius in the spatial dimension to better take advantage of the nonstationarity of seismic data in the frequency axis and to better deal with noise. The effectiveness of our method is validated using several synthetic and field data examples.
17

Pujol, Jose. "The solution of nonlinear inverse problems and the Levenberg-Marquardt method." GEOPHYSICS 72, no. 4 (July 2007): W1—W16. http://dx.doi.org/10.1190/1.2732552.

Abstract:
Although the Levenberg-Marquardt damped least-squares method is an extremely powerful tool for the iterative solution of nonlinear problems, its theoretical basis has not been described adequately in the literature. This is unfortunate, because Levenberg and Marquardt approached the solution of nonlinear problems in different ways and presented results that go far beyond the simple equation that characterizes the method. The idea of damping the solution was introduced by Levenberg, who also showed that it is possible to do that while at the same time reducing the value of a function that must be minimized iteratively. This result is not obvious, although it is taken for granted. Moreover, Levenberg derived a solution more general than the one currently used. Marquardt started with the current equation and showed that it interpolates between the ordinary least-squares-method and the steepest-descent method. In this tutorial, the two papers are combined into a unified presentation, which will help the reader gain a better understanding of what happens when solving nonlinear problems. Because the damped least-squares and steepest-descent methods are intimately related, the latter is also discussed, in particular in its relation to the gradient. When the inversion parameters have the same dimensions (and units), the direction of steepest descent is equal to the direction of minus the gradient. In other cases, it is necessary to introduce a metric (i.e., a definition of distance) in the parameter space to establish a relation between the two directions. Although neither Levenberg nor Marquardt discussed these matters, their results imply the introduction of a metric. Some of the concepts presented here are illustrated with the inversion of synthetic gravity data corresponding to a buried sphere of unknown radius and depth. Finally, the work done by early researchers that rediscovered the damped least-squares method is put into a historical context.
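The "simple equation that characterizes the method" is the damped normal-equations step (J^T J + λI) dx = -J^T r. A minimal Python sketch of one common way to implement it follows; the damping-update schedule and the exponential-fit example are invented for illustration, not taken from the paper.

```python
import numpy as np

def levenberg_marquardt(f, jac, x0, lam=1e-2, tol=1e-10, max_iter=200):
    # Damped least squares: solve (J^T J + lam*I) dx = -J^T r at each step.
    # Large lam gives short steps along minus the gradient (steepest descent);
    # lam -> 0 recovers the ordinary least-squares (Gauss-Newton) step.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = f(x)                                  # residual vector
        J = jac(x)                                # Jacobian of the residuals
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(f(x + dx) ** 2) < np.sum(r ** 2):
            x, lam = x + dx, lam * 0.5            # accept step, relax damping
        else:
            lam *= 10.0                           # reject step, damp harder
        if np.linalg.norm(dx) < tol:
            break
    return x

# Invented example: recover a=2, b=-1 in y = a*exp(b*t) from noiseless data.
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                      p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(residual, jacobian, x0=[1.0, 0.0])
```

The accept/reject update of `lam` is what makes the iteration interpolate between the Gauss-Newton and steepest-descent behaviors the abstract contrasts.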
18

Valmorbida, Janice, Anderson Fernando Wamser, Bruna Luisa Santin, and Marcos Ender. "Métodos de manejo e plantas de cobertura do solo para o cultivo do tomateiro tutorado." Agropecuária Catarinense 33, no. 2 (September 1, 2020): 76–81. http://dx.doi.org/10.52945/rac.v33i2.753.

Abstract:
The objective of this work was to evaluate the influence of management methods and winter soil cover crops on the yield of staked tomato. The treatments consisted of combinations of two tomato planting methods (conventional and no-till) and four winter soil covers (fallow, oat, forage radish, and oat + radish). Nitrogen (N) content, C/N ratio, dry mass, and N accumulation in the shoots of the cover crops were evaluated, as well as tomato yield, in the 2009/10 and 2011/12 seasons. At the end of each season, the soil's mechanical resistance to penetration was measured with an impact penetrometer. Neither the cover-crop species nor the seedbed preparation methods affected the dry-mass production or total N of the cover crops; only in the 2011/12 season did the oat + radish intercrop show a higher N content than oat alone, with a lower C/N ratio. A higher marketable tomato yield was observed under conventional planting (79.2 vs. 65.3 t ha-1) in the 2009/10 season, in which the no-till system showed greater soil resistance in the penetrometer test. In the 2011/12 season, marketable yield was higher under no-till (102.8 vs. 97.3 t ha-1). Soil covers of oat and radish, intercropped or sole, as well as winter fallow, did not affect marketable tomato yield in either season studied.
19

Ignatius Xavier, Agnes Pristy, Francis Gracy Arockiaraj, Shivasubramanian Gopinath, Aravind Simon John Francis Rajeswary, Andra Naresh Kumar Reddy, Rashid A. Ganeev, M. Scott Arockia Singh, S. D. Milling Tania, and Vijayakumar Anand. "Single-Shot 3D Incoherent Imaging Using Deterministic and Random Optical Fields with Lucy–Richardson–Rosen Algorithm." Photonics 10, no. 9 (August 30, 2023): 987. http://dx.doi.org/10.3390/photonics10090987.

Abstract:
Coded aperture 3D imaging techniques have been rapidly evolving in recent years. The two main directions of evolution are in aperture engineering to generate the optimal optical field and in the development of a computational reconstruction method to reconstruct the object’s image from the intensity distribution with minimal noise. The goal is to find the ideal aperture–reconstruction method pair, and if not that, to optimize one to match the other for designing an imaging system with the required 3D imaging characteristics. The Lucy–Richardson–Rosen algorithm (LR2A), a recently developed computational reconstruction method, was found to perform better than its predecessors, such as matched filter, inverse filter, phase-only filter, Lucy–Richardson algorithm, and non-linear reconstruction (NLR), for certain apertures when the point spread function (PSF) is a real and symmetric function. For other cases of PSF, NLR performed better than the rest of the methods. In this tutorial, LR2A has been presented as a generalized approach for any optical field when the PSF is known along with MATLAB codes for reconstruction. The common problems and pitfalls in using LR2A have been discussed. Simulation and experimental studies for common optical fields such as spherical, Bessel, vortex beams, and exotic optical fields such as Airy, scattered, and self-rotating beams have been presented. From this study, it can be seen that it is possible to transfer the 3D imaging characteristics from non-imaging-type exotic fields to indirect imaging systems faithfully using LR2A. The application of LR2A to medical images such as colonoscopy images and cone beam computed tomography images with synthetic PSF has been demonstrated. We believe that the tutorial will provide a deeper understanding of computational reconstruction using LR2A.
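LR2A itself is not reproduced here, but its predecessor named in this abstract, the classical Lucy-Richardson algorithm, can be sketched in a few lines of Python for 1D circular deconvolution. The PSF, the test scene, and the iteration count are invented for illustration.

```python
import numpy as np

def lucy_richardson(blurred, psf, iters=50):
    # Classical Lucy-Richardson deconvolution for 1D signals, with circular
    # convolution done via the FFT. Multiplicative updates keep the estimate
    # nonnegative; correlation with the PSF is a conjugate spectrum product.
    otf = np.fft.fft(psf / psf.sum())
    est = np.full_like(blurred, max(blurred.mean(), 1e-12))
    for _ in range(iters):
        conv = np.real(np.fft.ifft(np.fft.fft(est) * otf))
        ratio = blurred / np.maximum(conv, 1e-12)
        est = est * np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(otf)))
    return est

# Invented test scene: two point sources blurred by a wrapped Gaussian PSF.
n = 64
x_true = np.zeros(n)
x_true[20], x_true[40] = 1.0, 0.5
d = np.minimum(np.arange(n), n - np.arange(n))     # circular distance to 0
psf = np.exp(-d ** 2 / (2.0 * 2.0 ** 2))
blurred = np.real(np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(psf / psf.sum())))
restored = lucy_richardson(blurred, psf)
```

This is the baseline against which methods such as NLR and LR2A are compared in the paper; it already illustrates the "known PSF + iterative reconstruction" structure of indirect imaging.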
20

Páez-Rueda, Carlos-Iván, Arturo Fajardo, Manuel Pérez, German Yamhure, and Gabriel Perilla. "Exploring the Potential of Mixed Fourier Series in Signal Processing Applications Using One-Dimensional Smooth Closed-Form Functions with Compact Support: A Comprehensive Tutorial." Mathematical and Computational Applications 28, no. 5 (September 1, 2023): 93. http://dx.doi.org/10.3390/mca28050093.

Abstract:
This paper studies and analyzes the approximation of one-dimensional smooth closed-form functions with compact support using a mixed Fourier series (i.e., a combination of partial Fourier series and other forms of partial series). To explore the potential of this approach, we discuss and revise its application in signal processing, especially because it allows us to control the decreasing rate of Fourier coefficients and avoids the Gibbs phenomenon. Therefore, this method improves the signal processing performance in a wide range of scenarios, such as function approximation, interpolation, increased convergence with quasi-spectral accuracy using the time domain or the frequency domain, numerical integration, and solutions of inverse problems such as ordinary differential equations. Moreover, the paper provides comprehensive examples of one-dimensional problems to showcase the advantages of this approach.
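A minimal instance of the "mixed series" idea, assuming the simplest possible split of a function into a linear part plus a periodic remainder: subtracting the linear part removes the boundary jump, so the remainder's Fourier coefficients decay faster and the Gibbs oscillations shrink. The function, truncation order, and error measure below are invented for illustration and are far simpler than the paper's construction.

```python
import numpy as np

def fourier_approx(f_vals, x, K):
    # Order-K truncated Fourier series on [0, 1), with coefficients
    # estimated from equispaced samples.
    approx = np.full_like(x, f_vals.mean())
    for k in range(1, K + 1):
        c = (f_vals * np.exp(-2j * np.pi * k * x)).mean()
        approx = approx + 2.0 * (c * np.exp(2j * np.pi * k * x)).real
    return approx

x = np.linspace(0.0, 1.0, 2000, endpoint=False)
f = x ** 2               # periodic extension jumps at the boundary -> Gibbs
K = 20

plain = fourier_approx(f, x, K)

# Mixed series: subtract a linear ramp so the remainder extends continuously,
# expand only the remainder in Fourier modes, then add the ramp back.
ramp = (f[-1] - f[0]) * x
mixed = fourier_approx(f - ramp, x, K) + ramp

interior = slice(100, -100)                  # stay away from the jump itself
err_plain = float(np.max(np.abs(plain - f)[interior]))
err_mixed = float(np.max(np.abs(mixed - f)[interior]))
```

Because the remainder is continuous, its coefficients decay like 1/k^2 instead of 1/k, which is exactly the "control of the decreasing rate of Fourier coefficients" the abstract refers to.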
21

Salustiano, Maria Eloísa, Francisco Xavier Ribeiro do Vale, Laércio Zambolim, and Paulo César Rezende Fontes. "O manejo da pinta-preta do tomateiro em épocas de temperaturas baixas." Summa Phytopathologica 32, no. 4 (September 2006): 353–59. http://dx.doi.org/10.1590/s0100-54052006000400006.

Abstract:
The performance of the tomato cultivar Santa Clara and the hybrid Débora Plus with respect to the development of early blight (Alternaria solani) in summer and autumn plantings under two training systems was assessed in two experiments conducted in the experimental area of the Universidade Federal de Viçosa. Low temperatures, scarce rainfall, and/or short leaf-wetness periods resulted in low incidence of early blight, and consequently the traditional and vertically staked training systems did not influence disease severity in either trial. The cultivar Santa Clara showed higher values of the area under the disease progress curve (AUDPC) than the hybrid Débora Plus. Application of chlorothalonil at the appearance of the first symptoms, combined with the climatic factors, delayed disease development by 30 days in the summer-autumn and autumn-winter seasons, with reductions in early blight severity of 22% and 18% in AUDPC values in the two trials, respectively.
22

Chen, Guang, Zhiqiang Shen, Akshay Iyer, Umar Farooq Ghumman, Shan Tang, Jinbo Bi, Wei Chen, and Ying Li. "Machine-Learning-Assisted De Novo Design of Organic Molecules and Polymers: Opportunities and Challenges." Polymers 12, no. 1 (January 8, 2020): 163. http://dx.doi.org/10.3390/polym12010163.

Abstract:
Organic molecules and polymers have a broad range of applications in biomedical, chemical, and materials science fields. Traditional design approaches for organic molecules and polymers are mainly experimentally driven, guided by experience, intuition, and conceptual insights. Though they have been successfully applied to discover many important materials, these methods are facing significant challenges due to the tremendous demand for new materials and the vast design space of organic molecules and polymers. Accelerated and inverse materials design is an ideal solution to these challenges. With advancements in high-throughput computation, artificial intelligence (especially machine learning, ML), and the growth of materials databases, ML-assisted materials design is emerging as a promising tool to foster breakthroughs in many areas of materials science and engineering. To date, using ML-assisted approaches, the quantitative structure-property/activity relation for material property prediction can be established more accurately and efficiently. In addition, materials design can be revolutionized and accelerated much faster than ever through ML-enabled molecular generation and inverse molecular design. In this perspective, we review the recent progress in ML-guided design of organic molecules and polymers, highlight several successful examples, and examine future opportunities in biomedical, chemical, and materials science fields. We further discuss the relevant challenges to solve in order to fully realize the potential of ML-assisted materials design for organic molecules and polymers. In particular, this study summarizes publicly available materials databases, feature representations for organic molecules, open-source tools for feature generation, methods for molecular generation, and ML models for prediction of material properties, which serve as a tutorial for researchers who have little prior experience with ML and want to apply it to various applications.
Last but not least, it draws insights into the current limitations of ML-guided design of organic molecules and polymers. We anticipate that ML-assisted materials design for organic molecules and polymers will be the driving force in the near future, to meet the tremendous demand of new materials with tailored properties in different fields.
23

Kieling, André dos Santos, Jucinei José Comin, Jamil Abdalla Fayad, Marcos Alberto Lana, and Paulo Emílio Lovato. "Plantas de cobertura de inverno em sistema de plantio direto de hortaliças sem herbicidas: efeitos sobre plantas espontâneas e na produção de tomate." Ciência Rural 39, no. 7 (October 2009): 2207–9. http://dx.doi.org/10.1590/s0103-84782009000700040.

Abstract:
The objective of this work was to eliminate herbicide use in tomato crops under a no-till (NT) system. To identify the best combination of winter cover crops for controlling weeds and for tomato production, a field experiment was conducted at the Experimental Station of the Empresa de Pesquisa Agropecuária e Extensão Rural (EPAGRI) in Ituporanga, Santa Catarina (SC). Black oat (Avena strigosa Schreb), hairy vetch (Vicia villosa Roth), and forage radish (Raphanus sativus L.) were tested as sole and intercropped cover crops. The tomato, variety Márcia-EPAGRI, was staked and grown under fertigation. The best treatments for cover-crop dry-matter (DM) production were oat + vetch and sole oat, followed by vetch + radish, oat + radish, and oat + vetch + radish. Among the five best DM results, only oat was not an intercropped treatment. In weed control, the treatments oat + radish, oat + vetch, and oat + vetch + radish stood out, followed by oat. There were no statistical differences among treatments in total tomato yield or in marketable production.
24

Stanica, Iulia-Cristina, Florica Moldoveanu, Giovanni-Paul Portelli, Maria-Iuliana Dascalu, Alin Moldoveanu, and Mariana Georgiana Ristea. "Flexible Virtual Reality System for Neurorehabilitation and Quality of Life Improvement." Sensors 20, no. 21 (October 23, 2020): 6045. http://dx.doi.org/10.3390/s20216045.

Abstract:
As life expectancy is mostly increasing, the incidence of many neurological disorders is also constantly growing. For improving the physical functions affected by a neurological disorder, rehabilitation procedures are mandatory, and they must be performed regularly. Unfortunately, neurorehabilitation procedures have disadvantages in terms of costs, accessibility and a lack of therapists. This paper presents Immersive Neurorehabilitation Exercises Using Virtual Reality (INREX-VR), our innovative immersive neurorehabilitation system using virtual reality. The system is based on a thorough research methodology and is able to capture real-time user movements and evaluate joint mobility for both upper and lower limbs, record training sessions and save electromyography data. The use of the first-person perspective increases immersion, and the joint range of motion is calculated with the help of both the HTC Vive system and inverse kinematics principles applied on skeleton rigs. Tutorial exercises are demonstrated by a virtual therapist, as they were recorded with real-life physicians, and sessions can be monitored and configured through tele-medicine. Complex movements are practiced in gamified settings, encouraging self-improvement and competition. Finally, we proposed a training plan and preliminary tests which show promising results in terms of accuracy and user feedback. As future developments, we plan to improve the system’s accuracy and investigate a wireless alternative based on neural networks.
25

Beaudoin, N. "Tutorial/Article didactique: A high-accuracy mathematical and numerical method for Fourier transform, integral, derivatives, and polynomial splines of any order." Canadian Journal of Physics 76, no. 9 (September 1, 1998): 659–77. http://dx.doi.org/10.1139/p98-046.

Full text
Abstract:
From a few simple and relatively well-known mathematical tools, an easily understandable, though powerful, method has been devised that gives many useful results about numerical functions. With mere Taylor expansions, Dirac delta functions and the Fourier transform with its discrete counterpart, the DFT, we can obtain, from a digitized function, its integral between any limits, its Fourier transform without band limitations and its derivatives of any order. The same method intrinsically produces polynomial splines of any order and automatically generates the best possible end conditions. For a given digitized function, procedures to determine the optimum parameters of the method are presented. The way the method is structured makes it easy to estimate fairly accurately the error for any result obtained. Tests conducted on nontrivial numerical functions show that relative as well as absolute errors can be much smaller than 10^-100, and there is no indication that even better results could not be obtained. The method works with real or complex functions as well; hence, it can be used for inverse Fourier transforms too. Implementing the method is an easy task, particularly if one uses symbolic mathematical software to establish the formulas. Once formulas are worked out, they can be efficiently implemented in a fast compiled program. The method is relatively fast; comparisons between computation time for the fast Fourier transform and the Fourier transform computed at different orders are presented. Accuracy increases exponentially while computation time increases quadratically with the order. So, as long as one can afford it, the trade-off is beneficial. As an example, for the fifth order, computation time is only ten times greater than that of the FFT while accuracy is 10^8 times better. Comparisons with other methods are presented. PACS Nos.: 02.00 and 02.60
APA, Harvard, Vancouver, ISO, and other styles
26

Geraci, Marco, and Alessio Farcomeni. "Mid-quantile regression for discrete responses." Statistical Methods in Medical Research 31, no. 5 (February 28, 2022): 821–38. http://dx.doi.org/10.1177/09622802211060525.

Full text
Abstract:
We develop quantile regression methods for discrete responses by extending Parzen’s definition of marginal mid-quantiles. As opposed to existing approaches, which are based on either jittering or latent constructs, we use interpolation and define the conditional mid-quantile function as the inverse of the conditional mid-distribution function. We propose a two-step estimator whereby, in the first step, conditional mid-probabilities are obtained nonparametrically and, in the second step, regression coefficients are estimated by solving an implicit equation. When constraining the quantile index to a data-driven admissible range, the second-step estimating equation has a least-squares type, closed-form solution. The proposed estimator is shown to be strongly consistent and asymptotically normal. A simulation study shows that our estimator performs satisfactorily and has an advantage over a competing alternative based on jittering. Our methods can be applied to a large variety of discrete responses, including binary, ordinal, and count variables. We show an application using data on prescription drugs in the United States and discuss two key findings. First, our analysis suggests a possible differential medical treatment that worsens the gender inequality among the most fragile segment of the population. Second, obesity is a strong driver of the number of prescription drugs and is stronger for more frequent medications users. The proposed methods are implemented in the R package Qtools. Supplemental materials for this article, including a brief R tutorial, are available as an online supplement.
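The building block behind the conditional estimator above is Parzen's marginal mid-quantile: the interpolated inverse of the mid-distribution function F_mid(x) = P(X ≤ x) − ½P(X = x). The sketch below implements only this marginal version (the paper's contribution is the conditional, regression form, which this toy omits):

```python
import numpy as np

def mid_quantile(sample, p):
    """Marginal mid-quantile at probability level p, computed by
    linearly interpolating the inverse of Parzen's mid-distribution
    F_mid(x) = P(X <= x) - 0.5 * P(X = x)."""
    x = np.asarray(sample)
    vals, counts = np.unique(x, return_counts=True)
    probs = counts / counts.sum()
    f_mid = np.cumsum(probs) - 0.5 * probs
    # Interpolate the inverse of the (monotone) mid-distribution.
    return float(np.interp(p, f_mid, vals))

counts_data = [0, 0, 1, 1, 2, 3]        # a small discrete sample
median = mid_quantile(counts_data, 0.5)  # mid-median of the sample
```

Because F_mid jumps by half a point mass at each support point, the interpolated inverse varies smoothly across quantile levels even for heavily discrete data, which is what makes the two-step regression estimator tractable.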
APA, Harvard, Vancouver, ISO, and other styles
27

Chudowolska-Kiełkowska, Magdalena, and Łukasz A. Małek. "A nurse-led intervention to promote physical activity in sedentary older adults with cardiovascular risk factors: a randomized clinical trial (STEP-IT-UP study)." European Journal of Cardiovascular Nursing 19, no. 7 (May 3, 2020): 638–45. http://dx.doi.org/10.1177/1474515120920450.

Full text
Abstract:
Background: Regular physical activity should constitute the essence of treatment in patients with cardiovascular risk factors. We sought to determine the benefits of a nurse-led intervention to promote physical activity in sedentary older adults in a primary care setting. Methods: A group of 199 sedentary older adults (mean age 62.7±6.9 years, 34.2% male) with at least one more cardiovascular risk factor were randomized 1:1 to receive a nurse-led tutorial on lifestyle modification, including a pedometer hand-out – with a daily goal of at least 7000 steps – and supporting phone calls (study group), or without a goal or calls (control group). Body weight (BW), resting heart rate, systolic and diastolic blood pressure (SBP/DBP), total cholesterol (TC) and glucose were assessed at baseline and after 3 months. Results: Subjects in the study group (n = 86) achieved a higher daily step count than the control group (n = 78): 10,648±3098 vs. 3589±2000, p < 0.0001. The study group presented an improvement in all analysed parameters but glucose, including BW (−2.5±1.9 kg), SBP and DBP (−7.9±7.6 mmHg and −6.2±6.5 mmHg) and TC (−14.7±30.4 mg%), all p < 0.0001. In the control group, all parameters increased or remained unchanged. An inverse correlation between the daily step count and the delta of the analysed parameters (r = −0.26 to −0.72, p < 0.001) was found. Conclusion: Nurse-led intervention with a pedometer, goal setting and supporting phone calls is an effective way to promote physical activity in sedentary older adults and leads to improvement of cardiovascular risk factors within 3 months.
APA, Harvard, Vancouver, ISO, and other styles
28

Fernández-Giusti, Alicia Jesús, Isabel Amemiya-Hoshi, Zully Luz Acosta-Evangelista, Hilda Solis-Acosta, Enma Cambillo-Moyano, María Gutarra-Vela, and Beatriz Guillermo-Sánchez. "Proteína C reactiva y su relación con la adiposidad abdominal y otros factores de riesgo cardiovascular en escolares." ACTA MEDICA PERUANA 32, no. 4 (February 26, 2016): 229. http://dx.doi.org/10.35663/amp.2015.324.6.

Full text
Abstract:
Introduction: In adults, C-reactive protein is a marker of cardiovascular risk that is associated with traditional metabolic risk factors and predicts cardiovascular events. Objective: To determine the relationship between levels of C-reactive protein measured with ultrasensitive techniques (hs-CRP) and abdominal adiposity and other traditional cardiovascular risk factors in schoolchildren. Materials and Methods: Analytical, correlational, cross-sectional study. The work was carried out with schoolchildren from the first to the sixth grade of primary education at the Héroes del Pacífico private school, in the district of San Juan de Miraflores, Lima, in 2012. Children whose parents or guardians gave authorization were included. Anthropometric measurements were taken: weight, height, body mass index (BMI) and waist circumference (WC). Results: 100 schoolchildren were studied: 46 girls and 54 boys, with a mean age of 8.78 ± 1.76 years; 74% had normal weight, 24% obesity and 2% overweight. Mean hs-CRP was 1.47 mg/L. In both sexes, C-reactive protein correlated directly and significantly with BMI (p < 0.01) and WC (p < 0.05). In girls, a significant inverse association was found between hs-CRP and HDL cholesterol (p < 0.05). In boys, C-reactive protein did not correlate significantly with total cholesterol or LDL cholesterol. Conclusions: The best predictor of elevated hs-CRP concentrations was body mass index. In children, hs-CRP is directly and significantly associated with the degree of adiposity, especially body mass index, but not with traditional cardiovascular risk factors.
APA, Harvard, Vancouver, ISO, and other styles
29

Gonzalez Moreno, Jesús, Desirée Castellano Olivera, Nieves López-Brea Serrat, and María Cantero García. "Relación entre inteligencia y funciones ejecutivas en niños de siete años." Revista iberoamericana de psicología 15, no. 3 (May 2, 2023): 73–82. http://dx.doi.org/10.33881/2027-1786.rip.15307.

Full text
Abstract:
Studies on the relationship between intelligence and executive functions are contradictory: some deny that any relationship exists, while others find statistically significant correlations for at least some components. In the present study, 76 seven-year-old pupils attending primary school in the province of Málaga (Spain) participated. Data were collected with the Kaufman Brief Intelligence Test (K-BIT) to assess intelligence and the Behavior Rating Inventory of Executive Function-2 (BRIEF-2) to measure the different components of executive control. These instruments were completed by each pupil's family and tutor. Regarding the results, no relationships between the constructs were found when the informants were family members. However, when the information came from the tutors, inverse relationships were found between IQ and deficits in executive functions (self-monitoring, flexibility, emotional control, initiative, working memory and planning). The difference in the observed results may be due to subjective perceptions of parents and teachers during behavioral observation and/or to the diversity of behaviors children display depending on the setting. The results lead to the conclusion that further research on the topic is needed, as it would strengthen the theoretical foundations and provide resources for clinical and educational practice.
APA, Harvard, Vancouver, ISO, and other styles
30

Deng, Yang, Simiao Ren, Jordan Malof, and Willie J. Padilla. "Deep Inverse Photonic Design: A Tutorial." Photonics and Nanostructures - Fundamentals and Applications, September 2022, 101070. http://dx.doi.org/10.1016/j.photonics.2022.101070.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Deng, Yang, Simiao Ren, Jordan Malof, and Willie Padilla. "Deep Inverse Photonic Design: A Tutorial." SSRN Electronic Journal, 2022. http://dx.doi.org/10.2139/ssrn.4204784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Waqar, Faaiq G., Swati Patel, and Cory M. Simon. "A tutorial on the Bayesian statistical approach to inverse problems." APL Machine Learning 1, no. 4 (November 6, 2023). http://dx.doi.org/10.1063/5.0154773.

Full text
Abstract:
Inverse problems are ubiquitous in science and engineering. Two categories of inverse problems concerning a physical system are (1) estimate parameters in a model of the system from observed input–output pairs and (2) given a model of the system, reconstruct the input to it that caused some observed output. Applied inverse problems are challenging because a solution may (i) not exist, (ii) not be unique, or (iii) be sensitive to measurement noise contaminating the data. Bayesian statistical inversion (BSI) is an approach to tackle ill-posed and/or ill-conditioned inverse problems. Advantageously, BSI provides a “solution” that (i) quantifies uncertainty by assigning a probability to each possible value of the unknown parameter/input and (ii) incorporates prior information and beliefs about the parameter/input. Herein, we provide a tutorial of BSI for inverse problems by way of illustrative examples dealing with heat transfer from ambient air to a cold lime fruit. First, we use BSI to infer a parameter in a dynamic model of the lime temperature from measurements of the lime temperature over time. Second, we use BSI to reconstruct the initial condition of the lime from a measurement of its temperature later in time. We demonstrate the incorporation of prior information, visualize the posterior distributions of the parameter/initial condition, and show posterior samples of lime temperature trajectories from the model. Our Tutorial aims to reach a wide range of scientists and engineers.
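The first problem the abstract describes (inferring a model parameter from observed temperatures) can be sketched with a grid-based posterior. The cooling model, noise level, and all numbers below are illustrative stand-ins, not the authors' lime data:

```python
import numpy as np

# Bayesian statistical inversion (BSI) sketch: infer the rate parameter
# lam in an exponential relaxation model from noisy temperature data.
T_air, T0, sigma = 20.0, 5.0, 0.2           # ambient temp, initial temp, noise sd
t_obs = np.array([0.0, 10.0, 20.0, 40.0])   # measurement times (minutes)

def model(lam, t):
    # Newton's law of cooling/warming: relaxation towards T_air.
    return T_air + (T0 - T_air) * np.exp(-lam * t)

rng = np.random.default_rng(0)
true_lam = 0.05
y = model(true_lam, t_obs) + rng.normal(0.0, sigma, t_obs.size)

# Posterior on a grid: flat prior on [1e-4, 0.2] times Gaussian likelihood.
grid = np.linspace(1e-4, 0.2, 2000)
log_like = np.array([-0.5 * np.sum((y - model(l, t_obs)) ** 2) / sigma ** 2
                     for l in grid])
post = np.exp(log_like - log_like.max())    # unnormalised posterior
dx = grid[1] - grid[0]
post /= post.sum() * dx                     # normalise to a density
lam_mean = float((grid * post).sum() * dx)  # posterior mean of lam
```

The posterior density here quantifies uncertainty in the parameter exactly as the tutorial advocates; swapping the grid for an MCMC sampler scales the same idea to higher dimensions.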
APA, Harvard, Vancouver, ISO, and other styles
33

Spencer, Richard G., and Chuan Bi. "A Tutorial Introduction to Inverse Problems in Magnetic Resonance." NMR in Biomedicine 33, no. 12 (August 16, 2020). http://dx.doi.org/10.1002/nbm.4315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Christiansen, Rasmus, and Ole Sigmund. "A Tutorial for Inverse Design in Photonics by Topology Optimization." Journal of the Optical Society of America B, December 2, 2020. http://dx.doi.org/10.1364/josab.406048.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Christiansen, Rasmus Ellebæk. "Inverse design of optical mode converters by topology optimization: tutorial." Journal of Optics, June 6, 2023. http://dx.doi.org/10.1088/2040-8986/acdbdd.

Full text
Abstract:
This tutorial details the use of topology optimization (TopOpt) for the inverse design of electromagnetic mode converters. First, the design problem under consideration is stated. Second, suitable models for the geometry and physics are formulated, and third, the TopOpt method is outlined. Then follow three increasingly advanced design examples. In the first, the mode converter is allowed to consist of a non-physically-realizable material distribution, leading to a design exhibiting near-perfect conversion from the input mode i to the output mode o in terms of power conversion (P_{o,B}/P_{i,A} > 0.99), providing a performance benchmark. Then follow two examples demonstrating the imposition of relevant restrictions on the design, first ensuring a physically realizable device blueprint, and second introducing feature-size control and ensuring device connectivity. These examples demonstrate how TopOpt can be used to design device blueprints that require only a minimum of post-processing prior to fabrication and incur only a minor reduction of performance compared to the completely unconstrained design. A software tool is provided for reproducing the first design example. This tool may be extended to implement the other design examples in the paper, to explore other device configurations or, given sufficient computational resources, to design 3D devices.
APA, Harvard, Vancouver, ISO, and other styles
36

Yime, Eugenio, Roque Jacinto Saltarén, and Javier Agustin Roldán Mckinley. "Análisis dinámico inverso de robots paralelos: Un tutorial con álgebra de Lie." Revista Iberoamericana de Automática e Informática industrial, May 12, 2023. http://dx.doi.org/10.4995/riai.2023.18356.

Full text
Abstract:
The kinematics and dynamics of parallel mechanisms is a research field where mechanisms are traditionally analysed using screw theory. This article presents an alternative approach, based on group theory and Lie algebra, a method that has been used successfully in the analysis of open kinematic chains. The article begins with a brief introduction to open kinematic chains and their Lie algebra, and then applies these concepts to parallel mechanisms. The article is written using only vector and matrix algebra, with the aim of reaching as many researchers in robotics as possible. In this spirit, typical examples of parallel robots are analysed in tutorial form, including the five-bar mechanism, the spatial four-bar mechanism and the planar 3-RRR robot. We hope that the practical focus of this article will help promote the use of Lie algebra for the kinematic and dynamic analysis of parallel mechanisms.
APA, Harvard, Vancouver, ISO, and other styles
37

Park, Frank C., Beobkyoon Kim, Cheongjae Jang, and Jisoo Hong. "Geometric Algorithms for Robot Dynamics: A Tutorial Review." Applied Mechanics Reviews 70, no. 1 (January 1, 2018). http://dx.doi.org/10.1115/1.4039078.

Full text
Abstract:
We provide a tutorial and review of the state-of-the-art in robot dynamics algorithms that rely on methods from differential geometry, particularly the theory of Lie groups. After reviewing the underlying Lie group structure of the rigid-body motions and the geometric formulation of the equations of motion for a single rigid body, we show how classical screw-theoretic concepts can be expressed in a reference frame-invariant way using Lie-theoretic concepts and derive recursive algorithms for the forward and inverse dynamics and their differentiation. These algorithms are extended to robots subject to closed-loop and other constraints, joints driven by variable stiffness actuators, and also to the modeling of contact between rigid bodies. We conclude with a demonstration of how the geometric formulations and algorithms can be effectively used for robot motion optimization.
APA, Harvard, Vancouver, ISO, and other styles
38

Khaireh-Walieh, Abdourahman, Denis Langevin, Pauline Bennet, Olivier Teytaud, Antoine Moreau, and Peter R. Wiecha. "A newcomer’s guide to deep learning for inverse design in nano-photonics." Nanophotonics, November 29, 2023. http://dx.doi.org/10.1515/nanoph-2023-0527.

Full text
Abstract:
Nanophotonic devices manipulate light at sub-wavelength scales, enabling tasks such as light concentration, routing, and filtering. Designing these devices to achieve precise light–matter interactions using structural parameters and materials is a challenging task. Traditionally, solving this problem has relied on computationally expensive, iterative methods. In recent years, deep learning techniques have emerged as promising tools for tackling the inverse design of nanophotonic devices. While several review articles have provided an overview of the progress in this rapidly evolving field, there is a need for a comprehensive tutorial that specifically targets newcomers without prior experience in deep learning. Our goal is to address this gap and provide practical guidance for applying deep learning to individual scientific problems. We introduce the fundamental concepts of deep learning and critically discuss the potential benefits it offers for various inverse design problems in nanophotonics. We present a suggested workflow and detailed, practical design guidelines to help newcomers navigate the challenges they may encounter. By following our guide, newcomers can avoid frustrating roadblocks commonly experienced when venturing into deep learning for the first time. In a second part, we explore different iterative and direct deep learning-based techniques for inverse design, and evaluate their respective advantages and limitations. To enhance understanding and facilitate implementation, we supplement the manuscript with detailed Python notebook examples, illustrating each step of the discussed processes. While our tutorial primarily focuses on researchers in (nano-)photonics, it is also relevant for those working with deep learning in other research domains. We aim at providing a solid starting point to empower researchers to leverage the potential of deep learning in their scientific pursuits.
APA, Harvard, Vancouver, ISO, and other styles
39

Su, Li, Shaun R. Seaman, and Sean Yiu. "Sensitivity analysis for calibrated inverse probability-of-censoring weighted estimators under non-ignorable dropout." Statistical Methods in Medical Research, April 12, 2022, 096228022210907. http://dx.doi.org/10.1177/09622802221090763.

Full text
Abstract:
Inverse probability of censoring weighting is a popular approach to handling dropout in longitudinal studies. However, inverse probability-of-censoring weighted estimators (IPCWEs) can be inefficient and unstable if the weights are estimated by maximum likelihood. To alleviate these problems, calibrated IPCWEs have been proposed, which use calibrated weights that directly optimize covariate balance in finite samples rather than the weights from maximum likelihood. However, the existing calibrated IPCWEs are all based on the unverifiable assumption of sequential ignorability and sensitivity analysis strategies under non-ignorable dropout are lacking. In this paper, we fill this gap by developing an approach to sensitivity analysis for calibrated IPCWEs under non-ignorable dropout. A simple technique is proposed to speed up the computation of bootstrap and jackknife confidence intervals and thus facilitate sensitivity analyses. We evaluate the finite-sample performance of the proposed methods using simulations and apply our methods to data from an international inception cohort study of systemic lupus erythematosus. An R Markdown tutorial to demonstrate the implementation of the proposed methods is provided.
APA, Harvard, Vancouver, ISO, and other styles
40

Greenstein, Brianna L., Danielle C. Elsey, and Geoffrey R. Hutchison. "Determining best practices for using genetic algorithms in molecular discovery." Journal of Chemical Physics 159, no. 9 (September 1, 2023). http://dx.doi.org/10.1063/5.0158053.

Full text
Abstract:
Genetic algorithms (GAs) are a powerful tool to search large chemical spaces for inverse molecular design. However, GAs have multiple hyperparameters that have not been thoroughly investigated for chemical space searches. In this tutorial, we examine the general effects of a number of hyperparameters, such as population size, elitism rate, selection method, mutation rate, and convergence criteria, on key GA performance metrics. We show that using a self-termination method with a minimum Spearman’s rank correlation coefficient of 0.8 between generations maintained for 50 consecutive generations along with a population size of 32, a 50% elitism rate, three-way tournament selection, and a 40% mutation rate provides the best balance of finding the overall champion, maintaining good coverage of elite targets, and improving relative speedup for general use in molecular design GAs.
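The recommended hyperparameters can be wired into a toy GA to make their roles concrete. The sketch below runs on a OneMax bitstring problem rather than a molecular fitness function, and for brevity replaces the Spearman-correlation self-termination test with a fixed generation budget; it is an illustration, not the authors' code:

```python
import random

# Hyperparameters following the paper's recommendation: population 32,
# 50% elitism, three-way tournament selection, 40% mutation rate.
random.seed(1)
N_BITS, POP_SIZE, ELITE_FRAC, MUT_RATE, TOURN_K = 32, 32, 0.5, 0.40, 3

def fitness(ind):
    return sum(ind)                      # OneMax: maximise the 1-bits

def tournament(pop):                     # three-way tournament selection
    return max(random.sample(pop, TOURN_K), key=fitness)

def crossover(a, b):                     # one-point crossover
    cut = random.randrange(1, N_BITS)
    return a[:cut] + b[cut:]

def mutate(ind):
    if random.random() < MUT_RATE:       # mutate 40% of offspring
        i = random.randrange(N_BITS)
        ind = ind[:i] + [1 - ind[i]] + ind[i + 1:]
    return ind

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:int(ELITE_FRAC * POP_SIZE)]          # keep the top 50%
    children = [mutate(crossover(tournament(pop), tournament(pop)))
                for _ in range(POP_SIZE - len(elite))]
    pop = elite + children
best = max(pop, key=fitness)
```

Because elitism keeps the champions, the best fitness is monotonically non-decreasing, which is also what makes a rank-correlation convergence criterion between consecutive generations meaningful.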
APA, Harvard, Vancouver, ISO, and other styles
41

Kostouraki, Andriana, David Hajage, Bernard Rachet, Elizabeth J. Williamson, Guillaume Chauvet, Aurélien Belot, and Clémence Leyrat. "On variance estimation of the inverse probability‐of‐treatment weighting estimator: A tutorial for different types of propensity score weights." Statistics in Medicine, April 15, 2024. http://dx.doi.org/10.1002/sim.10078.

Full text
Abstract:
Propensity score methods, such as inverse probability‐of‐treatment weighting (IPTW), have been increasingly used for covariate balancing in both observational studies and randomized trials, allowing the control of both systematic and chance imbalances. Approaches using IPTW are based on two steps: (i) estimation of the individual propensity scores (PS), and (ii) estimation of the treatment effect by applying PS weights. Thus, a variance estimator that accounts for both steps is crucial for correct inference. Using a variance estimator which ignores the first step leads to overestimated variance when the estimand is the average treatment effect (ATE), and to under or overestimated estimates when targeting the average treatment effect on the treated (ATT). In this article, we emphasize the importance of using an IPTW variance estimator that correctly considers the uncertainty in PS estimation. We present a comprehensive tutorial to obtain unbiased variance estimates, by proposing and applying a unifying formula for different types of PS weights (ATE, ATT, matching and overlap weights). This can be derived either via the linearization approach or M‐estimation. Extensive R code is provided along with the corresponding large‐sample theory. We perform simulation studies to illustrate the behavior of the estimators under different treatment and outcome prevalences and demonstrate appropriate behavior of the analytical variance estimator. We also use a reproducible analysis of observational lung cancer data as an illustrative example, estimating the effect of receiving a PET‐CT scan on the receipt of surgery.
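The weight families the tutorial unifies are simple functions of the estimated propensity score e = P(Z = 1 | X) and the treatment indicator z. The scores below are made-up numbers for illustration; in practice e comes from a fitted model such as a logistic regression, and, as the paper stresses, the variance estimator must account for that estimation step:

```python
import numpy as np

z = np.array([1, 1, 0, 0])               # treatment indicator
e = np.array([0.8, 0.5, 0.5, 0.2])       # estimated propensity scores

w_ate = z / e + (1 - z) / (1 - e)                             # ATE (IPTW)
w_att = z + (1 - z) * e / (1 - e)                             # ATT
w_match = np.minimum(e, 1 - e) / (z * e + (1 - z) * (1 - e))  # matching
w_overlap = z * (1 - e) + (1 - z) * e                         # overlap
```

Note how each family handles extreme scores differently: ATE weights blow up as e approaches 0 or 1, while matching and overlap weights stay bounded, which is one reason the variance behaviour differs across estimands.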
APA, Harvard, Vancouver, ISO, and other styles
42

Albert, Félicie. "Principles and applications of x-ray light sources driven by laser wakefield acceleration." Physics of Plasmas 30, no. 5 (May 1, 2023). http://dx.doi.org/10.1063/5.0142033.

Full text
Abstract:
One of the most prominent applications of modern particle accelerators is the generation of radiation. In a synchrotron or an x-ray free electron laser (XFEL), high energy electrons oscillating in periodic magnetic structures emit bright x rays. In spite of their scientific appeal that will remain evident for many decades, one limitation of synchrotrons and XFELs is their typical mile-long size and their cost, which often limits access to the broader scientific community. This tutorial reviews the principles and prospects of using plasmas produced by intense lasers as particle accelerators and x-ray light sources, as well as some of the applications they enable. A plasma is an ionized medium that can sustain electrical fields many orders of magnitude higher than that in conventional radio frequency accelerator structures and can be used to accelerate electrons. When short, intense laser pulses are focused into a gas, it produces electron plasma waves in which electrons can be trapped and accelerated to GeV energies. This process, laser-wakefield acceleration (LWFA), is analogous to a surfer being propelled by an ocean wave. Many radiation sources, from THz to gamma-rays, can be produced by these relativistic electrons. This tutorial reviews several LWFA-driven sources in the keV-MeV photon energy range: betatron radiation, inverse Compton scattering, bremsstrahlung radiation, and undulator/XFEL radiation. X rays from laser plasma accelerators have many emerging applications. They can be used in innovative and flexible x-ray imaging and x-ray absorption spectroscopy configurations, for use in biology, industry, and high-energy density science.
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Wei, and Jinghuai Gao. "A tutorial of image-domain least-squares reverse time migration through point spread functions." GEOPHYSICS, May 15, 2023, 1–161. http://dx.doi.org/10.1190/geo2022-0629.1.

Full text
Abstract:
Least-squares reverse time migration (LSRTM) has shown great potential to improve the amplitude fidelity and spatial resolution of the reverse time migration (RTM) image. However, its main disadvantage is that it requires significant computational resources for the iterative solution. To ameliorate this problem, the image-domain least-squares reverse time migration (IDLSRTM) approach through point-spread functions (PSFs) has proven to be a viable alternative technique to deconvolve the standard RTM image. In this paper, we present a tutorial for the numerical implementation of IDLSRTM through PSFs. The Hessian matrix in the standard IDLSRTM approach is estimated through the spatial interpolation of precomputed PSFs on the fly, where the PSFs are computed by one round of forward modeling and migration. However, the resulting IDLSRTM scheme is highly ill-conditioned because of the incomplete acquisition, irregular subsurface illumination, and band-limited data. To stabilize the inversion and improve the inverted image, we suggest that the PSFs and RTM image be deblurred by applying a deblurring filter. The deblurring PSFs can reduce the condition number of the standard Hessian matrix and make the inverse problem more well-conditioned, thus improving imaging quality and accelerating convergence. Furthermore, regularization operators should be imposed to produce a reasonable inverted image and avoid overfitting of the image-matching term. Based on several examples with synthetic and field data, we draw two significant conclusions. The first is that the standard IDLSRTM approach through the conventional PSFs can only recover a reflectivity image similar to that of the deblurring RTM and non-stationary matching filter (NMF) approaches. The second is that the deblurring IDLSRTM approach through the deblurring PSFs can retrieve a reflectivity image with higher resolution and better amplitude fidelity than the standard IDLSRTM approach and the standard RTM approach with the deblurring filter or NMF.
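The deconvolution idea at the heart of this approach can be sketched in one dimension: blur a reflectivity series with a PSF, then invert by damped spectral division. This toy uses a single stationary PSF and a scalar damping term, whereas IDLSRTM interpolates spatially varying PSFs and applies proper regularization operators, so it illustrates the principle only:

```python
import numpy as np

n = 64
true = np.zeros(n)
true[20], true[45] = 1.0, -0.7                 # two reflectivity spikes
x = np.arange(n) - n // 2
psf = np.exp(-(x / 3.0) ** 2)                  # band-limiting blur (toy PSF)

P = np.fft.fft(np.fft.ifftshift(psf))          # zero-phase PSF spectrum
mig = np.real(np.fft.ifft(np.fft.fft(true) * P))   # "RTM image" = blurred model

# Damped spectral division: a stand-in for the regularised inverse of
# the Hessian that the interpolated PSFs approximate.
eps = 1e-3 * np.max(np.abs(P)) ** 2
recovered = np.real(np.fft.ifft(np.fft.fft(mig) * np.conj(P)
                                / (np.abs(P) ** 2 + eps)))
```

The damping eps plays the role of regularization: with eps = 0 the division amplifies the near-null high wavenumbers, which is the 1D analogue of the ill-conditioning the paper addresses by deblurring the PSFs.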
APA, Harvard, Vancouver, ISO, and other styles
44

Operto, Stéphane, Ali Gholami, Hossein Aghamiry, Gaoshan Guo, Stephen Beller, Kamal Aghazade, Frichnel Mamfoumbi, Laure Combe, and Alessandra Ribodetti. "Extending the search space of Full waveform inversion beyond the single-scattering Born approximation: A tutorial review." GEOPHYSICS, July 6, 2023, 1–115. http://dx.doi.org/10.1190/geo2022-0758.1.

Full text
Abstract:
Full Waveform Inversion can be made immune to cycle skipping by matching the recorded data arbitrarily well from inaccurate subsurface models. To achieve this goal, the simulated wavefields can be computed in an extended search space as the solution of an overdetermined problem aiming at jointly satisfying the wave equation and fitting the data in a least-squares sense. This leads to data-assimilated wavefields that are computed by solving the wave equation in the inaccurate background model with a feedback term to the data added to the source term. Then, the subsurface parameters are updated by canceling out these additional source terms, sometimes unwisely called wave-equation errors, to push the background model towards the true model in the left-hand side wave-equation operator. Although many studies have been devoted to these approaches, with promising numerical results, their governing physical principles and their relationships with classical FWI do not yet seem to be well understood. The goal of this tutorial is to review these principles in the framework of inverse scattering theory, whose governing forward equation is the Lippmann-Schwinger equation. From this equation, we show how the data-assimilated wavefields embed an approximation of the scattered field generated by the sought model perturbation and how they modify the sensitivity kernel of classical FWI beyond the Born approximation. We also clarify how the approximation with which these wavefields approximate the unknown true wavefields is accounted for in the adjoint source and in the full Newton Hessian of the parameter-estimation problem. The theory is finally illustrated with numerical examples. Understanding the physical principles governing these methods is a necessary prerequisite to assess their potential and limits and design relevant heuristics to manage the latter.
APA, Harvard, Vancouver, ISO, and other styles
45

"A Review on State of the Art in Flipped Classroom Technology A Blended E-Learning." International Journal of Emerging Trends in Engineering Research 9, no. 7 (July 5, 2021): 973–82. http://dx.doi.org/10.30534/ijeter/2021/22972021.

Full text
Abstract:
Flipped learning is an emerging mode of blended e-learning; blended learning mixes modes of instruction to enhance students' skills while saving time and cost on both sides (students and educators). The flipped classroom is a blended e-learning approach in which instruction can be delivered to individuals and groups interactively, to exchange ideas and solve assigned problems. The flipped classroom approach provides dynamic, interactive and user-friendly environments for in-class and online blended learning. This mode leads students to enhance their interpersonal skills, share experiences, and open new horizons through mutual activities on the given problems. The flipped mode has at least four dimensions: a friendly, easy environment; a supporting culture for learning; content availability when offline; minimized time cost; trained educators/professionals; and reduced cost from repeating learning content. In the flipped model of learning, students first gain exposure to the problems by visiting the contents, materials and videos before coming to the classroom in person, fully prepared to share or inquire about problem-related matters in class; this is the inverse of the traditional institutional learning model. This is a very effective mode of learning for activity-based and assignment-based learning strategies: before class commences, fruitful feedback can be shared, reducing the tutorial burden.
APA, Harvard, Vancouver, ISO, and other styles
46

Della Justina, Fabiano, Richard S. Smith, and Rajesh Vayavur. "A case-history tutorial describing the incorporation of geophysical, petrophysical and geological constraints to generate realistic geological models of the Matheson Study Area, Ontario." GEOPHYSICS, July 21, 2024, 1–52. http://dx.doi.org/10.1190/geo2023-0522.1.

Full text
Abstract:
The model that is used to explain potential-field data is highly dependent on the constraints applied in the modeling process. Many studies demonstrate the necessity of constraining gravity and magnetic models. However, they typically do not demonstrate the individual enhancements that come from integrating each constraint into the geophysical model. In this paper, we show that when there are no constraints, it is possible to find an inverse model that is consistent with gravity data, but the model is unrealistic, as one sedimentary basin is too deep. Adding a depth-weighting constraint can ensure the depth is correct, but all other features then have the same depth, which is unrealistic. Including densities from a density compilation makes the densities at surface realistic, but the dips are all close to vertical and the thicknesses are similar, which is unrealistic. In this case, the inversion is believed to have found a local minimum close to the starting model. Reflection seismic data were used to constrain a two-dimensional (2D) modeling exercise (on multiple profiles) to determine the geometry of one sedimentary sub-basin. These 2D models were then combined to build a realistic three-dimensional (3D) starting model. An inversion from this model fixed the densities of each lithology but allowed the thicknesses of the layers to vary. The resulting model was realistic, with the dips and thicknesses away from the seismic constraints being consistent with geological expectations. Although the fit to the data was much better than for the previous model, it was poorer than hoped. If the densities were then allowed to vary within a realistic range of values, the fit could be improved so that both the fit to the data and the geologic model are realistic.
APA, Harvard, Vancouver, ISO, and other styles
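The depth-weighting constraint discussed in the abstract above can be sketched in a few lines. The following toy linear gravity inversion (the kernel, geometry, and parameter values are illustrative, not the authors' actual workflow) shows how a Li-Oldenburg-style depth weighting counteracts the kernel's decay with depth, so the inversion is not forced to pile all the mass up near the surface:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D column of cells at increasing depth, observed along the surface.
n_cells, n_obs = 20, 15
z = np.arange(1, n_cells + 1) * 50.0           # cell depths (m)
xo = np.linspace(-500.0, 500.0, n_obs)         # surface station positions (m)

# Point-mass kernel (constants absorbed): attraction decays with distance,
# so unconstrained inversions naturally concentrate mass near the surface.
G = z[None, :] / (xo[:, None] ** 2 + z[None, :] ** 2) ** 1.5

m_true = np.zeros(n_cells)
m_true[10:14] = 1.0                            # buried anomaly at 550-700 m
d = G @ m_true + 1e-7 * rng.standard_normal(n_obs)

def tikhonov(G, d, W, rel=1e-2):
    """Minimise ||G m - d||^2 + lam ||W m||^2, with lam scaled to the problem."""
    lam = rel * np.trace(G.T @ G) / np.trace(W.T @ W)
    return np.linalg.solve(G.T @ G + lam * W.T @ W, G.T @ d)

m_plain = tikhonov(G, d, np.eye(n_cells))      # no depth constraint
# Depth weighting w(z) ~ z^(-beta/2) (beta = 2 for gravity): shallow cells are
# penalised more, counteracting the kernel's decay and allowing mass at depth.
m_depth = tikhonov(G, d, np.diag(z ** -1.0))
```

Comparing the depth of the recovered mass in `m_plain` and `m_depth` reproduces, in miniature, the effect the abstract describes: without weighting the anomaly is recovered too shallow.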
47

الاعرجي, عقيل يحيى, and علي مهدي حس. "دراسة مقارنة لاثر استخدام كل من الاسلوبين المتدرج والعكسي من الطريقة الجزئية في تحقيق المستوى الرقمي في فعالية قذف الثقل" [A comparative study of the effect of using the gradual and the reverse styles of the part method on achieving the record level in the shot put event]. Journal of Kufa Studies Center 1, no. 17 (January 29, 2010). http://dx.doi.org/10.36322/jksc.v1i17.5262.

Full text
Abstract:
The research aims to compare the effect of using the gradual and the reverse styles of the part (partial) teaching method on the record level achieved in the shot put. The researcher hypothesised statistically significant differences between the gradual and the reverse styles in favour of the reverse style. The research population was drawn purposively from the 48 first-year female students of the College of Education for Girls (Department of Physical Education), University of Kufa, in the 2007-2008 academic year. The sample consisted of 30 students divided into two groups: an experimental group of 14 students, taught with the reverse style, and a control group of 16 students, taught with the gradual style. The two groups were equated on chronological age, height, mass, and several physical and compound attributes relevant to shot-put performance. The researcher used the experimental method for its suitability to the nature of the research. The teaching programme, delivered in both styles, ran from 25/3/2008 to 18/5/2008. After the seven teaching units were completed, both groups were tested in the shot put: each student was given three attempts and the best was counted. International rules were applied in scoring, except for the weight of the shot; both groups used a 3 kg shot.
APA, Harvard, Vancouver, ISO, and other styles
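The group comparison described above (14 vs. 16 students, best of three attempts) is the classic setting for a two-sample test of a difference in means. The sketch below implements Welch's t statistic in plain Python; the distances are made-up illustrative numbers, not the study's data:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb                          # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Illustrative shot-put distances (metres) for groups of 14 and 16 students
reverse = [6.8, 7.1, 6.5, 7.4, 6.9, 7.2, 6.7, 7.0,
           7.3, 6.6, 7.1, 6.9, 7.2, 6.8]
gradual = [6.3, 6.6, 6.1, 6.8, 6.4, 6.5, 6.2, 6.7,
           6.3, 6.6, 6.4, 6.5, 6.2, 6.6, 6.3, 6.5]

t, df = welch_t(reverse, gradual)
```

A positive t here simply reflects that the first group's mean is higher in the made-up sample; the statistic would then be compared against the t distribution with `df` degrees of freedom to judge significance.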
