A selection of scientific literature on the topic "Complex differentiation method"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Complex differentiation method".

Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, if these are available in the metadata.

Journal articles on the topic "Complex differentiation method":

1

Abokhodair, Abdulwahab A. "Complex differentiation tools for geophysical inversion." GEOPHYSICS 74, no. 2 (March 2009): H1–H11. http://dx.doi.org/10.1190/1.3052111.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
I propose a method for derivative approximation that is virtually unknown in the geophysical literature but is rapidly gaining recognition in other areas of computational science. The technique, called the complex-step derivative (CSD), is based on the theory of complex variables and approximates first-order derivatives of real-valued functions that are analytic in the complex plane. I extend the CSD technique to second-order derivatives while preserving the robustness of the first-order formula. Together, the formulas provide a complete differentiation system (referred to as semiautomatic differentiation, or SD) that allows efficient and accurate approximation of gradients, Jacobians, and Hessians, as well as 2D and 3D partial and cross-partial spatial derivatives. Performance evaluation tests indicate that, in comparison with ordinary finite-difference (FD) schemes, the SD scheme is six to eight orders of magnitude more accurate, numerically highly stable, and step-size insensitive, which are major advantages over FD. The method shares with FD its attractive ease of implementation. The SD scheme is implemented in a MATLAB toolkit. The validity of the CSD method depends critically on the requirement that the target function of differentiation be analytic in the complex plane. Therefore, before using the CSD method, one must ensure that all functions in the complex library are defined so that they satisfy this requirement. Specialized complex-function libraries that resolve this and other technical issues for CSD applications are available in the public domain.
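The first-order complex-step formula summarized in this abstract, together with one common second-order variant, can be sketched in a few lines. This is a generic illustration of the technique, not code from the paper's MATLAB toolkit; the function names are ours.

```python
import cmath
import math

def csd_first(f, x, h=1e-20):
    # First-order complex-step derivative: f'(x) ~ Im[f(x + ih)] / h.
    # No subtraction is involved, so h can be made tiny without
    # cancellation error; the result is accurate to machine precision.
    return f(complex(x, h)).imag / h

def csd_second(f, x, h=1e-4):
    # A common complex-step second-derivative formula:
    # f''(x) ~ 2 * (f(x) - Re[f(x + ih)]) / h**2.
    # This variant does involve a subtraction, so h must stay moderate.
    return 2.0 * (f(complex(x, 0.0)).real - f(complex(x, h)).real) / h**2

d1 = csd_first(cmath.sin, 1.0)   # approx. cos(1)
d2 = csd_second(cmath.sin, 1.0)  # approx. -sin(1)
```

Note the different default step sizes: the first-order formula tolerates an arbitrarily small h, while the second-order variant subtracts nearly equal quantities and so behaves like an ordinary finite difference in that respect.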
2

Cao Xiao-Qun, Song Jun-Qiang, Zhang Wei-Min, Zhao Yan-Lai, and Liu Bai-Nian. "A new data assimilation method using complex-variable differentiation." Acta Physica Sinica 62, no. 17 (2013): 170504. http://dx.doi.org/10.7498/aps.62.170504.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Ridout, Martin S. "Statistical Applications of the Complex-Step Method of Numerical Differentiation." American Statistician 63, no. 1 (February 2009): 66–74. http://dx.doi.org/10.1198/tast.2009.0013.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Tan, Feng Lei, Ya Kun Guo, Yong Ming Nie, and Fuan Sun. "High Precision Data Processing Method Based on the Complex-Step Differentiation." Advanced Materials Research 989-994 (July 2014): 1938–41. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.1938.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Finite differencing is a very commonly used method for computing derivatives in numerical algorithms, well known for its accuracy; its greatest advantage is that it is extremely easy to implement. Finite differencing faces a dilemma, however: if the step size is large, the precision is unsatisfactory, but if the step size is small, the error increases due to subtractive cancellation. In this manuscript, a new method for differentiation, complex-step differentiation (CSD), is proposed, which uses complex computations to evaluate derivatives. We first give a detailed account of the principles of complex-step differentiation, and then analyze the CSD method from two sides, error and efficiency. Finally, an implementation of CSD in MATLAB is presented. Simulation results agree well with the theoretical analysis.
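The step-size dilemma described in this abstract is easy to reproduce. The sketch below (our illustration, not the paper's MATLAB code) compares an ordinary central difference with the complex-step derivative at an aggressively small step:

```python
import cmath
import math

def central_diff(f, x, h):
    # Classic central difference: subtractive cancellation ruins accuracy
    # once h approaches the square root of machine epsilon and below.
    return (f(x + h) - f(x - h)) / (2.0 * h)

def complex_step(f, x, h):
    # Complex-step derivative: no subtraction, so a tiny h is harmless.
    return f(complex(x, h)).imag / h

x, exact = 1.5, math.cos(1.5)
h = 1e-12                                  # far too small for central differences
fd_err = abs(central_diff(math.sin, x, h) - exact)
csd_err = abs(complex_step(cmath.sin, x, h) - exact)
# csd_err stays near machine precision, while fd_err is dominated by
# the cancellation error of the numerator (f(x+h) - f(x-h))
```

Sweeping h from 1e-2 down to 1e-16 makes the contrast vivid: the finite-difference error first shrinks, then grows again, while the complex-step error keeps shrinking until it hits machine precision.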
5

Vergnault, E., and P. Sagaut. "Application of Lattice Boltzmann Method to sensitivity analysis via complex differentiation." Journal of Computational Physics 230, no. 13 (June 2011): 5417–29. http://dx.doi.org/10.1016/j.jcp.2011.03.044.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Hu, J. X., and X. W. Gao. "Development of complex-variable differentiation method and its application in isogeometric analysis." Australian Journal of Mechanical Engineering 11, no. 1 (January 2013): 37–43. http://dx.doi.org/10.7158/m12-052.2013.11.1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Krutov, Vladimir, Dmitriy Bezuglov, and Viacheslav Voronin. "Television images identification in the vision system basis on the mathematical apparatus of cubic normalized B-splines." Serbian Journal of Electrical Engineering 14, no. 3 (2017): 387–99. http://dx.doi.org/10.2298/sjee1703387k.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The task of television image identification arises in industry when creating autonomous robots and technical vision systems. A similar problem arises in the development of image analysis systems that must operate under various interfering factors, in complex observation conditions that complicate the registration process, and in the absence of a priori information about the background noise. One of the most important operators is the contour extraction operator. Methods and algorithms for processing information from image sensors must take into account the varying character of the noise associated with image and signal registration. The task of extracting contours, which is in fact digital differentiation of two-dimensional signals registered against background noise of varying character, is far from trivial, because the problem is ill-posed. In modern information systems, contour extraction is usually performed with numerical differentiation methods or masks. The paper considers a new method for differentiating measurement results against a noise background using the modern mathematical apparatus of cubic smoothing B-splines. A new high-precision method of digital signal differentiation using splines is proposed for the first time: it computes derivative values with high accuracy without resorting to standard numerical differentiation procedures. In effect, a method has been developed for calculating the image gradient modulus using spline differentiation. As experimental studies and computational experiments show, the method has higher noise immunity than algorithms based on standard differentiation procedures using masks.
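The principle behind the paper's approach (smooth first, then differentiate) can be illustrated with a deliberately simplified stand-in: a moving-average smoother followed by a central difference, in place of the authors' cubic smoothing B-splines. All names below are ours, and the smoother is only a rough proxy for a spline fit.

```python
import math
import random

def smoothed_derivative(y, dx, half_window=20):
    # Noise-robust differentiation sketch: smooth the samples first
    # (a moving average here, standing in for a smoothing B-spline fit),
    # then apply a central difference to the smoothed signal.
    n = len(y)
    smooth = []
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        smooth.append(sum(y[lo:hi]) / (hi - lo))
    return [(smooth[i + 1] - smooth[i - 1]) / (2.0 * dx) for i in range(1, n - 1)]

random.seed(0)
dx = 0.01
xs = [i * dx for i in range(1000)]
noisy = [math.sin(x) + random.gauss(0.0, 0.01) for x in xs]
dy = smoothed_derivative(noisy, dx)   # dy[i] estimates cos(xs[i + 1])
```

Differentiating the raw samples directly would amplify the noise by a factor of 1/dx; smoothing before differentiating keeps the interior derivative estimates close to cos(x), which is the ill-posedness the abstract refers to.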
8

Li, Yuanlu, Chang Pan, Xiao Meng, Yaqing Ding, and Haixiu Chen. "Haar Wavelet Based Implementation Method of the Non–integer Order Differentiation and its Application to Signal Enhancement." Measurement Science Review 15, no. 3 (June 1, 2015): 101–6. http://dx.doi.org/10.1515/msr-2015-0015.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Non-integer order differentiation is extending the application of traditional differentiation because it achieves a continuous interpolation of integer order differentiation. However, implementation of non-integer order differentiation is much more complex than that of integer order differentiation. To this end, a Haar wavelet-based implementation method of non-integer order differentiation is proposed. The basic idea of the proposed method is to use an operational matrix to compute the non-integer order derivative of a signal, by expanding the signal in Haar wavelets and constructing the Haar wavelet operational matrix of non-integer order differentiation. The effectiveness of the proposed method was verified by comparing theoretical results with those obtained by another non-integer order differential filtering method. Finally, non-integer order differentiation was applied to signal enhancement.
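As a point of comparison for the operational-matrix approach, a non-integer order derivative can also be approximated directly from its Grünwald-Letnikov definition. The sketch below implements that alternative textbook scheme, not the Haar-wavelet method of the paper; the function name is ours.

```python
import math

def gl_fractional_derivative(f, x, alpha, h=1e-3):
    # Grunwald-Letnikov approximation (lower terminal 0) of the
    # order-alpha derivative: D^a f(x) ~ h**(-a) * sum_k w_k * f(x - k*h),
    # where w_k = (-1)**k * binomial(alpha, k), built up recursively.
    n = int(round(x / h))
    w = 1.0
    acc = f(x)
    for k in range(1, n + 1):
        w *= (k - 1.0 - alpha) / k   # recurrence for the binomial weights
        acc += w * f(x - k * h)
    return acc / h**alpha

# Half-derivative of the constant function 1 at x = 1;
# the exact value is x**(-1/2) / Gamma(1/2) = 1 / sqrt(pi)
val = gl_fractional_derivative(lambda t: 1.0, 1.0, 0.5)
```

This direct summation costs O(n) function evaluations per point and converges only at first order in h, which is precisely the kind of expense the operational-matrix formulation is designed to avoid.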
9

Lee, Cheuk-Yu, Hui Wang, and Qing-Hua Qin. "Efficient hypersingular line and surface integrals direct evaluation by complex variable differentiation method." Applied Mathematics and Computation 316 (January 2018): 256–81. http://dx.doi.org/10.1016/j.amc.2017.08.027.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Jastrzębski, Marek, Piotr Kukla, and Danuta Czarnecka. "Ventricular tachycardia score – A novel method for wide QRS complex tachycardia differentiation – Explained." Journal of Electrocardiology 50, no. 5 (September 2017): 704–9. http://dx.doi.org/10.1016/j.jelectrocard.2017.04.003.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Dissertations on the topic "Complex differentiation method":

1

Vincent, Hugo. "Simulations et analyses de sensibilité du bruit produit des écoulements cisaillés." Electronic Thesis or Diss., Ecully, Ecole centrale de Lyon, 2024. http://www.theses.fr/2024ECDL0007.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this PhD work, sensitivity studies are carried out for turbulent shear flows using direct noise computations and the complex differentiation method. First, the complex differentiation method is applied to two-dimensional mixing layers to investigate its capacity to highlight the effects of a parameter on the aerodynamic noise. For that, direct numerical simulations of mixing layers are performed using this method for different Mach numbers, Reynolds numbers and mesh spacings. In each case, the derivatives of the noise levels with respect to one of the three parameters are obtained using the complex differentiation method. The results are in good agreement with others from the literature and parametric studies. They indicate that the complex differentiation method can be used to describe the effects of physical parameters and of the grid resolution on the sound produced by a high-speed flow. Secondly, the complex differentiation method is applied to the study of the receptivity mechanism occurring when an acoustic wave reflects at the nozzle lip of a jet. For this purpose, using the results of a simulation of a jet impinging on a plate, an imaginary amplitude acoustic pulse is introduced at a given time in the near-nozzle region outside the jet. The sensitivity of the near-nozzle mixing layers to an acoustic disturbance is then determined using the complex differentiation method. This sensitivity is used to highlight the excitation of an instability wave by the acoustic disturbance. Finally, the influence of nozzle-exit conditions (velocity profile and turbulence level) on the tonal noise components generated by subsonic impinging jets is investigated. For that, jets with different nozzle-exit velocity profiles, several boundary-layer excitation levels, at Mach numbers of 0.6 or 0.9, impinging on a plate located at 6 or 8 nozzle radii from the nozzle, are simulated. The results show that the nozzle-exit conditions significantly affect the amplitude of the tonal noise components and that impinging jets at Mach numbers below 0.65, which are generally non-resonant, can be resonant for specific nozzle-exit conditions. The effects of the nozzle-exit conditions are found to result from changes in the development of the jet mixing layers, which lead to differences in the amplification properties of the Kelvin-Helmholtz instability waves between the nozzle and the plate, and in the energy contained in the coherent structures of the jets near the impingement region.
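The core idea used throughout this thesis, perturbing a simulation parameter by an imaginary amount and reading the sensitivity off the imaginary part of the computed solution, can be sketched on a toy time integrator. The decay model and explicit Euler scheme below are our stand-in for the aeroacoustic solver:

```python
import math

def simulate_decay(a, y0=1.0, t_end=1.0, n_steps=1000):
    # Explicit Euler integration of dy/dt = -a*y, written with generic
    # arithmetic so that the decay rate a may be complex.
    dt = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y = y + dt * (-a * y)
    return y

# Perturb the parameter by a purely imaginary amount ...
h = 1e-30
y = simulate_decay(complex(2.0, h))
# ... and the imaginary part of the simulated solution gives the
# sensitivity dy/da, close to the analytic value -t*exp(-a*t) = -exp(-2)
dy_da = y.imag / h
```

Because the imaginary perturbation never mixes with the real part through a subtraction, the sensitivity of the entire discrete simulation is recovered to machine precision, with no step-size tuning; this is what makes the approach attractive for sensitivity studies of expensive flow solvers.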
2

Akoussan, Komlan. "Modélisation et conception de structures composites viscoélastiques à haut pouvoir amortissant." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0188/document.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The aim of this thesis is to develop numerical tools to determine accurately the damping properties of composite sandwich structures, with a view to designing lightweight viscoelastic sandwich structures with high damping power. In a first step, we developed a generic tool implemented in Matlab for determining the damping properties in free vibration of viscoelastic sandwich plates whose laminate faces are composed of multiple layers. The advantage of this tool, which is based on a finite element formulation, is its ability to take into account the anisotropy of the composite layers, the material non-linearity of the viscoelastic core induced by frequency-dependent viscoelastic laws, and various boundary conditions. The nonlinear complex eigenvalue problem is solved by coupling the homotopy technique, the asymptotic numerical method and automatic differentiation. Then, for the continuous study of the effect of a modeling parameter on the damping properties of viscoelastic sandwiches, we proposed a generic method for solving the nonlinear residual complex eigenvalue problem which has, in addition to the frequency dependence introduced by the viscoelastic core, a modeling-parameter dependence that describes a specific study interval. This resolution is based on the asymptotic numerical method, automatic differentiation, the homotopy technique and continuation, and takes into account various viscoelastic laws. We then propose two separate formulations to study the effects on the damping properties of two modeling parameters that are important in the design of viscoelastic sandwiches with high damping power. The first is the orientation of the laminate fibers in the sandwich reference frame, and the second is the thickness of the layers, which, when well chosen, yields sandwich structures that are not only highly damping but also very light. The highly nonlinear complex eigenvalue problems obtained in these formulations are solved by the new method for solving residual eigenvalue problems with two nonlinearities developed earlier. Comparisons with discrete results, as well as computation times, are presented to show the usefulness of these two formulations and of the new method for solving nonlinear complex eigenvalue residual problems with double dependence.
3

Mašková, Hana. "Biosystematická studie okruhu Carlina vulgaris ve střední Evropě s využitím molekulárních a morfometrických metod." Master's thesis, 2018. http://www.nusl.cz/ntk/nusl-386798.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The Carlina vulgaris complex in central Europe includes several lineages defined by their ecology, morphology and distribution. This diploma thesis is focused on relationships between the taxa recognized in the Czech Republic, namely Carlina vulgaris subsp. vulgaris, C. biebersteinii subsp. biebersteinii, C. biebersteinii subsp. brevibracteata and C. biebersteinii subsp. sudetica. Molecular analysis revealed two genetically defined groups. One includes samples from relict populations in western Bohemia and from high mountains classified as C. biebersteinii subsp. biebersteinii and C. biebersteinii subsp. sudetica. The other is represented by plants classified as C. vulgaris and C. biebersteinii subsp. brevibracteata. This genetic differentiation was also confirmed by morphometric analysis. However, relationships within these two groups remain unclear. The Czech populations of Carlina biebersteinii subsp. biebersteinii as well as of C. biebersteinii subsp. sudetica are closely related to the mountain populations in the Alps and Carpathians. Their occurrence in the Czech Republic is relict and they should be a focus of nature conservation. However, the separate taxonomic position of the claimed endemic C. biebersteinii subsp. sudetica is probably unjustified.

Books on the topic "Complex differentiation method":

1

Kesoretskikh, Ivan, and Sergey Zotov. Landscape vulnerability: concept and assessment. ru: INFRA-M Academic Publishing LLC., 2019. http://dx.doi.org/10.12737/1045820.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The monograph presents a methodology for assessing the vulnerability of landscapes to external influences. A comparative analysis is given of the concepts of "stability", "sensitivity", and "vulnerability" in relation to natural complexes. An overview of existing methods for assessing the vulnerability of natural complexes is presented. The authors' method of assessing the vulnerability of landscapes to anthropogenic impacts is described. The methodology is based on: selection and justification of criteria for assessing the vulnerability of landscapes; preparation of a parametric matrix and gradation of assessment criteria in accordance with the developed vulnerability classes; calculation of weighting factors of vulnerability assessment parameters; and selection of the optimal territorial operational unit for landscape vulnerability assessment. The method is implemented in the GIS environment "Assessment of vulnerability of landscapes of the Kaliningrad region to anthropogenic impacts", created by the authors using modern geoinformation products. The specificity of spatial differentiation of different landscapes in terms of vulnerability to anthropogenic impacts at the regional and local levels is revealed. It is argued that using this methodology and integrating it into the system of nature management will ensure a balanced account of geoecological features and environmental priorities in territorial planning. The book is of interest to specialists in the field of rational nature management, environmental protection, and spatial planning.
2

Albert, Tyler J., and Erik R. Swenson. The blood cells and blood count. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780199600830.003.0265.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Blood is a dynamic fluid consisting of cellular and plasma components undergoing constant regeneration and recycling. Like most physiological systems, the concentrations of these components are tightly regulated within narrow limits under normal conditions. In the critically-ill population, however, haematological abnormalities frequently occur and are largely due to non-haematological single- or multiple-organ pathology. Haematopoiesis originates from the pluripotent stem cell, which undergoes replication, proliferation, and differentiation, giving rise to cells of the erythroid, myeloid, and lymphoid series, as well as megakaryocytes, the precursors to platelets. The haemostatic system is responsible for maintaining blood fluidity and, at the same time, prevents blood loss by initiating rapid, localized, and appropriate blood clotting at sites of vascular damage. This system is complex, comprising both cellular and plasma elements, i.e. platelets, coagulation and fibrinolytic cascades, the natural intrinsic and extrinsic pathways of anticoagulation, and the vascular endothelium. A rapid, reliable, and inexpensive method of examining haematological disorders is the peripheral blood smear, which allows practitioners to assess the functional status of the bone marrow during cytopenic states. Red blood cells, which are primarily concerned with oxygen and carbon dioxide transport, have a normal lifespan of only 120 days and require constant erythropoiesis. White blood cells represent a summation of several circulating cell types, each deriving from the hematopoietic stem cell, together forming the critical components of both the innate and adaptive immune systems. Platelets are integral to haemostasis, and also aid our inflammatory and immune responses, help maintain vascular integrity, and contribute to wound healing.
3

Bertocci, Michele A., and Mary L. Phillips. Neuroimaging of Depression. Edited by Dennis S. Charney, Eric J. Nestler, Pamela Sklar, and Joseph D. Buxbaum. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780190681425.003.0025.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This chapter illustrates the historical progression, methodological approaches, and current neurobiological understanding of depression, the leading cause of mental and behavioral disorder disability in the United States. We describe and position, in relation to depressive symptoms, the complex abnormalities that depressed adults and youth show concerning neural function during tasks and at rest, structural abnormalities, as well as key neurotransmitter, neuroreceptor, and metabolic abnormalities that have been examined in the literature. We also describe newer findings and methods, such as differentiating between unipolar and bipolar depression and applying machine learning to individual prediction. Finally, we provide suggestions for future study.
4

Bland, Lucy, and Lesley Hall. Eugenics in Britain: The View from the Metropole. Edited by Alison Bashford and Philippa Levine. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780195373141.013.0012.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This article discusses the impact of eugenics in Britain. It discusses eugenics as a biological way of thinking about social, economic, political, and cultural change. It gave scientific credibility to prejudices, anxieties, and fears that were prevalent primarily among the middle and upper classes. It delineates the tensions between "classic" and "reform" eugenics, although this is only one axis along which to align the complex factors that polarized the society, some of them ideological, some of them about tactics, and some based on personalities. It gives a detailed description of the differentiation of the societies' activities into study and practice. The social problem group, research into contraceptive methods, family allowances, race mixture, and immigration are discussed. The practices are divided into negative and positive. Finally, this article concludes that eugenicists saw feeblemindedness as hereditary, emblematic of degeneracy, and contributing to numerous social problems, such as poverty and unemployment.
5

Paro, John A. M., and Geoffrey C. Gurtner. Pathophysiology and assessment of burns. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780199600830.003.0346.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Burn injury represents a complex clinical entity with significant associated morbidity and remains the second leading cause of trauma-related death. An understanding of the local and systemic pathophysiology of burns has led to significant improvements in mortality. Thermal insult results in coagulative necrosis of the skin and the depth or degree of injury is classified according to the skin layers involved. First-degree burns involve only epidermis and heal quickly with no scar. Second-degree burns are further classified into superficial partial thickness or deep partial thickness depending on the level of dermal involvement. Damage in a third-degree burn extends to subcutaneous fat. There is a substantial hypermetabolic response to severe burn, resulting in significant catabolism and untoward effects on the immune, gastrointestinal, and renal systems. Accurate assessment of the extent of burn injury is critical for prognosis and initiation of resuscitation. Burn size, measured in total body surface area, can be quickly estimated using the rule of nines or palmar method. A more detailed sizing system is recommended once the patient has been triaged. Appropriate diagnosis of burn depth will be important for later management. First-degree burns are erythematous and painful, like a sunburn; third-degree burns are leathery and insensate. Differentiating between second-degree burn types remains difficult. There are a number of formalized criteria during assessment that should prompt transfer to a burn centre.

Book chapters on the topic "Complex differentiation method":

1

Annuario, Emily, Kristal Ng, and Alessio Vagnoni. "High-Resolution Imaging of Mitochondria and Mitochondrial Nucleoids in Differentiated SH-SY5Y Cells." In Methods in Molecular Biology, 291–310. New York, NY: Springer US, 2022. http://dx.doi.org/10.1007/978-1-0716-1990-2_15.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Mitochondria are highly dynamic organelles which form intricate networks with complex dynamics. Mitochondrial transport and distribution are essential to ensure proper cell function, especially in cells with an extremely polarised morphology such as neurons. A layer of complexity is added when considering mitochondria have their own genome, packaged into nucleoids. Major mitochondrial morphological transitions, for example mitochondrial division, often occur in conjunction with mitochondrial DNA (mtDNA) replication and changes in the dynamic behaviour of the nucleoids. However, the relationship between mtDNA dynamics and mitochondrial motility in the processes of neurons has been largely overlooked. In this chapter, we describe a method for live imaging of mitochondria and nucleoids in differentiated SH-SY5Y cells by instant structured illumination microscopy (iSIM). We also include a detailed protocol for the differentiation of SH-SY5Y cells into cells with a pronounced neuronal-like morphology and show examples of coordinated mitochondrial and nucleoid motility in the long processes of these cells.
2

Garrett, Steven L. "Comfort for the Computationally Crippled." In Understanding Acoustics, 1–55. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44787-8_1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The difference between engineering and science, and all other human activity, is the fact that engineers and scientists make quantitative predictions about measurable outcomes and can specify their uncertainty in such predictions. Because those predictions are quantitative, they must employ mathematics. This chapter is intended as review of some of the more useful mathematical concepts, strategies, and techniques that are employed in the description of vibrational and acoustical systems and in the calculation of their behavior. Topics in this review include techniques such as Taylor series expansions, integration by parts, and logarithmic differentiation. Equilibrium and stability considerations lead to relations between potential energies and forces. The concept of linearity leads to superposition and Fourier analysis. Complex numbers and phasors are introduced along with the techniques for their algebraic manipulation. The discussion of physical units is extended to include their use for predicting functional dependencies of resonance frequencies, quality factors, propagation speeds, flow noise, and other system behaviors using similitude and the Buckingham Π-theorem to form dimensionless variables. Linearized least-squares fitting is introduced as a method for extraction of experimental parameters and their uncertainties and error propagation is presented to allow those uncertainties to be combined.
3

de Win, Maartje M. L. "Imaging of the Orbit: “Current Concepts”." In Surgery in and around the Orbit, 121–39. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-40697-3_4.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Abstract Orbital imaging with CT or MRI can be essential in the evaluation of many orbital conditions. Because of its superior bony characterization and fast acquisition, CT is the imaging method of first choice in urgent situations like trauma, infection, and evaluation of lesions arising from the orbital wall. In recent years, CT has also gained a prominent role in (pre)operative planning and navigation, especially through the development of postprocessing software. For the evaluation of more complex orbital disease, MRI is the preferred modality. With its superior soft-tissue differentiation, MRI is useful for determining the extent of orbital lesions, like inflammatory disease, vascular malformations, and orbital tumors. By adding functional MRI techniques, like diffusion- and perfusion-weighted imaging, and by combining parameters of different imaging techniques in multiparametric imaging, it is possible to further improve characterization of orbital lesions. In this chapter, the optimal approach to orbital imaging is described, combining knowledge of orbital imaging techniques and imaging indications with a structured way of reviewing the orbital images, knowledge of radiological features of common and more uncommon orbital pathology, and integrating this with the clinical features of the patient.
4

Wang, Xiukun, Qing Chen, Brad Lackford, and Guang Hu. "Dissecting the Role of the Ccr4–Not Deadenylase Complex in Pluripotency and Differentiation." In Methods in Molecular Biology, 125–41. New York, NY: Springer US, 2023. http://dx.doi.org/10.1007/978-1-0716-3481-3_8.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Mucha, Eike, Daniel Thomas, Maike Lettow, Gerard Meijer, Kevin Pagel, and Gert von Helden. "Spectroscopy of Small and Large Biomolecular Ions in Helium-Nanodroplets." In Topics in Applied Physics, 241–80. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-94896-2_6.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Abstract A vast number of experiments have now shown that helium nanodroplets are an exemplary cryogenic matrix for spectroscopic investigations. The experimental techniques are well established and involve in most cases the pickup of evaporated neutral species by helium droplets. These techniques have been extended within our research group to enable nanodroplet pickup of anions or cations stored in an ion trap. By using electrospray ionization (ESI) in combination with modern mass spectrometric methods to supply ions to the trap, an immense variety of mass-to-charge selected species can be doped into the droplets and spectroscopically investigated. We have combined this droplet doping methodology with IR action spectroscopy to investigate anions and cations ranging in size from a few atoms to proteins that consist of thousands of atoms. Herein, we show examples of small complexes of fluoride anions (F−) with CO2 and H2O and carbohydrate molecules. In the case of the small complexes, novel compounds could be identified, and quantum chemistry can in some instances quantitatively explain the results. For biologically relevant complex carbohydrate molecules, the IR spectra are highly diagnostic and allow the differentiation of species that would be difficult or impossible to identify by more conventional methods.
6

Al Jabri, Madhad Ali Said. "Enhancing Products Delivery Through the Application of Innovative Operating Model Based on Hybrid Agile Delivery Method: Case Information Communication Technologies “ICT” Service Providers." In Lecture Notes in Civil Engineering, 35–46. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-27462-6_4.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Abstract Purpose: The purpose of this study is to identify a cutting-edge, innovative solution for enhancing ICT product delivery and to propose an implementation plan with transformation recommendations aimed at a higher success rate. Methodology: This qualitative study was conducted through an intensive literature review followed by qualitative analysis and selection of the best product delivery method using cost-benefit analysis and a benchmark of previous successful implementations. A delivery operating model was also recommended accordingly. Findings: Based on the problem analysis and subsequent literature review, the existing delivery methodologies of ICT service providers are not fully sufficient to cater for highly complex product deliveries, which require high integration and customer centricity. Hence, a hybrid delivery method is recommended based on the previous success stories of five major companies, with a proposed transformative operating model (framework) including a transformation plan and implementation recommendations. Implications: ICT service providers can benefit from the proposed hybrid product development approach and the suggested operating model as a key differentiator for agility, customer centricity, and improved time to market. It enhances collaboration, innovation, culture, and employee engagement. Originality/value: Previous literature has extensively explored project delivery models in various industries, with minimal focus on the ICT industry. This paper contributes positively by proposing a successful hybrid agile delivery method with a transformative operating model to support a higher success rate in the ICT industry, assuring high-value creation and maximizing return on investment with high net present value.
7

Tran, Quang Duy, and Fabio Di Troia. "Word Embeddings for Fake Malware Generation." In Silicon Valley Cybersecurity Conference, 22–37. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-24049-2_2.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Abstract Signature- and anomaly-based techniques are the fundamental methods to detect malware. However, in recent years this type of threat has become more complex and sophisticated, making these techniques less effective. For this reason, researchers have resorted to state-of-the-art machine learning techniques to combat threats to information security. Nevertheless, despite the integration of machine learning models, there is still a shortage of training data that prevents these models from performing at their peak. In the past, generative models have been found to be highly effective at generating image-like data similar to the actual data distribution. In this paper, we leverage generative modeling of opcode sequences and aim to generate malware samples by taking advantage of the contextualized embeddings from BERT. We obtained promising results when differentiating between real and generated samples. We observe that generated malware has characteristics so similar to actual malware that classifiers have difficulty distinguishing between the two, falsely identifying the generated malware as actual malware almost 90% of the time.
8

Malisch, Rainer, Alexander Schächtele, Ralf Lippold, Björn Hardebusch, Kerstin Krätschmer, F. X. Rolaf van Leeuwen, Gerald Moy, et al. "Overall Conclusions and Key Messages of the WHO/UNEP-Coordinated Human Milk Studies on Persistent Organic Pollutants." In Persistent Organic Pollutants in Human Milk, 615–75. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-34087-1_16.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Abstract Building on the two rounds of exposure studies with human milk coordinated by the World Health Organization (WHO) in the mid-1980s and 1990s on polychlorinated biphenyls (PCB), polychlorinated dibenzo-p-dioxins (PCDD), and polychlorinated dibenzofurans (PCDF), five expanded studies on persistent organic pollutants (POPs) were performed between 2000 and 2019. After the adoption of the Stockholm Convention on POPs (the Convention) in 2001, WHO and the United Nations Environment Programme (UNEP) collaborated in joint studies starting in 2004. The collaboration aimed at the provision of POPs data for human milk as a core matrix under the Global Monitoring Plan (GMP) to assess the effectiveness of the Convention as required under Article 16. Over time, the number of analytes in the studies expanded from the initial 12 POPs targeted by the Convention for elimination or reduction to the 30 POPs covered under the Stockholm Convention and two other POPs proposed for listing as of 2019. Many of these chemicals have numerous congeners, homologous groups, isomeric forms, and transformation products, which significantly extends the number of recommended analytes. In the studies between 2000 and 2019, 82 countries from all five United Nations regions participated, of which 50 countries participated in more than one study. For the human milk samples of the 2016–2019 period, results are available for the full set of 32 POPs of interest for the Convention until 2019: (i) the 26 POPs listed by the start of the study in 2016; (ii) decabromodiphenyl ether [BDE-209] and short-chain chlorinated paraffins [SCCP] as listed in 2017; (iii) dicofol and perfluorooctanoic acid [PFOA] as listed in 2019; (iv) medium-chain chlorinated paraffins [MCCP] and perfluorohexane sulfonic acid [PFHxS] as proposed for listing.
This is a unique characteristic among the core matrices under the GMP. Four key messages can be derived: These studies are an efficient and effective tool with global coverage and a key contributor to the GMP. After collection of a large number of individual samples (usually 50) fulfilling protocol criteria, pooled samples are prepared using equal aliquots of individual samples (physical averaging) and are considered to be representative of a country, subregion, or subpopulation at the time of the sampling. The analysis of pooled representative human milk samples by dedicated Reference Laboratories meeting rigorous quality criteria contributes to reliability and comparability and reduces uncertainty of the analytical results. Additionally, this concept is very cost-effective. These studies can be used for regional differentiation based on concentrations of individual POPs between and within the five UN Regional Groups (African Group, Asia-Pacific Group, Eastern European Group, Group of Latin American and Caribbean Countries; Western European and Others Group). For some POPs, a wide range of concentrations with up to three orders of magnitude between lower and upper concentrations was found, even for countries in the same UN region. Some countries had levels within the usual range for most POPs, but high concentrations for certain POPs. Concentrations found in the upper third of the frequency distribution are more likely to motivate targeted follow-up studies than levels observed in the lower third of the frequency distribution. However, the concentration of a POP also has to be seen in the context of the sampling period and the history and pattern of use of the POPs in each country. Therefore, results are not intended for ranking of individual countries but rather to distinguish broader patterns.
These studies can provide an assessment of time trends, as possible sources of variation were minimized by the survey concepts building on two factors (sampling design; analysis of the pooled samples by dedicated Reference Laboratories). The estimation of time trends based on comparison of median or mean concentrations in UN Regional Groups over the five surveys in five equal four-year periods between 2000 and 2019 provides a first orientation. However, the variation of the number of countries participating in a UN Regional Group in a certain period can influence the median or mean concentrations. Thus, it is more prudent to use only results of countries with repeated participation in these studies for drawing conclusions on temporal trends. The reduction rates in countries should be seen in the context of the concentration range: a differentiation between high levels and those in the range of the background contamination is meaningful. If high levels are found, sources might be detected which could be eliminated. This can lead to significant decrease rates over the following years. However, if low background levels are reported, no specific sources can be detected. Other factors for exposure, e.g. the contamination of feed and food by air via long-range transport and subsequent bioaccumulation, cannot be influenced locally. However, only very few time points from most individual countries for most POPs of interest are available, which prevents the derivation of statistically significant temporal trends in these cases. Yet, the existing data can indicate decreasing or increasing tendencies in POP concentrations in these countries. Furthermore, pooling of data in regions makes it possible to derive statistically significant time trends in the UN Regional Groups and globally. Global overall time trends using the data from countries with repeated participation were calculated by the Theil–Sen method.
Regarding the median levels of the five UN Regional Groups, a decrease per 10 years by 58% was found for DDT, by 84% for beta-HCH, by 57% for HCB, by 32% for PBDE, by 48% for PFOS, by 70% for PCB, and by 48% for PCDD and PCDF (expressed as toxic equivalents). In contrast, the concentrations of chlorinated paraffins (CP) as “emerging POPs” showed increasing tendencies in some UN Regional Groups. On a global level, a statistically significant increase of total CP (total CP content including SCCP [listed in the Convention in 2017] and MCCP [proposed to be listed]) concentrations in human milk of 30% over 10 years was found. The studies can provide the basis for discussion of the relative importance (“ranking”) of the quantitative occurrence of POPs. This, however, requires a differentiation between two subgroups of lipophilic substances ([i] dioxin-like compounds, to be determined in the pg/g [=ng/kg] range, and [ii] non-dioxin-like chlorinated and brominated POPs, to be determined in the ng/g [=μg/kg] range; both groups reported on lipid base) and the more polar perfluorinated alkyl substances (PFAS), reported on product base [as pg/g fresh weight] or on volume base [ng/L]. For this purpose, results for the complete set of the 32 POPs of interest for the 2016–2019 period were considered. By far, the highest concentrations of lipophilic substances were found for DDT (expressed as “DDT complex”: sum of all detected analytes, calculated as DDT; maximum: 7100 ng/g lipid; median: 125 ng/g lipid) and for chlorinated paraffins (total CP content; maximum: 700 ng total CP/g lipid; median: 116 ng total CP/g lipid). PCB was next in the ranking and had on average an order of magnitude lower concentrations than the average of the total CP concentrations. The high CP concentrations were caused predominantly by MCCP. If the pooled samples from mothers without any known major contamination source nearby showed a high level of CP, some individual samples (e.g.
from local population close to emission sources, as a result of exposure to consumer products or from the domestic environment) might even have significantly higher levels. The lactational intake of SCCP and MCCP of the breastfed infant in the microgram scale resulting from the mothers’ dietary and environmental background exposure should therefore motivate targeted follow-up studies and further measures to reduce exposure (including in the case of MCCP, regulatory efforts, e.g. restriction in products). Further, due to observed levels, targeted research should look at the balance among potential adverse effects against positive health aspects for the breastfed infants for three groups of POPs (dioxin-like compounds; non-dioxin-like chlorinated and brominated POPs; PFAS) regarding potentially needed updates of the WHO guidance. As an overall conclusion, the seven rounds of WHO/UNEP human milk exposure studies are the largest global survey on human tissues with a harmonized protocol spanning over the longest time period and carried out in a uniform format. Thus, these rounds are an effective tool to obtain reliable and comparable data sets on this core matrix and a key contributor to the GMP. A comprehensive set of global data covering all POPs targeted by the Stockholm Convention, in all UN Regional Groups, and timelines covering a span of up to three decades allows to evaluate data from various perspectives. A widened three-dimensional view is necessary to discuss results and can be performed using the three pillars for assessments of the comprehensive data set, namely: analytes of interest; regional aspects; time trends. This can identify possible problems for future targeted studies and interventions at the country, regional, or global level. Long-term trends give an indication of the effectiveness of measures to eliminate or reduce specific POPs. 
The consideration of countries with repeated participation in these studies provides the best possible database for the evaluation of temporal trends. The continuation of these exposure studies is important for securing sufficient data for reliable time trend assessments in the future. Therefore, it is highly recommended to continue this monitoring effort, particularly for POPs that are of public health concern.
9

Sholl, Robert. "5. Artistic Practice as Embodied Learning." In Teaching Music Performance in Higher Education, 135–64. Cambridge, UK: Open Book Publishers, 2024. http://dx.doi.org/10.11647/obp.0398.07.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Almost forty-five years ago, Joseph Kerman proposed the notion of getting out of analysis, in fact a strategy through criticism to broaden its formalist parameters (Kerman, 1980). Kerman’s argument was flawed in many respects, as Agawu pointed out; analysis was necessary in his view to teach “undergraduate music theory” and “basic musical literacy” (Agawu, 2004: 269), something Kerman would not have denied. Yet these debates, and their continuation (Horton 2020; Cavett et al. 2023), have missed something more fundamental, especially as ideological ivory towers and territories needed protection. Over the last forty years, the rise of theory courses has led to a schism between theory as a discipline and theory as a necessary precursor to practice (for learning repertoire, improvisation and composition); this is still prevalent in universities and conservatoires today. This issue has not been helped by the interdisciplinarity of musicology, by the concomitant continual expansion of the curriculum, and by the move away in many university departments from study of ‘the dots’ to other equally valid forms of engagement with music. Part of this separation results from an educational ideal that differentiation is necessary before integration, something that the somatic thinker Moshe Feldenkrais advocated, but the ‘integration’ element has more often been left to chance. This study seeks to make a pedagogical synthesis between theory, improvisation and composition, allowing the teacher and student to move freely between these areas, and the student to develop their own sense of autonomy. Artistic research is premised on knowing something, on having some ‘petrol in the tank’, and especially on the ability to make aesthetic choices. This paper develops a critical and reflexive method to begin this task.
It begins by presenting a creative rethinking of species counterpoint, a foundation for thinking in Schenkerian analysis (Forte and Gilbert, 1983, also played out through Kennan 1987, Schubert 2003, Davidian 2015, and Denisch 2017) through Bach’s Goldberg Variations (1741). This develops a resource for pedagogy and practice through teaching musical techniques of composition. I present a layered-cake of musical lines against the figured bass of the theme (moving from semibreves to quavers) as an exercise that inculcates various aspects of var. 1 of the ‘Goldbergs’, and then I explore the codes and ramifications of this that allow both historical sensitivity and creative development. This contextualized exercise provides a stepping-stone to a discussion of Variation 1 (prefigured in my species example), and the development of complete variations beginning with a given “invention” (Dreyfus 1997), and then moving to the composition of new ideas. I suggest how these exercises can be used for teaching improvisation and show how this invaluable connection can be developed through other models (‘la folia’ for example). This model of thinking is historically connected to partimenti (Gjerdingen 2010, 2020), and, following Feldenkrais’s thinking (Sholl 2019, 2021), I provide different solutions to the same exercises. This strategy attempts to promote an “adaptive flexibility” (Thelen and Smith 2004) in which students can enactively and organically learn musical and technical fluency, while also developing their creativity and autonomy.
10

Zinn-Justin, Jean. "Quantum statistical physics: Functional integration formalism." In Quantum Field Theory and Critical Phenomena, 64–89. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198834625.003.0004.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The functional integral representation of the density matrix at thermal equilibrium in non-relativistic quantum mechanics (QM) with many degrees of freedom, in the grand canonical formulation, is introduced. In QM, Hamiltonians H(p,q) can also be expressed in terms of creation and annihilation operators, a method adapted to the study of perturbed harmonic oscillators. In the holomorphic formalism, quantum operators act by multiplication and differentiation on a vector space of analytic functions. Alternatively, they can also be represented by kernels, functions of complex variables that correspond in the classical limit to a complex parametrization of phase space. The formalism is adapted to the description of many-body boson systems. To this formalism corresponds a path integral representation of the density matrix at thermal equilibrium, where paths belong to complex spaces, instead of the more usual position–momentum phase space. A parallel formalism can be set up to describe systems with many fermion degrees of freedom, with Grassmann variables replacing complex variables. Both formalisms can be generalized to quantum gases of Bose and Fermi particles in the grand canonical formulation. Field integral representations of the corresponding quantum partition functions are derived.

Conference papers on the topic "Complex differentiation method":

1

Marchiano, Régis, Philippe Druault, and Pierre Sagaut. "A Time Reversal Method Coupled with Complex Differentiation for the Study of Aeroacoustic Sources." In 16th AIAA/CEAS Aeroacoustics Conference. Reston, Virigina: American Institute of Aeronautics and Astronautics, 2010. http://dx.doi.org/10.2514/6.2010-3820.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

T, Soumya. "Detection and Differentiation of blood cancer cells using Edge Detection method." In The International Conference on scientific innovations in Science, Technology, and Management. International Journal of Advanced Trends in Engineering and Management, 2023. http://dx.doi.org/10.59544/zbua6077/ngcesi23p138.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Medical imaging is an essential data source that has been leveraged worldwide in healthcare systems. In pathology, histopathology images are used for cancer diagnosis; these images are very complex, and their analysis by pathologists requires large amounts of time and effort. On the other hand, although convolutional neural networks (CNNs) have produced near-human results in image processing tasks, their processing time is becoming longer and they need higher computational power. In this paper, we implement a quantized ResNet model on two histopathology image datasets to optimize inference power consumption.
3

Druault, Philippe, Regis Marchiano, and Pierre Sagaut. "Time reversal method coupled to complex differentiation technique for the aeroacoustic source detection in viscous flow." In 18th AIAA/CEAS Aeroacoustics Conference (33rd AIAA Aeroacoustics Conference). Reston, Virigina: American Institute of Aeronautics and Astronautics, 2012. http://dx.doi.org/10.2514/6.2012-2285.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Callejo, Alfonso, Olivier Bauchau, and Boris Diskin. "Parallel Sensitivity Analysis of Rotor Blade Cross-sections." In Vertical Flight Society 75th Annual Forum & Technology Display. The Vertical Flight Society, 2019. http://dx.doi.org/10.4050/f-0075-2019-14724.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Designing mechanical systems in a robust way requires computational tools that not only conduct the direct analysis (aerodynamic, structural, etc.), but can also carry out a robust sensitivity analysis of the outputs with respect to the design parameters. This is especially the case for highly nonlinear, multidisciplinary systems such as helicopter rotors. The structural analysis aspect of rotor blade design has been comprehensively studied in the literature, as shown by the multiple software packages available. On the other hand, very few instances of general-purpose sensitivity analysis tools exist. This is not the case in the aerodynamics area, where most software packages already provide design sensitivities. Building upon a recently developed tool for the adjoint sensitivity analysis of beam cross-sections with respect to material properties, this paper presents a systematic evaluation of its parallel efficiency. Both OpenMP and MPI libraries are used to run the code in parallel on examples of varying size, and the results are compared with those obtained through parallel numerical differentiation (real and complex). The results show that the adjoint method is more efficient than both complex- and real-step differentiation methods, and almost as accurate as complex-step differentiation.
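The complex-step baseline that the adjoint method is compared against can be sketched in a few lines (a minimal illustration under our own naming, not the authors' code): because there is no subtractive cancellation, the step size can be taken extremely small and the first derivative is recovered to machine precision.

```python
import cmath


def complex_step_derivative(f, x, h=1e-30):
    """First derivative of a real-valued analytic function f at x.

    f is evaluated once at the complex point x + i*h; the imaginary part
    divided by h approximates f'(x) with no subtractive cancellation.
    """
    return f(complex(x, h)).imag / h


# Example: d/dx [exp(x) * sin(x)] = exp(x) * (sin(x) + cos(x))
f = lambda z: cmath.exp(z) * cmath.sin(z)
dfdx = complex_step_derivative(f, 1.0)
```

The key requirement, as the head of this page notes, is that f be analytic and implemented with complex-safe operations (no abs() or comparisons on the perturbed variable).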
5

Herloski, Robert. "Aberration calculation and lens tolerancing using automatic differentiation." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/oam.1986.wm6.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Automatic differentiation is a technique that allows one to easily calculate exact partial derivatives of complex, composite mathematical functions. This technique is described in a book by Rall [1] of the University of Wisconsin at Madison. Independently, S. Marshall of Xerox Corp. developed an algorithm that is effectively automatic differentiation and extends the technique to arbitrary-order partial derivatives of functions of an arbitrary number of independent variables. Since aberration coefficients are merely partial derivatives of final ray coordinates with respect to initial ray coordinates, automatic differentiation can be employed to calculate any desired aberration coefficient of any lens system, including misaligned and nonsymmetric ones. Partial derivatives of these systems for use in lens tolerancing and optimization can also be directly calculated. Examples of aberration calculation and Gaussian beam tolerancing, using an implementation of ray tracing and automatic differentiation in the Ada programming language, are given. The work of Forbes [2] in the area of computation of chromatic aberration coefficients using an equivalent, highly optimized power series method is also mentioned.
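The core mechanism of automatic differentiation, propagating exact derivatives through composite functions by the chain rule rather than by finite differences, can be shown with a toy forward-mode ("dual number") sketch (our own illustration in Python; the original work used Ada and extends to arbitrary-order derivatives):

```python
import math


class Dual:
    """Dual number a + b*eps with eps**2 = 0; `der` carries the derivative."""

    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._lift(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._lift(other)
        # The product rule is applied automatically at every multiplication.
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__


def sin(x):
    # Chain rule for an elementary function: d(sin u) = cos(u) * du.
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)


# Exact derivative of f(x) = x**2 * sin(x) at x = 2, with no step size at all.
x = Dual(2.0, 1.0)          # seed dx/dx = 1
y = x * x * sin(x)          # y.der == 2x*sin(x) + x**2*cos(x) evaluated at 2
```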
6

Aboubakr, Ahmed, and Ahmed A. Shabana. "Efficient and Robust Implementation of the TLISMNI Method." In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-48105.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The dynamics of large-scale and complex multibody systems (MBS) that include flexible bodies and contact/impact pairs is governed by stiff equations. Because explicit integration methods can be very inefficient and often fail in the case of stiff problems, the use of implicit numerical integration methods is recommended in this case. This paper presents a new and efficient implementation of the two-loop implicit sparse matrix numerical integration (TLISMNI) method proposed for the solution of constrained rigid and flexible MBS differential and algebraic equations. The TLISMNI method has desirable features that include avoiding numerical differentiation of the forces, allowing for an efficient sparse matrix implementation, and ensuring that the kinematic constraint equations are satisfied at the position, velocity, and acceleration levels. In this method, a sparse Lagrangian augmented form of the equations of motion that ensures that the constraints are satisfied at the acceleration level is first used to solve for all the accelerations and Lagrange multipliers. The generalized coordinate partitioning or recursive methods can be used to satisfy the constraint equations at the position and velocity levels. In order to improve the efficiency and robustness of the TLISMNI method, the simple iteration and Jacobian-free Newton–Krylov approaches are used in this investigation. The new implementation is tested using several low-order formulas that include the Hilber–Hughes–Taylor (HHT), L-stable Park, A-stable trapezoidal, and A-stable BDF methods. The HHT method allows for the inclusion of numerical damping. A discussion of which method is more appropriate for a certain application is provided. The paper also discusses TLISMNI implementation issues including the step size selection, the convergence criteria, the error control, and the effect of the numerical damping.
The use of the computer algorithm described in this paper is demonstrated by solving complex rigid and flexible tracked vehicle models, railroad vehicle models, and very stiff structure problems. The results, obtained using these low-order formulas, are compared with the results obtained using the explicit Adams–Bashforth predictor-corrector method. Using the TLISMNI method, which does not require numerical differentiation of the forces and allows for an efficient sparse matrix implementation, to solve complex and stiff structure problems leads to significant computational cost savings, as demonstrated in this paper. In some problems, it was found that the new TLISMNI implementation is 35 times faster than the explicit Adams–Bashforth method.
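The stiffness argument behind implicit integration can be seen on the scalar test problem y' = -λy (a generic textbook illustration, not the TLISMNI algorithm itself): with a step size well above the explicit stability limit 2/λ, forward Euler diverges while the implicit backward Euler update decays toward the exact solution.

```python
lam = 1000.0         # stiffness parameter; exact solution exp(-lam*t) -> 0
dt = 0.01            # step size, 5x the forward-Euler stability limit 2/lam
y_exp = y_imp = 1.0

for _ in range(100):
    # Forward Euler: y_{n+1} = (1 - lam*dt) * y_n = -9 * y_n  -> blows up.
    y_exp = y_exp + dt * (-lam * y_exp)
    # Backward Euler: solve y_{n+1} = y_n - dt*lam*y_{n+1}  -> y_n / 11, stable.
    y_imp = y_imp / (1.0 + lam * dt)
```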
7

Urruzola, Javier, Alejo Avello, and Juan T. Celigüeta. "Optimization of Multibody Dynamics Using Pointwise Constraints and Arbitrary Functions." In ASME 1996 Design Engineering Technical Conferences and Computers in Engineering Conference. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/96-detc/mech-1569.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Abstract Multibody dynamics optimization requires the computation of sensitivities of the objective function and the constraints. This calculation can be done by two methods, direct differentiation and the adjoint variable method, which are reviewed in this paper. In either case, the complexity of the terms that appear in the formulation makes the use of symbolic computation for the derivation of sensitivities almost a necessity. An existing symbolic manipulator designed for multibody optimization has been enhanced with new and more powerful capabilities. The use of arbitrary functions as design variables and pointwise constraints permits the solution of more complex optimization problems. Some illustrative examples demonstrate the capacity of the method to handle complex optimization problems.
8

Callejo, Alfonso, Olivier Bauchau, Boris Diskin, and Li Wang. "Sensitivity Analysis of Beam Cross-Section Stiffness Using Adjoint Method." In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-67846.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The design optimization of rotorcraft through multidisciplinary aeroelastic models with hundreds of thousands of degrees of freedom requires a computationally efficient sensitivity analysis to obtain the objective function gradient. A fundamental part of rotorcraft analysis is the flexible multibody dynamics solver, which in the current work relies on an accurate three-dimensional representation of the beams. This paper presents the theoretical adjoint sensitivity analysis of the first structural analysis step, namely the computation of cross-sectional properties of the beams in the form of six-dimensional stiffness matrices. The adjoint equations are carefully derived, as are the derivatives of the objective function with respect to the design parameters. The method is then validated by comparing certain design sensitivities of a three-ply, composite cross-section with those obtained through real-step and complex-step numerical differentiation. The presented analysis allows the user to quantify the effect of basic structural parameters on fundamental sectional properties that can later be used in the full dynamic simulation.
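The complex-step differentiation used above for validation is straightforward to reproduce. A minimal sketch (assuming NumPy) of the first-order formula f'(x) ≈ Im f(x + ih)/h, which avoids the subtractive cancellation of real-step finite differences:

```python
import numpy as np

def complex_step(f, x, h=1e-30):
    """First derivative of a real-valued function that is analytic in the
    complex plane. No difference of nearby values is taken, so the step h
    can be made extremely small without loss of precision."""
    return np.imag(f(x + 1j * h)) / h

d = complex_step(np.sin, 0.7)   # matches cos(0.7) to machine precision
```

A central real-step difference with h anywhere near this small would return pure rounding noise; the complex-step result is insensitive to the choice of h.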
9

Hsu, Yuhung, and Kurt S. Anderson. "Efficient Direct Differentiation Sensitivity Analysis for General Multi-Rigid-Body Dynamic Systems." In ASME 2001 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/detc2001/vib-21335.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Abstract Sensitivity analysis plays an important role in modern engineering applications where design optimization is desired. A computationally efficient sensitivity analysis scheme is presented in this paper in an effort to facilitate design optimization as it pertains to general, complex multi-rigid-body dynamic systems. Based on the underlying velocity space projection, state space formulation, and direct differentiation approach, the first-order sensitivity information can be efficiently determined in a fully recursive manner for general multi-rigid-body dynamic systems involving an arbitrary number of closed loops. The overall computational expense of this method is bilinear in the number of design variables and the number of system generalized coordinates. The solution accuracy and the computational performance are demonstrated by several numerical examples.
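A scalar toy version of the direct differentiation approach (illustrative only; the paper's recursive formulation targets full multi-rigid-body systems): for y' = f(y, p), the sensitivity s = ∂y/∂p obeys s' = (∂f/∂y) s + ∂f/∂p and is integrated alongside the state.

```python
import math

def f(y, p):    return -p * y      # toy dynamics
def dfdy(y, p): return -p
def dfdp(y, p): return -y

def state_and_sensitivity(p, y0=1.0, h=1e-4, T=1.0):
    """Forward-Euler integration of the state together with its
    first-order sensitivity s = dy/dp (zero initial condition)."""
    y, s = y0, 0.0
    for _ in range(int(round(T / h))):
        y, s = y + h * f(y, p), s + h * (dfdy(y, p) * s + dfdp(y, p))
    return y, s

y1, s1 = state_and_sensitivity(2.0)   # exact: y = e^(-2), dy/dp = -e^(-2)
```

Each additional design parameter adds one such sensitivity equation, which is why the overall cost reported in the abstract is bilinear in the number of design variables and generalized coordinates.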
10

Wang, Yunlai, and Xi Wang. "Fast Model Predictive Control for Aircraft Engine Based on Automatic Differentiation." In ASME Turbo Expo 2020: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/gt2020-16161.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Abstract Nonlinear model predictive control (NMPC) is a strategy suitable for dealing with the highly complex, nonlinear, uncertain, and constrained dynamics involved in aircraft engine control problems. Because of the complexity of the algorithm and the real-time performance of the predictive model, it has thus far been infeasible to implement model predictive control in the real-time control system of an aircraft engine. In most nonlinear model predictive control, nonlinear interior point methods (IPM) are used to calculate the optimal solution, iterating toward it based on the Jacobian and Hessian matrices. Most nonlinear IPM solvers, such as MATLAB fmincon and IPOPT, cannot calculate the Jacobian and Hessian matrices precisely and quickly; instead, they use numerical differentiation for the Jacobian matrix and the BFGS method to approximate the Hessian matrix. Motivated by these limitations, we 1) improve the real-time performance of the predictive model by replacing the time-consuming component level model (CLM) with a neural network model trained on data from the component level model, and 2) precisely calculate the Jacobian and Hessian matrices using automatic differentiation, and propose a group of algorithms to make the NMPC strategy quicker, which include exploiting the structure of the predictive model and the weighted-sum structure of the Hessian matrix in the IPM. Finally, considering input and output constraints, the fast NMPC strategy is compared with normal NMPC. Simulation results, with mean computation times of 19.3%–27.9% of those of normal NMPC on different platforms, verify that the proposed fast NMPC can improve real-time performance during acceleration and deceleration of an aircraft engine.
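The exactness advantage of automatic differentiation over numerical differencing can be seen with a minimal forward-mode AD sketch built on dual numbers (a toy illustration, not the paper's toolchain):

```python
class Dual:
    """Value/derivative pair; arithmetic propagates exact derivatives."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(float(o))
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def jacobian_column(f, x, j):
    """Column j of the Jacobian of f: seed x_j with a unit derivative."""
    seeded = [Dual(v, 1.0 if i == j else 0.0) for i, v in enumerate(x)]
    return [out.dot for out in f(seeded)]

# Toy model f(x) = [x0 * x1, x0 + 3 * x1]; exact Jacobian is [[x1, x0], [1, 3]].
f = lambda x: [x[0] * x[1], x[0] + 3 * x[1]]
col0 = jacobian_column(f, [2.0, 5.0], 0)
col1 = jacobian_column(f, [2.0, 5.0], 1)
```

The derivatives come out exact to machine precision, with no step-size tuning, which is the property the authors exploit to feed precise Jacobians and Hessians to the interior point method.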

Organization reports on the topic "Complex differentiation method":

1

Ali, Rassul. Konzeptentwicklung für CDM-Projekte - Risikoanalyse der projektbezogenen Generierung von CO2-Zertifikaten (CER). Sonderforschungsgruppe Institutionenanalyse, 2007. http://dx.doi.org/10.46850/sofia.9783933795842.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The Clean Development Mechanism (CDM) is a complex legal-institutional system that, on the one hand, offers industrialized countries options for cost-effective emission reductions and, on the other, provides developing countries with opportunities for sustainable development. Investors face the difficulty of identifying suitable CDM projects from approximately 130 possible host countries and nearly 60 possible project activities. In order to develop points of reference for strategic investments, this paper identifies and categorizes the risks arising in the value creation process of bilateral energy projects into four action-related levels. At the host level, the focus is on political-institutional and sector-specific risks, while at the investor state level, the legal design of the CDM's complementary function is relevant. The project level covers technology- and process-related risks, with the identification of the reference case and the proof of additionality posing particular problems. The future design of the CDM and the reform of the procedure at the UNFCCC level pose a fundamental risk. A two-stage assessment procedure is proposed for risk assessment: a rough analysis captures sociographic, climate policy, institutional and sector-specific criteria of the host. The differentiation of the project stage allows the localization of the project in the value chain and a differentiation regarding the use of methods. The assessment of project registration is based on the methods used and gives recognition rates per method and project category; project performance is measured in terms of the ratio of emission reductions actually realized to those planned in the project documentation. A detailed analysis following the coarse analysis provides qualitative guidance for project evaluation. 
These include the Executive Board's methodological principles, the correct application of methodologies, the identification of the reference case, the proof of additionality, as well as the financial conditions of the relevant sector and publicity-related aspects. Despite the individuality of hosts and project technologies, the developed two-stage risk analysis allows, with relatively little effort and in line with business practice, an initial assessment of CDM project risks, so that overall it lays a fundamental building block for the elaboration of a strategic implementation and sustainable investment under the CDM.
2

Irudayaraj, Joseph, Ze'ev Schmilovitch, Amos Mizrach, Giora Kritzman, and Chitrita DebRoy. Rapid detection of food borne pathogens and non-pathogens in fresh produce using FT-IRS and raman spectroscopy. United States Department of Agriculture, October 2004. http://dx.doi.org/10.32747/2004.7587221.bard.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Rapid detection of pathogens and hazardous elements in fresh fruits and vegetables after harvest requires the use of advanced sensor technology at each step in the farm-to-consumer or farm-to-processing sequence. Fourier-transform infrared (FTIR) spectroscopy and the complementary Raman spectroscopy, an advanced optical technique based on light scattering, will be investigated for rapid and on-site assessment of produce safety. Paving the way toward the development of this innovative methodology, specific original objectives were to (1) identify and distinguish different serotypes of Escherichia coli, Listeria monocytogenes, Salmonella typhimurium, and Bacillus cereus by FTIR and Raman spectroscopy, (2) develop spectroscopic fingerprint patterns and detection methodology for fungi such as Aspergillus, Rhizopus, Fusarium, and Penicillium, and (3) validate a universal spectroscopic procedure to detect foodborne pathogens and non-pathogens in food systems. The original objectives proposed were very ambitious; hence, modifications were necessary to fit the funding. Elaborate experiments were conducted for sensitivity; additionally, testing a wide range of pathogens (more than the list originally proposed) was necessary to demonstrate the robustness of the instruments. Most crucially, algorithms for differentiating a specific organism of interest in mixed cultures were conceptualized and validated, and finally neural network and chemometric models were tested on a variety of applications. Food systems tested were apple juice and buffer systems. Pathogens tested include Enterococcus faecium, Salmonella enteritidis, Salmonella typhimurium, Bacillus cereus, Yersinia enterocolitica, Shigella boydii, Staphylococcus aureus, Serratia marcescens, Pseudomonas vulgaris, Vibrio cholerae, Hafnia alvei, Enterobacter cloacae, Enterobacter aerogenes, E. coli (O103, O55, O121, O30 and O26), Aspergillus niger (NRRL 326) and Fusarium verticillioides (NRRL 13586), Saccharomyces cerevisiae (ATCC 24859), Lactobacillus casei (ATCC 11443), Erwinia carotovora pv. carotovora and Clavibacter michiganense. Sensitivity of the FTIR detection was 10³ CFU/ml, and a clear differentiation was obtained between the different organisms both at the species and at the strain level for the tested pathogens. A very crucial step in the direction of analyzing mixed cultures was taken. The vector-based algorithm was able to identify a target pathogen of interest in a mixture of up to three organisms. Efforts will be made to extend this to 10-12 key pathogens. The experience gained was very helpful in laying the foundations for extracting the true fingerprint of a specific pathogen irrespective of the background substrate. This is very crucial especially when experimenting with solid samples as well as complex food matrices. Spectroscopic techniques, especially FTIR and Raman methods, are being pursued by agencies such as DARPA and the Department of Defense to combat threats to homeland security. Through the BARD US-3296-02 feasibility grant, the foundations for detection, sample handling, and the needed algorithms and models were developed. Successive efforts will be made in transferring the methodology to fruit surfaces and to other complex food matrices, which can be accomplished with creative sampling methods and experimentation. Even a marginal success in this direction will result in a very significant breakthrough because FTIR and Raman methods, in spite of their limitations, are still among the most rapid and nondestructive methods available. Continued interest and efforts in improving the components as well as refining the procedures are bound to result in a significant breakthrough in sensor technology for food safety and biosecurity.
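The mixed-culture identification step can be caricatured as spectral unmixing: model the measured spectrum as a non-negative combination of reference spectra and threshold the recovered weights. This is a sketch assuming SciPy, with synthetic "spectra"; it does not reproduce the project's actual vector-based algorithm:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
library = rng.random((200, 3))          # 200 spectral bins x 3 reference organisms
true_w = np.array([0.7, 0.0, 0.3])      # organism 2 absent from the mixture
mixture = library @ true_w

weights, _ = nnls(library, mixture)     # non-negative least-squares unmixing
present = weights > 0.05                # simple presence threshold
```

Non-negativity is what makes the recovered weights interpretable as abundances; an unconstrained least-squares fit could report physically meaningless negative contributions.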
3

Li, Hang, Hosam Hegazy, Xiaorui Xue, Jiansong Zhang, and Yunfeng Chen. BIM Standards for Roads and Related Transportation Assets. Purdue University, 2023. http://dx.doi.org/10.5703/1288284317641.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
With the industry foundation classes (IFC) building information modeling (BIM) standard (ISO 16739) adopted by AASHTO as the national standard for modeling bridge and road infrastructure projects, there comes a great opportunity to upgrade the INDOT model development standard for roads and related assets to 2D+3D BIM. This upgrade complies with the national standard and creates a solid foundation for preserving accurate asset information for lifecycle data needs. This study reviewed the current modeling standards for drainage and pavement at different state DOTs and investigated the interoperability between state-of-the-art design modeling software and IFC. It was found that while the latest modeling software is capable of supporting interoperability with IFC, there remain gaps that must be addressed to achieve smooth interoperability in support of lifecycle asset data management. Specifically, the prevalent use of IfcBuildingElementProxy and IfcCourse led to a lack of differentiation in the use of IFC entities for the representations of different components, such as inlets, outfalls, conduits, and different concrete pavement layers. This, in turn, caused challenges in the quality assurance (QA) of IFC models and rendered conventional model view definition (MVD)-based model checking insufficient. To address these gaps and push forward BIM for infrastructure at INDOT, efforts were made in this project to create initial model development instruction manuals that can serve as the foundation for further development and the eventual establishment of consistent and comprehensive IFC-based modeling standards and protocols. In addition, automated object classification leveraging invariant signatures of architecture, engineering, and construction (AEC) objects was investigated. Correspondingly, a QA method and tool were developed to check and identify the different components in an IFC model.
The developed tool achieved 91% accuracy on drainage components and 100% accuracy on concrete pavement in its tested performance. These solutions aim to support the lifecycle management of INDOT transportation infrastructure projects using BIM and IFC.
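The QA idea, flagging generic entities such as IfcBuildingElementProxy and classifying them from other evidence, can be sketched as a simple rule-based pass. The keyword rules and record layout below are invented for illustration; the actual tool classifies objects by invariant signatures, not by names:

```python
GENERIC_TYPES = {"IfcBuildingElementProxy", "IfcCourse"}
KEYWORD_RULES = {"inlet": "drainage-inlet", "outfall": "drainage-outfall",
                 "conduit": "drainage-conduit", "pavement": "pavement-layer"}

def classify_generic(records):
    """Flag records modeled with generic IFC entities and guess their
    real component class from name keywords."""
    flagged = []
    for rec in records:
        if rec["type"] in GENERIC_TYPES:
            name = rec.get("name", "").lower()
            label = next((lbl for kw, lbl in KEYWORD_RULES.items() if kw in name),
                         "unclassified")
            flagged.append((rec["id"], label))
    return flagged

model = [
    {"id": "e1", "type": "IfcBuildingElementProxy", "name": "Storm Inlet 12"},
    {"id": "e2", "type": "IfcCourse", "name": "PCC Pavement Layer 1"},
    {"id": "e3", "type": "IfcPipeSegment", "name": "Main conduit"},  # already specific
]
result = classify_generic(model)
```

A pass like this makes explicit which objects an MVD-based check cannot distinguish, which is exactly the gap the report identifies.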
4

Burns, Malcom, and Gavin Nixon. Literature review on analytical methods for the detection of precision bred products. Food Standards Agency, September 2023. http://dx.doi.org/10.46756/sci.fsa.ney927.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The Genetic Technology (Precision Breeding) Act (England) aims to develop a science-based process for the regulation and authorisation of precision bred organisms (PBOs). PBOs are created by genetic technologies but exhibit changes which could have occurred through traditional processes. This current review, commissioned by the Food Standards Agency (FSA), aims to clarify existing terminologies, explore viable methods for the detection, identification, and quantification of products of precision breeding techniques, address and identify potential solutions to the analytical challenges presented, and provide recommendations for working towards an infrastructure to support detection of precision bred products in the future. The review includes a summary of the terminology in relation to analytical approaches for detection of precision bred products. A harmonised set of terminology contributes towards promoting further understanding of the common terms used in genome editing. A review of the current state of the art of potential methods for the detection, identification and quantification of precision bred products in the UK has been provided. Parallels are drawn with the evolution of synergistic analytical approaches for the detection of Genetically Modified Organisms (GMOs), where molecular biology techniques are used to detect DNA sequence changes in an organism’s genome. The scope and limitations of targeted and untargeted methods are summarised. Current scientific opinion supports that modern molecular biology techniques (i.e., quantitative real-time Polymerase Chain Reaction (qPCR), digital PCR (dPCR) and Next Generation Sequencing (NGS)) have the technical capability to detect small alterations in an organism’s genome, given specific prerequisites of a priori information on the DNA sequence of interest and of the associated flanking regions. These techniques also provide the best infrastructure for developing potential approaches for detection of PBOs.
Should sufficient information be known regarding a sequence alteration, and should confidence be attributed to this being specific to a PBO line, then detection, identification and quantification can potentially be achieved. Genome editing and new mutagenesis techniques are umbrella terms, incorporating a plethora of approaches with diverse modes of action and resultant mutational changes. Generalisations regarding techniques and methods for detection for all PBO products are not appropriate, and each genome edited product may have to be assessed on a case-by-case basis. The application of modern molecular biology techniques, in isolation and by targeting just a single alteration, is unlikely to provide unequivocal evidence as to the source of that variation, be that as a result of precision breeding or as a result of traditional processes. In specific instances, detection and identification may be technically possible, if enough additional information is available in order to prove that a DNA sequence or sequences are unique to a specific genome edited line (e.g., following certain types of Site-Directed Nuclease-3 (SDN-3) based approaches). The scope, gaps, and limitations associated with traceability of PBO products were examined, to identify current and future challenges. Alongside these, recommendations were made to provide the infrastructure for working towards a toolkit for the design, development and implementation of analytical methods for detection of PBO products. Recognition is given that fully effective methods for PBO detection have yet to be realised, so these recommendations have been made as a tool for progressing the current state-of-the-art for research into such methods. Recommendations for the following five main challenges were identified.
Firstly, PBOs submitted for authorisation should be assessed on a case-by-case basis in terms of the extent, type and number of genetic changes, to make an informed decision on the likelihood of a molecular biology method being developed for unequivocal identification of that specific PBO. The second recommendation is that a specialist review be conducted, potentially informed by UK and EU governmental departments, to monitor those PBOs destined for the authorisation process, and actively assess the extent of the genetic variability and mutations, to make an informed decision on the type and complexity of detection methods that need to be developed. This could be further informed as part of the authorisation process and augmented via a publicly available register or database. Thirdly, further specialist research and development, allied with laboratory-based evidence, is required to evaluate the potential of using a weight of evidence approach for the design and development of detection methods for PBOs. This concept centres on using other indicators, aside from the single mutation of interest, to increase the likelihood of providing a unique signature or footprint. This includes consideration of the genetic background, flanking regions, off-target mutations, potential CRISPR/Cas activity, feasibility of heritable epigenetic and epitranscriptomic changes, as well as supplementary material from supplier, origin, pedigree and other documentation. Fourthly, additional work is recommended, evaluating the extent/type/nature of the genetic changes, and assessing the feasibility of applying threshold limits associated with these genetic changes to make any distinction on how they may have occurred. 
Such a probabilistic approach, supported with bioinformatics, to determine the likelihood of particular changes occurring through genome editing or traditional processes, could facilitate rapid classification and pragmatic labelling of products and organisms containing specific mutations more readily. Finally, several scientific publications on detection of genome edited products have been based on theoretical principles. It is recommended to further qualify these using evidenced based practical experimental work in the laboratory environment. Additional challenges and recommendations regarding the design, development and implementation of potential detection methods were also identified. Modern molecular biology-based techniques, inclusive of qPCR, dPCR, and NGS, in combination with appropriate bioinformatics pipelines, continue to offer the best analytical potential for developing methods for detecting PBOs. dPCR and NGS may offer the best technical potential, but qPCR remains the most practicable option as it is embedded in most analytical laboratories. Traditional screening approaches, similar to those for conventional transgenic GMOs, cannot easily be used for PBOs due to the deficit in common control elements incorporated into the host genome. However, some limited screening may be appropriate for PBOs as part of a triage system, should a priori information be known regarding the sequences of interest. The current deficit of suitable methods to detect and identify PBOs precludes accurate PBO quantification. Development of suitable reference materials to aid in the traceability of PBOs remains an issue, particularly for those PBOs which house on- and off-target mutations which can segregate. Off-target mutations may provide an additional tool to augment methods for detection, but unless these exhibit complete genetic linkage to the sequence of interest, these can also segregate out in resulting generations. 
Further research should be conducted regarding the likelihood of multiple mutations segregating out in a PBO, to help inform the development of appropriate PBO reference materials, as well as the potential of using off-target mutations as an additional tool for PBO traceability. Whilst recognising the technical challenges of developing and maintaining pan-genomic databases, this report recommends that the UK continues to consider development of such a resource, either as a UK centric version, or ideally through engagement in parallel EU and international activities to better achieve harmonisation and shared responsibilities. Such databases would be an invaluable resource in the design of reliable detection methods, as well as for confirming that a mutation is as a result of genome editing. PBOs and their products show great potential within the agri-food sector, necessitating a science-based analytical framework to support UK legislation, business and consumers. Differentiating between PBOs generated through genome editing compared to organisms which exhibit the same mutational change through traditional processes remains analytically challenging, but a broad set of diagnostic technologies (e.g., qPCR, NGS, dPCR) coupled with pan-genomic databases and bioinformatics approaches may help contribute to filling this analytical gap, and support the safety, transparency, proportionality, traceability and consumer confidence associated with the UK food chain.
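The a priori requirement the review stipulates, knowing the edited sequence and its flanking regions, can be phrased as a deliberately small matching sketch. The sequences below are hypothetical, and real pipelines operate on qPCR assays or NGS alignments rather than exact string search:

```python
def detect_edit(read, flank5, edit, flank3):
    """Call a known edit only when it appears between both expected
    flanking sequences, mirroring the a priori knowledge requirement."""
    return (flank5 + edit + flank3) in read

# Hypothetical single-base edit (A -> G) with 8-base flanks.
flank5, flank3 = "ACGTACGT", "TTGACCTA"
edited_read   = "GG" + flank5 + "G" + flank3 + "AC"
wildtype_read = "GG" + flank5 + "A" + flank3 + "AC"
```

Requiring both flanks to match is what anchors the call to a specific locus; the single-base change alone would match far too widely, which is the review's core point about unequivocal identification.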

To the bibliography