A selection of scholarly literature on the topic "Reduced-form framework"

Format your source according to APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Reduced-form framework".

Next to every source in the list of references you will find an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the metadata.

Journal articles on the topic "Reduced-form framework"

1

Biagini, Francesca, and Yinglin Zhang. "Reduced-form framework under model uncertainty." Annals of Applied Probability 29, no. 4 (August 2019): 2481–522. http://dx.doi.org/10.1214/18-aap1458.

2

Biagini, Francesca, Andrea Mazzon, and Katharina Oberpriller. "Reduced-form framework for multiple ordered default times under model uncertainty." Stochastic Processes and their Applications 156 (February 2023): 1–43. http://dx.doi.org/10.1016/j.spa.2022.11.003.

3

Gündüz, Yalin, and Marliese Uhrig-Homburg. "Does modeling framework matter? A comparative study of structural and reduced-form models." Review of Derivatives Research 17, no. 1 (April 17, 2013): 39–78. http://dx.doi.org/10.1007/s11147-013-9090-8.

4

Ge, L., X. Qian, and X. Yue. "Explicit formulas for pricing credit-linked notes with counterparty risk under reduced-form framework." IMA Journal of Management Mathematics 26, no. 3 (January 20, 2014): 325–44. http://dx.doi.org/10.1093/imaman/dpt028.

5

Buonanno, Amedeo, Antonio Nogarotto, Giuseppe Cacace, Giovanni Di Gennaro, Francesco A. N. Palmieri, Maria Valenti, and Giorgio Graditi. "Bayesian Feature Fusion Using Factor Graph in Reduced Normal Form." Applied Sciences 11, no. 4 (February 22, 2021): 1934. http://dx.doi.org/10.3390/app11041934.

Abstract:
In this work, we investigate an information fusion architecture based on a Factor Graph in Reduced Normal Form. This paradigm makes it possible to describe the fusion in a completely probabilistic framework, with the information related to the different features represented as messages that flow in a probabilistic network. In this way we build a context for the observed features, giving the solution great flexibility in managing different types of features with wrong or missing values, as required by many real applications. Moreover, by suitably modifying the messages that flow in the network, we obtain an effective way to condition the inference on the different reliability of each information source, or on the presence of a single unreliable signal. The proposed architecture has been used to fuse different detectors for an identity-document classification task, but its flexibility, extensibility, and robustness make it suitable for many real scenarios where the signal can be wrongly received or completely missing.
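
The message-passing fusion described above can be illustrated with a small numerical sketch. The snippet below is not the authors' implementation: it assumes discrete class messages from several detectors and only mimics how a fusion node might multiply them, flattening unreliable or missing sources toward a uniform message (all names and thresholds are illustrative).

import numpy as np

def fuse_messages(messages, reliabilities):
    """Fuse per-feature class messages by a reliability-weighted product.

    messages      : list of 1-D arrays, each a normalized class likelihood
    reliabilities : weights in [0, 1]; 0 flattens a source to a uniform
                    (uninformative) message, mimicking a missing or
                    unreliable signal
    """
    n_classes = len(messages[0])
    uniform = np.full(n_classes, 1.0 / n_classes)
    posterior = np.full(n_classes, 1.0 / n_classes)  # flat prior
    for m, r in zip(messages, reliabilities):
        tempered = r * m + (1.0 - r) * uniform  # down-weight shaky sources
        posterior *= tempered
    return posterior / posterior.sum()

# Two detectors agree on class 0; a third, unreliable one is discounted.
msgs = [np.array([0.7, 0.2, 0.1]),
        np.array([0.6, 0.3, 0.1]),
        np.array([0.1, 0.1, 0.8])]
print(fuse_messages(msgs, reliabilities=[1.0, 1.0, 0.2]))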
6

Biagini, Francesca, and Katharina Oberpriller. "Reduced-form setting under model uncertainty with non-linear affine intensities." Probability, Uncertainty and Quantitative Risk 6, no. 3 (2021): 159. http://dx.doi.org/10.3934/puqr.2021008.

Abstract:
In this paper we extend the reduced-form setting under model uncertainty introduced in [5] to include intensities following an affine process under parameter uncertainty, as defined in [15]. This framework allows us to introduce a longevity bond under model uncertainty in a way consistent with the classical case under one prior and to compute its valuation numerically. Moreover, we price a contingent claim with the sublinear conditional operator such that the extended market is still arbitrage-free in the sense of “no arbitrage of the first kind” as in [6].
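
For orientation, the classical single-prior objects that this setting generalizes can be written in stylized notation (a sketch under assumptions; the symbols below are illustrative, not the paper's exact formulation):

% Reduced-form price of a claim X paid at T contingent on survival of the
% default time \tau, with short rate r and affine (CIR-type) intensity:
V_t = \mathbb{E}^{\mathbb{Q}}\!\left[ e^{-\int_t^T (r_s + \lambda_s)\,ds}\, X \;\middle|\; \mathcal{F}_t \right],
\qquad
d\lambda_t = \kappa(\theta - \lambda_t)\,dt + \sigma\sqrt{\lambda_t}\,dW_t .

Under model uncertainty, the linear expectation is replaced by a sublinear operator over a possibly non-dominated family of priors, \widehat{\mathbb{E}}_t[\,\cdot\,] = \sup_{P \in \mathcal{P}} \mathbb{E}^{P}[\,\cdot \mid \mathcal{F}_t].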
7

Andersen, Torben G., Tim Bollerslev, and Xin Huang. "A reduced form framework for modeling volatility of speculative prices based on realized variation measures." Journal of Econometrics 160, no. 1 (January 2011): 176–89. http://dx.doi.org/10.1016/j.jeconom.2010.03.029.

8

Cassar, Johann, Andrew Sammut, Nicholas Sammut, Marco Calvi, Sasa Spasic, and Dragana Popovic Renella. "Performance Analysis of a Reduced Form-Factor High Accuracy Three-Axis Teslameter." Electronics 8, no. 11 (October 28, 2019): 1230. http://dx.doi.org/10.3390/electronics8111230.

Abstract:
In the framework of the SwissFEL project at the Paul Scherrer Institute (PSI), a Hall probe bench is being developed for the high-precision magnetic characterization of the insertion devices for the ATHOS soft X-ray beamline. For this purpose, a novel three-axis teslameter has been developed, which will be placed between the undulator and its outer shell in a very limited volumetric space of 150 × 50 × 45 mm. Together with a Hall probe at the center of the cross-sectional area of the undulator, the setup will traverse along the undulator length on a specifically designed rig with minimal vibrations. This teslameter has all the analog signal conditioning circuitry for the Hall probe and also has on-board 24-bit digitization. The instrument also provides an interface to a linear absolute encoder. The previous instrumentation had only analog signal conditioning circuitry, while digitization was done off board. The new instrument also provides a very accurate magnetic field map in the µT range with simultaneous readings from the position encoder at an accuracy of ±3 µm. In this paper, a series of tests performed at PSI to establish the measuring precision and repeatability of the instrument is described.
9

Jeong, Shinkyu, and Hyunyul Kim. "Development of an Efficient Hull Form Design Exploration Framework." Mathematical Problems in Engineering 2013 (2013): 1–12. http://dx.doi.org/10.1155/2013/838354.

Abstract:
A high-efficiency design exploration framework for hull forms has been developed. The framework consists of multiobjective shape optimization and design knowledge extraction. In the multiobjective shape optimization, a multiobjective genetic algorithm (MOGA) using the response surface methodology is introduced to achieve efficient design space exploration. As the response surface methodology, the Kriging model, which was developed in the fields of spatial statistics and geostatistics, is applied. A new surface modification method using a shifting method and radial basis function interpolation is also adopted here to represent various hull forms. This method enables both global and local modifications of the hull form with fewer design variables. In design knowledge extraction, two data mining techniques—functional analysis of variance (ANOVA) and self-organizing map (SOM)—are applied to acquire useful design knowledge about a hull form. The present framework has been applied to hull form optimization, exploring the minimum-wave-drag configuration over a wide range of speeds. The results show that the present method markedly reduces the design period. From the results of data mining, it is possible to identify the design variables controlling wave drag performance in different speed regions and their corresponding geometric features.
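
As a rough illustration of the surrogate-assisted loop described above (a sketch under assumptions; the study's CFD solver, hull parameterization, and MOGA are not reproduced, and wave_drag below is a hypothetical toy function), a Kriging model can screen candidate designs so that only promising ones reach the expensive evaluation:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def wave_drag(x):
    # Placeholder for the expensive CFD evaluation (toy function only).
    return np.sum((x - 0.3) ** 2, axis=-1) + 0.05 * np.sin(10 * x[..., 0])

X_train = rng.uniform(0, 1, size=(20, 3))    # sampled design variables
y_train = wave_drag(X_train)                 # expensive evaluations

kriging = GaussianProcessRegressor(kernel=RBF(length_scale=0.2))
kriging.fit(X_train, y_train)

candidates = rng.uniform(0, 1, size=(500, 3))     # e.g. GA offspring
mean, std = kriging.predict(candidates, return_std=True)
best = candidates[np.argmin(mean - 1.96 * std)]   # optimistic pick
print("candidate sent to the true solver:", best, wave_drag(best))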
10

Wang, Xingchun. "ANALYTICAL VALUATION OF VULNERABLE OPTIONS IN A DISCRETE-TIME FRAMEWORK." Probability in the Engineering and Informational Sciences 31, no. 1 (September 13, 2016): 100–120. http://dx.doi.org/10.1017/s0269964816000292.

Abstract:
In this paper, we present a pricing model for vulnerable options in discrete time. A generalized autoregressive conditional heteroscedasticity (GARCH) process is used to describe the variance of the underlying asset, which is correlated with the returns of the asset. As for counterparty default risk, we study it in a reduced-form model, and the proposed model allows for correlation between the intensity of default and the variance of the underlying asset. In this framework, we derive a closed-form solution for vulnerable options and investigate the quantitative impact of counterparty default risk on option prices.
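
A stylized discrete-time specification of this kind (an illustrative sketch; the linkage coefficients a and b below are assumptions, not the paper's model) couples the GARCH variance recursion to the default intensity:

% GARCH(1,1) dynamics of the underlying and a variance-linked intensity:
r_{t+1} = \mu + \sqrt{h_{t+1}}\,\varepsilon_{t+1}, \qquad
h_{t+1} = \omega + \alpha\, h_t \varepsilon_t^{2} + \beta\, h_t, \qquad
\lambda_{t+1} = a + b\, h_{t+1},

so that the counterparty defaults over (t, t+1] with conditional probability 1 - e^{-\lambda_{t+1}}, tying default risk to the same variance that drives the option payoff.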

Dissertations on the topic "Reduced-form framework"

1

OBERPRILLER, KATHARINA. "Reduced-form framework under model uncertainty and generalized Feynman-Kac formula in the G-setting." Doctoral thesis, Gran Sasso Science Institute, 2022. http://hdl.handle.net/20.500.12571/25844.

Abstract:
The thesis deals with topics under model uncertainty and consists of two main parts. In the first part, we introduce a reduced-form framework in the presence of multiple default times under model uncertainty. In particular, we define a sublinear conditional operator with respect to a family of possibly non-dominated priors for a filtration progressively enlarged by multiple ordered defaults. Moreover, we analyze the properties of this sublinear conditional expectation as a pricing instrument and consider an application to insurance market modeling with non-linear affine intensities. In the second part of this thesis, we prove a Feynman-Kac formula under volatility uncertainty which allows us to take into account a discounting factor. In the first part, we generalize the results of a reduced-form framework under model uncertainty for a single default time in order to consider multiple ordered default times. The construction of these default times is based on a generalization of the Cox model under model uncertainty. Within this setting, we progressively enlarge a reference filtration by N ordered default times and define the sublinear expectation with respect to the enlarged filtration and a set of possibly non-dominated probability measures. We derive a weak dynamic programming principle for the operator and use it for the valuation of credit portfolio derivatives under model uncertainty. Moreover, we analyze the properties of the operator as a pricing instrument under model uncertainty. First, we derive some robust superhedging duality results for payment streams, which allow us to interpret the operator as a pricing instrument in the context of superhedging. Second, we use the operator to price a contingent claim such that the extended market is still arbitrage-free in the sense of “no arbitrage of the first kind”. Moreover, we provide some conditions which guarantee the existence of a modification of the operator with quasi-sure càdlàg paths. Finally, we conclude this part with an application to insurance market modeling. For this purpose, we extend the reduced-form framework under model uncertainty for a single default time to include intensities following a non-linear affine process under parameter uncertainty. This allows us to introduce a longevity bond under model uncertainty in a way consistent with the classical case under a single prior and to compute its valuation numerically. In the second part, we focus on volatility uncertainty and, more specifically, on the G-expectation setting. In this setting, we provide a generalization of a Feynman-Kac formula under volatility uncertainty in the presence of a linear term in the PDE due to discounting. We state our result under hypotheses different from those in the current literature, where the Lipschitz continuity of certain functionals is assumed, which is not necessarily satisfied in our setting. Thus, we establish for the first time a relation between non-linear PDEs and the G-conditional expectation of a discounted payoff. To do so, we introduce a family of fully non-linear PDEs identified by a regularizing parameter with terminal condition φ at time T > 0, and obtain the G-conditional expectation of a discounted payoff as the limit of the solutions of this family of PDEs as the regularizing parameter goes to zero. Using a stability result, we prove that this limit is a viscosity solution of the limit PDE. Therefore, we are able to show that the G-conditional expectation of the discounted payoff is a solution of the PDE. In applications, this permits the computation of such a sublinear expectation in a computationally efficient way.
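
The classical Cox construction that the thesis generalizes, together with the sublinear operator taken over a family of priors, can be sketched as follows (illustrative notation, not the thesis's exact definitions):

% Cox-type default time: \lambda is the intensity and E a unit exponential
% random variable independent of the reference filtration:
\tau = \inf\Big\{ t \ge 0 : \int_0^t \lambda_s \, ds \ge E \Big\},
\qquad
\widehat{\mathbb{E}}_t[X] = \operatorname{ess\,sup}_{P \in \mathcal{P}} \mathbb{E}^{P}\big[ X \mid \mathcal{G}_t \big],

where \mathcal{G} denotes the reference filtration progressively enlarged by the (ordered) default times and \mathcal{P} may be non-dominated.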

Book chapters on the topic "Reduced-form framework"

1

Sriram, Anitha, Rahul Kumar, Indrani Maji, Dharmendra Kumar Khatri, Shashi Bala Singh, Saurabh Srivastava, and Pankaj Kumar Singh. "Nanotechnology Impact and It’s Future Perspectives on COVID-19 Therapy." In An Update on SARS-CoV-2: Damage-response Framework, Potential Therapeutic Avenues and the Impact of Nanotechnology on COVID-19 Therapy Volume 1, 192–203. BENTHAM SCIENCE PUBLISHERS, 2022. http://dx.doi.org/10.2174/9789815039863122010012.

Abstract:
Nanoscience deals with the study of materials at nanoscale dimensions. It has been shown that modernizing drugs into a nanosized delivery system affords fruitful outcomes, with enhanced efficacy against the virus, a reduction of the drug dose, and improved targetability with enhanced/sustained bioavailability. If the drug dose is reduced, the dose-related side effects are reduced. Dose reduction through nanotechnology also limits non-target tissue toxicity. Hence, nanotechnology has remarkable potential in the pharmaceutical industry. Due to their small, tunable size at the molecular level, nanopharmaceuticals (in the form of nanoparticles) easily enter cells or tissues. In the case of COVID-19, they interact with the virus and might inhibit the viral processes of 2019-nCoV.
2

Malin, Nigel. "The impact of service cutbacks, job insecurity and globalisation." In De-Professionalism and Austerity, 103–20. Policy Press, 2020. http://dx.doi.org/10.1332/policypress/9781447350163.003.0007.

Abstract:
The austerity agenda links deficit reduction to cuts in public service budgets. The main argument is that de-professionalisation lies at the heart of assessing the impact of the ‘commercial model’ in the form of efficiencies, pay cuts, rationing, and reduced training/staff development, potentially affecting overall economic productivity. This chapter begins to shape an analytical framework for understanding the UK context, in which a process of de-professionalisation exists within an employment culture dominated by inequality, precarity, globalisation and declining solidarity.
3

Eliseev, Andrey Vladimirovich, Nikolay Konstantinovich Kuznetsov, and Artem Sergeevich Mironov. "System Approaches to the Problem of Dynamic Damping of Vibrations of Technical Objects of Transport and Technological Purpose." In Trends, Paradigms, and Advances in Mechatronics Engineering, 127–42. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-5887-7.ch007.

Abstract:
The methodological basis for solving the problems of assessing, controlling, and shaping the dynamic states of technical objects of transport and technological purpose under vibration loading is developed. Within the framework of structural mathematical modeling, mechanical oscillatory systems used as design schemes of technical objects are compared with schemes of dynamically equivalent automatic control systems. New results are presented in the field of techniques for evaluating the dynamic states and forms of interaction of elements of mechanical oscillatory systems with reduced elastic characteristics. For mechanical oscillatory systems, an interpretation of dynamic states and forms of interaction as an oriented graph has been developed, creating prerequisites for the classification of the set of dynamic states in which a system can reside under various types of force excitation.
4

Wedderburn-Bisshop, Gerard, and Lauren Rickards. "Livestock's Near-Term Climate Impact and Mitigation Policy Implications." In Research Anthology on Environmental and Societal Impacts of Climate Change, 1027–48. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3686-8.ch051.

Abstract:
Human consumption of livestock remains a marginal issue in climate change debates, partly due to the IPCC's arbitrary adoption of the 100-year global warming potential framework to compare different emissions, blinding us to the significance of shorter-term emissions, namely methane. Together with the gas it reacts to form - tropospheric ozone - methane has been responsible for 37% of global warming since 1750, yet its atmospheric lifetime is just 10 years. Neglecting its role means overlooking powerful mitigation opportunities. The chapter discusses the role of livestock, the largest anthropogenic methane source, and the need to include reduced meat consumption in climate change responses. Looking beyond the conventional focus on the consumer, we point to some underlying challenges in addressing the meat-climate relationship, including the climate science community's reluctance to adopt a short-term focus in its climate projections. Policy options are presented.
5

Halder, Basudev, Sucharita Mitra, and Madhuchhanda Mitra. "Healthcare Automation System by Using Cloud-Based Telemonitoring Technique for Cardiovascular Disease Classification." In Research Anthology on Telemedicine Efficacy, Adoption, and Impact on Healthcare Delivery, 474–93. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-8052-3.ch025.

Abstract:
This paper illustrates a cloud-based telemonitoring framework that implements a healthcare automation system for myocardial infarction (MI) disease classification. For this purpose, pathological features of the ECG signal, such as an elevated ST segment, an inverted T wave, and a pathological Q wave, are extracted, and MI disease is detected by a rule-based rough set classifier. The information system uses the pathological features as attributes together with a decision class. The degree of attribute dependency yields a smaller set of attributes and predicts comprehensive decision rules. For the MI decision, the ECG signal is shared with the respective cardiologist, who analyses it and prescribes the required medication to the first-aid professional through the cloud. The first-aid professional is notified accordingly to attend to the patient immediately. To avoid an identity crisis, the ECG signal is watermarked and uploaded to the cloud in compressed form. The proposed system reduces both data storage space and transmission bandwidth, which facilitates access to quality care at a much reduced cost.
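
The rough-set dependency degree that drives the attribute reduction above has a standard closed form (Pawlak's definition; here C would play the role of the ECG condition attributes and D the MI decision):

% Dependency of decision D on condition attributes C over universe U:
\gamma_C(D) = \frac{\lvert \mathrm{POS}_C(D) \rvert}{\lvert U \rvert},
\qquad
\mathrm{POS}_C(D) = \bigcup_{X \in U/D} \underline{C}X ,

where \underline{C}X is the C-lower approximation of X; attributes whose removal leaves \gamma_C(D) unchanged are redundant and can be dropped from the decision rules.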
6

Mitchell, Peter. "Origins." In The Donkey in Human History. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198749233.003.0008.

Abstract:
Over 50,000 years ago a Neanderthal hunter approached a wild ass on the plains of northeastern Syria. Taking aim from the right as the animal nervously assessed the threat, he launched his stone-tipped spear into its neck, penetrating the third cervical vertebra and paralyzing it immediately. Butchered at the kill site, this bone and most of the rest of the animal were taken back to the hunter’s camp at Umm el Tlel, a short distance away. Closely modelled on archaeological observations of that vertebra and the Levallois stone point still embedded within it, this incident helps define the framework for this chapter. At the start of the period it covers, human interactions with the donkey’s ancestors were purely a matter of hunting wild prey, but by its end the donkey had been transformed into a domesticated animal. Chapter 2 thus looks at how this process came about, where it did so, and what the evolutionary history of the donkey’s forebears had been until that point. Donkeys and the wild asses that are their closest relatives form part of the equid family to which zebras and horses also belong. Collectively, equids, like rhinoceroses and tapirs, fall within the Perissodactyla, the odd-toed division of hoofed mammals or ungulates. Though this might suggest a close connection with the much larger order known as the Artiodactyla, the even-toed antelopes (including deer, cattle, sheep, and goats), their superficial resemblances may actually reflect evolutionary convergence; some genetic studies hint that perissodactyls are more closely related to carnivores. Like tapirs and rhinoceroses, the earliest equids had three toes, not the one that has characterized them for the past 40 million years. That single toe, the third, now bears all their weight in the form of a single, enlarged hoof with the adjacent toes reduced to mere splints. This switch, and the associated elongation of the third (or central) metapodial linking the toe to the wrist or ankle, is one of the key evolutionary transformations through which equids have passed. A second involves diet since the earliest perissodactyls were all browsers, not grazers like the equids of today.
7

G. Kaplan, Ilya. "Modern State of the Conventional DFT Method Studies and the Limits Following from the Quantum State of the System and Its Total Spin." In Density Functional Theory - Recent Advances, New Perspectives and Applications [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.102670.

Abstract:
At present, the density functional theory (DFT) approach has become the most widely used method for studying molecules and solids. In the atmosphere of such great popularity, it is particularly important to know the limits of applicability of DFT methods. In this chapter, I discuss the modern state of DFT studies on the basis of recent publications and consider in detail two cases in which the conventional DFT approaches, which use only the electron density and its gradient-based modifications, cannot be applied. The first case is related to the total spin S of the state. As I rigorously proved for an arbitrary N-electron state by group-theoretical methods, the electron density does not depend on the total spin S of the state. From this it follows that the Kohn-Sham equations have the same form for states with different S. A critical survey of the elaborated DFT procedures in which spin is taken into account shows that they modify only the exchange functionals, while the correlation functionals do not correspond to the spin of the state. The point is that the conception of spin cannot, in principle, be defined in the framework of the electron density formalism, and this is the main reason for the problems that arise when the magnetic properties of the transition metals are studied by DFT approaches. A possible way of resolving the spin problems can be found in the two-particle reduced density matrix formulation of DFT. In the end, the case of degenerate states is considered, in which, as follows from the adiabatic approximation, the electron density may not be defined, since electronic and nuclear motions cannot be separated because the vibronic interaction mixes them.
8

Kanade, Aditya, Mansi Sharma, and Muniyandi Manivannan. "Virtual Reality, Robotics, and Artificial Intelligence." In The Internet of Medical Things (IoMT) and Telemedicine Frameworks and Applications, 105–23. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3533-5.ch005.

Abstract:
Stroke is a leading cause of death in humans. In the US, someone has a stroke every 40 seconds. More than half of the stroke-affected patients over the age of 65 have reduced mobility. The prevalence of stroke in our society is increasing; however, since stroke comes with a lot of post-hospitalization care, a lot of infrastructure is lacking to cater to the demands of the increasing population of patients. In this chapter, the authors look at three technological interventions in the form of machine learning, virtual reality, and robotics. They look at how the research is evolving in these fields and pushing for easier and more reliable ways for rehabilitation. They also highlight methods that show promise in the area of home-based rehabilitation.
9

Graham, Alan. "Setting the Goal: Modern Vegetation of North America Composition and Arrangement of Principal Plant Formations." In Late Cretaceous and Cenozoic History of North American Vegetation (North of Mexico). Oxford University Press, 1999. http://dx.doi.org/10.1093/oso/9780195113426.003.0004.

Abstract:
Vegetation is the plant cover of a region, which usually refers to the potential natural vegetation prior to any intensive human disturbance. The description of vegetation for an extensive area involves the recognition and characterization of units called formations, which are named with reference to composition (e.g., coniferous), aspect of habit (deciduous), distribution (western North America), and climate, either directly (tropical) or indirectly (tundra). Further subdivisions are termed associations or series, such as the beech-maple association or series within the deciduous forest formation. Formations and associations constitute a convenient organizational framework for considering the development of vegetation through Late Cretaceous and Cenozoic time. For this purpose seven extant plant formations are recognized for North America: (1) tundra, (2) coniferous forest, (3) deciduous forest, (4) grassland, (5) shrubland/chaparral- woodland- savanna, (6) desert, and (7) elements of a tropical formation. Several summaries are available for the modern vegetation of North America, including Barbour and Billings (1988), Barbour and Christensen, Kuchler (1964), and Vankat (1979). The following discussions are based primarily on these surveys. Tundra (Fig. 1.2) is a treeless vegetation dominated by shrubs and herbs, and it is characteristic of the cold climates of polar regions (Arctic tundra) and high-altitude regions (alpine tundra). In the Arctic tundra a few isolated trees or small stands may occur locally, such as Picea glauca (white spruce), but these are always in protected habitats. The Arctic region experiences nearly continuous darkness in midwinter, and nearly continuous daylight in midsummer. There is a short growing season of only 6-24 weeks; this accounts, in part, for the fact that 98% of all Arctic tundra plants are perennials (Vankat, 1979). Strong winds are another feature of the Arctic landscape, often exceeding 65 km/h for 24 h or more. They likely account for the frequency of rosettes, persistent dead leaves, and the cushion growth form, in the center of which wind velocities may be reduced by 90%. The harsh growing conditions also result in leaves of the microphyllous size class being comparable to those of desert plants. Vegetative reproduction and self-pollination is common, and phenotypic plasticity is high among Arctic tundra plants.
10

Constantinesco, Thomas. "Coda." In Writing Pain in the Nineteenth-Century United States, 203–8. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780192855596.003.0008.

Abstract:
The coda considers the ways literary forms produce and circulate our thinking about the function of pain in individual and social formations, relating the book’s claims and interventions to the critical purchase of literature and literary studies. While the work of literature has often been reduced to its dimension of storytelling and to its capacity to foster empathetic identification through narratives of pain, attending to the labor of form shows how literature effectively theorizes the affordances of pain—its challenges and its potentials—outside the frameworks of medicalization and sentimental sympathy, as a generative feeling to be neither anesthetized nor bemoaned.

Conference papers on the topic "Reduced-form framework"

1

Dhingra, A. K., A. N. Almadi, and D. Kohli. "A Framework for Closed-Form Displacement Analysis of 10-Link 1-DOF Mechanisms." In ASME 1998 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/detc98/mech-5885.

Abstract:
This paper presents a closed-form approach, based on the theory of resultants, to the displacement analysis problem of planar 10-link 1-DOF mechanisms. Since each 10-link mechanism has 4 independent loops, its displacement analysis problem can be written as a system of 4 reduced loop-closure equations in 4 unknowns. This system of 4 reduced loop-closure equations, for all non-trivial mechanisms resulting from 230 10-link kinematic chains, can be classified into 22 distinct structures. Using the successive and repeated elimination procedures presented herein, it is shown how each of these structures can be reduced into a univariate polynomial devoid of any extraneous roots. This univariate polynomial corresponds to the input-output (I/O) polynomial of the mechanism. Based on the results presented, it can be seen that the displacement analysis problem for all 10-link 1-DOF mechanisms is completely solvable, in closed form, devoid of any extraneous roots.
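
The elimination machinery at the core of this approach can be demonstrated with a computer-algebra toy example (illustrative polynomials, not the paper's loop-closure system): the resultant of two polynomials with respect to one unknown eliminates that unknown, and repeating the step collapses a multivariate system to a univariate input-output polynomial.

from sympy import symbols, resultant

x, y = symbols('x y')

# Toy stand-in for two loop-closure equations in unknowns x and y.
f = x**2 + y**2 - 4
g = x*y - 1

# Eliminate x: the resultant is a univariate polynomial in y whose
# roots are exactly the y-components of the system's solutions.
poly_y = resultant(f, g, x)
print(poly_y)   # -> y**4 - 4*y**2 + 1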
2

Hencey, Brandon, and Andrew Alleyne. "Robust Controller Interpolation via Parameterization." In ASME 2008 Dynamic Systems and Control Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/dscc2008-2269.

Abstract:
Arbitrarily fast switching or blending among controllers often leads to reduced performance and possible instability. This paper presents a controller interpolation framework for robustly transitioning among controllers with respect to the H∞ norm. The framework leverages a robust controller parameterization, analogous to a Youla parameterization of nominally stabilizing controllers, to greatly simplify the robust controller interpolation problem. In addition, an explicit construction is provided for the robust interpolated controller. A practical example is presented in the form of a robust interpolated controller implemented on an electro-hydraulic test bed.
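
For a flavor of why a parameterization helps (a textbook special case, assuming an open-loop stable plant P; the paper's robust parameterization is the analogous construction for the H∞ setting), every stabilizing controller can be written as

% Youla parameterization for a stable plant P, with stable parameter Q:
K(Q) = Q\,\big(I - P\,Q\big)^{-1}, \qquad Q \in \mathcal{RH}_\infty ,

and the closed-loop maps are affine in Q (for instance, the complementary sensitivity equals PQ). Blending Q_1 and Q_2 as Q(\alpha) = \alpha Q_1 + (1-\alpha) Q_2 therefore interpolates between two controllers while preserving closed-loop stability for every \alpha in [0, 1].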
3

Ye, Qian, Yang Guo, Shikui Chen, Xianfeng David Gu, and Na Lei. "Topology Optimization of Conformal Structures Using Extended Level Set Methods and Conformal Geometry Theory." In ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/detc2018-85655.

Abstract:
In this paper, we propose a new method to approach the problem of structural shape and topology optimization on manifolds (or free-form surfaces). A manifold is conformally mapped onto a 2D rectangular domain, where the level set functions are defined. With conformal mapping, the corresponding covariant derivatives on a manifold can be represented by the Euclidean differential operators multiplied by a scalar. Therefore, the topology optimization problem on a free-form surface can be formulated as a 2D problem in Euclidean space. To evolve the boundaries on a free-form surface, we propose a modified Hamilton-Jacobi equation and solve it on a 2D plane following conformal geometry theory. In this way, we can fully utilize the conventional level-set-based computational framework. Compared with other established approaches, which need to project the Euclidean differential operators onto the manifold, the computational difficulty of our method is greatly reduced while all the advantages of conventional level set methods are well preserved. We hope the proposed computational framework can provide a timely solution to increasing applications involving innovative structural designs on free-form surfaces in different engineering fields.
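
The pieces involved can be sketched in standard notation (illustrative; the paper's modified equation carries the conformal scaling explicitly): for a conformal parameterization with metric g = e^{2u}(dx^2 + dy^2), surface gradient norms reduce to scaled planar ones, so the level-set evolution can be advanced entirely on the 2D domain:

% Hamilton-Jacobi level-set evolution with normal speed v_n on the surface,
% and the conformal relation between surface and planar gradient norms:
\frac{\partial \phi}{\partial t} + v_n \,\lvert \nabla_{\!M}\, \phi \rvert = 0,
\qquad
\lvert \nabla_{\!M}\, \phi \rvert = e^{-u} \,\lvert \nabla_{\mathbb{R}^2}\, \phi \rvert .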
4

Kostovich, Vincent, Daniel A. McAdams, and Seung Ki Moon. "Representing User Activity and Product Function for Universal Design." In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-87507.

Abstract:
This paper presents a product analysis framework to improve universal design research and practice. Seventeen percent of the US population has some form of disability. Nevertheless, many companies are unfamiliar with approaches to achieving universal design. A key element of the framework is the combination of activity diagrams and functional models. The framework is applied in the analysis of 20 pairs of products that satisfy a common high-level need but differ in that one product is intended for fully able users and the other for persons with some disability or reduced functioning. Discoveries based on the analysis include the observation that universal products can be functionally, morphologically, or parametrically different from typical products. Additionally, simple products appear to be made more accessible through parametric changes, whereas more complex products require functional additions and changes.
5

Moeenfard, Hamid, and Shorya Awtar. "Modeling Geometric Non-Linearities in the Free Vibration of a Planar Beam Flexure With a Tip Mass." In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/detc2012-71498.

Abstract:
The objective of this work is to create an analytical framework to study the non-linear dynamics of beam flexures with a tip mass undergoing large deflections. Hamilton's principle is utilized to derive the equations governing the non-linear vibrations of the cantilever beam and the associated boundary conditions. Then, using a single-mode approximation, these non-linear partial differential equations are reduced to two coupled non-linear ordinary differential equations. These equations are solved analytically using a combination of the method of multiple time scales and homotopy perturbation analysis. Closed-form, parametric analytical expressions are presented for the time-domain response of the beam around and far from its internal resonance state. These analytical results are compared with numerical ones to validate the accuracy of the proposed closed-form model. We expect that the qualitative and quantitative knowledge resulting from this effort will ultimately allow the analysis, optimization, and synthesis of flexure mechanisms for improved dynamic performance.
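
The multiple-time-scales step mentioned above takes the generic form (standard ansatz; the paper's actual reduced equations contain the specific beam nonlinearities):

% Fast time T_0 = t and slow time T_1 = \epsilon t, with the modal
% coordinate q expanded in the small parameter \epsilon:
q(t;\epsilon) = q_0(T_0, T_1) + \epsilon\, q_1(T_0, T_1) + O(\epsilon^2),
\qquad
\frac{d}{dt} = \frac{\partial}{\partial T_0} + \epsilon\, \frac{\partial}{\partial T_1} + O(\epsilon^2) ,

after which secular terms are eliminated at each order to obtain the slowly varying amplitude and phase equations.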
6

Reddy, Sandeep B., Allan Ross Magee, Rajeev K. Jaiman, J. Liu, W. Xu, A. Choudhary, and A. A. Hussain. "Reduced Order Model for Unsteady Fluid Flows via Recurrent Neural Networks." In ASME 2019 38th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/omae2019-96543.

Abstract:
In this paper, we present a data-driven approach to construct a reduced-order model (ROM) for unsteady flow fields and fluid-structure interaction. The proposed approach relies on (i) a projection of the high-dimensional data from the Navier-Stokes equations to a low-dimensional subspace using the proper orthogonal decomposition (POD) and (ii) integration of the low-dimensional model with recurrent neural networks. For the hybrid ROM formulation, we consider long short-term memory (LSTM) networks with an encoder-decoder architecture, a special variant of recurrent neural networks. The mathematical structure of recurrent neural networks embodies a non-linear state-space form of the underlying dynamical behavior. This particular attribute of an RNN makes it suitable for non-linear unsteady flow problems. In the proposed hybrid RNN method, the spatial and temporal features of the unsteady flow system are captured separately. Time-invariant modes obtained by the low-order projection embody the spatial features of the flow field, while the temporal behavior of the corresponding modal coefficients is learned via recurrent neural networks. The effectiveness of the proposed method is first demonstrated on a canonical problem of flow past a cylinder at low Reynolds number. With regard to a practical marine/offshore engineering demonstration, we apply and examine the reliability of the proposed data-driven framework for the prediction of vortex-induced vibrations of a flexible offshore riser at high Reynolds number.
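
The POD stage of such a hybrid ROM can be sketched in a few lines (a generic snapshot-SVD sketch with random stand-in data, not the authors' code; the LSTM that advances the modal coefficients in time is omitted):

import numpy as np

# Snapshot matrix: each column is a flow field sampled at one time instant.
n_dof, n_snapshots, r = 5000, 200, 8          # r = retained POD modes
rng = np.random.default_rng(1)
snapshots = rng.standard_normal((n_dof, n_snapshots))  # stand-in for CFD data

mean_flow = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_flow

# Thin SVD: columns of U are the spatial (time-invariant) POD modes.
U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)
modes = U[:, :r]

# Low-dimensional temporal coefficients; an LSTM would be trained to
# advance these r-dimensional states in time instead of the full field.
coeffs = modes.T @ fluctuations                # shape (r, n_snapshots)
reconstruction = mean_flow + modes @ coeffs
print("retained energy: %.3f" % (np.sum(s[:r]**2) / np.sum(s**2)))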
7

Mantegh, Iraj. "Robot Task Planning and Trajectory Learning for Flexible Automation." In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/detc2012-71359.

Abstract:
A task planning method is presented to model and reproduce robot trajectories based on those captured from human demonstrations. In the framework of the Programming by Demonstration (PbD) approach, a task planning algorithm is developed to determine the general type of trajectory pattern, its parameters, and its kinematic profile. The pattern is described independently of the shape of the surface on which it is demonstrated. Key pattern points are identified based on changes in direction and velocity, and are then reduced based on their proximity. The results of the analysis are then used inside a task planning algorithm to produce robot trajectories based on the workpiece geometries. The trajectory is output in the form of robot native-language code so that it can be readily downloaded to the robot.
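
The keypoint step described above (direction-change detection followed by proximity merging) might look like the following rough sketch; the thresholds and names are illustrative, not taken from the paper:

import numpy as np

def key_points(path, angle_thresh=0.3, merge_dist=0.05):
    """Pick demonstration samples where the heading turns sharply,
    then merge picks that lie closer than merge_dist."""
    path = np.asarray(path, dtype=float)
    v = np.diff(path, axis=0)                        # segment vectors
    headings = np.arctan2(v[:, 1], v[:, 0])
    turn = np.abs(np.diff(headings))                 # heading change per corner
    keys = [path[0]]
    for i, a in enumerate(turn, start=1):
        if a > angle_thresh:
            keys.append(path[i])
    keys.append(path[-1])
    merged = [keys[0]]
    for p in keys[1:]:                               # proximity reduction
        if np.linalg.norm(p - merged[-1]) > merge_dist:
            merged.append(p)
    return np.array(merged)

zigzag = [(0, 0), (1, 0), (2, 0.02), (2.1, 1), (2.1, 2), (3, 2)]
print(key_points(zigzag))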
8

Kundu, Prithwish, Muhsin M. Ameen, Chao Xu, Umesh Unnikrishnan, Tianfeng Lu, and Sibendu Som. "Implementation of Detailed Chemistry Mechanisms in Engine Simulations." In ASME 2017 Internal Combustion Engine Division Fall Technical Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/icef2017-3596.

Abstract:
The stiffness of large chemistry mechanisms has proved to be a major hurdle towards predictive engine simulations. As a result, detailed chemistry mechanisms with a few thousand species need to be reduced based on target conditions so that they can be accommodated within the available computational resources. The computational cost of simulations typically increases super-linearly with the number of species and reactions. This work aims to bring detailed chemistry mechanisms within the realm of engine simulations by coupling the framework of unsteady flamelets and fast chemistry solvers. A previously developed Tabulated Flamelet Model (TFM) framework for non-premixed combustion was used in this study. The flamelet solver consists of the traditional operator-splitting scheme with VODE (Variable coefficient ODE solver) and a numerical Jacobian for solving the chemistry. In order to use detailed mechanisms with thousands of species, a new framework with the LSODES (Livermore Solver for ODEs in Sparse form) chemistry solver and an analytical Jacobian was implemented in this work. Results from 1D simulations show that, with the new framework, the computational cost is linearly proportional to the number of species in a given chemistry mechanism. As a result, the new framework is 2–3 orders of magnitude faster than the conventional VODE solver for large chemistry mechanisms. This new framework was used to generate unsteady flamelet libraries for n-dodecane using a detailed chemistry mechanism with 2,755 species and 11,173 reactions. The Engine Combustion Network (ECN) Spray A experiments, which consist of an igniting n-dodecane spray in turbulent, high-pressure engine conditions, are simulated using large eddy simulations (LES) coupled with detailed mechanisms. A grid with 0.06 mm minimum cell size and 22 million peak cell count was implemented. The framework is validated across a range of ambient temperatures against ignition delay and lift-off lengths. Qualitative results from the simulations were compared against experimental OH and CH2O PLIF data. The models are able to capture the spatial and temporal trends in species compared to those observed in the experiments. Quantitative and qualitative comparisons between the predictions of the reduced and detailed mechanisms are presented in detail. The main goal of this study is to demonstrate that detailed reaction mechanisms (~1000 species) can now be used in engine simulations, with a linear increase in computational cost with the number of species during the tabulation process and a small increase in the 3D simulation cost.
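
The solver change at the heart of the speed-up (a stiff implicit integrator fed an analytical Jacobian instead of a dense finite-difference one) can be illustrated with SciPy on the classic stiff Robertson kinetics problem; this is a generic sketch, not the TFM flamelet code:

from scipy.integrate import solve_ivp

# Robertson's stiff 3-species kinetics test problem.
def rhs(t, y):
    y1, y2, y3 = y
    return [-0.04*y1 + 1e4*y2*y3,
             0.04*y1 - 1e4*y2*y3 - 3e7*y2**2,
             3e7*y2**2]

def jac(t, y):
    # Analytical Jacobian: exact and cheap, unlike a finite-difference one.
    # For mechanisms with thousands of species this matrix is sparse, and a
    # sparse stiff solver (such as LSODES) keeps the linear-algebra cost
    # close to linear in the species count.
    y1, y2, y3 = y
    return [[-0.04,  1e4*y3,           1e4*y2],
            [ 0.04, -1e4*y3 - 6e7*y2, -1e4*y2],
            [ 0.0,   6e7*y2,           0.0]]

sol = solve_ivp(rhs, (0.0, 1e5), [1.0, 0.0, 0.0],
                method='BDF', jac=jac, rtol=1e-8, atol=1e-10)
print(sol.y[:, -1])   # species mass fractions at t = 1e5 s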
9

Cai, Yuecheng, and Jasmin Jelovica. "Adaptive Constraint Handling in Optimization of Complex Structures by Using Machine Learning." In ASME 2021 40th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/omae2021-62304.

Abstract:
Optimization of complex systems requires robust and computationally efficient global search algorithms. Constraints make this a very difficult task, significantly slowing down an algorithm, and can even prevent finding the true Pareto front. This study continues the development of a recently proposed repair approach that exploits infeasible designs to increase the computational efficiency of a prominent genetic algorithm and to find a wider spread of the Pareto front. This paper proposes adaptive and automated discovery of the sensitivity of constraints to variables, i.e. the link, which required direct designer input in the previous version of the repair approach. This is achieved by using machine learning in the form of artificial neural networks (ANN). An ANN-based surrogate model is afterwards utilized in the optimization. The proposed approach is used for the recently proposed constraint handling implemented in the NSGA-II optimization algorithm. The proposed framework is compared with two other constraint-handling methods. The performance is analyzed on the structural optimization of a 178 m long chemical tanker which needs to fulfil a class society's criteria for strength. The results show that the proposed framework is competitive in terms of convergence and spread of the front. This is achieved while discovering the link automatically using the ANN, without input from a user. In addition, computational time is reduced by 60%.
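
One plausible reading of the automated constraint-variable link discovery (a sketch under assumptions, not the authors' implementation) is to fit a neural surrogate to constraint margins and rank variables by the surrogate's finite-difference sensitivity:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(400, 5))     # design variables (toy stand-ins)
g = X[:, 0]**2 + 0.5*X[:, 3] - 0.6       # toy constraint margin

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                         random_state=0).fit(X, g)

def sensitivity(model, x, h=1e-3):
    """Finite-difference sensitivity of the surrogate constraint at x."""
    base = model.predict(x[None, :])[0]
    grads = []
    for j in range(len(x)):
        xp = x.copy(); xp[j] += h
        grads.append((model.predict(xp[None, :])[0] - base) / h)
    return np.abs(grads)

x0 = np.full(5, 0.5)
print(sensitivity(surrogate, x0))   # variables 0 and 3 should dominate,
                                    # exposing the constraint-variable link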
10

Eriten, Melih, Mehmet Kurt, Guanyang Luo, Donald M. McFarland, Lawrence A. Bergman, and Alexander F. Vakakis. "Nonlinear System Identification of Frictional Connections in a Bolted Beam Assembly." In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/detc2012-70432.

Abstract:
In modern structures, mechanical joints are ubiquitous, significantly influencing a structure’s dynamics. Frictional connections contained in a joint provide coupling of forces and moments between assembled components as well as localized nonlinear energy dissipation. Certain aspects of the mechanics of these friction connections are yet to be fully understood and characterized in a dynamical systems framework. This work applies a nonlinear system identification (NSI) technique to characterize the influence of frictional connections on the dynamics of a bolted beam assembly. The methodology utilized in this work combines experimental measurements with slow-flow dynamic analysis and empirical mode decomposition, and reconstructs the dynamics through reduced-order models. These are in the form of single-degree-of-freedom linear oscillators (termed intrinsic modal oscillators — IMOs) with forcing terms derived directly from the experimental measurements through slow-flow analysis. The derived reduced order models are capable of reproducing the measured dynamics, whereas the forcing terms provide important information about nonlinear damping effects. The NSI methodology is applied to model nonlinear friction effects in a bolted beam assembly. A ‘monolithic’ beam with identical geometric and material properties is also tested for comparison. Three different forcing (energy) levels are considered in the tests in order to study the energy-dependencies of the damping nonlinearities induced in the beam from the bolted joint. In all cases, the NSI technique employed is successful in identifying the damping nonlinearities, their spatial distributions and their effects on the vibration modes of the structural component.
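
The intrinsic modal oscillators referred to above have the generic form (schematic; natural frequencies, damping ratios, and forcings are identified from the measured slow flows rather than assumed):

% One linear SDOF oscillator per identified mode, driven by a forcing
% term F_k(t) reconstructed from the experimental slow-flow analysis:
\ddot{x}_k + 2\,\zeta_k \omega_k\, \dot{x}_k + \omega_k^2\, x_k = F_k(t), \qquad k = 1, \dots, K .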

Reports of organizations on the topic "Reduced-form framework"

1

Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.

Abstract:
Background
Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated to be 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning.
Evidence Check questions
This review aimed to address the following questions:
1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals?
2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals?
3. What are the main components of recent major lung cancer screening programs or trials?
4. What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)?
Summary of methods
The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which 8 were contemporary) and 64 original publications met the inclusion criteria set across the four questions.
Key findings
Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals?
There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials (of high-risk individuals) to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Cancer Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8% – 26.7%; P=0.004) relative reduction in mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility criteria were defined as people aged 55–74 years with a smoking history of ≥30 pack-years (years in which a smoker has consumed 20-plus cigarettes each day) and, for former smokers, ≥30 pack-years and having quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI, 1.2% – 13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), have found an even greater reduction of 26% (95% CI, 9% – 41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials. These seven trials demonstrated a significantly greater proportion of early stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, when considering results across mortality outcomes and early stage cancers diagnosed, LDCT screening is considered to be clinically effective.
Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals?
The harms of LDCT lung cancer screening include false positive tests and the consequences of unnecessary invasive follow-up procedures for conditions that are eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings when compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as refinements are made to LDCT screening protocols through: i) the application of risk prediction models (e.g. the PLCOm2012), which enable a more accurate selection of the high-risk population through the use of specific criteria (beyond age and smoking history); ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules); and iii) more judicious selection of patients for invasive procedures. Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regard to smoking cessation, there is no evidence to suggest screening participation invokes a false sense of assurance in smokers, nor a reduction in motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates, compared with the general population, suggest those who participate in screening trials may already be motivated to quit.
Question 3: What are the main components of recent major lung cancer screening programs or trials?
There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised this into key groups to form a concise set of components for potential implementation of a national lung cancer screening program in Australia:
1. Identifying the high-risk population: recruitment, eligibility, selection and referral
2. Educating the public, people at high risk and healthcare providers; this includes creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making
3. Components necessary for health services to deliver a screening program:
a. Planning phase: e.g. human resources to coordinate the program, electronic data systems that integrate medical records information and link to an established national registry
b. Implementation phase: e.g. human and technological resources required to conduct LDCT examinations, interpretation of reports and communication of results to participants
c. Monitoring and evaluation phase: e.g. monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program
4. Data reporting and research, e.g. audit and feedback to multidisciplinary teams, reporting outcomes to enhance international research into LDCT screening
5. Incorporation of smoking cessation interventions, e.g. specific programs designed for LDCT screening or referral to existing community or hospital-based services that deliver cessation interventions.
Most original studies are single-institution evaluations that contain descriptive data about the processes required to establish and implement a high-risk population-based screening program. Across all studies there is a consistent message as to the challenges and complexities of establishing LDCT screening programs to attract people at high risk who will receive the greatest benefits from participation. With regard to smoking cessation, evidence from one systematic review indicates the optimal strategy for incorporating smoking cessation interventions into a LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a ‘teachable moment’ for cessation advice, especially among those people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9)
Question 4: What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)?
Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (in the presence of LDCT screening) as opposed to later (in the absence of LDCT screening). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies—five modelling studies, one discrete choice experiment and seven articles—that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in the settings of the US and Europe. Two studies—one from Australia and one from New Zealand—reported LDCT screening would not be cost-effective using NLST-like protocols. We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will likely be updated with new data that reduce uncertainty about factors that influence modelling outcomes, including the findings of indeterminate nodules.
Gaps in the evidence
There is a large and accessible body of evidence as to the effectiveness (Q1) and harms (Q2) of LDCT screening for lung cancer. Nevertheless, there are significant gaps in the evidence about the program components that are required to implement an effective LDCT screening program (Q3). Questions about LDCT screening acceptability and feasibility were not explicitly included in the scope. However, as the evidence is based primarily on US programs and UK pilot studies, the relevance to the local setting requires careful consideration. The Queensland Lung Cancer Screening Study provides feasibility data about clinical aspects of LDCT screening but little about program design. The International Lung Screening Trial is still in the recruitment phase and findings are not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to “inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia”.(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about transferability of criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to “important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability”.(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia. Yet they are part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to include new data from the NELSON trial and incorporate data about the costs of targeted- and immuno-therapies as these treatments become more widely available in Australia.