Academic literature on the topic 'Functional expansion methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Functional expansion methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Functional expansion methods"

1

Giese, Timothy J., and Darrin M. York. "Density-functional expansion methods: Generalization of the auxiliary basis." Journal of Chemical Physics 134, no. 19 (May 21, 2011): 194103. http://dx.doi.org/10.1063/1.3587052.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Hong Bo, Wu Ying Zhang, Feng Lin, and Hong Da Cao. "Comparison and Characterization of Two Preparation Methods of Graphene Oxide." Advanced Materials Research 989-994 (July 2014): 125–29. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.125.

Full text
Abstract:
The graphene oxides were prepared from graphite by thermal expansion and by ultrasonic dispersion. The structure of the graphene oxides was characterized by Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray diffraction (XRD) and Raman spectroscopy, and the structural differences between the two preparation methods were compared. The FTIR and XRD measurements showed that the graphite was completely oxidized. The graphene oxide prepared by thermal expansion loses a large number of active functional groups, such as hydroxyl and carboxyl groups, whereas the graphene oxide prepared by ultrasonic dispersion retains these active functional groups. These active functional groups are beneficial for chemically modifying the graphene oxides and preparing polymer/graphene nanocomposites.
APA, Harvard, Vancouver, ISO, and other styles
3

Giese, Timothy J., and Darrin M. York. "Density-functional expansion methods: Evaluation of LDA, GGA, and meta-GGA functionals and different integral approximations." Journal of Chemical Physics 133, no. 24 (December 28, 2010): 244107. http://dx.doi.org/10.1063/1.3515479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ionescu, Carmen, Corina N. Babalic, Radu Constantinescu, and Raluca Efrem. "The Functional Expansion Approach for Solving NPDEs as a Generalization of the Kudryashov and G′/G Methods." Symmetry 14, no. 4 (April 15, 2022): 827. http://dx.doi.org/10.3390/sym14040827.

Full text
Abstract:
This paper presents the functional expansion approach as a generalized method for finding traveling wave solutions of various nonlinear partial differential equations. The approach can be seen as a combination of the Kudryashov and G′/G solving methods. It allowed the extension of the first method to the use of second order auxiliary equations, and, at the same time, it allowed non-standard G′/G-solutions to be generated. The functional expansion is illustrated here on the Dodd–Bullough–Mikhailov model, using a linear second order ordinary differential equation as an auxiliary equation.
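For orientation, the ansatz behind this family of expansion methods can be sketched as follows (a generic outline in LaTeX, not taken from the paper; u, \xi, a_k, F, G, \lambda and \mu are placeholder symbols). A traveling-wave substitution \xi = x - c t reduces the NPDE to an ODE, and the solution is expanded in powers of an auxiliary function:

    u(\xi) = \sum_{k=0}^{N} a_k \, F(\xi)^k, \qquad F(\xi) = \frac{G'(\xi)}{G(\xi)}, \qquad G''(\xi) + \lambda\, G'(\xi) + \mu\, G(\xi) = 0 .

Balancing the highest-order derivative against the strongest nonlinearity fixes the truncation order N; substituting the ansatz and collecting like powers of F yields an algebraic system for the coefficients a_k and the wave speed c. The Kudryashov method corresponds to choosing a first-order auxiliary equation for F instead of the second-order equation above.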
APA, Harvard, Vancouver, ISO, and other styles
5

Gelman, V. Ya. "Ways of Development of Equipment and Research Methods for Functional Diagnostics." Medicina 10, no. 3 (2022): 42–52. http://dx.doi.org/10.29234/2308-9113-2022-10-3-42-52.

Full text
Abstract:
Functional diagnostics is currently one of the rapidly developing areas of medicine. The instrumentation of functional diagnostics is of crucial importance for the efficiency and quality of the actual diagnostics. The aim of the work was to identify and analyze trends in the development of instrumental and software tools of physical research methods in functional diagnostics. It is shown that the following principal directions of development of the technical support of functional diagnostics can be considered: expansion of the range of measured physiological characteristics; increase in measurement accuracy; application of methods of simultaneous recording of several indicators; computerization and digitalization of functional diagnostics; use of artificial intelligence methods; application of non-contact measurements; transition to permanent monitoring; development of wearable devices; creation of complex systems for carrying out functional tests; improving the interface with medical staff and facilitating research; increase in the availability of functional diagnostic equipment for patients and its use in home telemedicine.
APA, Harvard, Vancouver, ISO, and other styles
6

Lee, Ji-Yeon, Jeong-Woo Jeon, and Jin-Seop Kim. "Effects of the Chest Expansion Exercise on Chest Expansion, Respiratory Function and Functional Activity: A Systematic Review." KOREAN ACADEMY OF CARDIORESPIRATORY PHYSICAL THERAPY 10, no. 1 (June 30, 2022): 41–46. http://dx.doi.org/10.32337/kacpt.2022.10.1.41.

Full text
Abstract:
Purpose: The study aimed to analyze the effect of chest-expansion exercise studies on chest expansion, respiratory function, and functional activity over the past 8 years (2015-2022). Methods: Previous studies were electronically searched using the PubMed, PEDro, KISS, and RISS4U databases. A total of five studies were selected according to the PRISMA guidelines, and the PEDro scale was used for the qualitative analysis. Results: Five studies met the criteria. Two studies scored 6 of 10 and three scored 5. Four studies had an intervention of 20 and 30 min, each at a frequency of 4-5 times per week, for a total of 4-8 weeks, except for one study that performed 20 min of intervention at a time, which demonstrated an immediate effect. The results indicated that chest-expansion exercises effectively improved chest expansion, respiratory function, and functional activity. Conclusion: Chest-expansion exercises exert positive effects on chest expansion, respiratory function, and functional activity.
APA, Harvard, Vancouver, ISO, and other styles
7

Zlatska, A. V., A. E. Rodnichenko, O. S. Gubar, R. O. Zubov, S. N. Novikova, and R. G. Vasyliev. "ENDOMETRIAL STROMAL CELLS: ISOLATION, EXPANSION, MORPHOLOGICAL AND FUNCTIONAL PROPERTIES." Experimental Oncology 39, no. 3 (September 22, 2017): 197–202. http://dx.doi.org/10.31768/2312-8852.2017.39(3):197-202.

Full text
Abstract:
Aim: We aimed to study the biological properties of human endometrial stromal cells in vitro. Materials and Methods: Endometrium samples (n = 5) were obtained by biopsy at the first phase of the menstrual cycle from women with endometrial hypoplasia. In all cases, voluntary written informed consent was obtained from the patients. Endometrial fragments were dissociated by enzymatic treatment. The cells were cultured in DMEM/F12 supplemented with 10% FBS, 2 mM L-glutamine and 1 ng/ml FGF-2 in a multi-gas incubator at 5% CO2 and 5% O2. At P3 the cells were subjected to immunophenotyping, multilineage differentiation, karyotype stability and colony forming efficiency assays. The cell secretome was assessed by a BioRad Multiplex immunoassay kit. Results: The primary population of endometrial cells was heterogeneous and contained cells with fibroblast-like and epithelial-like morphology, but at P3 the majority of the cell population had fibroblast-like morphology. The cells possessed the phenotype typical for MSCs: CD90+CD105+CD73+CD34-CD45-HLA-DR-. The cells also expressed the CD140a, CD140b, CD146, and CD166 antigens and were negative for CD106, CD184, CD271, and CD325. Cell doubling time was 29.6 ± 1.3 h. The cells were capable of directed osteogenic, adipogenic and chondrogenic differentiation. The cells showed 35.7% colony forming efficiency and a tendency to 3D spheroid formation. The GTG-banding assay confirmed the stability of the eMSC karyotype during long-term culturing (up to P8). After a 48 h incubation period in serum-free medium, eMSC secreted anti-inflammatory IL-1ra, as well as IL-6, IL-8 and IFNγ, the angiogenic factors VEGF, GM-CSF and FGF-2, and the chemokines IP-10 and MCP-1. Conclusion: Thus, cultured endometrial stromal cells meet the minimal ISCT criteria for MSC. Their proliferative potential, karyotype stability, multilineage plasticity and secretome profile make eMSC an attractive object for use in regenerative medicine.
APA, Harvard, Vancouver, ISO, and other styles
8

Shibkova, Dariya Zakharovna, and Pavel Azifovich Baiguzhin. "NEUROSCIENCE: INTERDISCIPLINARY INTEGRATION OR EXPANSION?" Психология. Психофизиология 13, no. 3 (October 21, 2020): 111–21. http://dx.doi.org/10.14529/jpps200312.

Full text
Abstract:
Aim. The paper aims to study the differentiation and integration of scientific disciplines in the natural sciences and humanities research areas of neuroscience based on a review of Russian scientific works and to propose a structural and functional model of neuroscience as an interdisciplinary system of knowledge about brain features that ensure human activity in various professional spheres. Methods. A theoretical analysis of scientific publications on the topic over the last ten years has been used along with such methods as comparison, generalization, and modelling. Results. The paper presents various points of view on the subject field of separate disciplines within neuroscience, as well as on the relations between them. The interdisciplinarity of neuroscience is considered by a number of authors (philosophers) as a form of disciplinary colonization, epistemic expansion or intervention. Another group of authors considers neuroscience as a systemic level of science that unites multidisciplinary research activities related to the study of the brain. The third position is represented by authors who consider neuroscience as an extension of the problem field of neurobiology or as its synonym. A number of authors pay special attention to the popularity of neuroscience among politicians, military structures, pharmacological companies and other professionals with their disciplinary totality: neurophilosophy, neuropsychology, neuroinformatics, neurogenetics, neurobiology, neurosociology, neuropedagogy, etc. The paper demonstrates that there is no unified point of view on psychophysiology as a part of neuroscience, which also has interdisciplinary connections with many sciences that study individual psychological characteristics and behavior. Conclusion. Based on the analysis of the discussion, the authors emphasize the need to logically build the structural and functional relationships of individual disciplines within a unified neuroscience and determine its subject field on the basis of a systemic evolutionary approach.
APA, Harvard, Vancouver, ISO, and other styles
9

Rogozhin, Andrei. "Traditional and non-traditional methods of optimizing the functional states of public servants." Applied psychology and pedagogy 4, no. 1 (January 10, 2019): 57–64. http://dx.doi.org/10.12737/article_5c2cfc8f48a604.29122318.

Full text
Abstract:
The article analyzes the problem of human functional, mental and psycho-physiological states. The characteristics of the professional activity of civil servants and the level of stressful influences in it are discussed. Methods of optimizing functional states, both medical and psychological, are considered. Special attention is paid to self-regulation practices from transpersonal psychology, which are non-traditional for practical psychology and which, on the one hand, offer a wide range of opportunities for overcoming stressful influences and, on the other, represent a potential reserve for spiritual and personal development, expansion of world view, and development of creativity and internal integrity.
APA, Harvard, Vancouver, ISO, and other styles
10

DÖRING, LEIF, BLANKA HORVATH, and JOSEF TEICHMANN. "FUNCTIONAL ANALYTIC (IR-)REGULARITY PROPERTIES OF SABR-TYPE PROCESSES." International Journal of Theoretical and Applied Finance 20, no. 03 (April 24, 2017): 1750013. http://dx.doi.org/10.1142/s0219024917500133.

Full text
Abstract:
The stochastic alpha, beta, rho (SABR) model is a benchmark stochastic volatility model in interest rate markets, which has received much attention in the past decade. Its popularity arose from a tractable asymptotic expansion for implied volatility, derived by heat kernel methods. As markets moved to historically low rates, this expansion appeared to yield inconsistent prices. Since the model is deeply embedded in the markets, alternative pricing methods for SABR have been addressed in numerous approaches in recent years. All standard option pricing methods make certain regularity assumptions on the underlying model, but for SABR these are rarely satisfied. We examine here regularity properties of the model from this perspective with a view to a number of (asymptotic and numerical) option pricing methods. In particular, we highlight delicate degeneracies of the SABR model (and related processes) at the origin, which render the currently popular heat kernel methods, and all related methods from (sub-)Riemannian geometry, ill-suited for SABR-type processes when interest rates are near zero. We describe a more general semigroup framework, which permits the derivation of a suitable geometry for SABR-type processes (in certain parameter regimes) via symmetric Dirichlet forms. Furthermore, we derive regularity properties (Feller properties and strong continuity properties) necessary for the applicability of popular numerical schemes to SABR semigroups and identify suitable Banach and Hilbert spaces for these. Finally, we comment on the short-time and large-time asymptotic behavior of SABR-type processes beyond the heat-kernel framework.
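For readers new to the model, the SABR dynamics discussed above are conventionally written as the following pair of stochastic differential equations (standard textbook form, not quoted from the article; F_t is the forward rate and \sigma_t the stochastic volatility):

    dF_t = \sigma_t\, F_t^{\beta}\, dW_t, \qquad d\sigma_t = \nu\, \sigma_t\, dZ_t, \qquad d\langle W, Z \rangle_t = \rho\, dt,

with elasticity 0 \le \beta \le 1, vol-of-vol \nu > 0 and correlation \rho \in (-1, 1). The degeneracy the authors analyze sits at F = 0, where the diffusion coefficient \sigma F^{\beta} vanishes and the usual regularity assumptions behind heat-kernel and standard numerical methods break down.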
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Functional expansion methods"

1

Correia, Fagner Cintra [UNESP]. "The standard model effective field theory: integrating UV models via functional methods." Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/151703.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
The principles behind the use of the Standard Model Effective Field Theory as a consistent method to parametrize New Physics are presented. The concepts of Matching and Power Counting are covered, and a Covariant Derivative Expansion is introduced for the construction of the operator set coming from the particular UV model that is integrated out. The technique is applied to examples including the SM with a new Scalar Triplet and to different sectors of the 3-3-1 model in the presence of Heavy Leptons. Finally, the Wilson coefficient of a dimension-6 operator generated from the integration of a heavy J-quark is compared with measurements of the oblique Y parameter.
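Schematically (a hedged outline, not taken from the thesis; M, U(x), D_\mu and the Wilson coefficients c_i are generic placeholders), one-loop functional matching of a heavy field of mass M produces a functional trace that the covariant derivative expansion organizes into local operators suppressed by powers of M:

    \Delta S_{\text{eff}} \sim \frac{i}{2}\,\operatorname{Tr}\ln\!\left(-D^2 - M^2 - U(x)\right) \;\longrightarrow\; \int d^4x \,\sum_i \frac{c_i}{M^2}\, \mathcal{O}_i^{(6)} + \mathcal{O}\!\left(M^{-4}\right),

where D_\mu is the covariant derivative and U(x) collects the heavy field's couplings to the light fields; the resulting dimension-6 coefficients c_i are what get confronted with electroweak data such as the oblique Y parameter.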
CNPq: 142492/2013-2
CAPES: 88881.132498/2016-01
APA, Harvard, Vancouver, ISO, and other styles
2

Correia, Fagner Cintra. "The standard model effective field theory : integrating UV models via functional methods /." São Paulo, 2017. http://hdl.handle.net/11449/151703.

Full text
Abstract:
Advisor: Vicente Pleitez
The Standard Model Effective Field Theory is presented as a consistent method for parametrizing New Physics. The concepts of Matching and Power Counting are covered, as is the Covariant Derivative Expansion, introduced as an alternative route to constructing the set of effective operators resulting from a particular UV model. The functional integration technique is applied to cases that include the SM with a Scalar Triplet and different sectors of the 3-3-1 model in the presence of heavy Leptons. Finally, the dimension-6 Wilson coefficient generated from integrating out a heavy J-quark is constrained by recent values of the oblique Y parameter.
Doctorate
APA, Harvard, Vancouver, ISO, and other styles
3

Rau, Christian. "Curve Estimation and Signal Discrimination in Spatial Problems." The Australian National University, School of Mathematical Sciences, 2003. http://thesis.anu.edu.au./public/adt-ANU20031215.163519.

Full text
Abstract:
In many instances arising prominently, but not exclusively, in imaging problems, it is important to condense the salient information so as to obtain a low-dimensional approximant of the data. This thesis is concerned with two basic situations which call for such a dimension reduction. The first of these is the statistical recovery of smooth edges in regression and density surfaces. The edges are understood to be contiguous curves, although they are allowed to meander almost arbitrarily through the plane, and may even split at a finite number of points to yield an edge graph. A novel locally-parametric nonparametric method is proposed which enjoys the benefit of being relatively easy to implement via a 'tracking' approach. These topics are discussed in Chapters 2 and 3, with pertaining background material being given in the Appendix. In Chapter 4 we construct concomitant confidence bands for this estimator, which have asymptotically correct coverage probability. The construction can be likened to only a few existing approaches, and may thus be considered as our main contribution. Chapter 5 discusses numerical issues pertaining to the edge and confidence band estimators of Chapters 2-4. Connections are drawn to popular topics which originated in the fields of computer vision and signal processing, and which surround edge detection. These connections are exploited so as to obtain greater robustness of the likelihood estimator, such as in the presence of sharp corners. Chapter 6 addresses a dimension reduction problem for spatial data where the ultimate objective of the analysis is the discrimination of these data into one of a few pre-specified groups. In the dimension reduction step, an instrumental role is played by the recently developed methodology of functional data analysis. Relatively standard non-linear image processing techniques, as well as wavelet shrinkage, are used prior to this step. A case study for remotely-sensed navigation radar data exemplifies the methodology of Chapter 6.
APA, Harvard, Vancouver, ISO, and other styles
4

Rustaey, Abid. "A comparison of conventional acceleration schemes to the method of residual expansion functions." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277176.

Full text
Abstract:
The algebraic equations resulting from a finite difference approximation may be solved numerically. A new scheme that appears quite promising is the method of residual expansion functions. In addition to speedy convergence, it is also independent of the number of algebraic equations under consideration, hence enabling us to analyze larger systems with higher accuracies. A factor which plays an important role in convergence of some numerical schemes is the concept of diagonal dominance. Matrices that converge at high rates are indeed the ones that possess a high degree of diagonal dominance. Another attractive feature of the method of residual expansion functions is its accurate convergence with minimal degree of diagonal dominance. Methods such as simultaneous and successive displacements, Chebyshev and projection are also discussed, but unlike the method of residual expansion functions, their convergence rates are strongly dependent on the degree of diagonal dominance.
APA, Harvard, Vancouver, ISO, and other styles
5

Zipperer, Travis Jonathan. "Pulse height tally response expansion method for application in detector problems." Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44816.

Full text
Abstract:
A pulse height tally response expansion (PHRE) method is developed for detectors. By expanding the incident flux at the detector window/surface, a set of response functions is constructed via Monte Carlo estimators for pulse height tallies. B-spline functions are selected to perform the expansion of the response functions as well as the expansion of the incident flux in photon energy. The method is verified for several incident flux spectra on a CsI(Na) detector, and the results are compared to solutions generated using direct Monte Carlo calculations. It is found that the method is several orders of magnitude faster than MCNP5 while maintaining comparable accuracy.
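To make the expansion idea concrete, here is a minimal, self-contained Python sketch (not the author's code; the response matrix R is random placeholder data standing in for the Monte Carlo-generated pulse-height responses): the incident spectrum is least-squares fitted in a cubic B-spline basis, and the detector's pulse-height spectrum is then assembled as the same linear combination of per-basis-function responses.

    import numpy as np
    from scipy.interpolate import BSpline

    # Energy grid (MeV) and a toy incident flux spectrum (placeholder data).
    E = np.linspace(0.05, 3.0, 300)
    flux = np.exp(-E) + 0.5 * np.exp(-0.5 * ((E - 0.662) / 0.02) ** 2)

    # Clamped cubic B-spline basis on the energy axis.
    degree = 3
    knots = np.concatenate(([E[0]] * degree, np.linspace(E[0], E[-1], 20), [E[-1]] * degree))
    nbasis = len(knots) - degree - 1
    B = np.column_stack([
        np.nan_to_num(BSpline.basis_element(knots[i:i + degree + 2], extrapolate=False)(E))
        for i in range(nbasis)
    ])  # each basis element is zero outside its local support

    # Expansion coefficients of the incident flux: flux(E) ~ sum_i c_i B_i(E).
    c, *_ = np.linalg.lstsq(B, flux, rcond=None)

    # Placeholder response matrix: column i holds the pulse-height spectrum that
    # basis function B_i would produce (obtained from Monte Carlo in the thesis).
    rng = np.random.default_rng(0)
    n_channels = 128
    R = rng.random((n_channels, nbasis))

    # Pulse-height tally for the full spectrum = linear superposition of responses.
    pulse_height = R @ c
    print(pulse_height.shape)  # (128,)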
APA, Harvard, Vancouver, ISO, and other styles
6

Lladser, Manuel Eugenio. "Asymptotic enumeration via singularity analysis." Ohio State University, 2003. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1060976912.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2003.
Title from first page of PDF file. Document formatted into pages; contains x, 227 p.; also includes graphics. Includes bibliographical references (p. 224–227). Available online via OhioLINK's ETD Center.
APA, Harvard, Vancouver, ISO, and other styles
7

Calatayud, Gregori Julia. "Computational methods for random differential equations: probability density function and estimation of the parameters." Doctoral thesis, Universitat Politècnica de València, 2020. http://hdl.handle.net/10251/138396.

Full text
Abstract:
[EN] Mathematical models based on deterministic differential equations do not take into account the inherent uncertainty of the physical phenomenon (in a wide sense) under study. In addition, inaccuracies in the collected data often arise due to errors in the measurements. It thus becomes necessary to treat the input parameters of the model as random quantities, in the form of random variables or stochastic processes. This gives rise to the study of random ordinary and partial differential equations. The computation of the probability density function of the stochastic solution is important for uncertainty quantification of the model output. Although such computation is a difficult objective in general, certain stochastic expansions for the model coefficients allow faithful representations for the stochastic solution, which permits approximating its density function. In this regard, Karhunen-Loève and generalized polynomial chaos expansions become powerful tools for the density approximation. Also, methods based on discretizations from finite difference numerical schemes permit approximating the stochastic solution, therefore its probability density function. The main part of this dissertation aims at approximating the probability density function of important mathematical models with uncertainties in their formulation. Specifically, in this thesis we study, in the stochastic sense, the following models that arise in different scientific areas: in Physics, the model for the damped pendulum; in Biology and Epidemiology, the models for logistic growth and Bertalanffy, as well as epidemiological models; and in Thermodynamics, the heat partial differential equation. We rely on Karhunen-Loève and generalized polynomial chaos expansions and on finite difference schemes for the density approximation of the solution. These techniques are only applicable when we have a forward model in which the input parameters have certain probability distributions already set. When the model coefficients are estimated from collected data, we have an inverse problem. The Bayesian inference approach allows estimating the probability distribution of the model parameters from their prior probability distribution and the likelihood of the data. Uncertainty quantification for the model output is then carried out using the posterior predictive distribution. In this regard, the last part of the thesis shows the estimation of the distributions of the model parameters from experimental data on bacteria growth. To do so, a hybrid method that combines Bayesian parameter estimation and generalized polynomial chaos expansions is used.
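As a toy illustration of the density-approximation pipeline described above (a hedged sketch, not the thesis code; the ODE, its parameters and the chaos order are invented for the example), the solution of a random linear ODE is expanded in a Hermite polynomial chaos in a standard normal germ, the surrogate is then sampled cheaply, and a kernel density estimate approximates the probability density function of the solution:

    import numpy as np
    from math import factorial
    from numpy.polynomial import hermite_e as He
    from scipy.stats import gaussian_kde

    # Toy random ODE: x'(t) = -a x(t), x(0) = 2, with a = 1 + 0.2*xi, xi ~ N(0, 1).
    # Its exact solution at time T, written as a function of the random germ xi.
    T = 1.0
    def solution(xi):
        return 2.0 * np.exp(-(1.0 + 0.2 * xi) * T)

    # Generalized polynomial chaos surrogate in probabilists' Hermite polynomials;
    # coefficients are computed by Gauss-Hermite quadrature.
    order = 8
    nodes, weights = He.hermegauss(32)       # quadrature for weight exp(-xi^2 / 2)
    norm = np.sqrt(2.0 * np.pi)
    coeffs = np.array([
        np.sum(weights * He.hermeval(nodes, np.eye(order + 1)[n]) * solution(nodes))
        / (norm * factorial(n))
        for n in range(order + 1)
    ])

    # Sample the inexpensive surrogate and estimate the density of x(T).
    xi_samples = np.random.default_rng(1).standard_normal(50_000)
    x_samples = He.hermeval(xi_samples, coeffs)
    pdf = gaussian_kde(x_samples)
    print(np.round(pdf(np.linspace(x_samples.min(), x_samples.max(), 5)), 3))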
This work has been supported by the Spanish Ministerio de Econom´ıa y Competitividad grant MTM2017–89664–P.
Calatayud Gregori, J. (2020). Computational methods for random differential equations: probability density function and estimation of the parameters [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138396
TESIS
Premiado
APA, Harvard, Vancouver, ISO, and other styles
8

Jornet, Sanz Marc. "Mean square solutions of random linear models and computation of their probability density function." Doctoral thesis, Universitat Politècnica de València, 2020. http://hdl.handle.net/10251/138394.

Full text
Abstract:
[EN] This thesis concerns the analysis of differential equations with uncertain input parameters, in the form of random variables or stochastic processes with any type of probability distributions. In modeling, the input coefficients are set from experimental data, which often involve uncertainties from measurement errors. Moreover, the behavior of the physical phenomenon under study does not follow strict deterministic laws. It is thus more realistic to consider mathematical models with randomness in their formulation. The solution, considered in the sample-path or the mean square sense, is a smooth stochastic process, whose uncertainty has to be quantified. Uncertainty quantification is usually performed by computing the main statistics (expectation and variance) and, if possible, the probability density function. In this dissertation, we study random linear models, based on ordinary differential equations with and without delay and on partial differential equations. The linear structure of the models makes it possible to seek for certain probabilistic solutions and even approximate their probability density functions, which is a difficult goal in general. A very important part of the dissertation is devoted to random second-order linear differential equations, where the coefficients of the equation are stochastic processes and the initial conditions are random variables. The study of this class of differential equations in the random setting is mainly motivated because of their important role in Mathematical Physics. We start by solving the randomized Legendre differential equation in the mean square sense, which allows the approximation of the expectation and the variance of the stochastic solution. The methodology is extended to general random second-order linear differential equations with analytic (expressible as random power series) coefficients, by means of the so-called Fröbenius method. A comparative case study is performed with spectral methods based on polynomial chaos expansions. On the other hand, the Fröbenius method together with Monte Carlo simulation are used to approximate the probability density function of the solution. Several variance reduction methods based on quadrature rules and multilevel strategies are proposed to speed up the Monte Carlo procedure. The last part on random second-order linear differential equations is devoted to a random diffusion-reaction Poisson-type problem, where the probability density function is approximated using a finite difference numerical scheme. The thesis also studies random ordinary differential equations with discrete constant delay. We study the linear autonomous case, when the coefficient of the non-delay component and the parameter of the delay term are both random variables while the initial condition is a stochastic process. It is proved that the deterministic solution constructed with the method of steps that involves the delayed exponential function is a probabilistic solution in the Lebesgue sense. Finally, the last chapter is devoted to the linear advection partial differential equation, subject to stochastic velocity field and initial condition. We solve the equation in the mean square sense and provide new expressions for the probability density function of the solution, even in the non-Gaussian velocity case.
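As a pointer to the kind of computation involved (a standard textbook formula, not a result quoted from the thesis), when the solution at a fixed point (t, x) is a monotone transformation u = g(V) of a single random input V with density f_V, its probability density follows from the classical change-of-variables (random variable transformation) formula:

    f_u(u) \;=\; f_V\!\big(g^{-1}(u)\big)\,\left|\frac{d\,g^{-1}(u)}{du}\right| .

For the advection solution u(x, t) = u_0(x - V t) with a deterministic, strictly monotone initial profile u_0 and fixed t > 0, one takes g(V) = u_0(x - V t), which yields the density of the solution at that space-time point.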
This work has been supported by the Spanish Ministerio de Economía y Competitividad grant MTM2017–89664–P. I acknowledge the doctorate scholarship granted by Programa de Ayudas de Investigación y Desarrollo (PAID), Universitat Politècnica de València.
Jornet Sanz, M. (2020). Mean square solutions of random linear models and computation of their probability density function [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/138394
TESIS
APA, Harvard, Vancouver, ISO, and other styles
9

Starkloff, Hans-Jörg, and Ralf Wunderlich. "Stationary solutions of linear ODEs with a randomly perturbed system matrix and additive noise." Universitätsbibliothek Chemnitz, 2005. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200501335.

Full text
Abstract:
The paper considers systems of linear first-order ODEs with a randomly perturbed system matrix and stationary additive noise. For the description of the long-term behavior of such systems it is necessary to study their stationary solutions. We deal with conditions for the existence of stationary solutions as well as with their representations and the computation of their moment functions. Assuming small perturbations of the system matrix we apply perturbation techniques to find series representations of the stationary solutions and give asymptotic expansions for their first- and second-order moment functions. We illustrate the findings with a numerical example of a scalar ODE, for which the moment functions of the stationary solution still can be computed explicitly. This allows the assessment of the goodness of the approximations found from the derived asymptotic expansions.
APA, Harvard, Vancouver, ISO, and other styles
10

Cao, Liang. "Numerical analysis and multi-precision computational methods applied to the extant problems of Asian option pricing and simulating stable distributions and unit root densities." Thesis, University of St Andrews, 2014. http://hdl.handle.net/10023/6539.

Full text
Abstract:
This thesis considers new methods that exploit recent developments in computer technology to address three extant problems in the area of Finance and Econometrics. The problem of Asian option pricing has endured for the last two decades in spite of many attempts to find a robust solution across all parameter values. All recently proposed methods are shown to fail when computations are conducted using standard machine precision because as more and more accuracy is forced upon the problem, round-off error begins to propagate. Using recent methods from numerical analysis based on multi-precision arithmetic, we show using the Mathematica platform that all extant methods have efficacy when computations use sufficient arithmetic precision. This creates the proper framework to compare and contrast the methods based on criteria such as computational speed for a given accuracy. Numerical methods based on a deformation of the Bromwich contour in the Geman-Yor Laplace transform are found to perform best provided the normalized strike price is above a given threshold; otherwise methods based on Euler approximation are preferred. The same methods are applied in two other contexts: the simulation of stable distributions and the computation of unit root densities in Econometrics. The stable densities are all nested in a general function called a Fox H function. The same computational difficulties as above apply when using only double-precision arithmetic but are again solved using higher arithmetic precision. We also consider simulating the densities of infinitely divisible distributions associated with hyperbolic functions. Finally, our methods are applied to unit root densities. Focusing on the two fundamental densities, we show our methods perform favorably against the extant methods of Monte Carlo simulation, the Imhof algorithm and some analytical expressions derived principally by Abadir. Using Mathematica, the main two-dimensional Laplace transform in this context is reduced to a one-dimensional problem.
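The core ingredient can be illustrated with a few lines of Python using the mpmath library (a hedged sketch on a textbook transform pair, not the thesis code or the Geman-Yor transform): raising the working precision keeps round-off error from polluting the numerical inversion of a Laplace transform.

    from mpmath import mp, invertlaplace, sin

    # Work at 50 significant digits so round-off does not propagate through the
    # numerical inversion (the failure mode described in the abstract).
    mp.dps = 50

    # Known test pair: F(s) = 1 / (s^2 + 1)  <-->  f(t) = sin(t).
    F = lambda s: 1 / (s**2 + 1)

    for t in (1, 5, 10):
        approx = invertlaplace(F, t, method='talbot')
        print(t, approx, abs(approx - sin(t)))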
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Functional expansion methods"

1

Marti, Kurt. Differentiation of probability functions: The transformation method. Neubiberg: Forschungsschwerpunkt Simulation und Optimierung Deterministischer und Stochastischer Dynamischer Systeme, Universität der Bundeswehr München, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gil, Amparo. Numerical methods for special functions. Philadelphia, Pa: Society for Industrial and Applied Mathematics, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Patil, S. H. Asymptotic Methods in Quantum Mechanics: Application to Atoms, Molecules and Nuclei. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Michel, Volker. Lectures on Constructive Approximation: Fourier, Spline, and Wavelet Methods on the Real Line, the Sphere, and the Ball. Boston: Birkhäuser Boston, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Its, A. R., and V. Yu. Novokshenov. The isomonodromic deformation method in the theory of Painlevé equations. Berlin: Springer-Verlag, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Commutative and noncommutative harmonic analysis and applications: AMS Special Session in Memory of Daryl Geller on Wavelet and Frame Theoretic Methods in Harmonic Analysis and Partial Differential Equations, September 22-23, 2012, Rochester Institute of Technology, Rochester, NY. Edited by Azita Mayeli. Providence, Rhode Island: American Mathematical Society, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Patil, S. H. Asymptotic methods in quantum mechanics: Application to atoms, molecules, and nuclei. Berlin: Springer, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bogatyrev, Andrei. Extremal Polynomials and Riemann Surfaces. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gil, Amparo, Javier Segura, and Nico M. Temme. Numerical Methods for Special Functions. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Budelmann, Felix, and Tom Phillips, eds. Textual Events. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198805823.001.0001.

Full text
Abstract:
Recent decades have seen a major expansion in our understanding of how early Greek lyric functioned in its social, political, and ritual contexts. The fundamental role song played in the day-to-day lives of communities, groups, and individuals has been the object of intense study. This volume places its focus elsewhere, and attempts to illuminate poetic effects that cannot be captured in functional terms. Employing a range of interpretative methods, it explores the idea of lyric performances as textual events. Several chapters investigate the pragmatic relationship between real performance contexts and imaginative settings. Others consider how lyric poems position themselves in relation to earlier texts and textual traditions, or discuss the distinctive encounters lyric poems create between listeners, authors, and performers. In addition to studies that analyse individual lyric texts and lyric authors (Sappho, Alcaeus, Pindar), the volume includes treatments of the relationship between lyric and the Homeric Hymns. Building on the renewed concern with the aesthetic in the study of Greek lyric and beyond, Textual Events re-examines the relationship between the poems’ formal features and their historical contexts. Lyric poems are a type of sociopolitical discourse, but they are also objects of attention in themselves. They enable reflection on social and ritual practices as much as they are embedded within them. As well as enacting cultural norms, lyric challenges listeners to think about and experience the world afresh.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Functional expansion methods"

1

Giese, Timothy J., and Darrin M. York. "Density-functional expansion methods: grand challenges." In Highlights in Theoretical Chemistry, 51–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34450-3_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Elnaggar, Hebatalla, Pieter Glatzel, Marius Retegan, Christian Brouder, and Amélie Juhin. "X-ray Dichroisms in Spherical Tensor and Green’s Function Formalism." In Springer Proceedings in Physics, 83–130. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-64623-3_4.

Full text
Abstract:
In this book chapter, our goal is to provide experimentalists and theoreticians with an accessible approach to the measurement or calculation of X-ray dichroisms in X-ray absorption spectroscopy (XAS). We start by presenting the key ideas of different calculation methods such as density functional theory (DFT) and ligand-field multiplet (LFM) theory and discuss the pros and cons of each approach. The second part of the chapter is dedicated to the expansion of the XAS cross section using spherical tensors for electric dipole and quadrupole transitions. This expansion makes it possible to identify a set of linearly independent spectra that represent the smallest number of measurements (or calculations) to be performed on a sample in order to extract all spectroscopic information. Examples of the different dichroic effects which can be expected depending on the type of transitions and on the symmetry of the system are then given.
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Wen L., and Weiming Sun. "Fourier Series Expansions of Functions." In Fourier Methods in Science and Engineering, 11–30. Boca Raton: CRC Press, 2022. http://dx.doi.org/10.1201/9781003194859-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jennings, Byron K. "Gradient Expansions and Quantum Mechanical Extensions of the Classical Phase Space." In Density Functional Methods In Physics, 503–7. Boston, MA: Springer US, 1985. http://dx.doi.org/10.1007/978-1-4757-0818-9_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Dong. "A Expansion Method for DriveMonitor Trace Function." In Advances in Computer, Communication and Computational Sciences, 883–92. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4409-5_78.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yasuda, Muneki. "Empirical Bayes Method for Boltzmann Machines." In Sublinear Computation Paradigm, 277–93. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-4095-7_11.

Full text
Abstract:
The framework of the empirical Bayes method allows the estimation of the values of the hyperparameters in the Boltzmann machine by maximizing a specific likelihood function referred to as the empirical Bayes likelihood function. However, the maximization is computationally difficult because the empirical Bayes likelihood function involves intractable integrations of the partition function. The method presented in this chapter avoids this computational problem by using the replica method and the Plefka expansion, which is quite simple and fast because it does not require any iterative procedures and gives reasonable estimates under certain conditions.
APA, Harvard, Vancouver, ISO, and other styles
7

Du, Yuncheng, and Dongping Du. "Cardiac Image Segmentation Using Generalized Polynomial Chaos Expansion and Level Set Function." In Level Set Method in Medical Imaging Segmentation, 261–88. Boca Raton : Taylor & Francis, 2019.: CRC Press, 2019. http://dx.doi.org/10.1201/b22435-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dos Santos Pacheco, Nicolas, and Dominique Soldati-Favre. "Coupling Auxin-Inducible Degron System with Ultrastructure Expansion Microscopy to Accelerate the Discovery of Gene Function in Toxoplasma gondii." In Methods in Molecular Biology, 121–37. New York, NY: Springer US, 2021. http://dx.doi.org/10.1007/978-1-0716-1681-9_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Varandas, A. J. C. "Intermolecular and Intramolecular Potentials: Topographical Aspects, Calculation, and Functional Representation via A Double Many-Body Expansion Method." In Advances in Chemical Physics, 255–338. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2007. http://dx.doi.org/10.1002/9780470141236.ch2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Aung, Shuh-Wen, Noor Hayaty Abu Kasim, and Thamil Selvee Ramasamy. "Isolation, Expansion, and Characterization of Wharton’s Jelly-Derived Mesenchymal Stromal Cell: Method to Identify Functional Passages for Experiments." In Stem Cells and Aging, 323–35. New York, NY: Springer New York, 2019. http://dx.doi.org/10.1007/7651_2019_242.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Functional expansion methods"

1

Rivera-García, Diego, Luis Angel García-Escudero, Agustín Mayo-Iscar, and Joaquin Ortega. "Stationary Intervals for Random Waves by Functional Clustering of Spectral Densities." In ASME 2020 39th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/omae2020-19171.

Full text
Abstract:
A new time series clustering procedure, based on Functional Data Analysis techniques applied to spectral densities, is employed in this work for the detection of stationary intervals in random waves. Long records of wave data are divided into 30-minute or one-hour segments, and the spectral density of each interval is estimated by one of the standard methods available. These spectra are regarded as the main characteristic of each 30-minute time series for clustering purposes. The spectra are considered as functional data and, after representation on a spline basis, they are clustered by a mixture model method based on a truncated Karhunen-Loève expansion as an approximation to the density function for functional data. The clustering method uses trimming techniques and restrictions on the scatter within groups to reduce the effect of outliers and to prevent the detection of spurious clusters. Simulation examples show that the procedure works well in the presence of noise, and the restrictions on the scatter are effective in avoiding the detection of false clusters. Consecutive time intervals clustered together are considered as a single stationary segment of the time series. An application to real wave data is presented.
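A simplified stand-in for the procedure can be sketched in a few lines of Python (hedged: ordinary k-means on log-spectra replaces the paper's trimmed mixture-model clustering with a Karhunen-Loève expansion, and the wave records are synthetic):

    import numpy as np
    from scipy.signal import welch
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(7)
    fs = 2.0  # sampling frequency in Hz, so 3600 samples = one 30-minute segment

    # Synthetic surface-elevation record: ten 30-minute segments, half narrow-banded
    # around 0.10 Hz and half around 0.15 Hz (placeholder for real wave data).
    def segment(peak_hz, n=3600):
        t = np.arange(n) / fs
        return np.sin(2 * np.pi * peak_hz * t + rng.uniform(0, 2 * np.pi)) \
               + 0.3 * rng.standard_normal(n)

    segments = [segment(0.10) for _ in range(5)] + [segment(0.15) for _ in range(5)]

    # Estimate each segment's spectral density; use the log-spectrum as its feature vector.
    features = np.array([np.log(welch(x, fs=fs, nperseg=512)[1] + 1e-12) for x in segments])

    # Cluster the spectra; consecutive segments sharing a label are merged into one
    # stationary interval (k-means stand-in for the mixture-model method).
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    print(labels)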
APA, Harvard, Vancouver, ISO, and other styles
2

Shirahatti, Uday S., and Pollapragada K. Raju. "A Variational Method for Solving Vibration Problems." In ASME 1993 Design Technical Conferences. American Society of Mechanical Engineers, 1993. http://dx.doi.org/10.1115/detc1993-0253.

Full text
Abstract:
Abstract Rayleigh-Ritz and Galerkin methods are frequently used in engineering to solve boundary value and eigenvalue problems. The success in applying these methods depends entirely on the construction of a variational entity called the functional, and on the choice of a system of elements known as the basis functions. In some cases this greatly narrows down the class of problems to which the above methods may be applied. Here, we present a specific but sufficiently general method, known as the Method of Moments, for constructing the elements going into the expansion of the approximate solution. The Method of Moments has been shown to possess the capability of generating the basis functions successfully. The Method of Moments in its various forms has been widely used in electromagnetism. Due to the generality involved in the construction of these basis functions, the Method of Moments may be readily used to solve a large variety of problems arising in discrete as well as continuous vibrating systems. This idea forms the central theme of this article.
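The abstract concerns how the Method of Moments constructs suitable basis functions; the minimal sketch below shows only the downstream Galerkin step such a basis feeds into, solving -u'' = 1 on (0, 1) with u(0) = u(1) = 0 using a hand-picked sine basis and comparing against the exact solution x(1-x)/2. The basis and the model problem are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Galerkin solution of -u'' = 1 on (0, 1), u(0) = u(1) = 0,
# with basis functions phi_k(x) = sin(k*pi*x).
K = 6
x = np.linspace(0.0, 1.0, 401)
k = np.arange(1, K + 1)

# Closed-form Galerkin system for this basis:
#   a(phi_j, phi_k) = int phi_j' phi_k' dx = (k*pi)^2 / 2 * delta_jk
#   f_k             = int 1 * phi_k dx     = (1 - cos(k*pi)) / (k*pi)
A = np.diag((k * np.pi) ** 2 / 2.0)
f = (1.0 - np.cos(k * np.pi)) / (k * np.pi)
c = np.linalg.solve(A, f)

u_galerkin = np.sin(np.outer(x, k * np.pi)) @ c
u_exact = x * (1.0 - x) / 2.0
print("max error with", K, "basis functions:", np.max(np.abs(u_galerkin - u_exact)))
```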
APA, Harvard, Vancouver, ISO, and other styles
3

Huang, Hsiao-Ying Shadow, Siyao Huang, Taylor Gettys, Peter M. Prim, and Ola L. Harrysson. "A Biomechanical Study of Directional Mechanical Properties of Porcine Skin Tissues." In ASME 2013 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/imece2013-63829.

Full text
Abstract:
Skin is a multilayered composite material composed principally of collagen, elastic fibers, and fibroblasts. The direction-dependent material properties of skin tissue are important for physiological functions such as skin expansion. The current study has developed methods to characterize the directional biomechanical properties of porcine skin tissues. It is observed that skin tissue has a nonlinear, anisotropic biomechanical behavior, with a material stiffness of 378 ± 160 kPa in the preferred-fiber direction and 65.96 ± 40.49 kPa in the cross-fiber direction when stretched above 30% strain equibiaxially. The results from the current study will help optimize functional skin stretching for patients requiring large-surface-area skin grafts and reconstructions due to burns or other injuries.
APA, Harvard, Vancouver, ISO, and other styles
4

Rose, Michael. "Modal Based Correction Methods for the Placement of Piezoceramic Modules." In ASME 2005 International Mechanical Engineering Congress and Exposition. ASMEDC, 2005. http://dx.doi.org/10.1115/imece2005-80789.

Full text
Abstract:
Conventional finite-element programs are able to compute the vibration response of mechanical structures. Increasingly, so-called multi-field problems can be solved as well. For piezoelectric actuators and sensors, electrical degrees of freedom have to be considered in addition to the mechanical ones. The pure actuator effect can also be modelled using the coefficients of thermal expansion. But regarding the optimal placement of flat piezoceramic modules, which couple into the mechanical part through the d31-effect, it proves advantageous to consider them after the computationally expensive modal analysis has been performed. In this paper, this modal coupling approach is described in detail. It introduces an additional modelling error, because the effect of the stiffness and mass of the modules is not considered in the construction of the functional space from which the mode shapes are derived. But owing to the comparatively small contribution of such flat devices to the global mass and stiffness, this additional error can generally be accepted. Furthermore, the error can be reduced to an arbitrarily small amount if the number of retained eigenmodes is increased, and the gain in computational speed is significant. For the calculations, self-written triangle elements with full electro-mechanical coupling, coded completely in MATLAB, have been used. Finally, the optimization procedure for the placement of the piezoceramic modules, including their mass and stiffness, is demonstrated for a test structure.
APA, Harvard, Vancouver, ISO, and other styles
5

Melli, Roberto, Enrico Sciubba, Claudia Toro, and Alessandro Zoli-Porroni. "An Example of Thermo-Economic Optimization of a CCGT by Means of the Proper Orthogonal Decomposition Method." In ASME 2012 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/imece2012-87317.

Full text
Abstract:
This paper presents an application of the Proper Orthogonal Decomposition (POD) technique (also called Karhunen-Loève Decomposition, Principal Component Analysis, or Singular Value Decomposition) to the thermo-economic optimization of a realistic Combined-Cycle Gas Turbine (CCGT) process. The novel inverse-design approach proposed here employs the thermo-economic cost of the two products as the objective function. The proposed procedure does not require the generation of a complete simulated set of results at each iteration step of the optimization, because POD constructs a very accurate approximation to the function described by a certain number of initial simulations, and thus “new” points in design space can be extrapolated without resorting to repeated process simulations. The often taxing computational effort needed to iteratively generate numerical process simulations of incrementally different configurations is therefore substantially reduced by replacing much of it with easy-to-perform matrix operations: a non-negligible but quite small number N of initial process simulations is used to calculate the basis of the POD interpolation and to validate (i.e., extend) the results. Since the accuracy of a POD expansion depends of course on the number N of initial simulations (the “snapshots”), the computational intensity of the method is certainly not negligible; but, as successfully demonstrated in the paper for a realistic CCGT inverse process design problem, the idea that additional full simulations are performed only in the “right direction” indicated by the gradient of the objective function in the solution space leads to a successful strategy at a substantially reduced computational intensity. This “economy” with respect to other classical “optimization” methods is basically due to the capability of the POD procedure to identify the most important “modes” in the functional expansion of the vector basis consisting of a subset of the design parameters used in the evaluation of the objective function.
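A generic POD surrogate along the lines sketched in the abstract might look like the snippet below; a cheap analytic toy function stands in for the CCGT process simulator, and radial-basis interpolation of the modal coefficients stands in for the paper's gradient-guided refinement, so every function, name, and parameter here is an assumption rather than the authors' implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical stand-in for an expensive process simulation: it maps a design
# parameter vector x to a "state" vector (e.g. stream properties, costs).
def expensive_simulation(x):
    grid = np.linspace(0.0, 1.0, 200)
    return np.sin(3.0 * x[0] * grid) + x[1] * grid**2

rng = np.random.default_rng(2)
X_snap = rng.uniform(0.2, 1.0, size=(15, 2))           # N initial design points
snapshots = np.array([expensive_simulation(x) for x in X_snap])

# POD: SVD of the mean-centred snapshot matrix; keep the dominant modes.
mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
modes = Vt[:4]                                         # truncated POD basis
coeffs = (snapshots - mean) @ modes.T                  # modal coefficients

# Interpolate the coefficients over the design space, so "new" design points
# can be evaluated without running another full simulation.
coeff_model = RBFInterpolator(X_snap, coeffs)

def pod_predict(x):
    return mean + coeff_model(np.atleast_2d(x))[0] @ modes

x_new = np.array([0.55, 0.7])
err = np.linalg.norm(pod_predict(x_new) - expensive_simulation(x_new))
print("POD reconstruction error at a new design point:", err)
```

In the paper's setting the surrogate would be queried inside the optimizer, and additional full simulations would be added only in the direction indicated by the gradient of the objective function, as described above.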
APA, Harvard, Vancouver, ISO, and other styles
6

Yu, Wenbin. "A Variational-Asymptotic Cell Method for Periodically Heterogeneous Materials." In ASME 2005 International Mechanical Engineering Congress and Exposition. ASMEDC, 2005. http://dx.doi.org/10.1115/imece2005-79611.

Full text
Abstract:
A new cell method, the variational-asymptotic cell method (VACM), is developed to homogenize periodically heterogeneous anisotropic materials based on the variational asymptotic method. The variational asymptotic method is a mathematical technique that synthesizes the merits of both variational methods and asymptotic methods by carrying out the asymptotic expansion of the functional governing the physical problem. Taking advantage of the small parameter (the periodicity in this case) inherent in heterogeneous solids, we can use the variational asymptotic method to systematically obtain the effective material properties. The main advantages of VACM are that: a) it does not rely on ad hoc assumptions; b) it has the same rigor as mathematical homogenization theories; c) its numerical implementation is straightforward because of its variational nature; d) it can calculate different material properties in different directions simultaneously without multiple analyses. To illustrate the application of VACM, a binary composite with two orthotropic layers is studied analytically, and a closed-form solution is given for the effective stiffness matrix and the corresponding effective engineering constants. It is shown that VACM can reproduce the results of a mathematical homogenization theory.
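For orientation only, the snippet below computes the classical Voigt and Reuss estimates for a two-layer composite; these elementary bounds bracket the direction-dependent effective moduli that a rigorous scheme such as VACM or a mathematical homogenization theory would deliver for a layered medium. It is not an implementation of VACM, and the layer properties and volume fractions are hypothetical.

```python
# Two-layer composite with volume fractions v1 + v2 = 1 and layer moduli E1, E2.
E1, E2 = 70e9, 3e9          # hypothetical layer Young's moduli [Pa]
v1, v2 = 0.4, 0.6           # hypothetical volume fractions

E_voigt = v1 * E1 + v2 * E2             # uniform-strain (in-plane) estimate
E_reuss = 1.0 / (v1 / E1 + v2 / E2)     # uniform-stress (through-thickness) estimate

print(f"Voigt (upper) estimate: {E_voigt / 1e9:.2f} GPa")
print(f"Reuss (lower) estimate: {E_reuss / 1e9:.2f} GPa")
```

The spread between the two numbers is one way to see why different directions of a layered composite need different effective properties, which VACM obtains simultaneously from a single analysis.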
APA, Harvard, Vancouver, ISO, and other styles
7

Panzini, G., E. Sciubba, and A. Zoli-Porroni. "An Improved Proper Orthogonal Decomposition Technique for the Solution of a 2-D Rotor Blade Inverse Design Problem Using the Entropy Generation Rate as the Objective Function." In ASME 2007 International Mechanical Engineering Congress and Exposition. ASMEDC, 2007. http://dx.doi.org/10.1115/imece2007-43461.

Full text
Abstract:
This paper discusses the optimization of a 2D rotor profile attained via a novel inverse-design approach that uses the entropy generation rate as the objective function. A fundamental methodological novelty of the proposed procedure is that it does not require the generation of the fluid-dynamic fields at each iteration step of the optimisation, because the objective function is computed by a functional extrapolation based on the Proper Orthogonal Decomposition (POD) method. With this new method, the (often excessively taxing) computational cost for repeated numerical CFD simulations of incrementally different geometries is substantially decreased by reducing much of it to easy-to-perform matrix-multiplications: CFD simulations are used only to calculate the basis of the POD interpolation and to validate (i.e., extend) the results. As the accuracy of a POD expansion critically depends on the allowable number of CFD simulations, our methodology is still rather computationally intensive: but, as successfully demonstrated in the paper for an airfoil profile design problem, the idea that, given a certain number of necessary initial CFD simulations, additional full simulations are performed only in the “right direction” indicated by the gradient of the objective function in the solution space leads to a successful strategy, and substantially decreases the computational intensity of the solution. This “economy” with respect to other classical “optimization” methods is basically due to the reduction of the complete CFD simulations needed for the generation of the fluid-dynamic fields on which the objective function is calculated.
APA, Harvard, Vancouver, ISO, and other styles
8

Klein, Steven A., Aleksandar Aleksov, Vijay Subramanian, Rajendra Dias, Pramod Malatkar, and Ravi Mahajan. "Mechanical Testing for Stretchable Electronics." In ASME 2016 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/imece2016-68215.

Full text
Abstract:
Stretchable electronics have been a subject of increased research over the past decade [1–3]. Although stretchable electronic devices are a relatively new area for the semiconductor/electronics industries, recent market research indicates the market could be worth more than 900 million dollars by 2023 [4]. At CES (Consumer Electronics Show) in January 2016, two commercial patches were announced which attach to the skin to measure information about the user’s vitals and environmental conditions [5]. One of these measures the sun exposure of the user with a UV sensitive dye — which can communicate with the user’s cell phone to track the user’s sun exposure. Another device is a re-usable flexible patch which measures cardiac activity, muscle activity, galvanic skin response, and user’s motion. These are just two examples of the many devices that will be developed in the coming years for consumer and medical use. This paper investigates mechanical testing methods designed to test the stretching capabilities of potential products across the electronics industry to help quantify and understand the mechanical integrity, response, and the reliability of these devices. Typically, the devices consist of stiff modules connected by stretchable traces [6]. They require electrical and mechanical connectivity between the modules to function. In some cases, these devices will be subject to bi-axial and/or cyclic mechanical strain, especially for wearable applications. The ability to replicate these mechanical strains and understand their effect on the function of the devices is critical to meet performance, process and reliability requirements. There has been a test method proposed recently for harsh / high-rate testing (shock) of stretchable electronics [7]. The focus of the approach presented in the paper aims to simulate expected user conditions in the consumer and medical fields, whereas earlier research was focused on shock testing. In this paper, methods for simulating bi-axial and out-of-plane strains similar to what may occur in a wearable device on the human body are proposed. Electrical and / or optical monitoring (among other methods) can be used to determine cycles to failure depending on expected failure modes. Failure modes can include trace damage in stretchable regions, trace damage in functional component regions, or bulk stretchable material damage, among others. Three different methods of applying mechanical strain are described, including a stretchable air bladder method, membrane test method, and lateral expansion method. This work will describe a prototype of the air bladder method with initial results of the testing for example devices. The system utilizes an expandable bladder to roughly simulate the expansion of muscles in the human body. Besides strain and # of cycles, other variables such as humidity, temperature, ultraviolet exposure, and others can be utilized to determine their effect on the mechanical and electrical reliability of the devices.
APA, Harvard, Vancouver, ISO, and other styles
9

Li, Yunzhao, Hongchun Wu, Liangzhi Cao, and Qichang Chen. "Exponential Function Expansion Nodal Diffusion Method." In 18th International Conference on Nuclear Engineering. ASMEDC, 2010. http://dx.doi.org/10.1115/icone18-29447.

Full text
Abstract:
An exponential function expansion nodal diffusion method is proposed to handle diffusion calculations in unstructured geometry. The transverse integration technique is widely used in nodal methods for regular geometries, such as rectangular and hexagonal, but is unsuitable for arbitrary triangular geometry because of a mathematical singularity. In this paper, the nodal response matrix is derived by expanding the detailed nodal flux distribution into a sum of exponential functions, and the nodal balance equation is obtained by exact integration over the polygonal node. Numerical results illustrate that the exponential function expansion nodal method with rectangular and triangular blocks can solve the neutron diffusion equation in both regular and irregular geometries.
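In one dimension the underlying idea reduces to representing the within-node flux by the exponentials exp(±x/L); the sketch below, a hedged illustration with made-up cross sections rather than the authors' polygonal response-matrix formulation, solves a one-group slab problem with that expansion and checks it against a fine finite-difference solution.

```python
import numpy as np

# One-group, homogeneous 1-D slab: -D*phi'' + Sig_a*phi = S, phi(0) = phi(a) = 0.
D, Sig_a, S, a = 1.2, 0.03, 1.0, 50.0     # hypothetical data (cm, cm^-1)
L = np.sqrt(D / Sig_a)                    # diffusion length

# Exponential expansion: phi(x) = c1*exp(x/L) + c2*exp(-x/L) + S/Sig_a,
# with c1, c2 fixed by the two boundary conditions.
A = np.array([[1.0, 1.0],
              [np.exp(a / L), np.exp(-a / L)]])
c1, c2 = np.linalg.solve(A, [-S / Sig_a, -S / Sig_a])

def phi_exp(x):
    return c1 * np.exp(x / L) + c2 * np.exp(-x / L) + S / Sig_a

# Reference: fine-mesh finite differences for the same problem.
n = 800
x = np.linspace(0.0, a, n + 2)
h = x[1] - x[0]
M = (np.diag(np.full(n, 2.0 * D / h**2 + Sig_a))
     + np.diag(np.full(n - 1, -D / h**2), 1)
     + np.diag(np.full(n - 1, -D / h**2), -1))
phi_fd = np.linalg.solve(M, np.full(n, S))

print("max |exponential expansion - finite difference|:",
      np.max(np.abs(phi_exp(x[1:-1]) - phi_fd)))
```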
APA, Harvard, Vancouver, ISO, and other styles
10

Svoboda, Jiří, and Vladislav Kocián. "Framework for Virtual and Physical Testing of Automated Vehicle Systems." In FISITA World Congress 2021. FISITA, 2021. http://dx.doi.org/10.46720/f2020-acm-046.

Full text
Abstract:
Massive expansion and implementation of Advanced Driver Assistance Systems and the advent of Highly Automated Driving functions bring huge challenges in terms of design and development, but also in function validation and the certification process, which is a limiting factor for their market introduction. To ensure the safety of such systems, whose complexity is rapidly growing, it is essential to evaluate the functionality of automated driving systems within the mandatory certification before they are deployed on the road. And after their deployment, they must be subject to periodic technical inspection during their life cycle as well. The number of regulations and standards addressing the safety of AD functions gradually increases, but current safety standards and regulations still have to be adapted and enhanced. For highly automated driving functions and AVs that do not require permanent monitoring by the driver, a theoretically infinite number of possible traffic situations that a self-driving car could encounter needs to be tested. One promising method to overcome this is the scenario-based approach focused on critical, dangerous and extreme situations. Such an approach ensures the repeatability and robustness of an approval process if it is supported by a significant sample of harmonized scenarios. Since covering this test effort with conventional physical driving tests is no longer feasible, virtualization of testing methods by means of computer simulation needs to be emphasized. To meet the challenges described above, TÜV SÜD is developing a methodology for scenario-based evaluation of AD functionality as a supplement to either the development or the future certification of automated driving systems. The methodology combines a virtual-based approach and physical testing and guarantees repeatability of test conditions. Virtual-based testing is provided by an in-house simulation toolchain with an open architecture. The toolchain consists of functional blocks such as: a database of standardized scenarios, a virtual environment model, high-fidelity physics-based sensor simulation, a model of vehicle dynamics, control functions and algorithms, and automated and standardized post-processing and reporting. Physical testing provides real-world data measurement used, among other purposes, for validation of the simulation toolchain and its relevant functional blocks. Physical testing is performed on our own test track using typical equipment such as driving robots, an inertial measurement unit, a guided soft target, soft VRU targets, a master control station and others. In the presentation, an overview of the current state of the methodology is given and the workflow is demonstrated for a specific operational design domain (ODD). The architecture of the simulation toolchain is described, and it is explained how the functional blocks are embedded into the overall architecture and how they interact with each other. The trustworthiness of virtual test execution is discussed by means of a comparison and correlation between real-world and virtual-simulation measurement results for a specific operational design domain.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Functional expansion methods"

1

Banai, Menachem, and Gary Splitter. Molecular Characterization and Function of Brucella Immunodominant Proteins. United States Department of Agriculture, July 1993. http://dx.doi.org/10.32747/1993.7568100.bard.

Full text
Abstract:
The BARD project was a continuation of a previously BARD-funded research project. It was aimed at characterization of the 12 kDa immunodominant protein and subsequently the cloning and expression of the gene in E. coli. Additional immunodominant proteins were sought among genomic B. abortus expression library clones using a T-lymphocyte proliferation assay as a screening method. The 12 kDa protein was identified as the L7/L12 ribosomal protein, demonstrating for the first time the role a structural protein may play in the development of the host's immunity against the organism. The gene was cloned from B. abortus (USA) and B. melitensis (Israel), showing identity of the nucleotide sequence between the two species. Further subcloning allowed expression of the protein in E. coli. While the native protein was shown to have DTH antigenicity, its recombinant analog lacked this activity. In contrast, the two proteins elicited lymphocyte proliferation in experimental murine brucellosis. CD4+ cells of the Th1 subset predominantly responded to this protein, demonstrating the development of protective immunity (gamma-IFN and IL-2) in the host. Similar results were obtained with bovine Brucella-primed lymphocytes. UvrA, GroEL and GroES were additional Brucella immunodominant proteins that demonstrated MHC class II antigenicity. The role cytotoxic cells play in the clearance of Brucella cells was shown using knock-out mice defective in either their CD4+ or CD8+ cells. CD4+-defective mice were able to clear Brucella as fast as normal mice did. In contrast, mice that were defective in their CD8+ cells could not clear the organisms effectively, proving the importance of this cell subtype in the development of protective immunity. The understanding of the host's immune response and the expansion of the panel of Brucella immunodominant proteins opened new avenues in vaccine design. It is now feasible to selectively use immunodominant proteins either as a subunit vaccine to fortify the immunity of older animals or as diagnostic reagents for serological surveillance.
APA, Harvard, Vancouver, ISO, and other styles
2

Halych, Valentyna. SERHII YEFREMOV’S COOPERATION WITH THE WESTERN UKRAINIAN PRESS: MEMORIAL RECEPTION. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11055.

Full text
Abstract:
The subject of the study is the cooperation of S. Efremov with Western Ukrainian periodicals as a page in the history of Ukrainian journalism that covers the relationship between journalists and scholars of Eastern and Western Ukraine at the turn of the XIX-XX centuries. Research methods (biographical, historical, comparative, axiological, statistical, discursive) support the comprehensive development of the article. As a result of the research, the origins of Ukrainocentrism in the personality of S. Efremov were clarified; his persona as a public figure, journalist, publisher and literary critic is shown to be multifaceted; and, taking into account the specifics of the memoir genre and the historical context, the turning points in the destiny of the memoirist are interpreted, revealing his cooperation with Western Ukrainian magazines and newspapers. The publications ‘Zoria’, ‘Narod’, ‘Pravda’, ‘Bukovyna’ and ‘Dzvinok’, which secretly made their way into sub-Russian Ukraine, became for S. Efremov a spiritual basis for understanding the specifics of the national (Ukrainian) mass media, the ideas of education in the culture of Ukraine at the end of the XIX century, its territorial integrity, and state independence. S. Efremov's memoirs on his cooperation with the iconic Galician journals ‘Notes of the Shevchenko Scientific Society’ and ‘Literary-Scientific Bulletin’ testify to an important stage in the formation of the author's worldview, the expansion of the genre boundaries of his journalism, and his active development as a literary critic. S. Efremov collaborated most fruitfully and for the longest time with the Literary-Scientific Bulletin, and he was impressed by the democratic position of this publication. The author's comments reveal a long-running controversy over the publication of a review of the new edition of Kobzar and thematically related discussions around his other literary criticism, in which the talent of the demanding critic was forged. S. Efremov steadfastly defended the main principles of literary criticism: objectivity and freedom of the author's thought. The names of the allies of the Ukrainian idea (L. Skochkovskyi, O. Lototskyi, O. Konyskyi, P. Zhytskyi, M. Hrushevskyi) unfold in S. Efremov's memoirs in multifaceted portrait descriptions and function as historical and cultural facts that document the pages of the author's biography and record his activities in space and time. The results of the study give grounds to characterize S. Efremov as the first professional Ukrainian-speaking journalist.
APA, Harvard, Vancouver, ISO, and other styles