Dissertations / Theses on the topic 'Transformation methods'

To see the other types of publications on this topic, follow the link: Transformation methods.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Transformation methods.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Chin, Wei Ngan. "Automatic methods for program transformation." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/47806.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bandarian, Ellen. "Linear transformation methods for multivariate geostatistical simulation." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2008. https://ro.ecu.edu.au/theses/191.

Full text
Abstract:
Multivariate geostatistical techniques take into account the statistical and spatial relationships between attributes but can be inferentially and computationally expensive. One way to circumvent these issues is to transform the spatially correlated attributes into a set of decorrelated factors for which the off-diagonal elements of the spatial covariance matrix are zero. This requires the derivation of a transformation matrix that exactly or approximately diagonalises the spatial covariance matrix for all separation distances. The resultant factors can then be analysed using the more straightforward univariate techniques. This thesis is concerned with the investigation of linear decorrelation methods whereby the resulting factors are linear combinations of the original attributes.
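The simplest special case of such a linear decorrelation is an eigendecomposition of the covariance matrix at lag zero (PCA); a minimal sketch follows, with synthetic data standing in for real attributes. The thesis's methods must also approximately diagonalise the covariance at nonzero separation distances, which this sketch does not attempt.

```python
import numpy as np

# Decorrelate two correlated attributes at lag zero (PCA-style sketch).
rng = np.random.default_rng(0)
Z = rng.multivariate_normal([0, 0], [[2.0, 1.2], [1.2, 1.0]], size=500)

C = np.cov(Z, rowvar=False)          # covariance at zero separation
eigvals, A = np.linalg.eigh(C)       # orthogonal transformation matrix A
F = Z @ A                            # factors: linear combinations of attributes

print(np.round(np.cov(F, rowvar=False), 6))  # off-diagonal entries ~ 0
```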
APA, Harvard, Vancouver, ISO, and other styles
3

Beach, Nicholas James. "Metathesis Catalysts in Tandem Catalysis: Methods and Mechanisms for Transformation." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/22731.

Full text
Abstract:
The ever-worsening environmental crisis has stimulated development of less wasteful “green” technologies. To this end, tandem catalysis enables multiple catalytic cycles to be performed within a single reaction vessel, thereby eliminating intermediate processing steps and reducing solvent waste. Assisted tandem catalysis employs suitable chemical triggers to transform the initial catalyst into new species, thereby providing a mechanism for “switching on” secondary catalytic activity. This thesis demonstrates the importance of highly productive secondary catalysts through a comparative hydrogenation study involving prominent hydrogenation catalysts of tandem ring-opening metathesis polymerization (ROMP)-hydrogenation, of which hydridocarbonyl species proved superior. This thesis illuminates optimal routes to hydridocarbonyls under conditions relevant to our ROMP-hydrogenation protocol, using Grubbs benzylidenes as isolable proxies for ROMP-propagating alkylidene species. Analogous studies of ruthenium methylidenes and ethoxylidenes illuminate optimal routes to hydridocarbonyls following ring-closing metathesis (RCM) and metathesis quenching, respectively. The formation of unexpected side products using aggressive chemical triggers is also discussed, and emphasizes the need for cautious design of the post-metathesis trigger phase.
APA, Harvard, Vancouver, ISO, and other styles
4

Sophocleous, Christodoulos. "Transformation methods in the study of nonlinear partial differential equations." Thesis, University of Nottingham, 1991. http://eprints.nottingham.ac.uk/11133/.

Full text
Abstract:
Transformation methods are perhaps the most powerful analytic tool currently available in the study of nonlinear partial differential equations. Transformations may be classified into two categories: category I includes transformations of the dependent and independent variables of a given partial differential equation and category II additionally includes transformations of the derivatives of the dependent variables. In part I of this thesis our principal attention is focused on transformations of category I, namely point transformations. We mainly deal with groups of transformations. These groups enable us to derive similarity transformations which reduce the number of independent variables of a certain partial differential equation. Firstly, we introduce the concept of transformation groups and in the analysis which follows three methods for determining transformation groups are presented and the corresponding similarity transformations are derived. We also present a direct method for determining similarity transformations. Finally, we classify all point transformations for a particular class of equations, namely the generalised Burgers equation. Bäcklund transformations belong to category II and they are investigated in part II. The first chapter is an introduction to the theory of Bäcklund transformations. Here two different classes of Bäcklund transformations are defined and appropriate examples are given. These two classes are considered in the ensuing analysis, where we search for Bäcklund transformations for specific classes of partial differential equations.
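For readers new to the idea, here is a standard worked example of a similarity reduction (a textbook case of the thesis's subject matter, not a passage from the thesis itself):

```latex
For the Burgers equation $u_t + u\,u_x = \nu u_{xx}$, the one-parameter scaling group
\[
  \bar{x} = \lambda x, \qquad \bar{t} = \lambda^{2} t, \qquad \bar{u} = \lambda^{-1} u
\]
leaves the equation invariant. Its invariants $\xi = x/\sqrt{t}$ and
$f(\xi) = \sqrt{t}\,u$ give the similarity ansatz $u(x,t) = t^{-1/2} f(x/\sqrt{t})$,
which reduces the PDE in two independent variables to the ODE
\[
  \nu f'' - f f' + \tfrac{1}{2}\,\xi f' + \tfrac{1}{2}\,f = 0 .
\]
```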
APA, Harvard, Vancouver, ISO, and other styles
5

Fears, Justin. "Alternative School Leadership Transformation: A Mixed-Methods Evaluation of Outcomes." Thesis, Lindenwood University, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10174303.

Full text
Abstract:

This study was a documentation and assessment of Beta Academy Alternative School's (pseudonym) transition to a newly introduced educational model/leadership paradigm and an examination of the student educational outcomes resulting from the leadership change. As a first-year administrator, the researcher undertook the task of transforming an underperforming alternative education program by targeting areas of identified deficiency and/or concern (graduation rates, attendance, and discipline).

In this study, the researcher executed a mixed-methods evaluation of the new educational model in an effort to determine contributions to success, potential barriers to change, and the characteristics associated with both, together with a quantitative analysis that would either support or fail to support the researcher's hypotheses.

The first goal of the study stated that following the implementation of the new model for alternative education, building discipline referrals would decrease by 10% per semester, as compared to previous referral data. The results indicated a 280% decrease in student referrals, thus illustrating a dramatic and statistically significant decrease.

The second goal stated that, following implementation of the new educational and leadership paradigms, graduation rates (as a percentage of total seniors) would increase or stay within 2% of the previous year's rates. A z-test for difference in proportions found a change in graduation rates of less than 1%, thus supporting the graduation-rate goal.

The last goal outlined in the study stated that following the implementation of the new model for alternative education, building attendance would increase by 30% per semester, as determined by ADA hours and compared to previous attendance data. Upon calculation, it was determined that attendance increased by 36.2%, indicating that the increase was statistically significant and met the outlined goal for attendance improvement.

The qualitative component of the study used questionnaire responses to gauge stakeholder involvement and perceptions associated with the new educational model. The feedback was positive and indicated that the measured criteria were impactful and effective in the areas of fidelity, implementation, development, and attainment of desired goals.
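For reference, a minimal sketch of the z-test for a difference in two proportions used for the graduation-rate goal; the counts below are hypothetical placeholders, not the study's data.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-test for a difference in two proportions, using the pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts (NOT the study's data): graduates out of total seniors
# in the year before and the year after the new model was implemented.
print(two_proportion_z(41, 50, 43, 52))  # |z| < 1.96 -> no significant change
```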

APA, Harvard, Vancouver, ISO, and other styles
6

Fuchs, Christoph [Verfasser]. "Agile Methods in the Digital Transformation – Exploration of the Organizational Processes of an Agile Transformation / Christoph Fuchs." Berlin : epubli, 2019. http://d-nb.info/1202665128/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ngounda, Edgard. "Numerical Laplace transformation methods for integrating linear parabolic partial differential equations." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/2735.

Full text
Abstract:
Thesis (MSc (Applied Mathematics))--University of Stellenbosch, 2009.
In recent years the Laplace inversion method has emerged as a viable alternative method for the numerical solution of PDEs. Effective methods for the numerical inversion are based on the approximation of the Bromwich integral. In this thesis, a numerical study is undertaken to compare the efficiency of the Laplace inversion method with more conventional time integrator methods. In particular, we consider the method-of-lines based on MATLAB's ODE15s and the Crank-Nicolson method. Our studies include an introductory chapter on the Laplace inversion method. Then we proceed with spectral methods for the space discretization, where we introduce the interpolation polynomial and the concept of a differentiation matrix to approximate derivatives of a function. Next, the numerical differentiation formulas (NDFs) implemented in ODE15s, as well as the well-known second order Crank-Nicolson method, are derived. In the Laplace method, to compute the Bromwich integral, we use the trapezoidal rule over a hyperbolic contour. Enhancements to the computational efficiency of these methods include the LU as well as the Hessenberg decompositions. In order to compare the three methods, we consider two criteria: the number of linear system solves per unit of accuracy and the CPU time per unit of accuracy. The numerical results demonstrate that the new method, i.e., the Laplace inversion method, converges at an exponential rate compared to the linear convergence rate of the ODE15s and Crank-Nicolson methods. This exponential convergence leads to high accuracy with only a few linear system solves. Similarly, the results show that in terms of computational cost the Laplace inversion method is more efficient than ODE15s and the Crank-Nicolson method. Finally, we apply the inversion method, with satisfactory results, to the axial dispersion model and the heat equation in two dimensions.
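A minimal sketch of the contour-quadrature idea described above: the Bromwich integral is discretised by the trapezoidal rule on a hyperbolic contour. The contour parameters follow the tuning of Weideman and Trefethen (2007) and are assumptions, not necessarily the thesis's exact choices.

```python
import numpy as np

def laplace_invert(F, t, N=20):
    """Trapezoidal rule for the Bromwich integral on the hyperbolic contour
    s(u) = mu*(1 + sin(i*u - alpha)); parameter values are assumptions in the
    style of Weideman and Trefethen (2007)."""
    alpha, h = 1.1721, 1.0818 / N     # contour angle and quadrature step
    mu = 4.4921 * N / t               # contour scale, tied to the time t
    u = h * np.arange(-N, N + 1)      # equispaced nodes on the real line
    s = mu * (1.0 + np.sin(1j * u - alpha))
    ds = mu * 1j * np.cos(1j * u - alpha)
    return (h / (2j * np.pi) * np.sum(np.exp(s * t) * F(s) * ds)).real

# Known transform pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
print(laplace_invert(lambda s: 1.0 / (s + 1.0), t=1.0), np.exp(-1.0))
```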
APA, Harvard, Vancouver, ISO, and other styles
8

Smith, Donna Lee. "Development of novel metal-catalysed methods for the transformation of Ynamides." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8895.

Full text
Abstract:
I. Rhodium-Catalysed Carbometalation of Ynamides using Organoboron Reagents. As an expansion of existing procedures for the carbometalation of ynamides, it was discovered that [Rh(cod)(MeCN)2]BF4 successfully promotes the carbometalation of ynamides with organoboron reagents. A variety of organoboron reagents were found to be suitable for this reaction, but mostly the use of arylboronic acids was explored. The developed methodology provides β,β-disubstituted enamide products in a regio- and stereocontrolled manner. II. Palladium-Catalysed Hydroacyloxylation of Ynamides. In the presence of palladium(II) acetate, ynamides successfully underwent a hydroacyloxylation reaction with a variety of carboxylic acids. This carboxylic acid addition occurred highly regio- and stereoselectively to provide α-acyloxyenamides. Applications of the α-acyloxyenamide products were also investigated. III. Rhodium-Catalysed [2+2] Cycloaddition of Ynamides with Nitroalkenes. A novel rhodium catalyst system has been developed in order to promote the [2+2] cycloaddition reaction between ynamides and nitroalkenes. The reaction provides cyclobutenamide products and was diastereoselective in favour of the trans cyclobutenamide. Both the ynamide scope and the nitroalkene scope of the reaction have been explored.
APA, Harvard, Vancouver, ISO, and other styles
9

Rosas, Martins Sara. "Development of genetic control methods in two lepidopteran species." Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.711625.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Dobrin, Radu. "Transformation methods for off-line schedules to attributes for fixed priority scheduling /." Västerås : Mälardalen University, 2003. http://www.mrtc.mdh.se/publications/0540.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Sirisuk, Phaophak. "Transformation methods and partial prior information for blind system identification and equalisation." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326273.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

McCarthy, Aidan. "Digital transformation in education: A mixed methods study of teachers and systems." Professional Doctorate thesis, Murdoch University, 2020. https://researchrepository.murdoch.edu.au/id/eprint/56439/.

Full text
Abstract:
The growth in digital transformation in many societies is outpacing its uptake in education. Leaders in education are seeking guidance about best practices to achieve their transformation goals, from mobile innovation for classroom teachers to system-wide digital transformational change. The goal of this thesis is to offer insights, strategies and guidance for education leaders as they implement digital technologies for the purpose of transforming teaching, learning and administration. The mixed methods study utilised teacher interviews and surveys in a hospital school to gather data for descriptive statistics and inductive analysis, together with qualitative thematic content analysis of existing industry and education digital transformation frameworks. The findings are presented in three articles, the first two from the hospital school setting. The focus of Paper One is teachers' professional learning needs to enable effective mobile technology integration in a hospital school setting. Paper Two examines the effectiveness of a customised professional development program for teachers to facilitate integration of mobile technologies with digital pedagogies. The findings of the hospital school-based research identified three types of teacher professional learning needs to enable the effective use of mobile technology: technological, pedagogical, and personal support. Participation in a customised professional development program resulted in notable improvement in hospital teachers' perceived preparedness to use mobile technology to transform pedagogical practices. Furthermore, technology needs were significantly impacted as teachers gained confidence and collaborated as a learning community. Paper Three used thematic content analysis to identify critical components that provide guidance for education leaders embarking on digital transformation, recognising four key digital transformation needs: leadership, people, experience, and technology. The thesis affirms that identifying the needs of key stakeholders is a fundamental first step when embarking on transformative initiatives in education, and offers guidance on developing a coherent strategy that addresses drivers for scalable and sustainable change.
APA, Harvard, Vancouver, ISO, and other styles
13

Farhanieh, Arman. "Investigation on methods to improve heat load prediction of the SGT-600 gas turbine." Thesis, Linköpings universitet, Mekanisk värmeteori och strömningslära, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-124552.

Full text
Abstract:
In modern gas turbines, where inlet gas temperatures are raised to increase the work output, accurate aero-thermal analysis has become of vital importance. These analyses are required to predict temperatures throughout the turbine, to predict the thermal stresses, and to estimate the cooling required for each component. In the past 20 years, computational fluid dynamics (CFD) methods have become a powerful tool for aero-thermal analysis. For reasons including numerical limitations, flow complications caused by blade row interactions, and the effect of film cooling, simple steady-state CFD methods may produce inaccurate predictions. Even though transient simulations can improve accuracy, they also greatly increase simulation time and cost. Therefore, new methods are constantly being developed to increase accuracy while keeping computational costs relatively low. Investigating some of these methods is one of the main purposes of this study. A simplification long applied in gas turbine simulations has been the omission of cooling cavities. Another part of this thesis therefore focuses on the effect of cooling cavities and the importance of including them in the domain. All transient and steady-state simulations were examined for two cases, a simplified case and a detailed case, and the results were compared to experimental measurements to evaluate the importance of the cavities' presence in the model. The software used to perform all simulations is the commercial code ANSYS CFX 15. The findings suggest that even though including cooling cavities improves the results, the simulations should be run as transient. One important finding was that when performing transient simulations, especially with the Time Transformation method, not only is the pitch ratio between every subsequent blade row important, but the pitch ratio between the stators is also highly influential on the accuracy of the results.
APA, Harvard, Vancouver, ISO, and other styles
14

Slaymaker, Mark Arthur. "The formalisation and transformation of access control policies." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:179cd9d2-0547-42b7-84a0-690bc4478bfb.

Full text
Abstract:
Increasing amounts of data are being collected and stored relating to every aspect of an individual's life, ranging from shopping habits to medical conditions. This data is increasingly being shared for a variety of reasons, from providing vast quantities of data to validate the latest medical hypothesis, to supporting companies in targeting advertising and promotions to individuals that fit a certain profile. In such cases, the data being used often comes from multiple sources --- with each of the contributing parties owning, and being legally responsible for, their own data. Within such models of collaboration, access control becomes important to each of the individual data owners. Although they wish to share data and benefit from information that others have provided, they do not wish to give away the entirety of their own data. Rather, they wish to use access control policies that give them control over which aspects of the data can be seen by particular individuals and groups. Each data owner will have access control policies that are carefully crafted and understood --- defined in terms of the access control representation that they use, which may be very different from the model of access control utilised by other data owners or by the technology facilitating the data sharing. Achieving interoperability in such circumstances would typically require the rewriting of the policies into a uniform or standard representation --- which may give rise to the need to embrace a new access control representation and/or the utilisation of a manual, error-prone translation. In this thesis we propose an alternative approach, which embraces heterogeneity, and establishes a framework for automatic transformations of access control policies. This has the benefit of allowing data owners to continue to use their access control paradigm of choice. Of course, it is important that the data owners have some confidence in the fact that the new, transformed, access control policy representation accurately reflects their intentions. To this end, the use of tools for formal modelling and analysis allows us to reason about the translation, and demonstrate that the policies expressed in both representations are equivalent under access control requests; that is, for any given request both access control mechanisms will give an equivalent access decision. For the general case, we might propose a standard intermediate access control representation with transformations to and from each access control policy language of interest. However, for the purpose of this thesis, we have chosen to model the translation between role-based access control (RBAC) and the XML-based policy language, XACML, as a proof of concept of our approach. In addition to the formal models of the access control mechanisms and the translation, we provide, by way of a case study, an example of an implementation which performs the translation. The contributions of this thesis are as follows. First, we propose an approach to resolving issues of authorisation heterogeneity within distributed contexts, with the requirements being derived from nearly eight years of work in developing secure, distributed systems. Our second contribution is the formal description of two popular approaches to access control: RBAC and XACML. Our third contribution is the development of an Alloy model of our transformation process.
Finally, we have developed an application that validates our approach, and supports the transformation process by allowing policy writers to state, with confidence, that two different representations of the same policy are equivalent.
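A toy illustration of the equivalence criterion, with hypothetical users and permissions (the thesis works with full RBAC and XACML plus an Alloy model; this sketch only shows what "equivalent decisions for any request" means): an RBAC policy and a "translated" flat permission set are checked to agree on every request in a finite request space.

```python
from itertools import product

# Toy RBAC policy (hypothetical data, not from the thesis).
user_roles = {"alice": {"clinician"}, "bob": {"auditor"}}
role_perms = {"clinician": {("record", "read"), ("record", "write")},
              "auditor": {("record", "read")}}

def rbac_decision(user, resource, action):
    return any((resource, action) in role_perms[r] for r in user_roles[user])

# A "translated" flat policy: the set of all permitted requests.
flat_policy = {(u, res, act)
               for u, roles in user_roles.items()
               for r in roles
               for res, act in role_perms[r]}

def flat_decision(user, resource, action):
    return (user, resource, action) in flat_policy

# Equivalence check over the whole (finite) request space.
requests = product(user_roles, ["record"], ["read", "write"])
assert all(rbac_decision(*q) == flat_decision(*q) for q in requests)
print("both representations give identical decisions")
```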
APA, Harvard, Vancouver, ISO, and other styles
15

Selek, I. (István). "Novel evolutionary methods in engineering optimization—towards robustness and efficiency." Doctoral thesis, University of Oulu, 2009. http://urn.fi/urn:isbn:9789514291579.

Full text
Abstract:
In industry there is a high demand for algorithms that can efficiently solve search problems. Evolutionary Computing (EC), a class of heuristics, has proven well suited to search problems, especially optimization tasks, owing to its flexibility, scalability and robustness. However, despite these advantages and its increasing popularity, there are numerous open questions in this research area, many of them related to the design and tuning of the algorithms. A neutral technique called Pseudo Redundancy, together with related concepts such as the Updated Objective Grid (UOG), is proposed to tackle this problem, making an evolutionary approach more suitable for 'real world' applications while increasing its robustness and efficiency. The proposed UOG technique achieves neutral search by objective function transformation(s), resulting in several advantageous features. (a) It simplifies the design of an evolutionary solver by giving population sizing principles and directions for choosing the right selection operator. (b) The technique of the updated objective grid is adaptive without introducing additional parameters; therefore no parameter tuning is required to adjust UOG to different environments, introducing robustness. (c) The algorithm of UOG is simple and computationally cheap. (d) It boosts the performance of an evolutionary algorithm on high-dimensional (constrained and unconstrained) problems. The theoretical and experimental results from artificial test problems included in this thesis clearly show the potential of the proposed technique. In order to demonstrate the power of the introduced methods under "real" circumstances, the author additionally designed EAs and performed experiments on two industrial optimization tasks; only one project is detailed in this thesis, while the other is referenced. As the main outcome of this thesis, the author provides an evolutionary method to compute (optimal) daily water pump schedules for the water distribution network of Sopron, Hungary. The algorithm is currently working in industry.
APA, Harvard, Vancouver, ISO, and other styles
16

Damouche, Nasrine. "Improving the Numerical Accuracy of Floating-Point Programs with Automatic Code Transformation Methods." Thesis, Perpignan, 2016. http://www.theses.fr/2016PERP0032/document.

Full text
Abstract:
Critical software based on floating-point arithmetic requires a rigorous verification and validation process to improve our confidence in its reliability and safety. Unfortunately, available techniques for this task often provide overestimates of the round-off errors; Ariane 5 and the Patriot missile are well-known examples of disasters caused by computational errors. In recent years, several techniques have been proposed for transforming arithmetic expressions in order to improve their numerical accuracy and, in this work, we go one step further by automatically transforming larger pieces of code containing assignments, control structures and functions. We define a set of transformation rules allowing the generation, under certain conditions and in polynomial time, of larger expressions by performing limited formal computations, possibly among several iterations of a loop. These larger expressions are better suited to improving, by re-parenthesizing, the numerical accuracy of the program results. We use abstract interpretation based static analysis techniques to over-approximate the round-off errors in programs and during the transformation of expressions. A tool has been implemented and experimental results are presented concerning classical numerical algorithms and algorithms for embedded systems.
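A hand-worked instance of the phenomenon the tool exploits, for readers unfamiliar with it: mathematically equal parenthesizations of a floating-point expression can round off differently, so re-parenthesizing can recover accuracy.

```python
# In IEEE-754 double precision, equal parenthesizations differ in accuracy.
big, tiny = 1.0e16, 1.0
left_to_right = (big + tiny) - big      # tiny is absorbed: result 0.0
reassociated  = big - big + tiny        # exact: result 1.0
print(left_to_right, reassociated)

# Same effect on summation order: adding the small terms first is more accurate.
terms = [1.0e16] + [1.0] * 1000
print(sum(terms) - 1.0e16)              # 0.0    (large term first)
print(sum(sorted(terms)) - 1.0e16)      # 1000.0 (small terms first)
```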
APA, Harvard, Vancouver, ISO, and other styles
17

Nieuwveldt, Fernando Damian. "A survey of computational methods for pricing Asian options." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/2118.

Full text
Abstract:
Thesis (MSc (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2009.
In this thesis, we investigate two numerical methods to price financial options. We look at two types of options, namely European options and Asian options. The numerical methods we use are the finite difference method and numerical inversion of the Laplace transform. We apply finite difference methods to partial differential equations with both uniform and non-uniform spatial grids. The Laplace inversion method we use is due to Talbot. It is based on the midpoint-type approximation of the Bromwich integral on a deformed contour. When applied to Asian options, we have the problem of computing the hypergeometric function of the first kind. We propose a new method for numerically calculating the hypergeometric function. This method too is based on using Talbot contours. Throughout the thesis, we use the Black-Scholes equation as our benchmark problem.
APA, Harvard, Vancouver, ISO, and other styles
18

Kelly, Edward Joseph. "Transformation in meaning-making : selected examples from Warren Buffett's life, a mixed methods study." Thesis, Lancaster University, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.553713.

Full text
Abstract:
The primary research question addressed in this study is: has Warren Buffett's 'meaning-making-in-action' changed over his life, as predicted by developmental theory? Meaning-making is defined within Constructive-developmental theory (Cook-Greuter, 1999, 2005; Kegan, 1980, 1994; Loevinger, 1976; McCauley et al., 2006; Torbert & Ass., 2004; Wilber, 2000) to mean the internal organising system individuals use to make sense of their experience. The question highlights the importance of vertical development, which for Jean Piaget (1954) "was not the gradual accumulation of new knowledge or experience but a process of moving through qualitatively distinct stages of growth, a process that transforms knowledge itself" (McCauley et al., 2006, p. 635). Applying a constructive-developmental framework, this study tracks the transformation in Buffett's meaning-making-in-action across thirty-two representative examples from his life that are then compared to one or more of seven developmental action-logics in developmental theory (Cook-Greuter, 2005; Fisher, Rooke & Torbert, 2003; Torbert & Ass., 2004). The study concludes that Buffett's meaning-making-in-action has transformed over his life and in a sequence predicted by the action-logics in developmental theory. This is reflected in a Spearman correlation coefficient of ρ = .93, which indicates a strong relationship between the predicted order of development and the actual order. While the principal contribution of this study is a methodological one within the field of developmental theory, the research also makes an original contribution to our understanding of the importance of Buffett's development to his effectiveness as a leader. By implication, developmental theory has important things to say about how others may develop their leadership as well. Next steps in this research include simplifying the application of the method.
APA, Harvard, Vancouver, ISO, and other styles
19

Haasdonk, Bernard [Verfasser]. "Transformation Knowledge in Pattern Analysis with Kernel Methods: Distance and Integration Kernels / Bernard Haasdonk." Aachen: Shaker, 2006. http://nbn-resolving.de/urn:nbn:de:bsz:25-opus-23769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Taib, Samsudin Hj. "Interpretation of the aeromagnetic anomalies of mainland Scotland using pseudogravimetric transformation and other methods." Thesis, Durham University, 1990. http://etheses.dur.ac.uk/6076/.

Full text
Abstract:
A procedure to upward continue magnetic anomalies observed on an irregular surface onto a horizontal plane has been developed and applied to the aeromagnetic map of Great Britain. Pseudogravimetric transformation was then carried out on this reduced anomaly and both data sets have been used for analysis and interpretation of several prominent anomalies in Scotland along the Great Glen fault and over the Midland Valley. A prominent linear positive magnetic anomaly occurring along the Great Glen fault has been modelled as due to a locally magnetized outward dipping body almost symmetrical about its apex beneath the fault line, together with a magnetized crustal slab to the northwest of the fault. The outward dipping body has its top lying within the upper crust, a magnetization of greater than about 1.0 A/m, a half-width of about 40 km at its base and a thickness of the order of 7-18 km. The origin of the outward dipping magnetized body may be explained by metamorphism produced by frictional heating resulting from the transcurrent fault movement. Alternatively, the metamorphism may be associated with some other fault-related process such as crustal fluid flow. Thermal modelling has been used to demonstrate this. The magnetization contrast across the fault may be the direct result of blocks of differing magnetization on opposite sides, juxtaposed as a result of transcurrent movement. The modelling along a profile over the Clyde Plateau (Midland Valley of Scotland) using a well-constrained lava body reveals the presence of a long wavelength anomaly component due to a deeper crustal source. The basement anomaly is conspicuous on the pseudogravimetric map but not on the aeromagnetic map. A near circular magnetic anomaly near Bathgate in the Midland Valley can be explained by an unexposed intrusive body superimposed on the deep crustal source as above.
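A generic sketch of plane-to-plane upward continuation via the wavenumber-domain filter exp(-|k| dz); the thesis's procedure additionally handles observations on an irregular surface, which this sketch does not.

```python
import numpy as np

def upward_continue(anomaly, dx, dz):
    """Upward-continue a gridded anomaly by dz using the standard
    wavenumber-domain filter exp(-|k| dz) between two level planes."""
    ny, nx = anomaly.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))        # radial wavenumber |k|
    return np.fft.ifft2(np.fft.fft2(anomaly) * np.exp(-k * dz)).real

# Smooth test field: continuation attenuates short wavelengths the most.
x = np.linspace(0, 100e3, 128)
field = np.exp(-((x[None, :] - 50e3)**2 + (x[:, None] - 50e3)**2) / (2 * (5e3)**2))
print(field.max(), upward_continue(field, dx=x[1] - x[0], dz=2e3).max())
```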
APA, Harvard, Vancouver, ISO, and other styles
21

Shah, Nehal Rajendra. "Towards Novel Methods of Mutagenesis for Histophilus somni." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/43708.

Full text
Abstract:
Histophilus somni is an etiologic agent of shipping fever pneumonia, myocarditis, and other systemic diseases in bovines, although nonpathogenic commensal strains also exist. Virulence factors that have been identified in H. somni include biofilm formation, lipooligosaccharide phase variation, immunoglobulin binding proteins, survival in phagocytic cells, and many others. To identify genes responsible for virulence, an efficient mutagenesis system is needed. Mutagenesis of H. somni using allelic exchange is difficult due to its tight restriction modification system. Mutagenesis by natural transformation in Haemophilus influenzae is well established and may be enhanced by the presence of uptake signal sequences (USS) within the genome. We hypothesized that natural transformation occurs in H. somni because its genome is over-represented with USS and contains all the necessary genes for competence, except that ComD and ComE are mutated. For natural transformation, H. somni was grown to exponential phase, and then transferred to a non-growth defined medium to induce competence. H. somni strain 2336 was successfully transformed with homologous linear DNA (lob2A) containing an antibiotic marker gene, but at low efficiency. Shuttle vector pNS3K was also naturally transformed into H. somni at low efficiency. To attempt to improve transformation efficiency, comD and comE from H. influenzae were cloned into shuttle vector pNS3K to generate the plasmid pSScomDE. Although introduction of pSScomDE into H. somni was expected to increase the number and breadth of mutants generated by natural transformation, multiple attempts to electroporate pSScomDE into H. somni were unsuccessful. A native plasmid (pHS649) from H. somni strain 649 may prove to be a more efficient shuttle vector. Due to inefficiency in generating mutants by allelic exchange, transposon (Tn) mutagenesis with EZ::Tn5™ Tnp Transposome™ (Epicentre) was used to generate a bank of mutants, but the mutation efficiency was low. Therefore the mariner Tn element is being tested as a more efficient method for random mutagenesis of H. somni. The transposase, which is required for excision of the Tn, was over-expressed in Escherichia coli, and then purified using amylose resin. H. somni was then naturally transformed after in-vitro transposition using pMarStrep, which contains the mariner Tn with StrepR antibiotic gene marker, and a series of transposition and ligation components. However, mariner Tn mutants were not generated. Nonetheless, natural transformation and/or mariner Tn mutagenesis may still prove to be efficient methods for mutagenesis of H. somni. Through the use of more effective mutagenesis systems, genes responsible for the expression of virulence factors can be identified, and improved vaccine candidates can be developed.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
22

Skalski, Jonathan Edward. "The Epistemic Qualities of Quantum Transformation." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/2258.

Full text
Abstract:
Growth and development are central constituents of the human experience. Although the American Psychological Association aims to understand change and behavior in ways that embrace all aspects of experience (APA, 2008), sudden, life-altering or quantum transformation has been disregarded throughout the history of psychology until recently (see Miller & C' de Baca, 1994, 2001). Quantum transformation is similar to self-surrender conversion (James, 1902), but different from peak experiences (Maslow, 1964) and near death experiences (Lorimer, 1990) because quantum transformation, by definition, involves lasting change. Quantum transformation contains epistemic qualities, which refer to the content and process of knowing (Miller & C' de Baca, 2001), but little is known about these qualities. The current study employed a qualitative method to better understand the epistemic qualities of quantum transformation. Fourteen participants were extensively interviewed about their experience. Analysis involved hermeneutic methods (Kvale, 1996) and phenomenological description (Giorgi & Giorgi, 2003). Quantum transformation is essentially a process of knowing that unfolded in the form of Disintegration, Insight, and Integration in the present study. First, Disintegration is presented by themes of Overwhelming stress, Relational struggle, Hopelessness, Holding-on, Control, Psychological turmoil, Self-discrepancy, and Guilt. Second, Insight is presented by the Content and Tacit knowing of the experience. Third, Integration is presented by Changes in values, Other-orientation, and A process of development. The results suggest that the disintegration and the suffering that characterizes the pre-transformation milieu inform how quantum transformation relates to lasting change. Therapists that automatically aim to alleviate moral-emotional sorrow or guilt should consider whether the emotional experience can bring about positive transformation. Overall, quantum transformation has potentially major implications for our understanding of personality change and moral development.
APA, Harvard, Vancouver, ISO, and other styles
23

Ditzel, Facci Paula. "Dancing Conflicts, Unfolding Peaces: Dance as method to elicit conflict transformation." Doctoral thesis, Universitat Jaume I, 2017. http://hdl.handle.net/10803/404493.

Full text
Abstract:
This research explores dance as a method to elicit conflict transformation and unfold peaces at the intrapersonal level. Peace is understood as presence, as a way of being in the world, and conflict as a natural feature of human relationships. This thesis investigates how to provide a frame which renders the embodied here-and-now moving experience meaningful, creating auspicious conditions for eliciting conflict transformation and unfolding peaces. Exploring elements that contribute to this process, it analyses interpretations of peaces and dance expressions. Furthermore, this thesis discusses the transrational peace philosophy and an approach to dance that acknowledges its potential for peace, and suggests twisting harmful tendencies with balance and awareness. It then explores elicitive conflict transformation and methods to facilitate it. Finally, this text presents a theoretical and practical approach to those elements through embodied movement, which informs the potentials and limitations of dance as a method to elicit conflict transformation.
APA, Harvard, Vancouver, ISO, and other styles
24

Nyberg, Peter. "Evaluation of two Methods for Identifiability Testing." Thesis, Linköping University, Department of Electrical Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-51293.

Full text
Abstract:

This thesis concerns the identifiability issue: which, if any, parameters can be deduced from the input and output behavior of a model? The two types of identifiability concepts, a priori and practical, are addressed and explained. Two methods for identifiability testing are evaluated, and the results show that the two methods work well if they are combined. The first method is for a priori identifiability analysis, and it can determine the a priori identifiability of a system in polynomial time. The result from the method is probabilistic, with a high probability of a correct answer. The other method takes a simulation approach to determine whether the model is practically identifiable. Non-identifiable parameters manifest themselves as a functional relationship between the parameters, and the method uses transformations of the parameter estimates to conclude whether the parameters are linked. The two methods are verified on models with known identifiability properties and then tested on some examples from systems biology. Although the output from one of the methods is cumbersome to interpret, the results show that the number of parameters that can be determined in practice (practical identifiability) is far fewer than those that can be determined in theory (a priori identifiability). The reason for this is the lack of quality of the measurements: noise and lack of excitation.
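A minimal sketch of the simulation approach, on a deliberately non-identifiable hypothetical model (not one from the thesis): in y = a*b*x only the product a*b is identifiable, so repeated fits from random starting points scatter the estimates along the hyperbola a*b = const, and a log transformation of the estimates exposes the link.

```python
import warnings
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
model = lambda x, a, b: a * b * x        # only the product a*b is identifiable
x = np.linspace(0.0, 1.0, 25)
y_true = model(x, 2.0, 3.0)

fits = []
with warnings.catch_warnings():
    warnings.simplefilter("ignore")      # rank-deficient Jacobian warnings
    for _ in range(200):
        y = y_true + rng.normal(0.0, 0.05, x.size)   # noisy replicate
        p0 = rng.uniform(0.5, 5.0, size=2)           # random restart
        p, _ = curve_fit(model, x, y, p0=p0, maxfev=5000)
        fits.append(p)

log_a, log_b = np.log(np.abs(np.array(fits))).T
print(np.corrcoef(log_a, log_b)[0, 1])   # close to -1: the parameters are linked
```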


APA, Harvard, Vancouver, ISO, and other styles
25

Carletti, Vincenzo. "Exact and inexact methods for graph Similarity in structural pattern recognition." Caen, 2016. http://www.theses.fr/2016CAEN2004.

Full text
Abstract:
Graphs are widely employed in many application fields, such as biology, chemistry, social networks and databases. Graphs allow a set of objects to be described together with their relationships. Analyzing these data often requires measuring the similarity between two graphs. Unfortunately, due to its combinatorial nature, this is an NP-complete problem, generally addressed using different kinds of heuristics. In this thesis we have explored two approaches to computing the similarity between graphs. The former is based on exact graph matching: we have designed VF3, an algorithm that searches for pattern structures within graphs. The second approach is an inexact graph matching method that computes an efficient approximation of the Graph Edit Distance (GED) by formulating it as a Quadratic Assignment Problem (QAP)
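VF3 is distributed separately, but it belongs to the same algorithm family as VF2, which ships with networkx; as a stand-in, the following sketch shows the kind of pattern search within a graph that these algorithms perform.

```python
import networkx as nx
from networkx.algorithms import isomorphism

target = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)])  # host graph
pattern = nx.Graph([("a", "b"), ("b", "c"), ("c", "a")])     # triangle motif

# VF2 state-space matcher: does the motif occur as a subgraph of the host?
gm = isomorphism.GraphMatcher(target, pattern)
print(gm.subgraph_is_isomorphic())             # True: the motif occurs
print(next(gm.subgraph_isomorphisms_iter()))   # one host -> pattern mapping
```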
APA, Harvard, Vancouver, ISO, and other styles
26

Msukwa, Chimwemwe A. P. S. "Traditional African conflict prevention and transformation methods : case studies of Sukwa, Ngoni, Chewa and Yao tribes in Malawi." University of the Western Cape, 2012. http://hdl.handle.net/11394/4646.

Full text
Abstract:
Philosophiae Doctor - PhD
This study sought to investigate whether there are common cultural elements for preventing and transforming violent conflict in selected patrilineal and matrilineal tribes in Malawi, as well as selected societies from other parts of Africa. The researcher argues that in both patrilineal and matrilineal tribes in Malawi, violent conflict prevention and transformation methods are inherently rooted in elaborate socio-political governance structures. This also applies to other societies in Africa, such as the pre-colonial traditional societies of Rwanda, the Pokot pastoral community in the North Rift of Kenya, the ubuntu societies in South Africa and the Acholi of Northern Uganda. The basic framework for these structures comprises individuals (men, women and older children) as the primary building blocks, a family component comprising the nuclear and extended families as the secondary building block, and a traditional leadership component. Within these socio-political governance structures, individuals coexist and are inextricably bound in multi-layered social relationships and networks with others. In these governance structures, a certain level of conflict between individuals or groups is considered normal and desirable, as it brings about vital progressive changes and creates the necessary diversity which makes the community interesting. However, violent conflicts are regarded as undesirable and require intervention. Consequently, the multi-layered social networks have several intrinsic features which enable the communities to prevent the occurrence of violent conflicts or transform them when they occur, in order to maintain social harmony. The findings show, first, that each level of the social networks has appropriate mechanisms for dissipating violent conflicts which go beyond tolerable levels. Secondly, individuals have an obligation to intervene in violent conflicts as part of the social and moral roles, duties and commitments which they have to fulfil. Thirdly, the networks have forums in which selected competent elders from the society facilitate open discussions of violent conflicts and decisions are made by consensus involving as many men and women as possible. In these forums, each individual is valued and dignified. Fourthly, there are deliberate efforts to advance transparency and accountability in the forums where violent conflicts are discussed. However, in general terms, women occupy a subordinate status in both leadership and decision-making processes, though they actively participate in violent conflict interventions and some of them hold leadership positions. In addition, the findings show that the tribes researched have an elaborate process for transforming violent conflicts. This process includes creating an environment conducive to discussing violent conflicts, listening to each of the disputants, establishing the truth, exhausting all issues, reconciling the disputants and, in case one disputant is not satisfied with the outcomes of the discussions, referring the violent conflict to another forum. Furthermore, individuals in both patrilineal and matrilineal tribes are governed by moral values including respect, relations, relationships, interdependence, unity, kindness, friendliness, sharing, love, transparency, tolerance, self-restraint, humility, trustworthiness and obedience.
These moral values enhance self-restraint, prevent aggressive behaviour, as well as promote and enhance good relationships between individuals in the family and the society as a whole. The researcher argues that the positive cultural factors for prevention and transformation of violent conflict, outlined above, which are inherent in the traditional African socio-political governance system should be deliberately promoted for incorporation into the modern state socio-political governance systems through peace-building and development initiatives as well as democratisation processes. This could be one of the interventions for dealing with violent conflict devastating Africa today.
APA, Harvard, Vancouver, ISO, and other styles
27

Howe, Bill. "Gridfields: Model-Driven Data Transformation in the Physical Sciences." PDXScholar, 2006. https://pdxscholar.library.pdx.edu/open_access_etds/2676.

Full text
Abstract:
Scientists' ability to generate and store simulation results is outpacing their ability to analyze them via ad hoc programs. We observe that these programs exhibit an algebraic structure that can be used to facilitate reasoning and improve performance. In this dissertation, we present a formal data model that exposes this algebraic structure, then implement the model, evaluate it, and use it to express, optimize, and reason about data transformations in a variety of scientific domains. Simulation results are defined over a logical grid structure that allows a continuous domain to be represented discretely in the computer. Existing approaches for manipulating these gridded datasets are incomplete. The performance of SQL queries that manipulate large numeric datasets is not competitive with that of specialized tools, and the up-front effort required to deploy a relational database makes them unpopular for dynamic scientific applications. Tools for processing multidimensional arrays can only capture regular, rectilinear grids. Visualization libraries accommodate arbitrary grids, but no algebra has been developed to simplify their use and afford optimization. Further, these libraries are data dependent—physical changes to data characteristics break user programs. We adopt the grid as a first-class citizen, separating topology from geometry and separating structure from data. Our model is agnostic with respect to dimension, uniformly capturing, for example, particle trajectories (1-D), sea-surface temperatures (2-D), and blood flow in the heart (3-D). Equipped with data, a grid becomes a gridfield. We provide operators for constructing, transforming, and aggregating gridfields that admit algebraic laws useful for optimization. We implement the model by analyzing several candidate data structures and incorporating their best features. We then show how to deploy gridfields in practice by injecting the model as middleware between heterogeneous, ad hoc file formats and a popular visualization library. In this dissertation, we define, develop, implement, evaluate and deploy a model of gridded datasets that accommodates a variety of complex grid structures and a variety of complex data products. We evaluate the applicability and performance of the model using datasets from oceanography, seismology, and medicine and conclude that our model-driven approach offers significant advantages over the status quo.
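A minimal sketch of the gridfield idea as stated in the abstract, with hypothetical names and a deliberately simplified structure (cells plus per-cell data, and restrict/aggregate operators); it is not Howe's actual model.

```python
from dataclasses import dataclass

@dataclass
class GridField:
    cells: list          # grid structure: cell ids only, no geometry
    data: dict           # cell id -> attribute value

    def restrict(self, predicate):
        """Keep only the cells whose data satisfy the predicate."""
        kept = [c for c in self.cells if predicate(self.data[c])]
        return GridField(kept, {c: self.data[c] for c in kept})

    def aggregate(self, fn):
        return fn([self.data[c] for c in self.cells])

# Sea-surface temperatures bound to a four-cell grid (toy data).
sst = GridField(cells=[0, 1, 2, 3], data={0: 14.2, 1: 15.1, 2: 16.0, 3: 13.7})
warm = sst.restrict(lambda t: t >= 15.0)
print(warm.cells, warm.aggregate(max))   # [1, 2] 16.0
```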
APA, Harvard, Vancouver, ISO, and other styles
28

Huang, Chien-Chung. "Discrete event system modeling using SysML and model transformation." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/45830.

Full text
Abstract:
The objective of this dissertation is to introduce a unified framework for modeling and simulating discrete event logistics systems (DELS) by using a formal language, the System Modeling Language (SysML), for conceptual modeling and a corresponding methodology for translating the conceptual model into a simulation model. There are three parts in this research: plant modeling, control modeling, and simulation generation. Part 1: Plant Modeling of Discrete Event Logistics Systems. Contemporary DELS are complex and challenging to design. One challenge is to describe the system in a formal language. We propose a unified framework for modeling DELS using SysML. A SysML subset for plant modeling is identified in this research. We show that any system can be described by using the proposed subset if the system can be modeled using finite state machines or finite state automata. Furthermore, the system modeled by the proposed subset can avoid the state explosion problem, i.e., the number of the system states grows exponentially when the number of the components increases. We also compare this approach to other existing modeling languages. Part 2: Control Modeling of Discrete Event Logistics Systems. The development of contemporary manufacturing control systems is an extremely complex process. One approach for modeling control systems uses activity diagrams from SysML, providing a standard object-oriented graphical notation and enhancing reusability. However, SysML activity diagrams do not directly support the kind of analysis needed to verify the control model, such as might be available with a Petri net (PN) model. We show that a control model represented by UML/SysML activity diagrams can be transformed into an equivalent PN, so the analysis capability of PN can be used and the results applied back in the activity diagram model. We define a formal mathematical notation for activity diagrams, show the mapping rules between PN and activity diagrams, and propose a formal transformation algorithm. Part 3: Discrete Event Simulation Generation. The challenge of cost-effectively creating discrete event simulation models is well-known. One approach to alleviate this issue is to describe a system using a descriptive modeling language and then transform the system model to a simulation model. Some researchers have tried to realize this idea using a transformation script. However, most of the transformation approaches depend on a domain specific language, so extending the domain specific language may require modifying the transformation script. We propose a transformation approach from SysML to a simulation language. We show that a transformation script can be independent of the associated domain specific language if the domain specific language is implemented as domain libraries using a proposed SysML subset. In this case, both the domain library and the system model can be transformed to a target simulation language. We demonstrate a proof-of-concept example using AnyLogic as the target simulation language.
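A toy sketch of one common activity-diagram-to-Petri-net mapping scheme (action nodes become transitions, control-flow edges become places); the thesis defines a full formal transformation algorithm, which this does not reproduce.

```python
# Hypothetical activity diagram: actions and control-flow edges.
actions = ["receive_order", "check_stock", "ship_order"]
flows = [("receive_order", "check_stock"), ("check_stock", "ship_order")]

# Map: each action -> a transition; each control-flow edge -> a place
# connected to the two transitions it links.
transitions = list(actions)
places, arcs = [], []
for i, (src, dst) in enumerate(flows):
    p = f"p{i}"                 # one place per control-flow edge
    places.append(p)
    arcs += [(src, p), (p, dst)]

print("transitions:", transitions)
print("places:", places)
print("arcs:", arcs)
```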
APA, Harvard, Vancouver, ISO, and other styles
29

Moog, Mathieu. "Carbon dioxide at extreme conditions : liquid(s), crystals, glasses and their transformation from ab initio topological methods." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS263.

Full text
Abstract:
Although carbon dioxide is well known for its impact on the atmosphere, it is also an important constituent of the Earth's mantle. As such, it is implicated in a number of geological events, notably earthquakes and volcanism. In this thesis we aim at understanding the behavior of carbon dioxide in the lower mantle, where it is likely formed through reactions between carbonates and silicon dioxide. Indeed, the properties of carbon dioxide, most notably its polymerization mechanisms, may affect the reactivity of the mantle and therefore the chemical processes occurring within it. In this work we use state-of-the-art topological descriptors and ab initio simulation methods to study the polymerization mechanisms that occur under the extreme conditions of the Earth's lower mantle. We notably show the existence of four distinct fluids that coexist at those conditions: the standard molecular liquid, with weak long-range interactions between carbon dioxide molecules; a reactive molecular fluid in which the regular formation of dimers allows the exchange of oxygen between carbon dioxide molecules; a very reactive polymeric fluid forming a complex network in perpetual evolution; and a very sluggish liquid, similar to an amorphous solid. All of these fluids occur within the experimental conditions of the lower mantle and may therefore have implications for its reactivity and transport properties. The reactive molecular liquid, for example, implies that carbon dioxide takes an important part in the chemical reactions of the mantle.
APA, Harvard, Vancouver, ISO, and other styles
30

Guo, Ronggang. "Systematical analysis of the transformation procedures in Baden-Württemberg with Least Squares and Total Least Squares methods." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2007. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-33293.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Paulauskaitė, Agnė. "Z formaliuju metodu panaudojimas informaciniu sistemu projektavime." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2005. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2005~D_20050520_194937-24216.

Full text
Abstract:
Summary: Informal methods remain the most common approach to information systems design. They do not allow formulated tasks to be understood unambiguously, and the resulting specifications are not always complete, so the information system may fail to meet user needs. With informal methods, transforming a specification into software code is not always possible. In real-time information systems the problem domain varies over time, and the requirements for the system change with it; with informal methods, this usually forces the software to be rewritten. With formal methods no rewrite is needed: it suffices to transform the organization's business rules, specified in Z, into software code. This paper presents research results on using the Z specification method for formal requirements specification in information systems design. Z specification validation was carried out with Z/EVES, an interactive system for composing, checking, and analyzing Z specifications, after first reviewing the available Z specification validation tools. The Z specification language was compared with the object-oriented language Object-Z to identify the advantages and disadvantages of the two formal specification languages. The transformation of Z specifications into Object-Z, which facilitates extending an object-oriented specification toward object-oriented programming languages, was also discussed. In this paper the transformation methodology from Object-Z... [to full text]
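For readers unfamiliar with Z, a generic example of the kind of state and operation schema such a specification contains, typeset in the LaTeX style of the fuzz/zed packages (the Counter system is invented here for illustration, not taken from the thesis):

\begin{schema}{Counter}
  value : \nat
\where
  value \leq 100
\end{schema}

\begin{schema}{Increment}
  \Delta Counter
\where
  value < 100 \\
  value' = value + 1
\end{schema}

A tool such as Z/EVES can type-check these schemas and help prove, for example, that Increment preserves the state invariant.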
APA, Harvard, Vancouver, ISO, and other styles
32

Fletcher, Kimberley Liane. "The Collision of Political and Legal Time: Foreign Affairs and the Court's Transformation of Executive Authority." Thesis, State University of New York at Albany, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3620215.

Full text
Abstract:

A dynamic institutional relationship exists between the United States executive branch and the United States Supreme Court. This dissertation examines how the Court affects constitutional and political development by taking a leading role in interpreting presidential decision-making in the area of foreign affairs since 1936. Examining key cases and controversies in foreign policymaking, primarily in the twentieth and twenty-first centuries, this dissertation highlights the patterns of intercurrences and the mutual construction process that takes place at the juncture of legal and political time. In so doing, it is more than evident that the Court not only sanctions the claims made by executives of unilateral decision-making, but also that the Court takes a leading role in (re)defining the very scope and breadth of executive foreign policymaking.

APA, Harvard, Vancouver, ISO, and other styles
33

An, Zhong. "Interpretation of X-ray and microwave images : some transform methods and phase unwrapping." Thesis, King's College London (University of London), 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Navarro, Quiles Ana. "COMPUTATIONAL METHODS FOR RANDOM DIFFERENTIAL EQUATIONS: THEORY AND APPLICATIONS." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/98703.

Full text
Abstract:
Ever since the early contributions by Isaac Newton, Gottfried Wilhelm Leibniz, and Jacob and Johann Bernoulli in the XVII century, difference and differential equations have demonstrated their capability to successfully model complex problems of interest in Engineering, Physics, Chemistry, Epidemiology, Economics, etc. From a practical standpoint, however, applying difference or differential equations requires setting their inputs (coefficients, source term, initial and boundary conditions) from sampled data, which contain uncertainty stemming from measurement errors. In addition, random external factors can affect the system under study. It is therefore more advisable to consider the input data as random variables or stochastic processes rather than as deterministic constants or functions, respectively. Under this consideration, random difference and differential equations appear. This thesis solves, from a probabilistic point of view, different types of random difference and differential equations, applying fundamentally the Random Variable Transformation method. This technique is a useful tool for obtaining the probability density function of a random vector that results from mapping another random vector whose probability density function is known. The goal of this dissertation is the computation of the first probability density function of the solution stochastic process in different problems based on random difference or differential equations. The interest in determining the first probability density function is justified because this deterministic function characterizes the one-dimensional probabilistic information, such as the mean, variance, asymmetry, kurtosis, etc., of the solution of the corresponding random difference or differential equation. It also allows the probability of a certain event of interest involving the solution to be determined. In addition, in some cases the theoretical study is completed by showing its application to modelling problems with real data, where the problem of estimating parametric statistical distributions of the inputs is addressed in the context of random difference and differential equations.
Navarro Quiles, A. (2018). COMPUTATIONAL METHODS FOR RANDOM DIFFERENTIAL EQUATIONS: THEORY AND APPLICATIONS [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/98703
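For reference, the classical result behind the Random Variable Transformation method named in the abstract (a standard theorem quoted here as a reader aid, not text from the thesis): if \mathbf{Y} = g(\mathbf{X}) for an invertible, continuously differentiable mapping g, then

f_{\mathbf{Y}}(\mathbf{y}) = f_{\mathbf{X}}\bigl(g^{-1}(\mathbf{y})\bigr)\,\bigl|\det J_{g^{-1}}(\mathbf{y})\bigr|,

where J_{g^{-1}} is the Jacobian matrix of the inverse mapping. Applying this with g given by the solution map of the difference or differential equation yields the first probability density function of the solution.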
APA, Harvard, Vancouver, ISO, and other styles
35

Michaud, Wild Nickie. "Political criticism and the power of satire| The transformation of "late-night" comedy on television in the United States, 1980-2008." Thesis, State University of New York at Albany, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3671783.

Full text
Abstract:

How has political comedy on television in the United States changed over time? Earlier examples of political comedy on television were shows like Saturday Night Live and various late night talk shows, which focused primarily on political or personal scandals or personal characteristics, rather than policies or substantive issues. In other arenas of television and the public sphere in general, there was serious criticism of scandals, but not in political comedy. Shows that attempted to criticize politicians or serious public issues using satire, irony, or invective such as The Smothers Brothers Comedy Hour, were routinely censored by network executives. With the advent of cable, and the failures of traditional mainstream journalism after 9/11, a change occurred. The Daily Show with Jon Stewart almost immediately adopted a critical stance on the Bush administration that was widely discussed in "serious" public sphere outlets such as CNN, the New York Times and the Washington Post. This form of "critical comedy" has proved popular. This project examines commentary about such programs in the journalistic sphere from each presidential election cycle from 1980-2008. This includes data from newspapers as well as television news sources. Additionally, I conduct content analysis of sets of Saturday Night Live, The Colbert Report, and The Daily Show from each time period, if the show was being produced. I show that political comedy is increasingly influential in public sphere discussions of presidential politics.

APA, Harvard, Vancouver, ISO, and other styles
36

Dietrich, Jan Philipp. "Phase Space Reconstruction using the frequency domain : a generalization of actual methods." Master's thesis, Universität Potsdam, 2008. http://opus.kobv.de/ubp/volltexte/2011/5073/.

Full text
Abstract:
Phase space reconstruction is a method for reconstructing the phase space of a system from a single one-dimensional time series. It can be used to calculate Lyapunov exponents and detect chaos, it helps in understanding complex dynamics and their behavior, and it can reproduce quantities that were not measured. Several different methods yield correct reconstructions, such as time delay, Hilbert transformation, differentiation, and integration. Time delay is the most widely used, but each method has particular properties that make it the best choice in certain situations. Comparing these methods raises two questions: Why can such different-looking methods serve the same purpose? Is there a connection between them? The answer lies in the frequency domain: after a Fourier transformation, all these methods take on a similar shape. Every reconstruction method presented can be described as a multiplication in the frequency domain by a frequency-dependent reconstruction function, a structure also known as a filter. From this point of view, every reconstructed dimension is a filtered version of the measured time series: it contains the original data but with a shifted focus, amplifying some parts and attenuating others. Furthermore, I show that not every function can be used for reconstruction; the thesis identifies three characteristics that a reconstruction function must satisfy. Within these restrictions a whole family of new reconstruction functions becomes available, making it possible, for example, to reduce noise within the reconstruction process itself, or to retain the advantages of known reconstruction methods while suppressing their unwanted characteristics.
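To make the filter view concrete, a minimal Python sketch (an illustration only, not code from the thesis; the Fourier sign convention is assumed): the time-delay method becomes multiplication in the frequency domain by the pure phase factor exp(-2*pi*i*f*tau).

import numpy as np

def reconstruct_dimension(x, recon_fn):
    # Multiply the spectrum by a frequency-dependent reconstruction function,
    # i.e. apply a linear filter, then return to the time domain.
    X = np.fft.fft(x)
    f = np.fft.fftfreq(len(x))  # frequencies in cycles per sample
    return np.real(np.fft.ifft(recon_fn(f) * X))

x = np.sin(0.2 * np.arange(1024)) + 0.5 * np.sin(0.05 * np.arange(1024))
tau = 10  # delay in samples
y = reconstruct_dimension(x, lambda f: np.exp(-2j * np.pi * f * tau))

# The filtered series equals the circularly delayed one, confirming that
# time-delay reconstruction is one particular choice of reconstruction filter.
assert np.allclose(y, np.roll(x, tau))

Other reconstruction functions drop in the same way, e.g. recon_fn = lambda f: 2j * np.pi * f for differentiation.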
APA, Harvard, Vancouver, ISO, and other styles
37

Rosenbaum, Christopher Michael. "AN OBSERVATIONAL STUDY OF THE METHODS AND PROGRESS IN ENTERPRISE LEAN TRANSFORMATION AT A LEARNING HEALTH CARE ORGANIZATION." UKnowledge, 2013. http://uknowledge.uky.edu/ms_etds/5.

Full text
Abstract:
The health care industry in the United States is increasingly pressured to improve safety and quality performance and increase revenue. In response, many health care institutions are redesigning their processes and practices in an effort to decrease costs and provide safer, higher quality, and more efficient care. The purpose of this paper is to document the Lean implementation strategy, and the progress made in implementation, at a large teaching health care organization undergoing Lean transformation, in order to understand enterprise transformation strategies and the impact of leadership involvement on culture development and Lean implementation. Through direct observation of, and involvement in, transformation activities, the methodology for Lean transformation and the progress in implementation were documented and analyzed. The organization employed an outside consultant to assist with transformation activities and took a three-pronged approach to implementation: model area development, team member problem-solving training, and management-led problem-solving activities. It was found that leadership involvement was lacking, especially at the highest levels, and the organization struggled to build the culture necessary to support transformation and to develop an operational model area, though successes were realized in efforts to train employees in Toyota's 8-Step Problem Solving method and in management-led problem-solving activities.
APA, Harvard, Vancouver, ISO, and other styles
38

Holefors, Anna. "Genetic transformation of the apple rootstock M26 with genes influencing growth properties /." Alnarp : Swedish Univ. of Agricultural Sciences (Sveriges lantbruksuniv.), 1999. http://epsilon.slu.se/avh/1999/91-576-5477-8.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Degrou, Antoine Edouard. "Etude de l’impact des procédés de transformation sur la diffusion des caroténoïdes : cas du lycopène de la tomate." Thesis, Avignon, 2013. http://www.theses.fr/2013AVIG0655/document.

Full text
Abstract:
Carotenoids are natural fat-soluble pigments synthesized by plants, found in relatively high amounts in numerous fruits and vegetables, and visually characterized by colors ranging from yellow to red. These compounds were identified as beneficial to health by epidemiological studies. In order to play a role in the body, carotenoids have to be absorbed: they must be released from the food matrix to pass into the lipid phase of the bolus. However, plant carotenoids are known to have a low bioavailability, which increases after processing. The objective of this work is to understand the effect of manufacturing processes on bioaccessibility by developing a simple tool allowing manufacturers to evaluate it easily. Lycopene from tomato was selected as the molecule of interest. We therefore designed a model to evaluate the parameters that can modify carotenoid diffusion. Using this model, even with a large excess of oil, only 31%±1% of lycopene could be extracted from tomato juice to the oil phase. At low oil/tomato ratios (between 0.11 and 1), extraction of lycopene was limited by saturation of the oil phase. Lycopene diffusion did not vary significantly with pH, but it doubled when the temperature rose from 10 °C to 37 °C. From these results, the partition factors of lycopene in oil and its diffusivity were calculated. Secondly, contrasting tomato samples were prepared using the two industrial methods Hot Break (HB) and Cold Break (CB) in order to verify the effect of processing methods on the diffusion properties of lycopene. Finally, the diffusion test was applied to different matrices, both commercial products and fresh products processed in the laboratory, to verify its ability to classify them. This work gave stable and repeatable results and may be used for a reliable and quick evaluation of the impact of a process on the food matrix, which could enhance carotenoid bioavailability. It is also a powerful tool to study the physico-chemical parameters that may affect this bioaccessibility. The results obtained may be used to develop a new model of digestion that takes into account the various parameters highlighted during this study.
APA, Harvard, Vancouver, ISO, and other styles
40

Brittle, Seth William. "Bioavailability and Transformation of Silver Nanoparticles in the Freshwater Environment." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1484594585990252.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Culver, K. C. "Critical being for pedagogy and social transformation: radically reimagining critical thinking in higher education." Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/6930.

Full text
Abstract:
This dissertation explores the potential for higher education to promote the development of critical being among diverse students, including three studies that employ critical quantitative approaches. The first chapter proposes critical being as an alternative to critical thinking that better reflects the purposes of higher education for the public good. In Chapter Two, I create a survey-based instrument measuring critical being, including three factors that are theoretically grounded in the work of Barnett (1997) and Davies (2015). Chapter Three examines the relationship between specific instructional practices associated with academic challenge and four-year growth in critical being among three racial and/or ethnic groups traditionally underrepresented in higher education: Black and African American students, Asian and Pacific Islander students, and Hispanic, Latinx and Chicano students. Chapter Four focuses on college instructors, exploring the relationship of individual, academic, and organizational factors with instructors’ emphasis of critical being in the classroom and their beliefs about students’ abilities and efforts. Finally, Chapter Five returns to the necessity for higher education to center critical being in order to equip students to be well-informed agents of social change. By bringing together the results of the three studies, this chapter also considers the implications of higher education for critical being, offers self-reflection on the implementation of critical quantitative approaches, and looks forward in making recommendations for future research.
APA, Harvard, Vancouver, ISO, and other styles
42

Cannon, Patrick Owen. "Communication for Planetary Transformation and the Drag of Public Conversations: The Case of Landmark Education Corporation." [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0002150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Christensen, Laurene L. "Writing in the Contact Zone: Three Portraits of Reflexivity and Transformation." PDXScholar, 2002. https://pdxscholar.library.pdx.edu/open_access_etds/1886.

Full text
Abstract:
Culture is at the core of language teaching. Because classrooms are contact zones (Pratt 1991), teachers must have a well-developed sense of their own intercultural competence so that they may better facilitate the cross-cultural discovery inherent in language teaching. Teacher preparation programs need to provide opportunities for new teachers to increase their intercultural awareness. The purpose of this research was to qualitatively understand the experiences of pre-service teachers in a required culture-learning class at a large urban university. Specifically, the focus of this study was the completion of a mini-ethnography project designed to give the students a cross-cultural exchange. Since such contact zones can be the site of reflexivity and transformation, this study sought to understand the contexts in which reflexivity and transformation might occur, as well as how these changes might influence a person's intercultural competence. This research used student writing as a primary source for illustrating change. Writing samples from all course assignments were collected from the class. Intercultural Development Inventory (IDI) Profiles were collected from three individuals who also agreed to extensive interviews. This data was used to create case study portraits of the class as well as the three individuals, illustrating a variety of experiences with the ethnography project. Change in intercultural competence was measured according to the Developmental Model of Intercultural Sensitivity (Bennett 1993) and the IDI. Each person had a markedly different experience with the project, and each person experienced some kind of intercultural change. Overall, the results suggest that ethnography is a useful classroom tool. When used at an appropriate stage of a student's intercultural development, reflexivity and perspective transformation can occur, thus leading to intercultural competence.
APA, Harvard, Vancouver, ISO, and other styles
44

Pippig, Michael. "Massively Parallel, Fast Fourier Transforms and Particle-Mesh Methods: Massiv parallele schnelle Fourier-Transformationen und Teilchen-Gitter-Methoden." Doctoral thesis, Universitätsverlag der Technischen Universität Chemnitz, 2015. https://monarch.qucosa.de/id/qucosa%3A20398.

Full text
Abstract:
The present thesis provides a modularized view of the structure of fast numerical methods for computing Coulomb interactions between charged particles in three-dimensional space. The common structure is given in terms of three self-contained algorithmic frameworks built on top of each other, namely the fast Fourier transform (FFT), the nonequispaced fast Fourier transform (NFFT), and NFFT-based particle-mesh methods (P²NFFT). For each of these frameworks, algorithmic enhancements and parallel implementations are presented, with special emphasis on scalability up to hundreds of thousands of parallel processes. In the context of the FFT, massively parallel algorithms are composed from hardware-adaptive low-level modules provided by the FFTW software library. The new algorithmic NFFT concepts include pruned NFFT, interlacing, analytic differentiation, and deconvolution in Fourier space optimized with respect to a mean square aliasing error. Enabled by these generalized concepts, it is shown that the NFFT provides unified access to particle-mesh methods. In particular, mixed-periodic boundary conditions are handled in a consistent way, and interlacing can be incorporated more efficiently. Heuristic approaches for parameter tuning are presented on the basis of thorough error estimates.
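As an illustration of the central object here, a naive Python reference implementation of the nonequispaced discrete Fourier transform that the NFFT evaluates fast (a hedged sketch, unrelated to the thesis's parallel code; sign and frequency-range conventions vary between libraries):

import numpy as np

def ndft(fhat, x):
    # Evaluate the trigonometric polynomial with Fourier coefficients fhat
    # at nonequispaced nodes x in [-1/2, 1/2); O(N*M) reference version.
    N = len(fhat)
    k = np.arange(-N // 2, N // 2)  # frequency indices
    return np.exp(2j * np.pi * np.outer(x, k)) @ fhat

rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, size=8)  # e.g. irregular particle positions
fhat = rng.standard_normal(16) + 1j * rng.standard_normal(16)
f = ndft(fhat, x)  # an NFFT computes this in roughly O(N log N + M) time

Particle-mesh methods such as P²NFFT use this transform pair to move between irregular particle positions and a regular grid on which the ordinary FFT applies.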
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Zhe. "Fate of Pharmaceuticals and Their Transformation Products in Rivers : An integration of target analysis and screening methods to study attenuation processes." Doctoral thesis, Stockholms universitet, Institutionen för miljövetenskap och analytisk kemi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-123752.

Full text
Abstract:
Pharmaceuticals are environmental contaminants causing steadily increasing concern due to their high usage, ubiquitous distribution in the aquatic environment, and potential to exert adverse effects on the ecosystems. After being discharged from wastewater treatment plants (WWTPs), pharmaceuticals can undergo transformation processes in surface waters, of which microbial degradation in river sediments is considered highly significant. In spite of a substantial number of studies on the occurrence of pharmaceuticals in aquatic systems, a comprehensive understanding of their environmental fate is still limited. First of all, very few consistent datasets from lab-based experiments to field studies exist to allow for a straightforward comparison of observations. Secondly, data on the identity and occurrence of transformation products (TPs) is insufficient and the relation of the behavior of TPs to that of their parent compounds (PCs) is poorly understood. In this thesis, these knowledge gaps were addressed by integrating the TP identification using suspect/non-target screening approaches and PC/TP fate determination. The overarching objective was to improve the understanding of the fate of pharmaceuticals in rivers, with a specific focus on water-sediment interactions, and formation and behavior of TPs. In paper I, 11 pharmaceutical TPs were identified in water-sediment incubation experiments using non-target screening. Bench-scale flume experiments were conducted in paper II to simultaneously investigate the behavior of PCs and TPs in both water and sediment compartments under more complex and realistic hydraulic conditions. The results illustrate that water-sediment interactions play a significant role for efficient attenuation of PCs, and demonstrate that TPs are formed in sediment and released back to surface water. In paper III the environmental behavior of PCs along stretches of four wastewater-impacted rivers was related to that of their TPs. The attenuation of PCs is highly compound and site specific. The highest attenuation rates of the PCs were observed in the river with the most efficient river water-pore water exchange. This research also indicates that WWTPs can be a major source of TPs to the receiving waters. In paper IV, suspect screening with a case-control concept was applied on water samples collected at both ends of the river stretches, which led to the identification of several key TPs formed along the stretches. The process-oriented strategies applied in this thesis provide a basis for prioritizing and identifying the critical PCs and TPs with respect to environmental relevance in future fate studies.

At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 3: Submitted. Paper 4: Manuscript.

APA, Harvard, Vancouver, ISO, and other styles
46

Hieber, Nathaniel Paul. "Changes on the Horizon: The Evolution of Transportation Methods and Infrastructure in the American Southwest, 1870-1920." Miami University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=miami1619098201581259.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Liang, Yental C. T. "The effects of economic transformation upon selected high school vocational education programs in Southern California." CSUSB ScholarWorks, 1996. https://scholarworks.lib.csusb.edu/etd-project/1165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Bahbah, Chahrazade. "Advanced numerical methods for the simulation of the industrial quenching process." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLM012.

Full text
Abstract:
Quenching is a heat treatment method in which a hot metal part is cooled down rapidly with the help of a quenchant. The purpose of the process is to give the metal a certain microstructure in order to achieve the required mechanical performance. The process has direct impacts on changing mechanical properties, controlling microstructure, and releasing residual stresses. Good control of quenching is essential for correctly controlling the phase changes that take place within the alloy and for obtaining the microstructure exhibiting the desired thermomechanical properties. This PhD was carried out in collaboration with the company Linamar Montupet, which specializes in the manufacture of complex cast aluminium components for the automotive industry and is interested in the quenching of metallic parts in liquid quenchants that can vaporize. Vaporization is generally the leading phenomenon that drives the system; indeed, the cooling of the part is strongly conditioned by the behavior of the surrounding fluid that extracts the heat. Thus, the objective of this thesis is to set up a numerical framework able to simulate the quenching process at an industrial scale. Different aspects are studied: (i) analyzing and simulating liquid-vapor-solid interactions with phase change, and (ii) simulating fluid-solid interactions to predict the thermomechanical behavior of the solid. The results of these numerical developments are validated against experiments proposed in agreement with the industrial partner.
APA, Harvard, Vancouver, ISO, and other styles
49

Broggi, Francesca. "In vitro methods to study the cytotoxicity, cell transformation capacity and genotoxicity of nanoparticles : application to cobalt ferrite and silver nanoparticles." Thesis, Heriot-Watt University, 2012. http://hdl.handle.net/10399/2618.

Full text
Abstract:
The objective of this thesis was to investigate the in vitro cytotoxic, genotoxic and transforming effects induced by nanoparticles (NPs) of industrial interest on a range of cell cultures. The cytotoxicity of two sizes of CoFe2O4 NPs, paramagnetic particles of interest in different biomedical applications, was investigated using the Neutral Red uptake (NR) and Colony Forming Efficiency (CFE) assays using six mammalian cell lines at concentrations between 10 and 120 μM. More specifically, cytotoxicity was evaluated after 72-hour treatment of five cell lines (A549, CaCo2, HaCaT, HepG2, MDCK) with 10, 20, 40, 60, 80, 100 and 120 μM concentrations of NPs using NR, and 10, 20, 30, 40, 50, 60, 80, 100 and 120 μM concentrations of NPs using CFE. In parallel with these tests, cytotoxicity was also evaluated in mouse Balb3T3 fibroblasts using (i) NR after 72-hour treatment with 10, 20, 40, 60, 80, 100 and 120 μM concentrations of NPs, and (ii) CFE after both 72-hour treatment with 1, 5, 10, 20, 30, 40, 50, 60, 80, 100 and 120 μM concentrations of NPs, and 24-hour treatment with 1, 10, 60 and 120 μM concentrations of NPs. The cytotoxic effect exhibited a dose-effect relationship for Balb3T3 cells as assessed using the CFE assay. The testing of a more extensive concentration range of NPs in Balb3T3 cells (i.e., 1 and 5 μM in addition to the concentrations tested in the other five cell lines) over a 72-hour exposure time using CFE, together with the additional test using a 24-hour exposure time allowed appropriate concentration ranges to be determined for use in subsequent experiments using the Cell Transformation (CTA) and Cytokinesis-Block Micronucleus (CBMN) assays. The cell transformation capacity and genotoxicity of CoFe2O4 NPs were investigated using the Balb3T3 model, and assessed using the CTA (specifically at concentrations of 1, 5, 20 and 60 μM for 72 hours of treatment) and CBMN (specifically at concentrations of 1, 10 and 60 μM for 24 hours of treatment). The CoFe2O4 NPs induced neither effect at the doses and time points investigated. Four sizes of Ag NPs, chosen for their antimicrobial properties, were assessed for cytotoxicity (using CFE at concentrations of 0.1, 0.5, 1, 5 and 10 μM for 24 hours of treatment and of 0.01, 0.1, 0.5, 1, 2.5, 5 and 10 μM for 72 hours of treatment), cell transformation capacity (using CTA at concentrations of 0.5, 2.5 and 5 μM for 72 hours of treatment) and genotoxicity to Balb3T3 mouse fibroblasts (using CBMN at concentrations of 1, 5 and 10 μM for 24 hours of treatment). The Ag NPs had a significant cytotoxic effect, but no cell transformation or genotoxic effects at the doses and time points investigated. Physicochemical characterization of the chosen NPs was performed; size distribution and surface charge were measured by Dynamic Light Scattering (DLS), imaging by Scanning Electron Microscopy (SEM), the purity and ion leakage by Inductively Coupled Plasma Mass Spectrometry (ICP-MS), and the sedimentation by UV-Visible spectrometry.
APA, Harvard, Vancouver, ISO, and other styles
50

Wu, Langping [Verfasser], and Stefan [Akademischer Betreuer] Haderlein. "Characterizing transformation processes of environmental contaminants by multi-element isotope analysis – proving concepts and developing methods / Langping Wu ; Betreuer: Stefan Haderlein." Tübingen : Universitätsbibliothek Tübingen, 2020. http://d-nb.info/1206173203/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles