Dissertations / Theses on the topic 'Quantification'

To see the other types of publications on this topic, follow the link: Quantification.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Quantification.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Endriss, Cornelia, and Stefan Hinterwimmer. "Quantificational Variability Effects with plural definites: quantification over individuals or situations?" Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2008/1951/.

Full text
Abstract:
In this paper we compare the behaviour of adverbs of frequency (de Swart 1993) like usually with the behaviour of adverbs of quantity like for the most part in sentences that contain plural definites. We show that sentences containing the former type of Q-adverb provide evidence that Quantificational Variability Effects (Berman 1991) come about as an indirect effect of quantification over situations: in order for quantificational variability readings to arise, these sentences have to obey two newly observed constraints that clearly set them apart from sentences containing corresponding quantificational DPs, and that can plausibly be explained under the assumption that quantification over (the atomic parts of) complex situations is involved. Concerning sentences with the latter type of Q-adverb, on the other hand, such evidence is lacking: with respect to the constraints just mentioned, they behave like sentences that contain corresponding quantificational DPs. We take this as evidence that Q-adverbs like for the most part do not quantify over the atomic parts of sum eventualities in the cases under discussion (as claimed by Nakanishi and Romero (2004)), but rather over the atomic parts of the respective sum individuals.
APA, Harvard, Vancouver, ISO, and other styles
2

Herbelot, Aurelie. "Underspecified quantification." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Endriss, Cornelia, and Stefan Hinterwimmer. "The influence of tense in adverbial quantification." Universität Potsdam, 2004. http://opus.kobv.de/ubp/volltexte/2006/840/.

Full text
Abstract:
We argue that there is a crucial difference between determiner and adverbial quantification. Following Herburger [2000] and von Fintel [1994], we assume that determiner quantifiers quantify over individuals and adverbial quantifiers over eventualities. While it is usually assumed that the semantics of sentences with determiner quantifiers and those with adverbial quantifiers basically come out the same, we will show by way of new data that quantification over events is more restricted than quantification over individuals. This is because eventualities in contrast to individuals have to be located in time which is done using contextual information according to a pragmatic resolution strategy. If the contextual information and the tense information given in the respective sentence contradict each other, the sentence is uninterpretable. We conclude that this is the reason why in these cases adverbial quantification, i.e. quantification over eventualities, is impossible whereas quantification over individuals is fine.
APA, Harvard, Vancouver, ISO, and other styles
4

Aḵẖtar, ʻAlī. "Identity and quantification." Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/15146.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Linguistics and Philosophy, 1985.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND HUMANITIES
Vita.
Includes bibliographical references.
by Ali Akhtar.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
5

Basilico, David Anthony. "Quantification and locality." Diss., The University of Arizona, 1993. http://hdl.handle.net/10150/186305.

Full text
Abstract:
This dissertation develops a transformational theory of scope which is based not on the position to which an entire quantificational noun phrase (QNP) can move and adjoin but on the position to which a quantificational determiner can move and adjoin. Following Heim (1982), a tripartite representation for sentences containing QNPs is adopted in which quantificational determiners move out of their containing noun phrases and adjoin to the sentence node at the level of Logical Form (LF). By utilizing this type of representation, asymmetries between the movement possibilities of a phrase and the scope possibilities of a phrase can be captured. This dissertation argues that movement of an operator is free but constrained by the operator acquiring the selection index of the phrase which it binds. The selection index is percolated up the tree in a series of local relationships (government, specifier/head and X-Bar). This index percolation is dependent on the ability of a syntactic head to acquire an index. The necessity of this index percolation approach is demonstrated in the first chapter, which investigates the phenomenon of unselective binding between an adverbial operator and indefinite in restrictive 'if/when' clauses. It shows that this relationship is sensitive to some syntactic islands but not others. It demonstrates that the index percolation approach is the best way to capture the selective island sensitivities of this phenomenon. Additional motivation for this account is given in chapter two, which deals with internally headed relative clauses (IHRCs). Several parallels between IHRCs and restrictive 'if/when' clauses are noted. It shows that the binding of the internal head by the determiner associated with the IHRC is similar to the binding of an indefinite by an adverbial operator. The next two chapters treat the phenomenon of partial Wh-movement.
These chapters further show the application of the index percolation account because they argue that the relationships noted above between an adverbial operator and indefinite and operator and internal head are analogous to the relationship between a partially moved WH-Phrase and the sentence initial scope marker. In chapter six, the scope of quantified possessive phrases in English is examined. This is a case where movement of a phrase and scope of a phrase sharply differ. The approach where the determiner of the possessive is moved alone, with index percolation from the phrase in the specifier position to the moved determiner, is shown to best handle these cases.
APA, Harvard, Vancouver, ISO, and other styles
6

Bui, Huy Q. "Quantification, opacity and modality." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ65025.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cameron, Lee R. J. "Aerosol explosion hazard quantification." Thesis, Cardiff University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311674.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jiang, Yan. "Logical dependency in quantification." Thesis, University College London (University of London), 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306968.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sved, Sofia. "Quantification of Model Risk." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-243924.

Full text
Abstract:
The awareness of model risk has increased due to the increased use of models to valuate financial instruments and their increasing complexity, and regulators now require financial institutions to manage it. Despite this, there is still no industry or market standard when it comes to the quantification of model risk. The objective of this project is to find and implement a method that may be used to quantify model risk and evaluate it based on accuracy, efficiency and generalizability. Several approaches to model risk in the literature are explored in this thesis and it is concluded that existing methods are generally not efficient, accurate or generalizable. However, by combining two of the existing methods in the literature and using data on counterparty valuations, another method to quantify model risk can be constructed. This method is implemented and backtested and it is found to be accurate, general and more efficient than alternative approaches. Furthermore, this method may also serve in model validation as a means to judge the quality of valuations and compare valuation models to each other. One limitation of the method is that if there are few counterparties for a valuation model, say 1 or 2, the method used in this thesis is not suitable.
APA, Harvard, Vancouver, ISO, and other styles
10

Ferreira, Marcelo (Marcelo Barra). "Event quantification and plurality." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33697.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Linguistics and Philosophy, 2005.
Includes bibliographical references (p. 131-138).
This dissertation presents three studies based on the hypothesis that the domain of entities on which natural language interpretation relies includes a partially ordered sub-domain of events. In this sub-domain, we can identify singular and plural elements, the latter being characterizable as mereological sums having singular events as their minimal parts. I discuss how event variables ranging over pluralities are introduced in the logical representation of natural language sentences and how event operators manipulate these variables. Logical representations are read off syntactic structures, and among the elements I will claim are hidden in the syntactic representation of certain sentences are plural definite descriptions of events and event quantifiers selectively binding plural variables. My goal will be to motivate the postulation of these elements by showing how reference to pluralities of events sheds light on several properties of a variety of constructions, and how interpretive differences originated in singular/plural oppositions overtly manifested in the nominal domain are replicated in the aspectual/verbal domain, even in the absence of any overt morphological manifestation.
(cont.) The empirical domain of investigation includes adverbial quantification, donkey anaphora and imperfective aspect, with both habitual and progressive readings being analyzed in detail.
by Marcelo Ferreira.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
11

Gotham, M. G. H. "Copredication, quantification and individuation." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1460158/.

Full text
Abstract:
This thesis addresses the various problems of copredication: the phenomenon whereby two predicates are applied to a single argument, but they appear to require that their argument denote different things. For instance, in the sentence ‘The lunch was delicious but went on for hours’, the predicate ‘delicious’ appears to require that ‘the lunch’ denote food, while ‘went on’ appears to require that it denote an event. Copredication raises philosophical issues regarding the place of a reference relation in semantic theory. It also raises issues concerning the ascription of sortal requirements to predicates in framing a theory of semantic anomaly. Finally, many quantified copredication sentences have truth conditions that cannot be accounted for given standard assumptions, because the predicates used impose distinct criteria of individuation on the objects to which they apply. For instance, the sentence ‘Three books are heavy and informative’ cannot be true in a situation involving only a trilogy (informationally three books, but physically only one), nor in a situation involving only three copies of the same book (physically three books, but informationally only one): the three books involved must be both physically and informationally distinct. The central claims of this thesis are that nouns supporting copredication denote sets of complex objects, and that lexical entries incorporate information about their criteria of individuation, defined in terms of equivalence relations on subsets of the domain of discourse. Criteria of individuation are combined during semantic composition, then accessed and exploited by quantifiers in order to specify that the objects quantified over are distinct in defined ways. This novel approach is presented formally in Chapters 2 and 3, then compared with others in the literature in Chapter 4. In Chapter 5, the discussion is extended to the question of the implications of this approach for the form that a semantic theory should take.
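The quantified-copredication puzzle above can be made concrete with a toy sketch (not from the thesis; the pair encoding and all names are assumptions for illustration): represent each book token as a pair of a physical identifier and an informational identifier, and check that "three books" demands three-way distinctness under both criteria of individuation.

```python
def distinct_under(objects, key):
    """Number of equivalence classes of `objects` under the relation
    induced by `key` (x ~ y iff key(x) == key(y))."""
    return len({key(o) for o in objects})

def satisfies_three_books(objects):
    """True iff the objects are at least three-way distinct both
    physically and informationally, as the copredication reading
    'Three books are heavy and informative' requires."""
    return (distinct_under(objects, lambda o: o["phys"]) >= 3
            and distinct_under(objects, lambda o: o["info"]) >= 3)

trilogy = [{"phys": 1, "info": i} for i in range(3)]  # one volume, three texts
copies  = [{"phys": i, "info": 1} for i in range(3)]  # three copies, one text
three   = [{"phys": i, "info": i} for i in range(3)]  # three distinct books
```

On this encoding only `three` verifies the sentence, matching the truth conditions described in the abstract: neither a trilogy in one volume nor three copies of one book suffices.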
APA, Harvard, Vancouver, ISO, and other styles
12

Albuisson, Eliane. "Quantification des électrogrammes visuels." Paris 6, 1992. http://www.theses.fr/1992PA061001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Elhadad, Jimmy. "Problèmes de quantification géométrique." Aix-Marseille 1, 1990. http://www.theses.fr/1990AIX11274.

Full text
Abstract:
This work concerns the geometric quantization of symplectic manifolds and its application to various physical models. The first article treats the hydrogen atom model: starting from a classical description, we apply the procedures of geometric quantization and propose an interpretation of the quantum states that accounts for the symmetries of the model. In the second article we study the geodesic flow on a sphere and construct an explicit quantization. The third article returns to the hydrogen atom model and integrates, within the framework of geometric quantization, the usual fine- and hyperfine-structure corrections introduced in physics. The following three articles pose the problem of quantizing a constrained mechanical system. We show by means of examples how the Dirac condition must be modified. Since this modification is relevant only when the symmetry group is not unimodular, we study how a suitable extension of the system makes it possible to recover the functoriality of the quantization process with respect to the reduction operation. The extension procedure is finally applied to study the structure of certain coadjoint orbits of a Lie group.
APA, Harvard, Vancouver, ISO, and other styles
14

Tran, Thuan. "Wh-quantification in Vietnamese." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 274 p, 2009. http://proquest.umi.com/pqdweb?did=1694575191&sid=5&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

May, Robert. "The grammar of quantification /." New York : Garland, 1990. http://catalogue.bnf.fr/ark:/12148/cb35690684g.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Janmahasatian, Sarayut. "Quantification of lean body weight /." [St. Lucia, Qld.], 2005. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe19202.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Norvoll, Gyrd. "Quantification and Traceability of Requirements." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8723.

Full text
Abstract:

Software development is a highly dynamic process, primarily because of its foundation in the dynamic human world. Requirements traceability alleviates the detrimental effects of this dynamism by providing increased control over the artifacts of the software development processes and their interrelationships. This thesis investigates how an RT tool should be designed and implemented in order to assist with the tasks of requirements traceability, and outlines a tool that primarily focuses on reducing the work overhead associated with implementing requirements traceability in software development projects. Preparatory to the development of the RT tool, the applicability of the traceability models presented in the in-depth study has been confirmed through empirical work. A detailed representation of the models has been compiled, elaborating on the internal representation of artifacts and traces. The models were extended to be able to represent organisational hierarchies, enabling trace information analysis to deduce the context of important decisions throughout the software development processes, an important tool in understanding how requirements are determined. The thesis presents a requirements specification and architecture with a firm foundation in the findings of the in-depth study, outlining an RT tool that addresses important issues concerning the implementation of requirements traceability, in particular focusing on reducing the associated work overhead. Based on the requirements specification and architecture, an evolutionary prototype is developed, giving its users an impression of the functionality of the outlined RT tool. The prototype addresses the issues pointed out by the requirements specification and architectural description, and, throughout development, attention is given to the evolvability of the prototype. Consequently, the prototype provides a good foundation for the future development of a complete RT tool.

APA, Harvard, Vancouver, ISO, and other styles
18

Bejugam, Santosh. "Tremor quantification and parameter extraction." Thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-16021.

Full text
Abstract:
Tremor is a neurodegenerative disease causing involuntary muscle movements in human limbs. There are many types of tremor, caused by damage to the nerve cells that surround the thalamus of the front brain chamber. It is hard to distinguish or classify tremors, as there are many possible causes behind each specific category, so every tremor type is named after its frequency type. Proper medication and cure by a physician are possible only when the disease is identified. For this reason, there is a need for a device or a technique to analyse the tremor and to extract the parameters associated with the signal. These extracted parameters can be used to classify the tremor for onward identification of the disease. Various diagnostic and treatment-monitoring equipment is available for many neuromuscular diseases. This thesis is concerned with tremor analysis for the purpose of recognizing certain other neurological disorders. A recording and analysis system for human tremor is developed. The analysis was performed based on the frequency and amplitude parameters of the tremor. The Fast Fourier Transform (FFT) and higher-order spectra were used to extract frequency parameters (e.g., peak amplitude, fundamental frequency of tremor, etc.). In order to diagnose subjects' condition, classification was implemented by statistical significance tests (t-test).
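The FFT-based parameter extraction the abstract describes can be sketched as follows (an illustrative assumption, not the thesis's actual code; the function name, the 3-12 Hz tremor band, and the synthetic signal are all chosen for the example):

```python
import numpy as np

def tremor_peak(signal, fs):
    """Estimate the dominant (fundamental) tremor frequency and its
    amplitude from a 1-D recording sampled at fs Hz, via the FFT."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))  # drop DC offset
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Restrict the search to a physiologically plausible tremor band (~3-12 Hz).
    band = (freqs >= 3.0) & (freqs <= 12.0)
    idx = np.argmax(spectrum[band])
    peak_freq = freqs[band][idx]
    peak_amp = 2.0 * spectrum[band][idx] / n  # rescale bin magnitude to signal units
    return peak_freq, peak_amp

# Synthetic 6 Hz "tremor" with additive noise, 10 s at 100 Hz sampling.
np.random.seed(0)
fs = 100.0
t = np.arange(0, 10, 1.0 / fs)
sig = 0.8 * np.sin(2 * np.pi * 6.0 * t) + 0.1 * np.random.randn(t.size)
f0, a0 = tremor_peak(sig, fs)
```

On this synthetic input the estimator recovers a peak near 6 Hz with amplitude near 0.8; real recordings would additionally need band-pass filtering and windowing before the spectral estimate.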
APA, Harvard, Vancouver, ISO, and other styles
19

Leblond, Frédéric. "Quantification rigide de skyrmions déformés." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0006/MQ33694.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Hodges, Amanda E. "Objective Quantification of Daytime Sleepiness." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/iph_theses/175.

Full text
Abstract:
BACKGROUND: Sleep problems affect people of all ages, races, genders, and socioeconomic classifications. Undiagnosed sleep disorders significantly and adversely impact a person’s level of academic achievement, job performance, and subsequently, socioeconomic status. Undiagnosed sleep disorders also negatively impact both direct and indirect costs for employers, the national government, and the general public. Sleepiness has significant implications for quality of life by impacting occupational performance, driving ability, cognition, memory, and overall health. The purpose of this study is to describe the prevalence of daytime sleepiness, as well as other quantitative predictors of sleep continuity and quality. METHODS: Population data from the CDC program in fatigue surveillance were used for this secondary analysis seeking to characterize sleep quality and continuity variables. Each participant underwent a standard nocturnal polysomnography and a standard multiple sleep latency test (MSLT) on the subsequent day. Frequency and chi-square tests were used to describe the sample. One-Way Analysis of Variance (ANOVA) was used to compare sleep-related variables of groups with sleep latencies of <5 minutes, 5-10 minutes, and >10 minutes. Bivariate and multivariate logistic regression was used to examine the association of the sleep variables with sleep latency time. RESULTS: The mean (SD) sleep latency of the sample was 8.8 (4.9) minutes. Twenty-four individuals had ≥1 SOREM, and approximately 50% of participants (n = 100) met clinical criteria for a sleep disorder. Individuals with shorter sleep latencies, compared to those with longer latencies, reported higher levels of subjective sleepiness, had higher sleep efficiency percentages, and longer sleep times. The Epworth Sleepiness Scale, sleep efficiency percentage, total sleep time, the presence of a sleep disorder, and limb movement index were positively associated with a mean sleep latency of <5 minutes.
CONCLUSIONS: The presence of a significant percentage of sleep disorders within our study sample validates prior suggestions that such disorders remain unrecognized, undiagnosed, and untreated. In addition, our findings confirm questionnaire-based surveys that suggest a significant number of the population is excessively sleepy, or hypersomnolent. Therefore, the high prevalence of sleep disorders and the negative public health effects of daytime sleepiness demand attention. Further studies are now required to better quantify levels of daytime sleepiness, within a population-based sample, to better understand their impact upon morbidity and mortality. This will not only expand our current understanding of daytime sleepiness, but it will also raise awareness surrounding its significance and relation to public health.
APA, Harvard, Vancouver, ISO, and other styles
21

Pinnaka, Chaitanya. "Quantification of User Privacy Loss." Thesis, KTH, Kommunikationsnät, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-119822.

Full text
Abstract:
With the advent of the communication age, newer, faster and arguably better ways of moving information are at our disposal. People felt the need to stay connected, which led to the evolution of smart gadgets like cell phones, tablets and laptops. The next generations of automobiles are keen to extend this connectivity to the vehicle user by arming themselves with radio interfaces. This move will enable the formation of vehicular networks in which each car (mobile node) is a node of a mobile ad hoc network, popularly referred to as Vehicular Ad Hoc Networks (VANETs). These networks will provide the necessary infrastructure for applications that can help improve the safety and efficiency of road traffic, as well as provide useful services for the mobile nodes (cars). The specific nature of VANETs raises security and privacy issues that need to be addressed before they can be integrated into the social world. Thus, the open field of secure inter-vehicular communication promises an interesting research area. This thesis aims to quantify how much of a user's trajectory an adversary can identify while monitoring non-safety applications in VANETs. Different types of adversaries, their attacks and possible non-safety applications are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
22

Lascelles, Dominique. "Quantification of adsorbed flotation reagents." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=80118.

Full text
Abstract:
Collector interaction with mineral surfaces has long been studied. Little work has been done, however, on directly quantifying reagent adsorption, particularly under industrial process conditions. The use of a novel surface analysis technique, Headspace Analysis Gas-phase Infrared Spectroscopy (HAGIS), is suggested for quantification of adsorbed reagents in mineral processing.
As a first exercise, a test system of xanthate adsorption onto lead sulphide minerals was studied. A survey of possible calibration standards (pure xanthate, a synthetic lead-xanthate, galena (PbS) and a lead sulphide ore conditioned with xanthate) resulted in linear curves for all four cases. The quantification of isopropyl xanthate adsorption onto batch flotation products (concentrate and tail) was used to determine that ore standards gave the most accurate results.
The technique was also tested for quantification of adsorbed amines. Two collectors, dodecylamine and diphenylguanidine, and a depressant, triethylenetetramine, were studied. A common calibration curve was prepared using diphenylguanidine adsorbed on Inco matte. Results show that the HAGIS technique can easily be used to quantify adsorbed amines.
It is concluded that the HAGIS technique is a powerful new tool for the quantitative determination of adsorbed reagents. The xanthate study showed the use of ores as standards produces the best calibration. The amine study introduced the possibility of analyzing reagent mixtures.
APA, Harvard, Vancouver, ISO, and other styles
23

Nemes, Simona. "Practical methods for lignans quantification." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=110432.

Full text
Abstract:
An optimized microwave-assisted extraction (MAE) method is proposed, in conjunction with high performance liquid chromatography (HPLC) analysis, for the general quantification of lignans in flaxseed materials and other plant foods. The method involves hydrolyzing 0.5 – 1.6 g samples with 50 ml of 0.5 M NaOH at 156 W (power level estimated using calorimetric calibration) for 3 min, using intermittent microwave power application (30 s on/off). The final temperature of the extracts is 67°C. The MAE method extracts the lignan from the plant matrix completely, accurately (97.5 % recovery), efficiently (yields 21.4 and 26.6 % higher than those obtained with reference methods), and precisely (coefficients of variation < 4.03 % for repeated determinations). An enzymatic hydrolysis (EH) method, complementary to the MAE method, was developed for the general determination of lignan aglycones in plant samples. The EH method involves hydrolyzing microwave-assisted extracts containing 100 mg sample in 3 ml of sodium acetate buffer (0.01 M, pH 5), with crude solutions of β-glucuronidase using ≥ 40 U of enzyme/mg sample (depending on the hydrolysing capacity of various batches of enzyme) by incubation at 37°C for 48 h. The lignan glucosides are hydrolysed in proportion of 95.6 %. The EH method is recommended for building databases of lignan contents in foods, useful for nutritionists and medical researchers who seek to assess the effects of dietary lignan intake on human health. Artificial neural network (ANN) and partial least squares (PLS) regression models, which are complementary to the MAE method, were calibrated for the general quantification of lignans in a variety of flaxseed materials. The lignan values predicted with the ANN and PLS models were in the range of ± 0.67 to 4.85 % of the reference lignan values. 
Using the ANN and PLS models requires measuring the UV-Vis light absorption of microwave-assisted flaxseed extracts at 289, 298, 343, and 765 nm, following the Folin-Ciocalteu's assay; the models are useful to the flaxseed processing industry for rapidly and accurately determining the lignan contents of various flaxseed raw materials. A non-automated, affordable and accurate solid phase extraction (SPE) method was developed for purifying microwave-assisted flaxseed extracts. The method requires the preparation of extracts prior to SPE by adjusting the pH of extracts in two stages, 1st to pH 3 with sulphuric acid for removing the water soluble proteins and carbohydrates by precipitation, and 2nd to pH 5 with sodium hydroxide for improving the retention of lignan by the packed SPE phase in order to reduce the lignan losses in the wash-water eluate. Microwave-assisted extracts from 0.6 and 1.5 g defatted flaxseed meal can be purified by SPE in order to recover in the 10, 20 and 30 % ethanol pooled eluates 71.2 % and 60.6 %, respectively, of the amount of lignan subjected to purification. SPE purified extracts can be used for further experiments, such as testing the antioxidant activity and the stability of the lignan extracts during various storage conditions.
Une méthode optimisée d'extraction assistée par micro-ondes (EAMO), en conjonction avec l'analyse par chromatographie liquide de haute performance, est proposée pour la quantification de lignanes, de façon généralisée, dans des échantillons des graines de lin et des aliments d'origine végétale. La méthode nécessite l'hydrolyse des échantillons de 0.5 - 1.6 g avec 50 ml de NaOH 0.5 M en appliquant 156 W (niveau de puissance estimé par calibration calorimétrique) de façon intermittente (30 s marche/arrêt) pour 3 min. La température finale des extraits était de 67°C. La méthode EAMO extrait les lignanes des matrices végétales complètement, avec exactitude (récupération de 97.5 %), avec efficacité (rendements de 21.4 et 26.6 % plus hauts que ceux obtenus avec des méthodes conventionnelles), et avec précision (coefficients de variation pour analyses répétées < 4.03 %).Une méthode d'hydrolyse enzymatique (HE), complémentaire pour la méthode EAMO, a été développée pour la quantification généralisée des lignanes aglycones dans des échantillons végétaux. La méthode HE nécessite l'hydrolyse des extraits, obtenus par EAMO, qui contient 100 mg d'échantillons dans 3 ml de solution tampon d'acétate de sodium (0.01 M, pH 5), avec des solutions d'enzyme β-glucuronidase en concentrations de ≥ 40 U d'enzyme/mg échantillon dépendant de la capacité d'hydrolyse des différents lots d'enzymes), par incubation a 37°C pour 48 h. Les lignanes glucosides sont hydrolysés en proportion de 95.6 %. La méthode HE est recommandée pour construire des bases des données des contenus en lignanes des aliments, qui sont utiles aux chercheurs en santé et nutrition qui cherchent à évaluer les effets des apports nutritionnels des lignanes sur la santé humaine. 
Artificial neural network (ANN) and partial least squares (PLS) regression models, complementary to the MAE method, were calibrated for the generalised quantification of lignans in a variety of flaxseed samples. Lignan values estimated with the ANN and PLS models fell within ± 0.67 to 4.85 % of the lignan reference values. Using the ANN and PLS models requires running Folin-Ciocalteu assays to measure the UV-Vis light absorption of the extracts at 289, 298, 343, and 765 nm. These models are useful to the flaxseed processing industry for rapidly and accurately quantifying lignan levels in different sources of flaxseed raw material. A non-automated, affordable and accurate solid phase extraction (SPE) method was developed to purify flaxseed extracts produced by MAE. The method requires preparing the extracts before SPE by adjusting the pH in two stages: first to pH 3 with sulphuric acid, to remove the water-soluble proteins and carbohydrates by precipitation, and second to pH 5 with sodium hydroxide, to improve the retention of lignans by the SPE solid phase and thereby reduce lignan losses in the wash-water eluate. MAE extracts from 0.6 and 1.5 g of defatted flaxseed meal can be purified by SPE to recover, in the pooled 10, 20 and 30 % ethanol eluates, 71.2 and 60.6 %, respectively, of the amount of lignans subjected to purification. SPE-purified extracts can be used to test the antioxidant capacity and the stability of the lignan extracts during storage under various conditions.
APA, Harvard, Vancouver, ISO, and other styles
24

Virmani, Shashank Soyuz. "Entanglement quantification and local discrimination." Thesis, Imperial College London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270484.

Full text
25

Sequeira, Dilip. "Type inference with bounded quantification." Thesis, University of Edinburgh, 1998. http://hdl.handle.net/1842/503.

Abstract:
In this thesis we study some of the problems which occur when type inference is used in a type system with subtyping. An underlying poset of atomic types is used as a basis for our subtyping systems. We argue that the class of Helly posets is of significant interest, as it includes lattices and trees, and is closed under type formation not only with structural constructors such as function space and list, but also records, tagged variants, Abadi-Cardelli object constructors, top and bottom. We develop a general theory relating consistency, solvability, and solution of sets of constraints between regular types built over Helly posets with these constructors, and introduce semantic notions of simplification and entailment for sets of constraints over Helly posets of base types. We extend Helly posets with inequalities of the form a <= tau, where tau is not necessarily atomic, and show how this enables us to deal with bounded quantification. Using bounded quantification we define a subtyping system which combines structural subtype polymorphism and predicative parametric polymorphism, and use this to extend with subtyping the type system of Laufer and Odersky for ML with type annotations. We define a complete algorithm which infers minimal types for our extension, using factorisations, solutions of subtyping problems analogous to principal unifiers for unification problems. We give some examples of typings computed by a prototype implementation.
26

Elfverson, Daniel. "Multiscale Methods and Uncertainty Quantification." Doctoral thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-262354.

Abstract:
In this thesis we consider two great challenges in computer simulations of partial differential equations: multiscale data, varying over multiple scales in space and time, and data uncertainty, due to lack of or inexact measurements. We develop a multiscale method based on a coarse scale correction, using localized fine scale computations. We prove that the error in the solution produced by the multiscale method decays independently of the fine scale variation in the data or the computational domain. We consider the following aspects of multiscale methods: continuous and discontinuous underlying numerical methods, adaptivity, convection-diffusion problems, Petrov-Galerkin formulation, and complex geometries. For uncertainty quantification problems we consider the estimation of p-quantiles and failure probability. We use spatial a posteriori error estimates to develop and improve variance reduction techniques for Monte Carlo methods. We improve standard Monte Carlo methods for computing p-quantiles and multilevel Monte Carlo methods for computing failure probability.
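The multilevel Monte Carlo idea summarised in this abstract — spending most samples on cheap coarse approximations and correcting with a few coupled fine-level samples — can be illustrated with a toy sketch (our own illustration, not code from the thesis; `simulate` stands in for a PDE solve at a given mesh level, and the coupling of fine and coarse solves through a shared random input is what makes the correction variances decay):

```python
import random

def simulate(level, z):
    # Toy stand-in for a PDE solve on a mesh of width 2**-level, driven by
    # the random input z; finer levels approach the true value 1.0.
    return 1.0 - 2.0 ** (-level) * (1.0 + 0.5 * z)

def mlmc_estimate(num_levels, samples_per_level):
    # Telescoping sum E[Q_{L-1}] = E[Q_0] + sum_{l>=1} E[Q_l - Q_{l-1}]:
    # each correction couples the fine and coarse solves through the same
    # random input, so its variance (and sample count) shrinks with level.
    estimate = 0.0
    for level in range(num_levels):
        acc = 0.0
        for _ in range(samples_per_level[level]):
            z = random.gauss(0.0, 1.0)
            fine = simulate(level, z)
            coarse = simulate(level - 1, z) if level > 0 else 0.0
            acc += fine - coarse
        estimate += acc / samples_per_level[level]
    return estimate
```

Decreasing `samples_per_level` with the level is the point of the method: the corrections at fine levels have small variance, so few expensive samples are needed there.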
27

Parkinson, Matthew. "Uncertainty quantification in Radiative Transport." Thesis, University of Bath, 2019. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.767610.

Abstract:
We study how uncertainty in the input data of the Radiative Transport equation (RTE) affects the distribution of (functionals of) its solution (the output data). The RTE is an integro-differential equation, in up to seven independent variables, that models the behaviour of rarefied particles (such as photons and neutrons) in a domain. Its applications include nuclear reactor design, radiation shielding, medical imaging, optical tomography and astrophysics. We focus on the RTE in the context of nuclear reactor physics where, to design and maintain safe reactors, understanding the effects of uncertainty is of great importance. There are many potential sources of uncertainty within a nuclear reactor. These include the geometry of the reactor, the material composition and reactor wear. Here we consider uncertainty in the macroscopic cross-sections ('the coefficients'), representing them as correlated spatial random fields. We wish to estimate the statistics of a problem-specific quantity of interest (under the influence of the given uncertainty in the cross-sections), which is defined as a functional of the scalar flux. This is the forward problem of Uncertainty Quantification. We seek accurate and efficient methods for estimating these statistics. Thus far, the research community studying Uncertainty Quantification in radiative transport has focused on the Polynomial Chaos expansion. However, it is known that the number of terms in the expansion grows exponentially with respect to the number of stochastic dimensions and the order of the expansion, i.e. polynomial chaos suffers from the curse of dimensionality. Instead, we focus our attention on variants of Monte Carlo sampling - studying standard and quasi-Monte Carlo methods, and their multilevel and multi-index variants.
We show numerically that the quasi-Monte Carlo rules, and the multilevel variance reduction techniques, give substantial gains over the standard Monte Carlo method for a variety of radiative transport problems. Moreover, we report problems in up to 3600 stochastic dimensions, far beyond the capability of polynomial chaos. A large part of this thesis is focused towards a rigorous proof that the multilevel Monte Carlo method is superior to the standard Monte Carlo method, for the RTE in one spatial and one angular dimension with random cross-sections. This is the first rigorous theory of Uncertainty Quantification for transport problems and the first rigorous theory of Uncertainty Quantification for any PDE problem which accounts for a path-dependent stability condition. To achieve this result, we first present an error analysis (including a stability bound on the discretisation parameters) for the combined spatial and angular discretisation of the spatially heterogeneous RTE, which is explicit in the heterogeneous coefficients. We can then extend this result to prove probabilistic bounds on the error, under assumptions on the statistics of the cross-sections and provided the discretisation satisfies the stability condition pathwise. The multilevel Monte Carlo complexity result follows. Amongst other novel contributions, we: introduce a method which combines a direct and iterative solver to accelerate the computation of the scalar flux, by adaptively choosing the fastest solver based on the given coefficients; numerically test an iterative eigensolver, which uses a single source iteration within each loop of a shifted inverse power iteration; and propose a novel model for (random) heterogeneity in concrete which generates (piecewise) discontinuous coefficients according to the material type, but where the composition of materials is spatially correlated.
28

Hsing, Jeff M. (Jeff Mindy) 1972. "Quantification of myocardial macromolecular transport." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9068.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (leaves 66-68).
The needs and impacts of drug administration have evolved from a systemic to a local focus. Local drug delivery would allow a higher local drug concentration at lower systemic toxicity than can be achieved with systemic delivery. One of the tissues of interest for local delivery is the heart, or myocardium. Increasingly, clinicians are looking to direct myocardial delivery for therapy of complex cardiovascular diseases. Yet there is little quantitative data on the rates of macromolecular transport inside the myocardium. A porcine model was used in this work as it most closely resembles humans in size, structure and morphology. Using a technique previously developed in this laboratory to quantify the distribution of macromolecules, the delivery of compounds directly into the myocardium was evaluated. To make quantification generic and not specific to a particular drug or compound, fluorescently labeled 20 kDa and 150 kDa dextrans were used to simulate small and large diffusing macromolecules. Diffusion in the myocardium in two directions, transmural and cross-sectional, was investigated to examine diffusion of compounds along and against the myocardial fiber orientation. Fluorescence microscopy was used to quantify concentration profiles, and the data were then fitted to a simple diffusion model to calculate diffusivities. This validated the technique developed. The diffusivities of 20 kDa dextran in the transmural and cross-sectional directions were calculated to be 9.49 ± 2.71 µm²/s and 20.12 ± 4.10 µm²/s respectively. The diffusivities for 150 kDa dextran were calculated to be 2.39 ± 1.86 µm²/s and 3.23 ± 1.76 µm²/s respectively. The diffusivities of the two macromolecules were statistically different (p < 0.02 for the transmural direction and p < 0.01 for the cross-sectional direction). While diffusion of the larger macromolecule was isotropic, this was not the case for the smaller one.
The calculated diffusivity values in the myocardium correlated with previously published data for dextran in the arterial media, suggesting that the transport properties of the myocardium and arterial media may be similar. Applications of quantitative macromolecular transport may include developing novel therapies for cardiovascular diseases in the future.
by Jeff M. Hsing.
S.M.
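The fitting step this abstract describes — matching a measured concentration profile to a simple diffusion model to extract a diffusivity — can be sketched as follows (an illustrative stand-in only: we use the 1-D point-source solution of the diffusion equation and a grid search over candidate diffusivities; the function names and the numbers in the example are ours, not the thesis's):

```python
import math

def point_source_profile(x, t, diffusivity, mass=1.0):
    # 1-D point-source solution of the diffusion equation:
    # C(x, t) = M / sqrt(4*pi*D*t) * exp(-x**2 / (4*D*t))
    return (mass / math.sqrt(4.0 * math.pi * diffusivity * t)
            * math.exp(-x * x / (4.0 * diffusivity * t)))

def fit_diffusivity(xs_um, concentrations, t_s, candidates_um2_per_s):
    # Least-squares fit of D over a candidate grid; a real analysis would
    # use a nonlinear optimiser, but the principle is the same.
    def sse(d):
        return sum((point_source_profile(x, t_s, d) - c) ** 2
                   for x, c in zip(xs_um, concentrations))
    return min(candidates_um2_per_s, key=sse)
```

With positions in micrometres, time in seconds and D in µm²/s, the fitted value lands on the candidate that best reproduces the measured profile.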
29

Van, Zyl Jalene. "The quantification of metabolic regulation." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/80280.

Abstract:
Thesis (MSc)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: Metabolic systems are open systems continually subject to changes in the surrounding environment that cause fluctuations in the state variables and perturbations in the system parameters. However, metabolic systems have mechanisms to keep them dynamically and structurally stable in the face of these changes. In addition, metabolic systems also cope with large changes in the fluxes through their pathways without letting metabolite concentrations vary wildly. Quantitative measures have previously been proposed for "metabolic regulation", using the quantitative framework of Metabolic Control Analysis. However, the term "regulation" is used so loosely that its content is mostly lost. These different measures of regulation had also not been applied to a model and comparably investigated prior to this study. Hence, this study analyses the usefulness of the different quantitative measures in answering different types of regulatory questions. The aim of this study was thus to distinguish the above-mentioned aspects of metabolic regulation, namely dynamic stability, structural stability, and homeostasis, and to find an appropriate quantitative measure for each. Dynamic stability is the property of a steady state to return to its original state after a perturbation in a metabolite in the system, and can be analysed in terms of self-response and internal-response coefficients. Structural stability is concerned with the change in steady state after a perturbation of a parameter in the system, and can be analysed in terms of concentration-response coefficients. Furthermore, it is shown that control patterns are useful in understanding which system properties determine structural stability and to what degree.
Homeostasis is defined as the change in the steady-state concentration of a metabolite relative to the change in the steady-state flux through the metabolite pool following a perturbation in a system parameter, and co-response coefficients are proposed as quantitative measures of homeostasis. More specifically, metabolite-flux co-response coefficients allow the definition of an index that quantifies the degree to which a metabolite is homeostatically regulated. A computational model of a simple linear metabolic sequence subject to feedback inhibition, with different sets of parameters, provided a test-bed for the quantitative analysis of metabolic regulation. Log-log rate characteristics and parameter portraits of steady-state variables, as well as of response and elasticity coefficients, were used to analyse the steady-state behaviour and control properties of the system. This study demonstrates the usefulness of generic models based on proper enzyme kinetics in furthering our understanding of metabolic behaviour, control and regulation, and has laid the groundwork for future studies of metabolic regulation of more complex core models or of models of real systems.
AFRIKAANSE OPSOMMING: Metabolic systems are open systems that are continually exposed to a fluctuating environment. These fluctuations lead to changes in both the internal variables and the parameters of metabolic systems. Metabolic systems, however, possess mechanisms that keep them dynamically and structurally stable. These mechanisms also ensure that the concentrations of internal metabolites remain relatively constant despite large changes in flux through the metabolic pathway of which these metabolites form part. Quantitative measures have previously been proposed for "metabolic regulation", based on the framework of Metabolic Control Analysis. The uncritical use of the term "regulation", however, robs this concept of meaningful content. Before this study, the proposed measures of regulation had not been applied to a model in order to compare them with one another. The present study investigates the applicability of the various measures for answering different types of questions about regulation. The aim of this study was to distinguish aspects of metabolic regulation, namely dynamic stability, structural stability and homeostasis, and to find an appropriate measure for each of these aspects. Dynamic stability is the property of a steady state to return to its original state after perturbation of the concentration of an internal metabolite; this aspect of regulation can be analysed in terms of internal-response and self-response coefficients. Structural stability of a steady state describes the degree to which the steady state changes after a parameter of the system has been perturbed, and can be analysed in terms of concentration-response coefficients. This study further shows that control patterns are useful for establishing which properties of a system determine its structural stability, and to what degree.
Homeostasis is defined as the change in the concentration of an internal metabolite relative to the change in the flux through that metabolite pool after a parameter of the system has changed. For the analysis of this aspect of regulation, co-response coefficients are proposed as a measure. More specifically, metabolite-flux co-response coefficients can be used to define an index that measures to what degree a metabolite is homeostatically regulated. A computational model of a simple linear metabolic sequence subject to feedback inhibition was used to analyse the various aspects of metabolic regulation quantitatively with four different sets of parameters. Log-log rate characteristics and parameter portraits of steady-state variables, as well as of response and elasticity coefficients, were used to analyse the steady-state behaviour and control properties of the system. This study demonstrates the usefulness of generic models based on correct enzyme kinetics for deepening our understanding of metabolic behaviour, control and regulation. Furthermore, this study serves as a foundation for future studies of the metabolic regulation of more complex core models or of models of real systems.
National Research Foundation
30

Zarinabad, Nooralipour Niloufar. "Advanced quantification of myocardial perfusion." Thesis, King's College London (University of London), 2013. https://kclpure.kcl.ac.uk/portal/en/theses/advanced-quantification-of-myocardial-perfusion(1aa4ae14-3452-4f50-bbfc-f433191f210c).html.

Abstract:
Ischemic heart disease remains a major global health concern with significant morbidity and mortality. Identifying areas of myocardial tissue at risk early on can help guide clinical management and develop appropriate treatment strategies to prevent myocardial infarction, thus improving patient outcomes. Using the latest cardiac magnetic resonance (CMR) imaging techniques, first-pass perfusion imaging allows a very high spatial resolution, non-invasive and radiation-free quantification of myocardial blood flow (MBF). True quantification of very high resolution perfusion images offers a unique capability to localize and measure subendocardial ischemia. A common technique for calculating MBF from dynamic contrast-enhanced cardiovascular MR (DCE-CMR) is to track a bolus of contrast agent and measure MBF using fully quantitative methods. These methods, which are based on the central volume principle, deconvolve the changes in the concentration of the injected contrast agent in the tissue with the arterial input function (AIF). However, deconvolution is inherently difficult and therefore numerically unstable with noise-contaminated data. The purpose of this study is to enable high spatial resolution voxel-wise quantitative analysis of myocardial perfusion in DCE-CMR, in particular by finding the most favourable quantification algorithm in this context. Voxel-wise quantification has the potential to combine the advantage of visual analysis with the objective and reproducible evaluation made possible by a true quantitative assessment. Four deconvolution algorithms – Fermi function modelling, deconvolution using a B-spline basis, deconvolution using an exponential basis, and Auto-Regressive Moving Average (ARMA) modelling – were tested to calculate voxel-wise perfusion estimates. The algorithms were developed on synthetic data and validated against a true gold standard using a hardware perfusion phantom and an explanted perfused pig heart.
The accuracy of each method was assessed at different levels of spatial resolution, and the robustness of each deconvolution algorithm to variation in the perfusion modelling parameters was evaluated. Finally, voxel-wise analysis was used to generate high resolution perfusion maps from real data acquired from healthy volunteers and patients with coronary artery disease. Both the simulations and the maps from the hardware phantom, the explanted pig heart data and the patient studies showed that voxel-wise quantification of myocardial perfusion is feasible and can detect abnormal regions with high sensitivity, identifying the tissue at risk. In general, ARMA and the exponential method proved more accurate; the Fermi model, on the other hand, was the most robust to noise, with the highest precision for voxel-wise analysis. Inevitably, the choice of quantification method for data analysis comes down to a trade-off between the accuracy and the precision of the estimation.
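The forward model behind the Fermi approach described above — the tissue enhancement curve as the convolution of the AIF with a Fermi-function impulse response, whose value at t = 0 approximates MBF — can be sketched as follows (our own illustrative discretisation, not code or parameter values from the thesis):

```python
import math

def fermi_response(t, amplitude, tau, width):
    # Fermi-function impulse response h(t); h(0) approximates myocardial
    # blood flow (MBF) in the Fermi deconvolution model.
    return amplitude / (1.0 + math.exp((t - tau) / width))

def tissue_curve(aif, response, dt):
    # Discrete convolution: C_tissue(t_i) = dt * sum_{j<=i} AIF(t_j) h(t_{i-j})
    return [dt * sum(aif[j] * response[i - j] for j in range(i + 1))
            for i in range(len(aif))]
```

Deconvolution then amounts to adjusting `amplitude`, `tau` and `width` so that `tissue_curve` matches the measured enhancement, after which MBF is read off as `fermi_response(0, ...)`.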
31

El, Kheiashy Karim. "Flow-Transport Modeling and Quantification." ScholarWorks@UNO, 2007. http://scholarworks.uno.edu/td/548.

Abstract:
Several research investigations have been conducted on flow and sediment transport over bed forms in alluvial rivers (e.g. the mean flow field, turbulence, shear partitioning, bed load transport and bed form geometry). Much of this work consisted of either laboratory studies or small-scale field investigations. Recently, advances in technology have improved the way data are collected and analyzed, e.g. flow data, velocity data and detailed bathymetric information that provide greater knowledge about bed form geometry. Recent advances in computing power have also reduced the computational restrictions on using three-dimensional numerical models to predict the temporal and spatial changes of flow and sediment environments. The work performed in this research quantified the periodic nature of bed form types and geometries along the Lower Mississippi River. Correlations were performed relating the hydrodynamics of the river to the bed form types and geometries. The research showed the inability of hydrostatic numerical modeling systems to accurately predict flow separation at the bed form crest, but indicated that these models could reasonably predict the out-of-phase relationship between the bed form and the water surface profile. Furthermore, the hydrostatic models predicted the total bed resistance as adequately as the non-hydrostatic models. It was found that non-hydrostatic models are required to properly simulate flow separation at bed form crests. Models such as MIKE 3 with constant z-level vertical discretization failed to capture the observed boundary layers unless very fine grids were used. A new procedure was developed as part of this research, in which relations and dependencies between the hydrodynamic resistance and the bed form dimensions relative to the numerical model spatial scale were derived.
This procedure can be used to aid in numerical riverine model calibration and to provide a better representation of flow resistance in hydrodynamic modeling codes.
32

Salvatore, Felipe de Souza. "Topics in modal quantification theory." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/8/8133/tde-14122015-122734/.

Abstract:
The modal logic S5 gives us a simple technical tool to analyze some main notions from philosophy (e.g. metaphysical necessity and epistemological concepts such as knowledge and belief). Although S5 can be axiomatized by some simple rules, this logic shows some puzzling properties. For example, an interpolation result holds for the propositional version, but this same result fails when we add first-order quantifiers to this logic. In this dissertation, we study the failure of the Definability and Interpolation Theorems for first-order S5. At the same time, we combine the results of justification logic and we investigate the quantified justification counterpart of S5 (first-order JT45). In this way we explore the relationship between justification logic and modal logic to see if justification logic can contribute to the literature concerning the restoration of the Interpolation Theorem.
The modal logic S5 offers a technical tool for analysing certain central philosophical notions (for example, metaphysical necessity, and epistemological concepts such as knowledge and belief). Although it is axiomatised by simple principles, this logic exhibits some peculiar properties. One of the most notable is the following: the Interpolation Theorem can be proved for the propositional version, but the same theorem cannot be proved once first-order quantifiers are added to the logic. In this dissertation we study the failure of the Definability and Interpolation Theorems for the quantified version of S5. At the same time, we draw on results from justification logic and investigate the justification-logic counterpart of quantified S5 (the logic called first-order JT45). In this way, we explore the relation between modal logic and justification logic to see whether justification logic can contribute to the restoration of the Interpolation Theorem.
33

Cohort, Pierre. "Sur quelques problèmes de quantification." Paris 6, 2000. http://www.theses.fr/2000PA066112.

34

BENAZZA, AMALE. "Quantification vectorielle en codage d'images." Paris 11, 1993. http://www.theses.fr/1993PA112097.

Abstract:
This thesis addresses the vector quantisation of images. Its aim is to propose two methods that reduce the complexity of coding at the cost of a small increase in overall distortion. The first approach reduces the complexity of the code-vector search by using a dictionary with a tree structure. Unlike the classical approach, the tree is built bottom-up using hierarchical classification techniques. The coding method then uses a perceptron-type algorithm to determine the hyperplane separating two child nodes. The second method concerns product-code vector quantisers with linear analysis. It is shown that a necessary and sufficient condition on the transformation must be satisfied for the quantisations of the product code to be performed independently. In that case, the problem of optimising the various quantisers under a fixed overall bit-rate constraint is solved. Finally, the reduction in the computational load of coding is evaluated.
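The baseline that a tree-structured dictionary is designed to speed up — the full-search nearest-code-vector lookup — can be sketched in a few lines (an illustrative baseline of our own, not the thesis's algorithm):

```python
def quantize(vector, codebook):
    # Full-search vector quantisation: return the index of the nearest
    # code-vector under squared Euclidean distance. A tree-structured
    # codebook replaces this linear scan over the whole codebook with a
    # logarithmic descent through hyperplane tests at each node.
    def dist2(code):
        return sum((v - c) ** 2 for v, c in zip(vector, code))
    return min(range(len(codebook)), key=lambda i: dist2(codebook[i]))
```

Encoding an image block then amounts to transmitting the returned index instead of the block itself.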
35

Wu, Congling. "Improvement Of Gfp Expression Quantification." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1206113236.

36

Carson, J. "Uncertainty quantification in palaeoclimate reconstruction." Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/29076/.

Abstract:
Studying the dynamics of the palaeoclimate is a challenging problem. Part of the challenge lies in the fact that our understanding must be based on only a single realisation of the climate system. With only one climate history, it is essential that palaeoclimate data are used to their full extent, and that uncertainties arising from both data and modelling are well characterised. This is the motivation behind this thesis, which explores approaches for uncertainty quantification in problems related to palaeoclimate reconstruction. We focus on uncertainty quantification problems for the glacial-interglacial cycle, namely parameter estimation, model comparison, and age estimation of palaeoclimate observations. We develop principled data assimilation schemes that allow us to assimilate palaeoclimate data into phenomenological models of the glacial-interglacial cycle. The statistical and modelling approaches we take in this thesis means that this amounts to the task of performing Bayesian inference for multivariate stochastic differential equations that are only partially observed. One contribution of this thesis is the synthesis of recent methodological advances in approximate Bayesian computation and particle filter methods. We provide an up-to-date overview that relates the different approaches and provides new insights into their performance. Through simulation studies we compare these approaches using a common benchmark, and in doing so we highlight the relative strengths and weaknesses of each method. There are two main scientific contributions in this thesis. The first is that by using inference methods to jointly perform parameter estimation and model comparison, we demonstrate that the current two-stage practice of first estimating observation times, and then treating them as fixed for subsequent analysis, leads to conclusions that are not robust to the methods used for estimating the observation times. 
The second main contribution is the development of a novel age model based on a linear sediment accumulation model. By extending the target of the particle filter we are able to jointly perform parameter estimation, model comparison, and observation age estimation. In doing so, we are able to perform palaeoclimate reconstruction using sediment core data that takes age uncertainty in the data into account, thus solving the problem of dating uncertainty highlighted above.
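The particle filter machinery this abstract builds on can be illustrated by a minimal bootstrap filter (propagate, weight, resample) on a toy state-space model; everything below is a generic sketch under our own assumptions, not the thesis's assimilation scheme:

```python
import math
import random

def bootstrap_filter(observations, n_particles, transition, loglik, init):
    # Minimal bootstrap particle filter: draw particles from the prior,
    # then for each observation propagate them through the state model,
    # weight them by the observation likelihood, and resample.
    particles = [init() for _ in range(n_particles)]
    for y in observations:
        particles = [transition(x) for x in particles]
        # random.choices normalises the weights internally.
        weights = [math.exp(loglik(y, x)) for x in particles]
        particles = random.choices(particles, weights=weights, k=n_particles)
    return particles
```

The returned particle cloud approximates the filtering distribution of the latent state; extending the state with static parameters (or observation ages) is what turns this into the joint estimation scheme described above.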
37

Russell, Kelly A. "Children's prenumerical quantification of time." Birmingham, Ala. : University of Alabama at Birmingham, 2008. https://www.mhsl.uab.edu/dt/2008p/russell.pdf.

Abstract:
Thesis (Ph. D.)--University of Alabama at Birmingham, 2008.
Additional advisors: Jerry Aldridge, Lois Christensen, Lynn Kirkland, Maryann Manning. Description based on contents viewed Oct. 7, 2008; title from PDF t.p. Includes bibliographical references (p. 66-68).
38

Vine, Douglas P. "The target vulnerability quantification process." Master's thesis, This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-12162009-020115/.

39

Hampp, Fabian. "Quantification of combustion regime transitions." Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/32582.

Abstract:
The current work provides fundamental understanding of combustion regime transitions from distributed reactions towards the corrugated flamelet regime through a novel application of the multi-fluid approach of Spalding. Aerodynamically stabilised premixed flames were studied in a back-to-burnt opposed jet configuration featuring fractal grid generated multi-scale turbulence (Re ≃ 18,400 and Ret > 350). The chemical timescale was varied via the mixture stoichiometry, fuel reactivity and excess enthalpy with rates of strain exceeding the laminar flame extinction point. Rayleigh thermometry was performed to quantify the reaction zone broadening with large low temperature regions observed. Simultaneous Mie scattering, OH-PLIF and PIV were used to quantify the encounter of intermediate fluid states (i.e. mixing, mildly and strongly reacting) in addition to reactants and combustion products. A physical interpretation was provided for the individual fluid states. The analysis showed self-sustained flames in low strain regions with a collocated and pronounced dilatation for higher Damköhler numbers. By contrast, highly strained regions resulted in an auto-ignition related burning with attenuated dilatation and increased vorticity levels. The variation of the excess enthalpy - in particular for low Damköhler number combustion - illustrates the dominant influence of the burnt gas state on the dilatation and burning mode, with a distinct impact on the scalar flux also evident. The fuel reactivity showed a clear effect on the burning mode transitions, with explicit differences in the resulting flow field. The flow conditions were analysed in terms of Damköhler and Karlovitz numbers based on chemical timescales corresponding to laminar flames and auto-ignition events. 
The thesis provides novel insights into the underlying conditions leading to combustion regime transitions by means of (i) the evolution of multi-fluid probability, (ii) interface, (iii) mean flow field, (iv) conditional velocity and (v) conditional strain statistics evaluated as a function of the Damköhler number. (vi) The combustion mode influence on the scalar transport is discussed and (vii) a tentative 3D regime diagram is provided. The data illustrate the potential of a multi-fluid delineation to quantify a wide range of burning modes of relevance to low-polluting combustion technologies.
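For orientation, the Damköhler and Karlovitz numbers used above to delineate combustion regimes are conventionally defined as timescale ratios; this is the standard textbook form, not the thesis's specific evaluation based on laminar flame and auto-ignition timescales:

```latex
Da = \frac{\tau_I}{\tau_c}, \qquad Ka = \frac{\tau_c}{\tau_\eta}
```

where $\tau_I$ is an integral flow timescale, $\tau_c$ the chemical timescale (here varied via mixture stoichiometry, fuel reactivity and excess enthalpy), and $\tau_\eta$ the Kolmogorov timescale.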
40

Varley, Lisa. "Intermolecular interactions : quantification and applications." Thesis, University of Sheffield, 2012. http://etheses.whiterose.ac.uk/2739/.

Full text
Abstract:
This thesis deals with the nature of fundamental intermolecular interactions and the ways in which they can be exploited using supramolecular chemistry. Three separate studies have been undertaken in order to explore and quantify different types of electrostatic interactions. Chapter 2 describes an investigation into the nature of hydrogen bonding interactions between charged species and well-defined neutral hosts, in order to quantify their hydrogen bonding strength on an already established scale. The importance of metal-ligand interactions in self-assembly is documented in Chapter 3, where the synthesis of functional supramolecules is described and their self-assembly in the presence of a bidentate ligand is investigated. Finally, Chapter 4 describes the use of calixarene-porphyrin conjugates in gas-sensing devices, showing how a handle on the design and synthesis of supramolecules and an understanding of their basic interactions can provide a useful application. The detailed background literature relating to the project will be described as an introduction to each chapter; this chapter provides a general introduction to the field of supramolecular chemistry and an overview of key advances that have been made since its inception.
41

Lertwatanakul, Pravit. "Quantification et determination en thai." Strasbourg 2, 1999. http://www.theses.fr/1999STR20043.

Full text
Abstract:
This thesis aims to explain the means available to the Thai speaker for carrying out the operations of quantification and determination, based on the quantitative and/or qualitative aspect of the noun on which these operations bear. To quantify a noun, the Thai speaker uses a "classifier (CL)" in the sequence "numeral + CL", which follows the noun. In Thai, the CL is a subcategory of noun that can indicate the qualitative status of the noun it quantifies. In that case it also serves to classify the noun into one of the following categories: nouns for persons, animals, or objects. When it determines the noun, it immediately follows it in the sequence "CL + demonstrative". At first sight, the classifier may seem exotic and characteristic of a geographical area limited to certain languages of Asia, Africa and the Pacific. However, our research, conducted within the framework of Antoine Culioli's "linguistics of the generalisable", allows us to say that the CL belongs to a set of phenomena extending far beyond classifier languages. The proof is that, in place of the CL, all languages have measure nouns and nouns of species or type which likewise make it possible to quantify and determine nouns. This amounts to saying that the classifier proper, even though it does not exist in French or other Indo-European languages, belongs to the system of determination and quantification of all languages. This thesis is divided into two parts. The first presents the CLs of Thai and an assessment of the state of affairs in the domain of quantification and determination; it also includes a list of CLs and their semantic compatibility with particular classes of nouns. 
The second part presents a general interpretation of the structure of the noun phrase in Thai, together with an analysis of authentic examples drawn from everyday life and from various discourse situations.
42

Gupta, Kavya. "Stability Quantification of Neural Networks." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST004.

Full text
Abstract:
Artificial neural networks are at the core of recent advances in Artificial Intelligence. One of the main challenges faced today, especially by companies like Thales designing advanced industrial systems, is to ensure the safety of new generations of products using these technologies. In 2013, a key observation showed that neural networks are sensitive to adversarial perturbations, raising serious concerns about their applicability in safety-critical environments. In recent years, publications studying the various aspects of the robustness of neural networks have grown exponentially, raising questions such as "Why do adversarial attacks occur?", "How can we make neural networks more robust to adversarial noise?", "How can we generate stronger attacks?", etc. The contributions of this thesis aim to tackle such problems. The adversarial machine learning community concentrates mainly on classification scenarios, whereas studies on regression tasks are scarce. Our contributions bridge this significant gap between adversarial machine learning and regression applications. The first contribution, in Chapter 3, proposes a white-box attacker designed to attack regression models. The presented adversarial attacker is derived from the algebraic properties of the Jacobian of the network. We show that our attacker successfully fools the neural network and measure its effectiveness in reducing the estimation performance. We present our results on various open-source and real industrial tabular datasets. Our analysis relies on the quantification of the fooling error as well as different error metrics. Another noteworthy feature of our attacker is that it allows us to optimally attack a subset of inputs, which may help to analyze the sensitivity of some specific inputs. 
We also show the effect of this attacker on spectrally normalized trained models, which are known to be more robust in handling attacks. The second contribution of this thesis (Chapter 4) presents a multivariate Lipschitz constant analysis of neural networks. The Lipschitz constant is widely used in the literature to study the intrinsic properties of neural networks, but most works perform a single-parameter analysis, which does not allow the effect of individual inputs on the output to be quantified. We propose a multivariate Lipschitz constant-based stability analysis of fully connected neural networks, allowing us to capture the influence of each input or group of inputs on the neural network's stability. Our approach relies on a suitable re-normalization of the input space, intended to perform a more precise analysis than the one provided by a global Lipschitz constant. We display the results of this analysis through a new representation designed for machine learning practitioners and safety engineers, termed a Lipschitz star. We perform experiments on various open-access tabular datasets and an actual Thales Air Mobility industrial application subject to certification requirements. The use of spectral normalization in designing a stability control loop is discussed in Chapter 5. A critical requirement is that the optimal model behave according to specified performance and stability targets while in operation, but imposing tight Lipschitz constant constraints while training the models usually leads to a reduction of their accuracy. Hence, we design an algorithm to train "stable-by-design" neural network models using our spectral normalization approach, which optimizes the model by taking into account both performance and stability targets. We focus on Small Unmanned Aerial Vehicles (UAVs). More specifically, we present a novel application of neural networks to detect elevon positioning faults in real time, allowing the remote pilot to take the necessary actions to ensure safety.
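The global Lipschitz bound that the multivariate analysis above refines can be illustrated with the classical product-of-spectral-norms estimate for a fully connected network with 1-Lipschitz activations (e.g. ReLU). This minimal NumPy sketch is a generic illustration of that well-known bound, not the thesis's multivariate method; the function name is our own:

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Crude global Lipschitz upper bound for a fully connected network
    with 1-Lipschitz activations (e.g. ReLU): the product of the
    spectral norms (largest singular values) of the weight matrices."""
    bound = 1.0
    for W in weights:
        # singular values are returned in descending order,
        # so the first one is the spectral norm of the layer
        bound *= np.linalg.svd(W, compute_uv=False)[0]
    return bound

rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 4)), rng.standard_normal((2, 8))]
print(lipschitz_upper_bound(layers))
```

This bound is typically loose, which is precisely the motivation given in the abstract for a finer, input-wise (multivariate) analysis.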
43

García, de León Pedro Lenin. "Quantification de variables conjuguées par états cohérents." Thesis, Paris Est, 2008. http://www.theses.fr/2008PEST0210/document.

Full text
Abstract:
In this work we focus on an alternative quantization method using generalized coherent states. The canonical method associates a pair of conjugate classical variables with their corresponding quantum observables, identifying their Poisson bracket with a quantum commutator. These observables, defined as self-adjoint operators acting on a particular Hilbert space, find their values in their spectral resolution and are thus linked to a projection-valued (PV) measure. But there is an obstacle to the operator definition once we impose boundaries on the spectra. This restriction, described in a theorem by W. Pauli, raises the question of the need for an alternative way of defining observables and opens the way to a new quantization protocol. Coherent state quantization proposes a definition of quantum observables that take their values through the mean value on a set of "coherent" non-orthogonal, overcomplete vectors in the Hilbert space H. These coherent states resolve the identity in H and are parametrized by a discrete parameter and a complex variable, just like their namesakes for the harmonic oscillator. This last property makes them particularly useful for "translating" classical variables into well-defined quantum operators. We have studied three particular cases where the definition of a self-adjoint operator is compromised. First, we worked on a definition of the phase operator, corresponding to the classical angle-action pair. An example of how this idea could be extended to relative phases for the SU(N) group is given. The second example is the quantization of motion in an infinite potential well; here the problematic momentum operator is properly defined. Finally, we propose a time operator, conjugate to the Hamiltonian, for a free particle, using SU(1,1)-type coherent states on Poincaré half-planes.
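The quantization protocol described in the abstract can be stated compactly. Given a family of coherent states resolving the identity, a classical observable is mapped to an operator via a Berezin–Klauder-type formula; this is the standard generic form, not the thesis's exact construction for each case:

```latex
\int_X |z\rangle\langle z|\, d\mu(z) = \mathbb{1}_{\mathcal{H}},
\qquad
f \;\longmapsto\; A_f = \int_X f(z)\, |z\rangle\langle z|\, d\mu(z),
```

with the "lower symbol" $\langle z | A_f | z \rangle$ recovering a smoothed version of the classical function $f$.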
44

Cot, Sanz Albert. "Absolute quantification in brain SPECT imaging." Doctoral thesis, Universitat Politècnica de Catalunya, 2003. http://hdl.handle.net/10803/6601.

Full text
Abstract:
Many forms of brain diseases are associated with problems in the neurotransmission systems. One approach to the assessment of such systems is the use of Single Photon Emission Computed Tomography (SPECT) brain imaging. Neurotransmission SPECT has become an important tool in neuroimaging and is today regarded as a useful method in both clinical and basic research. SPECT is able to non-invasively visualize and analyze different organs and tissues functions or properties in Nuclear Medicine.

Although visual inspection is often sufficient to assess neurotransmission imaging, quantification might improve the diagnostic accuracy of SPECT studies of the dopaminergic system. In particular, quantification of neurotransmission SPECT studies in Parkinson's Disease could help us to diagnose this illness in its early pre-clinical stages. One of the main research topics in SPECT is to achieve early, indeed preclinical, diagnosis of neurodegenerative illnesses. In this field, detailed analysis of the shapes and values of the regions of interest (ROIs) of the image is important; thus quantification is needed. Moreover, quantification allows a follow-up of the progression of disease and an assessment of the effects of potential neuroprotective treatment strategies. Therefore, the aim of this thesis is to achieve quantification of both the absolute activity values and the relative values of the reconstructed SPECT images.

Quantification is affected by the degradation of the image introduced by statistical noise, attenuation, collimator/detector response and scattering effects. Some of these degradations may be corrected by using iterative reconstruction algorithms, which thus enable a more reliable quantification. The importance of correcting degradations in reconstruction algorithms to improve quantification accuracy of brain SPECT studies has been proved.

Monte Carlo simulations are the 'gold standard' for testing reconstruction algorithms in Nuclear Medicine. We analyzed the available Monte Carlo codes and chose SimSET as a virtual phantom simulator. A new stopping criterion in SimSET was established in order to reduce the simulation time. The modified SimSET version was validated as a virtual phantom simulator which reproduces realistic projection data sets in SPECT studies.

Iterative algorithms permit modelling of the projection process, allowing for correction of spatially variant collimator response and the photon crosstalk effect between transaxial slices. Thus, our work was focused on the modelling of the collimator/detector response for the parallel and fan beam configurations using the Monte Carlo code PENELOPE. Moreover, a full 3D reconstruction with OS-EM algorithms was developed.

Finally, scattering has been recognized as one of the most significant degradation effects in SPECT quantification, and this subject remains an intensive field of research in SPECT techniques. Monte Carlo techniques appear to be the most reliable way to include this correction. The use of the modified SimSET simulator accelerates the forward projection process, although the computational burden remains a challenge for this technique.

Full 3D reconstruction combined with Monte Carlo-based scattering correction and the 3D evaluation procedure constitutes a major upgrade for obtaining reliable, absolute quantitative estimates of the reconstructed images. Once all the degrading effects were corrected, the obtained values were 95% of the theoretical values. Thus, absolute quantification was achieved.
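The iterative reconstruction family referred to above (OS-EM is an ordered-subsets acceleration of MLEM) can be sketched in a few lines. This is a generic, toy MLEM iteration on a matrix system model; it does not reproduce the thesis's 3D OS-EM with Monte Carlo-modelled projectors, and the function name is our own:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-Likelihood Expectation-Maximization for emission
    tomography: A is the system (projection) matrix, y the measured
    counts. Returns the estimated activity distribution."""
    x = np.ones(A.shape[1])          # flat initial estimate
    sens = A.sum(axis=0)             # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                 # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# toy 2-pixel, 2-bin example with an identity system model
A = np.eye(2)
y = np.array([3.0, 5.0])
print(mlem(A, y))  # converges to [3. 5.] for an identity system
```

In practice the forward and back projections embed the collimator/detector response and scatter model, which is exactly where the Monte Carlo modelling discussed in the abstract enters.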
45

Jäger, Robert. "Quantification and localization of molecular hydrophobicity." [S.l. : s.n.], 2000. http://deposit.ddb.de/cgi-bin/dokserv?idn=960539999.

Full text
46

Crave, Alain. "Quantification de l'organisation des réseaux hydrographiques." Phd thesis, Université Rennes 1, 1995. http://tel.archives-ouvertes.fr/tel-00620071.

Full text
Abstract:
Drainage networks play a leading role in regulating the fluxes of water and material at the surface of the continents. As sites of intense erosion, they counterbalance endogenous tectonic inputs by carving the landscape into a network of valleys. Their structure and their degree of extension and branching are key parameters in the evolution of relief over long timescales (10^3 to 10^7 years), and in the propagation of flood waves over short timescales (one hour to a few days). Understanding their evolution is of interest both from a fundamental point of view, for determining the fundamental physical processes of material transfer at scales ranging from the continent to the hillslope, and with a view to applications in hydrological forecasting. The work presented here seeks, on the one hand, to define a possible organisational scheme and, on the other, to define a simple model of the extension and evolution of drainage networks and relief. Statistical analysis of the geometrical properties of two Breton drainage networks, the Vilaine and the Blavet, reveals, among other things, a random distribution of confluences within the hydrographic system and a homogeneous density over four orders of magnitude in scale. These two observations testify to a random geometry and orient the modelling towards a mixed stochastic-deterministic model. The proposed model translates the physics of the main erosion processes into simple rules of displacement and action for walkers launched at random onto a topography. Very flexible in use, this tool makes it possible to study the evolution of relief and drainage axes as a function of the relative importance of the fundamental processes of material displacement and transport by diffusion and advection. 
It thus becomes possible to simulate the growth of perturbations according to the predominance of a particular process.
47

Michel, Jean-Philippe. "Quantification conformément équivariante des fibrés supercotangents." Phd thesis, Université de la Méditerranée - Aix-Marseille II, 2009. http://tel.archives-ouvertes.fr/tel-00425576.

Full text
Abstract:
This thesis comprises two parts.
1. Conformally equivariant quantization of supercotangent bundles.
By quantization of the supercotangent bundle of a manifold M, we mean a linear isomorphism between the space of superfunctions polynomial in the fibres and the space of spinor differential operators on M. We show that there exists a unique quantization for the supercotangent bundles of conformally flat manifolds (M,g) that is equivariant under the action of the conformal transformations of M.
2. On the projective geometry of the supercircle: a unified construction of the super cross-ratio and Schwarzian derivative.
For three supergroups acting on the supercircle, we establish a correspondence between the supergroup, the characteristic invariants of its action and the associated 1-cocycle, thereby defining three geometries on the supercircle. The invariant of projective geometry is the super cross-ratio, its associated 1-cocycle being the Schwarzian derivative.
48

Williams, Sarah J., and mikewood@deakin edu au. "The Definition and quantification of assets." Deakin University, 1995. http://tux.lib.deakin.edu.au./adt-VDU/public/adt-VDU20050915.142446.

Full text
Abstract:
The word ‘asset’ was originally taken into the English language, from the Latin ‘ad satis’ and French ‘asez’, as a term used at law meaning sufficient estate or effects to discharge debts. It later came to be used in the sense of property available for the payment of debts. Assets were understood to be property (objects owned and rights of ownership) that could be exchanged for cash. The importance of factual knowledge of the money equivalents of property and debts, in managing mercantile affairs, was emphasised in accounting manuals during the eighteenth and nineteenth centuries. The rights of investors and creditors to factual up-to-date information about the financial state of affairs of companies, given the advent of limited liability, underscored the early company legislation that required the preparation and auditing of statements of property and debts. During the latter part of the nineteenth century the emphasis in accounting moved away from assets as exchangeable property to assets as deferred costs. Expectations took the place of observables. The abstract (expectational) notion of assets as ‘future economic benefits’ was embraced by accountants in the absence of rigorous definitions of the elements and functions of dated statements of financial position and performance. Assets are quantified financially by a heterogeneous mass of potentially inconsistent rules that, by and large, have no regard for the empirical nature of measurement. Consequently, accountants have failed to provide the community with up-to-date factual information about the financial state of affairs and performance of business entities - and, hence, with an informative basis for financial action.
49

Ho, Christopher Sui-keung. "Mesostructure quantification of fibre-reinforced composites." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0017/MQ49722.pdf.

Full text
50

Carrasquel, Isha. "STRUCTURE-PROPERTY QUANTIFICATION RELATED TO CRASHWORTHINESS." MSSTATE, 2008. http://sun.library.msstate.edu/ETD-db/theses/available/etd-07102008-140429/.

Full text
Abstract:
The objective of this study is to characterize critical component structure-properties on a Dodge Neon for material response refinement in crashworthiness simulations. Crashworthiness simulations using full-scale finite element (FE) vehicle models are an important part of vehicle design. According to the National Highway Traffic Safety Administration (NHTSA), there were over six million vehicle crashes in the United States during 2004, claiming the lives of more than 40,000 people. Crashworthiness simulations were conducted on the NHTSA's detailed FE model of a 1996 Plymouth/Dodge Neon for different impact crash scenarios. The top ten energy-absorbing components of the vehicle were determined. Material was extracted from the as-built vehicle and microstructural analyses were conducted. Tension tests at different temperatures and strain rates were performed, as well as microhardness tests. Different microstructural spatial clustering and mechanical properties were found for the various vehicle components. A microstructure-based plasticity model was used to predict the material response of the front bumper.