
Doctoral dissertations on the topic "Parametric modelling"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the 50 best doctoral dissertations for your research on the topic "Parametric modelling".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online whenever it is available in the metadata.

Browse doctoral dissertations from many different disciplines and compile your bibliography accordingly.

1

Ceran, Murat. "Parametric human spine modelling". Thesis, Loughborough University, 2006. https://dspace.lboro.ac.uk/2134/7958.

Abstract:
3-D computational modelling of the human spine provides a sophisticated and cost-effective medium for bioengineers, researchers, and ergonomics designers in order to study the biomechanical behaviour of the human spine under different loading conditions. Developing a generic parametric computational human spine model to be employed in biomechanical modelling introduces a considerable potential to reduce the complexity of implementing and amending the intricate spinal geometry. The main objective of this research is to develop a 3-D parametric human spine model generation framework based on a command file system, by which the parameters of each vertebra are read from the database system, and then modelled within commercial 3-D CAD software. A novel data acquisition and generation system was developed as a part of the framework for determining the unknown vertebral dimensions, depending on the correlations between the parameters estimated from existing anthropometrical studies in the literature. The data acquisition system embodies a predictive methodology that comprehends the relations between the features of the vertebrae by employing statistical and geometrical techniques. Relations amongst vertebral parameters such as golden ratio were investigated and successfully implemented into the algorithms. The validation of the framework was carried out by comparing the developed 3-D computational human spine models against various real life human spine data, where good agreements were achieved. The constructed versatile framework possesses the capability to be utilised as a basis for quickly and effectively developing biomechanical models of the human spine such as finite element models.
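
The command-file workflow described above can be pictured with a small sketch. Everything below (the vertebra name, the dimension names, the correlation ratios and the command-file format) is a hypothetical illustration rather than the thesis's actual data-acquisition system; only the idea of completing unknown vertebral parameters from correlations such as the golden ratio and writing them out for a CAD package is taken from the abstract.

```python
# Minimal sketch of a parameter-driven vertebra generator (all values hypothetical).
GOLDEN_RATIO = 1.618

# Known dimensions (mm) read from a measurement database; names are illustrative.
known = {"L3": {"body_width": 48.0, "body_depth": 34.0}}

def complete_vertebra(dims):
    """Fill in unknown dimensions from assumed correlation rules."""
    d = dict(dims)
    # Assumed correlation: body height related to body depth via the golden ratio.
    d.setdefault("body_height", d["body_depth"] / GOLDEN_RATIO)
    # Assumed linear correlation between canal width and body width.
    d.setdefault("canal_width", 0.55 * d["body_width"])
    return d

def write_command_file(name, dims, path):
    # Emit a simple "command file" that a CAD macro could consume.
    with open(path, "w") as f:
        f.write(f"VERTEBRA {name}\n")
        for key, value in sorted(dims.items()):
            f.write(f"SET {key} {value:.2f}\n")

if __name__ == "__main__":
    for name, dims in known.items():
        write_command_file(name, complete_vertebra(dims), f"{name}.cmd")
```
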
2

Johansson, Anders Torbjörn. "Parametric modelling of cetacean calls". Thesis, University of Southampton, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.409609.

3

Shannon, Sean Matthew. "Probabilistic acoustic modelling for parametric speech synthesis". Thesis, University of Cambridge, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708415.

4

Abdelaal, Medhat Mohamed Ahmed. "Modelling the fisheries of Lake Manzala, Egypt, using parametric and non-parametric statistical methods". Thesis, University of Plymouth, 1999. http://hdl.handle.net/10026.1/2572.

Abstract:
Much attention has been given to the economic aspects of the fisheries in Egypt, while building a statistical or mathematical model for fish production has received little attention. This study is devoted to a comprehensive assessment of Lake Manzala fisheries: past, present and future. Lake Manzala is one of the main fisheries resources in Egypt, and there is evidence that the fisheries have been over-exploited in recent years. The study objectives were to determine the factors that affect fish catches by individual vessels, to compare parametric and non-parametric models of the fish catches, and to produce a mathematical model of stock behaviour which can be used to suggest policies to manage the Lake Manzala fishery. A new method of estimating the carrying capacity of the lake and the intrinsic growth rate of Tilapia and its four species has been developed. Simulation had to be used to obtain error estimates of the biomass parameter estimates under the new method. Three catch strategies have been investigated and assessed, with discounted utility of future yields. Two ways of modelling individual vessel catches in relation to their effort characteristics, a parametric and a non-parametric analysis, have been investigated. Using a generalised additive model gave an improved fit to the survey data compared with the parametric analysis. It also gave a lower allowable fleet size, which leads to a more conservative management policy. A simulation approach was used to investigate the uncertainty in the predicted catches and stock levels, and to give insight into the risks associated with various levels of control. There was no evidence that a management strategy which aimed to fish at maximum sustainable yield would put the stock at risk.
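
The stock-behaviour modelling mentioned above can be illustrated with a generic logistic (Schaefer-type) surplus-production simulation under a constant-catch strategy; the growth rate, carrying capacity and catch values below are invented for illustration and are not taken from the thesis.

```python
import numpy as np

def simulate_stock(r, K, B0, catches):
    """Logistic surplus-production dynamics: B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]."""
    B = np.empty(len(catches) + 1)
    B[0] = B0
    for t, C in enumerate(catches):
        B[t + 1] = max(B[t] + r * B[t] * (1.0 - B[t] / K) - C, 0.0)
    return B

# Illustrative values only: intrinsic growth rate, carrying capacity, initial biomass.
r, K, B0 = 0.4, 100_000.0, 60_000.0
msy = r * K / 4.0                      # maximum sustainable yield of the logistic model
biomass = simulate_stock(r, K, B0, catches=[msy] * 30)
print(f"MSY = {msy:.0f} t/yr, biomass after 30 years = {biomass[-1]:.0f} t")
```
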
5

Simpson, Andrew G. "Parametric modelling of energy consumption in road vehicles /". [St. Lucia, Qld.], 2005. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe.pdf.

6

Kolossiatis, Michalis. "Modelling via normalisation for parametric and nonparametric inference". Thesis, University of Warwick, 2009. http://wrap.warwick.ac.uk/2769/.

Abstract:
Bayesian nonparametric modelling has recently attracted a lot of attention, mainly due to the advancement of various simulation techniques, and especially Markov chain Monte Carlo (MCMC) methods. In this thesis I propose some Bayesian nonparametric models for grouped data, which make use of dependent random probability measures. These probability measures are constructed by normalising infinitely divisible probability measures and exhibit nice theoretical properties. Implementation of these models is also easy, using mainly MCMC methods. An additional step in these algorithms is also proposed, in order to improve mixing. The proposed models are applied to both simulated and real-life data, and the posterior inference for the parameters of interest is investigated, as well as the effect of the corresponding simulation algorithms. A new, n-dimensional distribution on the unit simplex, that contains many known distributions as special cases, is also proposed. The univariate version of this distribution is used as the underlying distribution for modelling binomial probabilities. Using simulated and real data, it is shown that this proposed model is particularly successful in modelling overdispersed count data.
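
As a concrete toy example of a random probability measure of the kind underlying such models, the sketch below draws truncated stick-breaking weights of a Dirichlet process, the simplest normalised random measure; it is a generic illustration, not the dependent measures proposed in the thesis.

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking weights of a Dirichlet process with concentration alpha."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * remaining

rng = np.random.default_rng(0)
weights = stick_breaking(alpha=2.0, n_atoms=50, rng=rng)
atoms = rng.normal(size=50)            # atom locations drawn from a N(0,1) base measure

top = np.argsort(weights)[::-1][:3]
print("total truncated mass:", weights.sum())
print("largest weights:", weights[top], "at atoms:", atoms[top])
```
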
7

Mineault, Patrick. "Parametric modelling of visual cortex at multiple scales". Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=123020.

Abstract:
The visual system is confronted with the daunting task of extracting behaviourally relevant visual information from noisy and ambiguous patterns of luminance falling on the retina. It solves this problem through a hierarchical architecture, in which the visual stimulus is iteratively re-encoded into ever more abstract representations which can drive behaviour. This thesis explores the question of how the computations performed by neurons in the visual hierarchy create behaviourally relevant representations. This question requires probing the visual system at multiple scales: computation is the role of single neurons and ensembles of neurons; representation is the function of multiple neurons within an area; hierarchical processing is an emergent process which involves multiple areas; and behaviour is defined at the full scale of the system, the psychophysical observer. To study visual processing at multiple scales, I propose to develop and apply parametric modelling methods in the context of systems identification. Systems identification seeks to establish the deterministic relationship between the input and the output of a system. It has proven particularly useful in the study of visual processing, where the input to the system can be easily controlled via sensory stimulation. Parametric modelling, built on the theory of Generalized Linear Models (GLMs), furnishes a common framework to analyze signals with different statistical properties which occur in the analysis of neural systems: spike trains, multi-unit activity, local field potentials and psychophysical decisions.

In Chapter 2, I develop the parametric modelling framework which is used throughout this thesis in the context of psychophysical classification images. Results show that parametric modelling can infer a psychophysical observer's decision process with fewer trials than previously proposed methods. This allows the exploration of more complex, and potentially more informative, models of decision processes while retaining statistical tractability.

In Chapter 3, I extend and apply this framework to the analysis of visual representations at the level of neuronal ensembles in area V4. The results show that it is possible to infer, from multi-unit activity and local field potential (LFP) signals, the representation of visual space at a fine-grained scale over several millimeters of cortex. Analysis of the estimated visual representations reveals that LFPs reflect both local sources of input and global biases in visual representation. These results resolve a persistent puzzle in the literature regarding the spatial reach of the local field potential.

In Chapter 4, I extend and apply the same framework to the analysis of single-neuron responses in area MST of the dorsal visual stream. Results reveal that MST responses can be explained by the integration of their afferent input from area MT, provided that this integration is nonlinear. Estimated models reveal long suspected, but previously unconfirmed, receptive field organization in MST neurons that allows them to respond to complex optic flow patterns. This receptive field organization and nonlinear integration allows more accurate estimation of the velocity of approaching objects from the population of MST neurons, thus revealing their possible functional role in vergence control and object motion estimation.

Put together, these results demonstrate that with powerful statistical methods, it is possible to infer the nature of visual representations at multiple scales.
In the discussion, I show how these results may be expanded to gain a better understanding of hierarchical visual processing at large.
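
A bare-bones version of the GLM-based systems-identification idea applied to psychophysical decisions might look like the sketch below: binary responses are fitted as a Bernoulli GLM of the stimulus, so the estimated weights play the role of a parametric classification image. The data are synthetic and the "observer template" is invented; this is an illustration of the general approach, not the models developed in the thesis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_trials, n_pixels = 2000, 16

# Synthetic stimuli and a hypothetical "template" the observer uses to decide.
stimuli = rng.normal(size=(n_trials, n_pixels))
template = np.linspace(-1.0, 1.0, n_pixels)
p_yes = 1.0 / (1.0 + np.exp(-stimuli @ template))
decisions = rng.binomial(1, p_yes)

# Bernoulli GLM (logistic link): the fitted weights estimate the decision template,
# i.e. a parametric analogue of a classification image.
X = sm.add_constant(stimuli)
fit = sm.GLM(decisions, X, family=sm.families.Binomial()).fit()
print(fit.params[1:5])   # recovered template weights for the first few "pixels"
```
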
8

Lyons, Sean Christopher. "Numerical modelling of a nanosecond optical parametric oscillator". Thesis, University of Strathclyde, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.248729.

9

Pericleous, Paraskevi. "Parametric joint modelling for longitudinal and survival data". Thesis, University of East Anglia, 2016. https://ueaeprints.uea.ac.uk/59673/.

Abstract:
Joint modelling is the simultaneous modelling of longitudinal and survival data, while taking into account a possible association between them. A common approach in joint modelling studies is to assume that the repeated measurements follow a linear mixed effects model and the survival data are modelled using a Cox proportional hazards model. The Cox model, however, requires a strong proportionality assumption, which seems to be violated quite often. We therefore propose the use of parametric survival models. Additionally, the joint modelling literature mainly deals with right-censoring only and does not consider left-truncation, which can cause bias. The joint model proposed here considers left-truncation and right-censoring.
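
To make the left-truncation and right-censoring adjustments concrete, here is a minimal sketch of a parametric (Weibull) survival likelihood that conditions on the delayed-entry time; the data set and parameterisation are illustrative only and do not reproduce the joint model of the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, entry, time, event):
    """Weibull log-likelihood with right-censoring and left-truncation (delayed entry)."""
    k, lam = np.exp(params)                                     # keep shape and scale positive
    log_h = np.log(k / lam) + (k - 1.0) * np.log(time / lam)    # log hazard at observed time
    ll = event * log_h - (time / lam) ** k + (entry / lam) ** k # -H(t) + H(entry)
    return -np.sum(ll)

# Tiny illustrative data set: entry (truncation) times, observed times, event indicators.
entry = np.array([0.0, 1.0, 2.0, 0.5, 1.5])
time  = np.array([3.0, 4.0, 5.0, 2.5, 6.0])
event = np.array([1,   0,   1,   1,   0])

res = minimize(neg_log_lik, x0=np.zeros(2), args=(entry, time, event))
print("estimated shape, scale:", np.exp(res.x))
```
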
10

Motta, Enrico. "Reusable components for knowledge modelling". Thesis, Open University, 1998. http://oro.open.ac.uk/57879/.

Abstract:
In this work I illustrate an approach to the development of a library of problem solving components for knowledge modelling. This approach is based on an epistemological modelling framework, the Task/Method/Domain/Application (TMDA) model, and on a principled methodology, which provide an integrated view of both library construction and application development by reuse. The starting point of the proposed approach is given by a task ontology. This formalizes a conceptual viewpoint over a class of problems, thus providing a task-specific framework, which can be used to drive the construction of a task model through a process of model-based knowledge acquisition. The definitions in the task ontology provide the initial elements of a task-specific library of problem solving components. In order to move from problem specification to problem solving, a generic, i.e. task-independent, model of problem solving as search is introduced, and instantiated in terms of the concepts in the relevant task ontology, say T. The result is a task-specific, but method-independent, problem solving model. This generic problem solving model provides the foundation from which alternative problem solving methods for a class of tasks can be defined. Specifically, the generic problem solving model provides i) a highly generic method ontology, say M; ii) a set of generic building blocks (generic tasks), which can be used to construct task-specific problem solving methods; and iii) an initial problem solving method, which can be characterized as the most generic problem solving method, which subscribes to M and is applicable to T. More specific problem solving methods can then be (re-)constructed from the generic problem solving model through a process of method/ontology specialization and method-to-task application. The resulting library of reusable components enjoys a clear theoretical basis and provides robust support for reuse. In the thesis I illustrate the approach in the area of parametric design.
11

Wang, Shwi-Chun. "The fairing of parametric curves and surfaces". Thesis, Cranfield University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302739.

12

Stetson, Scott W. "PSICE modelling and parametric study of microbolometer thermal detectors". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA396272.

13

Hempel, Arne-Jens, and Steffen F. Bocklisch. "Parametric Fuzzy Modelling Framework for Complex Data-Inherent Structures". Universitätsbibliothek Chemnitz, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200901487.

Abstract:
The present article is dedicated to fuzzy modelling of data-inherent structures. In particular, two main points are dealt with: the introduction of a fuzzy modelling framework and the elaboration of an automated, data-driven design strategy to model complex data-inherent structures within this framework. The innovation concerning the modelling framework lies in the fact that it is consistently built around a single, generic type of parametric and convex membership function. In the first part of the article this essential building block will be defined and its assets and shortcomings will be discussed. The novelty regarding the automated, data-driven design strategy consists in the conservation of the modelling framework when modelling complex (nonconvex) data-inherent structures. Instead of applying current clustering methods, the design strategy uses the inverse of the data structure in order to create a fuzzy model solely based on convex membership functions. Throughout the article the whole model design process is illustrated, section by section, with the help of an academic example.
14

Barker, Peter Neil. "Semi-parametric modelling of converging hazards in survival analysis". Thesis, Lancaster University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.421828.

15

Kehoe, Dennis Frederick. "The parametric modelling of quality development within manufacturing organisations". Thesis, University of Liverpool, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321145.

16

Donnelly, Peter Gerard. "Adaptive parametric modelling of narrowband signals for sonar applications". Thesis, University of Ulster, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357582.

17

Timur, Mert. "A parametric modelling tool for high speed displacement monohulls". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100091.

Abstract:
Thesis: S.M. in Naval Architecture and Marine Engineering, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2015.
In ship design projects, it is of utmost importance to investigate a wide range of options during the concept design phase in order to determine which one best suits the requirements. At the same time, keeping the concept design phase short in order to be competitive in the market is just as important. The chances of a shipyard winning a contract would surely increase with a proposed design whose performance is demonstrated through a systematic evaluation of alternative solutions. However, the number of design alternatives is inversely proportional to the time that can be spent on the concept design of each alternative. The detailed evaluations at this stage can only be performed with CFD (Computational Fluid Dynamics) and FE (Finite Element) tools, and both require a complete representation of the ship hull geometry. A faster hull form generation tool would therefore enable the designer to evaluate more options. Rapid geometry generation is possible through fully parametric modeling. Fully parametric hull modeling is the practice of creating the entire hull shape definition only from form parameters, without the need for offset data or a predefined lines plan. In this thesis a fully parametric modeling tool, PHull, is developed using the Java programming language for rapid geometry generation of high speed displacement monohulls, to be used in a hydrodynamic optimization process. The results from the validation cases, FFG-7 and ATHENA Model 5365, are presented.
18

Ortolani, Chiara. "Parametric modelling of freight networks: operational and environmental costs". Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3427402.

Abstract:
The increased awareness of the environmental and social impacts caused by logistics and distribution has brought a deeper focus on this theme from both industry and the literature. Research in this field has been developed in order to take into consideration the so-called "green effects" while studying distribution networks. It is commonly acknowledged that transportation activities generate both operational and environmental costs. Operational costs are typically well defined and computed in logistics providers' rates. Nevertheless, there is a full set of impacts (such as gas and particle emissions, noise, congestion, and accidents) generating costs that are not reflected in transport prices: these are the environmental costs. As consumers' environmental consciousness grows and legislative bodies push for stricter regulations, industry increasingly needs to estimate overall transportation costs correctly. Generally, companies which work to reduce their environmental impacts focus on distribution, as it is commonly recognized as one of the most polluting activities. As a consequence, they act on three subsequent levels: optimization of the existing networks and flows; optimization of the modes of transport; and increase of the efficiency of routes and journeys. Some of the most widespread actions meant to decrease transport environmental impacts consist in minimizing empty running of trucks, encouraging intermodal distribution, and running more efficient vehicles: all measures that, besides abating pollution and congestion costs, have the substantial benefit of pulling down the overall costs generated by companies. However, only a few literature contributions give quantitative models to estimate external transport costs. Most of the current studies develop qualitative and multi-objective analyses, trying to define and estimate external impacts without giving any proper cost values. As a starting point of this work I present an extensive literature analysis, which shows how dispersed the data regarding full transport costs are. Starting from these data, I elaborated new analytical functions which overcome the limits of the existing formulations and which allow the calculation of both internal and external transport costs for freight distribution networks. These new functions can effectively estimate the total transport cost: their reliability, as well as their effectiveness, are proved through application to real industrial cases. The developed functions then represent the analytical input of an innovative calculation framework. The new framework, which I realized as the main objective of my research, permits a reliable and straightforward calculation of the overall transport cost (both internal and external) for any road distribution network. While requiring only an extremely small amount of input data, the model calculates the total transportation cost by considering specific distribution constraints and variables. Both effective operational applicability and reliable results are achieved. The application of the new model to real industrial cases shows that the framework gives reliable outputs, as the calculated internal costs are comparable to the costs paid by the companies to their forwarders. Nevertheless, the overall costs (considering the sum of internal and external contributions) exceed the actual, operational costs.

This immediately highlights the fact that there are certain unpaid amounts which are linked to the presence of external costs. External costs have big impacts, with a weight which is on average 18% of the total cost. Being so significant, such costs cannot be ignored. On the contrary, finding strategies to minimize them becomes more and more important. As these costs are influenced by a considerable number of factors, it is necessary to identify which values to act on in order to effectively minimize the external cost. Through a sensitivity analysis it has been possible to highlight the most meaningful factors affecting both internal and external cost values: the most significant elements are the vehicle saturation and the distance travelled in urban peak conditions. These will be the variables to act on when developing a cost minimization strategy.
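
A toy version of a total-cost function of the kind described above, with an internal component plus an external component driven by vehicle saturation and the distance travelled in congested urban conditions, is sketched below; the unit rates and the functional form are invented placeholders, not values from the thesis.

```python
def transport_cost(distance_km, urban_peak_km, saturation,
                   internal_rate=1.10, external_urban_rate=0.45, external_other_rate=0.12):
    """Illustrative total transport cost (EUR) for one trip.

    Internal cost scales with distance; external cost is higher on congested urban
    kilometres and is spread over a better-utilised load when saturation is high.
    All rates are hypothetical placeholders, not values estimated in the thesis.
    """
    other_km = distance_km - urban_peak_km
    internal = internal_rate * distance_km
    external = (external_urban_rate * urban_peak_km
                + external_other_rate * other_km) / max(saturation, 0.1)
    return internal + external

cost = transport_cost(distance_km=300.0, urban_peak_km=40.0, saturation=0.8)
print(f"total cost: {cost:.2f} EUR")
```
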
19

Sewell, David Rogers. "A parametric framework for computational modelling of the auditory periphery". Thesis, University of Leicester, 1998. http://hdl.handle.net/2381/30232.

Abstract:
This thesis offers a critical review of past and present techniques in the field of modelling the auditory periphery whilst concurrently describing the evolution of a novel computer based demonstration platform. Modelling techniques used to simulate the progression of sound from the free field to the auditory nerve are reviewed and the advantages afforded by the parametric approach are argued. The underlying philosophy of the modelling framework is to provide a transparent model, in which components are mapped directly to related components in the real system, that is readily extensible in the light of new data, and provides a re-usable platform on which to test theories of audition. The framework provides models of the major 'stages' in the auditory periphery. The acoustic meatus can be defined using any known cross sectional area variation. The tympanum can be ascribed regional properties and the middle ear model, within the framework, allows investigation into the effects of the air cavities and the acoustic reflex. The geometrical relationships between the physiological structures within the organ of Corti are specifically defined throughout a cochlea model and incorporated to a coupled model of the basilar membrane and scala fluids. The framework provides a platform for investigation of theories of active mechanics and includes models of stereocilia deflection and adaptation, as well as simple compartmental models of ion transfer surrounding the outer hair cells (OHC), and fast motile OHC responses. As simulation takes place in the time domain, investigations into cochlear transduction are not limited to single sinusoidal input, thereby enabling the transduction of speech and time varying cochlea phenomena to be addressed.
20

Kim, Minjoo. "Three essays in semi-parametric modelling of time-varying distribution". Thesis, University of Leeds, 2011. http://etheses.whiterose.ac.uk/1916/.

Abstract:
During the last century we have been frustrated by the number of economic crises which trigger extreme uncertainty in the global economic system. In such circumstances, economic agents are sensitive to uncertainty about inflation as well as about asset values. Hence, modern finance and monetary economics emphasise that risk modelling of asset values and inflation is a key input to financial theory and monetary policy. The risk is completely described by the distribution, which is verified to be time-varying and non-normal. Although various parametric and non-parametric approaches have been developed to model the time-varying nature and the non-normality, they still suffer from intrinsic limitations. This study proposes dynamic modelling of the non-parametric distribution (the Functional Autoregressive Model (FAR) and spatial distribution analysis) in order to overcome these limitations. Firstly, we apply FAR to Value-at-Risk analysis. It forecasts an intraday return density function by the functional autoregressive process and calculates a daily Value-at-Risk using the Normal Inverse Gaussian distribution. It reduces economic cost and improves coverage ability in the Value-at-Risk analysis. Secondly, we apply FAR to forecasting the cross-sectional distribution of sectoral inflation rates, which holds the information on the heterogeneous variation across sectors. As a result, it improves aggregate inflation rate forecasting. Further, the heterogeneous variation is utilised for constructing the uncertainty band of the aggregate inflation forecast, like the fan chart of the Bank of England. Thirdly, we apply the spatial distribution analysis to rank investment strategies by comparing their time-aggregated utilities over the investment horizon. To this end, we use a spatial dominance test. Since a classical stochastic dominance approach considers only the return distribution at the terminal time point of the investment horizon, it cannot properly evaluate risk that breaks out, exogenously or endogenously, in the middle of the investment horizon. However, the proposed spatial dominance approach fully considers the interim risk in evaluating alternative investment strategies.
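
The Value-at-Risk step described in the first essay, fitting a Normal Inverse Gaussian distribution to returns and reading off a quantile, can be sketched in a few lines with SciPy; the returns below are simulated and the functional autoregressive forecasting stage is omitted, so this is only a sketch of the final distributional step.

```python
import numpy as np
from scipy.stats import norminvgauss

# Simulated daily returns drawn from a Normal Inverse Gaussian law (parameters are arbitrary).
returns = norminvgauss.rvs(a=1.5, b=-0.1, loc=0.0005, scale=0.01, size=2500, random_state=42)

# Refit the NIG distribution to the sample and read off a one-day 99% Value-at-Risk.
a, b, loc, scale = norminvgauss.fit(returns)
var_99 = -norminvgauss.ppf(0.01, a, b, loc=loc, scale=scale)
print(f"1-day 99% VaR: {var_99:.4f}")
```
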
21

Yassin, Nihad Jaro. "Application of parametric and solid modelling techniques to human body simulations". Thesis, Heriot-Watt University, 1992. http://hdl.handle.net/10399/1384.

22

Palipana, Aruna Susantha. "CFD modelling of natural gas combustion in spark ignited engines". Thesis, Loughborough University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.327653.

23

Grosfils, Valérie. "Modelling and parametric estimation of simulated moving bed chromatographic processes (SMB)". Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210313.

Abstract:
Simulated moving bed chromatography (the SMB process) is a separation technique that is well mastered in certain traditional sectors such as the separation of sugars and hydrocarbons. However, its application to the separation of high added-value compounds in the pharmaceutical industry raises new problems related to the nature of the products to be separated and to the stricter requirements regarding purity and production quantities. The main open problems include the determination of optimal operating conditions, the design of robust control structures, and the development of a supervision system allowing the detection and localisation of operating degradations. These tasks require a mathematical model of the process as well as a parametric estimation methodology. The study and development of mathematical models for this type of plant, together with the estimation of the model parameters from experimental measurements, are precisely the subject of this doctoral work.

The mathematical models describing SMB processes consist of the mass balances of the compounds to be separated. They are distributed-parameter models (described by partial differential equations). Some exhibit hybrid dynamic behaviour (that is, involving both continuous-time dynamics and discrete events). A few models have been developed in the literature. The task is to select those that appear most interesting in terms of computation time, efficiency and the number of parameters to be determined. In addition, new model structures are also proposed in order to improve the accuracy / computation-time trade-off.

These models generally contain some unknown parameters. They consist either of physical quantities that are poorly defined from the basic data, or of fictitious parameters introduced as a result of simplifying assumptions and encompassing, on their own, a whole set of phenomena. The aim is to devise a systematic procedure for estimating these parameters that requires as few experiments as possible and a low computation time. The parameter values are estimated, from real measurements, using a procedure that minimises a cost function measuring the gap between the quantities estimated by the model and the measurements. The sensitivity of the model to deviations in the parameters, as well as the identifiability of the model (the possibility of determining the model parameters unambiguously) on the basis of measurements under normal operation, are studied. This provides an additional comparison criterion between the different models and also makes it possible to determine the optimal experimental conditions (choice of experiment type, choice of input signals, choice of the number and position of the measurement points, etc.) under which the measurements used for the parametric estimation should be collected. In addition, the parameter estimation errors and the simulation errors are assessed. The chosen procedure is then validated on experimental data collected from a pilot plant at the Max-Planck-Institut für Dynamik komplexer technischer Systeme (Magdeburg, Germany).
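
The parameter-estimation procedure described above, minimising a cost function that measures the gap between model predictions and measurements, can be illustrated with a generic least-squares sketch; the "process model" here is a simple exponential placeholder, not the chromatographic SMB model of the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

def process_model(t, params):
    """Placeholder process model: an exponential breakthrough curve with two parameters."""
    k, tau = params
    return 1.0 - np.exp(-k * np.maximum(t - tau, 0.0))

def residuals(params, t, measured):
    return process_model(t, params) - measured

# Synthetic "measurements" generated from known parameters plus noise.
t = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(0)
measured = process_model(t, (0.8, 2.0)) + rng.normal(scale=0.02, size=t.size)

fit = least_squares(residuals, x0=[0.5, 1.0], args=(t, measured))
print("estimated parameters:", fit.x)
```
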


Doctorate in Engineering Sciences

24

Binder, Harald. "Flexible semi- and non-parametric modelling and prognosis for discrete outcomes". Berlin Logos-Verl, 2006. http://deposit.ddb.de/cgi-bin/dokserv?id=2786084&prov=M&dok_var=1&dok_ext=htm.

25

Richardson, Julie K. "Parametric modelling for linear system identification and chaotic system noise reduction". Thesis, University of Strathclyde, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405388.

26

Smith, Michael. "Non-parametric workspace modelling for mobile robots using push broom lasers". Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:50224eb9-73e8-4c8a-b8c5-18360d11e21b.

Abstract:
This thesis is about the intelligent compression of large 3D point cloud datasets. The non-parametric method that we describe simultaneously generates a continuous representation of the workspace surfaces from discrete laser samples and decimates the dataset, retaining only locally salient samples. Our framework attains decimation factors in excess of two orders of magnitude without significant degradation in fidelity. The work presented here has a specific focus on gathering and processing laser measurements taken from a moving platform in outdoor workspaces. We introduce a somewhat unusual parameterisation of the problem and look to Gaussian Processes as the fundamental machinery in our processing pipeline. Our system compresses laser data in a fashion that is naturally sympathetic to the underlying structure and complexity of the workspace. In geometrically complex areas, compression is lower than that in geometrically bland areas. We focus on this property in detail and it leads us well beyond a simple application of non-parametric techniques. Indeed, towards the end of the thesis we develop a non-stationary GP framework whereby our regression model adapts to the local workspace complexity. Throughout we construct our algorithms so that they may be efficiently implemented. In addition, we present a detailed analysis of the proposed system and investigate model parameters, metric errors and data compression rates. Finally, we note that this work is predicated on a substantial amount of robotics engineering which has allowed us to produce a high quality, peer reviewed, dataset - the first of its kind.
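
A one-dimensional toy version of the idea, regressing workspace geometry on a heavily decimated subset of laser samples with a Gaussian Process and checking the reconstruction error, could look like the sketch below; it uses scikit-learn's stationary GP, whereas the thesis develops a non-stationary formulation and works with real 3D point clouds.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 2000)[:, None]          # dense "laser" samples along one axis
y = np.sin(x[:, 0]) + 0.05 * rng.normal(size=x.shape[0])

# Decimate: keep a small subset of the samples and regress the surface from them.
keep = rng.choice(x.shape[0], size=40, replace=False)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(0.01))
gp.fit(x[keep], y[keep])

y_rec = gp.predict(x)
print("compression factor: %.0fx, RMS error: %.3f"
      % (x.shape[0] / keep.size, np.sqrt(np.mean((y_rec - y) ** 2))))
```
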
27

Norbury, A. A. W. "Parametric studies based mechanical and thermal modelling of spot welded joints". Thesis, Liverpool John Moores University, 2017. http://researchonline.ljmu.ac.uk/5848/.

Abstract:
This work has focused on formulating an experimental/numerical framework for the investigation of spot weld properties and performance. An inverse temperature measurement approach has been established to predict the thermal history of spot welded joints using remote thermocouples. This method incorporates the experimental data into an Artificial Neural Network (ANN) to predict cooling curves of the HAZ. Advanced modelling programs have been developed to simulate spot welded joints and thermocouples. The programs are used to investigate the effects of the key dimensional or material parameters on the mechanical or thermal response of spot welded joints of steels and of different thermocouple joints relevant to their applications. Graphical User Interface Abaqus plug-ins for spot welded joints have been developed using Python scripting and are used to investigate the effect of nugget size and sheet thickness on the stress and deformation of spot welded joints of steel. This work is important for establishing an integrated approach to studying the electrical, mechanical and thermal aspects of the spot welding process.
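
The inverse temperature-measurement idea, learning a mapping from remote thermocouple readings to the cooling curve of the heat-affected zone, can be sketched with a generic feed-forward regressor; the data below are synthetic and the network architecture is an arbitrary choice, not the one used in the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n_welds, n_remote, n_curve = 200, 4, 20

# Synthetic training set: remote thermocouple readings -> sampled HAZ cooling curve.
remote_temps = rng.uniform(100.0, 400.0, size=(n_welds, n_remote))
haz_curves = (remote_temps.mean(axis=1, keepdims=True)
              * np.exp(-np.linspace(0.0, 2.0, n_curve))[None, :]
              + rng.normal(scale=5.0, size=(n_welds, n_curve)))

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(remote_temps, haz_curves)

predicted_curve = model.predict(remote_temps[:1])
print(predicted_curve.shape)    # one predicted cooling curve with n_curve samples
```
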
28

Klompje, Gideon. "A parametric monophone speech synthesis system". Thesis, Link to online version, 2006. http://hdl.handle.net/10019/561.

29

Tarkian, Mehdi, and Zaldivar Tessier Francisco Javier. "Aircraft Parametric 3D Modelling and Panel Code of Analysis for Conceptual Design". Thesis, Linköping University, Department of Management and Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10607.

Abstract:

Throughout this report there is a brief explanation of what the actual Aircraft Design Process is, of the stages in which the methodology the authors are proposing will be implemented, and of the tools that will interact to produce this methodology.

The proposed tool will be the first part of a methodology that, according to the authors, by integrating separate tools currently used in different stages of aeronautical design, will shorten the time frame of the initial stages of the design process.

The first part of the methodology proposed in this project starts by creating a computer-generated aircraft model and analyzing its basic aerodynamic characteristics, the "Lift Coefficient" and the "Induced Drag Coefficient". This step is an alternative to the statistical and empirical methods used in industry, which require vast amounts of data.

This task is done in several steps, which transfer the parametric aircraft model to an input file for the aerodynamic analysis program. To transfer the data, a "translation" program has been developed that arranges the geometry and prepares the input file for analysis.

During the course of this report the reader will find references to existing aircraft, such as the MD-11 or the Airbus 310. However, these references are not intended to be exact computer models of the mentioned airplanes. The authors use them so that the reader can relate what he or she is seeing in this paper to existing aircraft. By making such comparisons, the authors intend to demonstrate that the parametric model that has been created possesses the capability to simulate, to some extent, the shape of existing aircraft.

Finally, from the results of this project it is concluded that the methodology in question is promising. Linking the two programs is possible, and the aerodynamic characteristics of the models tested fall in the appropriate range. Nonetheless, the research must continue along the lines discussed in this report.

30

Pretorius, Wesley Byron. "Non-parametric regression modelling of in situ fCO2 in the Southern Ocean". Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71630.

Abstract:
Thesis (MComm)--Stellenbosch University, 2012.
The Southern Ocean is a complex system, where the relationship between CO2 concentrations and its drivers varies intra- and inter-annually. Due to the lack of readily available in situ data in the Southern Ocean, a model approach was required which could predict the CO2 concentration proxy variable, fCO2. This must be done using predictor variables available via remote measurements to ensure the usefulness of the model in the future. These predictor variables were sea surface temperature, log transformed chlorophyll-a concentration, mixed layer depth and, at a later stage, altimetry. Initial exploratory analysis indicated that a non-parametric approach to the model should be taken. A parametric multiple linear regression model was developed to use as a comparison to previous studies in the North Atlantic Ocean as well as to compare with the results of the non-parametric approach. A non-parametric kernel regression model was then used to predict fCO2 and finally a combination of the parametric and non-parametric regression models was developed, referred to as the mixed regression model. The results indicated, as expected from exploratory analyses, that the non-parametric approach produced more accurate estimates based on an independent test data set. These more accurate estimates, however, were coupled with zero estimates, caused by the curse of dimensionality. It was also found that the inclusion of salinity (not available remotely) improved the model and therefore altimetry was chosen to attempt to capture this effect in the model. The mixed model displayed reduced errors as well as removing the zero estimates and hence reducing the variance of the error rates. The results indicated that the mixed model is the best approach to use to predict fCO2 in the Southern Ocean and that altimetry's inclusion did improve the prediction accuracy.
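
The non-parametric component can be illustrated with a plain Nadaraya-Watson kernel regression; the predictors, bandwidth and data below are placeholders, and the thesis's mixed parametric/non-parametric combination is not reproduced here.

```python
import numpy as np

def kernel_regression(X_train, y_train, X_query, bandwidth=1.0):
    """Nadaraya-Watson estimator with a Gaussian product kernel."""
    # Squared distances between every query point and every training point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(5)
# Hypothetical predictors: sea surface temperature, log chlorophyll-a, mixed layer depth.
X = rng.normal(size=(500, 3))
fco2 = 380.0 + 5.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(scale=2.0, size=500)

estimate = kernel_regression(X, fco2, X[:5], bandwidth=0.8)
print(estimate)
```
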
31

Biscio, Christophe A. N. "Contribution to the modelling and the parametric estimation of determinantal point processes". Nantes, 2015. https://archive.bu.univ-nantes.fr/pollux/show/show?id=2c65c3dc-4641-4c69-b96e-38a5104ab426.

Abstract:
This manuscript is devoted to the study of determinantal point processes (DPPs) and their parametric estimation. These processes are known to be well adapted to inhibitive point patterns, where the points tend to repel each other. In the first chapter, we study the flexibility of this model by suggesting two definitions of repulsiveness, both based on the second order moments of the process, namely the pair correlation function (pcf) g. This leads us to identify the most repulsive DPP. Then, we introduce new parametric families of DPPs that cover a large range of DPPs, from the stationary Poisson process to the most repulsive DPP. In the second chapter, we prove that stationary DPPs are Brillinger mixing. This property allows us to deduce asymptotic results on several statistics. Namely, we prove a central limit theorem for a wide class of functionals of order p of the process and we adapt to DPPs some results already known on kernel estimators of the function g of Brillinger mixing point processes. Finally, in the last chapter we study minimum contrast estimation based on Ripley's K-function and the pcf g. We prove the consistency and the asymptotic normality of these methods for stationary DPPs. In particular, we obtain an explicit form of the asymptotic variance. These results are in fact particular cases of a more general theorem dealing with the asymptotic properties of minimum contrast estimation for stationary point processes, which we state and prove in this chapter.
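
Minimum contrast estimation based on Ripley's K-function can be pictured as follows for the commonly used Gaussian-kernel DPP, whose K-function has a closed form; the "empirical" K below is simulated rather than estimated from a real point pattern, so this is a sketch of the principle only and not the estimators analysed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def K_gaussian_dpp(r, alpha):
    """Ripley K-function of a stationary DPP with Gaussian kernel of range parameter alpha."""
    return np.pi * r**2 - 0.5 * np.pi * alpha**2 * (1.0 - np.exp(-2.0 * r**2 / alpha**2))

# Stand-in for an empirical K that would normally be estimated from an observed pattern.
r = np.linspace(0.01, 0.25, 100)
rng = np.random.default_rng(11)
K_hat = np.clip(K_gaussian_dpp(r, alpha=0.05) + rng.normal(scale=1e-6, size=r.size), 0.0, None)

# Minimum contrast: minimise the discretised sum of (K_hat^c - K_theta^c)^2 with c = 1/2.
def contrast(alpha):
    return np.sum((np.sqrt(K_hat) - np.sqrt(K_gaussian_dpp(r, alpha))) ** 2)

result = minimize_scalar(contrast, bounds=(0.001, 0.2), method="bounded")
print("estimated alpha:", result.x)
```
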
32

Athanasopoulos, Michael. "Modelling and Animation using Partial Differential Equations. Geometric modelling and computer animation of virtual characters using elliptic partial differential equations". Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5437.

Abstract:
This work addresses various applications pertaining to the design, modelling and animation of parametric surfaces using elliptic Partial Differential Equations (PDE) which are produced via the PDE method. Compared with traditional surface generation techniques, the PDE method is an effective technique that can represent complex three-dimensional (3D) geometries in terms of a relatively small set of parameters. A PDE-based surface can be produced from a set of pre-configured curves that are used as the boundary conditions to solve a number of PDE. An important advantage of using this method is that most of the information required to define a surface is contained at its boundary. Thus, complex surfaces can be computed using only a small set of design parameters. In order to exploit the advantages of this methodology, various applications were developed that vary from the interactive design of aircraft configurations to the animation of facial expressions in a computer-human interaction system that utilizes an artificial intelligence (AI) bot for real time conversation. Additional applications are also presented: generating cyclic motions for a PDE-based human character integrated in a Computer-Aided Design (CAD) package, as well as developing techniques to describe a given mesh geometry by the set of boundary conditions required to evaluate the PDE method. Each methodology presents a novel approach for interacting with parametric surfaces obtained by the PDE method, owing to the several advantages this surface generation technique has to offer. Additionally, each application developed in this thesis focuses on a specific target that delivers efficiently various operations in the design, modelling and animation of such surfaces.
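
A heavily simplified picture of boundary-driven surface generation is given below: instead of the fourth-order elliptic PDE used in the PDE method, this toy relaxes Laplace's equation on a grid, so the interior heights of the patch are determined entirely by the four boundary curves. It is an illustration of the boundary-to-surface idea only, not the scheme used in the thesis.

```python
import numpy as np

def surface_from_boundaries(top, bottom, left, right, n_iter=2000):
    """Fill the interior of a height-field by relaxing Laplace's equation (Jacobi iteration)."""
    n = len(top)
    z = np.zeros((n, n))
    z[0, :], z[-1, :], z[:, 0], z[:, -1] = top, bottom, left, right   # boundary curves
    for _ in range(n_iter):
        # Each interior value becomes the average of its four neighbours.
        z[1:-1, 1:-1] = 0.25 * (z[:-2, 1:-1] + z[2:, 1:-1] + z[1:-1, :-2] + z[1:-1, 2:])
    return z

u = np.linspace(0.0, 2.0 * np.pi, 41)
surface = surface_from_boundaries(np.sin(u), np.zeros_like(u), np.cos(u), np.zeros_like(u))
print(surface.shape, surface[20, 20])
```
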
Style APA, Harvard, Vancouver, ISO itp.
33

Edum-Fotwe, Kwamina. "Procedural reconstruction of architectural parametric models from airborne and ground laser scans". Thesis, University of Bath, 2018. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.767574.

Pełny tekst źródła
Streszczenie:
This research addresses the problem of efficiently and robustly reconstructing semantically rich 3D architectural models from laser-scanned point-clouds. It first covers the pre-existing literature and industrial developments in active sensing, 3D reconstruction of the built environment and procedural modelling. It then documents a number of novel contributions to the classical problems of change-detection between temporally varying multi-modal geometric representations and automatic 3D asset creation from airborne and ground point-clouds of buildings. Finally, this thesis outlines ongoing research and avenues for continued investigation - most notably fully automatic temporal update and revision management for city-scale CAD models via data-driven procedural modelling from point-clouds. In short, this thesis documents the outcomes of a research project whose primary aim was to engineer fast, accurate and sparse building reconstruction algorithms. Formally, this thesis puts forward and advocates the hypothesis that architectural reconstruction from actively sensed point-clouds can be addressed more efficiently, and with greater control over the geometric results, via deterministic, procedurally driven analysis and optimisation than via stochastic sampling.
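As a small hypothetical example of the deterministic geometric analysis advocated above (not the algorithms of the thesis), a dominant planar facet such as a roof plane can be extracted from a patch of scan points by a closed-form least-squares fit instead of stochastic sampling:
    import numpy as np

    def fit_plane(points):
        """Least-squares plane through 3D points: returns (centroid, unit normal).
        The normal is the eigenvector of the covariance with smallest eigenvalue."""
        centroid = points.mean(axis=0)
        cov = np.cov((points - centroid).T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        return centroid, eigvecs[:, 0]               # smallest-eigenvalue direction

    def plane_inliers(points, centroid, normal, tol=0.05):
        """Points within tol of the fitted plane (e.g. a candidate roof facet)."""
        dist = np.abs((points - centroid) @ normal)
        return points[dist < tol]

    # hypothetical noisy roof patch
    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 10, size=(500, 2))
    z = 0.3 * xy[:, 0] + 5.0 + rng.normal(0, 0.02, 500)
    patch = np.column_stack([xy, z])
    c, n = fit_plane(patch)
    print(len(plane_inliers(patch, c, n)))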
Style APA, Harvard, Vancouver, ISO itp.
34

Ahmat, Norhayati. "Geometric modelling and shape optimisation of pharmaceutical tablets. Geometric modelling and shape optimisation of pharmaceutical tablets using partial differential equations". Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5702.

Pełny tekst źródła
Streszczenie:
Pharmaceutical tablets have been the most dominant form for drug delivery, and they need to be strong enough to withstand external stresses due to packaging and loading conditions before use. The strength of the produced tablets, which is characterised by their compressibility and compactibility, is usually determined through a physical prototype. This process is sometimes quite expensive and time consuming; therefore, simulating it beforehand can overcome this problem. A technique for shape modelling of pharmaceutical tablets based on the use of Partial Differential Equations is presented in this thesis. The volume and the surface area of the generated parametric tablet in various shapes have been estimated numerically. This work also presents an extended formulation of the PDE method to a higher dimensional space by increasing the number of parameters responsible for describing the surface in order to generate a solid tablet. The shape and size of the generated solid tablets can be changed by exploiting the analytic expressions relating the coefficients associated with the PDE method. The solution of the axisymmetric boundary value problem for a finite cylinder subject to a uniform axial load has been utilised in order to model a displacement component of a compressed PDE-based representation of a flat-faced round tablet. The simulation results, which are analysed using the Heckel model, show that the developed model is capable of predicting the compressibility of pharmaceutical powders since it fits the experimental data accurately. The optimal design of pharmaceutical tablets with particular volume and maximum strength has been obtained using an automatic design optimisation which is performed by combining the PDE method and a standard method for numerical optimisation.
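For reference, the Heckel analysis mentioned above relates the relative density D of the compact to the compaction pressure P through ln(1/(1 - D)) = K P + A; a minimal sketch of fitting K and A to hypothetical compaction data, and reporting the mean yield pressure 1/K, is:
    import numpy as np

    def heckel_fit(pressure, rel_density):
        """Fit ln(1/(1-D)) = K*P + A by linear least squares.
        Returns slope K, intercept A and the mean yield pressure Py = 1/K."""
        y = np.log(1.0 / (1.0 - np.asarray(rel_density)))
        K, A = np.polyfit(np.asarray(pressure), y, deg=1)
        return K, A, 1.0 / K

    # hypothetical compaction data: pressure in MPa, relative density D
    P = np.array([25, 50, 75, 100, 150, 200])
    D = np.array([0.62, 0.71, 0.77, 0.81, 0.87, 0.90])
    K, A, Py = heckel_fit(P, D)
    print(f"K = {K:.4f} 1/MPa, A = {A:.3f}, yield pressure ~ {Py:.0f} MPa")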
Style APA, Harvard, Vancouver, ISO itp.
35

Ajaz, Mahnoor. "Finite Difference Time Domain Modelling of Ultrasonic Parametric Arrays in Two-Dimensional Spaces". The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1619109761801613.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
36

Ongkittikul, Surachai. "Hand tracking with Parametric Skin Modelling Using Particle Filter Framework for Gesture Recognition". Thesis, University of Surrey, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520572.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
37

Rosu, Roxana Gabriela. "Parametric approaches for modelling local structure tensor fields with applications to texture analysis". Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0102/document.

Pełny tekst źródła
Streszczenie:
This thesis proposes and evaluates parametric frameworks for modelling local structure tensor (LST) fields computed on textured images. A texture's underlying geometry is described in terms of orientation and anisotropy, estimated in each pixel by the LST. Defined as symmetric non-negative definite matrices, LSTs cannot be handled using the classical tools of Euclidean geometry. In this work, two complete Riemannian statistical frameworks are investigated to address the representation of symmetric positive definite matrices. They rely on the affine-invariant (AI) and log-Euclidean (LE) metric spaces. For each framework, a Gaussian distribution and its corresponding mixture models are considered for statistical modelling. Solutions for parameter estimation are provided, and parametric dissimilarity measures between statistical models are proposed as well. The proposed statistical frameworks are first considered for characterising LST fields computed on textured images. Both AI and LE models are first employed to handle marginal LST distributions. Then, LE models are extended to describe joint LST distributions with the purpose of characterising both spatial and multiscale dependencies. The theoretical models' fit to empirical LST distributions is experimentally assessed for a texture set composed of a large diversity of patterns. The descriptive potential of the proposed statistical models is then assessed in two applications. The first application is texture recognition. It deals with very high resolution remote sensing images and carbonaceous material images obtained from high resolution transmission electron microscopy. The approaches based on statistical modelling of LSTs outperform, in most cases, the state-of-the-art methods for texture characterisation. Competitive texture classification performances are obtained when modelling marginal LST distributions on both AI and LE metric spaces. When modelling joint LST distributions, a slight gain in performance is obtained with respect to the case where only marginal distributions are modelled. In addition, the LST-based methods' intrinsic ability to address the rotation invariance prerequisite that arises in many classification tasks dealing with anisotropic textures is experimentally validated as well. In contrast, state-of-the-art methods achieve only a pseudo rotation invariance. A second application concerns LST field synthesis. To this purpose, monoscale and multiscale pyramidal approaches relying on a Markovian hypothesis are developed. Experiments are carried out on toy LST field examples and on real texture LST fields. The successful synthesis results obtained with optimal parameter configurations are proof of the real descriptive potential of the proposed statistical models. However, the experiments have also shown a high sensitivity to the choice of parameters, which may be due to statistical inference limitations in high-dimensional spaces.
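To make the log-Euclidean framework concrete (a generic sketch, not the models proposed in the thesis), the distance between two local structure tensors and the log-Euclidean mean of a set of tensors can be computed by mapping the SPD matrices into the matrix-logarithm domain, working Euclideanly there, and mapping back; the 2x2 tensors below are hypothetical.
    import numpy as np
    from scipy.linalg import logm, expm

    def le_distance(A, B):
        """Log-Euclidean distance between two SPD matrices."""
        return np.linalg.norm(logm(A) - logm(B), ord="fro")

    def le_mean(tensors):
        """Log-Euclidean mean of a collection of SPD matrices."""
        logs = [logm(T) for T in tensors]
        return expm(np.mean(logs, axis=0))

    # hypothetical 2x2 local structure tensors (anisotropic vs. near-isotropic)
    T1 = np.array([[2.0, 0.4], [0.4, 0.5]])
    T2 = np.array([[1.0, 0.0], [0.0, 0.9]])
    print(le_distance(T1, T2))
    print(le_mean([T1, T2]))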
Style APA, Harvard, Vancouver, ISO itp.
38

Petersen, Tom. "Modelling transport, accessibility and productivity in Öresund". Licentiate thesis, KTH, Infrastructure, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1768.

Pełny tekst źródła
Streszczenie:

This licentiate thesis is about the provision of transport infrastructure and the regional impacts of such provision. Three different techniques that can be used for the assessment and forecasting of the effects of infrastructure have been investigated: transport demand models and parametric and non-parametric econometric estimation techniques. The main interest is focused on the regional effects of the Öresund fixed link, which was opened on July 1, 2000.

The thesis is a collection of three papers plus a general introduction: papers 1 and 2 are concerned with the effect of accessibility in the transport networks on productivity at the individual firm level. In paper 1, a translog cost function, extended with an accessibility variable, is estimated for 24 business aggregates using panel data techniques and tests on a dataset covering single workplaces in Scania over the years 1990-98. The results are not conclusive and cannot be used for forecasting of the after-situation. In paper 2, a non-parametric method, propensity score matching, is applied to the same dataset to test whether productivity differs in high-accessibility areas compared with low-accessibility ones, while controlling for other differences between firms (a schematic sketch of this matching step is given after the key words below). The result here is the same as in the first paper: for no business is there a significant difference in productivity that can be related to accessibility. In paper 3, a framework for the external validation of models of transport, land use and environment is developed, with a focus on transport forecast models. The scenario assumptions and forecast results of earlier models are presented and compared. A before-and-after database under construction for the Öresund region is also presented, to be used for validation of such models.

Key words: infrastructure assessment, validation, Öresund, transport demand models, regional consequences.
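A schematic sketch of the propensity score matching step described in paper 2 above (a generic one-nearest-neighbour matching on an estimated propensity score, with entirely hypothetical firm data, not the study's dataset or code):
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def psm_att(X, treated, outcome):
        """1-nearest-neighbour propensity score matching.
        X: firm covariates, treated: 1 = high-accessibility location,
        outcome: e.g. log productivity. Returns the ATT estimate."""
        ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
        t_idx = np.where(treated == 1)[0]
        c_idx = np.where(treated == 0)[0]
        # match each treated firm to the control firm with the closest score
        matches = c_idx[np.argmin(np.abs(ps[t_idx, None] - ps[None, c_idx]), axis=1)]
        return np.mean(outcome[t_idx] - outcome[matches])

    # hypothetical data: 300 firms, 4 covariates, no true accessibility effect
    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 4))
    treated = (X[:, 0] + rng.normal(size=300) > 0).astype(int)
    outcome = 0.5 * X[:, 1] + rng.normal(size=300)
    print(psm_att(X, treated, outcome))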

Style APA, Harvard, Vancouver, ISO itp.
39

Ahmat, Norhayati Binti. "Geometric modelling and shape optimisation of pharmaceutical tablets : geometric modelling and shape optimisation of pharmaceutical tablets using partial differential equations". Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5702.

Pełny tekst źródła
Streszczenie:
Pharmaceutical tablets have been the most dominant form for drug delivery, and they need to be strong enough to withstand external stresses due to packaging and loading conditions before use. The strength of the produced tablets, which is characterised by their compressibility and compactibility, is usually determined through a physical prototype. This process is sometimes quite expensive and time consuming; therefore, simulating it beforehand can overcome this problem. A technique for shape modelling of pharmaceutical tablets based on the use of Partial Differential Equations is presented in this thesis. The volume and the surface area of the generated parametric tablet in various shapes have been estimated numerically. This work also presents an extended formulation of the PDE method to a higher dimensional space by increasing the number of parameters responsible for describing the surface in order to generate a solid tablet. The shape and size of the generated solid tablets can be changed by exploiting the analytic expressions relating the coefficients associated with the PDE method. The solution of the axisymmetric boundary value problem for a finite cylinder subject to a uniform axial load has been utilised in order to model a displacement component of a compressed PDE-based representation of a flat-faced round tablet. The simulation results, which are analysed using the Heckel model, show that the developed model is capable of predicting the compressibility of pharmaceutical powders since it fits the experimental data accurately. The optimal design of pharmaceutical tablets with particular volume and maximum strength has been obtained using an automatic design optimisation which is performed by combining the PDE method and a standard method for numerical optimisation.
Style APA, Harvard, Vancouver, ISO itp.
40

Dall, Rasmus. "Statistical parametric speech synthesis using conversational data and phenomena". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/29016.

Pełny tekst źródła
Streszczenie:
Statistical parametric text-to-speech synthesis currently relies on predefined and highly controlled prompts read in a “neutral” voice. This thesis presents work on utilising recordings of free conversation for the purpose of filled pause synthesis and as an inspiration for improved general modelling of speech for text-to-speech synthesis. A corpus of both standard prompts and free conversation is presented, and the potential usefulness of conversational speech as the basis for text-to-speech voices is validated. Additionally, through psycholinguistic experimentation it is shown that filled pauses can have potential subconscious benefits to the listener, but that current text-to-speech voices cannot replicate these effects. A method for pronunciation-variant forced alignment is presented in order to obtain a more accurate automatic speech segmentation, which is particularly poor for spontaneously produced speech. This pronunciation-variant alignment is utilised not only to create a more accurate underlying acoustic model, but also as the driving force behind more natural pronunciation prediction at synthesis time. While this improves both the standard and spontaneous voices, the naturalness of voices based on spontaneous speech still lags behind the quality of voices based on standard read prompts. Thus, the synthesis of filled pauses is investigated in relation to specific phonetic modelling of filled pauses and through techniques for mixing standard prompts with spontaneous utterances, in order to retain the higher quality of standard-speech-based voices while still utilising the spontaneous speech for filled pause modelling. A method for predicting where to insert filled pauses in the speech stream is also developed and presented, relying on an analysis of human filled pause usage and a mix of language modelling methods. The method achieves an insertion accuracy in close agreement with human usage. The various approaches are evaluated and their improvements documented throughout the thesis; at the end, the resulting filled pause quality is assessed through a repetition of the psycholinguistic experiments and an evaluation of the compilation of all developed methods.
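Purely as a toy illustration (not the language-modelling method developed in the thesis), one might insert a filled pause before words that a simple bigram model finds unexpected, on the rough assumption that low predictability correlates with hesitation; the corpus and threshold below are hypothetical.
    from collections import Counter

    def bigram_probs(corpus_sentences):
        """Maximum-likelihood bigram probabilities from a toy corpus."""
        uni, bi = Counter(), Counter()
        for sent in corpus_sentences:
            words = ["<s>"] + sent.lower().split()
            uni.update(words[:-1])
            bi.update(zip(words[:-1], words[1:]))
        return lambda prev, w: bi[(prev, w)] / uni[prev] if uni[prev] else 0.0

    def insert_filled_pauses(sentence, prob, threshold=0.1, fp="um"):
        """Insert a filled pause before low-probability word transitions."""
        out, prev = [], "<s>"
        for w in sentence.lower().split():
            if prob(prev, w) < threshold:
                out.append(fp)
            out.append(w)
            prev = w
        return " ".join(out)

    corpus = ["i think that is a good idea", "i think we should go", "that is a good plan"]
    p = bigram_probs(corpus)
    print(insert_filled_pauses("i think that is a strange plan", p))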
Style APA, Harvard, Vancouver, ISO itp.
41

Cheung, Wan Sup. "Identification, stabilisation and control of nonlinear systems using neural network-based parametric nonlinear modelling". Thesis, University of Southampton, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333732.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
42

Jones, Ian Willem. "Performance optimisation of integrated circuit cells with constrained parametric yield and process variation modelling". Thesis, Imperial College London, 1986. http://hdl.handle.net/10044/1/38056.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
43

Haritidis, Panagiotis, i Tony Tran. "Parameterstyrd modellering av bergtunnlar". Thesis, KTH, Byggteknik och design, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-255607.

Pełny tekst źródła
Streszczenie:
Project design in the building construction industry has developed considerably in recent years; besides 2D documents, 3D models are now created to facilitate the design work. A problem most actors face daily is that when conditions and information in a project change, design engineers must make the necessary changes to their models. These changes can be time-consuming and may need to be made more than once during a project. Could a parametric 3D model accommodate such changes faster than a CAD model when the conditions and information of a project change? One of the actors that frequently needs to make changes to its models is WSP's geotechnical department, and the desire is to find a new working method that increases the efficiency of tunnel design. The aim of this thesis was to try parametric modelling as a working method and to see whether it could be used to model rock tunnels. A script was made in Grasshopper, a visual programming plug-in, that generates a parametric model of a tunnel. The parametric model was then compared with an existing CAD model created by the WSP geotechnical department in an earlier project. Pros and cons are presented, and conclusions are drawn about whether parametric modelling could be more efficient than current working methods. The results of this thesis indicate that parametric modelling is an efficient working method and that it could be used for future rock tunnel projects.
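A rough Python stand-in for what such a parameter-driven script does (hypothetical parameters and a deliberately simple horseshoe-like section, not the Grasshopper model described above):
    import numpy as np

    def tunnel_section(width, wall_height, n=64):
        """2D horseshoe-like profile: vertical walls plus a semicircular crown."""
        r = width / 2.0
        theta = np.linspace(0.0, np.pi, n)
        crown = np.stack([r * np.cos(theta), wall_height + r * np.sin(theta)], axis=1)
        right = np.array([[r, 0.0], [r, wall_height]])
        left = np.array([[-r, wall_height], [-r, 0.0]])
        floor = np.array([[-r, 0.0], [r, 0.0]])
        return np.vstack([right, crown, left, floor])

    def sweep(section, length, n_steps=20):
        """Sweep a 2D section along the y-axis to get a 3D point set."""
        ys = np.linspace(0.0, length, n_steps)
        return np.vstack([np.column_stack([section[:, 0],
                                           np.full(len(section), y),
                                           section[:, 1]]) for y in ys])

    # hypothetical parameters: 8 m wide, 3 m walls, 50 m long drive
    pts = sweep(tunnel_section(width=8.0, wall_height=3.0), length=50.0)
    print(pts.shape)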
Style APA, Harvard, Vancouver, ISO itp.
44

Ilie, Katherine-Rodica. "Modelling, Simulation and Optimisation of Asymmetric Rotor Profiles in Twin-screw Superchargers". RMIT University. Aerospace, Mechanical and Manufacturing Engineering, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080213.144857.

Pełny tekst źródła
Streszczenie:
There is a growing recognition worldwide of the need for more powerful, smaller petrol engines, capable of delivering the higher peak power of larger engines yet still being economical and environmentally friendly when used for day-to-day driving. Research to date has sought engineering solutions for more efficient engines, and it has been identified that superchargers can potentially improve the performance of automotive engines; research has therefore focused on developing superchargers and supercharger components with higher efficiency. Of particular interest to the research presented in this thesis has been the twin-screw supercharging compressor with a design adapted for automotive use (the twin-screw supercharger). The performance of this supercharger type depends on the volume and total losses of the air flow through the supercharger rotors more than on any other aspect of its behaviour. To accurately predict the efficiency of the twin-screw supercharger for matching a particular engine system, accurate supercharger design is required. The main objective of this research was the investigation of the existing limitations of twin-screw superchargers, in particular leakage and reduced efficiency, leading to the development of optimal asymmetric rotor profiles. This research was completed in four stages defining an innovative rotor design method. A parametric three-dimensional geometric model of twin-screw supercharger rotors of any aspect ratio was developed. For model validation through visualisation, CAD rotor models with scalable data were generated in commercial CAD software and calibrated experimentally by Laser Doppler Velocimetry (LDV) tests. Calibrated rotor profile data can be transferred into a CAD-CFD interface for flow simulation and performance optimisation. Through the application of this new rotor design method, new opportunities are created for twin-screw supercharger design practice, making it part of the engineering solution for more efficient engines.
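Purely to illustrate what a parametric rotor-style cross-section looks like as data (a generic lobed curve with hypothetical parameters, not a valid screw-compressor profile and not the model developed in the thesis):
    import numpy as np

    def lobed_profile(r_mean, lobe_depth, n_lobes, n_points=360):
        """Generic lobed cross-section r(theta) = r_mean + lobe_depth*cos(n_lobes*theta)."""
        theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
        r = r_mean + lobe_depth * np.cos(n_lobes * theta)
        return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

    # hypothetical male/female rotor-like pair with 3 and 5 lobes, exported for CAD
    male = lobed_profile(r_mean=40.0, lobe_depth=12.0, n_lobes=3)
    female = lobed_profile(r_mean=40.0, lobe_depth=-8.0, n_lobes=5)
    np.savetxt("male_profile.csv", male, delimiter=",", header="x,y")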
Style APA, Harvard, Vancouver, ISO itp.
45

Sabuncuoglu, Baris. "Development of parametric finite element modelling methods for nonwoven materials including rate dependent material behaviour". Thesis, Loughborough University, 2012. https://dspace.lboro.ac.uk/2134/10016.

Pełny tekst źródła
Streszczenie:
Thermally bonded nonwovens are low-price substitutes for traditional textiles. They are used in many areas, including filtration and the automotive and aerospace industries. Hence, understanding the deformation behaviour of these materials is required in order to design new products tailored for specific applications in different areas. Because of their complex and random structure, numerical simulation of nonwoven materials has been a challenging task for many years. The main aim of the thesis is to develop a computational modelling tool to simulate the effect of design parameters on the structural behaviour of low-density nonwoven materials using a finite element method. The modelling procedure is carried out with a parametric modelling technique, which allows a designer to run a series of analyses with different design parameters and observe the effects of these parameters on the mechanical behaviour of nonwoven materials. The thesis also presents a study of the rate-dependent behaviour of nonwoven fibres. Novel test and data-interpretation procedures are proposed to determine the creep behaviour of fibres in the nonwoven structure. Some case studies are presented to demonstrate the effectiveness of the model. The developed computational tool allows macro- and micro-scale structural investigation of nonwoven materials. Two additional studies, performed with the developed tool, are presented. In the first study, the effect of design parameters on the tensile stiffness of nonwovens was determined by performing numerical analyses with various nonwoven models. In the second, the strain distribution in fibres is studied thoroughly, together with the factors affecting the distribution. The models developed in the thesis can also be employed in further studies of nonwovens, such as investigation of their damage and fracture behaviour.
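As a schematic, hypothetical example of parametric geometry generation for a nonwoven (not the thesis's modelling tool), a random fibre network can be generated from a few design parameters such as fibre count, length and orientation concentration, and written out as line segments for later meshing:
    import numpy as np

    def fibre_network(n_fibres, fibre_length, domain, kappa, seed=0):
        """Random straight fibres in a square domain.
        Orientations follow a von Mises distribution (kappa = 0 gives isotropy)."""
        rng = np.random.default_rng(seed)
        centres = rng.uniform(0.0, domain, size=(n_fibres, 2))
        angles = rng.vonmises(mu=0.0, kappa=kappa, size=n_fibres)
        d = 0.5 * fibre_length * np.column_stack([np.cos(angles), np.sin(angles)])
        return np.hstack([centres - d, centres + d])    # (x1, y1, x2, y2) per fibre

    # hypothetical low-density web: 500 fibres of 2 mm in a 10 mm square, mild alignment
    segments = fibre_network(n_fibres=500, fibre_length=2.0, domain=10.0, kappa=2.0)
    print(segments.shape)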
Style APA, Harvard, Vancouver, ISO itp.
46

Martens, Corentin. "Patient-Derived Tumour Growth Modelling from Multi-Parametric Analysis of Combined Dynamic PET/MR Data". Doctoral thesis, Universite Libre de Bruxelles, 2021. https://dipot.ulb.ac.be/dspace/bitstream/2013/320127/5/contratCM.pdf.

Pełny tekst źródła
Streszczenie:
Gliomas are the most common primary brain tumours and are associated with poor prognosis. Among them, diffuse gliomas – which include their most aggressive form, glioblastoma (GBM) – are known to be highly infiltrative. The diagnosis and follow-up of gliomas rely on positron emission tomography (PET) and magnetic resonance imaging (MRI). However, these imaging techniques do not currently allow assessment of the whole extent of such infiltrative tumours, nor anticipation of their preferred invasion patterns, leading to sub-optimal treatment planning. Mathematical tumour growth modelling has been proposed to address this problem. Reaction-diffusion tumour growth models, which are probably the most commonly used for diffuse glioma growth modelling, propose to capture the proliferation and migration of glioma cells by means of a partial differential equation. Although the potential of such models has been shown in many works for patient follow-up and therapy planning, only a few limited clinical applications seem to have emerged from these works. This thesis aims at revisiting reaction-diffusion tumour growth models using state-of-the-art medical imaging and data processing technologies, with the objective of integrating multi-parametric PET/MRI data to further personalise the model. Brain tissue segmentation on MR images is first addressed with the aim of defining a patient-specific domain on which to solve the model. A previously proposed method to derive a tumour cell diffusion tensor from the water diffusion tensor assessed by diffusion-tensor imaging (DTI) is then implemented to guide the anisotropic migration of tumour cells along white matter tracts. The use of dynamic [S-methyl-11C]methionine ([11C]MET) PET is also investigated to derive patient-specific proliferation potential maps for the model. These investigations lead to the development of a microscopic compartmental model for amino acid PET tracer transport in gliomas. Based on the compartmental model results, a novel methodology is proposed to extract parametric maps from dynamic [11C]MET PET data using principal component analysis (PCA). The problem of estimating the initial conditions of the model from MR images is then addressed by means of a translational MRI/histology study in a case of non-operated GBM. Numerical solving strategies based on the widely used finite difference and finite element methods are finally implemented and compared. All these developments are embedded within a common framework allowing the study of glioma growth in silico and providing a solid basis for further research in this field. However, the commonly accepted hypotheses relating the outlines of abnormalities visible on MRI to tumour cell density iso-contours have been invalidated by the translational study carried out, leaving open the questions of the initialisation and the validation of the model. Furthermore, the analysis of the temporal evolution of real multi-treated glioma patients demonstrates the limitations of the formulated model. These statements highlight current obstacles to the clinical application of reaction-diffusion tumour growth models and pave the way to further improvements.
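For concreteness, reaction-diffusion glioma growth models are typically of Fisher-KPP type, dc/dt = div(D grad c) + rho c (1 - c); a minimal one-dimensional finite-difference sketch with a constant diffusion coefficient and hypothetical parameter values (far simpler than the patient-specific, tensor-guided model of the thesis) is:
    import numpy as np

    def fisher_kpp_1d(c0, D, rho, dx, dt, n_steps):
        """Explicit finite-difference integration of dc/dt = D*d2c/dx2 + rho*c*(1-c)
        with zero-flux boundaries. Assumes dt <= dx**2 / (2*D) for stability."""
        c = c0.copy()
        for _ in range(n_steps):
            lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
            lap[0] = 2 * (c[1] - c[0]) / dx**2       # zero-flux (Neumann) boundaries
            lap[-1] = 2 * (c[-2] - c[-1]) / dx**2
            c = c + dt * (D * lap + rho * c * (1 - c))
        return c

    # hypothetical values: D in mm^2/day, rho in 1/day, 60 days of growth
    x = np.linspace(0.0, 100.0, 401)                 # 100 mm domain, dx = 0.25 mm
    c0 = np.exp(-((x - 50.0) ** 2) / 4.0)            # small initial tumour seed
    c = fisher_kpp_1d(c0, D=0.1, rho=0.05, dx=0.25, dt=0.1, n_steps=600)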
Doctorate in Engineering Sciences and Technology
Style APA, Harvard, Vancouver, ISO itp.
47

Mukora, Audrey Etheline. "Learning curves and engineering assessment of emerging energy technologies : onshore wind". Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/8968.

Pełny tekst źródła
Streszczenie:
Sustainable energy systems require the deployment of new technologies to help tackle the challenges of climate change and of ensuring energy supplies. Future sources of energy are less economically competitive than conventional technologies, but there is potential for cost reduction. Tools for modelling technological change are therefore important for assessing the deployment potential of early-stage technologies. Learning curves are a tool for assessing and forecasting the cost reduction of a product achieved through experience from cumulative production. They are often used to assess technological improvements, but have a number of limitations for emerging energy technologies. Learning curves are aggregate in nature, representing the overall cost reduction gained from learning-by-doing; however, they do not identify the actual factors behind the cost reduction. Using the case study of onshore wind energy, this PhD study focuses on combining learning curves with engineering assessment to provide improved methods for assessing and managing technical change for emerging energy technologies. A third approach, parametric modelling, provides a potential means to integrate the two methods.
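For reference, the single-factor learning curve discussed above is usually written C(x) = C0 (x/x0)^(-b), with learning rate 1 - 2^(-b), the fractional cost reduction per doubling of cumulative production; a short sketch estimating b from hypothetical onshore wind cost data by log-log regression:
    import numpy as np

    def fit_learning_curve(cum_capacity, unit_cost):
        """Fit C = C0 * x**(-b) by ordinary least squares in log-log space.
        Returns the learning exponent b and the learning rate 1 - 2**(-b)."""
        slope, intercept = np.polyfit(np.log(cum_capacity), np.log(unit_cost), deg=1)
        b = -slope
        return b, 1.0 - 2.0 ** (-b)

    # hypothetical onshore wind data: cumulative capacity (GW) and unit cost (currency/kW)
    x = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
    c = np.array([1600.0, 1450.0, 1330.0, 1190.0, 1090.0, 980.0])
    b, lr = fit_learning_curve(x, c)
    print(f"learning exponent b = {b:.3f}, learning rate ~ {100 * lr:.1f}% per doubling")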
Style APA, Harvard, Vancouver, ISO itp.
48

Clifford, Sam. "Spatio-temporal modelling of ultrafine particle number concentration". Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/63528/4/Samuel_Clifford_Thesis.pdf.

Pełny tekst źródła
Streszczenie:
This thesis developed semi-parametric regression models for estimating the spatio-temporal distribution of outdoor airborne ultrafine particle number concentration (PNC). The models developed incorporate multivariate penalised splines, random walks and autoregressive errors in order to estimate non-linear functions of space, time and other covariates. The models were applied to data from the "Ultrafine Particles from Traffic Emissions and Child" project in Brisbane, Australia, and to longitudinal measurements of air quality in Helsinki, Finland. The spline and random walk aspects of the models reveal how the daily trend in PNC changes over the year in Helsinki, and the similarities and differences in the daily and weekly trends across multiple primary schools in Brisbane. Midday peaks in PNC at Brisbane locations are attributed to new particle formation events at the Port of Brisbane and Brisbane Airport.
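As a generic, hypothetical illustration of the penalised-spline idea behind such semi-parametric models (not the thesis's spatio-temporal model), a smooth daily trend can be fitted with a truncated-line basis whose knot coefficients are ridge-penalised; the data below are simulated.
    import numpy as np

    def penalised_spline_fit(x, y, n_knots=20, lam=1.0):
        """Penalised regression spline with a truncated linear basis.
        Only the knot coefficients are penalised (ridge penalty lam)."""
        knots = np.quantile(x, np.linspace(0.05, 0.95, n_knots))
        B = np.column_stack([np.ones_like(x), x] +
                            [np.maximum(x - k, 0.0) for k in knots])
        P = np.diag([0.0, 0.0] + [lam] * n_knots)    # do not penalise the linear part
        coef = np.linalg.solve(B.T @ B + P, B.T @ y)
        return lambda x_new: np.column_stack(
            [np.ones_like(x_new), x_new] +
            [np.maximum(x_new - k, 0.0) for k in knots]) @ coef

    # hypothetical daily-trend data: hour of day vs. log particle number concentration
    rng = np.random.default_rng(3)
    hour = np.sort(rng.uniform(0, 24, 300))
    pnc = 8.5 + np.sin(hour / 24 * 2 * np.pi) + 0.5 * (np.abs(hour - 8) < 1) \
          + rng.normal(0, 0.2, 300)
    fit = penalised_spline_fit(hour, pnc, lam=5.0)
    print(fit(np.array([8.0, 14.0, 22.0])))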
Style APA, Harvard, Vancouver, ISO itp.
49

Mota, Susana de Jesus. "Channel modelling for MIMO systems". Doctoral thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/14961.

Pełny tekst źródła
Streszczenie:
Doctorate in Electrical Engineering
Systems equipped with multiple antennas at the transmitter and at the receiver, known as MIMO (Multiple Input Multiple Output) systems, offer higher capacities, allowing an efficient exploitation of the available spectrum and/or the use of more demanding applications. It is well known that the radio channel is characterized by multipath propagation, a phenomenon deemed problematic and whose mitigation has been pursued through techniques such as diversity, beamforming or adaptive antennas. By conveniently exploiting the spatial domain, MIMO systems turn the characteristics of the multipath channel into an advantage and allow the creation of multiple parallel and independent virtual channels. However, the achievable benefits are constrained by the propagation channel's characteristics, which may not always be ideal. This work focuses on the characterization of the MIMO radio channel. It begins with the presentation of the fundamental results from information theory that triggered the interest in these systems, including a discussion of some of their potential benefits and a review of the existing channel models for MIMO systems. The characterization of the MIMO channel developed in this work is based on experimental measurements of the double-directional channel. The measurement system is based on a vector network analyzer and a two-dimensional positioning platform, both controlled by a computer, allowing the measurement of the channel's frequency response at the locations of a synthetic array. Data are then processed using the SAGE (Space-Alternating Generalized Expectation-Maximization) algorithm to obtain the parameters (delay, direction of arrival and complex amplitude) of the channel's most relevant multipath components. Afterwards, these data are grouped into clusters using a clustering algorithm. Finally, statistical information is extracted, allowing the characterization of the channel's multipath components. The information about the multipath characteristics of the channel, induced by the scatterers present in the propagation scenario, enables the characterization of the MIMO channel and thus the evaluation of its performance. The method was finally validated using MIMO measurements.
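For context, the information-theoretic capacity results alluded to above take the form C = log2 det(I + (SNR/Nt) H H^H) for a channel matrix H unknown at the transmitter; a small sketch evaluating the ergodic capacity of a hypothetical i.i.d. Rayleigh-fading channel:
    import numpy as np

    def mimo_capacity(H, snr):
        """Capacity (bit/s/Hz) of a MIMO channel H with equal power allocation."""
        n_r, n_t = H.shape
        M = np.eye(n_r) + (snr / n_t) * H @ H.conj().T
        sign, logdet = np.linalg.slogdet(M)
        return logdet / np.log(2)

    def ergodic_capacity(n_t, n_r, snr_db, n_draws=2000, seed=0):
        """Average capacity over i.i.d. Rayleigh-fading channel realisations."""
        rng = np.random.default_rng(seed)
        snr = 10 ** (snr_db / 10)
        caps = []
        for _ in range(n_draws):
            H = (rng.normal(size=(n_r, n_t)) + 1j * rng.normal(size=(n_r, n_t))) / np.sqrt(2)
            caps.append(mimo_capacity(H, snr))
        return np.mean(caps)

    print(ergodic_capacity(n_t=4, n_r=4, snr_db=10))   # hypothetical 4x4 system at 10 dB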
Style APA, Harvard, Vancouver, ISO itp.
50

Falkeström, Oskar, Kevin Coleman i Malin Nilsson. "Micromechanical modelling of creep in wooden materials". Thesis, Uppsala universitet, Tillämpad mekanik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444796.

Pełny tekst źródła
Streszczenie:
Wood is a complex organic orthotropic viscoelastic material with a cellular structure. When stressed, wood will deform over time through a process called creep. Creep affects all wooden structures and can be difficult, time-consuming and expensive to measure. For this thesis, a simple computer model of the wooden microstructure was developed. The hypothesis was that the modelled microstructure would display elastic and viscoelastic properties similar to those of the macroscopic material. The model was designed by finding research in which the cell geometries of coniferous trees had been measured. The model considered late- and earlywood geometries as well as growth rings. Rays were ignored as they only compose 5-10% of the material. By applying a finite element method, the heterogeneous late- and earlywood cells could be homogenized by sequentially loading the strain vector and calculating the average stress. The computer model produced stiff but acceptable values for the elastic properties. Using the standard linear solid method to model viscoelasticity, the computer model assembled creep curves comparable to experimental results. With the model sufficiently validated, parametric studies on the cell geometry showed that the elastic and viscoelastic properties changed greatly with cell shape. An unconventional RVE was also tested and shown to give identical results to the standard RVE. Although not perfect, the model can to a certain degree predict the elastic and viscoelastic characteristics of wood given its cellular geometry. Inaccuracies were thought to be caused by assumptions and approximations made when building the model.
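For reference, one common form of the standard linear solid used for such creep modelling (a spring E1 in series with a Kelvin-Voigt element, spring E2 parallel to a dashpot eta) has creep compliance J(t) = 1/E1 + (1 - exp(-t/tau))/E2 with tau = eta/E2; a small sketch with hypothetical wood-like parameters:
    import numpy as np

    def sls_creep_strain(t, stress, E1, E2, eta):
        """Creep strain of a standard linear solid (spring in series with Kelvin-Voigt)
        under constant stress: eps(t) = stress * (1/E1 + (1 - exp(-t/tau)) / E2)."""
        tau = eta / E2
        return stress * (1.0 / E1 + (1.0 - np.exp(-t / tau)) / E2)

    # hypothetical parameters (MPa, MPa, MPa*h) under a 10 MPa constant load
    t = np.linspace(0.0, 500.0, 6)                   # hours
    print(sls_creep_strain(t, stress=10.0, E1=10000.0, E2=30000.0, eta=3.0e6))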
Style APA, Harvard, Vancouver, ISO itp.
