Dissertations / Theses on the topic 'Segmented modeling'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Segmented modeling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Idbenjra, Khaoula. "Essays on Segmented-Modeling Approaches for Business Analytical Applications." Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILA027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The increasing complexities of financial decision-making, intensified by recent financial crises, necessitate transparent, advanced predictive modeling, especially in the realms of credit scoring and customer retention. This dissertation explores the significant merits of hybrid segmentation-based models, with a pivotal focus on the Logit Leaf Model (LLM), across varied applications: Business-to-Business (B2B) customer retention, credit scoring, and Non-Performing Loan (NPL) management. The exigency for robust, interpretable, and flexible analytical tools has been amplified, especially against the backdrop of modern financial and economic challenges. Thus, this research meticulously interweaves findings from three pivotal studies to explore and critique the functionality, applicability, and merit of the LLM in diverse contexts. The study in chapter 2 highlights the LLM's applicability in B2B scenarios, where customer retention becomes crucial. The study shows how the LLM can improve strategies for B2B customer retention by using uplift modeling and providing essential insights to managers through specific, overall, and segment-level visualizations that strengthen managerial decision-making. The second study, presented in chapter 3, explores the field of credit scoring, spotlighting the LLM's superior predictive performance and exceptional interpretability, which make it stand out amidst traditional models like logistic regression and decision trees, and even when compared to advanced models such as neural networks. Chapter 4, introducing the third study, offers a detailed analysis using the Logit Leaf Model (LLM) to demonstrate its capability to predict and comprehend the complexities of Non-Performing Loans (NPLs). This is achieved through a thorough examination of debtor, loan, and macroeconomic features. The model's ability to concurrently provide precise predictions and yield practical insights, when compared with various alternate credit risk models, accentuates its practicality in managing financial risk, especially within retail banking scenarios. Through a thorough exploration and combination of the studies mentioned above, this dissertation highlights the LLM's varied abilities in navigating through different but inherently data-driven fields. It raises discussion on the usefulness of hybrid segmentation-based models in making complex decisions, praising the LLM for its ability to combine predictive power with interpretability and act as a powerful tool across various applications. The dissertation also suggests areas for future research in chapter 5, encouraging further exploration into the scalability, adaptability, and potential improvements of the LLM across various sectors and analytical challenges.
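The Logit Leaf Model described in this abstract combines tree-based segmentation with one logistic regression per segment. A minimal sketch of that segmentation-then-logit idea, assuming scikit-learn building blocks (the class name, hyperparameters, and implementation details here are illustrative, not the thesis's actual code):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

class LogitLeaf:
    """Toy 'logit leaf' hybrid: a shallow decision tree segments the
    data, then a separate logistic regression is fitted inside each
    leaf, keeping both the segment structure and the per-segment
    coefficients interpretable."""

    def __init__(self, max_depth=2, min_samples_leaf=50):
        self.tree = DecisionTreeClassifier(
            max_depth=max_depth, min_samples_leaf=min_samples_leaf)
        self.leaf_models = {}

    def fit(self, X, y):
        self.tree.fit(X, y)
        leaves = self.tree.apply(X)          # leaf index per sample
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            if len(np.unique(y[mask])) > 1:  # fit a logit per segment
                self.leaf_models[leaf] = LogisticRegression(
                    max_iter=1000).fit(X[mask], y[mask])
            else:                            # pure leaf: constant prob.
                self.leaf_models[leaf] = float(y[mask][0])
        return self

    def predict_proba(self, X):
        leaves = self.tree.apply(X)
        proba = np.empty(len(X))
        for leaf, model in self.leaf_models.items():
            mask = leaves == leaf
            if not mask.any():
                continue
            proba[mask] = (model.predict_proba(X[mask])[:, 1]
                           if isinstance(model, LogisticRegression)
                           else model)
        return proba
```

The point of the design, as the abstract emphasizes, is that each leaf's logistic coefficients can be read segment by segment, unlike a single opaque model.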
2

Bai, Xiaoyu. "Micromagnetic Modeling of Thin Film Segmented Medium for Microwave-Assisted Magnetic Recording." Research Showcase @ CMU, 2018. http://repository.cmu.edu/dissertations/1131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this dissertation, a systematic modeling study has been conducted to investigate microwave-assisted magnetic recording (MAMR) and its related physics. Two different modeling approaches, effective field modeling and recording signal-to-noise ratio (SNR) modeling, have been used to understand the MAMR mechanism on segmented thin-film granular media. First, background information about perpendicular magnetic recording (PMR) and its limitations is introduced. The motivation for studying MAMR is to further improve the areal density capacity (ADC) of the hard disk drive (HDD) and to overcome the theoretical limitation of PMR. The development of recording thin-film media is also discussed, especially the evolution of multilayer composite media. Since the spin torque oscillator (STO) is the essential component in MAMR, different STO structures are discussed. The relation between STO settings (thickness, location, and frequency) and the ac field distribution is also explored. In effective field modeling, both head configuration and medium structure optimization have been investigated. The head configuration study covers the effective field distribution in relation to the field-generation-layer thickness, location, and frequency. In particular, a potential erasure is detected, caused by the imperfect circularity of the ac field, and several approaches are proposed to prevent it. Meanwhile, notched and graded segmentation structures are compared through effective field analysis in terms of field gradient and track width. It is found that MAMR with a notched Hk distribution can achieve both a high field gradient and a narrow track width simultaneously. In recording SNR modeling, the behavior of MAMR with a single-layer medium is first studied, and three phases are discovered. Proceeding to the multilayer medium, a practical issue is introduced: MAMR with insufficient ac field power and high medium damping. Since fabricating an STO with high ac power is very difficult, the issue is investigated from the medium side: through an optimized medium structure, the available ac field can be utilized more efficiently. It is found that finer segmentation in the upper part of the grain, matched to the ac field, yields more efficient use of the ac field power. Following this scenario, the graded and notched segmentation structures are studied in terms of SNR and track width. The traditional dilemma between recording SNR and track width in conventional PMR is partially solved using MAMR with a notched segmentation structure.
3

Jesuthasan, Nirmalakanth. "Modeling of thermofluid phenomena in segmented network simulations of loop heat pipes." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106276.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The overarching goal of the work presented in this thesis is to formulate, implement, test, and demonstrate cost-effective mathematical models and numerical solution methods for computer simulations of fluid flow and heat transfer in loop heat pipes (LHPs) operating under steady-state conditions. A segmented network thermofluid model for simulating steady-state operation of conventional LHPs with cylindrical and flat evaporators is proposed. In this model, the vapor-transport line, condenser pipe, and liquid-transport line are divided into longitudinal segments (or control volumes). Quasi-one-dimensional formulations, incorporating semi-empirical correlations for the related single- and two-phase phenomena, are used to iteratively impose balances of mass, momentum, and energy on each of the aforementioned segments, and collectively on the whole LHP. Variations of the thermophysical properties of the working fluid with temperature are taken into account, along with change in quality, pressure drop, and heat transfer in the two-phase regions, giving the proposed model enhanced capabilities compared to those of earlier thermofluid network models of LHPs. The proposed model is used to simulate an LHP for which experimental measurements are available in the literature; the predictions of the proposed model are in very good agreement with the experimental results. In earlier quasi-one-dimensional models of LHPs, the pressure drop for vapor flow through the grooves in the evaporator is computed using a friction-factor correlation that applies strictly only in the fully-developed region of fluid flows in straight ducts with impermeable walls. This approach becomes unacceptable when this pressure drop is a significant contributor to the overall pressure drop in the LHP. A more accurate correlation for predicting this pressure drop is needed. To fulfill this need, first, a co-located equal-order control-volume finite element method (CVFEM) for predicting three-dimensional parabolic fluid flow and heat transfer in straight ducts of uniform regular- and irregular-shaped cross-section is proposed. The methodology of the proposed CVFEM is also adapted to formulate a simpler finite volume method (FVM), and this FVM is used to investigate steady, laminar, Newtonian fluid flow and heat transfer in straight vapor grooves of rectangular cross-section, for parameter ranges representative of typical LHP operating conditions. The results are used to elaborate the features of a special fully-developed flow and heat transfer region (established at a distance located sufficiently downstream from the blocked end of the groove) and to propose novel correlations for calculating the overall pressure drop and also the bulk temperature of the vapor. These correlations are incorporated in the aforementioned quasi-one-dimensional model to obtain an enhanced segmented network thermofluid model of LHPs. Sintered porous metals of relatively low porosity (0.30 – 0.50) and small pore diameter (2.0 – 70 micrometers) are the preferred materials for the wick in LHPs. The required inputs to mathematical models of LHPs include the porosity, maximum effective pore size, effective permeability, and effective thermal conductivity of the liquid-saturated porous material of the wick. The determination of these properties by means of simple and effective experiments, procedures, and correlations is demonstrated using a sample porous sintered-powder-metal plate made of stainless steel 316. Finally, the capabilities of the aforementioned enhanced segmented network thermofluid model are demonstrated by using it to simulate a sample LHP operating under steady-state conditions with four different working fluids: ammonia, distilled water, ethanol, and isopropanol. The results are presented and comparatively discussed.
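The segmented (quasi-one-dimensional) network approach described above divides each line into control volumes and imposes balances segment by segment. A toy illustration of such a marching energy balance on a single-phase condenser line (all symbols and values are hypothetical and vastly simpler than the thesis's model, which also handles two-phase flow and property variations):

```python
def condenser_march(t_in=330.0, t_amb=295.0, n_seg=20,
                    m_dot=1e-3, cp=4180.0, h=500.0, a_seg=1e-3):
    """March a steady-state energy balance down n_seg control volumes:
        m_dot*cp*(T_in - T_out) = h*A_seg*(T_mean - T_amb)
    per segment, with T_mean = (T_in + T_out)/2 (single-phase liquid,
    constant properties -- purely illustrative)."""
    temps = [t_in]
    for _ in range(n_seg):
        ti = temps[-1]
        # closed-form T_out of the per-segment balance above
        to = (m_dot * cp * ti - h * a_seg * (ti / 2.0 - t_amb)) \
             / (m_dot * cp + h * a_seg / 2.0)
        temps.append(to)
    return temps

profile = condenser_march()
# temperatures decay monotonically from t_in toward t_amb
```

The full model in the thesis iterates such balances for mass and momentum as well, over all three lines, until the whole-loop balances converge.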
4

Ringenberg, Jordan. "Computerized 3D Modeling and Simulations of Patient-Specific Cardiac Anatomy from Segmented MRI." University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1406129522.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Roslyakova, Irina [Verfasser], Holger [Gutachter] Dette, and Ingo [Gutachter] Steinbach. "Modeling thermodynamical properties by segmented non-linear regression / Irina Roslyakova ; Gutachter: Holger Dette, Ingo Steinbach." Bochum : Ruhr-Universität Bochum, 2017. http://d-nb.info/1140222953/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zelinski, Michael E. "A segmented aperture space telescope modeling tool and its application to remote sensing as understood through image quality and image utility /." Online version of thesis, 2009. http://hdl.handle.net/1850/11658.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Steinberg, Andreas [Verfasser], Henriette [Akademischer Betreuer] Sudhaus, and Wolfgang [Gutachter] Rabbel. "Improved modeling of segmented earthquake rupture informed by enhanced signal analysis of seismic and geodetic observations / Andreas Steinberg ; Gutachter: Wolfgang Rabbel ; Betreuer: Henriette Sudhaus." Kiel : Universitätsbibliothek Kiel, 2021. http://d-nb.info/1236572122/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rocha, Tassiana Duarte da. "Análise numérica do comportamento de juntas entre aduelas de vigas protendidas." Universidade do Estado do Rio de Janeiro, 2012. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=5616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Several researchers have studied the behavior and the use of precast concrete segments, which make up segmental beams in structural systems, especially bridges and viaducts. For this reason, numerous studies supported by experimental tests and numerical analyses have been published in recent years. The behavior of these beams contrasts with that of classical monolithic beams in several respects, because the structure is composed of precast concrete elements that are prestressed after being placed in their final position. The prestressing may be bonded or unbonded. The main advantages of this construction system are speed and high quality control, so it is widely used, and there is a demand for studies predicting its actual behavior. This work presents a finite element model to simulate the behavior of beams composed of juxtaposed segments with no bonding material between the joints. The applied prestressing is bonded, and the analysis considers the nonlinearity of the joint region. The objective of this research is therefore to contribute to the study of the static structural behavior of segmental beams, with attention to the behavior of the joints, using a commercial program. The model employs standard discretization techniques via the finite element method (FEM), using the finite element program SAP2000 [90]. The model consists of plate elements suitable for concrete to represent the beam; the prestressing is introduced by means of two-dimensional bars that transfer stresses along their length; and the joints are implemented using the contact elements provided by the program, called Link. The analysis is two-dimensional and considers the effects of prestressing losses. This research also aims to study the contact elements, especially their deformation characteristics, in this computational tool. The model parameters were defined on the basis of experimental data available in the literature. The numerical model was calibrated and compared with experimental results obtained in laboratory tests.
9

Hartasánchez, Frenk Diego Andrés 1982. "Modeling and simulation of interlocus gene conversion." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/525842.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Duplicated regions of the genome, such as Segmental Duplications (SDs), are a pervasive feature of eukaryotic genomes and have been linked to phenotypic changes. Given their evolutionary relevance, having a neutral model to describe their evolution is essential. In this thesis, I report the development of SeDuS, a forward-in-time computer simulator of SD neutral evolution. Duplications are known to undergo a recombination process, termed interlocus gene conversion (IGC), which is known to affect the patterns of variation and linkage disequilibrium within and between duplicates. Here I describe the effects of overlapping crossover and IGC susceptible regions and of incorporating sequence similarity dependence of IGC. Furthermore, since SDs are potential targets of natural selection, I report potential confounding effects of IGC on test statistics when these are applied to duplications. Finally, I explore the possibility of combining results of different test statistics applied genome-wide to detect the presence of collapsed duplications.
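SeDuS itself is the author's simulator; purely as an illustration of the forward-in-time idea the abstract describes, a toy model with Wright-Fisher resampling, per-site mutation, and tract-based interlocus gene conversion between two paralogs might look like this (all parameters and design choices here are hypothetical, not taken from SeDuS):

```python
import numpy as np

def simulate_igc(n_ind=50, seq_len=200, mu=1e-3, igc_prob=0.05,
                 tract_len=20, generations=300, seed=1):
    """Forward-in-time toy model: each individual carries two
    paralogous duplicates. Each generation applies Wright-Fisher
    resampling, per-site mutation, and occasional interlocus gene
    conversion (IGC) copying a tract from one paralog onto the other,
    which homogenizes the duplicates."""
    rng = np.random.default_rng(seed)
    pop = np.zeros((n_ind, 2, seq_len), dtype=np.int8)  # 0/1 alleles
    for _ in range(generations):
        pop = pop[rng.integers(0, n_ind, n_ind)]        # WF resampling
        pop ^= (rng.random(pop.shape) < mu)             # mutation flips
        for i in range(n_ind):                          # IGC events
            if rng.random() < igc_prob:
                start = rng.integers(0, seq_len - tract_len + 1)
                donor = rng.integers(0, 2)
                pop[i, 1 - donor, start:start + tract_len] = \
                    pop[i, donor, start:start + tract_len]
    return pop

pop = simulate_igc()
# mean per-site divergence between the two paralogs
divergence = (pop[:, 0] != pop[:, 1]).mean()
```

Raising `igc_prob` in such a model tends to lower the paralog divergence, which is the homogenizing effect on variation patterns that the abstract refers to.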
10

Yeung, Anson Chi-Ming Electrical Engineering & Telecommunications Faculty of Engineering UNSW. "Polymer segmented cladding fibres: cross fibre modelling, design, fabrication and experiment." Publisher: University of New South Wales. Electrical Engineering & Telecommunications, 2009. http://handle.unsw.edu.au/1959.4/43656.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents the first research on the polymer segmented cladding fibre (PSCF), an emerging class of microstructured optical fibres (MOFs) which allows single-mode operation with an ultra-large core area. This research covers the modelling, design, fabrication and experimental study of the polymer optical cross-fibre (4-period SCF), whose cross-sectional view resembles a cross. A new wedge waveguide model has been formulated and applied to demonstrate that, for any given parameters, the cross fibre gives the same single-mode performance as the N-period SCFs (for N = 2, 6 and 8). These fibres behave identically if the high-index segment angle, θ1, is the same and the low-index segment angular width, θ2, is sufficiently large for adjacent mode coupling effects to be negligible. This remarkable finding has significant ramifications for SCF fabrication, design and performance. Theoretical predictions confirmed by experiments demonstrated that a cross-fibre is all that is needed to fabricate a large-core single-mode fibre with no geometry-induced birefringence. The effects of a high-index outer ring on the cross-fibre's single-mode performance have been systematically investigated for the first time. The study reveals that a ring index higher than the core index has very strong effects on single-mode performance. Within a narrow range of θ1, the minimum fibre length required for single-mode operation is reduced, but outside this angle range a longer single-mode length is required. Furthermore, the fibre can be anti-guiding if θ1 exceeds the cutoff angle. Incorporating the fabrication constraints, the optimal cross-fibre design with a high-index ring is achieved by optimising the relative index difference, the high-index segment angle and the core-cladding diameter ratio. Two preform-making techniques developed for cross-fibre fabrication are the cladding-segment-in-tube method and the core-cladding-segment-in-tube method. The innovative approach in these methods overcomes the problems of bubble formation and fractures, which are related to the complexity of the fibre structure and to the intrinsic properties and processing of the polymer, and it enables the successful drawing of single-mode fibres. This thesis reports the first experimental demonstration of single-mode operation of a large-core cross-fibre. Three experimental studies with different cross-fibre designs have demonstrated (i) large-core single-mode operation, (ii) high-index ring effects on fibre performance and (iii) an optimal cross-fibre design trial. In addition, the performance of the 8-period SCF has been demonstrated experimentally.
11

Holmes, Wendy Jane. "Modelling segmental variability for automatic speech recognition." Thesis, University College London (University of London), 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267859.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Mainka, Julia. "Impédance locale dans une pile à membrane H2/air (PEMFC) : études théoriques et expérimentales." Thesis, Nancy 1, 2011. http://www.theses.fr/2011NAN10042/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The aim of this Ph.D. thesis is to contribute to a better understanding of the low-frequency loop in impedance spectra of H2/air-fed PEMFC and to bring information about the main origin(s) of the oxygen transport impedance through the porous media of the cathode via locally resolved EIS. Different expressions of the oxygen transport impedance, alternatives to the one-dimensional finite Warburg element, are proposed. They account for phenomena occurring in the directions perpendicular and parallel to the electrode plane that are not usually considered: convection through the GDL and along the channel, finite proton conduction in the catalyst layer, and oxygen depletion between the cathode inlet and outlet. Special interest is brought to the oxygen concentration oscillations induced by the AC measuring signal that propagate along the gas channel, and to their impact on the local impedance downstream. These expressions of the oxygen transport impedance are used in an equivalent electrical circuit modeling the impedance of the whole cell. Experimental results are obtained with instrumented and segmented cells designed and built in our group. Comparing them with numerical results makes it possible to identify the parameters characterizing the physical and electrochemical processes in the MEA.
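For context, the one-dimensional finite-length Warburg element that the abstract takes as its baseline has a standard closed form, Z(ω) = R_d · tanh(√(jωτ))/√(jωτ). A minimal sketch of evaluating it over a frequency sweep (the R_d and τ values are illustrative, not taken from the thesis):

```python
import numpy as np

def finite_warburg(freq_hz, r_d=0.1, tau=0.2):
    """One-dimensional finite-length Warburg element (tanh form):
    Z = R_d * tanh(sqrt(j*w*tau)) / sqrt(j*w*tau), with w = 2*pi*f.
    Tends to the pure resistance R_d at low frequency and to zero
    at high frequency."""
    s = np.sqrt(1j * 2.0 * np.pi * np.asarray(freq_hz, dtype=float) * tau)
    return r_d * np.tanh(s) / s

freqs = np.logspace(-2, 4, 200)   # 10 mHz .. 10 kHz sweep
z = finite_warburg(freqs)         # complex impedance per frequency
```

Plotting `-z.imag` against `z.real` gives the familiar 45-degree diffusion branch closing into a low-frequency arc, the kind of feature the alternative transport impedances in the thesis are compared against.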
13

Denoziere, Guilhem. "Numerical Modeling of a Ligamentous Lumbar Motion Segment." Thesis, Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4998.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Eight out of ten people in the United States will have problems with low back pain at some point in their life. The most significant surgical treatments for low back pain can be distributed into two main groups of solutions: arthrodesis and arthroplasty. Spinal arthrodesis consists of the fusion of a degenerated functional spine unit (FSU) to alleviate pain and prevent mechanical instability. Spinal arthroplasty consists of the implantation of an artificial disc to restore the functionality of the degenerated FSU. The objective of this study is to analyze and compare the alteration of the biomechanics of the lumbar spine treated either by arthrodesis or arthroplasty. A three-dimensional finite element model of a ligamentous lumbar motion segment, constituted of two FSUs, was built and simulated through a static analysis with the finite element software ABAQUS. It was shown that the mobility of the segment treated by arthrodesis was reduced in all rotational degrees of freedom by an average of approximately 44%, relative to the healthy model. Conversely, the mobility of the segment treated by arthroplasty was increased in all rotational degrees of freedom by an average of approximately 52%. The FSU implanted with the artificial disc showed a high risk of instability and further degeneration. The mobility and the stresses in the healthy FSU, adjacent to the restored FSU in the segment treated by arthroplasty, were also increased. In conclusion, the simulation of the arthroplasty model showed more risks of instability and further degeneration, on the treated level as well as on the adjacent levels, than in the arthrodesis model.
14

Rasid, Mohd Azri Hizami. "Contribution to multi-physical studies of small synchronous-reluctance machine for automotive equipment." Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2260/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Dans une application d’équipement automobile, la conception optimale d’un actionneur nécessite la prise en compte simultanée de différents phénomènes physiques, tant en termes de performances attendues que de contraintes à respecter. Ces physiques comprennent la performance électromagnétique / électromécanique, le comportement thermique et le comportement vibro-acoustique. En prenant en compte le coût et la faisabilité en matière de fabrication, la machine synchro-réluctante (Syncrel) à rotor segmenté s’est révélée intéressante pour une application avec de fortes contraintes d’encombrement et thermiques. Cette étude a donc pour objectif d’évaluer la capacité de la machine Syncrel dans ces différentes physiques et de démontrer les interactions entre elles, qui peuvent affecter les performances de la machine en fonctionnement. Des modèles multi-physiques ont été développés et validés en utilisant une machine prototype conçue précédemment pour un actionneur d’embrayage électrique. En se servant des modèles validés, différents critères de performance des différentes topologies de rotor de la machine Syncrel ont aussi été comparés. À l’issue de l’étude, des modèles électromagnétiques, électromécaniques, thermiques et vibro-acoustiques validés sont à notre disposition pour être utilisés dans la conception de machines Syncrel à l’avenir. La machine Syncrel à rotor segmenté s’est montrée apte à être utilisée dans l’application d’embrayage électrique étudiée en particulier. Suite aux évaluations de performance dans les différentes physiques, des pistes d’amélioration ont également été proposées.
In an on-board automotive environment, optimal machine design requires the simultaneous consideration of numerous physical phenomena, both in terms of expected performance and in terms of constraints to be respected. The physics involved include electromagnetic / electromechanical performance, thermal behavior and vibro-acoustic behavior. Among the large choice of machines, with manufacturing cost and feasibility taken into account, the synchronous reluctance machine with a segmented rotor has been found particularly interesting for applications with severe ambient temperature and space limitations. This study therefore aims to evaluate the capability of the synchronous reluctance machine in all the physics mentioned and to show the interactions between these physics, and thus the performance alterations of the machine when operated in the automotive equipment environment. Multi-physics models were developed and confronted with experimental validations using a prototype machine that was developed for an electric clutch. Using the validated models, performance figures of synchronous reluctance machines with different rotor topologies were compared. As a result of the study, validated electromagnetic, electromechanical, thermal and vibro-acoustic models are now available as tools for future machine design. The synchronous reluctance prototype machine with a segmented rotor was shown to be suitable for the electric clutch application studied in particular. Following the performance evaluations in the different physics, improvements have also been suggested.
15

Reed, Jeremy T. "Acoustic segment modeling and preference ranking for music information retrieval." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This dissertation focuses on improving content-based recommendation systems for music. Specifically, progress in the development of music content-based recommendation systems has stalled in recent years due to some faulty assumptions:
1. most acoustic content-based systems for music information retrieval (MIR) assume a bag-of-frames model, where it is assumed that a song contains a simplistic, global audio texture;
2. genre, style, mood, and authors are appropriate categories for machine-oriented recommendation;
3. similarity is a universal construct and does not vary among different users.
The main contribution of this dissertation is to address these faulty assumptions by describing a novel approach in MIR that provides user-centric, content-based recommendations based on statistics of acoustic sound elements. First, this dissertation presents the acoustic segment modeling framework, which describes a piece of music as a temporal sequence of acoustic segment models (ASMs) representing individual polyphonic sound elements. A dictionary of ASMs generated in an unsupervised process defines a vocabulary of acoustic tokens that are able to transcribe new musical pieces. Next, standard text-based information retrieval algorithms use statistics of ASM counts to perform various retrieval tasks. Despite a simple feature set compared to other content-based genre recommendation algorithms, the acoustic segment modeling approach is highly competitive on standard genre classification databases. Fundamental to the success of the acoustic segment modeling approach is the ability to model acoustical semantics in a musical piece, which is demonstrated by the detection of musical attributes from temporal characteristics. Further, it is shown that the acoustic segment modeling procedure is able to capture the inherent structure of melody by providing near state-of-the-art performance on an automatic chord recognition task.
This dissertation demonstrates that some classification tasks, such as genre, possess information that is not contained in the acoustic signal; therefore, attempts at modeling these categories using only the acoustic content is ill-fated. Further, notions of music similarity are personal in nature and are not derived from a universal ontology. Therefore, this dissertation addresses the second and third limitation of previous content-based retrieval approaches by presenting a user-centric preference rating algorithm. Individual users possess their own cognitive construct of similarity; therefore, retrieval algorithms must demonstrate this flexibility. The proposed rating algorithm is based on the principle of minimum classification error (MCE) training, which has been demonstrated to be robust against outliers and also minimizes the Parzen estimate of the theoretical classification risk. The outlier immunity property limits the effect of labels that arise from non-content-based sources. The MCE-based algorithm performs better than a similar ratings prediction algorithm. Further, this dissertation discusses extensions and future work.
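The "acoustic tokens plus text IR" pipeline described above can be caricatured in a few lines: quantise frames against a token dictionary, then compare songs by their token-count statistics. This is a toy sketch with scalar stand-in features and a fixed dictionary; the actual ASMs are model sequences learned unsupervised from polyphonic audio:

```python
from collections import Counter
import math

def quantise(frames, centroids):
    """Assign each frame to the nearest 'acoustic segment model' label."""
    return [min(range(len(centroids)), key=lambda i: abs(f - centroids[i]))
            for f in frames]

def cosine(c1, c2):
    """Cosine similarity between two token-count vectors."""
    dot = sum(c1[t] * c2[t] for t in set(c1) | set(c2))
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2)

centroids = [0.0, 1.0, 2.0]                       # stand-in ASM dictionary
song_a = Counter(quantise([0.1, 0.9, 1.1, 2.2], centroids))
song_b = Counter(quantise([0.2, 1.2, 0.8, 1.9], centroids))
similarity = cosine(song_a, song_b)
```

Once songs are reduced to token counts, any standard text-retrieval machinery (tf-idf weighting, ranking) applies unchanged, which is the point of the framework.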
16

Chang, Jane W. (Jane Wen) 1970. "Near-miss modeling : a segment-based approach to speech recognition." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/46179.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Serridge, Benjamin M. (Benjamin Michael) 1973. "Context-dependent modeling in a segment-based speech recognition system." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/43583.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (leaves 78-80).
by Benjamin M. Serridge.
M.Eng.
18

Sun, Yingcheng. "Topic Modeling and Spam Detection for Short Text Segments in Web Forums." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1575281495398615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Glisse, Marc. "Combinatoire des droites et segments pour la visibilité 3D." Phd thesis, Université Nancy II, 2007. http://tel.archives-ouvertes.fr/tel-00192337.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis mainly presents results on the combinatorics of the lines and segments that arise naturally in the study of 3D visibility problems. We first present results on the size of the silhouette of an object seen from a point, that is, on the complexity of the set of lines or segments tangent to the object and passing through the point. In particular, we present the first non-trivial theoretical bounds for non-convex polyhedra, namely that, under reasonable assumptions, the average complexity of the silhouette is at most the square root of the complexity of the polyhedron, a phenomenon widely observed in computer graphics. We also present bounds, on average and in the worst case, on the number of lines and segments tangent to four objects in a scene composed of polyhedral or spherical objects. These bounds in particular give hope that the complexity of global data structures such as the visibility complex need not be prohibitive. The bounds on polytopes are also the first to exploit the structural properties of scenes composed of triangles organised into polytopes in a realistic way, that is, not necessarily disjoint. Finally, these bounds induce the first non-trivial bounds on the complexity of the shadows cast by non-point light sources. The results presented in this thesis significantly improve the state of the art on the combinatorial properties of three-dimensional visibility structures and should foster future algorithmic developments for these problems.
20

Layton, Todd Samuel. "A generalized (k, m)-segment mean algorithm for long term modeling of traversable environments." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91695.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 75-76).
We present an efficient algorithm for computing semantic environment models and activity patterns in terms of those models from long-term value trajectories defined as sensor data streams. We use an expectation-maximization approach to calculate a locally optimal set of path segments with minimal total error from the given data signal. This process reduces the raw data stream to an approximate semantic representation. The algorithm's speed is greatly improved by the use of lossless coresets during the iterative update step, as they can be calculated in constant amortized time to perform operations with otherwise linear runtimes. We evaluate the algorithm for two types of data, GPS points and video feature vectors, on several data sets collected from robots and human-directed agents. These experiments demonstrate the algorithm's ability to reliably and quickly produce a model which closely fits its input data, at a speed which is empirically no more than linear relative to the size of that data set. We analyze several topological maps and representative feature sets produced from these data sets.
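A stripped-down version of the segment-fitting objective can be written with plain dynamic programming. This is purely illustrative of what "minimal total error over k segments" means: constant rather than linear segments, no coresets, and exhaustive DP instead of the thesis's EM approach:

```python
def k_segment_mean(signal, k):
    """Optimal split of a 1-D signal into k constant segments (least squares).

    Returns (total squared error, list of (start, end) index pairs).
    """
    n = len(signal)

    def seg_cost(i, j):
        # squared error of fitting one mean value to signal[i..j]
        chunk = signal[i:j + 1]
        mu = sum(chunk) / len(chunk)
        return sum((x - mu) ** 2 for x in chunk)

    INF = float("inf")
    best = [[INF] * (k + 1) for _ in range(n + 1)]  # best[j][s]: first j pts, s segments
    back = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, n + 1):
        for s in range(1, k + 1):
            for i in range(s - 1, j):
                c = best[i][s - 1] + seg_cost(i, j - 1)
                if c < best[j][s]:
                    best[j][s], back[j][s] = c, i
    # recover the segment boundaries by backtracking
    segs, j, s = [], n, k
    while s > 0:
        i = back[j][s]
        segs.append((i, j - 1))
        j, s = i, s - 1
    return best[n][k], segs[::-1]
```

The coreset idea in the thesis replaces the O(n^2) cost evaluations above with compressed summaries that preserve the segment-fitting error, which is what makes long-term data streams tractable.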
by Todd Samuel Layton.
M. Eng.
21

Turunc, Cagri. "An Implementation Of Ekf Slam With Planar Segments." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614906/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Localization and mapping are vital capabilities for a mobile robot. These two capabilities strongly depend on each other, and executing both operations simultaneously is called SLAM (Simultaneous Localization and Mapping). The SLAM problem requires the environment to be represented with an abstract mapping model. It is possible to construct a map from a point cloud of the environment via scanner sensor systems. On the other hand, extracting higher-level features from point clouds and using these extracted features as input for the mapping system is also a possible solution for SLAM. In this work, a 4D feature-based EKF SLAM system is constructed and the open form of the equations of the algorithm is presented. The algorithm takes the center of mass and direction of features as input parameters and executes EKF SLAM with these parameters. The performance of 4D feature-based EKF SLAM was examined and compared with 3D EKF SLAM via Monte Carlo simulations. In this way, the contribution of adding a direction vector to 3D features is investigated and illustrated via graphs of Monte Carlo simulations. In the second part of the work, a scanner sensor system with an IR distance finder is designed and constructed. An algorithm is presented to extract planar features from the data collected by the sensor system. A noise model is proposed for the output features of the sensor, and the 4D EKF SLAM algorithm is executed with the extracted features of the scanner system. In this way, the performance of the 4D EKF SLAM algorithm is tested with real sensor data and the results are compared with those for 3D features. Overall, the contribution of using 4D features instead of 3D ones is examined by comparing the performance of the 3D and 4D algorithms with simulation results and real sensor data.
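The measurement-update step at the heart of any EKF SLAM variant can be sketched generically. Here a hypothetical 4-D planar feature (centre of mass plus direction) is observed directly through an identity model, a deliberate simplification of the thesis's actual observation equations:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update (a textbook sketch, not the thesis code).

    x, P : state mean and covariance
    z    : measurement; h : predicted measurement h(x); H : Jacobian of h
    R    : measurement noise covariance
    """
    y = z - h                        # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy 4-D feature state [cx, cy, nx, ny] = centre of mass + direction,
# observed directly (H = identity); all values purely illustrative.
x = np.array([1.0, 2.0, 1.0, 0.0])
P = np.eye(4)
z = np.array([1.2, 2.2, 0.9, 0.1])
H = np.eye(4)
x_new, P_new = ekf_update(x, P, z, x.copy(), H, np.eye(4))
```

With equal prior and measurement covariance, the update splits the difference between prediction and measurement and halves the covariance, which is a quick sanity check on the gain computation.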
22

Turunc, Cagri. "An Implementation Of 3d Slam With Planar Segments." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12614928/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Localization and mapping are vital capabilities for a mobile robot. These two capabilities strongly depend on each other, and executing both operations simultaneously is called SLAM (Simultaneous Localization and Mapping). The SLAM problem requires the environment to be represented with an abstract mapping model. It is possible to construct a map from a point cloud of the environment via scanner sensor systems. On the other hand, extracting higher-level features from point clouds and using these extracted features as input for the mapping system is also a possible solution for SLAM. In this work, a 4D feature-based EKF SLAM system is constructed and the open form of the equations of the algorithm is presented. The algorithm takes the center of mass and direction of features as input parameters and executes EKF SLAM with these parameters. The performance of 4D feature-based EKF SLAM was examined and compared with 3D EKF SLAM via Monte Carlo simulations. In this way, the contribution of adding a direction vector to 3D features is investigated and illustrated via graphs of Monte Carlo simulations. In the second part of the work, a scanner sensor system with an IR distance finder is designed and constructed. An algorithm is presented to extract planar features from the data collected by the sensor system. A noise model is proposed for the output features of the sensor, and the 4D EKF SLAM algorithm is executed with the extracted features of the scanner system. In this way, the performance of the 4D EKF SLAM algorithm is tested with real sensor data and the results are compared with those for 3D features. Overall, the contribution of using 4D features instead of 3D ones is examined by comparing the performance of the 3D and 4D algorithms with simulation results and real sensor data.
23

Knecht, Casey Scott. "Crash Prediction Modeling for Curved Segments of Rural Two-Lane Two-Way Highways in Utah." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis contains the results of the development of crash prediction models for curved segments of rural two-lane two-way highways in the state of Utah. The modeling effort included the calibration of the predictive model found in the Highway Safety Manual (HSM) as well as the development of Utah-specific models developed using negative binomial regression. The data for these models came from randomly sampled curved segments in Utah, with crash data coming from years 2008-2012. The total number of randomly sampled curved segments was 1,495. The HSM predictive model for rural two-lane two-way highways consists of a safety performance function (SPF), crash modification factors (CMFs), and a jurisdiction-specific calibration factor. For this research, two sample periods were used: a three-year period from 2010 to 2012 and a five-year period from 2008 to 2012. The calibration factor for the HSM predictive model was determined to be 1.50 for the three-year period and 1.60 for the five-year period. These factors are to be used in conjunction with the HSM SPF and all applicable CMFs. A negative binomial model was used to develop Utah-specific crash prediction models based on both the three-year and five-year sample periods. A backward stepwise regression technique was used to isolate the variables that would significantly affect highway safety. The independent variables used for negative binomial regression included the same set of variables used in the HSM predictive model along with other variables such as speed limit and truck traffic that were considered to have a significant effect on potential crash occurrence. The significant variables at the 95 percent confidence level were found to be average annual daily traffic, segment length, total truck percentage, and curve radius. 
The main benefit of the Utah-specific crash prediction models is that they provide a reasonable level of accuracy for crash prediction yet only require four variables, thus requiring much less effort in data collection compared to using the HSM predictive model.
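The two model forms mentioned above are simple to state: the HSM calibration factor is the ratio of total observed to total predicted crashes over the sample, and the negative binomial model predicts mean crash frequency through a log link. A sketch with illustrative (not fitted) coefficients and made-up crash counts:

```python
import math

def hsm_calibration_factor(observed, predicted):
    """Jurisdiction calibration factor: total observed / total predicted crashes."""
    return sum(observed) / sum(predicted)

def nb_predicted_crashes(aadt, length_mi, truck_pct, radius_ft, coef):
    """Mean crash frequency via a log link, using the four significant
    variables named in the abstract. The coefficients are placeholders,
    not the thesis's fitted values."""
    b0, b_aadt, b_len, b_truck, b_rad = coef
    return math.exp(b0
                    + b_aadt * math.log(aadt)
                    + b_len * math.log(length_mi)
                    + b_truck * truck_pct
                    + b_rad * math.log(radius_ft))

# Made-up observed/predicted counts for four sampled curves:
C = hsm_calibration_factor(observed=[3, 0, 2, 1], predicted=[1.5, 0.5, 1.0, 1.0])
mu = nb_predicted_crashes(aadt=5000, length_mi=0.5, truck_pct=10.0,
                          radius_ft=1000.0, coef=(-7.0, 0.8, 1.0, 0.01, -0.2))
```

A calibration factor above 1 (as found here for Utah, 1.50 and 1.60) means the uncalibrated HSM SPF under-predicts crashes for the jurisdiction, so predictions are scaled up.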
24

Durkin, Jennifer Dowling James. "Development of a geometric modelling approach for human body segment inertial parameter estimation /." *McMaster only, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
25

Jaitman, Abigail. "Multi-segment foot modelling to enable an understanding of altered gait in diabetes." Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/88864/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Diabetes is a multisystemic disease that affects the whole human body, in particular the musculoskeletal system. Muscles, tendons, ligaments and bone marrow are its main victims, the foot being the most common target. Changes in its anatomy can occur rapidly, and therefore an early diagnosis is imperative in order to provide the appropriate medical care, thus avoiding amputation, which carries a high rate of morbidity. In order to understand the biomechanical implications of the disease, it is necessary to develop new and improved models that allow the study of the foot during gait. The difficulties arising in foot modelling are inherent in its complex composition, thus most models simplify the foot geometry, structure, materials and kinetic analysis. This thesis presents a new approach to foot modelling, combining readily available non-invasive methodologies to develop multi-segment foot models. This research helps in the in-depth understanding of the effects of changes in structure and shape of the foot brought about by diabetes, and in the evaluation of the effects of interventions and long-term rehabilitation. Intermediate results are presented in order to establish the reliability of the proposed methods, developing first a new method for simultaneous plantar pressure and gait study. New approaches to muscle-tendon length and moment-arm measurement are tested and validated, following an analysis of different pennation-angle assumptions for force-production assessment. Both extrinsic and intrinsic muscles are included in the model using the Hill muscle model. Stiffness and damping parameters are estimated on a per-subject basis. In order to model the soft tissue, which is of particular interest in diabetic patients, a model consisting of a spring and damper in parallel is proposed. Parameters are presented for 15 subjects with the purpose of characterising the properties of the soft tissue under the calcaneus (heel pad), metatarsal heads and hallux. 
A further analysis is provided by simulating different diabetic foot injuries and comparing their effect in joint range of movement and moment and soft tissue. Combined, these studies produce a complete subject-specific musculoskeletal and soft tissue model that enhances our understanding of both normal and altered gait.
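The parallel spring-damper (Voigt-type) soft-tissue element described above can be sketched directly. The stiffness and damping values below are placeholders; the thesis estimates them per subject and per site (heel pad, metatarsal heads, hallux):

```python
def soft_tissue_force(deflection, velocity, k, c):
    """Parallel spring-damper model of plantar soft tissue.

    F = k * x + c * xdot, clamped at zero because contact can only push:
    the tissue cannot pull the foot down during unloading.
    deflection (m), velocity (m/s), k (N/m), c (N*s/m).
    """
    f = k * deflection + c * velocity
    return max(f, 0.0)

# Loading phase, 5 mm compression, with made-up parameters:
f_load = soft_tissue_force(deflection=0.005, velocity=0.02, k=150e3, c=1e3)
# Rapid unloading: the damper term would make F negative, so it clamps to 0.
f_unload = soft_tissue_force(deflection=0.0001, velocity=-1.0, k=150e3, c=1e3)
```

Fitting k and c per subject is what lets such a model pick up the stiffening of diabetic plantar tissue that the thesis is interested in.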
26

Day, Judd S. "Use of a magnetic tracking device for three-dimensional link segment modeling in manual materials handling." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq20623.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Rastenė, Irma. "Autoregresinio modelio pasikeitusio segmento testavimas ir vertinimas." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2011. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2011~D_20110628_134442-76842.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Disertacijoje nagrinėjamas pirmos eilės autoregresinio modelio pasikeitusio segmento testavimo ir vertinimo uždavinys. Aprašomo modelio epideminio pasikeitimo pradžia ir ilgis nėra žinomi. Pasiūlyti kriterijai pasikeitusio segmento testavimui, kurie pagrįsti modelio paklaidų įvertinių dalinių sumų ir modelio parametro dalinių įvertinių laužčių procesais. Šiems procesams gautos ribinės teoremos Hiolderio erdvėse. Nurodomas testų statistikų ribinis elgesys esant teisingai nulinei ir alternatyviajai hipotezėms. Iš empirinio kriterijų galios tyrimo rezultatų matyti, kad pasiūlytų testų galia didžiausia aptinkant pasikeitimus iš stacionarios būklės į nestacionarią arba esant artimoms vienetui modelio parametro reikšmėms. Taip pat įrodoma, kad mažiausių kvadratų metodu gauti pasikeitusio segmento pradžios ir ilgio įverčiai bei autoregresinio modelio su pasikeitusiu segmentu parametrų įverčiai yra suderintieji bei pateikiamas jų konvergavimo greitis.
In the doctoral dissertation, we consider the problems of testing for and estimating a changed segment, with unknown starting position and duration of the epidemic state, in a first-order autoregressive model. The proposed tests are based on partial sums of model residuals and on polygonal-line processes of partial estimators of the model parameter. We derive asymptotic results for these processes in Hölder spaces. The behavior of the test statistics under the null hypothesis of no change and under the alternative is provided. Empirical power analysis has shown that the tests are most powerful when the absolute value of the model parameter is close to one or when the autoregressive process changes from a stationary state to a nonstationary one. We prove the consistency of the least-squares changed-segment estimators and provide their convergence rates.
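The residual-partial-sum idea behind the proposed tests can be illustrated with a crude scan statistic: fit the AR(1) parameter by least squares, form residuals, then take the maximum standardised partial sum of residuals over all candidate segments. The thesis works with Hölder-norm functionals of the residual polygonal line process, not this simple form:

```python
def ar1_changed_segment_statistic(x):
    """Least-squares AR(1) fit plus a segment scan over residual partial sums.

    Returns (phi_hat, max over segments (i, j] of |sum of residuals| / sqrt(len)).
    Illustrative only; large values suggest an epidemic change in the series.
    """
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(v * v for v in x[:-1]) or 1.0
    phi = num / den
    res = [x[t] - phi * x[t - 1] for t in range(1, len(x))]
    # prefix sums of residuals, S[j] = r_1 + ... + r_j
    S = [0.0]
    for r in res:
        S.append(S[-1] + r)
    n, best = len(res), 0.0
    for i in range(n):
        for j in range(i + 1, n + 1):
            best = max(best, abs(S[j] - S[i]) / (j - i) ** 0.5)
    return phi, best
```

On a series that follows one AR(1) law throughout, the statistic stays small; inserting a temporary level shift (an "epidemic" segment) inflates it, which is the behaviour the formal tests quantify.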
28

Ghyoot, Christiaan Jacob. "The modelling of particle build up in shell-and-tube heat exchangers due to process cooling water / Christiaan Jacob Ghyoot." Thesis, North-West University, 2013. http://hdl.handle.net/10394/9511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Sasol Limited experiences extremely high particulate fouling rates inside shell-and-tube heat exchangers that utilize process cooling water. The water and foulants are obtained from various natural and process sources and have irregular fluid properties. The fouling eventually obstructs flow on the shell side of the heat exchanger to such an extent that the tube bundles have to be replaced every nine months. Sasol requested that certain aspects of this issue be addressed. To better understand the problem, the effects of various tube and baffle configurations on the sedimentation rate in a shell-and-tube heat exchanger were numerically investigated. Single-segmental, double-segmental and disc-and-doughnut baffle configurations, in combination with square and rotated triangular tube configurations, were simulated by using the CFD software package, STAR-CCM+. In total, six configurations were investigated. The solution methodology was divided into two parts. Firstly, steady-state solutions of the six configurations were used to identify the best performing model in terms of large areas with high velocity flow. The results identified both single-segmental baffle configurations to have the best performance. Secondly, transient multiphase simulations were conducted to investigate the sedimentation characteristics of the two single-segmental baffle configurations. It was established that the current state of available technology cannot adequately solve the detailed simulations in a reasonable amount of time and results could only be obtained for a time period of a few seconds. By simulating the flow fields for various geometries in steady-state conditions, many of the observations and findings of literature were verified. The single-segmental baffle configurations have higher pressure drops than double-segmental and disc-and-doughnut configurations. In similar fashion, the rotated triangular tube configuration has a higher pressure drop than the square arrangement. 
The single-segmental configurations have on average higher flow velocities and reduced cross-flow mass flow fractions. It was concluded from this study that the single-segmental baffle with rotated triangular tube configuration had the best steady-state performance. Some results were extracted from the transient multiphase simulations. The transient multiphase flow simulation of the single-segmental baffle configurations showed larger concentrations of stagnant sediment for the rotated triangular tube configuration versus larger concentrations of suspended/flowing sediment in the square tube configuration. This result was offset by the observation that the downstream movement of sediment was quicker for the rotated triangular tube configuration. No definitive results could be obtained, but from the available results, it can be concluded that the configuration currently implemented at Sasol is best suited to handle sedimentation. This needs to be verified in future studies by using advanced computational resources and experimental results.
Thesis (MIng (Mechanical Engineering))--North-West University, Potchefstroom Campus, 2013
29

Andersson, Per-Åke. "Multi-year maintenance optimisation for paved public roads - segment based modelling and price-directive decomposition." Doctoral thesis, Linköpings universitet, Optimeringslära, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8907.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
I avhandlingen studeras hur kostnadseffektiva underhålls- (uh-)planer för belagd väg kan genereras, på basis av information om aktuellt vägytetillstånd och funktionella modeller för kostnads- och tillståndsförändringar, delvis utvecklade i samarbete med svenska Vägverket (VV). Tilltänkt användning är på strategisk och programnivå, innan mer detaljerad objektinformation finns att tillgå. Till skillnad från hittills använda modeller, så genereras individuella uh-planer för varje vägsegment (en homogen vägsträcka vad gäller aktuellt beläggningstillstånd och beläggningshistorik), i kontinuerliga tillstånds- och åtgärdsrum. Genom användning av Lagrangerelaxerande optimeringsteknik, så kan de speciella nytto/kostnads-kvot-villkor som VV ålägger varje uh-objekt naturligen hanteras med dualpriser för budgetvillkoren. Antalet vägsegment som konkurrerar om budgetmedlen är vanligtvis stort. Data från VV:s Vägdatabank för Värmland har använts, omfattande ca 9000 vägsegment. Genom den stora datamängden har datorprogrammen implementerats för parallellbearbetning. Under avhandlingsarbetet har projektet beviljats tillgång till Monolith PCklustret vid NSC. För att kunna reducera optimeringskörtiderna har modell- och metodutveckling varit nödvändig. Genom att aggregera vägsegmenten till vägklasser har goda startvärden på dualpriserna erhållits. Genom utvecklingen av en speciell restvärdesrutin har den explicit behandlade tidsperioden kunnat reduceras. Vid lösandet av det duala subproblemet har speciell uppmärksamhet ägnats åt de diskretiseringseffekter som uppstår i metoden dynamisk programmering. En typ av tillämpning avser ett delvägnät, exempelvis en väg. Valideringsstudier har genomförts på väg 63 i Värmland – med lovande men inte tillfredsställande resultat (se nedan). En speciell modell för samordnat uh beaktar stordriftsfördelarna vid samtidig åtgärd på en hel vägsträcka. Den andra huvudtypen av studier gäller ett helt nätverk. 
Flera metodtyper har tillämpats, både för att lösa de relaxerade optimeringsproblemen och för att generera uhplaner som uppfyller budgetvillkoren. För en anständig diskretisering är körtiderna för hela Värmland mindre än 80 CPU-timmar. Genom en a posteriori primal heuristik reduceras kraven på parallellbearbetning till ett litet PC-kluster. Avhandlingen studerar vidare effekterna av omfördelade budgetmedel samt en övergång till en transparent, stokastisk modell – vilka båda visar små avvikelser från basmodellen. Optimeringsresultaten för Värmland indikerar att budgetnivåer på ca 40% av Värmlands verkliga uh-budget är tillräckliga. Dock saknas viktiga kostnadsdrivande faktorer i denna första modellomgång, exempelvis vissa funktionella prestanda (säkerhet), all miljöpåverkande prestanda (buller etc.) och strukturell prestanda (ex.vis bärighet, som enbart modelleras via ett åldersmått). För ökad tilltro till PMS i allmänhet och optimering i synnerhet, bör avvikelserna analyseras ytterligare och leda till förbättringar vad gäller tillståndsmätning, tillståndseffekt- & kostnadsmodellering samt matematisk modellering & implementering.
The thesis deals with the generation of cost-efficient maintenance plans for paved roads, based on database information about the current surface conditions and functional models for costs and state changes, partly developed in cooperation with Vägverket (VV, Swedish Road Administration). The intended use is in a stage of budgeting and planning, before concrete project information is available. Unlike the models used up to now, individual maintenance plans can be formulated for each segment (a homogeneous road section as to the current pavement state and paving history), in continuous state and works spaces. By using Lagrangean relaxation optimisation techniques, the special benefit/cost-ratio constraints that VV puts on each maintenance project can be naturally mastered by dual prices for the budget constraints. The number of segments competing for budget resources is usually large. Data from the VV Vägdatabank (SRA Road Database) in county Värmland were used, comprising around 9000 road segments. Due to the large amount of data, the implemented programs rely on parallel computation. During the thesis work, access to the PC-cluster Monolith at NSC was granted. In order to reduce optimisation run times, model and method development was needed. By aggregating the road segments into road classes, good initial values of the dual prices were achieved. By adding new state dimensions, the use of the Markov property could be motivated. By developing a special residual-value routine, the explicitly considered time period could be reduced. In solving the dual subproblem, special attention was paid to the discretization effects in the dynamic programming approach. One type of study is on a sub-network, e.g. a road. Validation studies were performed on road 63 in Värmland – with promising but not satisfactory results (see below). A special model for co-ordinated maintenance considers the fine-tuned cost effects of simultaneous maintenance of contiguous road segments. 
The other main type of study is for a whole network. Several method types have been applied, both for solving the relaxed optimisation problems and for generating maintenance plans that fit the budgets. For a decent discretization, the run time for the whole Värmland network is less than 80 CPU-hours. An a posteriori primal heuristic reduces the demands for parallel processing to a small PC-cluster. The thesis further studies the effects of redistributing budget means, as well as turning to a transparent stochastic model – both showing modest deviations from the basic model. Optimisation results for Värmland indicate budget levels around 40% of the actual Värmland budget as sufficient. However, important cost triggers are missing in this first model round, e.g., certain functional performance (safety), all environmental performance (noise etc.) and structural performance (e.g. bearing capacity, only modelled by an age measure). For increased credibility of PMS in general and optimisation in particular, the discrepancies should be further analysed and lead to improvements as to condition monitoring, state effect & cost modelling, and mathematical modelling & implementation.
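The budget-dual mechanism described in this abstract, where each road segment solves its own subproblem priced by a dual multiplier on the shared budget, can be sketched in a few lines. This is a minimal illustration with invented (cost, benefit) plan alternatives and a simple bisection on the dual price; it is not the thesis's state-transition, cost, or dynamic-programming models.

```python
# Minimal sketch of Lagrangean relaxation for budget-constrained
# maintenance planning. The per-segment plan data are hypothetical.

def solve_relaxed(segments, lam):
    """For a fixed dual price lam, each segment independently picks the
    plan maximising benefit - lam * cost (the relaxed subproblem)."""
    choice, total_cost, total_benefit = [], 0.0, 0.0
    for plans in segments:
        best = max(plans, key=lambda p: p[1] - lam * p[0])  # p = (cost, benefit)
        choice.append(best)
        total_cost += best[0]
        total_benefit += best[1]
    return choice, total_cost, total_benefit

def lagrangean_plan(segments, budget, lo=0.0, hi=100.0, iters=60):
    """Bisect on the dual price until the relaxed solution fits the budget."""
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        _, cost, _ = solve_relaxed(segments, lam)
        if cost > budget:
            lo = lam          # too expensive: raise the price of budget money
        else:
            hi = lam          # feasible: try a cheaper price
    return solve_relaxed(segments, hi)

# Three road segments, each with (cost, benefit) plan alternatives,
# including a "do nothing" option at zero cost.
segments = [
    [(0, 0), (4, 10), (7, 12)],
    [(0, 0), (3, 4)],
    [(0, 0), (5, 9)],
]
plan, cost, benefit = lagrangean_plan(segments, budget=9)
```

Raising the dual price makes expensive plans less attractive in every subproblem simultaneously, which is how a single budget constraint is enforced without coupling the segments directly.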
30

Manos, Alexandros Sterios. "A study on out-of-vocabulary word modelling for a segment-based keyword spotting system." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/39394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (leaves 93-95).
by Alexandros Sterios Manos.
M.S.
31

Neto, Leoncio Claro de Barros. "Modelagem em geometria digital aprimorada por técnicas adaptativas de segmentos de retas." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-11082011-135522/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Aiming at representing digital straight lines, digitized segments and arcs, each of the available research approaches has its advantages and appropriate applications. However, considering the complexities of real-world scenarios, the use of these representations is not so popular in situations that require flexible models or that involve spurious interferences. Adaptive technologies are computer-science formalisms capable of changing their behavior dynamically, without the interference of external agents, in response to input stimuli. By being able to respond to the aforementioned changing environmental conditions, adaptive devices naturally tend to exhibit the flexibility required to act in dynamic scenarios. Thus, this work investigates an alternative based on the adaptive finite automaton, by means of a device called the adaptive digitized straight line segment, which incorporates the expressive power to represent parameters of these segments. Among these parameters are the ability to represent tolerances, scalability, and errors caused by deviations in the angle or length of the mentioned segments, resulting in more flexible structures. Since syntactic methods are structural, adaptive digitized straight line segments are modeled by sets of rules, starting from primitives, and the corresponding adaptive functions are conceived for changing the states and transition rules. Subsequently, more elaborate structures related to digital arcs are conceived, whose strings stimulate, in a single step, adaptive finite automata that implement adaptive digitized straight line segments. The implementations use a tool whose core is a simulator for editing the files that make up the automata.
Consequently, the proposed method becomes a relatively simple and intuitive alternative compared with existing approaches, exhibiting learning capability in addition to being computationally powerful.
For the representation of digital straight lines, digitized straight line segments and arcs, each of the available research approaches has its advantages and suitable applications. However, taking into account the complexities of real-world scenarios, the use of these representations is not so popular in situations that require flexible models or that involve spurious interferences. Adaptive technologies are computer-science formalisms able to change their behavior dynamically, without the interference of external agents, in response to incoming stimuli. By being able to respond to changing environmental conditions, adaptive devices naturally tend to have the flexibility required to work in dynamic scenarios. Thus, the purpose of this study is to investigate an alternative based on the adaptive finite automaton through a device called the adaptive digitized straight line segment, which incorporates the expressive power to represent parameters of these segments. Among these parameters, emphasis is given to the ability to represent tolerances, scalability, or errors caused by deviations in the angle or length of the mentioned segments, resulting in more flexible structures. Since syntactic methods are structural, adaptive digitized straight line segments are modeled by sets of rules, starting from primitives, and the corresponding adaptive functions are conceived to amend the set of states and the transition rules. Later, more elaborate structures related to digital arcs are designed, whose strings stimulate, in just a single step, adaptive finite automata that implement adaptive digitized straight line segments. The implementations use a simulator for editing the files that compose the automata. Consequently, the proposed method proves to be a simple and intuitive alternative capable of learning, besides being computationally powerful.
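As a loose illustration of the automaton view of digitized straight segments, the sketch below checks only the classic necessary (not sufficient) condition that a digital straight segment uses at most two Freeman chain-code directions differing by 1 (mod 8), with a tolerance counter standing in for the flexibility of the adaptive devices. It is an assumption-laden toy, not the adaptive-automaton formalism of the thesis.

```python
# Toy recogniser over Freeman chain codes (digits 0-7). The tolerance
# parameter, which absorbs a bounded number of deviating moves, is a
# hypothetical stand-in for the adaptive segments' tolerance handling.

def accepts(chain, tolerance=0):
    """Accept a chain-code string if, ignoring up to `tolerance` deviating
    moves, it uses at most two Freeman directions differing by 1 mod 8."""
    dirs = {}
    for c in chain:
        dirs[int(c)] = dirs.get(int(c), 0) + 1
    # Keep the two most frequent directions; the rest count as deviations.
    ranked = sorted(dirs, key=dirs.get, reverse=True)
    core, rest = ranked[:2], ranked[2:]
    if sum(dirs[d] for d in rest) > tolerance:
        return False
    if len(core) == 2 and min((core[0] - core[1]) % 8,
                              (core[1] - core[0]) % 8) != 1:
        return False
    return True

assert accepts("0101001010")        # two adjacent directions: DSS-like
assert not accepts("0404")          # opposite directions: not straight
assert accepts("0101201010", 1)     # one spurious move absorbed
```

A full adaptive implementation would additionally rewrite its own transition rules as deviations are encountered, which is exactly the expressive power the thesis attributes to adaptive finite automata.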
32

Rastenė, Irma. "Testing and estimating changed segment in autoregressive model." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2011. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2011~D_20110628_134429-88914.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In the doctoral dissertation, we consider the problem of testing for and estimating a changed segment with unknown starting position and duration of the epidemic state in a first-order autoregressive model. The proposed tests are based on polygonal-line processes of partial sums of model residuals and of partial estimators of the model parameter. We derive asymptotic results for these processes in Hölder spaces. The behavior of the test statistics under the null hypothesis of no change and under the alternative is provided. An empirical power analysis has shown that the tests are most powerful when the absolute value of the model parameter is close to one or when the autoregressive process changes from a stationary state to a nonstationary one. We prove the consistency of the least-squares changed-segment estimators and provide their convergence rates.
In the dissertation, the problem of testing and estimating a changed segment in a first-order autoregressive model is studied. The starting point and the length of the epidemic change in the described model are unknown. Criteria for testing the changed segment are proposed, based on polygonal-line processes of partial sums of model-residual estimates and of partial estimates of the model parameter. Limit theorems in Hölder spaces are obtained for these processes. The limiting behavior of the test statistics under the null and the alternative hypotheses is given. The results of an empirical power study show that the power of the proposed tests is greatest when detecting changes from a stationary to a nonstationary state or when the model parameter values are close to one. It is also proved that the least-squares estimators of the beginning and length of the changed segment, as well as the parameter estimators of the autoregressive model with a changed segment, are consistent, and their convergence rates are provided.
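A simplified stand-in for the dissertation's construction: an epidemic-type statistic built from bridged partial sums of the products x_t·x_{t-1}, i.e. the numerator of partial AR(1) parameter estimators. The normalisation and simulation settings below are illustrative assumptions, not the exact statistics or Hölder-space framework of the thesis.

```python
import numpy as np

def epidemic_statistic(x):
    """Max over all segments (i, j] of the bridged partial sum of
    x_t * x_{t-1}; large values suggest an epidemic change in the
    autoregression parameter. Normalisation is a crude plug-in."""
    v = x[1:] * x[:-1]
    n = len(v)
    s = np.concatenate([[0.0], np.cumsum(v)])
    k = np.arange(n + 1)
    # |S_j - S_i - (j - i)/n * S_n| over all pairs i, j
    bridge = s[None, :] - s[:, None] - (k[None, :] - k[:, None]) / n * s[-1]
    return np.abs(bridge).max() / (v.std(ddof=1) * np.sqrt(n))

def simulate_ar1(n, phi, rng, changed=None, phi_epi=0.95):
    """AR(1) series; phi switches to phi_epi on the `changed` segment."""
    x = np.zeros(n)
    for t in range(1, n):
        p = phi_epi if changed and changed[0] <= t < changed[1] else phi
        x[t] = p * x[t - 1] + rng.standard_normal()
    return x

x_null = simulate_ar1(400, 0.3, np.random.default_rng(1))
x_epi = simulate_ar1(400, 0.3, np.random.default_rng(2), changed=(150, 250))
stat_null = epidemic_statistic(x_null)
stat_epi = epidemic_statistic(x_epi)
```

Under no change, the bridged sums behave like increments of a Brownian bridge and the statistic stays moderate; an epidemic segment with a near-unit parameter inflates the products over that stretch, which matches the power pattern reported in the abstract.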
33

Nanavati, Hemant. "Molecular modeling of the elastic and photoelastic properties of crosslinked polymer networks: a statistical segment approach / by Hemant Nanavati." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/8227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Jablonski, Sophie Marie-Odile. "Assessment of bioenergy heating potential in the UK and in Poland using market segments analysis and MARKAL modelling." Thesis, Imperial College London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.511992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Peral, Millán Mireia. "Dynamics of subduction systems with opposite polarity in adjacent segments: application to the Westernmost Mediterranean." Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/672453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The objective of this thesis is to study the first-order dynamics of subduction systems characterized by opposite dip polarity in adjacent plate segments. The absence of previous studies analyzing the geodynamic evolution of these systems has defined the research strategy of the present work. Consequently, the thesis consists of three different parts combining analog experiments and numerical models of very simple double subduction systems and their application to the Westernmost Mediterranean, where a subduction system with opposite polarity in adjacent plate segments (double polarity subduction) has been proposed to explain the formation and evolution of this region. Part 1. Analog experiments of subduction systems with opposite polarity in adjacent segments. Firstly, I have studied the first-order plate dynamics of subduction systems with opposite polarity in adjacent plate segments by means of analog experiments. The laboratory experiments were carried out in the Laboratory of Experimental Tectonics at Roma Tre University during a two-month stay in Rome (Italy) under the supervision of Prof. Dr. Francesca Funiciello and Prof. Dr. Claudio Faccenna. A total of eighteen experiments have been designed, including four with single-plate subduction as reference models. The laboratory experiments are composed of one or two separate plates made of silicone putty representing the lithosphere and glucose syrup representing the mantle. The plates are fixed at their back edge to enforce slab-rollback behavior, and subduction is started by deflecting manually the leading edge of the plate (i.e., initial slab pull). Different setups have been designed to test the influence of the width of the plates and the initial separation between them on the evolution of the system.
Results show that the mantle flow induced by both plates is asymmetric relative to the axis of each plate, causing a progressive merging of the toroidal cells that prevents a steady-state phase of the subduction process and generates a net outward drag perpendicular to the plates. Trench retreat velocities depend on the relative position of the trenches, increasing when the trenches approach each other and decreasing when they separate after their intersection. Part 2. Reproducing analog experiments of subduction systems with numerical modeling. Secondly, some of the previous laboratory experiments have been reproduced by means of numerical modeling to compare and complement the previous results and to quantify the relevant physical parameters characterizing a double polarity subduction system. Around thirty-five numerical models, in addition to preliminary tests, have been performed, although only fifteen, showing the most outstanding and satisfactory results, are presented in this thesis. The numerical models have been run on the supercomputers MARENOSTRUM 4 (Barcelona Supercomputing Center, Spain) and BRUTUS (Swiss Federal Institute of Technology, Switzerland), with an average computation time of 3 weeks per model. The 3D numerical setup is chosen with material parameters, geometry and dimensions similar to the previous subduction analog models, consisting of one or two viscous plates descending into the upper mantle in opposite directions. Plates are fixed at their trailing edge to enforce roll-back behavior during density-driven subduction. A small perturbation is initially imposed to initiate subduction. Firstly, the computational domain size, boundary conditions, rheology and thickness of the plates are tested to find the numerical model that best represents the analog experiments. Secondly, the relevant parameters controlling the double polarity subduction process are studied by means of numerical techniques.
Results show that the most suitable numerical boundary conditions to reasonably reproduce the analog results are free slip at the lateral boundaries and no slip at the bottom of the model. Lateral boundary conditions only affect the evolution of the system at short distances, allowing a reduction of the size of the model domain relative to the analog model, an increase in resolution and savings in computation time. Complementing the previous experiments of double polarity subduction, the numerical results show that the induced mantle flow generates a stress coupling between the adjacent plates, slowing down the overall subduction process and producing lateral movement of the plates and asymmetrical deformation of the slabs and trenches. Part 3. The Alboran and Algerian basins (Westernmost Mediterranean): a case study of double subduction with opposite polarity in adjacent segments. Finally, a 3D numerical model of double subduction with opposite polarity in adjacent plate segments has been performed, simulating the tectonic setting of the Westernmost Mediterranean. The evolution of the Alboran-Tethys slab (Betic-Rif slab) is reproduced in this tectonic scenario, studying the influence of the adjacent plate segment. Around forty numerical models have been performed varying physical and geometrical parameters, including preliminary numerical tests. In this thesis only the final double polarity subduction model and a single-plate subduction model are presented. The numerical models have been run on the supercomputer MARENOSTRUM 4 (Barcelona Supercomputing Center, Spain) with an average computation time of 4 weeks per model. The model setup consists of two oceanic plate segments with a visco-plastic rheology subducting in opposite directions into the viscous upper mantle, starting at 35 Ma. In the present-day Alboran Basin region, the plate segment corresponding to the Alboran-Tethys slab (Betic-Rif slab) dips to the southeast with the trailing edge fixed to the Iberian margin.
A continental African plate segment is included at the west side of this plate. In the present Algerian Basin region, the plate segment (Tell-Kabylies slab) dips to the northwest and the trailing edge is fixed to the African margin. A small slab perturbation to account for the Africa-Iberia convergence prior to 35 Ma is initially imposed, and the subsequent subduction process is driven by Rayleigh-Taylor instability. In addition, a reference model including only the Alboran-Tethys slab has been performed in order to study the influence of an adjacent plate segment subducting in the opposite direction. Results show that the progressive curvature of the Alboran-Tethys slab is due to the lack of a transform zone at its connection with the Atlantic Ocean to the west and the strong segmentation of the African margin. This produces larger retreat velocities on the eastern side of the slab, where a transform zone separates the Alboran segment from the Algerian segment, than on the western side. Trench retreat velocities of both plate segments are measured, concluding that the opening of the Alboran Basin occurred around 22 Ma. The influence of the adjacent Algerian segment generates an asymmetrical flow pattern around both trenches, slowing down the overall subduction process of the Alboran plate segment.
The objective of this thesis is to study the dynamics of double subduction systems with opposite polarity in adjacent segments. The absence of previous studies analyzing the geodynamic evolution of these systems has defined the research strategy of the present work. Consequently, this thesis consists of three distinct parts that combine analog experiments and numerical models of double subduction systems and their subsequent application to the Westernmost Mediterranean region, where a double subduction system with opposite polarity in adjacent segments has been proposed to explain its formation and evolution. Part 1. Analog experiments of subduction systems with opposite polarity in adjacent segments. The first part of this thesis analyzes the dynamics of double subduction systems with opposite polarity in adjacent lithospheric segments by means of analog experiments. These experiments were carried out in the Laboratory of Experimental Tectonics at Roma Tre University during a two-month stay in Rome (Italy) under the supervision of Prof. Dr. Francesca Funiciello and Prof. Dr. Claudio Faccenna. A total of eighteen experiments were designed, including four single-subduction (one-plate) reference models. These experiments consist of one or two plates made of silicone representing the lithosphere and a tank filled with glucose syrup representing the mantle. The plates are fixed at their trailing edge to impose trench-rollback behavior, and subduction is initiated by manually sinking the leading edge of the plate. Different configurations were designed to test the influence on the evolution of the system of the width of the plates and the initial separation between them. The results show that the mantle flow induced by both plates is asymmetric with respect to the axis of each plate.
This causes a progressive merging of the toroidal cells, preventing a stabilization phase of the subduction process and generating a net drag force that tends to separate the plates. The trench retreat velocities (subduction velocity) depend on the relative position of the trenches, increasing when the trenches approach each other and decreasing when they separate after their alignment. Part 2. Reproducing analog experiments of subduction systems by means of numerical models. Secondly, a series of numerical models was performed, based on the analog experiments of the first part, with the aim of comparing and complementing the previous results as well as quantifying the most relevant physical parameters characterizing a double subduction system with opposite polarity. In total, around thirty-five numerical models were performed, in addition to preliminary tests, although only fifteen of them, showing the most relevant results, are presented in this thesis. The numerical models were computed mainly on the supercomputers MARENOSTRUM 4 (Barcelona Supercomputing Center, Spain) and BRUTUS (Swiss Federal Institute of Technology, Switzerland), with an average computation time of 3 weeks per model. The 3D numerical setup is chosen with geometry, dimensions and material parameters similar to the subduction analog models performed previously. These consist of one or two viscous plates that subduct into the upper mantle in opposite directions. The plates are fixed at their trailing edge, imposing trench-retreat behavior during subduction. Subduction is driven by the density difference between the plate and the mantle, starting from a small initially imposed subduction.
First, the size of the computational domain, the boundary conditions, the rheology and the thickness of the plates are studied to find the numerical model that best represents the analog experiment. Second, the most relevant parameters controlling the double subduction process with opposite polarity are studied by means of numerical techniques. The results show that the numerical boundary conditions most suitable for reproducing the analog experiments are free slip on the lateral walls of the computational domain and no slip on the bottom wall. The lateral boundary conditions only affect the evolution of the system at very short distances, which allows a reduction of the size of the computational domain relative to the analog model, increasing the numerical resolution and reducing the computation time. Complementing the results observed in the analog experiments, the numerical results show that the induced mantle flow generates a stress coupling between the adjacent plates, slowing down the subduction process. Likewise, a lateral movement between the plates at the surface and an asymmetric deformation of the subducted portions (slabs) and of the trenches are produced. Part 3. The Alboran and Algerian basins (Westernmost Mediterranean). A case study of double subduction with opposite polarity in adjacent segments. Finally, a 3D numerical model of double subduction with opposite polarity in adjacent lithospheric segments was performed, simulating the tectonic setting of the Westernmost Mediterranean. The numerical model reproduces the evolution of the Alboran-Tethys plate segment, studying the influence of the adjacent Algerian-Tethys plate segment. Around forty numerical models were performed varying physical and geometrical parameters, including preliminary numerical models.
In this thesis only the final double-polarity subduction model and a single-subduction model are presented. The numerical models were computed on the supercomputer MARENOSTRUM 4 (Barcelona Supercomputing Center, Spain) with an average computation time of 4 weeks per model. The model consists of two oceanic plate segments with a visco-plastic rheology that subduct in opposite directions into the viscous upper mantle. The model starts 35 million years ago. In the present-day region of the Alboran Basin, the Alboran-Tethys plate segment subducts towards the southeast with its trailing edge fixed to the Iberian margin. A continental African plate segment is included on the west side of this plate. In the present-day region of the Algerian Basin, the Tell-Kabylies plate segment subducts towards the northwest and its trailing edge is fixed to the African margin. The initial state of the model accounts for the convergence between Africa and Iberia from the early Cretaceous to the late Eocene by imposing an initial subduction of the plate segments of 150 km. The subsequent subduction process is driven by the Rayleigh-Taylor instability. In addition, a single-plate reference model including only the Alboran-Tethys segment was performed to study the influence of the adjacent segment subducting in the opposite direction. The results show that the progressive curvature of the Alboran-Tethys plate is due to the lack of a transform zone at its connection with the Atlantic Ocean to the west and the strong segmentation of the African margin. This produces larger trench retreat velocities on the eastern side of the Alboran-Tethys plate, where a transform zone separates it from the Algerian-Tethys segment, than on the western side. The calculation of the trench retreat velocities of both plate segments concludes that the opening of the Alboran Basin occurred approximately 22 million years ago.
The influence of the adjacent Algerian segment generates an asymmetric flow pattern around both trenches that slows down the overall subduction process of the Alboran-Tethys plate segment.
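The density-driven subduction described in this abstract is often sanity-checked with a Stokes-type scaling for slab sinking speed, v ~ Δρ·g·h·L/η. The sketch below uses generic textbook values assumed for illustration; the numbers and the bare scaling (no resistive prefactors) are not taken from the thesis.

```python
# Back-of-envelope Stokes-type scaling for slab sinking speed.
# All parameter values are generic assumptions, not thesis data.

def slab_speed(delta_rho=80.0,   # slab-mantle density contrast, kg/m^3
               g=9.81,           # gravity, m/s^2
               h=80e3,           # slab thickness, m
               L=400e3,          # slab length in the mantle, m
               eta_m=1e21):      # upper-mantle viscosity, Pa*s
    v = delta_rho * g * h * L / eta_m   # m/s, ignoring resisting terms
    return v * 3.15e7 * 100.0           # convert to cm/yr

v_cm_per_yr = slab_speed()
```

Such a crude estimate overshoots observed plate speeds because bending and coupling resistances are ignored; the stress coupling between adjacent opposite-polarity slabs reported in the thesis is exactly one such slowing mechanism.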
36

Zhan, Yijian [Verfasser], Günther [Gutachter] Meschke, and Peter [Gutachter] Mark. "Multilevel modeling of fiber-reinforced concrete and application to numerical simulations of tunnel lining segments / Yijian Zhan ; Gutachter: Günther Meschke, Peter Mark." Bochum : Ruhr-Universität Bochum, 2016. http://d-nb.info/112190954X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Markwardt, Jutta, Günther Pfeifer, Uwe Eckelt, and Bernd Reitemeier. "Analysis of Complications after Reconstruction of Bone Defects Involving Complete Mandibular Resection Using Finite Element Modelling." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-134947.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background: In a retrospective study, risk factors for complications after the bridging of mandibular defects using reconstruction plates were reviewed. In particular, the loosening of the plate-screw-mandible complex was to be analyzed with a finite element model in order to reduce plate complications in the future. Patients and Methods: We examined 60 patients who underwent treatment with reconstruction plates after tumor resection during a period of 10 years. The problem of screw loosening was additionally reviewed by means of a finite element study, and a model for the loosening process was developed. Results: Our postoperative examination showed that 26 patients suffered from complications that required an early removal of the plate. These complications were oral or extraoral plate exposures, looseness of screws with or without plate displacement, and plate fractures. We noticed that maxillary and mandibular areas of opposing teeth, the size of the mandible defect, and the crossing of the orofacial midline are all risk factors for plate complications. On the basis of the finite element model, a modified arrangement of the screws was derived. Hence, a new type of resection plate was established. Conclusions: By repositioning the screw holes along the long axis of the plate, the transition from tensile force to torque force of the screws in the screw-plate-bone complex can be minimized. Thereby, the complication of screw loosening will be considerably reduced.
Background: In a retrospective study, risk factors for complications after the bridging of mandibular defects with reconstruction plates were examined. In particular, the loosening processes of the screw-plate-mandible complex were to be analyzed with finite element modelling in order to achieve a reduction of plate complications in the future. Patients and Methods: 60 patients were examined who had been treated with reconstruction plates in the course of tumor operations over a period of 10 years. The problem of the loosening of the plate screws was additionally examined by means of a finite element study, and a model for the loosening process was developed. Results: The follow-up examinations showed that in 26 patients the plate had to be removed prematurely because of complications. The complications occurred as oral and extraoral plate exposure, as screw loosening with or without plate dislocation, and as plate fractures. It was found that existing support zones of the patient's own residual dentition, the size of the mandibular defect and its crossing of the midline represent risk factors for plate complications. Based on the finite element modelling, a modified screw arrangement was derived, resulting in a new form of the resection plate. Conclusions: By shifting the screw holes out of the long axis of the plate, the transition from tensile loading to torque loading of the screws in the screw-plate-bone complex can be minimized. As a result, screw loosening as a complication will occur considerably less often.
This article is freely accessible with the consent of the rights holder on the basis of a (DFG-funded) Alliance or National Licence.
38

Levendoglu, Mert. "Probabilistic Seismic Hazard Assessment Of Ilgaz - Abant Segments Of North Anatolian Fault Using Improved Seismic Source Models." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615430/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The Bolu-Ilgaz region was damaged by several large earthquakes in the last century, and the structural damage was substantial, especially after the 1944 and 1999 earthquakes. The objective of this study is to build the seismic source characterization model for the rupture zone of the 1944 Bolu-Gerede earthquake and to perform probabilistic seismic hazard assessment (PSHA) in the region. One of the major improvements over previous PSHA practices accomplished in this study is the development of advanced seismic source models in terms of source geometry and recurrence relations. The geometry of the linear fault segments is determined and incorporated with the help of available fault maps. A composite magnitude distribution model is used to properly represent the characteristic behavior of the NAF without an additional background zone. Fault segments, rupture sources, rupture scenarios and fault rupture models are determined using the WG-2003 terminology. The Turkey-Adjusted NGAW1 (Gülerce et al., 2013) prediction models are employed for the first time on the NAF system. The results of the study are presented in terms of hazard curves, deaggregation of the hazard and uniform hazard spectra for four main locations in the region, to provide a basis for evaluating the seismic design of special structures in the area. Hazard maps of the region for rock site conditions and for the proposed site characterization model are provided to allow the user to perform site-specific hazard assessment for local site conditions and to develop a site-specific design spectrum. The results of the study will be useful for managing future seismic hazard in the region.
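The hazard-curve integration behind a PSHA study of this kind can be sketched with a toy single-source model: a doubly truncated Gutenberg-Richter magnitude distribution combined with a lognormal ground-motion model. All coefficients below are hypothetical placeholders, not the composite magnitude distribution or the Turkey-Adjusted NGAW1 models used in the thesis.

```python
import math

def gr_pmf(m_min=5.0, m_max=7.5, b=1.0, dm=0.1):
    """Discretised doubly truncated Gutenberg-Richter magnitude PMF."""
    beta = b * math.log(10.0)
    ms = [m_min + (k + 0.5) * dm for k in range(round((m_max - m_min) / dm))]
    w = [math.exp(-beta * (m - m_min)) for m in ms]
    z = sum(w)
    return [(m, wi / z) for m, wi in zip(ms, w)]

def p_exceed(pga, m, r_km, sigma=0.6):
    """P(PGA > pga | m, r) under a toy GMPE: ln PGA ~ N(mu, sigma^2).
    The coefficients of mu are invented for illustration only."""
    mu = -1.0 + 1.2 * (m - 6.0) - 1.3 * math.log(r_km)
    z = (math.log(pga) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def hazard_curve(pga_levels, nu=0.05, r_km=10.0):
    """Annual exceedance rate: nu * sum_m P(m) * P(exceed | m, r)."""
    pmf = gr_pmf()
    return [nu * sum(p * p_exceed(a, m, r_km) for m, p in pmf)
            for a in pga_levels]

rates = hazard_curve([0.1, 0.2, 0.4])   # rates for 0.1g, 0.2g, 0.4g
```

A real study sums such contributions over all rupture sources and scenario weights and then deaggregates the total rate by magnitude and distance, which is what the hazard curves and deaggregation plots in the abstract report.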
39

Markwardt, Jutta, Günther Pfeifer, Uwe Eckelt, and Bernd Reitemeier. "Analysis of Complications after Reconstruction of Bone Defects Involving Complete Mandibular Resection Using Finite Element Modelling." Karger, 2007. https://tud.qucosa.de/id/qucosa%3A27607.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background: In a retrospective study, risk factors for complications after the bridging of mandibular defects using reconstruction plates were reviewed. In particular, the loosening of the plate-screw-mandible complex was to be analyzed with a finite element model in order to reduce plate complications in the future. Patients and Methods: We examined 60 patients who underwent treatment with reconstruction plates after tumor resection during a period of 10 years. The problem of screw loosening was additionally reviewed by means of a finite element study, and a model for the loosening process was developed. Results: Our postoperative examination showed that 26 patients suffered from complications that required an early removal of the plate. These complications were oral or extraoral plate exposures, looseness of screws with or without plate displacement, and plate fractures. We noticed that maxillary and mandibular areas of opposing teeth, the size of the mandible defect, and the crossing of the orofacial midline are all risk factors for plate complications. On the basis of the finite element model, a modified arrangement of the screws was derived. Hence, a new type of resection plate was established. Conclusions: By repositioning the screw holes along the long axis of the plate, the transition from tensile force to torque force of the screws in the screw-plate-bone complex can be minimized. Thereby, the complication of screw loosening will be considerably reduced.
Background: In a retrospective study, risk factors for complications after the bridging of mandibular defects with reconstruction plates were examined. In particular, the loosening processes of the screw-plate-mandible complex were to be analyzed with finite element modelling in order to achieve a reduction of plate complications in the future. Patients and Methods: 60 patients were examined who had been treated with reconstruction plates in the course of tumor operations over a period of 10 years. The problem of the loosening of the plate screws was additionally examined by means of a finite element study, and a model for the loosening process was developed. Results: The follow-up examinations showed that in 26 patients the plate had to be removed prematurely because of complications. The complications occurred as oral and extraoral plate exposure, as screw loosening with or without plate dislocation, and as plate fractures. It was found that existing support zones of the patient's own residual dentition, the size of the mandibular defect and its crossing of the midline represent risk factors for plate complications. Based on the finite element modelling, a modified screw arrangement was derived, resulting in a new form of the resection plate. Conclusions: By shifting the screw holes out of the long axis of the plate, the transition from tensile loading to torque loading of the screws in the screw-plate-bone complex can be minimized. As a result, screw loosening as a complication will occur considerably less often.
This article is freely accessible with the consent of the rights holder on the basis of a (DFG-funded) Alliance or National Licence.
40

Láštic, Daniel. "Deformačně-napěťová analýza elastomerových komponent flexibilní spojky." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-401521.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The diploma thesis deals with the computational modelling of stress-strain states in elastomeric components of a flexible coupling. The first part of the thesis is devoted to a survey of the uses and designs of flexible couplings and of the fatigue of elastomers. The second part concerns the creation of the computational model. The material model is determined from a uniaxial tension test of a specimen produced from a real elastomeric component. The results are presented as a comparison of two designs of the elastomeric component with respect to fatigue behaviour, based on the range of the maximum principal strain. The results of the computational modelling, in terms of the crack initiation site, are in good agreement with observations of the component used in operation, and the differences between the two designs are negligible. The quantitative difference between the two designs is 15 %.
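The fatigue criterion named in the abstract, a range of the maximum principal strain, can be sketched for a plane strain state. A minimal illustration with invented strain values that are not from the thesis:

```python
import math

def max_principal_strain(exx, eyy, exy):
    """Largest principal strain of a 2-D strain state (Mohr's circle)."""
    center = 0.5 * (exx + eyy)
    radius = math.sqrt((0.5 * (exx - eyy)) ** 2 + exy ** 2)
    return center + radius

# Hypothetical strain states at a candidate crack-initiation point,
# at the two extremes of one load cycle (toy numbers, not thesis data).
unloaded = max_principal_strain(0.02, -0.01, 0.005)
loaded = max_principal_strain(0.30, -0.08, 0.04)
strain_range = loaded - unloaded   # quantity compared between the two designs
```

The design with the smaller strain range at the critical location would be preferred under such a criterion.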
41

Filbois, Alain. "Contributions à la modélisation automatique d'objets polyédriques 3D : extraction des primitives 3D, facettes et segments." Vandoeuvre-les-Nancy, INPL, 1995. http://www.theses.fr/1995INPL075N.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
En étroite collaboration avec un autre doctorant Didier Gemmerle, notre travail de thèse traite de la reconstruction et de la modélisation d'objets à partir d'images de stéréovision trinoculaire. Les données de notre étude sont constituées d'une série d'images stéréoscopiques trinoculaires à niveaux de gris pris par les caméras du robot, la stratégie de ce dernier étant d'effectuer une révolution complète autour de chaque objet à modéliser. À partir de chacun de ces triplets, notre but est d'extraire un ensemble de primitives de modélisation, facettes et segments, puis de fusionner ces ensembles pour obtenir automatiquement le modèle 3D de l'objet. Notre système de modélisation se découpe en trois parties distinctes: le bas niveau, dont le rôle est d'extraire les primitives d'un triplet d'images. Ces primitives constituent la vue 3D. Le moyen niveau, effectuant l'appariement des primitives entre deux vues 3D consécutives. Le haut niveau, fusionnant les informations fournies par les bas et moyen niveaux et reconstruisant le modèle de l'objet. Ce manuscrit décrit la partie bas niveau de notre système. Les moyens et hauts niveaux sont décrits dans la thèse de D. Gemmerle intitulée: contributions à la modélisation d'objets polyédriques 3D: construction du modèle à partir de groupements perceptuels
42

Cambazoglu, Selim. "Preparation Of A Source Model For The Eastern Marmara Region Along The North Anatolian Fault Segments And Probabilistic Seismic Hazard Assessment Of Duzce Province." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614167/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The North Anatolian Fault System is one of the most important active strike-slip fault systems in the world. The August 17, 1999 and November 12, 1999 earthquakes at Kocaeli and Düzce are the most recent devastating earthquakes. The study area lies in the Eastern Marmara Region and is bounded by the 28.55-33.75 E and 40.00-41.20 N longitude and latitude coordinates, respectively. Numerous studies of the active tectonics and seismicity of the study area exist, but they are scale dependent. Therefore, a comprehensive literature survey of the active tectonics of the region was conducted, and these previous studies were combined with lineaments extracted from 10 ASTER images via the principal component analysis manual extraction method. On this basis, a line seismic source model for the Eastern Marmara region was compiled, based mainly on major seismic events of the instrumental period. The seismicity of these line segments was compared with the instrumental-period earthquake catalogue compiled by the Kandilli Observatory and Earthquake Research Institute with a homogeneous magnitude scale between 1900 and 2005. The catalogue was checked for secondary events and completeness. The final catalogue was matched with the compiled seismic sources for historical seismicity, and source-scenario-segment-weight relationships were developed. The resulting seismic source model was tested by a probabilistic seismic hazard assessment for the Düzce city center utilizing four different ground motion prediction equations. It was observed that the Gutenberg-Richter seismicity parameter 'b' does not have a significant effect on the model, whereas a change in the segmentation model has a small but definite influence.
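The Gutenberg-Richter parameter 'b' examined in the abstract relates magnitude to frequency (log10 N = a - b*M) and is commonly estimated with Aki's maximum-likelihood formula. A minimal sketch with an invented toy catalogue, not the Kandilli data:

```python
import math

def b_value(magnitudes, completeness_mag, bin_width=0.1):
    """Aki (1965) maximum-likelihood b-value estimate.

    b = log10(e) / (mean(M) - (Mc - bin_width / 2)), where Mc is the
    magnitude of completeness; the half bin width is Utsu's correction
    for magnitude binning.
    """
    usable = [m for m in magnitudes if m >= completeness_mag]
    mean_mag = sum(usable) / len(usable)
    return math.log10(math.e) / (mean_mag - (completeness_mag - bin_width / 2.0))

# Hypothetical catalogue above a completeness threshold Mc = 4.0.
catalogue = [4.0, 4.1, 4.3, 4.2, 4.8, 5.1, 4.0, 4.4, 4.6, 4.0, 5.6, 4.2]
b = b_value(catalogue, completeness_mag=4.0)
```

Sensitivity studies such as the one described then vary b (and the segmentation model) and compare the resulting hazard curves.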
43

REIS, Paulo Francisco de Oliveira. "Análise numérica da influência dos segmentos grauteados na interação entre paredes de alvenaria estrutural com blocos de concreto." Universidade Federal de Goiás, 2010. http://repositorio.bc.ufg.br/tede/handle/tde/1358.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Structural masonry is one of the oldest building systems used by man. With technological advances, its use has become more and more frequent because it brings rationality and economy to construction. For that reason, research centers have tried to understand it more precisely, especially in Brazil, where the system differs from that of other countries, with unreinforced or partially reinforced masonry prevailing. The studies conducted aim primarily at characterizing the materials used and analyzing their structural behavior. Even so, there are structural aspects of masonry that have not been fully clarified. On that basis, the purpose of this work was to investigate the influence of grouted segments on the interaction between walls, using isolated panels of three, five, and seven walls and a multi-storey structure, developed through computational modeling with finite elements. For the analysis of the results, the profile of the distribution of normal vertical stresses and the absorption of the applied loading, which is transferred to the walls, were evaluated. It was found that in some cases the grout causes significant interference, as in the hollow cores of the blocks, while in others its participation is almost negligible, as in the grouting of the bond beams.
44

Mocrosky, Jeferson Ferreira. "Um estudo sobre a aplicação do padrão BPMN (Business process modeland notation) para a modelagem do processo de desenvolvimento de produtos numa empresa de pequeno porte do segmento metal-mecânico." Universidade Tecnológica Federal do Paraná, 2012. http://repositorio.utfpr.edu.br/jspui/handle/1/721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Business process modeling is an approach from the 1990s for improving the performance of organizations that is currently returning as a strong contributor to organizational performance improvement. With this approach, this research models the Product Development Process (PDP) of a mechanical manufacturing company that builds machines and equipment to support production in meat-packing plants in the western region of Santa Catarina State. The PDP modeling uses the Business Process Model and Notation (BPMN) standard supported by the Intalio BPMS application. The objective of the research is to evaluate modeling with BPMN for formalizing the Product Development Process, and to examine how to handle the complexities and intrinsic interactions of this process in small mechanical manufacturing companies. The BPMN modeling is grounded in an assessment of the selected company's PDP and in on-site observation of the execution of the process. The methodology adopted for developing the company's PDP model considered the following aspects: (i) study of a company; (ii) informational modeling; (iii) automation of the model and its execution; (iv) implementation of the PDP model in the company. The characteristics of the Unified Model of Rozenfeld et al. (2006), used as a reference to systematize the modeling of the company's Product Development Process through evaluation of the company's process, are also presented. A brief description presents the characteristics of the main standards used in Business Process Modeling, including the main software applications used to support the modeling standards. The results were divided into two parts: static and dynamic abstract models. The static abstract model has an informational character, presenting a wealth of detail in the form of a detailed map of the process.
For automation, this static model was unfolded into two other abstract models, which were configured to become dynamic, aiming at an implementation and execution that satisfactorily reflect how the process is actually carried out in the company. The first dynamic abstract model implemented and executed defines the product and ends with the customer's decision on the quotation requested from the company's sales department. The second dynamic abstract model starts with the customer's approval of the quotation, initiating the informational design activities, and ends with the release to production. This approach aims to minimize the complexities of modeling the process and the specific peculiarities of the company. Modeling the PDP with the reference model and applying the BPMN standard supported by Intalio BPMS made it possible to report best practices, lessons learned, and the difficulties and facilities encountered. In addition, the PDP formalized through modeling with BPMN and Intalio BPMS brought significant changes to the current execution of the process, contributing to greater integration among the participants.
45

Amara, Mounir. "Segmentation de tracés manuscrits. Application à l'extraction de primitives." Rouen, 1998. http://www.theses.fr/1998ROUES001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This work presents the modeling of curves by line, circle, and conic elements within a single formulation based on extended Kalman filtering. The application is the segmentation of handwritten strokes into a succession of geometric primitives. For the fitting, we describe the different curves by Cartesian equations. These equations are subject to a normalization constraint so as to obtain a precise and robust estimate of the parameters of the geometric primitives. The two equations, written in implicit form, constitute the observation system. They are linearized by a first-order Taylor expansion. The state equation of the system is held constant so as to describe a single shape. The recursive parameter estimation is formulated with an extended Kalman filter that minimizes a quadratic error, the distance from a point to a shape, weighted by the covariance of the observation noise. To reconstruct a handwritten stroke, we define a model-switching strategy. In our case, switch detection is based either on spatio-temporal criteria, using the dynamics of the handwritten stroke, or on purely geometric criteria. We apply our methodology to the segmentation of real strokes consisting of handwritten digits or letters. We show that the various methods we have developed make it possible to reconstruct any handwritten stroke in real time in a precise, robust, complete, and fully autonomous way. These methods enable a coding of handwritten strokes: a strategy is implemented to select the most significant primitive in the sense of an appropriate criterion. We propose an approach that takes into account both the data reduction rate and the precision of the description. These methods have the advantage of considerably reducing the amount of initial information while retaining its informative part. They have been applied to real handwritten strokes with a view to analyzing the drawings and handwriting of primary-school children.
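The recursive estimation described in the abstract (an implicit curve equation linearized by a first-order Taylor expansion and fed to an extended Kalman filter) can be sketched for the simplest case of a circle. This toy implementation and its parameter values are illustrative assumptions, not the thesis's actual code:

```python
def ekf_circle_fit(points, p0=(0.0, 0.0, 1.0), prior_var=10.0, meas_var=1e-3):
    """Fit a circle (a, b, r) to 2-D points with a scalar extended Kalman filter.

    Each point yields one implicit observation F = (x-a)^2 + (y-b)^2 - r^2,
    whose value on the curve should be 0; F is linearized around the current
    estimate by a first-order Taylor expansion, as in the formulation above.
    """
    a, b, r = p0
    P = [[prior_var if i == j else 0.0 for j in range(3)] for i in range(3)]
    for x, y in points:
        F = (x - a) ** 2 + (y - b) ** 2 - r ** 2            # innovation (target 0)
        H = [-2.0 * (x - a), -2.0 * (y - b), -2.0 * r]      # Jacobian dF/d(a,b,r)
        PHt = [sum(P[i][j] * H[j] for j in range(3)) for i in range(3)]
        S = sum(H[i] * PHt[i] for i in range(3)) + meas_var  # innovation variance
        K = [PHt[i] / S for i in range(3)]                   # Kalman gain
        a, b, r = a - K[0] * F, b - K[1] * F, r - K[2] * F
        P = [[P[i][j] - K[i] * PHt[j] for j in range(3)] for i in range(3)]
    return a, b, r

# Noise-free points from a hypothetical circle with center (2, 1), radius 3;
# several sweeps re-linearize the filter around the improving estimate.
pts = [(5.0, 1.0), (4.1213, 3.1213), (2.0, 4.0), (-0.1213, 3.1213),
       (-1.0, 1.0), (-0.1213, -1.1213), (2.0, -2.0), (4.1213, -1.1213)]
est = (0.0, 0.0, 1.0)
for _ in range(6):
    est = ekf_circle_fit(pts, p0=est)
a_hat, b_hat, r_hat = est
```

Lines and conics follow the same pattern with a different implicit equation F and Jacobian H.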
46

Bureš, David. "Nepostavené Brno Historie a perspektivy nedokončených urbanistických záměrů v městě Brně." Doctoral thesis, Vysoké učení technické v Brně. Fakulta architektury, 2015. http://www.nusl.cz/ntk/nusl-233271.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Moving through the city of Brno, we can perceive a series of fragments, remnants of unfinished urban visions. To a greater or lesser extent, they affect the current appearance of the city and give rise to relationships and processes we do not understand. By understanding the original concept, we can engage with those sites again and work with them meaningfully. The subject of this thesis is a trio of locations: náměstí Míru (Peace Square), Akademické náměstí (Academic Square), and the area of Kraví hora (Cow Mountain). In these parts of the city we can identify fragments of unfinished projects. They are joined together by the imaginary storyline of the construction of campuses for Czech universities in the 1920s and 1930s. That was the key, though not the only, factor in shaping the form of these public spaces. The aim of the research is to analyze the historical context of the development of the built-up areas affecting the monitored sites, to define the basic concept of their spatial composition, and to analyze the resulting functional and spatial arrangement. The model of the interwar form was confronted with the present state of the monitored area and also subjected to detailed analysis in terms of form, function, and operation. The essential aim of this research was the insertion of the period plan into a contemporary model of the structure of Brno and the verification of broader spatial and operational relationships with the help of animation and the simulation of human movement within the site. The results can serve as stimuli for state authorities in further work with these sites, as well as for urban practice.
47

Solatges, Thomas. "Modélisation, conception et commande de robots manipulateurs flexibles. Application au lancement et à la récupération de drones à voilure fixe depuis un navire faisant route." Thesis, Toulouse, ISAE, 2018. http://www.theses.fr/2018ESAE0012/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Robot manipulators are generally stiff machines, designed so that flexibility does not affect their movements. Indeed, significant flexibility introduces additional degrees of freedom with a complex behavior. However, reducing the mass of a system allows for cost, performance, and safety improvements. In order to secure those benefits despite significant flexibility, this thesis focuses on the modeling, design, and control of flexible robot manipulators. It is motivated by the YAKA project, which aims at developing a robot to launch and recover fixed-wing UAVs from a moving ship. This implies reaching very high dynamics over a large workspace, well beyond the specifications of common rigid robots. The proposed tools for modeling, design, and control take into account both joint and link flexibility, for any number of degrees of freedom and flexible links. The elastodynamic model is obtained with the Lagrange principle, each flexible link being represented by one or many Euler-Bernoulli beams. The proposed control scheme uses a nonlinear rigid dynamic inversion and extends classical Input Shaping techniques to flexible robot manipulators. The proposed design tools allow for performance prediction of the system, including its actuators and controllers, thanks to a realistic simulation. Experiments conducted with the YAKA robot validated the proposed approach. The results of the YAKA project confirmed the feasibility of using a large-scale, highly dynamic flexible robot in an industrial context, in particular for UAV launch and recovery operations from a moving ship.
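The Input Shaping precommand mentioned in the abstract can be illustrated with the classical two-impulse Zero Vibration (ZV) shaper; the modal frequency and damping values below are hypothetical, not the YAKA robot's:

```python
import math

def zv_shaper(natural_freq_hz, damping_ratio):
    """Two-impulse Zero Vibration (ZV) shaper for one flexible mode.

    Returns (amplitudes, times): convolving the reference command with
    these impulses cancels the residual vibration of the modeled mode.
    """
    wn = 2.0 * math.pi * natural_freq_hz
    wd = wn * math.sqrt(1.0 - damping_ratio ** 2)     # damped frequency
    K = math.exp(-damping_ratio * math.pi / math.sqrt(1.0 - damping_ratio ** 2))
    amplitudes = [1.0 / (1.0 + K), K / (1.0 + K)]     # sum to 1 (unit gain)
    times = [0.0, math.pi / wd]                       # half damped period apart
    return amplitudes, times

# Hypothetical first flexible mode: 2 Hz, 5 % damping.
amps, times = zv_shaper(natural_freq_hz=2.0, damping_ratio=0.05)
```

The price of shaping is the added delay of half a damped period, which matters for the high-dynamics launch trajectories discussed in the thesis.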
48

Laroum, Sami. "Prédiction de la localisation des protéines membranaires : méthodes méta-heuristiques pour la détermination du potentiel d'insertion des acides aminés." Phd thesis, Université d'Angers, 2011. http://tel.archives-ouvertes.fr/tel-01064309.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this work, we are interested in the localization of proteins targeted to the endoplasmic reticulum membrane, and more specifically in the recognition of transmembrane segments and signal peptides. Using the latest knowledge acquired on the mechanisms by which a segment is inserted into the membrane, we propose a method for discriminating these two types of sequences based on the insertion potential of each amino acid in the membrane. This leads to searching, for each amino acid, for a curve giving its insertion potential as a function of its position in a window corresponding to the thickness of the membrane. Our objective is to determine 'in silico' a curve for each amino acid so as to obtain the best performance for our classification method. Optimizing the curves on data sets built from protein data banks is a difficult problem, which we address with meta-heuristic methods. We first present a local search algorithm for learning a set of curves. Its evaluation on the different data sets shows good classification results. However, we observe a difficulty in adjusting the curves of certain amino acids. Restricting the search space using relevant information about the amino acids, and introducing a multiple neighborhood, allow us to improve the performance of our method and at the same time stabilize the learned curves. We also present a genetic algorithm developed to explore the search space of this problem in a more diversified way.
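The local search over per-amino-acid insertion-potential curves described in the abstract can be sketched as generic hill climbing over a discretized curve. The neighbourhood, step size, and toy objective below are illustrative assumptions, not the thesis's actual method or data:

```python
import random

def local_search(score, curve, step=0.1, iters=2000, seed=0):
    """Naive hill climbing over a discretized insertion-potential curve.

    `score` maps a candidate curve (one value per window position) to a
    quality to maximize; a neighbour changes one position by +/- step,
    i.e. a simple single-move neighbourhood.
    """
    rng = random.Random(seed)
    best, best_score = list(curve), score(curve)
    for _ in range(iters):
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] += rng.choice((-step, step))
        s = score(cand)
        if s > best_score:            # keep only strictly improving moves
            best, best_score = cand, s
    return best, best_score

# Toy objective: recover a hypothetical target curve (invented, not real data).
target = [0.2, 0.5, 0.9, 0.5, 0.2]

def fit(c):
    return -sum((a - b) ** 2 for a, b in zip(c, target))

curve, quality = local_search(fit, [0.0] * 5)
```

In the thesis the score would instead be the classification performance of the discrimination method on the protein data sets, and the multiple neighbourhood would propose richer moves than this single-coordinate step.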
49

Laroum, Sami. "Prédiction de la localisation des protéines membranaires : méthodes méta-heuristiques pour la détermination du potentiel d'insertion des acides aminés." Phd thesis, Angers, 2011. https://theses.hal.science/tel-01064309.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this work, we are interested in the localization of proteins targeted to the endoplasmic reticulum membrane, and more specifically in the recognition of transmembrane segments and signal peptides. Using the latest knowledge acquired on the mechanisms by which a segment is inserted into the membrane, we propose a method for discriminating these two types of sequences based on the insertion potential of each amino acid in the membrane. This leads to searching, for each amino acid, for a curve giving its insertion potential as a function of its position in a window corresponding to the thickness of the membrane. Our goal is to determine 'in silico' a curve for each amino acid in order to obtain the best performance for our classification method. Optimizing the curves on data sets constructed from protein data banks is a difficult problem, which we address through meta-heuristic methods. We first present a local search algorithm for learning a set of curves. Its assessment on the different data sets shows good classification results. However, we notice a difficulty in adjusting the curves of certain amino acids. Restricting the search space with relevant information about the amino acids and introducing a multiple neighborhood allow us to improve the performance of our method and at the same time stabilize the learned curves. We also developed a genetic algorithm to explore the search space of this problem in a more diversified way.
50

España, Boquera Salvador. "Contributions to the joint segmentation and classification of sequences (My two cents on decoding and handwriting recognition)." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/62215.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
[EN] This work is focused on problems (like automatic speech recognition (ASR) and handwritten text recognition (HTR)) that: 1) can be represented (at least approximately) in terms of one-dimensional sequences, and 2) solving these problems entails breaking the observed sequence down into segments which are associated to units taken from a finite repertoire. The required segmentation and classification tasks are so intrinsically interrelated ("Sayre's Paradox") that they have to be performed jointly. We have been inspired by what some works call the "successful trilogy", which refers to the synergistic improvements obtained when considering: - a good formalization framework and powerful algorithms; - a clever design and implementation taking the best profit of hardware; - an adequate preprocessing and a careful tuning of all heuristics. We describe and study "two stage generative models" (TSGMs) comprising two stacked probabilistic generative stages without reordering. This model not only includes Hidden Markov Models (HMMs, but also "segmental models" (SMs). "Two stage decoders" may be deduced by simply running a TSGM in reversed way, introducing non determinism when required: 1) A directed acyclic graph (DAG) is generated and 2) it is used together with a language model (LM). One-pass decoders constitute a particular case. A formalization of parsing and decoding in terms of semiring values and language equations proposes the use of recurrent transition networks (RTNs) as a normal form for Context Free Grammars (CFGs), using them in a parsing-as-composition paradigm, so that parsing CFGs result in a slight extension of regular ones. Novel transducer composition algorithms have been proposed that can work with RTNs and can deal with null transitions without resorting to filter-composition even in the presence of null transitions and non-idempotent semirings. 
A review of LMs is described and some contributions mainly focused on LM interfaces, LM representation and on the evaluation of Neural Network LMs (NNLMs) are provided. A review of SMs includes the combination of generative and discriminative segmental models and general scheme of frame emission and another one of SMs. Some fast cache-friendly specialized Viterbi lexicon decoders taking profit of particular HMM topologies are proposed. They are able to manage sets of active states without requiring dictionary look-ups (e.g. hashing). A dataflow architecture allowing the design of flexible and diverse recognition systems from a little repertoire of components has been proposed, including a novel DAG serialization protocol. DAG generators can take over-segmentation constraints into account, make use SMs other than HMMs, take profit of the specialized decoders proposed in this work and use a transducer model to control its behavior making it possible, for instance, to use context dependent units. Relating DAG decoders, they take profit of a general LM interface that can be extended to deal with RTNs. Some improvements for one pass decoders are proposed by combining the specialized lexicon decoders and the "bunch" extension of the LM interface, including an adequate parallelization. The experimental part is mainly focused on HTR tasks on different input modalities (offline, bimodal). We have proposed some novel preprocessing techniques for offline HTR which replace classical geometrical heuristics and make use of automatic learning techniques (neural networks). Experiments conducted on the IAM database using this new preprocessing and HMM hybridized with Multilayer Perceptrons (MLPs) have obtained some of the best results reported for this reference database. 
Among the other HTR experiments described in this work, we have used over-segmentation information, tried lexicon-free approaches, performed bimodal experiments, and experimented with the combination of hybrid HMMs with holistic classifiers.
[ES] This work focuses on problems (such as automatic speech recognition (ASR) or handwritten text recognition (HTR)) that: 1) can be represented (perhaps approximately) in terms of one-dimensional sequences, and 2) are solved by decomposing the sequence into segments that can be classified into a finite set of units. The required segmentation and classification tasks are so intrinsically interrelated ("Sayre's paradox") that they must be performed jointly. We have been inspired by what some authors call "the successful trilogy", referring to the synergy obtained when one has: - a good formalism that gives rise to good algorithms; - an ingenious, efficient design and implementation that exploits the characteristics of the hardware; - not neglecting the "know-how" of the task, good preprocessing, and adequate tuning of the various parameters. We describe and study "two-stage generative models" without reordering (TSGMs), which include not only hidden Markov models (HMMs) but also segmental models (SMs). A "two-stage" decoder can be obtained by running a TSGM in reverse and introducing non-determinism: 1) a directed acyclic graph (DAG) is generated, and 2) it is used together with a language model (LM). The "one-pass" decoder is a particular case. The decoding process is formalized with language equations and semirings; the use of recurrent transition networks (RTNs) as a normal form for context-free grammars (CFGs) is proposed, and the parsing-by-composition paradigm is used so that parsing CFGs becomes an extension of FSA parsing. Transducer composition algorithms are proposed that allow the use of RTNs and do not need to resort to filter composition, even in the presence of null transitions and non-idempotent semirings.
An extensive review of LMs is provided, together with contributions concerning their interface, their representation, and the evaluation of LMs based on neural networks (NNLMs). A review of SMs has been carried out that covers SMs based on the combination of generative and discriminative models, as well as a general scheme of frame-emission types and of SM types. Specialized versions of the Viterbi algorithm for lexicon models are proposed that handle active states without resorting to dictionary-like structures, taking advantage of the cache. A dataflow architecture has been proposed to obtain recognizers from a small set of basic components, with a DAG serialization protocol. We describe DAG generators that can take segmentation constraints into account, use segmental models not limited to HMMs, make use of the specialized decoders proposed in this work, and use a control transducer that allows the use of context-dependent units. DAG decoders use a fairly general LM interface that has been extended to allow the use of RTNs. Improvements are also proposed for "one-pass" recognizers based on specialized lexicon algorithms and on the "bunch" mode of the LM interface, as well as their parallelization. The experimental part focuses on HTR in different acquisition modalities (offline, bimodal). We have proposed novel preprocessing techniques for handwriting that avoid geometric heuristics, using neural networks instead. Tested with HMMs hybridized with neural networks, some of the best published results for the IAM database were obtained. We can also mention the use of over-segmentation information, lexicon-free approaches, experiments with bimodal data, and the combination of hybrid HMMs with holistic recognizers.
[CAT] This work focuses on problems (such as automatic speech recognition (ASR) or handwritten text recognition (HTR)) where: 1) the data can be represented (at least approximately) by one-dimensional sequences, and 2) the sequence must be decomposed into segments that can belong to a finite number of types. Often, both tasks are so closely related that it is impossible to separate them ("Sayre's paradox"), and they must be performed jointly. We have been inspired by what some authors call the "successful trilogy", referring to the synergy obtained when taking into account: - a good formalism that gives rise to good algorithms; - an efficient, ingenious design and implementation that makes good use of the particularities of the hardware; - not losing sight of the "know-how", using adequate preprocessing, and making good use of the various parameters. We describe and study "two-stage generative models" without reordering (TSGMs), which include not only hidden Markov models (HMMs) but also segmental models (SMs). A "two-stage" decoder can be obtained by running a TSGM in reverse and introducing non-determinism: 1) a directed acyclic graph (DAG) is generated, which 2) is used together with a language model (LM). The "one-pass" decoder is a particular case. We describe and formalize the decoding process based on language equations and semirings. We propose using recurrent transition networks (RTNs) as a normal form for context-free grammars (CFGs), and the parsing-by-composition paradigm is used so that parsing CFGs becomes a slight extension of FSA parsing. Transducer composition algorithms are proposed that can use RTNs and do not need to resort to filter composition, even with null transitions and non-idempotent semirings.
An extensive review of LMs is provided, with contributions concerning their interface, their representation, and the evaluation of LMs based on neural networks (NNLMs). A review of SMs has been carried out covering SMs based on the combination of generative and discriminative models, as well as a general scheme of frame-emission types and another of SM types. Specialized versions of the Viterbi algorithm for lexicon models are proposed that handle active states without resorting to dictionary-like data structures, taking advantage of the cache. A dataflow architecture has been proposed to obtain various recognizers from a small set of components, with a DAG serialization protocol. We describe DAG generators capable of taking segmentation constraints into account, using segmental models not limited to HMMs, making use of the specialized decoders proposed in this work, and employing a control transducer that allows the use of context-dependent units. DAG decoders use a fairly general LM interface that has been extended to allow the use of RTNs. Improvements are proposed for "one-pass" recognizers based on the specialized lexicon algorithms and the "bunch" mode of the LM interface, as well as their parallelization. The experimental part focuses on handwriting recognition in different acquisition modalities (offline, bimodal). We propose a handwriting preprocessing that avoids geometric heuristics, using neural networks instead. HMMs hybridized with neural networks have been used, achieving some of the best published results for the IAM database. We can also mention the use of over-segmentation information, lexicon-free approaches, experiments with bimodal data, and the combination of hybrid HMMs with holistic classifiers.
España Boquera, S. (2016). Contributions to the joint segmentation and classification of sequences (My two cents on decoding and handwriting recognition) [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/62215
THESIS
Awarded
