Theses on the topic "Charts, maps"

Consult the top 50 theses for your research on the topic "Charts, maps".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organise your bibliography correctly.

1

Mumford, Ian. "Milestones in lithographed cartography from 1800". Thesis, University of Reading, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299738.

Full text
2

Magalhães, Adriana Dias. "Análise proteômica de Trypanosoma cruzi : construção de mapas bidimensionais em pH alcalino". reponame:Repositório Institucional da UnB, 2006. http://repositorio.unb.br/handle/10482/5253.

Full text
Abstract
Master's dissertation, Universidade de Brasília, Faculdade de Medicina, 2006.
Trypanosoma cruzi is the parasite that causes Chagas disease, a chronic illness that affects 16-18 million people. Recently, the sequencing of the T. cruzi genome was concluded. This accomplishment stimulated post-genomic projects aiming at elucidating differential protein expression through the parasite life cycle. Proteomics is the most suitable methodology for this, since T. cruzi protein expression is regulated at the post-transcriptional level. In order to study the basic proteins of the T. cruzi proteome, conditions for two-dimensional gel electrophoresis (2-DE) at alkaline pH of the epimastigote, trypomastigote and amastigote life forms were developed. It was necessary to optimize the 2-DE experimental conditions, since in the alkaline pH range the gels usually present spot streaking, low resolution and poor reproducibility. The final protocol, developed for epimastigotes, consisted of the addition of 10% isopropanol to the IPG gel strip rehydration buffer, sample loading using the “paper bridge” method, use of a paper strip soaked in DTT solution near the cathode, and isoelectric focusing using the Multiphor II apparatus (GE Healthcare). A total of 10 spots from the epimastigote gel were identified by peptide mass fingerprinting. The identified proteins were phosphoglycerate kinase, prostaglandin F2a synthase, peptide methionine sulfoxide reductase, methylthioadenosine phosphorylase, protein disulfide isomerase, AKB ligase and four hypothetical proteins. The optimized 2-DE conditions were applied to the construction of trypomastigote and amastigote two-dimensional maps. The resulting maps permitted the visualization of differences in protein expression among the proteomes. Finally, the “two-in-one” 2-DE methodology for a wide pH range was tested and gave promising results that may be used in future studies of comparative protein expression in T. cruzi.
3

Latta, Martin. "Vektorové letecké mapy s "high a low trajektoriemi"". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255378.

Full text
Abstract
This master's thesis deals with the dynamic rendering of vector aeronautical maps, in particular the drawing of flight trajectories and navigation points onto the map. The maps are integrated into mobile applications that pilots use during flight and that replace the original paper charts. The thesis includes a theoretical introduction to the field and the basic terms used in aviation and navigation. A further part of the work surveys, describes and evaluates existing solutions and applications, together with the proposal and design of a new solution. Finally, the implementation of a demo application using a WebGL library is presented, along with a final evaluation and comparison.
4

Yamamoto, Kaoru. "Disturbance Attenuation in Mass Chains with Passive Interconnection". Thesis, University of Cambridge, 2016. https://www.repository.cam.ac.uk/handle/1810/279020.

Full text
Abstract
This thesis is concerned with disturbance amplification in interconnected systems which may consist of a large number of elements. The main focus is on passive control of a chain of interconnected masses where a single point is subject to an external disturbance. The problem arises in the design of multi-storey buildings subjected to earthquake disturbances, but applies in other situations such as bidirectional control of vehicle platoons. It is shown that the scalar transfer functions from the disturbance to a given intermass displacement can be represented as a complex iterative map. This description is used to establish uniform boundedness of the H∞-norm of these transfer functions for certain choices of interconnection impedance. A graphical method for selecting an impedance such that the H∞-norm is no greater than a prescribed value for an arbitrary length of the mass chain is given. A design methodology for a fixed length of the mass chain is also provided. A case study for a 10-storey building model demonstrates the validity of this method.
5

Koonce, Richard S. "THE SYMBOLIC RAPE OF REPRESENTATION: A RHETORICAL ANALYSIS OF BLACK MUSICAL EXPRESSION ON BILLBOARD'S HOT 100 CHARTS". Bowling Green State University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1162098669.

Full text
6

Lionni, Luca. "Colored discrete spaces : Higher dimensional combinatorial maps and quantum gravity". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS270/document.

Full text
Abstract
In two dimensions, the Euclidean Einstein-Hilbert action, which describes gravity in the absence of matter, can be discretized over random triangulations. In the physical limit of small Newton's constant, only planar triangulations survive. The limit in distribution of planar triangulations - the Brownian map - is a continuum fractal space which importance in the context of two-dimensional quantum gravity has been made more precise over the last years. It is interpreted as a quantum continuum space-time, obtained in the thermodynamical limit from a statistical ensemble of random discrete surfaces. The fractal properties of two-dimensional quantum gravity can therefore be studied from a discrete approach. It is well known that direct higher dimensional generalizations fail to produce appropriate quantum space-times in the continuum limit: the limit in distribution of dimension D>2 triangulations which survive in the limit of small Newton's constant is the continuous random tree, also called branched polymers in physics. However, while in two dimensions, discretizing the Einstein-Hilbert action over random 2p-angulations - discrete surfaces obtained by gluing 2p-gons together - leads to the same conclusions as for triangulations, this is not always the case in higher dimensions, as was discovered recently. Whether new continuum limit arise by considering discrete Einstein-Hilbert theories of more general random discrete spaces in dimension D remains an open question.We study discrete spaces obtained by gluing together elementary building blocks, such as polytopes with triangular facets. Such spaces generalize 2p-angulations in higher dimensions. In the physical limit of small Newton's constant, only discrete spaces which maximize the mean curvature survive. However, identifying them is a task far too difficult in the general case, for which quantities are estimated throughout numerical computations. In order to obtain analytical results, a coloring of (D-1)-cells has been introduced. In any even dimension, we can find families of colored discrete spaces of maximal mean curvature in the universality classes of trees - converging towards the continuous random tree, of planar maps - converging towards the Brownian map, or of proliferating baby universes. However, it is the simple structure of the corresponding building blocks which makes it possible to obtain these results: it is similar to that of one or two dimensional objects and does not render the rich diversity of colored building blocks in dimensions three and higher.This work therefore aims at providing combinatorial tools which would enable a systematic study of the building blocks and of the colored discrete spaces they generate. The main result of this thesis is the derivation of a bijection between colored discrete spaces and colored combinatorial maps, which preserves the information on the local curvature. It makes it possible to use results from combinatorial maps and paves the way to a systematical study of higher dimensional colored discrete spaces. As an application, a number of blocks of small sizes are analyzed, as well as a new infinite family of building blocks. The relation to random tensor models is detailed. Emphasis is given to finding the lowest bound on the number of (D-2)-cells, which is equivalent to determining the correct scaling for the corresponding tensor model. 
We explain how the bijection can be used to identify the graphs contributing at any given order of the 1/N expansion of the 2n-point functions of the colored SYK model, and apply this to the enumeration of generalized unicellular maps - discrete spaces obtained from a single building block - according to their mean curvature. For any choice of colored building blocks, we show how to rewrite the corresponding discrete Einstein-Hilbert theory as a random matrix model with partial traces, the so-called intermediate field representation
7

Bartoloni, Bruno Figueiredo. "Mapas simpléticos com correntes reversas em tokamaks". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-22112016-211638/.

Full text
Abstract
We develop a symplectic (conservative) two-dimensional map to study the evolution of the magnetic field lines of a plasma confined in a tokamak. First, we considered two profiles for the plasma current density studied in the literature, one monotonic and one non-monotonic, which give rise to different profiles for the poloidal magnetic field and different analytical profiles for the safety factor. In our simulations we consider the system initially at equilibrium, where we observe, in Poincaré sections, only invariant lines. Then we add a perturbation (an external current), and island chains and chaos appear in the system. In the second part, we consider a non-monotonic profile with a region in which the current density becomes negative, which causes a divergence in the safety factor profile. Even considering only the system at equilibrium, very small island chains appeared around the shearless curves, together with localized chaos; this feature was not observed for the other profiles at equilibrium. By varying parameters related to the current density expression, we can control the appearance of regions with island chains around the shearless curves and of chaotic regions. To confirm our results, we applied the same profile to another symplectic map from the literature (the tokamap). Finally, we consider a safety factor profile in a divertor configuration, which also has a divergence in the safety factor profile. We observe similar features (island chains around shearless curves and localized chaos) when we consider the non-monotonic safety factor profile with a reversed current density.
8

Houseman, Jonathan. "Branched chains in poly(methyl methacrylate) polymerisations incorporating a polymeric chain transfer agent". Thesis, Loughborough University, 2000. https://dspace.lboro.ac.uk/2134/34854.

Full text
Abstract
Branching in poly(methyl methacrylate) (PMMA) is produced by incorporating a pre-prepared polymeric chain transfer agent (PCTA) into a single stage radical polymerisation. Samples of PCTA having a range of transfer functionalities and molar masses were synthesised by modifying a methacrylate-based copolymer. Control of branching in PMMA has been studied as a function of transfer functionality and molar mass in the PCTA, and as a function of MMA and initiator concentrations in the MMA polymerisation. The branched samples of PMMA have been characterised by size exclusion chromatography (SEC) with multiple detectors to determine Mark–Houwink and other parameters to assess levels of branching. Some PCTA samples have been prepared with a UV chromophore to facilitate characterisation by SEC-UV.
9

Masini, Simone. "Sviluppo di una piattaforma di crowdsensing per l'analisi di dati". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8375/.

Full text
Abstract
A platform for collecting and analysing environmental data gathered from various devices. A node.js server receives and stores the data, an Android client captures the data, and a web client analyses the data through a map and charts.
10

Lombard, Camille. "Cloning, Expression and Purification of the Different Human Haptoglobin Chains and Initial Characterization by Mass Spectrometry". University of Toledo / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1365202687.

Full text
11

Langlais, Benoit. "Les champs magnétiques de la Terre et de Mars : apport des satellites Ørsted et Mars Global Surveyor". Paris, Institut de physique du globe, 2001. http://www.theses.fr/2001GLOB0004.

Full text
Abstract
The subject of this thesis is the use of magnetic measurements made on board satellites to better describe and understand the magnetic fields of the Earth and of Mars. We first describe the techniques used to acquire terrestrial magnetic measurements, with particular emphasis on the validation and processing of the data from the Ørsted satellite. After introducing the reference magnetic field models and their limitations due to the poor distribution of ground-based data, we show the essential contribution of the Ørsted data for describing the Earth's magnetic field with a resolution that had previously been reached only during the MAGSAT period (1979-1980). The new geomagnetic field models, and their comparison with models derived from MAGSAT data, give a better grasp of the dynamics of the magnetic field at the core surface, and also allow the magnetic field of lithospheric origin to be better described and interpreted in terms of geological structures. We then use data from the Mars Global Surveyor mission to obtain the first descriptions of the Martian magnetic field. This field, frozen into the superficial layers, does not show the same characteristics as the terrestrial lithospheric field. We discuss the correlation of the Martian magnetic field with the available topographic and gravimetric data, and we put forward hypotheses about the temporal sequence of the emplacement of the lithosphere and, more generally, about the evolutionary history of the planet Mars. The combined study and comparison of the terrestrial and Martian magnetic fields will ultimately provide constraints on the terrestrial dynamo and on the ancient Martian dynamo.
12

Perälä, Jesper. "Pit Craters of Arsia Mons Volcano, Mars, and Their Relation to Regional Volcano-tectonism". Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-255563.

Full text
Abstract
Pit craters and pit crater chains associated with the volcano Arsia Mons on Mars have been mapped to analyse their spatial pattern and to draw conclusions about their formation. For the mapping, high-resolution satellite data gathered during the Mars Express mission were used. The spatial distribution of the pit craters was then compared with typical patterns of magmatic sheet intrusions within volcanoes as they are known from Earth. The results show that the pattern of the mapped pit craters and pit crater chains is in good agreement with these sheet intrusions, and they are therefore likely related to Martian sheet intrusions.
13

Vignes, Didier. "Etude du champ magnétique et de l'environnement ionisé de la planète Mars à l'aide de la sonde Mars global surveyor". Toulouse 3, 2000. http://www.theses.fr/2000TOU30169.

Full text
14

Civet, François. "Caractérisation de la structure électrique de Mars par méthode d'induction électromagnétique globale à partir des données magnétiques satellitaires de Mars Global Surveyor". Thesis, Brest, 2012. http://www.theses.fr/2012BRES0084/document.

Full text
Abstract
My Ph.D. work consists in the investigation of satellite magnetic data to infer the deep internal conductivity distribution. I developed a new global electromagnetic induction method applied to planetary magnetic datasets without strong a priori hypothesis on the external inducing source field. My method is based on a spectral correction of gapped data magnetic time series to restore the time spectral content of the source field. This external source depends on the planetary environment and is therefore different for each planetary bodies. The method aims at recovering with a maximum accuracy internal and external spherical harmonic coefficients of transients fields, whose ratio is used as a transfer function to retrieve the internal distribution of electrical conductivity. While for the Earth, a good proxy of the source field activity is the Dst index, no such proxy exists for other planets. Hence, for our study of Mars transient magnetic field from MGS, one of the major part of my work is the determination of an appropriate continuous proxy for the external variability. On Earth the external electromagnetic source is well known, and may be described by a spherical harmonic geometry dominated by the dipole term. This source field may be characterized using a magnetic activity index named the Dst index. The method has been tested on synthetic data generated within the framework of SWARM mission. This mission consists of a 3 satellites constellation. One of the main objectives is to infer the 3D electrical distribution in the deep Earth. SWARM synthetic data consist in a time series of spherical harmonic (SH) coefficients, external and internal, generated from a simple non-realistic 3D model. In this model, several regional and local conductors, in a radially symmetric 3 layers model have been embedded. Using this dataset, our method give satisfactory results. We have been able to obtain the external and internal SH coefficients - for the first SH degree, which is known to be the most energetic degree of the external source - using only one of the 3 synthetic time series. Then, the method has been used on real data from Ørsted. In this case, we had to pre-process the data to correct from ionospheric and aligned currents contributions. We developed a statistical analysis to remove the ionospheric field using 2 geomagnetic indices : AL and Kp. Hence, we have enlarged data toward higher and lower latitudinal zones than what has been done in previous works. Finally, we have been able to obtain 1D conductivity models, which fits reasonably with existing conductivity data in the deep Earth. Finally, we worked on Mars Global Surveyor (MGS) data. One of the most time consuming parts of this work was the determination of an appropriate continuous proxy for the external variability in the vicinity of Mars. Without any measurements of the IMF (Interplanetary Magnetic Field) during MGS sciences acquisition, we have used ACE (Advanced Composition Explorer) data. This satellite orbits around the L1 point of the Sun-Earth system, measuring solar wind magnetic characteristics. We have time-shifted ACE data to Mars position for 4 temporal windows where Mars and Earth were closed to the same Parker's spiral's arm, and finally determined a proxy explaining the major part of the variability observed in Mars data. Despite numerous gaps in MGS data, we have been able to establish the 1D conductivity distribution, fitting reasonably existing geochemical models. 
Although the method may be unstable in some cases, we obtained satisfactory results for the deep conductivity of the planet.
15

Velarde, Pajares Sandra Judith. "Building critical mass of tree growers for bioenergy: The case of Central West New South Wales, Australia". Phd thesis, Canberra, ACT : The Australian National University, 2016. http://hdl.handle.net/1885/143281.

Full text
Abstract
The progression of the bioenergy industry needs to address concerns regarding the security of feedstock supply and the related environmental sustainability. Traditional first-generation biofuel feedstocks (e.g. maize, soybeans) are being questioned in favour of more environmentally-sound second-generation biofuel feedstocks (e.g. trees, perennial grasses). However, as an emerging industry, the commercial use of second-generation biofuel feedstock sources has several challenges to overcome. One of these challenges is landholders’ willingness to plant second-generation crops on their farms. To understand the landholders’ perspectives, this thesis used a conceptual framework based on adoption of innovation and diffusion theory, and applied this framework to a case study in the Central West region of New South Wales, Australia. The research questions addressed were: 1) what factors underlie landholders’ willingness to plant bioenergy tree crops, 2) what are the landholders’ preferences in the design of contracts for planting these trees, and 3) what are the potential pathways to build a critical mass of tree growers for bioenergy. A mixed methods approach was used involving quantitative analytical tools (e.g. tobit and logit regressions, choice modelling, and break even analysis) and qualitative analytical tools (e.g. integrated analysis). Tobit and logit regression models estimates revealed three key traits that positively influence the decision to plant second-generation biofuel feedstocks: 1) the landholder’s proportion of unproductive land, 2) the landholder’s membership in farming related organisations, and 3) the landholder’s experience with planting blocks of trees. Conversely, the landholder’s older age-squared would negatively influence their decision to plant second-generation biofuel feedstocks. The choice model estimates revealed that landholders who had already planted blocks of trees would be less likely to need a flexible contract for planting trees as energy crops, while landholders with larger proportions of unproductive land would prefer higher returns. This thesis concludes that for a second-generation bioenergy industry to emerge, a critical mass of biomass growers needs to be secured; this can be achieved by developing interlinked pathways that include: 1) supportive policies, 2) local support and an innovation champion, and 3) corporate support and/or a potential biomass buyer or investor. This research has identified critical pathways that can be developed to progress the bioenergy industry in Australia. The proposed pathways can be used to explore actors’ participation and their potential roles in scaling up, and to better understand the process of building critical mass for a second-generation bioenergy industry.
16

Jorge, Gabriela André. "Avaliação da viabilidade de mapeamento das tarefas do consumidor em processos de serviços". Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/3/3136/tde-16052014-152206/.

Full text
Abstract
This dissertation examines the problem of lack of knowledge about the process of mapping and evaluating the tasks performed by consumers in real service processes. Indeed, the literature about the construction of maps that enable the visualization of tasks undertaken by consumers in service processes are still scarce, what motivates the development of research works that might contribute to the systematization and dissemination of such practice. In this context, this work aims at contributing to the management and control of service operations, by examining how the consumer participation in service processes can be mapped and measured. Thus, the following objectives are considered: a) Verify how the literature proposes the construction of maps to visualize the tasks performed by consumers in service processes; b) Apply the Consumption Map in cases of real service detailing how the construction of this map can be conducted, by identifying and measuring tasks undertaken by consumers; c) Identify the main difficulties encountered in the construction of Consumption Map and in the measurement of consumer tasks. The main tools identified in the literature review to address this kind of mapping were SIPOC, Blueprint, Consumption Map, Activity Chart and SERVPRO. A comparative analysis of these tools was developed seeking to highlight unique aspects of the way how they propose that consumer participation in the service process be displayed. To collect empirical data on the application of a mapping tool with the purpose of representing and analyzing the participation of consumer, the research explores the potential application of the Consumption Map tool, proposed by Womack and Jones (2006), in the study of tasks and interactions that compose service processes in which the consumer acts as co-producer, playing a fundamental role for the realization of the service. Therefore, the purpose of this research was limited to the analysis of service processes of "do it yourself" type with remote interaction and with face-to-face interaction between customer and provider, in cases in which process variability is low. The "do it yourself" type of service with remote interaction selected as the object of the study was a process of shared shopping by the internet, and for its mapping the data collection method adopted were the case study, using the service provider company as data source, and experiment, using its consumers as data source. For the "do it yourself" type of service with face-to-face interaction, it was selected the car parking process in a shopping center, and the data collection method adopted for its mapping was a survey conducted in situ with its consumers. At the end of the research, it is concluded that the construction and application of Consumption Map for service processes of "do it yourself" type, either in remote interaction mode or in face-to-face interaction mode are feasible and that the main difficulties in the construction of the maps refer to the time required for data collection and the adherence of participants to the research. Also, proposals for the development of future research related to the topic covered in this dissertation are listed.
17

Jhingree, Jacquelyn. "The effect of charge and temperature on gas phase protein conformational landscapes : an ion mobility mass spectrometry investigation". Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/the-effect-of-charge-and-temperature-on-gas-phase-protein-conformational-landscapes--an-ion-mobility-mass-spectrometry-investigation(1ecd7b47-eca8-4bcb-a13a-2b2606ade74b).html.

Full text
Abstract
The amino acid sequence of a protein determines its 3D fold, the ease with which its native structure is formed, its function, the conformational preferences sampled and the tendency to interact with itself (aggregation) and binding partners. In addition, certain conformational preferences can lead to dysfunction resulting in different diseased states in organisms. All of these conformations can be described by a protein's energy landscape; a native (functional) state being localised at the energy minimum. As protein dynamics is crucial to function it is important to monitor the sampling of different conformations. Thus the work in this thesis reports on two methods for monitoring protein conformation and conformational change in the gas phase using ion mobility mass spectrometry (IM-MS). The measurement from IM-MS methods allow the determination of a collision cross section (CCS) which is an indicator of a molecule's 3D shape. First, the effect of charge on protein structure is investigated by manipulation of protein charge, post electrospray ionisation (ESI), by exposure to radical anions of the electron transfer reagent, 1,3-dicyanaobenzene; the charge reduced products formed are the result of electron transfer to the charged protein without any dissociation (ETnoD). IM-MS is used to monitor the conformational preferences of the altered and unaltered precursor and its products. Secondly, intermediate (transient) conformers are formed by activating the charged protein in the source region of an instrument post ESI. Activation of the protein precursor allows the sampling of different conformational preferences after energetic barriers have been overcome; IM-MS following activation allows for the monitoring of protein conformational change before and after. Further, variable temperature (VT) IM-MS allows for the deduction of intermediate structures with a focus on measurements at cryogenic temperatures whereby intermediate structures can be 'frozen out' post activation; intermediate structures which would otherwise anneal out at room temperature. With both methods a range of conformer populations are mapped for different protein molecules sampled upon different energetic inputs (via activation) and the disruption of intramolecular neutralising contacts/salt bridges (via charge reduction) one of the main interactions responsible for maintaining the structural integrity (3D fold) of proteins.
18

Tomasini, Jérôme. "Géométrie combinatoire des fractions rationnelles". Thesis, Angers, 2014. http://www.theses.fr/2014ANGE0032/document.

Full text
Abstract
The main topic of this thesis is to study, thanks to simple combinatorial tools, various geometric structures coming from the action of a complex polynomial or a rational function on the sphere. The first structure concerns separatrix solutions of polynomial or rational vector fields. We will establish several combinatorial models of these planar maps, as well as a closed formula enumerating the different topological structures that arise in the polynomial settings. Then, we will focus on branched coverings of the sphere. We establish a combinatorial coding of these mappings using the concept of balanced maps, following an original idea of W. Thurston. This combinatorics allows us to prove (geometrically) several properties about branched coverings, and gives us a new approach and perspective to address the still open Hurwitz problem. Finally, we discuss a dynamical problem represented by primitive majors. The utility of these objects is to allow us to parameterize dynamical systems generated by the iterations of polynomials. This approach will enable us to construct a bijection between parking functions and Cayley trees, and to establish a closed formula enumerating a certain type of trees related to both primitive majors and polynomial branched coverings
19

Zimmer, Leonardo. "Numerical study of soot formation in laminar ethylene diffusion flames". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/150754.

Full text
Abstract
The objective of this thesis is to study soot formation in laminar diffusion flames. For soot modeling, a semi-empirical two equation model is chosen for predicting soot mass fraction and number density. The model describes particle nucleation, surface growth and oxidation. For flame radiation, the radiant heat losses (gas and soot) is modelled by using the grey-gas approximation with Optically Thin Approximation (OTA). Different transport models (detailed or simplified) are evaluated. For the chemical kinetics, detailed and reduced approaches are employed. In the present work, the automatic reduction technique known as Flamelet Generated Manifold (FGM) is being explored. This reduction technique is able to deal with detailed kinetic mechanisms with reduced computational times. To assess the soot formation a variety of numerical experiments were done, from one-dimensional ethylene counterflow adiabatic flames to two-dimensional coflow ethylene flames with heat loss. In order to assess modeling limitations the mass and energy coupling between soot solid particles and gas-phase species are investigated and quantified for counterflow flames. It is found that the gas and soot radiation terms are of primary importance for flame simulations. The additional coupling terms (mass and thermodynamic properties) are generally a second order effect, but their importance increase as the soot amount increases As a general recommendation the full coupling should be taken into account only when the soot mass fraction, YS, is equal to or larger than 0.008. Then the simulation of soot is applied to two-dimensional ethylene co-flow flames with detailed chemical kinetics and explores the effect of different transport models on soot predictions. It is found that the gas and soot radiation terms are also of primary importance for flame simulations and that a first attempt to solve the two-dimensional ethylene co-flow flame can be done using a simplified transport model. Finally an implementation of the soot model with the FGM reduction technique is done and different forms for storing soot information in the manifold is explored. The best option tested in this work is to solve all flamelets with soot and gas-phase species in a coupled manner, and to store the soot rates in terms of specific surface area in the manifold. In the two-dimensional simulations, these soot rates are then retrieved to solve the additional equations for soot modeling. The results showed a good qualitative agreement between FGM solution and the detailed solution, but the high amount of soot in the system still imposes some challenges to obtain good quantitative results. Nevertheless, it was demonstrated the great potential of the method for predicting soot formation in multidimensional ethylene diffusion flames with reduced computational time.
20

Alecu, Lucian. "Une approche neuro-dynamique de conception des processus d'auto-organisation". Phd thesis, Université Henri Poincaré - Nancy I, 2011. http://tel.archives-ouvertes.fr/tel-00606926.

Full text
Abstract
In this manuscript we propose a cortically inspired neural architecture capable of developing an emergent, self-organizing form of processing. In order to implement this neural architecture in a distributed manner, we use the dynamic neural field model, a generic mathematical formalism designed to model the competition of neural activities at the mesoscopic cortical level. To analyse in detail the dynamical properties of the reference models of this formalism, we propose a formal criterion and an evaluation tool capable of examining and quantifying the dynamical behaviour of any neural field in different stimulation contexts. While this tool allows us to highlight the practical advantages of these models, it also reveals their inability to drive the implementation of self-organization processes (as implemented by the described architecture) towards satisfactory results. These results lead us to propose an alternative to the classical field models, based on a feedback-inhibition mechanism that implements a local process of neural regulation. Thanks to this mechanism, the new field model successfully implements the self-organization process described by the proposed cortically inspired architecture. Moreover, a detailed analysis confirms that this formalism retains the dynamical characteristics exhibited by the classical neural field models. These results open up the prospect of developing neural computing architectures for information processing in the design of bio-inspired software or robotic solutions.
21

Perout, Kateřina. "Vytvoření plánu projektu účasti na mezinárodních veletrzích". Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2020. http://www.nusl.cz/ntk/nusl-433412.

Full text
Abstract
This master's thesis focuses on the use of project management in practice, specifically its use in planning a company's participation in international trade fairs. The work builds on the theoretical foundations of project management, which are applied to concrete cases of trade-fair planning. The outcome of the thesis is a concrete procedure and checklist intended to support project managers in their planning.
22

Waddington, Kris Ian. "Diet and trophic role of western rock lobsters (Panulirus cygnus George) in temperate Western Australian deep-coastal ecosystems (35-60m)". University of Western Australia. School of Plant Biology, 2008. http://theses.library.uwa.edu.au/adt-WU2009.0035.

Texto completo
Resumen
[Truncated abstract] Removal of consumers through fishing has been shown to influence ecosystem structure and function by changing the biomass and composition of organisms occupying lower trophic levels. The western rock lobster (Panulirus cygnus), an abundant consumer along the temperate west coast of Australia, forms the basis of Australia's largest single-species fishery, with catches frequently exceeding 11000 tonnes annually. Despite their high abundance and commercial importance, the diet and trophic role of adult lobster populations in deep-coastal ecosystems (35-60 m) remain unknown. An understanding of the diet and trophic role of lobsters in these ecosystems is a key component of the assessment of ecosystem effects of the western rock lobster fishery. This study uses gut content and stable isotope analyses to determine the diet and trophic role of lobsters in deep-coastal ecosystems. Dietary analysis indicated that adult lobsters in deep-coastal ecosystems were primarily carnivorous, with diet reflecting the food available on the benthos. Gut content analyses indicate crabs (62 %) and amphipods/isopods (~10 %) are the most important lobster dietary sources. Stable isotope analysis indicates that the natural diet of lobsters in deep-coastal ecosystems is dominated by amphipods/isopods (contributing up to ~50 %) and crabs (up to ~75 %), with bivalves/gastropods, red algae and sponges of lesser importance (<10 % of diet each). The diet of lobsters in deep-coastal ecosystems differed from that reported for lobsters inhabiting shallow-water ecosystems in this region, reflecting differences in food availability and food choice between these ecosystems. Bait from the fishery was also determined (by stable isotope analyses) to be a significant dietary component of lobsters in deep-coastal ecosystems, contributing between 10 and 80 % of lobster food requirements at some study locations. '...' Given observed effects of organic matter addition in trawl fisheries, and also those associated with aquaculture, bait addition is likely to have implications for processes occurring within deep-coastal ecosystems in this region, particularly given their oligotrophic status, most likely by increasing the food available to scavenging species. Removal of lobsters from deep-coastal ecosystems may affect the composition and abundance of lobster prey communities through a reduction in predation pressure. Such effects have been demonstrated for other spiny lobster species. These effects are typically most observable amongst common prey taxa, which in other studies have commonly been herbivores. In deep-coastal ecosystems, crabs and amphipods/isopods are the most common prey taxa and the most likely to be affected. The ecosystem impacts of top-down control of non-herbivorous prey species are unknown, which constrains the inferences possible from this study. However, the establishment of 'no-take' areas in deep-coastal ecosystems would allow the ecosystem effects of lobster removal to be further assessed in these ecosystems. While data from the current study did not allow the ecosystem effects of lobster removal to be properly assessed, this study provides information regarding the ecology of western rock lobsters in previously unstudied ecosystems.
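As background for the stable-isotope diet estimates quoted above, such analyses typically rest on a linear mixing model of the general form below (a generic sketch; the specific mixing model, isotopes and trophic discrimination factors used in this thesis are not stated here):

\delta^{13}C_{\mathrm{consumer}} = \sum_{i} f_i \left( \delta^{13}C_{i} + \Delta_{i} \right), \qquad \sum_{i} f_i = 1, \; f_i \ge 0

where f_i is the proportional contribution of source i, \delta^{13}C_{i} its isotopic signature, and \Delta_{i} a trophic discrimination factor; an analogous equation is written for each isotope measured (e.g. \delta^{15}N).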
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Alecu, Lucian. "Une approche neuro-dynamique de conception des processus d'auto-organisation". Electronic Thesis or Diss., Nancy 1, 2011. http://www.theses.fr/2011NAN10031.

Texto completo
Resumen
Dans ce manuscrit nous proposons une architecture neuronale d'inspiration corticale, capable de développer un traitement émergent de type auto-organisation. Afin d'implémenter cette architecture neuronale de manière distribuée, nous utilisons le modèle de champs neuronaux dynamiques, un formalisme mathématique générique conçu pour modéliser la compétition des activités neuronales au niveau cortical mésoscopique. Pour analyser en détail les propriétés dynamiques des modèles de référence de ce formalisme, nous proposons un critère formel et un instrument d'évaluation, capable d'examiner et de quantifier le comportement dynamique d'un champ neuronal quelconque dans différents contextes de stimulation. Si cet instrument nous permet de mettre en évidence les avantages pratiques de ces modèles, il nous révèle aussi l'incapacité de ces modèles à conduire l'implantation des processus d'auto-organisation (implémenté par l'architecture décrite) vers des résultats satisfaisants. Ces résultats nous amènent à proposer une alternative aux modèles classiques de champs, basée sur un mécanisme de rétro-inhibition, qui implémente un processus local de régulation neuronale. Grâce à ce mécanisme, le nouveau modèle de champ réussit à implémenter avec succès le processus d'auto-organisation décrit par l'architecture proposée d'inspiration corticale. De plus, une analyse détaillée confirme que ce formalisme garde les caractéristiques dynamiques exhibées par les modèles classiques de champs neuronaux. Ces résultats ouvrent la perspective de développement des architectures de calcul neuronal de traitement d'information pour la conception des solutions logicielles ou robotiques bio-inspirées
In this work we propose a cortically inspired neural architecture capable of developing an emergent process of self-organization. In order to implement this neural architecture in a distributed manner, we use the dynamic neural field paradigm, a generic mathematical formalism aimed at modeling the competition between neural activities at a mesoscopic level of the cortical structure. In order to examine in detail the dynamic properties of the classical models, we design a formal criterion and an evaluation instrument capable of analysing and quantifying the dynamic behavior of any neural field in specific contexts of stimulation. While this instrument highlights the practical advantages of such models, it also reveals their inability to support the implementation of the self-organization process (carried out by the described architecture) with satisfactory results. These results lead us to propose an alternative to the classical neural field models, based on a back-inhibition mechanism which implements a local process of neural activity regulation. Thanks to this mechanism, the new neural field model achieves successful results in the implementation of the self-organization process described by our cortically inspired neural architecture. Moreover, a detailed analysis confirms that this new neural field retains the dynamic features of the classical field models. The results described in this thesis open perspectives for developing neuro-computational architectures for the design of bio-inspired software or robotic applications.
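For context, the classical dynamic neural field models evaluated here are usually written in the Amari form below (the standard textbook equation; the thesis's contribution is a modified field with a back-inhibition term, which is not shown):

\tau \,\frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int_{\Omega} w(x-x')\, f\bigl(u(x',t)\bigr)\, dx' + I(x,t) + h

where u(x,t) is the potential of the field at position x, w a lateral interaction kernel (local excitation, distal inhibition), f a firing-rate nonlinearity, I the input stimulation and h a resting level.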
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Sosna, Petr. "Dynamický model nelineárního oscilátoru s piezoelektrickou vrstvou". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-443718.

Texto completo
Resumen
This master's thesis analyses the behaviour of a magneto-piezoelastic vibrating beam. In the theoretical part, discretized parameters are derived that describe the real system as a single-degree-of-freedom model. This model is then used for a qualitative as well as quantitative analysis of the behaviour of this harvester. The frequency response of the harmonically excited system is examined in two- and three-parameter analyses as a function of the excitation amplitude, the electrical load and the distance between the magnets. The last-mentioned parameter is the main one in this work, so the influence of the magnet distance is also studied with the help of bifurcation diagrams. These diagrams were additionally used to build an oscillation "map" which, for each loading condition, shows which magnet distance should be set so that the most energy is generated. The thesis is completed with examples of several phenomena that can strongly affect the behaviour of the system when the excitation is not purely harmonic.
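Single-degree-of-freedom models of such magneto-piezoelastic harvesters are commonly written in the lumped bistable form below (an illustrative formulation from the energy-harvesting literature; the symbols m, c, \theta, C_p and R are generic placeholders, not the discretized parameters derived in the thesis):

m\ddot{x} + c\dot{x} + \frac{dU}{dx}(x) + \theta v = F(t), \qquad C_p \dot{v} + \frac{v}{R} = \theta \dot{x}

where x is the beam tip displacement, v the voltage across the load R, \theta the electromechanical coupling, C_p the piezoelectric capacitance, and U(x) a potential whose shape (mono- or bistable) is set by the magnet spacing.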
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Escudie, Antony. "From the observation of UHECR signal in [1-200] MHz to the composition with the CODALEMA and EXTASIS experiments". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2019. http://www.theses.fr/2019IMTA0145/document.

Texto completo
Resumen
Malgré la découverte des rayons cosmiques il y a plus de cent ans, de nombreuses questions restent aujourd’hui sans réponse : que sont les rayons cosmiques, comment sont-ils créés et d’où viennent-ils ? Depuis 2002, l’instrument CODALEMA, basé sur le site de l’Observatoire de radio-astronomie de Nançay, étudie les rayons cosmiques d’ultra haute énergie (RCUHE, au delà de 1017 eV) qui arrivent dans l’atmosphère terrestre. Leur faible flux rend impossible une détection directe à ces énergies. Ces rayons cosmiques vont cependant interagir avec les atomes de l’atmosphère, engendrant une cascade de particules secondaires chargées communément appelée gerbe de particules, détectable depuis le sol, et dont on va extraire des informations sur le rayon cosmique primaire. L’objectif est de remonter aux caractéristiques du primaire ayant engendré la gerbe de particules, donc de déterminer sa direction d’arrivée, sa nature et son énergie. Lors du développement de la gerbe, les particules chargées en mouvement engendrent notamment l’émission d’une impulsion de champ électrique très brève, que CODALEMA détecte au sol avec des antennes radio dédiées, sur une large bande de fréquences (entre 1 et 200 MHz). L’avantage majeur de la radio-détection est sa sensibilité au profil complet de la gerbe et son cycle utile proche des 100 %, qui pourrait permettre d’augmenter le nombre d’évènements détectés à très haute énergie, et donc de mieux contraindre les propriétés des RCUHE. Au fil des ans, des efforts importants ont été consacrés à la compréhension de l’émission radio-électrique des grandes gerbes de particules dans la gamme [20-80] MHz mais, malgré certaines études menées jusqu’aux années 90, la bande [1-10] MHz est restée inutilisée pendant près de 30 ans. L’une des contributions de cette thèse porte sur l’expérience EXTASIS, adossée à CODALEMA, qui vise à ré-investiguer cette bande et à étudier la contribution dite de ”mort subite”, impulsion de champ électrique créé par les particules de la gerbe lors de leur arrivée et de leur disparition au sol. Nous présentons la configuration instrumentale d’EXTASIS, composée de 7 antennes basses fréquences exploitées dans [1.7-3.7] MHz, couvrant environ 1 km2. Nous rapportons l’observation, sur 2 ans, de 25 évènements détectés en coïncidence par CODALEMA et EXTASIS et estimons un seuil de détection de 23±4 μV/m à partir de comparaisons avec des simulations. Nous rapportons également une forte corrélation entre l’observation du signal basse fréquence et le champ électrique atmosphérique. L’autre contribution majeure de cette thèse porte sur l’étude du champ électrique émis par les gerbes et l’amélioration des performances du détecteur dans la bande [20-200] MHz. Nous proposons dans un premier temps une méthode de calibration des antennes de CODALEMA en utilisant l’émission radio de la Galaxie. Nous investiguons aussi plusieurs algorithmes de réjection de bruit afin d’améliorer la sélectivité des évènements enregistrés. Nous présentons ensuite une méthode de reconstruction des paramètres du rayon cosmique primaire, mettant en oeuvre des comparaisons combinant des informations de polarisation et fréquentielles entre les données enregistrées et des simulations, nous menant enfin à une proposition de composition en masse des rayons cosmiques détectés
Despite the discovery of cosmic rays more than one hundred years ago, many questions remain unanswered today: what are cosmic rays, how are they created and where do they come from? Since 2002, the CODALEMA instrument, located within the Nançay Radio Observatory, has studied the ultra-high-energy cosmic rays (UHECR, above 10^17 eV) arriving in the Earth's atmosphere. Their low flux makes it impossible to detect them directly at these energies. These cosmic rays, however, interact with the atoms of the atmosphere, generating a cascade of secondary charged particles, commonly known as an extensive air shower (EAS), detectable at ground level, from which information on the primary cosmic ray can be extracted. The objective is to go back to the characteristics of the primary that generated the EAS, and thus to determine its arrival direction, its nature and its energy. During the development of the shower, the charged particles in motion generate a fast electric-field transient, detected at ground level by CODALEMA with dedicated radio antennas over a wide frequency band (between 1 and 200 MHz). The major advantage of radio detection is its sensitivity to the whole profile of the shower and its duty cycle close to 100 %, which could increase the number of events detected at very high energy and thus better constrain the properties of UHECRs. Over the years, significant efforts have been devoted to understanding the radio emission of extensive air showers in the [20-80] MHz range but, despite some studies carried out until the nineties, the [1-10] MHz band has remained unused for nearly 30 years. One of the contributions of this thesis concerns the EXTASIS experiment, supported by the CODALEMA instrument, which aims to reinvestigate the [1-10] MHz band and to study the so-called "sudden death" contribution, the impulsive electric field expected to be created by the shower particles at their arrival and disappearance at the ground. We present the instrumental set-up of EXTASIS, composed of 7 low-frequency antennas operated in [1.7-3.7] MHz and covering approximately 1 km². We report the observation, over 2 years, of 25 low-frequency events detected in coincidence by CODALEMA and EXTASIS and estimate a detection threshold of 23±4 μV/m from comparisons with simulations. We also report a strong correlation between the observation of the low-frequency signal and the atmospheric electric field. The other major contribution of this thesis concerns the study of the electric field emitted by EAS and the improvement of the detector's performance in the [20-200] MHz band. First, we propose a calibration method for the CODALEMA antennas using the radio emission of the Galaxy. We also investigate several noise-rejection algorithms to improve the selectivity of recorded events. We then present a method for reconstructing the parameters of the primary cosmic ray, implementing systematic comparisons combining polarization and frequency information between the recorded data and simulations, leading finally to a proposal for the mass composition of the detected cosmic rays.
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

DiBari, Michael Jr. "Advancing the Civil Rights Movement: Race and Geography of Life Magazine's Visual Representation, 1954-1965". Ohio University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1304690025.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Pinho, Deyna. "Contribuição à petrografia de pedra britada". Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/44/44137/tde-25082007-000528/.

Texto completo
Resumen
O conhecimento das propriedades físico-químicas da composição mineralógica dos agregados é de extrema importância para o não comprometimento da obra em que serão empregados. Desse modo, o conhecimento da petrografia, mineralogia e geologia das rochas-fonte para brita também são extremamente necessárias. O principal objetivo deste trabalho foi gerar informações sobre a geologia, mercado produtor e petrografia das rochas-fonte da pedra britada nas principais regiões produtoras do país. As informações disponíveis neste segmento da mineração são escassas, principalmente devido às próprias características do setor onde os investimentos em pesquisas geológicas geralmente são escassas e por vezes pouco exigidas. Os cinco principais pólos produtores de pedra britada, alvos de estudo deste trabalho, incluem as cinco maiores regiões metropolitanas do país: São Paulo,Minas Gerais, Rio de Janeiro , Paraná , Rio Grande do Sul. São locais que possuem diferentes rochas-fonte de brita para cada centro produtor, devido à diversidade geológica e abundância daquelas nestes centros. Assim sendo, na região de São Paulo capital a principal rocha-fonte utilizada são granitos e gnaisses provenientes do Embasamento; na região de Belo Horizonte são os calcários provenientes do Grupo Bambuí; na região do Rio de Janeiro capital são os sienitos alcalinos, localizados em diversos corpos alcalinos intrusivos e gnaisses; na região de Curitiba (RMC) são calcários (Formações Perau e Votuverava) e migmatitos extraídos de complexos migmatíticos; e na região de Porto Alegre (RMPA) são predominatemente basaltos e dacitos da Formação Serra Geral. Neste trabalho foi gerado um mapa geológico com localização das pedreiras ativas no período de 2004-2006 para cada região metropolitana relativa à capital de cada Estado. Em cada região foram selecionadas as minerações representativas de acordo com a geologia (rocha-fonte) e produtividade e feitas amostragens e mapeamento em frentes de lavra para a realização de análises petrográficas. As 180 amostras coletadas nas diferentes regiões metropolitanas foram analisadas petrograficamente de forma macroscópica, selecionadas e analisadas na forma microscópica, com base nas normas ABNT e recomendações do Laboratório de Petrologia e Tecnologia de Rochas do IPT. As principais características observadas foram: a composição mineralógica, texturas, estruturas, presença de minerais deletérios, grau de alteração deutérica e estado microfissural. Essas características intrínsecas da rocha-fonte influenciam diretamente a forma e a composição do material britado, e podem dificultar sua aplicação ou mesmo comprometê-la, tanto por motivo de geração de reação álcali-agregado com ligantes quanto por comprometer a resistência mecânica exigida na mistura. O desconhecimento dessas características muitas vezes gera um baixo aproveitamento dos materiais, principalmente finos de pedreira, que se acumulam em pilhas de rejeito ao derredor das empresas mineradoras podendo causar sérios problemas ambientais. Portanto, o trabalho gerou informações para uma melhor otimização e utilização das matérias-primas ou rochas-fonte de brita, contribuindo também indiretamente na redução desses problemas ambientais que atingem as principais regiões urbanas do país.
It is extremely important to know the physical and chemical properties of the aggregate mineralogical composition so that the construction in which the aggregates will be used is not compromised. In this sense, knowing the petrography and mineralogy is as necessary as knowing the geology of the rock deposit to be developed as a source of crushed stone. The main purpose of this work was to generate information on the geology, market and petrography of the source rocks of crushed stone in the main producing areas of Brazil. This type of information is not commonly available, especially due to this sector's characteristics, where investments in geological research are usually scarce and rarely required. The five main crushed-stone-producing states, and therefore the centers of production on which the present work focused as case studies, are: São Paulo, Minas Gerais, Rio de Janeiro, Paraná and Rio Grande do Sul. Each production center presents different types of crushed stone, mainly because of the geological diversity and abundance of the source rock in these places. In the region of the capital of São Paulo the main source rocks are granite and gneiss extracted from the basement; in Belo Horizonte they are carbonates from the Bambuí Group; in Rio de Janeiro, alkaline syenites, localized in diverse intrusive alkaline bodies, and gneiss; in the region of Curitiba they are carbonates (Perau and Votuverava Formations) and migmatites extracted from migmatite complexes; finally, in the region of Porto Alegre (RMPA), they are basalts and dacites from the Serra Geral Formation. The rock mines in the urban regions surrounding the state capitals, which were active in the period from 2004 to 2006, are shown on the geological maps generated for the present work, one map for each urban region. The most important mines were selected according to the geology of the source rock and their productivity, and sampling and mapping or description of the quarry working faces were carried out in order to proceed with the petrographic analysis. The 180 samples collected in the different urban regions underwent macroscopic petrographic analysis, after which they were selected and analysed microscopically, according to the ABNT norms and the recommendations of the Laboratory of Petrology and Rock Technology of IPT. The main characteristics observed were: mineral composition, texture, structure, presence of deleterious minerals, degree of deuteric alteration and microfissural state. These intrinsic characteristics of the source rock directly influence the form and composition of the crushed stone and might hinder or even compromise its use, either through alkali-aggregate reaction with binders or through mechanical resistance lower than that required in the mixture. The lack of knowledge of these characteristics often leads to poor use of the material, especially of the quarry fines, which end up piled as reject around the mines and can cause serious environmental problems. Therefore, the present work has generated relevant information that can be used to optimize and better use the raw material and source rocks of crushed stone. It might also contribute indirectly to diminishing the environmental problems which are evident in the main urban regions of the country.
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Gallet, Florian. "Modélisation de l'évolution du moment cinétique des étoiles de faible masse". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENY055/document.

Texto completo
Resumen
En 1972, Skumanich découvre une relation empirique unique entre la période de rotation de surface des étoiles G et leur âge sur la séquence principale. Cette découverte ouvrit alors une nouvelle voie pour la datation stellaire : la gyrochronologie. Dès lors, bon nombre d'auteurs, entre la fin des années 80 et 90, se sont intéressés à l'évolution de la vitesse de rotation de surface des étoiles de faible masse ($M_*$ = 0.4 $M_\odot$ - 1.1 $M_\odot$). Les premiers modèles phénoménologiques sur le sujet sont alors nés. L'évolution de la vitesse de rotation de ces étoiles commence à être raisonnablement bien reproduite par la classe de modèles paramétriques que je présente dans cette thèse. Par manque de descriptions théoriques satisfaisantes, seuls les effets globaux des mécanismes physiques impliqués sont ici décrits. Le principal enjeu est d'étudier le cadre et la façon dont le moment cinétique stellaire est impacté par ces processus tout en contraignant leurs principales caractéristiques. Au cours de ma thèse, j'ai modélisé les trajets rotationnels des enveloppes externes et médianes des distributions de période de rotation de 18 amas stellaires entre 1 Myr et 1 Gyr. Ceci m'a permis d'analyser la dépendance temporelle des mécanismes physiques impliqués dans l'évolution du moment cinétique des étoiles de type solaire. Les résultats que j'ai obtenus montrent que l'évolution de la rotation différentielle interne impacte fortement la convergence rotationnelle (relation empirique de Skumanich), l'évolution de l'abondance de surface en lithium, et les intensités du champ magnétique généré par effet dynamo. En plus de reproduire ces enveloppes externes, le modèle que j'ai développé fournit des contraintes sur les mécanismes de redistribution interne du moment cinétique et sur les durées de vie des disques circumstellaires, supposés responsables de la régulation rotationnelle observée durant les quelques premiers millions d'années de la pré-séquence principale. L'extension du modèle aux étoiles moins massives (0.5 et 0.8 $M_\odot$) que j'ai réalisée a également fourni la dépendance en masse de ces différents processus physiques. Cette étape a notamment ajouté de fortes contraintes sur les temps caractéristiques associés au transport de moment cinétique entre le coeur et l'enveloppe, sur l'efficacité du freinage magnétique vraisemblablement reliée à un changement de topologie des étoiles de type solaire vers celles de 0.5 $M_\odot$, et sur l'histoire rotationnelle, interne comme de surface, des étoiles entre 1 Myr et 1 Gyr.
In 1972, Skumanich discovered a unique empirical relationship between the surface rotation period of G stars and their age on the main sequence. This discovery opened a new path for stellar dating: gyrochronology. Subsequently, many authors in the late 80s and early 90s became interested in the evolution of the surface angular velocity of low-mass stars ($M_*$ = 0.4 $M_\odot$ - 1.1 $M_\odot$). The first phenomenological models on the subject were born. The angular velocity evolution of these stars is beginning to be reasonably well reproduced by the class of parametric models that I present in this thesis. Because of the lack of adequate theoretical descriptions, only the overall effects of the physical mechanisms involved are described here. The main issue is to study the framework and how the stellar angular momentum is affected by these processes, and to constrain their main characteristics. Over the course of my thesis, I modelled the rotational tracks of the external and median envelopes of the rotation period distributions of 18 stellar clusters between 1 Myr and 1 Gyr. This allowed me to analyse the time dependence of the physical mechanisms involved in the angular momentum evolution of solar-type stars. The results I obtained show that the evolution of the internal differential rotation significantly impacts the rotational convergence (the empirical Skumanich relationship), the evolution of the surface lithium abundance, and the intensity of the magnetic field generated by the dynamo effect. In addition to reproducing these external envelopes, the model I developed provides constraints on the mechanisms of internal redistribution of angular momentum and on the lifetimes of circumstellar disks, which are held responsible for the rotational regulation observed during the first few million years of the pre-main sequence. The extension of the model to less massive stars (0.5 and 0.8 $M_\odot$) that I performed also provided the mass dependence of these physical processes. Most specifically, this step added strong constraints on the characteristic times associated with the transport of angular momentum between the core and the envelope, on the efficiency of magnetic braking, likely related to a change of topology from solar-type stars to those of 0.5 $M_\odot$, and on the internal and surface rotational history of stars from 1 Myr to 1 Gyr.
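For reference, the empirical Skumanich relationship cited above is usually written as the simple power law below (the classic 1972 form underpinning gyrochronology; the thesis's parametric model refines this picture with additional physics):

\Omega_\star \propto t^{-1/2} \quad \Longleftrightarrow \quad v_{\mathrm{rot}} \propto t^{-1/2}

so that, once calibrated, a measured rotation rate of a main-sequence solar-type star can be inverted into an age estimate.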
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Bernard, Yann. "Calcul neuromorphique pour l'exploration et la catégorisation robuste d'environnement visuel et multimodal dans les systèmes embarqués". Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0295.

Texto completo
Resumen
Tandis que la quête pour des systèmes de calcul toujours plus puissants se confronte à des contraintes matérielles de plus en plus fortes, des avancées majeures en termes d’efficacité de calcul sont supposées bénéficier d’approches non conventionnelles et de nouveaux modèles de calcul tels que le calcul inspiré du cerveau. Le cerveau est une architecture de calcul massivement parallèle avec des interconnexions denses entre les unités de calcul. Les systèmes neurobiologiques sont donc une source d'inspiration naturelle pour la science et l'ingénierie informatiques. Les améliorations technologiques rapides des supports de calcul ont récemment renforcé cette tendance à travers deux conséquences complémentaires mais apparemment contradictoires : d’une part en offrant une énorme puissance de calcul, elles ont rendu possible la simulation de très grandes structures neuronales comme les réseaux profonds, et d’autre part en atteignant leurs limites technologiques et conceptuelles, elles ont motivé l'émergence de paradigmes informatiques alternatifs basés sur des concepts bio-inspirés. Parmi ceux-ci, les principes de l’apprentissage non supervisé retiennent de plus en plus l’attention.Nous nous intéressons ici plus particulièrement à deux grandes familles de modèles neuronaux, les cartes auto-organisatrices et les champs neuronaux dynamiques. Inspirées de la modélisation de l’auto-organisation des colonnes corticales, les cartes auto-organisatrices ont montré leur capacité à représenter un stimulus complexe sous une forme simplifiée et interprétable, grâce à d’excellentes performances en quantification vectorielle et au respect des relations de proximité topologique présentes dans l’espace d’entrée. Davantage inspirés des mécanismes de compétition dans les macro-colonnes corticales, les champs neuronaux dynamiques autorisent l’émergence de comportements cognitifs simples et trouvent de plus en plus d’applications dans le domaine de la robotique autonome notamment.Dans ce contexte, le premier objectif de cette thèse est de combiner cartes auto-organisatrices (SOM) et champs neuronaux dynamiques (DNF) pour l’exploration et la catégorisation d’environnements réels perçus au travers de capteurs visuels de différentes natures. Le second objectif est de préparer le portage de ce calcul de nature neuromorphique sur un substrat matériel numérique. Ces deux objectifs visent à définir un dispositif de calcul matériel qui pourra être couplé à différents capteurs de manière à permettre à un système autonome de construire sa propre représentation de l’environnement perceptif dans lequel il évolue. Nous avons ainsi proposé et évalué un modèle de détection de nouveauté à partir de SOM. Les considérations matérielles nous ont ensuite amené à des optimisations algorithmiques significatives dans le fonctionnement des SOM. Enfin, nous complémenté le modèle avec des DNF pour augmenter le niveau d'abstraction avec un mécanisme attentionnel de suivi de cible
As the quest for ever more powerful computing systems faces ever-increasing material constraints, major advances in computing efficiency are expected to benefit from unconventional approaches and new computing models such as brain-inspired computing. The brain is a massively parallel computing architecture with dense interconnections between computing units. Neurobiological systems are therefore a natural source of inspiration for computer science and engineering. Rapid technological improvements in computing media have recently reinforced this trend through two complementary but seemingly contradictory consequences: on the one hand, by providing enormous computing power, they have made it possible to simulate very large neural structures such as deep networks, and on the other hand, by reaching their technological and conceptual limits, they have motivated the emergence of alternative computing paradigms based on bio-inspired concepts. Among these, the principles of unsupervised learning are receiving increasing attention. We focus here on two main families of neural models, self-organizing maps (SOM) and dynamic neural fields (DNF). Inspired by the modeling of the self-organization of cortical columns, self-organizing maps have shown their ability to represent a complex stimulus in a simplified and interpretable form, thanks to excellent performance in vector quantization and to the preservation of the topological proximity relationships present in the input space. More inspired by competition mechanisms in cortical macro-columns, dynamic neural fields allow the emergence of simple cognitive behaviours and find more and more applications, notably in the field of autonomous robotics. In this context, the first objective of this thesis is to combine self-organizing maps and dynamic neural fields for the exploration and categorisation of real environments perceived through visual sensors of different natures. The second objective is to prepare the porting of this neuromorphic computation onto a digital hardware substrate. These two objectives aim to define a hardware computing device that can be coupled to different sensors in order to allow an autonomous system to construct its own representation of the perceptual environment in which it operates. We therefore proposed and evaluated a novelty detection model based on self-organizing maps. Hardware considerations then led us to significant algorithmic optimisations of SOM operations. Finally, we complemented the model with dynamic neural fields to increase the level of abstraction with an attentional target-tracking mechanism.
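As background for the self-organizing maps discussed above, the sketch below shows a minimal online Kohonen SOM in Python (the grid size, decay schedules and toy data are illustrative assumptions only; the thesis's model further couples SOMs with dynamic neural fields and targets a hardware implementation, which is not reflected here):

import numpy as np

def train_som(data, grid_h=10, grid_w=10, epochs=20,
              lr0=0.5, sigma0=3.0, seed=0):
    """Minimal online Kohonen self-organizing map."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    # One prototype (weight vector) per map unit
    weights = rng.random((grid_h, grid_w, n_features))
    # Grid coordinates used to compute topological neighbourhoods
    ys, xs = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
    n_steps = epochs * len(data)
    step = 0
    for epoch in range(epochs):
        for x in rng.permutation(data):
            # Exponentially decaying learning rate and neighbourhood radius
            frac = step / n_steps
            lr = lr0 * np.exp(-3.0 * frac)
            sigma = sigma0 * np.exp(-3.0 * frac)
            # Best matching unit (BMU): unit whose prototype is closest to x
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood centred on the BMU
            grid_d2 = (ys - bmu[0]) ** 2 + (xs - bmu[1]) ** 2
            h = np.exp(-grid_d2 / (2.0 * sigma ** 2))
            # Kohonen update: pull neighbouring prototypes towards x
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

if __name__ == "__main__":
    samples = np.random.default_rng(1).random((500, 3))  # toy 3-D inputs
    som = train_som(samples)
    print(som.shape)  # (10, 10, 3)

The update rule pulls the best-matching unit and its topological neighbours towards each input, which is what yields the vector quantization and topology preservation mentioned in the abstract.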
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Magin, Thierry. "A model for inductive plasma wind tunnels". Doctoral thesis, Universite Libre de Bruxelles, 2004. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211179.

Texto completo
Resumen
A numerical model for inductive plasma wind tunnels is developed. This model provides the flow conditions at the edge of a boundary layer in front of a thermal protection material placed in the plasma jet stream at the outlet of an inductive torch. The governing equations for the hydrodynamic field are derived from kinetic theory. The electromagnetic field is deduced from the Maxwell equations. The transport properties of partially ionized and unmagnetized plasmas in weak thermal nonequilibrium are derived from the Boltzmann equation. A kinetic database of transport collision integrals is given for the Martian atmosphere. Multicomponent transport algorithms based upon Krylov subspaces are compared to mixture rules in terms of accuracy and computational cost. The composition and thermodynamic properties in local thermodynamic equilibrium are computed from semi-classical statistical mechanics. The electromagnetic and hydrodynamic fields of an inductive wind tunnel are presented. A total pressure measurement technique is thoroughly investigated by means of numerical simulations.


Doctorat en sciences appliquées

Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Karim, Marwah. "Hijacking of cullin4-based E3 ligases confers non-proteolytic ubiquitination of influenza A virus PB2 protein". Thesis, Université de Paris (2019-....), 2019. http://www.theses.fr/2019UNIP7187.

Texto completo
Resumen
Le système ubiquitine protéasome (UPS), régule un grand nombre de processus cellulaires, en catalysant l’ubiquitination des protéines. Les virus exploitent l'UPS pour favoriser l’infection, et échapper à la réponse immunitaire de l'hôte. Nous nous sommes concentrés sur l'interaction entre la protéine PB2 de la polymérase du virus de l'influenza A (IAV) et des enzymes E3 ubiquitine ligases à cullin 4 (CRL4), en particulier les facteurs les protéines DDB1, DCAF11 et DCAF12L1 les composant. Nous avons montré que deux CRL4s sont des régulateurs positifs de l'infection virale, et qu’ils catalysent l’ubiquitination de la protéine PB2 pendant l'infection. Cette ubiquitination, constituée par des chaînes poly-ubiquitine K29, n’induit pas la dégradation de PB2. Les CRL4s peuvent interagir avec PB2 lorsqu’elle est engagée dans le complexe de polymérase virale, mais n’affectent ni la transcription ni la réplication des segments viraux. Les CRL4 catalysent l'ubiquitination de différentes lysines sur PB2, qui pourraient supporter des fonctions distinctes de PB2. Nos travaux fournissent la première caractérisation d'une PTM non protéolytique de PB2, qui semble être nécessaire à l’infection optimale par les IAV. De plus, en utilisant la purification par affinité suivie d'une spectrométrie de masse (AP/MS), nous avons identifié des ensembles distincts de facteurs cellulaires se liant aux CRL4s pendant l'infection. Ces résultats pointent vers une modification du protéome cellulaire ciblé par ces CRL4 E3 ligases pro-virales pendant l'infection par les IAV
The ubiquitin proteasome system (UPS) regulates numerous cell processes through the ubiquitination of proteins. A vast interplay exists between viral proteins and the host UPS, promoting successful infection and escape from the host's immune response. We focused on the interaction between the influenza A virus (IAV) polymerase protein PB2 and factors of the multi-component E3 ubiquitin ligase complexes based on cullin 4 (CRL4), namely DDB1, DCAF11 and DCAF12L1 (designated as CRL4s). We found that PB2 undergoes a non-proteolytic ubiquitination, catalyzed by two CRL4s during infection. These CRL4s are positive regulators of viral infection, required for optimal virion production and normal progression of the viral cycle. We identified K29-linked ubiquitin chains as the main components of the non-proteolytic PB2 ubiquitination mediated by the CRL4s, thereby providing the first example of a role for this atypical ubiquitin linkage in the regulation of a viral infection. Although the CRL4 E3 ligases are able to bind PB2 when it is engaged in the viral polymerase complex, they did not affect the transcription and replication of viral segments. The two CRL4 ligases catalyzed the ubiquitination of different lysines on PB2, which might support distinct functions of PB2. Our work provides the first characterization of a non-proteolytic PTM of PB2, which might be essential for the successful outcome of an IAV infection. Furthermore, using affinity purification followed by mass spectrometry (AP/MS), we identified distinct sets of cellular factors binding to the CRL4s during IAV infection. These results point towards a rewiring of the cellular proteome targeted by the pro-viral CRL4 E3 ligases during IAV infection.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Ribeiro, Alexsandra Christianne Malaquias de Moura. "Avaliação do padrão de crescimento na síndrome de Noonan em pacientes com mutações identificadas nos genes PTPN11, SOS1, RAF1 e KRAS". Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/5/5135/tde-17062011-160529/.

Texto completo
Resumen
A Síndrome de Noonan (SN) é caracterizada por baixa estatura proporcionada de início pós-natal, dismorfismos faciais, cardiopatia congênita e deformidade torácica. A frequência da SN é estimada entre 1:1000 e 1:2500 nascidos vivos, com distribuição semelhante em ambos os sexos. A herança é autossômica dominante com penetrância completa, porém a maioria dos casos é esporádica. Até o momento, mutações em genes da via RAS-MAPK (PTPN11, KRAS, SOS1, RAF1, MEK1, NRAS e SHOC2) foram identificadas em aproximadamente 70% dos pacientes. Uma das principais características fenotípicas da SN é a baixa estatura pós-natal, embora o mecanismo fisiopatológico do déficit de crescimento nesta síndrome ainda não esteja totalmente esclarecido. Estudos que avaliaram o padrão de crescimento linear em crianças com SN foram realizados anteriormente ao conhecimento do diagnóstico molecular dessa síndrome. No presente estudo, avaliamos a frequência de mutação nos genes PTPN11, SOS1, RAF1 e KRAS em 152 pacientes com SN e o padrão de crescimento linear (altura) e ponderal [índice de massa corpórea (IMC)] dos pacientes com mutação identificada. No total, mutações nos genes relacionados foram encontradas em 99 pacientes (65%) do nosso estudo, com predominância do gene PTPN11 (47%), seguido do SOS1 (9%), RAF1 (7%) e KRAS (3%). Foram construídas curvas específicas para SN de Altura e IMC para idade e sexo utilizando o método LMS. Os pacientes com SN apresentaram crescimento pré-natal preservado, porém o comprometimento do crescimento pós-natal foi observado desde o primeiro ano de vida, atingindo uma altura final de -2,5 e -2,2 desvios-padrão da média para população brasileira em homens e mulheres, respectivamente. O prejuízo da altura foi maior nos pacientes com mutação no gene RAF1 em comparação com os genes PTPN11 e SOS1. O IMC dos pacientes com SN apresentou queda de 1 desvio-padrão em relação à média da população brasileira normal. O comprometimento do IMC foi menor nos pacientes carreadores de mutação no RAF1. Pacientes com mutação nos genes PTPN11 e SOS1 apresentaram maior frequência de estenose de valva pulmonar, enquanto a miocardiopatia hipertrófica foi mais frequente nos pacientes com mutação no gene RAF1. A variabilidade fenotípica observada nos pacientes com mutação no PTPN11 não pode ser explicada pelo grau que estas mutações influenciam a atividade tirosina fosfatase da SHP-2 nem pela presença de polimorfismos no gene KRAS. Com a análise dos éxons 3, 8 e 13 do PTPN11, seguido dos éxons 6 e 10 do SOS1 e éxon 7 do RAF1 identificamos 86% dos pacientes carreadores de mutações nos genes relacionados, propondo uma forma mais eficiente de avaliação molecular na SN. Acreditamos que a variabilidade fenotípica presente nessa síndrome esteja diretamente ligada aos diferentes papéis exercidos pelas proteínas que participam da via RAS/MAPK. Entretanto, mais estudos em relação à via RAS/MAPK serão necessários para esclarecer as questões relacionadas ao crescimento e outras características fenotípicas da SN
Noonan Syndrome (NS) is characterized by distinctive facial features, short stature and congenital heart defects. The estimated prevalence is 1:1000 to 1:2500 live births, affecting both sexes equally. It is an autosomal dominant disorder with complete penetrance, but most cases are sporadic. To date, mutations in RAS/MAPK pathway genes (PTPN11, KRAS, SOS1, RAF1, MEK1, NRAS and SHOC2) have been identified in approximately 70% of patients. One of the cardinal signs of NS is proportional postnatal short stature, although the physiopathological mechanism of growth impairment remains unclear. The current knowledge about the natural history of growth associated with NS was described before the molecular diagnosis era. In this study, we performed PTPN11, SOS1, RAF1 and KRAS mutation analysis in a cohort of 152 NS patients and studied the natural linear (height) and ponderal growth [body mass index (BMI)] of NS patients with related mutations. Mutations in NS-causative genes were found in 99 patients (65%) of our cohort. The most commonly mutated gene was PTPN11 (47%), followed by SOS1 (9%), RAF1 (7%) and KRAS (3%). Sex-specific percentile curves for height and BMI were constructed using the LMS method. NS patients had birth weight and length within normal ranges, but postnatal growth impairment was observed during the first year of life, reaching a final height of -2.3 and -2.2 standard deviations from the mean for healthy Brazilian men and women, respectively. Postnatal growth impairment was greater in patients with RAF1 mutations than in patients with SOS1 and PTPN11 mutations. BMI values in NS patients were lower in comparison with the normal Brazilian population. BMI values were higher in patients with RAF1 mutations than in patients with other genotypes. Patients with mutations in the PTPN11 and SOS1 genes were more likely to have pulmonary valve stenosis, whereas hypertrophic cardiomyopathy was more common in patients with mutations in the RAF1 gene. The intensity of the constitutive tyrosine phosphatase activity of SHP-2 due to PTPN11 mutations, as well as the presence of polymorphisms in the KRAS gene, did not influence the phenotype of NS patients with a mutation in the PTPN11 gene. Analysis of exons 3, 8 and 13 of the PTPN11 gene, followed by exons 6 and 10 of the SOS1 gene and exon 7 of the RAF1 gene, identified 86% of the patients harboring mutations in the related genes, suggesting a more efficient strategy for NS molecular diagnosis. We believe that the phenotypic variability in this syndrome is directly linked to the different roles played by the proteins that participate in the RAS/MAPK pathway. However, further studies of the RAS/MAPK pathway are needed to clarify issues related to growth and other phenotypic characteristics of NS.
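For reference, the LMS method mentioned above (Cole's method, widely used to build age- and sex-specific growth references) transforms a measurement y at age t into a z-score using three smooth curves L(t), M(t) and S(t) (a standard textbook statement, not the thesis's specific fitted curves):

z = \frac{\left( y / M(t) \right)^{L(t)} - 1}{L(t)\, S(t)} \quad (L \neq 0), \qquad z = \frac{\ln\left( y / M(t) \right)}{S(t)} \quad (L = 0)

and the centile curve for a chosen z_\alpha follows as C_\alpha(t) = M(t)\left[ 1 + L(t)\, S(t)\, z_\alpha \right]^{1/L(t)}.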
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Roussey, Claire. "Étude multi-échelle des transferts couplés de liquide et d’oxygène à travers la barrique en chêne et les douelles". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPAST037.

Texto completo
Resumen
L’élevage des vins et spiritueux en barrique en chêne modifie leurs qualités organoleptiques par deux phénomènes principaux. D’une part, le bois libère des composés volatiles et non volatiles qui enrichissent la boisson, et, d’autre part, les propriétés du bois permettent une légère oxydation tout au long du vieillissement. Dans ce dernier cas, les modes de transfert d’oxygène ainsi que les facteurs limitants sont aujourd’hui peu connus alors qu’ils sont d’une grande importance dans la qualité du produit final. Cette thèse vise à mieux comprendre la dynamique des transferts d'oxygène dans le chêne, et ce, en présence du front d'imbibition du liquide dû au contact entre le contenu et la surface interne de la barrique. À cette fin, plusieurs montages expérimentaux originaux ont été développés.Dans un premier temps, à l’échelle macroscopique, 4 barriques instrumentées dans un chai ont permis d’étudier ces transferts en conditions réelles. Il se produit une perte de liquide au cours de l’élevage du vin, ce qui génère une dépression interne. Ainsi, à la diffusion d’oxygène à travers la barrique, s’ajoute un phénomène de percolation de l’air vers l’intérieur de la barrique à partir d’un certain seuil de dépression. Ce seuil de percolation peut être atteint lors de variations des conditions en humidité relative et en température du chai, ce qui est expliqué par le changement dimensionnel de la barrique. On constate des apports d’oxygène entre 10 et 100 µg/L par événement de percolation. Ces apports ne sont pas négligeables par rapport à la quantité d’oxygène que le vin reçoit durant son élevage.Dans un second temps, à l’échelle microscopique, chaque mécanisme est traité de façon découplée : diffusion d’oxygène d’une part et suivi du front d’imbibition d’autre part. La diffusion d’oxygène est étudiée pour le chêne sessile (Quercus petraea (Matt.) Liebl.) et le chêne pédonculé (Quercus robur L.) de largeurs de cerne différentes grâce à un dispositif expérimental innovant. Un modèle numérique fondé sur la méthode des volumes finis est employé pour identifier le coefficient de diffusion. On constate une bonne représentation de la diffusion via la simulation. Ensuite, le suivi du front d’imbibition est réalisé par un système d’imagerie à rayons X sur des échantillons de merrains en contact avec de l’eau et de l’éthanol. Un algorithme de corrélation d’images non supervisé est développé pour suivre l’avancée du front de liquide, et ce sur plusieurs mois.Enfin, l’étude des transferts simultanés est réalisée en combinant les deux dernières expériences. On observe alors une forte diminution de la diffusion de l’oxygène avec l’avancée du front d’imbibition dans le bois. Ces résultats nous permettent de mieux appréhender la complexité de la dynamique des transferts d'oxygène lors du vieillissement des vins et spiritueux en barrique en chêne
The aging of wines and spirits in oak barrels modifies their organoleptic qualities by two main phenomena. Firstly, the wood releases volatile and non-volatile compounds that enrich the beverage, and secondly, the wood properties allow a slight oxidation throughout the aging process. In the latter case, the modes of oxygen transfer as well as the limiting factors are little known today, although they are of great importance in the quality of the final product. This thesis aims to provide a better understanding of the dynamics of oxygen transfer in oak, in the presence of the liquid impregnation front due to the contact between the liquid and the internal surface of the barrel. To this end, several original experimental set-ups have been developed.Initially, at the barrel scale, 4 instrumented barrels were placed in a cellar to study the transfers in real conditions. The loss of liquid during aging generates an internal underpressure. Thus, in addition to the diffusion of oxygen through the wood thickness, there is a phenomenon of air percolation towards the inside of the barrel from a certain threshold of the pressure gap. This percolation threshold can be reached during variations in relative humidity and temperature conditions in the cellar, which provoke dimensional changes of the barrel. Oxygen inputs between 10 and 100 µg/L per percolation event are observed. These contributions are not negligible compared to the quantity of oxygen that the wine receives during its aging.Secondly, at the stave scale, each mechanism is treated in a decoupled way: diffusion of oxygen on the one hand and monitoring of the imbibition front on the other. Oxygen diffusion is studied for sessile oak (Quercus petraea (Matt.) Liebl.) and pedunculate oak (Quercus robur L.) with various ring widths using an innovative experimental device. A numerical model based on the finite volume method is used to identify the diffusion coefficient. A good representation of the diffusion via simulation is observed. Next, the imbibition front is monitored by an X-ray imaging system on stave samples in contact with water and ethanol. An unsupervised image correlation algorithm is developed to monitor the progress of the liquid front over several months.Finally, the study of simultaneous transfers is carried out by combining the last two experiments. A strong decrease in oxygen diffusion is then observed with the advance of the imbibition front in the stave thickness. These results allowed us to better apprehend the complexity of the dynamics of oxygen transfer during the aging of wines and spirits in oak barrels
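As a minimal sketch of the diffusion problem whose coefficient is identified by the finite-volume model described above, one can assume a one-dimensional transient diffusion equation across the stave thickness (a generic formulation; the actual geometry, boundary conditions and coupling with the imbibition front used in the thesis may differ):

\frac{\partial C}{\partial t} = \frac{\partial}{\partial x}\left( D_{\mathrm{eff}}\, \frac{\partial C}{\partial x} \right)

where C(x,t) is the oxygen concentration in the wood and D_{\mathrm{eff}} the effective diffusion coefficient adjusted until the simulated flux matches the measurements.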
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Todorova-Nova, Sharka. "Mesure de la masse des bosons w# au lep a l'aide du detecteur delphi". Strasbourg 1, 1998. http://www.theses.fr/1998STR13108.

Texto completo
Resumen
This thesis concerns the measurement of the W boson mass at LEP2 by the method of direct reconstruction of its decay products in the hadronic channel. Specific tools needed to extract the W boson mass from the data collected with the DELPHI apparatus in 1997 were developed (search for optimal variables to select WW events, development of a dedicated kinematic reconstruction method). The observed value of the W boson mass was interpreted within the framework of the Standard Model, in particular to constrain the mass of the Higgs boson. An important part is devoted to systematic effects arising from interactions between the hadronic decay products of the W bosons (colour reconnection and Bose-Einstein correlations), which can significantly affect the measurement of their mass.
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Völkel, Norbert. "Design and characterization of gas-liquid microreactors". Thesis, Toulouse, INPT, 2009. http://www.theses.fr/2009INPT020G/document.

Texto completo
Resumen
Cette étude est dédiée à l'amélioration du design des microréacteurs gaz-liquide. Le terme de microréacteur correspond à des appareils composés de canaux dont les dimensions sont de l’ordre de quelques dizaines à quelques centaines de microns. Grâce à la valeur importante du ratio surface/volume, ces appareils constituent une issue prometteuse pour contrôler les réactions rapides fortement exothermiques, souvent rencontrées en chimie fine et pharmaceutique. Dans le cas des systèmes gaz-liquide, on peut citer par exemple les réactions de fluoration, d’hydrogénation ou d’oxydation. Comparés à des appareils conventionnels, les microréacteurs permettent de supprimer le risque d’apparition de points chauds, et d’envisager le fonctionnement dans des conditions plus critiques, par exemple avec des concentrations de réactifs plus élevées. En même temps, la sélectivité peut être augmentée et les coûts opératoires diminués. Ainsi, les technologies de microréacteurs s’inscrivent bien dans les nouveaux challenges auxquels l'industrie chimique est confrontée ; on peut citer en particulier la réduction de la consommation énergétique et la gestion des stocks de produits intermédiaires. Les principaux phénomènes qui doivent être étudiés lors de la conception d’un microréacteur sont le transfert de matière et le transfert thermique. Dans les systèmes diphasiques, ces transferts sont fortement influencés par la nature des écoulements, et l'hydrodynamique joue donc un rôle central. Par conséquent, nous avons focalisé notre travail sur l’hydrodynamique de l’écoulement diphasique dans les microcanaux et sur les couplages constatés avec le transfert de masse. Dans ce contexte, nous nous sommes dans un premier temps intéressés aux régimes d’écoulement et aux paramètres contrôlant la transition entre les différents régimes. Au vu des capacités de transfert de matière et à la flexibilité offerte en terme de conditions opératoires, le régime de Taylor semble le plus prometteur pour mettre en œuvre des réactions rapides fortement exothermiques et limitées par le transfert de matière. Ce régime d'écoulement est caractérisé par des bulles allongées entourées par un film liquide et séparées les unes des autres par une poche liquide. En plus du fait que ce régime est accessible à partir d’une large gamme de débits gazeux et liquide, l'aire interfaciale développée est assez élevée, et les mouvements de recirculation du liquide induits au sein de chaque poche sont supposés améliorer le transport des molécules entre la zone interfaciale et le liquide. A partir d'une étude de l’hydrodynamique locale d’un écoulement de Taylor, il s’est avéré que la perte de charge et le transfert de matière sont contrôlés par la vitesse des bulles, et la longueur des bulles et des poches. Dans l’étape suivante, nous avons étudié l'influence des paramètres de fonctionnement sur ces caractéristiques de l’écoulement. Une première phase de notre travail expérimental a porté sur la formation des bulles et des poches et la mesure des champs de vitesse de la phase liquide dans des microcanaux de section rectangulaire. Nous avons également pris en compte le phénomène de démouillage, qui joue un rôle important au niveau de la perte de charge et du transfert de matière. Des mesures du coefficient de transfert de matière (kLa) ont été réalisées tandis que l'écoulement associé était enregistré. 
Les vitesses de bulles, longueurs de bulles et de poches, ainsi que les caractéristiques issues de l’exploitation des champs de vitesse précédemment obtenus, ont été utilisées afin de proposer un modèle modifié pour la prédiction du kLa dans des microcanaux de section rectangulaire. En mettant en évidence l'influence du design du microcanal sur l’hydrodynamique et le transfert de matière, notre travail apporte une contribution importante dans le contrôle en microréacteur des réactions rapides fortement exothermiques et limitées par le transfert de matière. De plus, ce travail a permis d'identifier certaines lacunes en termes de connaissance, ce qui devrait pouvoir constituer l'objet de futures recherches
The present project deals with the improvement of the design of gas-liquid microreactors. The term microreactor characterizes devices composed of channels that have dimensions in the several tens to several hundreds of microns. Due to their increased surface-to-volume ratios, these devices are a promising way to control fast and highly exothermic reactions, often employed in the production of fine chemicals and pharmaceutical compounds. In the case of gas-liquid systems, these are for example direct fluorination, hydrogenation or oxidation reactions. Compared to conventional equipment, microreactors offer the possibility to suppress hot spots and to operate hazardous reaction systems at increased reactant concentrations. Thereby selectivity may be increased and operating costs decreased. In this manner microreaction technology fits well with the challenges the chemical industry is continuously confronted with, which are, amongst others, the reduction of energy consumption and better feedstock utilization. The main topics which have to be considered with respect to the design of gas-liquid μ-reactors are heat and mass transfer. In two-phase systems both are strongly influenced by the nature of the flow, and thus hydrodynamics plays a central role. Consequently we focused our work on the hydrodynamics of the two-phase flow in microchannels and the description of its inter-linkage with gas-liquid mass transfer. In this context we were initially concerned with the topic of gas-liquid flow regimes and the main parameters prescribing flow pattern transitions. From a comparison of flow patterns with respect to their mass transfer capacity, as well as the flexibility offered with respect to operating conditions, the Taylor flow pattern appears to be the most promising flow characteristic for performing fast, highly exothermic and mass-transfer-limited reactions. This flow pattern is characterized by elongated bubbles surrounded by a liquid film and separated from each other by liquid slugs. In addition to the fact that this flow regime is accessible within a large range of gas and liquid flow rates, and has a relatively high specific interfacial area, Taylor flow features a recirculation motion within the liquid slugs, which is generally assumed to increase molecular transport between the gas-liquid interface and the bulk of the liquid phase. From a closer look at the local hydrodynamics of Taylor flow, including the fundamentals of bubble transport and the description of the recirculation flow within the liquid phase, it turned out that two-phase pressure drop and gas-liquid mass transfer are governed by the bubble velocity, bubble lengths and slug lengths. In the following step we dealt with the prediction of these key hydrodynamic parameters. In this connection, the first part of our experimental study was concerned with the investigation of the formation of bubbles and slugs and the characterization of the liquid-phase velocity field in microchannels of rectangular cross-section. In addition, we also addressed the phenomenon of film dewetting, which plays an important role concerning pressure drop and mass transfer in Taylor flow. In the second part we focused on the prediction of gas-liquid mass transfer in Taylor flow. Measurements of the volumetric liquid-side mass transfer coefficient (kLa-value) were conducted and the related two-phase flow was recorded.
The measured bubble velocities, bubble lengths and slug lengths, as well as the findings previously obtained from the characterization of the velocity field, were used to set up a modified model for the prediction of kLa-values in μ-channels of rectangular cross-section. By describing the interaction of channel design, hydrodynamics and mass transfer, our work thus provides an important contribution towards controlling the operation of fast, highly exothermic and mass-transfer-limited gas-liquid reactions in microchannels. In addition, it enabled us to identify gaps in knowledge, whose investigation should be the subject of further research.
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Safeea, Mohammad. "Des robots manipulateurs collaboratifs sûrs". Thesis, Paris, HESAM, 2020. http://www.theses.fr/2020HESAE036.

Texto completo
Resumen
Les manipulateurs industriels collaboratifs ouvrent une nouvelle ère dans la fabrication flexible, où les robots et les humains sont capables de coexister et de travailler ensemble. Cependant, divers défis persistent pour parvenir à une collaboration complète entre les robots et les humains en milieu industriel. Dans cette thèse, deux défis principaux - la sécurité et la collaboration - sont abordés pour atteindre cet objectif. Concernant la sécurité, la thèse présente une méthode d'évitement des collisions en temps réel qui permet au robot d'ajuster les chemins générés hors ligne pour une tâche industrielle, tout en évitant les collisions avec les humains à proximité. En outre, la thèse a présenté une nouvelle méthode pour effectuer un mouvement d'évitement de collision réactif, en utilisant la méthode de Newton du second ordre qui offre divers avantages par rapport aux méthodes traditionnelles utilisées dans la littérature. Sur la collaboration, la thèse présente un mode de guidage manuel précis comme alternative au mode de guidage actuel pour effectuer des opérations de positionnement précis de l'effecteur terminal du robot d’une manière simple et intuitive. La thèse présente également de nouvelles contributions à la formulation mathématique de la dynamique des robots, y compris un algorithme récursif pour calculer la matrice de masse des robots sériels avec un coût minimal du second ordre et un algorithme récursif pour calculer efficacement les symboles de Christoffel. Tous les algorithmes présentés sont validés soit en simulation, soit dans un scénario réel
Collaborative industrial manipulators are ushering in a new era in flexible manufacturing, in which robots and humans can coexist and work side by side. However, various challenges still stand in the way of full human-robot collaboration on the factory floor. This thesis addresses two main challenges towards that goal: safety and collaboration. On safety, the thesis presents a real-time collision avoidance method that allows the robot to adjust the offline-generated paths of an industrial task in real time to avoid collisions with nearby humans. In addition, it presents a new method for performing the reactive collision-avoidance motion based on a second-order Newton method, which offers several advantages over the traditional methods in the literature. On collaboration, the thesis presents precision hand-guiding as an alternative to the teach pendant for performing precise positioning of the robot's end-effector in a simple and intuitive manner. The thesis also makes new contributions to the mathematical formulation of robot dynamics, including a recursive algorithm for computing the mass matrix of serially linked robots at a minimal second-order cost and a recursive algorithm for computing Christoffel symbols efficiently. All the presented algorithms are validated either in simulation or in a real-world scenario.
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Arnoud, Yannick. "Mesure de la masse du méson Bs par la reconstruction de ses modes de désintégration exclusifs à l'aide du détecteur DELPHI". Paris 11, 1995. http://www.theses.fr/1995PA112177.

Texto completo
Resumen
This work sought to determine the mass of the Bs meson through the full reconstruction of its decays, using data recorded by the DELPHI experiment between 1991 and 1994 at the LEP collider located at CERN near Geneva. Theoretical predictions were used to delimit the expected mass range for the Bs meson. The result of an original calculation based on heavy quark effective theory was compared with other predictions obtained previously by other methods. The consistency of the experimental method was verified on the Bu and Bd mesons, whose masses are precisely known and which are abundantly produced. The average mass of the reconstructed Bu and Bd candidates agreed with the world average value. The number of Bs events expected in four decay channels, which have the largest branching ratios and lead to final states containing only charged particles, was estimated. The data analysis allowed the complete reconstruction of the decay chain of four Bs candidate events. For each candidate, the different background sources (reflection, incomplete reconstruction or combinatorial association) were studied. A maximum likelihood method was then used to determine the most probable value of the Bs meson mass from the selected events: 5382 MeV, with an uncertainty of 16 MeV.
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Mahnič-Kalamiza, Samo. "Effects of electrical and thermal pre-treatment on mass transport in biological tissue". Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2247/document.

Texto completo
Resumen
Le champ électrique d'une puissance suffisante peut provoquer une augmentation de conductivité et perméabilité de la membrane cellulaire. L'effet est connu comme l'électroporation, attribuée à la création de voies aqueuses dans la membrane. Quantifier le transport de la matière dans le cadre d'électroporation est un objectif important. Comprendre ces processus a des ramifications dans l’extraction du jus ou l’extraction sélective des composés de cellules végétales, l'amélioration de l'administration de médicaments, et des solutions aux défis environnementaux. Il y a un manque de modèles qui pourraient être utilisés pour modéliser le transport de la matière dans les structures complexes (tissus biologiques) par rapport à l'électroporation. Cette thèse présente une description mathématique théorique (un modèle) pour étudier le transport de la matière et le transfert de la chaleur dans tissu traité par l’électroporation. Le modèle a été développé en utilisant les lois de conservation et de transport et permet le couplage des effets de l'électroporation sur la membrane des cellules individuelles au transport de la matière ou la chaleur dans le tissu. Une solution analytique a été trouvée par une simplification, mais le modèle peut être étendu avec des dépendances fonctionnelles supplémentaires et résolu numériquement. La thèse comprend cinq articles sur l'électroporation dans l'industrie alimentaire, la création de modèle pour le problème de diffusion, la traduction du modèle au problème lié à l’expression de jus, validation du modèle, ainsi que des suggestions pour une élaboration future du modèle. Un chapitre supplémentaire est dédié au transfert de la chaleur dans tissu
An electric field of sufficient strength can cause an increase in the conductivity and permeability of the cell membrane. The effect is known as electroporation and is attributed to the creation of aqueous pathways in the membrane. Quantifying mass transport in connection with electroporation of biological tissues is an important goal: the ability to fully comprehend transport processes has ramifications for improved juice extraction and selective extraction of compounds from plant cells, improved drug delivery, and solutions to environmental challenges. While electroporation is intensively investigated, there is a lack of models that can describe mass transport in complex structures such as biological tissues in relation to electroporation. This thesis presents an attempt at constructing a theoretical mathematical description (a model) for studying mass and heat transfer in electroporated tissue. The model was developed from conservation and transport laws and couples the effects of electroporation on the membranes of individual cells with the resulting mass transport or heat transfer in tissue. An analytical solution has been found, although the model can also be extended with additional dependencies to account for the phenomenon of electroporation and solved numerically. The thesis comprises five peer-reviewed papers describing electroporation in the food industry, model creation for the problem of diffusion, translation of the model to the mathematically related case of juice expression, and model validation, as well as suggestions for possible future development, extension, and generalization. An additional chapter is dedicated to heat transfer in tissue.
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Kerdi, Banan Khaled. "Transport quantique des trous dans une monocouche de WSe2 sous champ magnétique intense". Thesis, Toulouse 3, 2021. http://www.theses.fr/2021TOU30009.

Texto completo
Resumen
Les dichalcogénures des métaux de transition sont constitués d'un empilement de monocouches atomiques liées entre elles par des liaisons faibles de type Van der Waals. Lorsqu'une monocouche de ce matériau est isolée, la symétrie d'inversion du cristal est brisée et la présence d'un couplage spin-orbite fort introduit une levée de dégénérescence des états électroniques ayant des spins différents. Le facteur de Landé effectif (g*) qui intervient dans l'énergie Zeeman est un paramètre qui caractérise, entre autres, la structure de bande du matériau. Il est exceptionnellement grand dans le système WSe_2 en raison de la présence de tungstène et des interactions électroniques. Sa détermination au travers des mesures de résistance électrique sous champ magnétique intense est l'objet de cette thèse. Dans un premier temps, des monocouches de WSe_2 sont produites par l'exfoliation mécanique du matériau massif et leur adressage électrique à l'échelle micrométrique est réalisé par des procédés technologiques de salle blanche impliquant la lithographie électronique. La magnétorésistance des échantillons produits est ensuite étudiée dans des conditions extrêmes de basse température et de champ magnétique intense. La densité de porteur de charges, des trous dans le cas cette thèse, peut être ajustée in-situ par effet de champ. Dans les monocouches de WSe_2, la quantification de l'énergie des niveaux de Landau modifiée par l'effet Zeeman est révélée par la présence d'oscillations complexes de la magnéto-résistance (oscillations de Shubnikov-de Haas). Le développement d'un modèle théorique dédié, où le désordre est pris en compte par un élargissement Gaussien des niveaux de Landau, est nécessaire afin d'interpréter quantitativement les résultats expérimentaux. Il simule l'évolution des composantes du tenseur de résistivité où les paramètres d'ajustement sont la mobilité électronique, l'énergie des bords de mobilité des niveaux de Landau ainsi que le facteur de Landé effectif. L'ajustement théorique aux résultats expérimentaux permet d'extraire l'évolution de g* des trous en fonction de leur densité dans une gamme variant de 5.10^12 à 7,5.10^12 cm^-2, qui s'inscrit dans la continuité des résultats issus de la littérature. Au-delà des approches novatrices sur le plan des conditions expérimentales et de modélisation, cette étude confirme l'importance des interactions électroniques dans la compréhension des propriétés électroniques de ce matériau
Transition metal dichalcogenides are made up of a stack of atomic monolayers bound together by weak Van der Waals interactions. When a single layer of this material is isolated, the crystal inversion symmetry is broken, which, in the presence of strong spin-orbit coupling, lifts the degeneracy of electronic states with different spins. The effective Landé factor (g*), which arises in the Zeeman energy, is a parameter that characterizes, among other things, the band structure of the material. It is exceptionally large in WSe_2 monolayers owing to the presence of heavy tungsten atoms as well as electronic interactions. Its experimental determination through electrical resistance measurements under intense magnetic fields is the objective of this thesis. First, WSe_2 monolayers are produced by mechanical exfoliation of the bulk material and their electrical addressing at the micrometric scale is achieved by clean-room processes involving electron-beam lithography. Their magnetoresistance is studied under extreme conditions of low temperature and high magnetic field. The charge-carrier density (holes, in this thesis) can be varied in situ through the field effect. In WSe_2 monolayers, the quantization of the Landau level energy modified by the Zeeman effect is revealed by complex magnetoresistance oscillations (Shubnikov-de Haas oscillations). A dedicated theoretical model, in which disorder is introduced through a Gaussian broadening of the Landau levels, is necessary for a quantitative understanding of the experimental results. The components of the resistivity tensor are simulated by this model, with the electronic mobility, the mobility edge of the Landau levels and the effective Landé factor as the main fitting parameters. Fitting the experimental results allows the extraction of g* for a hole density ranging from 5×10^12 to 7.5×10^12 cm^-2, which follows the trend reported in the literature. Beyond the innovative approaches in terms of experimental conditions and modelling, this study confirms the importance of electronic interactions in understanding the electronic properties of this material.
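For orientation, the Landau-level energies referred to in this abstract are often written, in the simplest textbook (single-particle, parabolic-band) approximation, in the form below. This is only the generic expression, given here for context; it is not the detailed model fitted in the thesis, which additionally includes Gaussian level broadening and interaction effects.

\[
  E_{n,\pm} = \hbar\,\omega_c\left(n + \tfrac{1}{2}\right) \pm \tfrac{1}{2}\, g^{*}\mu_B B,
  \qquad \omega_c = \frac{eB}{m^{*}},
\]

where n is the Landau-level index, m^{*} the effective mass, \mu_B the Bohr magneton and g^{*} the effective Landé factor extracted from the Shubnikov-de Haas fits.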
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Garcia, Fernandez Carlos. "Modeling Optical Properties of Combustion Soot emitted in the Troposphere". Thesis, Besançon, 2015. http://www.theses.fr/2015BESA2040/document.

Texto completo
Resumen
Ce travail concerne la modélisation, à l’échelle moléculaire, de l’interaction entre des nanoparticules carbonées et le rayonnement électromagnétique. Le but est d’aider à la compréhension des propriétés optiques des particules de suie afin de mieux quantifier l’influence des suies sur l’atmosphère et le climat. L’étude de l’interaction rayonnement/particules de suie fraîche a été effectuée par la méthode PDI ; il a été montré que : i) le coefficient d’absorption massique (MAC) des particules de suie dépend de la répartition des atomes dans la particule et de leurs liaisons, en particulier entre 200 et 350 nm ; ii) le MAC diffère selon que le cœur de la particule carbonée est occupé ou non par des plans graphitiques ; iii) un modèle analytique n’est pas adapté pour calculer le MAC d’une nanoparticule carbonée présentant des défauts structuraux. De plus, des méthodes de chimie quantique ont été utilisées pour caractériser le vieillissement des suies. Les résultats montrent que : i) NO, Cl, et HCl sont physisorbées sur une surface carbonée parfaite alors que sur une surface défective, ces espèces sont chimisorbées et conduisent à une modification de la surface ; ii) la présence de Cl conduit à un piégeage fort des molécules d’eau supérieur à celui obtenu lorsqu’un site oxygéné est présent sur la surface carbonée, expliquant ainsi le caractère hydrophile des suies émises lors d’incendies dans des milieux industriels. Enfin, la méthode PDI a été appliquée au calcul de la polarisabilité de HAP afin d’interpréter des spectres d’absorption des grains carbonés du milieu interstellaire, en incluant des molécules pour lesquelles aucune donnée n’était actuellement disponible
This work concerns the modeling, at the molecular level, of the interaction between carbonaceous particles of nanometric size and electromagnetic radiation. The goal is to improve our understanding of the optical properties of soot particles in order to better quantify the influence of soot on the atmosphere and on climate change. The interaction between radiation and fresh soot particles was studied using the point dipole interaction (PDI) method; it has been shown that: i) the mass absorption coefficient (MAC) of these soot nanoparticles may depend significantly on their atomistic details, especially between 200 and 350 nm; ii) the MAC depends on whether or not the core of the carbonaceous particle is occupied by graphite planes; iii) an analytical model is not suitable for calculating the MAC of carbonaceous nanoparticles with structural defects. In addition, quantum chemical methods have been used to characterize the ageing of soot. The results show that: i) NO, Cl, and HCl are physisorbed on a perfect carbonaceous surface, whereas on a defective surface these species are chemisorbed and modify the surface; ii) on a carbonaceous surface, the presence of adsorbed Cl atoms leads to strong trapping of the surrounding water molecules, which may be related to the highly hydrophilic nature of soot emitted during fires in industrial environments. Finally, the PDI method was applied to calculate the polarizability of PAHs in order to help interpret the absorption spectra of carbonaceous grains in the interstellar medium, including molecules for which no data were previously available.
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Ferrari, Arnaud. "Calorimétrie électromagnétique et recherche de neutrinos droits de Majorana dans l'expérience ATLAS". PhD thesis, Université Joseph Fourier (Grenoble), 1999. http://tel.archives-ouvertes.fr/tel-00005400.

Texto completo
Resumen
Despite the remarkable agreement between the Standard Model and the many precision measurements performed over the past few decades, particle physics still has many mysteries to resolve. One of them concerns parity violation and the neutrino masses. To solve this puzzle, the left-right symmetric model and the see-saw mechanism predict the existence of new gauge bosons ($W_R$ and $Z'$) and of right-handed Majorana neutrinos $N_l$. If these new particles have masses of a few TeV/c^2, the LHC and its ATLAS detector should allow their discovery through the processes described in this thesis: $pp \to W_R \to e N_e \to eejj$ and $pp \to Z' \to N_e N_e \to eejjjj$. These lead not only to hadronic jets but also to electrons in the final state. The energy and position of the electrons are reconstructed in a liquid-argon electromagnetic calorimeter. Before interacting in it, the electrons produced at the collision point traverse a certain amount of inert material, where they lose part of their energy. To compensate for this effect and thus keep the energy resolution at an acceptable level, a presampler is installed on the inner face of the electromagnetic calorimeter. This thesis presents its main performance results.
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Okou, Urbain. "La sécurité juridique en droit fiscal : étude comparée France-Côte d’Ivoire". Thesis, Paris 5, 2014. http://www.theses.fr/2014PA05D022.

Texto completo
Resumen
La France et la Côte d’Ivoire sont deux États qui présentent des similitudes dues principalement à leur passé colonial commun ; mais il s’agit également de deux États qui présentent de nombreuses différences tenant notamment à leur niveau de développement. Si les règles de droit fiscal substantiel au sein de chacun de ces deux États permettent d’étudier les exigences de sécurité juridique et les moyens par lesquels elles sont prises en compte, c’est en réalité la pratique processuelle qui révèle de manière plus substantielle l’effectivité de cette prise en compte. Au demeurant, la problématique de la sécurité juridique n’est bien souvent réduite qu’aux seules exigences d’accessibilité, de stabilité ou de prévisibilité de la norme. Ce qui témoigne au fond d’une approche partielle de l’exigence de sécurité juridique tendant à en limiter l’étude à la qualité formelle et à l’évolution temporelle des actes juridiques. La prise en compte d’une pluralité de systèmes juridiques différents révèle cependant que la notion de sécurité juridique ne ramène pas nécessairement à un contenu univoque. En effet, l’insécurité juridique ne s’exprimant pas toujours en des termes identiques d’un cadre juridique à un autre, la sécurité juridique pourrait se révéler polysémique, voire antinomique, d’un système juridique et fiscal à un autre. Ainsi donc, au-delà de la norme, la sécurité juridique s’applique également au cadre et au système juridique ainsi qu’à la pratique juridique et juridictionnelle. La sécurité juridique apparaît donc, en droit fiscal, comme l’expression de la fiabilité d’un cadre et d’un système juridiques et fiscaux, à travers des normes de qualité offrant une garantie d’accessibilité et d’intelligibilité ainsi que des moyens pour le contribuable de bâtir des prévisions ou donner satisfaction à celles légitimement bâties. En outre, au-delà du cadre imposé par la présente thèse, il convient d’aborder la problématique de la sécurité juridique dans une approche moins restrictive, afin de ne point en occulter les aspects historiques, philosophiques, sociologiques et juridiques essentiels à une étude d’ensemble de la question
France and Côte d'Ivoire are two countries with similarities mainly due to their common colonial past; but they are also two countries with many differences especially due to their level of development. While the rules of substantive tax law within each of these two countries make it possible to study the requirements of legal certainty and the means whereby they are taken into account, it is actually the procedural practice that reveals more substantively the effectiveness of this consideration. It should also be noted that the issue of legal certainty is often reduced to the only requirements of accessibility, stability or predictability of the standard. This actually reflects a partial approach to the requirements of legal certainty that tends to limit its study to the formal quality and the temporal evolution of legal acts. Taking into account a plurality of different legal systems, however, reveals that the concept of legal certainty does not necessarily lead to an unequivocal content. Indeed, since legal certainty is not always expressed in identical terms from one legal framework to another, legal certainty could prove to be polysemic, or even antinomic, from one legal and fiscal system to another. Thus, beyond the norm, legal certainty also applies to the legal framework and system as well as to the legal and judicial practice. Legal certainty thus, appears in tax law, as an expression of the reliability of a legal and fiscal framework and system, through quality standards, offering a guarantee of accessibility and intelligibility, as well as means for the taxpayer to build predictions or satisfy those legitimately built. Moreover, beyond the framework imposed by the present dissertation, it is important to deal with the problem of legal certainty in a less restrictive way, so as not to obscure the historical, philosophical, sociological and legal aspects essential to a holistic study of the issue
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Moreira, Ana Sofia Pereira. "Study of modifications induced by thermal and oxidative treatment in oligo and polysaccharides of coffee by mass spectrometry". Doctoral thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17074.

Texto completo
Resumen
Doutoramento em Bioquímica
Os polissacarídeos são os componentes maioritários dos grãos de café verde e torrado e da bebida de café. Os mais abundantes são as galactomananas, seguindo-se as arabinogalactanas. Durante o processo de torra, as galactomananas e arabinogalactanas sofrem modificações estruturais, as quais estão longe de estar completamente elucidadas devido à sua diversidade e à complexidade estrutural dos compostos formados. Durante o processo de torra, as galactomananas e arabinogalactanas reagem com proteínas, ácidos clorogénicos e sacarose, originando compostos castanhos de alto peso molecular contendo nitrogénio, designados de melanoidinas. As melanoidinas do café apresentam diversas atividades biológicas e efeitos benéficos para a saúde. No entanto, a sua estrutura exata e os mecanismos envolvidos na sua formação permanecem desconhecidos, bem como a relação estrutura-atividade biológica. A utilização de sistemas modelo e a análise por espectrometria de massa permitem obter uma visão global e, simultaneamente, detalhada das modificações estruturais nos polissacarídeos do café promovidas pela torra, contribuindo para a elucidação das estruturas e mecanismos de formação das melanoidinas. Com base nesta tese, oligossacarídeos estruturalmente relacionados com a cadeia principal das galactomananas, (β1→4)-Dmanotriose (Man3), e as cadeias laterais das arabinogalactanas, (α1→5)-Larabinotriose (Ara3), isoladamente ou em misturas com ácido 5-Ocafeoilquínico (5-CQA), o ácido clorogénico mais abundante nos grãos de café verde, e péptidos compostos por tirosina e leucina, usados como modelos das proteínas, foram sujeitos a tratamento térmico a seco, mimetizando o processo de torra. A oxidação induzida por radicais hidroxilo (HO•) foi também estudada, uma vez que estes radicais parecem estar envolvidos na modificação dos polissacarídeos durante a torra. A identificação das modificações estruturais induzidas por tratamento térmico e oxidativo dos compostos modelo foi feita por estratégias analíticas baseadas principalmente em espectrometria de massa, mas também em cromatografia líquida. A cromatografia de gás foi usada na análise de açúcares neutros e ligações glicosídicas. Para validar as conclusões obtidas com os compostos modelo, foram também analisadas amostras de polissacarídeos do café obtidas a partir de resíduo de café e café instantâneo. Os resultados obtidos a partir dos oligossacarídeos modelo quando submetidos a tratamento térmico (seco), assim como à oxidação induzida por HO• (em solução), indicam a ocorrência de despolimerização, o que está de acordo com estudos anteriores que reportam a despolimerização das galactomananas e arabinogalactanas do café durante a torra. Foram ainda identificados outros compostos resultantes da quebra do anel de açúcares formados durante o tratamento térmico e oxidativo da Ara3. Por outro lado, o tratamento térmico a seco dos oligossacarídeos modelo (individualmente ou quando misturados) promoveu a formação de oligossacarídeos com um maior grau de polimerização, e também polissacarídeos com novos tipos de ligações glicosídicas, evidenciando a ocorrência de polimerização através reações de transglicosilação não enzimática induzidas por tratamento térmico a seco. As reações de transglicosilação induzidas por tratamento térmico a seco podem ocorrer entre resíduos de açúcares provenientes da mesma origem, mas também de origens diferentes com formação de estruturas híbridas, contendo arabinose e manose como observado nos casos dos compostos modelo usados. 
Os resultados obtidos a partir de amostras do resíduo de café e de café instantâneo sugerem a presença de polissacarídeos híbridos nestas amostras de café processado, corroborando a ocorrência de transglicosilação durante o processo de torra. Além disso, o estudo de misturas contendo diferentes proporções de cada oligossacarídeo modelo, mimetizando regiões do grão de café com composição distinta em polissacarídeos, sujeitos a diferentes períodos de tratamento térmico, permitiu inferir que diferentes estruturas híbridas e não híbridas podem ser formadas a partir das arabinogalactanas e galactomananas, dependendo da sua distribuição nas paredes celulares do grão e das condições de torra. Estes resultados podem explicar a heterogeneidade de estruturas de melanoidinas formadas durante a torra do café. Os resultados obtidos a partir de misturas modelo contendo um oligossacarídeo (Ara3 ou Man3) e 5-CQA sujeitas a tratamento térmico a seco, assim como de amostras provenientes do resíduo de café, mostraram a formação de compostos híbridos compostos por moléculas de CQA ligadas covalentemente a um número variável de resíduos de açúcar. Além disso, os resultados obtidos a partir da mistura contendo Man3 e 5-CQA mostraram que o CQA atua como catalisador das reações de transglicosilação. Por outro lado, nas misturas modelo contendo um péptido, mesmo contendo também 5-CQA e sujeitas ao mesmo tratamento, observou-se uma diminuição na extensão das reações transglicosilação. Este resultado pode explicar a baixa extensão das reações de transglicosilação não enzimáticas durante a torra nas regiões do grão de café mais ricas em proteínas, apesar dos polissacarídeos serem os componentes maioritários dos grãos de café. A diminuição das reações de transglicosilação na presença de péptidos/proteínas pode dever-se ao facto de os resíduos de açúcares redutores reagirem preferencialmente com os grupos amina de péptidos/proteínas por reação de Maillard, diminuindo o número de resíduos de açúcares redutores disponíveis para as reações de transglicosilação. Além dos compostos já descritos, uma diversidade de outros compostos foram formados a partir dos sistemas modelo, nomeadamente derivados de desidratação formados durante o tratamento térmico a seco. Em conclusão, a tipificação das modificações estruturais promovidas pela torra nos polissacarídeos do café abre o caminho para a compreensão dos mecanismos de formação das melanoidinas e da relação estrutura-atividade destes compostos.
Polysaccharides are the major components of green and roasted coffee beans, and of the coffee brew. The most abundant ones are galactomannans, followed by arabinogalactans. During the roasting process, galactomannans and arabinogalactans undergo structural modifications that are far from being completely elucidated, due to their diversity and the complexity of the compounds formed. During roasting, galactomannans and arabinogalactans react with proteins, chlorogenic acids, and sucrose, originating high molecular weight brown compounds containing nitrogen, known as melanoidins. Several biological activities and beneficial health effects have been attributed to coffee melanoidins. However, their exact structures and the mechanisms involved in their formation remain unknown, as well as the structure-biological activity relationship. The use of model systems and mass spectrometry analysis allows an overall and, at the same time, detailed view of the structural modifications in coffee polysaccharides promoted by roasting, contributing to the elucidation of the structures and formation mechanisms of melanoidins. In this thesis, oligosaccharides structurally related to the backbone of galactomannans, (β1→4)-D-mannotriose, and the side chains of arabinogalactans, (α1→5)-L-arabinotriose, alone or in mixtures with 5-O-caffeoylquinic acid, the most abundant chlorogenic acid in green coffee beans, and dipeptides composed of tyrosine and leucine, used as models of proteins, were submitted to dry thermal treatments, mimicking the coffee roasting process. The oxidation induced by hydroxyl radicals (HO•) was also studied, since these radicals seem to be involved in the modification of the polysaccharides during roasting. The identification of the structural modifications induced by thermal and oxidative treatment of the model compounds was performed mostly by mass spectrometry-based analytical strategies, but also using liquid chromatography. Gas chromatography was used in the analysis of neutral sugars and glycosidic linkages. To validate the conclusions achieved with the model compounds, coffee polysaccharide samples obtained from spent coffee grounds and instant coffee were also analysed. The results obtained from the model oligosaccharides when submitted to thermal treatment (dry) or oxidation induced by HO• (in solution) indicate the occurrence of depolymerization, which is in line with previous studies reporting the depolymerization of coffee galactomannans and arabinogalactans during roasting. Compounds resulting from sugar ring cleavage were also formed during thermal treatment and oxidative treatment of Ara3. On the other hand, the dry thermal treatment of the model oligosaccharides (alone or when mixed) promoted the formation of oligosaccharides with a higher degree of polymerization, and also polysaccharides with new types of glycosidic linkages, evidencing the occurrence of polymerization via non-enzymatic transglycosylation reactions induced by dry thermal treatment. The transglycosylation reactions induced by dry thermal treatment can occur between sugar residues from the same origin, but also of different origins, with formation of hybrid structures containing arabinose and mannose in the case of the model compounds used. The results obtained from spent coffee grounds and instant coffee samples suggest the presence of hybrid polysaccharides in these processed coffee samples, corroborating the occurrence of transglycosylation during the roasting process.
Furthermore, the study of mixtures containing different proportions of each model oligosaccharide, mimicking coffee bean regions with distinct polysaccharide composition, subjected to different periods of thermal treatment, made it possible to infer that different hybrid and non-hybrid structures may be formed from arabinogalactans and galactomannans, depending on their distribution in the bean cell walls and on roasting conditions. These results may explain the heterogeneity of melanoidin structures formed during coffee roasting. The results obtained from model mixtures containing an oligosaccharide (Ara3 or Man3) and 5-CQA and subjected to dry thermal treatment, as well as samples derived from spent coffee grounds, showed the formation of hybrid compounds composed of CQA molecules covalently linked to a variable number of sugar residues. Moreover, the results obtained from the mixture containing Man3 and 5-CQA showed that CQA acts as a catalyst of transglycosylation reactions. On the other hand, in the model mixtures containing a peptide, a decrease in the extent of transglycosylation reactions was observed, even when 5-CQA was present and the same treatment was applied. This outcome can explain the low extent of non-enzymatic transglycosylation reactions during roasting in coffee bean regions enriched in proteins, although polysaccharides are the major components of the coffee beans. The decrease of transglycosylation reactions in the presence of peptides/proteins can be related to the preferential reactivity of reducing sugar residues with the amino groups of peptides/proteins via the Maillard reaction, decreasing the number of reducing residues available to be directly involved in the transglycosylation reactions. In addition to the compounds already described, a variety of other compounds were formed from the model systems, namely dehydrated derivatives formed during dry thermal treatment. In conclusion, the identification of the structural modifications in coffee polysaccharides promoted by roasting paves the way to understanding the mechanisms of melanoidin formation and the structure-activity relationship of these compounds.
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Loomer, Scott Allen. "A cartometric analysis of portolan charts: a search for methodology". 1987. http://catalog.hathitrust.org/api/volumes/oclc/17161406.html.

Texto completo
Resumen
Thesis (Ph. D.)--University of Wisconsin--Madison, 1987.
Typescript. Vita. Includes bibliographical references (leaves 230-235).
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Yeh, Yu-Wei y 葉昱緯. "Application of Boomerang Chart to Real-World Mass Production Wafer Maps". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/93539216431114992560.

Texto completo
Resumen
Master's thesis
National Central University (國立中央大學)
Department of Electrical Engineering (電機工程學系)
104 (academic year, ROC calendar)
In this work, we use the Boomerang Chart that we published previously to analyze classified wafer maps at a global level, determine whether the distribution of defects is uniform, and verify the approach on mass-production data. First, we choose five wafer sizes to analyze and simulate a basic curve for each of these sizes. For every wafer in every data set we compute the parameters NBD and NCD, and normalize them to obtain two new parameters, NNBD and NNCD. A Boomerang Chart is then built from these two parameters and compared with the basic curve: by observing the position of each failure type relative to the basic curve, we judge whether a wafer is uniform and how strongly the bad dice are clustered. Based on this result, we then check whether systematic errors are present in non-classified data. In this way, we validate the Boomerang Chart on mass-production data, identify the position and clustering behavior of every failure type from the chart, and confirm its practicability on a non-classified data set, with the aim of increasing yield and test efficiency and reducing production cost.
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Yang, Yi-Ting y 楊依婷. "A Study on the Concept Maps Based on Student-Problem Chart". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/35409234751138274691.

Texto completo
Resumen
Master's thesis
National Taiwan Normal University (國立臺灣師範大學)
Graduate Institute of Library and Information Studies (圖書資訊學研究所)
102 (academic year, ROC calendar)
Concept maps are graphical tools that help learners organize and integrate new knowledge based on what they have learned. This study analyzes students' assessment data by applying association rule mining, one of the data mining techniques, to construct concept maps. In previous studies, students were divided into three groups (high-score, middle-score, and low-score) based on their ranking percentile, which is appropriate only when the score data follow a normal distribution; in practice, students' scores usually do not. This study analyzes the assessment data of 30,131 9th-grade students and divides them into six groups based on the student-problem chart (S-P chart), then constructs concept maps for the mechanics subject for each of these six groups by applying association rules. The study suggests remedial learning paths for students based on the combination of their concept maps and misconceptions, and then analyzes learning performance. The results show that the concept maps constructed from students' assessment data are quite different from the one defined by the school curriculum. A major finding is that concept maps constructed on the basis of the S-P chart can be more accurate and elaborate: the structure of the concept map of group A differs considerably from that of group A' even though both belong to the high-score group, and the same holds for groups B and B'. This suggests that grouping based on the S-P chart is more accurate and elaborate than simply dividing students into high-, middle-, and low-score groups, and that it supports adaptive learning more thoroughly. The learning-performance results indicate that all participants improved significantly after learning along the suggested remedial paths, as did groups B, C, and C' individually. The findings can be offered to developers of digital learning systems for constructing adaptive remedial instruction systems.
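As a rough illustration of the association-rule step described in this abstract, the minimal Python sketch below mines simple one-to-one rules from toy incorrect-answer data and keeps the high-confidence ones as directed concept-map edges. The concept names, thresholds and responses are hypothetical, not taken from the thesis, and the study's actual procedure (S-P chart grouping, full assessment data, multi-item rules) is considerably richer.

from itertools import permutations

# Toy data: the set of concepts each student answered incorrectly.
# Names and responses are hypothetical placeholders, not thesis data.
wrong_concepts = [
    {"force", "acceleration"},
    {"force", "acceleration", "friction"},
    {"friction"},
    {"force", "acceleration"},
    {"acceleration", "friction"},
]

n = len(wrong_concepts)
min_support, min_confidence = 0.4, 0.7   # illustrative thresholds
concepts = set().union(*wrong_concepts)

# Rules A -> B: students who struggle with A also tend to struggle with B.
# Rules passing both thresholds become directed edges of the concept map.
edges = []
for a, b in permutations(concepts, 2):
    support_a = sum(a in s for s in wrong_concepts) / n
    support_ab = sum(a in s and b in s for s in wrong_concepts) / n
    confidence = support_ab / support_a if support_a else 0.0
    if support_ab >= min_support and confidence >= min_confidence:
        edges.append((a, b, round(confidence, 2)))

print(edges)   # e.g. [('force', 'acceleration', 1.0), ('acceleration', 'force', 0.75)]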
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Almeida, Filipa Cardoso de. "Loan modifications and risk of default: a Markov chains approach". Master's thesis, 2020. http://hdl.handle.net/10362/99608.

Texto completo
Resumen
Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Risk Analysis and Management
Since the housing crisis, credit risk analysis has become increasingly important, as it is a key tool for banks' credit risk management as well as being of great relevance for rigorous regulation. Credit scoring models based on logistic regression have been the most widely applied to evaluate credit risk, more specifically to analyze the probability of default of a borrower when a credit contract begins. However, these methods have limitations, such as the inability to model the entire probabilistic structure of a process, namely the life of a mortgage, since they essentially focus on binary outcomes. They are therefore weak at analyzing and characterizing borrower behavior over time and, consequently, disregard the multiple loan outcomes and the various transitions a borrower may face, which hampers the understanding of the recurrence of risk events. A discrete-time Markov chain model is applied in order to overcome these limitations. Several states and transitions are considered with the purpose of understanding a borrower's behavior and estimating default risk before and after loan modifications are made, along with the determinants of post-modification mortgage outcomes. Mortgage loans are considered because they provide a long enough timeline for a proper assessment of different loan performances. In addition to analyzing the impact of modifications, this work aims to identify and evaluate the main risk factors among borrowers that explain transitions to default states and the different loan outcomes.
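To illustrate the kind of machinery a discrete-time Markov chain model provides, the short Python sketch below builds a small transition matrix over hypothetical loan states and reads off a multi-step default probability. The states and probabilities are invented for illustration only; the thesis estimates its own state space and transition probabilities from mortgage data.

import numpy as np

# Hypothetical monthly transition matrix over illustrative loan states
# (rows sum to 1; "default" and "paid_off" are treated as absorbing).
states = ["current", "delinquent", "modified", "default", "paid_off"]
P = np.array([
    [0.95, 0.03, 0.01, 0.00, 0.01],  # current
    [0.30, 0.50, 0.10, 0.09, 0.01],  # delinquent
    [0.20, 0.15, 0.60, 0.05, 0.00],  # modified
    [0.00, 0.00, 0.00, 1.00, 0.00],  # default
    [0.00, 0.00, 0.00, 0.00, 1.00],  # paid off
])

# The k-step transition probabilities are the k-th power of P; because
# "default" is absorbing, this entry is the probability of having
# defaulted at some point within k months.
k = 24
Pk = np.linalg.matrix_power(P, k)
i, j = states.index("modified"), states.index("default")
print(f"P(default within {k} months | modified today) = {Pk[i, j]:.3f}")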
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Brito, Hoyos Diana Marcela. "Generación e incorporación de productos de valor agregado a un servidor de mapas para el manejo epidemiológico de Chagas". Master's thesis, 2015. http://hdl.handle.net/11086/2752.

Texto completo
Resumen
Tesis (Magíster en Aplicaciones Espaciales de Alerta y Respuesta Temprana a Emergencias)--Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía y Física, 2015.
Maestría conjunta con el Instituto de Altos Estudios Espaciales "Mario Gulich"-CONAE
El presente trabajo de tesis describe la cadena de obtención de datos y los procedimientos implementados para la generación de productos de valor agregado, en formato raster y vectorial, y su incorporación al sistema Web-GIS para el manejo epidemiológico de Chagas. Se describen detalladamente los aspectos epidemiológicos de la transmisión de la enfermedad de Chagas en Colombia y Argentina, así como las funcionalidades, arquitectura y tecnologías con las que fue construido el Web-GIS. Este documento describe paso a paso las fuentes de variables ambientales provenientes de datos de sensado remoto; el procedimiento, software y scripts generados para obtener los 10 productos en formato raster; y las herramientas de transferencia de datos para la incorporación y publicación de las capas en la plataforma. Adicionalmente, se describe el procedimiento implementado para generar las capas vectoriales a partir de los registros de la planilla Chagas 6, su incorporación y publicación, haciendo principal énfasis en la generación de consultas SQL.
This thesis describes the data collection chain and the procedures implemented for the generation of value-added products, in raster and vector format, and their incorporation into the Web-GIS system for the epidemiological management of Chagas disease. The epidemiological aspects of Chagas disease transmission in Colombia and Argentina are described in detail, as well as the functionality, architecture and technologies with which the Web-GIS was built. The document describes step by step the sources of environmental variables derived from remote sensing data; the procedures, software and scripts used to generate the 10 raster products; and the data transfer tools used to integrate and publish the layers on the platform. Furthermore, the procedure implemented to generate the vector layers from the Chagas 6 records, and their incorporation and publication, are described, with particular emphasis on the construction of SQL queries.
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Hong, Jenny (Hong). "Structural and Functional Relationships between Ubiquitin Conjugating Enzymes (E2s) and Ubiquitin Ligases (E3s)". Thesis, 2013. http://hdl.handle.net/1807/35799.

Texto completo
Resumen
The first part of the thesis describes a systematic function analysis that identified in vitro E2 partners for ten different HECT E3 ligase proteins. Using mass spectrometry, the linkage composition for the resulting autoubiquitylation products of a number of functional E2-HECT pairs was determined. HECT domains from different subfamilies catalyze the formation of very different types of Ub chains, largely independent of the E2 in the reaction. The second part of the thesis describes the characterization of the RAD6 interactome. Using affinity purification coupled with mass spectrometry, I identified a novel RAD6-interacting E3 ligase, KCMF1, which binds to a different surface on RAD6 than the other RAD6-associated E3 ligases. KCMF1 also recruits additional proteins to RAD6, and this new complex points to novel RAD6 functions. Interestingly, the RAD6A R11Q mutant polypeptide, found in X-linked mental retardation patients, specifically loses the interaction with KCMF1, but not with other RAD6-associated E3 ligases.
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Islam, Farah. "Promoting healthy body images in populations: does body dissatisfaction influence reactions to Québec’s charter for a healthy and diverse body image?" Thèse, 2018. http://hdl.handle.net/1866/21606.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
