Dissertations / Theses on the topic 'Maximum Tension'

Consult the top 29 dissertations / theses for your research on the topic 'Maximum Tension.'
1

Phung, Kent, and Charles Chu. "Adhesives for Load-Bearing Timber-Glass Elements : Elastic, plastic and time dependent properties." Thesis, Linnéuniversitetet, Institutionen för bygg- och energiteknik (BE), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-27386.

Full text
Abstract:
This thesis work is part of an ongoing project on load-bearing timber-glass composites within the EU programme WoodWisdom-Net. One major scope of that project is the adhesive material between the glass and timber parts. The underlying importance of the bonding material is related to the transfer of stress between the two materials - the influence of the adhesive's stiffness and ductility on the possibility of obtaining uniform stress distributions. In this study the mechanical properties of two different adhesives are investigated: an epoxy (3M DP490) and an acrylate (SikaFast 5215). The adhesives differ in stiffness, strength and viscous behaviour. In long-term load-carrying design it is important to understand a material's behaviour under constant load, since a permanent displacement within the structure can have major consequences. The main aim of this project is therefore to identify the adhesives' strength, deformation capacity and possible viscous (time-dependent) effects. Because of limitations of equipment and time, this study is restricted to three different experiments. Three types of tensile tests have been conducted: monotonic, cyclic and relaxation tests. The results of the experiments show that 3M DP490 has a higher strength and a smaller deformation capacity than SikaFast 5215; thus, SikaFast 5215 is more ductile. 3M DP490 exhibits a lower loss of strength under constant strain (at relaxation), while SikaFast 5215 also showed a strong dependence of the relaxation stress loss on the strain level.
APA, Harvard, Vancouver, ISO, and other styles
2

Кучірка, Ю. М. "Удосконалені методи підвищення точності результатів дослідження поверхневого натягу рідин та пристрій для їх реалізації." Thesis, Івано-Франківський національний технічний університет нафти і газу, 2013. http://elar.nung.edu.ua/handle/123456789/4629.

Full text
Abstract:
The dissertation is devoted to the study and development of improved methods for investigating the surface tension (ST) of liquids and surfactant solutions at the liquid-gas interface. Known methods and instruments for measuring the ST of liquids and surfactant solutions by the maximum bubble pressure are analyzed, and their shortcomings are identified. Improved methods are presented that take into account the non-sphericity of the meniscus at the moment of maximum bubble pressure and that require neither prior determination of the liquid's density nor a precision system for immersing the capillaries to a given depth in the liquid, together with a device implementing them, which allows automated investigation of the equilibrium and dynamic ST of liquids and surfactant solutions using three capillaries, from the maximum pressures in the bubbles formed at their lower tips.
The dissertation is devoted to the development of improved methods, and of a device implementing them, for the automated investigation of the surface tension (ST) of liquids and surfactant solutions at the liquid-gas interface, using three capillaries fixed relative to one another, with different internal radii and different distances between their lower tips. The first chapter assesses the experimental conditions for investigating the ST of single-component liquids, industrial surfactant solutions and human biological fluids, analyzes the known methods and instruments for measuring the ST of liquids and surfactant solutions by the maximum bubble pressure, identifies their shortcomings, and formulates the tasks and directions for their improvement. The second chapter presents improved methods for determining the equilibrium and dynamic ST of liquids and surfactant solutions that account for the deviation of the meniscus surface from a hemispherical shape at the moment of maximum bubble pressure and require neither prior determination of the liquid's density nor a precision system for immersing the capillaries to a given depth, as well as procedures that increase the accuracy of determining the equilibrium and dynamic ST. The third chapter describes the requirements for a device implementing the developed methods, along with the structural, functional, electrical and pneumatic schematics, the design and the software of a three-capillary device for the automated investigation of the equilibrium and dynamic ST of liquids and surfactant solutions. The fourth chapter is devoted to the metrological analysis of the errors of the proposed methods and of the three-capillary device; it is shown that the limiting error of this device in determining the equilibrium and dynamic ST of liquids and surfactant solutions is 0.45-0.6 mN/m for ST values in the range from 10 to 100 mN/m.
The fifth chapter develops a procedure for laboratory testing of the three-capillary device and presents the results of laboratory and field tests, together with the conclusions drawn from their analysis.
The dissertation is dedicated to the research and development of methods for measuring surface tension at the liquid-gas interface, and of a device implementing the developed improved methods using the maximum bubble pressure. The known methods for measuring the surface tension of liquids and surfactant solutions by the maximum bubble pressure are analyzed, and their merits and shortcomings are identified. Improved methods are presented that take into account deviations of the meniscus surface from a hemispherical shape at the moment of maximum bubble pressure and require neither prior density measurement of the liquid nor a precision system for immersing the capillaries to a given depth, together with a device implementing them that allows the automated investigation of the surface tension of liquids by the maximum bubble pressure.
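The baseline relation that the dissertation's improved methods refine can be sketched as follows. This is the classical maximum-bubble-pressure formula under the simple hemispherical-meniscus assumption, not the corrected method developed in the thesis, and the numerical values are hypothetical:

```python
# Classical maximum-bubble-pressure estimate of surface tension.
# At maximum pressure the meniscus is assumed hemispherical with
# radius equal to the capillary radius r, so Young-Laplace gives
#   sigma = (P_max - rho * g * depth) * r / 2
# The dissertation's improved methods correct for the meniscus being
# non-hemispherical and avoid the density/depth term entirely.

def surface_tension(p_max, r, rho=0.0, depth=0.0, g=9.81):
    """Surface tension (N/m) from the maximum bubble pressure p_max (Pa)
    in a capillary of inner radius r (m), optionally correcting for the
    hydrostatic head rho*g*depth at the capillary tip."""
    dp = p_max - rho * g * depth
    return dp * r / 2.0

# Hypothetical reading: 1440 Pa excess pressure, 0.1 mm capillary radius
print(surface_tension(1440.0, 1e-4))  # 0.072 N/m, typical of water
```

The hydrostatic correction term is exactly what the thesis's three-capillary scheme is designed to eliminate: with capillaries at known relative positions, the density-dependent part cancels between readings.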
3

Ndoye, Mamadou Mustapha. "Contribution à l'étude du transistor bipolaire hyperfréquence sur puce de silicium." Bordeaux 1, 1997. http://www.theses.fr/1997BOR10682.

Full text
Abstract:
This work is a contribution to the study of the microwave bipolar transistor on a silicon chip. It first presents two original methods for reducing the extrinsic base-collector capacitance, increasing the base-collector breakdown voltage, increasing the Early voltage VA, increasing the maximum power gain Gpmax and increasing the transition frequency FT. It then presents a new transistor, a hybrid structure between the vertical NPN and the lateral NPN, named the CLEV bipolar (lateral collector, vertical emitter). This work can be generalized to other microwave transistor technologies such as III-V substrate transistors or heterojunction transistors.
This work is a contribution to the study of the high-speed bipolar transistor on a silicon chip. First, it presents two original methods to reduce the extrinsic base-collector capacitance, increase the base-collector breakdown voltage, increase the Early voltage VA, increase the maximum power gain Gpmax, and increase the transition frequency FT. Then it presents a new transistor, a hybrid structure between the vertical NPN and the lateral NPN, named the CLEV bipolar (lateral collector, vertical emitter). This study can be generalized to other high-speed transistor technologies such as III-V substrate transistors or heterojunction transistors.
4

Ortiz, Thomas. "Two dimensional Maximal Supergravity, Consistent Truncations and Holography." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2014. http://tel.archives-ouvertes.fr/tel-01070735.

Full text
Abstract:
A complete, nontrivial supersymmetric deformation of maximal supergravity in two dimensions is achieved by gauging an SO(9) group. The resulting theory describes the reduction of type IIA supergravity on an AdS_2 x S^8 background and is of central importance in the Domain-Wall / Quantum Field Theory correspondence for the D0-brane case. To prepare the construction of the SO(9) gauged maximal supergravity, we focus on eleven-dimensional supergravity and on maximal supergravity in three dimensions, since they give rise to important off-shell inequivalent formulations of the ungauged theory in two dimensions. The embedding tensor formalism is presented, allowing a general description of the gaugings consistent with supersymmetry. The SO(9) supergravity is explicitly constructed and applications are considered. In particular, an embedding of the bosonic sector of the two-dimensional theory into type IIA supergravity is obtained; hence the Cartan truncation of the SO(9) supergravity is proved to be consistent. This motivates holographic applications: correlation functions for operators in dual matrix models are derived from the study of gravity-side excitations around half-BPS backgrounds. These results are fully discussed and outlooks are presented.
5

Pereira, Claudia Cristina. "Um estudo do metodo da continuação aplicado a analise do maximo carregamento dos sistemas de potencia." reponame:Repositório Institucional da UFSC, 1998. http://repositorio.ufsc.br/xmlui/handle/123456789/77439.

Full text
Abstract:
Master's thesis - Universidade Federal de Santa Catarina, Centro Tecnológico
This work addresses the problem of determining the maximum loading of a power system from the standpoint of voltage stability analysis. Operating the electric network at the limits of equipment capacity has required the development of suitable methods both for solving the power flow equations at the point of maximum demand and for detecting proximity to the critical point with respect to voltage stability. The application of the Continuation Method to the problem of determining the maximum demand of the electric network is presented. This method can be formulated in polar or in rectangular coordinates. A methodology based on rectangular-coordinate modelling is shown, in two versions that differ in the correction stage of the Continuation Method. Results of applying the methodology to systems of various sizes illustrate the potential of the Continuation-Method-based approach for detecting and identifying areas prone to voltage collapse in planning and operation studies.
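The idea of marching the load level up to the maximum loading point can be illustrated on a toy two-bus system where the nose of the PV curve is known in closed form. This is a generic load-increment/step-cut sketch with hypothetical per-unit data, not the rectangular-coordinate formulation developed in the dissertation:

```python
import math

# Toy 2-bus system: a slack bus at voltage E feeds a unity-power-factor
# load P through a lossless reactance X. The load-flow solution (upper
# root of the PV curve) is V^2 = (E^2 + sqrt(E^4 - 4 X^2 P^2)) / 2,
# which exists only up to the maximum loading point P_max = E^2 / (2 X).

E, X = 1.0, 0.5  # hypothetical per-unit data

def load_flow(P):
    disc = E**4 - 4.0 * X**2 * P**2
    if disc < 0.0:
        return None  # no solution: the load exceeds the nose point
    return math.sqrt((E**2 + math.sqrt(disc)) / 2.0)

def trace_to_nose(step=0.2, tol=1e-6):
    """March the load upward, halving the step whenever the load flow
    fails, until the maximum loading point is bracketed within tol."""
    P, curve = 0.0, []
    while step > tol:
        V = load_flow(P + step)
        if V is None:
            step /= 2.0      # step cut: we overshot the nose point
        else:
            P += step
            curve.append((P, V))
    return P, curve

P_max, curve = trace_to_nose()
print(P_max)  # approaches 1.0 = E^2/(2X) for these data
```

A full continuation method additionally reparameterizes the equations near the nose so the corrector stays well-conditioned; the sketch above only captures the increment-and-cut mechanics of tracing the upper branch of the PV curve.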
6

Llacua, Zarate Luis Alberto. "Estimação rapida do ponto de maximo carregamento para a analise de estabilidade de tensão de sistemas eletricos de potencia." [s.n.], 2004. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260236.

Full text
Abstract:
Advisor: Carlos Alberto de Castro Junior
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
The goal of this research is to propose methodologies for estimating the maximum loading point (MLP) of electric power systems for voltage stability analysis. The main characteristic of the proposed methods is extremely fast computation while preserving the precision of the results. Appropriate analysis tools are combined and integrated to produce the expected results; among them, one can cite sensitivity analysis, the analysis of ill-conditioned and/or unsolvable systems of equations, and optimization techniques. The methods, essentially static in approach, are based on performing a certain number of load flow calculations for different load levels in the parameter space. The path toward the MLP is thus based on simple processes of load increments and load curtailments, detailed in the respective chapters. Two methods are presented, with quite different characteristics regarding the load-increment process toward the MLP; they are similar only in the feasibility-restoration process. It is expected that the methodologies resulting from this research can be used routinely in voltage stability analysis of electric networks during operation planning and, potentially, in environments with even more severe computational constraints, such as real-time operation and analysis.
Abstract: The goal of this research work is to propose methodologies for estimating the maximum loading point (MLP) of power systems for voltage stability analysis. The main feature of the proposed methods is the fast computation of the MLP while maintaining the precision of the results. Appropriate analysis tools are integrated to provide the expected results; among them, one can mention sensitivity analysis, ill-conditioned and/or infeasible system analysis, and optimization techniques. The proposed methods, which take a static approach, are based on solving a certain number of load flow calculations for different load levels. Therefore, the path toward the MLP is based on simple load increments or curtailments. It must be emphasized that two methods are presented, with different characteristics with respect to the load increment process; they are similar with respect to the feasibility restoration process. It is expected that the proposed methods can be routinely used in power system voltage stability analysis in operation planning and, potentially, in environments with severe computational effort constraints, such as real-time operation.
Doctorate
Electric Energy
Doctor of Electrical Engineering
7

Souza, Tiago de Jesus. "Previsão da curva tensão-recalque em solos tropicais arenosos a partir de ensaios de cone sísmico." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/18/18132/tde-25042012-163755/.

Full text
Abstract:
This work presents the application of a method for predicting the stress-settlement curve of shallow foundations resting on tropical sandy soils from the results of seismic cone penetration tests (SCPT). The sites studied were the experimental foundation test sites of EESC/USP - São Carlos and UNESP-Bauru, where results of plate load tests performed at different depths are available, as well as SCPT results. After adjusting the parameters f and g, the predictions showed good results, since the estimated stress-settlement curves were close to those obtained from plate load tests for depths greater than 1.5 metres. This verifies the applicability of the method, once adjusted, to reproduce the stress-settlement curve in this type of soil using a more rational approach, with less dependence on empirical correlations. This research also highlights a variability in SCPT and plate load test results that is related to changes in soil suction. For the São Carlos experimental site it was further possible to assess the variability of the predictions, since a larger number of field test and plate load test results are available.
This dissertation presents the use of a method for predicting the stress-settlement curve of shallow foundations on tropical sandy soils based on seismic cone (SCPT) test results. The studied sites were the experimental research sites of USP - São Carlos and UNESP - Bauru, Brazil, where results from plate load tests conducted at various depths are available, as well as SCPT test results. The stress-settlement curve predictions show good results after adjusting the parameters f and g, as the estimated curves were close to those obtained from plate load tests at depths greater than 1.5 metres. The applicability of the method, after its adjustment, to reproduce the stress-settlement curve for this type of soil was thus verified, employing a more rational approach with less reliance on empirical correlations. This research highlights a variability in SCPT and plate load test results that is related to changes in soil suction. It was also possible to assess the variability of the predictions for the USP São Carlos site, since a greater number of in situ and plate load tests are available there.
8

Romanello, Michael T. "Load Response Analysis of the WAY-30 Test Pavements: US Route 30, Wayne County, Ohio." Ohio : Ohio University, 2007. http://www.ohiolink.edu/etd/view.cgi?ohiou1196092689.

Full text
9

Zeferino, Cristiane Lionço. "Estudo do máximo carregamento em sistemas de energia elétrica via método da barreira modificada." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-08032007-113530/.

Full text
Abstract:
In this dissertation the modified barrier Lagrangian function (MBLF) method, a variant of the interior point method, is applied to determine the maximum loading of electric power systems. The formulation of the problem has the power balance equations of the system, in parameterized form, as equality constraints, and the bus voltage limits and the reactive power generation limits at the buses with reactive control as inequality constraints. The results obtained with the static optimization technique used in this study are compared with those obtained with the primal-dual logarithmic barrier method. The performance of the proposed methodology was tested on the IEEE 14-, 57- and 118-bus systems. The tests demonstrated the robustness and efficiency of the proposed algorithm.
In this work the modified barrier Lagrangian function (MBLF) method, a variant of the interior point method, is applied to determine the maximum loading of electric power systems. The formulation of the problem has the power balance equations of the system, in parameterized form, as equality constraints, and the bus voltage limits and the reactive generation limits at the buses with reactive control as inequality constraints. The results found with the static optimization technique used in this study are compared with the results obtained with the primal-dual logarithmic barrier method. The performance of the method is illustrated on the IEEE 14-, 57- and 118-bus systems. The tests demonstrated the robustness and efficiency of the proposed algorithm.
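The modified barrier idea behind the MBLF method can be shown on a one-dimensional problem. This is a generic Polyak-style modified barrier illustration with hypothetical data, not the dissertation's power-system formulation: unlike the classical log barrier, the term -mu*lam*ln(1 + g/mu) stays finite on the constraint boundary, and the multiplier update recovers the exact Lagrange multiplier:

```python
import math

# Minimize f(x) = x^2 subject to g(x) = x - 1 >= 0.
# The modified barrier subproblem is
#   min  x^2 - mu * lam * ln(1 + (x - 1)/mu)
# Stationarity gives the quadratic 2x^2 + 2x(mu - 1) - lam*mu = 0,
# solved in closed form below; the multiplier update is
#   lam <- lam / (1 + (x - 1)/mu)
# The exact optimum is x* = 1 with multiplier lam* = 2.

mu, lam, x = 0.5, 1.0, 2.0
for _ in range(60):
    x = ((1.0 - mu) + math.sqrt((mu - 1.0) ** 2 + 2.0 * lam * mu)) / 2.0
    lam = lam / (1.0 + (x - 1.0) / mu)  # multiplier update

print(x, lam)  # converges to 1.0 and 2.0
```

Note that, in contrast to the logarithmic barrier, mu does not need to be driven to zero: the iteration converges with mu fixed, which is the practical advantage the modified barrier family offers near active constraints.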
10

Attia, Sid Ahmed. "Sur la commande des systèmes non linéaires à dynamique hybride." Phd thesis, Grenoble INPG, 2005. http://tel.archives-ouvertes.fr/tel-00082495.

Full text
Abstract:
This dissertation concerns the development of reduced-complexity controllers for hybrid switched systems. A diverse range of applications from the automotive industry, fluid dynamics and power systems is treated, and some general open-loop optimal and predictive control schemes are proposed. The main motivation behind each method is the reduction of the combinatorics. Two main contributions can be distinguished in this thesis. The first concerns the optimal control of switched nonlinear systems, for which an algorithm based on strong variations is proposed and some convergence results are proven. The complexity of the scheme is linear in the number of locations; this, together with its simplicity, makes it attractive for large-scale systems. An example from the automotive industry is treated to further illustrate the tractability of the scheme. The second contribution concerns the development of a hierarchical approach for switched nonlinear systems: at the lower level, feedback controllers are associated with each location, and at the higher level a predictive approach with a reduced-order parametrization is in force. Based on this methodology, two schemes are developed and successfully tested in fluid stabilisation by actuator switching and voltage stabilisation in power systems, respectively.
11

Bonini, Neto Alfredo [UNESP]. "Técnicas de parametrização geométrica para o método da continuação." Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/100307.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
This work analyzes the use of global parameterization techniques in the continuation power flow. These techniques are considered inadequate for obtaining the loading margin of systems whose voltage stability problems have strongly local characteristics, because at the maximum loading point the singularity of the Jacobian matrix of the global parameterization method coincides with that of the power flow Jacobian. In such cases, local parameterization is considered the only way to eliminate the singularity. This work shows, however, that the singularity can also be efficiently eliminated, not only for these systems but for any other, by a new (global) parameterization technique. The technique uses the equation of a line passing through a point in the plane determined by the loading factor and the sum of the nodal voltage magnitudes, or angles, of all system buses, which are the variables commonly used by global parameterization techniques. The results obtained for several systems confirm the increased efficiency of the proposed methods and show their viability for application in operation planning within current energy management systems.
This work presents an analysis of the use of global parameterization techniques in the continuation power flow. Those techniques are considered inadequate for computing the loading margin of power systems characterized by strongly local static voltage instability. In such systems, at the maximum loading point, the singularity of the Jacobian matrices of global parameterization techniques coincides with that of the power flow Jacobian matrix. In those cases, local parameterization is considered the only way to overcome the singularity. However, this work shows that this kind of singularity can be efficiently eliminated, not only for these systems but for all others, by a new (global) parameterization technique. The technique adds a line equation that passes through a point in the plane determined by the loading factor and the sum of all bus voltage magnitudes, or angles, which are the variables commonly used by global parameterization techniques. The results obtained for several systems confirm the increased efficiency of the proposed methods and show their viability for application in operation planning in modern energy management systems.
12

Son, Hyeon-Dong [Verfasser], Maxim [Gutachter] Polyakov, and Hyun-Chul [Gutachter] Kim. "Parton quasi-distributions and energy-momentum tensor form factors for large Nc nucleons / Hyeon-Dong Son ; Gutachter: Maxim Polyakov, Hyun-Chul Kim ; Fakultät für Physik und Astronomie." Bochum : Ruhr-Universität Bochum, 2021. http://d-nb.info/1240479239/34.

Full text
13

Zeferino, Cristiane Lionço. "Avaliação e controle de margem de carregamento em sistemas elétricos de potência." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-05052011-091651/.

Full text
Abstract:
This work proposes the determination of the maximum loading point (MLP) of electric power systems by means of the modified barrier Lagrangian function (MBLF) method, a variant of the interior point (IP) method. The MBLF method is also used to determine which bus of each system has the greatest sensitivity with respect to the loading factor, that is, which bus should be the first to undergo load shedding in order to increase the system's loading margin and thus avoid voltage collapse. Sensitivity analysis (SA) is used to confirm the results obtained with the MBLF method. The formulation of the problem has the power balance equations of the electric system as equality constraints, and the bus voltage limits and the reactive power generation limits at the controlled buses as inequality constraints. Case studies were carried out on a 3-bus system and on the IEEE 14-, 57-, 118- and 300-bus systems; these studies demonstrated the robustness and efficiency of the proposed algorithms.
This work proposes the determination of the Maximum Loading Point (MLP) in electric power systems via the Lagrangian Modified Barrier Function (LMBF) method, a variant of the Interior Point (IP) method. The LMBF method is also used to determine which bus of each system has the highest sensitivity to the loading factor, i.e., which bus should be the first to undergo load shedding in order to increase the system's loading margin and thus prevent voltage collapse. To validate this approach, the Sensitivity Analysis (SA) technique was used to confirm the results obtained by the LMBF method. The formulation of the problem takes the power balance equations of the electrical system as equality constraints, and the bus voltage magnitude limits, together with the reactive power generation limits at the controlled buses, as inequality constraints. Case studies were conducted on a 3-bus system and on the IEEE 14-, 57-, 118- and 300-bus systems, demonstrating the robustness and efficiency of the proposed algorithms.
14

Rojas, Quintero Juan Antonio. "Contribution à la manipulation dextre dynamique pour les aspects conceptuels et de commande en ligne optimale." Thesis, Poitiers, 2013. http://www.theses.fr/2013POIT2284/document.

Full text
Abstract:
We are interested in the design of anthropomorphic mechanical hands intended to manipulate objects in a human environment. Through motion analysis of human subjects performing a reference manipulation task, we propose a method for evaluating the capacity of robotic hands to manipulate objects. We show how the angular coupling ratios between joints and the joint limits affect the ability to manipulate objects dynamically, and we also show the impact of the wrist on fast manipulation tasks. We propose a strategy for computing the fingertip manipulation forces and sizing the motors of such a gripper. The proposed method depends on the target task and adapts to any type of motion that can be captured and analyzed. In a second part, devoted to robot manipulators, we develop optimal control algorithms. Taking the kinetic energy of the robot as a metric, the dynamic model is formulated in tensor form within the framework of Riemannian geometry. The time discretization is based on Hermite finite elements. We integrate the Lagrange equations of motion by a perturbation method, and simulation examples illustrate the superconvergence of the Hermite technique. The control criterion is chosen to be independent of the configuration parameters, and the control equations associated with the equations of motion turn out to be covariant. The proposed optimal control method consists in minimizing the objective function corresponding to the selected invariant criterion.
We focus on the design of anthropomorphic mechanical hands destined to manipulate objects in a human environment. Via the motion analysis of a reference manipulation task performed by human subjects, we propose a method to evaluate the manipulation capacities of a robotic hand. We demonstrate how the angular coupling between finger joints and the joint limits affect the hand's potential to manipulate objects, and we also show the influence of wrist motions on the manipulation task. We propose a strategy to calculate the fingertip manipulation forces and to size the finger motors. In a second part, devoted to articulated robots, we elaborate optimal control algorithms. Regarding the kinetic energy of the robot as a metric, the dynamic model is formulated tensorially in the framework of Riemannian geometry. The time discretization is based on Hermite finite elements. A time-integration algorithm is designed by implementing a perturbation method on the Lagrange equations of motion, and simulation examples illustrate the superconvergence of the Hermite technique. The control criterion is selected to be coordinate-free, and the control equations associated with the motion equations reveal themselves to be covariant. The suggested control method consists in minimizing the objective function corresponding to the selected invariant criterion.
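The Riemannian formulation the abstract refers to can be made concrete with the standard textbook equations (a general statement of the framework, not the thesis's specific derivation). Treating the kinetic energy as a metric g_ij(q) on configuration space, the Lagrange equations take a covariant form:

```latex
% Kinetic energy as a Riemannian metric on the configuration manifold:
\[
T \;=\; \tfrac{1}{2}\, g_{ij}(q)\, \dot q^{\,i} \dot q^{\,j},
\qquad
g_{ij}(q)\, \ddot q^{\,j} \;+\; \Gamma_{ijk}\, \dot q^{\,j} \dot q^{\,k} \;=\; u_i,
\]
% with Christoffel symbols of the first kind
\[
\Gamma_{ijk} \;=\; \tfrac{1}{2}
\left(
  \frac{\partial g_{ij}}{\partial q^{\,k}}
  + \frac{\partial g_{ik}}{\partial q^{\,j}}
  - \frac{\partial g_{jk}}{\partial q^{\,i}}
\right).
\]
```

In this form the left-hand side is a covariant expression, which is why a control criterion chosen independently of the configuration parameters yields covariant control equations, as the abstract states.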
15

Silveira, Cristiano da Silva. "Estudo de máximo carregamento em sistemas de energia elétrica." Universidade de São Paulo, 2003. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-13052003-133219/.

Full text
Abstract:
This work presents a study of the continuation method applied to the power flow problem. Voltage stability definitions and concepts are described so as to make explicit the differences and similarities with respect to the study of maximum loading in electric power systems. A summary of bifurcation theory addresses its importance in voltage collapse studies. A step-size control technique for the continuation method is proposed, with the objective of determining the maximum loading point (MLP) without the user having to specify a value for the initial step size. The results of studies on IEEE test systems (14, 30, 57 and 118 buses) show the application of the conventional continuation method and of its combination with the step-size control technique.
This work presents a study of the continuation method applied to the power flow problem. Voltage stability definitions and concepts are described so as to highlight the differences and similarities among the several methods used to determine the maximum loading of electric power systems. A short description of bifurcation theory is also presented in order to show its importance to voltage collapse studies. A technique based on automatically controlling the step size is proposed as an improvement of the continuation method; its objective is to determine the maximum loading point without the traditional need of asking the user for an initial step size. The results compare the performance of the conventional and the new method on IEEE test systems (14, 30, 57 and 118 buses).
APA, Harvard, Vancouver, ISO, and other styles
16

Kutty, Sangeetha. "Enriching XML documents clustering by using concise structure and content." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/48326/1/Sangeetha_Kutty_Thesis.pdf.

Full text
Abstract:
With the growing number of XML documents on the Web it becomes essential to effectively organise these XML documents in order to retrieve useful information from them. A possible solution is to apply clustering on the XML documents to discover knowledge that promotes effective data management, information retrieval and query processing. However, many issues arise in discovering knowledge from these types of semi-structured documents due to their heterogeneity and structural irregularity. Most of the existing research on clustering techniques focuses on only one feature of the XML documents, either their structure or their content, due to scalability and complexity problems. The knowledge gained in the form of clusters based on the structure or the content alone is not suitable for real-life datasets. It therefore becomes essential to include both the structure and content of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both these kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. The overall objective of this thesis is to address these issues by: (1) proposing methods to utilise frequent pattern mining techniques to reduce the dimension; (2) developing models to effectively combine the structure and content of XML documents; and (3) utilising the proposed models in clustering. This research first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. A clustering framework with two types of models, implicit and explicit, is developed. The implicit model uses a Vector Space Model (VSM) to combine the structure and the content information.
The explicit model uses a higher-order model, namely a 3rd-order Tensor Space Model (TSM), to explicitly combine the structure and the content information. This thesis also proposes a novel incremental technique to decompose large-sized tensor models and utilise the decomposed solution for clustering the XML documents. The proposed framework and its components were extensively evaluated on several real-life datasets exhibiting extreme characteristics to understand the usefulness of the proposed framework in real-life situations. Additionally, this research evaluates the outcome of the clustering process on the collection selection problem in information retrieval on the Wikipedia dataset. The experimental results demonstrate that the proposed frequent pattern mining and clustering methods outperform the related state-of-the-art approaches. In particular, the proposed framework of utilising frequent structures for constraining the content shows an improvement in accuracy over content-only and structure-only clustering results. The scalability evaluation experiments conducted on large-scale datasets clearly show the strengths of the proposed methods over state-of-the-art methods. In particular, this thesis work contributes to effectively combining the structure and the content of XML documents for clustering, in order to improve the accuracy of the clustering solution. In addition, it also contributes by addressing the research gaps in frequent pattern mining to generate efficient and concise frequent subtrees with various node relationships that could be used in clustering.
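The implicit (VSM) combination of structure and content can be sketched as follows. The mini-corpus, the feature names and the 50/50 block weighting are all invented for illustration; in the thesis the structure features are frequent subtrees and the content is constrained to those subtrees.

```python
import math

# Hypothetical mini-corpus: each XML document is summarised by the
# subtrees it contains (structure) and the terms occurring in them
# (content). All names below are invented for illustration.
docs = {
    "d1": {"subtrees": {"movie/title", "movie/actor"}, "terms": {"drama", "oscar"}},
    "d2": {"subtrees": {"movie/title", "movie/actor"}, "terms": {"drama", "cannes"}},
    "d3": {"subtrees": {"book/title", "book/author"},  "terms": {"drama", "novel"}},
}

def combined_vector(doc, subtree_axis, term_axis, w_struct=0.5):
    # Implicit model: concatenate a structure block and a content block,
    # each L2-normalised, weighted w_struct vs (1 - w_struct).
    s = [1.0 if f in doc["subtrees"] else 0.0 for f in subtree_axis]
    t = [1.0 if f in doc["terms"] else 0.0 for f in term_axis]
    ns = math.sqrt(sum(x * x for x in s)) or 1.0
    nt = math.sqrt(sum(x * x for x in t)) or 1.0
    return ([w_struct * x / ns for x in s]
            + [(1 - w_struct) * x / nt for x in t])

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = (math.sqrt(sum(a * a for a in u))
           * math.sqrt(sum(b * b for b in v)))
    return num / den

subtree_axis = sorted({f for d in docs.values() for f in d["subtrees"]})
term_axis = sorted({f for d in docs.values() for f in d["terms"]})
vecs = {k: combined_vector(d, subtree_axis, term_axis) for k, d in docs.items()}

sim_12 = cosine(vecs["d1"], vecs["d2"])  # same structure, one shared term
sim_13 = cosine(vecs["d1"], vecs["d3"])  # different structure, one shared term
```

With both structure and content in the vector, `d1` ends up closer to `d2` (same subtrees) than to `d3`, which a content-only representation would not distinguish as clearly.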
APA, Harvard, Vancouver, ISO, and other styles
17

Jedlička, František. "Rozpoznání květin v obraze." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-376895.

Full text
Abstract:
This thesis focuses on flower recognition in images and class classification. The theoretical part covers deep convolutional neural networks. The practical part describes the flower database created for this work, which contains in total 13,000 plant pictures of 26 species: cornflower, violet, gerbera, chamomile, cornflower, liverwort, hawkweed, clover, carnation, lily of the valley, marguerite daisy, pansy, poppy, marigold, daffodil, dandelion, teasel, forget-me-not, rose, anemone, daisy, sunflower, snowdrop, ragwort, tulip and celandine. The thesis then describes the Inception v3 neural network model used for class classification. The resulting accuracy achieved is 92%.
APA, Harvard, Vancouver, ISO, and other styles
18

Ye, Jian. "Régularité de l'application du transport optimal sur des variétés riemanniennes compactes." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30354.

Full text
Abstract:
Dans cette thèse on s'intéresse à la régularité de l'application du transport optimal sur des variétés riemanniennes compactes. Dans le premier chapitre, on rappelle certaines définitions sur une variété riemannienne. Dans le deuxième chapitre, on décrit la variation de la courbure sur des géodésiques. Dans le troisième chapitre, on étudie le tenseur de MTW sur une variété riemannienne compacte. On montre qu'une condition de MTW améliorée est satisfaite sur une variété presque sphérique. La preuve consiste en une analyse minutieuse, combinée avec des arguments de perturbation sur des sphères. Dans le quatrième chapitre, on étudie le comportement de l'inverse de la matrice hessienne de la distance au carré. Dans le cinquième chapitre, on prouve la régularité du transport optimal sur deux classes de variétés riemanniennes compactes : des variétés presque sphériques et des produits riemanniens de variétés presque sphériques. Dans le dernier chapitre, on décrit quelques perspectives sur le transport optimal dans la littérature.
In this thesis, we are concerned with the regularity of optimal transport maps on compact Riemannian manifolds. In the first chapter, we give some definitions and recall some facts from Riemannian geometry. In the second chapter, we examine the variation of the curvature along geodesics. In the third chapter, we study the MTW tensor on a compact Riemannian manifold. We show that an improved MTW condition is satisfied on nearly spherical manifolds. The proof goes by a careful analysis combined with perturbative arguments on the spheres. In the fourth chapter, we study the inverse of the Hessian matrix of the squared distance. In the fifth chapter, we prove the smoothness of the optimal transport maps on two classes of compact Riemannian manifolds: nearly spherical manifolds and Riemannian products of nearly spherical manifolds. In the last chapter, we provide some perspectives on optimal transportation in the literature.
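For readers unfamiliar with the object studied in the third chapter, the MTW (Ma-Trudinger-Wang) tensor is commonly written as below. This is our transcription of the standard convention with cost c = d²/2, not a formula taken from the thesis itself.

```latex
% MTW tensor at (x, y), with y = \exp_x(p) and cost c(x, y) = d(x, y)^2 / 2:
\mathfrak{S}_{(x,y)}(\xi,\eta) \;=\;
  -\frac{3}{2}\,
  \frac{\partial^2}{\partial s^2}\Big|_{s=0}
  \frac{\partial^2}{\partial t^2}\Big|_{t=0}
  \, c\bigl(\exp_x(t\xi),\, \exp_x(p + s\eta)\bigr).
% The MTW condition requires \mathfrak{S}_{(x,y)}(\xi,\eta) \ge 0 whenever
% \langle \xi, \eta \rangle_x = 0; the "improved" condition proved for
% nearly spherical manifolds strengthens this lower bound.
```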
APA, Harvard, Vancouver, ISO, and other styles
19

Sánchez, Malagón Josep. "Determinants genòmics de la condició física: influència del polimorfisme BDNF val66met en la recuperació cardíaca postesforç." Doctoral thesis, Universitat Ramon Llull, 2012. http://hdl.handle.net/10803/66241.

Full text
Abstract:
Introduction: A common single nucleotide polymorphism in the human brain-derived neurotrophic factor (BDNF) gene (Val66Met) may have a modulatory effect on the cardiac sympathovagal balance (Yang A., Chen, Tsai, Hong, Kuo, & Yang, 2010). Recovery of the heart rate immediately after exercise is a function of vagal reactivation. During exercise, the increase of sympathetic activity and the decrease of vagal activity produce an increase in heart rate; parasympathetic activation seems to cause the deceleration of the heart (Levy, 1971; Javorka, Zila, & Balhárek, 2002). Methods: The sample was 67 healthy Spanish university students (age 18-35 years), 17 females and 50 males, all of them Caucasian for 3 or more generations. The genotype frequency was val/val (n=41), val/met (n=22) and met/met (n=4); Hardy-Weinberg equilibrium X2 test (p = 0.65). They performed a treadmill test with a 3% inclination and speed increments (beginning at 6 km/h and increasing 2 km/h every 2 minutes) until exhaustion. Maximal heart rate was obtained, as well as heart rate recovery immediately after the exercise from the first to the fourth minute. Blood pressure was obtained at rest, at the end of the test and after four minutes. A Lifefitness treadmill (USA), MP100 System hardware, AcqKnowledge 3.9 software on Windows XP, Biopac Systems (USA) and a Polar T31 (Finland) were used. DNA was obtained from saliva with the Oragene kit (DNA GENOTEK). Descriptive statistics (mean and standard deviation) were calculated using the SPSS 18 package. Results: Mean peak heart rate was 185.2 bpm. The results suggest that subjects without p.Val66Met have a better heart rate recovery during the first minute (24.59 bpm ±10.25). In the second minute, subjects with p.Val66Met have a better heart rate recovery (40.50 bpm ±12.29); in the third minute, subjects with p.Val66Met recover better (51.55 bpm ±14.56); in the fourth minute, subjects with p.Val66Met recover better (66.55 bpm ±10.53). At rest, systolic blood pressure (SBP) (123.48 mmHg ±11.15) and diastolic blood pressure (DBP) (63.63 mmHg ±7.17) were higher in subjects without p.Val66Met. At the end of the treadmill test, SBP was higher in subjects with p.Val66Met (213.63 mmHg ±15.28) and DBP was higher in subjects without p.Val66Met (44.34 mmHg ±17.75). After four minutes, subjects with p.Val66Met had higher SBP values (169.63 mmHg ±9.43), with no differences in DBP (48.86 mmHg ±4.86). Ours was only a descriptive study, and more research is needed to determine the influence of the BDNF Val66Met polymorphism on heart rate recovery and blood pressure.
Introducción: El polimorfismo val66met del factor neurotrófico derivado del cerebro (BDNF: Brain Derived Neurotrophic Factor) podría ejercer un efecto regulador en el equilibrio cardíaco simpatovagal (Yang A., Chen, Tsai, Hong, Kuo, & Yang, 2010). La recuperación cardíaca inmediatamente después del ejercicio es una función de la reactivación vagal. Durante el ejercicio el incremento de la actividad simpática y la disminución de la actividad vagal producen un incremento de la frecuencia cardíaca. La activación parasimpática parece ser la que causa la desaceleración de la frecuencia cardíaca (Levy, 1971; Javorka, Zila, & Balhárek, 2002). Metodología: La muestra escogida es de estudiantes universitarios (edad, 18-35 años), 17 mujeres y 50 hombres. Todos ellos caucásicos, con 3 generaciones o más de la misma familia que han vivido en la península ibérica. La frecuencia de genotipo es de val/val (n=41), val/met (n=22) y met/met (n=4), con un test X2 de equilibrio de Hardy-Weinberg de p = 0.65. Se les administró un test en cinta rodante con una inclinación del 3% y un incremento progresivo de la velocidad (inicio a 6 km/h e incremento de 2 km/h cada 2 minutos) hasta el agotamiento, obteniéndose la frecuencia cardíaca máxima y la frecuencia cardíaca en el primer, segundo, tercer y cuarto minuto. También se registró la tensión arterial sistólica y diastólica en reposo, al acabar la prueba de esfuerzo y pasados cuatro minutos. Para este estudio se utilizó una cinta rodante Lifefitness (USA), MP100 System Hardware, AcqKnowledge Software 3.9-Windows XP, Biopac Systems (USA) y Polar T31 (Finland). Para la obtención del DNA se utilizó el kit de saliva Oragene (DNA GENOTEK). Se hicieron los estudios descriptivos utilizando la media y la desviación típica con el programa SPSS 18. Resultados: La media de la frecuencia cardíaca máxima fue de 185.2 bpm. Los resultados sugieren que los sujetos sin p.val66met tienen una mejor recuperación cardíaca durante el primer minuto (24.59 lat/min ±10.25). Durante el segundo minuto los portadores del p.val66met recuperan mejor (40.50 lat/min ±12.29). En el tercer minuto son los sujetos con p.val66met los de mejor recuperación cardíaca (51.55 lat/min ±14.56). En el cuarto minuto los sujetos con p.val66met recuperan mejor (66.55 lat/min ±10.53). En situación de reposo la tensión arterial sistólica (TAS) (123.48 mmHg±11.15) y la tensión arterial diastólica (TAD) (63.63 mmHg±7.17) es más alta en portadores de p.val66met. Al finalizar la prueba la TAS es más alta en individuos con p.val66met (213.63 mmHg±15.28) y la TAD es más alta en sujetos sin p.val66met (44.34 mmHg±17.75). Pasados cuatro minutos los registros más altos de TAS son para los sujetos con p.val66met (169.63 mmHg±9.43), no habiendo diferencias en relación a la TAD. Este es un estudio descriptivo que necesita más investigaciones del p.val66met.
Introducció: El polimorfisme val66met del factor neurotròfic derivat del cervell (BDNF: Brain Derived Neurotrophic Factor) podria tenir un efecte regulador en l'equilibri cardíac simpatovagal (Yang A., Chen, Tsai, Hong, Kuo, & Yang, 2010). La recuperació cardíaca després de l'exercici és una funció de la reactivació vagal. Durant l'exercici l'augment de l'activitat simpàtica i la disminució de l'activitat vagal provoquen un increment de la freqüència cardíaca. L'activació parasimpàtica sembla ser la que causa la desacceleració de la freqüència cardíaca (Levy, 1971; Javorka, Zila, & Balhárek, 2002). Metodologia: La mostra escollida és d'estudiants universitaris de nacionalitat espanyola (edat, 18-35 anys), 17 dones i 50 homes. Tots ells caucàsics, amb tres generacions o més de la mateixa família que han viscut a la península ibèrica. La freqüència de genotip era de val/val (n=41), val/met (n=22) i met/met (n=4), amb un test X2 d'equilibri de Hardy-Weinberg de p = 0.65. Es va administrar un test amb cinta rodant amb una inclinació del 3% i un increment progressiu de la velocitat (inici a 6 km/h i augment de 2 km/h cada 2 minuts) fins a l'esgotament, obtenint la freqüència cardíaca màxima i la freqüència cardíaca immediata durant el primer, segon, tercer i quart minut. També es va enregistrar la tensió arterial sistòlica i la diastòlica en repòs, en acabar la prova d'esforç i passats quatre minuts. Per a realitzar aquest estudi es va utilitzar una cinta rodant Lifefitness (USA), MP100 System Hardware, AcqKnowledge Software 3.9-Windows XP, Biopac Systems (USA) i Polar T31 (Finland). Les mostres de DNA es van obtenir amb el kit de saliva Oragene (DNA GENOTEK). Els estudis descriptius es van fer utilitzant la mitjana i la desviació estàndard amb el programa SPSS 18. Resultats: La mitjana de freqüència cardíaca màxima va ser de 185.2 bpm. Els resultats suggereixen que els subjectes sense p.val66met tenen una millor recuperació cardíaca durant el primer minut (24.59 bpm ±10.25). Durant el segon minut els portadors del p.val66met recuperen millor (40.50 bpm±12.29). En el tercer minut són els subjectes amb p.val66met els que recuperen millor (51.55 bpm±14.56). En el quart minut els subjectes amb p.val66met recuperen millor (66.55 bpm±10.53). Pel que fa a la situació de repòs, la tensió arterial sistòlica (TAS) (123.48 mmHg±11.15) i la tensió arterial diastòlica (TAD) (63.63 mmHg±7.17) és més alta en portadors de p.val66met. En finalitzar la prova la TAS és més alta en individus amb p.val66met (213.63 mmHg±15.28) i la TAD més alta en subjectes sense p.val66met (44.34 mmHg±17.75). Passats quatre minuts els registres més alts de TAS són per a subjectes amb p.val66met (169.63 mmHg±9.43). No es detecten diferències en relació a la TAD. Aquest és un estudi descriptiu que necessita més recerques al voltant del p.val66met.
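The heart-rate-recovery quantities compared in this study are simple differences from peak heart rate, averaged per genotype group. The sketch below recomputes them on made-up example subjects; the numbers are illustrative, not the study's raw data.

```python
# Made-up example subjects (not the study's data): peak heart rate and
# the heart rate at 1..4 minutes post-exercise.
subjects = [
    {"genotype": "val/val", "hr_peak": 188, "hr_post": [165, 148, 137, 124]},
    {"genotype": "val/met", "hr_peak": 184, "hr_post": [160, 143, 131, 117]},
    {"genotype": "val/val", "hr_peak": 183, "hr_post": [157, 145, 133, 120]},
]

def hrr(subject, minute):
    """HRR_k = HR_peak - HR at k minutes post-exercise (minute is 1-based)."""
    return subject["hr_peak"] - subject["hr_post"][minute - 1]

def group_mean_hrr(subjects, genotype, minute):
    """Group mean of HRR_k for one genotype, the quantity the study compares."""
    vals = [hrr(s, minute) for s in subjects if s["genotype"] == genotype]
    return sum(vals) / len(vals)

hrr1_valval = group_mean_hrr(subjects, "val/val", 1)
```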
APA, Harvard, Vancouver, ISO, and other styles
20

Wouts, Marc. "Le modèle d'Ising dilué : coexistence de phases à l'équilibre & dynamique dans la région de transition de phase." Phd thesis, Université Paris-Diderot - Paris VII, 2007. http://tel.archives-ouvertes.fr/tel-00272899.

Full text
Abstract:
This thesis deals with the dilute Ising model in the phase-transition region. The Ising model is a classical model of statistical mechanics; it has the particularity of presenting two distinct phases at low temperature, which has motivated, among other things, its use for the rigorous study of phase coexistence. Our objective was to extend the description of the phenomenon of phase coexistence to the case of a random medium, that is, to the dilute Ising model, when the temperature and the dilution are low enough that two phases of opposite magnetisation appear.

The thesis comprises four chapters. In the first chapter, we adapt Pisztora's work to the random-medium setting and establish a renormalisation procedure compatible with dilution. In the second chapter, we study in detail the surface tension of this model, for the Gibbs measure corresponding to a fixed medium and for the averaged measure. We characterise the low-temperature limit of each of these quantities and describe the shapes of the corresponding crystals. We show that lower deviations of the surface tension have a surface-order cost and give a lower bound on the rate function using measure-concentration methods. In the third chapter, we describe the phenomenon of phase coexistence, under the Gibbs measure and under the averaged measure. In the fourth and final chapter, we conclude the thesis with an application to Glauber dynamics, and show that the autocorrelation decays no faster than an inverse power of time.
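The Glauber dynamics mentioned in the final chapter can be sketched on a site-diluted Ising model. Everything below (lattice size, dilution, temperature, seed) is an illustrative assumption, not the thesis's setting; the sketch only demonstrates the heat-bath update rule on a diluted lattice.

```python
import math
import random

def glauber_dilute_ising(L=20, dilution=0.1, beta=1.0, sweeps=200, seed=0):
    """Glauber (heat-bath) dynamics for a site-diluted Ising model on an
    L x L torus. Each site is present with probability 1 - dilution;
    absent sites carry spin 0 and never flip. Returns the magnetisation
    per present site. All parameter values are illustrative."""
    rng = random.Random(seed)
    present = [[rng.random() > dilution for _ in range(L)] for _ in range(L)]
    # Start from the all-plus configuration (deep in the plus phase).
    spin = [[1 if present[i][j] else 0 for j in range(L)] for i in range(L)]

    def local_field(i, j):
        return (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])

    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        if not present[i][j]:
            continue
        h = local_field(i, j)
        # Heat-bath rule: the spin becomes +1 with probability
        # e^{beta h} / (e^{beta h} + e^{-beta h}).
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
        spin[i][j] = 1 if rng.random() < p_up else -1

    n_present = sum(1 for row in present for p in row if p)
    return sum(s for row in spin for s in row) / n_present

m = glauber_dilute_ising()
```

At this low temperature and weak dilution the system stays in the plus phase, so the magnetisation per present site remains close to 1.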
APA, Harvard, Vancouver, ISO, and other styles
21

Ghrissi, Amina. "Ablation par catheter de fibrillation atriale persistante guidée par dispersion spatiotemporelle d’électrogrammes : Identification automatique basée sur l’apprentissage statistique." Thesis, Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ4026.

Full text
Abstract:
La fibrillation atriale (FA) est l'arythmie cardiaque soutenue la plus fréquemment rencontrée dans la pratique clinique. Pour la traiter, l'ablation par cathéter de zones cardiaques jugées responsables de soutenir l'arythmie est devenue la thérapie la plus utilisée. Un nouveau protocole d'ablation se base sur l'identification des zones atriales où les électrogrammes (EGM) enregistrés à l'aide d'un cathéter à électrodes multiples, appelé PentaRay, manifestent des décalages spatiotemporels significatifs sur plusieurs voies adjacentes. Ce phénomène est appelé dispersion spatio-temporelle (DST). L'intervention devient ainsi plus adaptée aux spécificités de chaque patient et elle atteint un taux de succès procédural de 95%. Cependant, à l'heure actuelle les zones de DST sont identifiées de manière visuelle par le spécialiste pratiquant l'ablation. Cette thèse vise à identifier automatiquement les sites potentiels d'ablation basée sur la DST à l'aide de techniques d'apprentissage statistique et notamment d'apprentissage profond adaptées. Dans la première partie, les enregistrements EGM sont classés par catégorie en DST vs. non-DST. Cependant, le rapport très déséquilibré entre les données issues des deux classes dégrade les résultats de classification. Nous abordons ce problème en utilisant des techniques d'augmentation de données adaptées à la problématique médicale et qui permettent d'obtenir de bons taux de classification. La performance globale s'élève ainsi, atteignant des valeurs de précision et d'aire sous la courbe ROC autour de 90%. Deux approches sont ensuite comparées, l'ingénierie des caractéristiques et l'extraction automatique de ces caractéristiques par apprentissage statistique à partir d'une série temporelle, appelée valeur absolue de tension maximale aux branches du PentaRay (VAVp). Les résultats montrent que la classification supervisée de VAVp est prometteuse avec des valeurs de précision, sensibilité et spécificité autour de 90%.
Ensuite, la classification des enregistrements EGM bruts est effectuée à l’aide de plusieurs outils d’apprentissage statistique. Une première approche consiste à étudier les circuits arithmétiques à convolution pour leur intérêt théorique prometteur, mais les expériences sur des données synthétiques sont infructueuses. Enfin, nous investiguons des outils d’apprentissage supervisé plus conventionnels comme les réseaux de neurones convolutifs (RNC). Nous concevons une sélection de représentation des données adaptées à différents algorithmes de classification. Ces modèles sont ensuite évalués en termes de performance et coût de calcul. L’apprentissage profond par transfert est aussi étudié. La meilleure performance est obtenue avec un RNC peu profond pour la classification des matrices EGM brutes, atteignant 94% de précision et d’aire sous la courbe ROC en plus d’un score F1 de 60%. Dans la deuxième partie, les enregistrements EGM acquis pendant la cartographie sont étiquetés ablatés vs. non-ablatés en fonction de leur proximité par rapport aux sites d’ablation, puis classés dans les mêmes catégories. Les annotations de dispersion sont aussi prises en compte comme une probabilité à priori dans la classification. La meilleure performance représente un score F1 de 76%. L’agrégation de l’étiquette DST ne permet pas d’améliorer les performances du modèle. Globalement, ce travail fait partie des premières tentatives d’application de l’analyse statistique et d’outils d’apprentissage pour l’identification automatique et réussie des zones d’ablation en se basant sur la DST. En fournissant aux cardiologues interventionnels un outil intelligent, objectif et déployé en temps réel qui permet la caractérisation de la dispersion spatiotemporelle, notre solution permet d’améliorer potentiellement l’efficacité de la thérapie personnalisée d’ablation par cathéter de la FA persistante
Catheter ablation is increasingly used to treat atrial fibrillation (AF), the most common sustained cardiac arrhythmia encountered in clinical practice. A recent patient-tailored AF ablation therapy, giving a 95% procedural success rate, is based on the use of a multipolar mapping catheter called PentaRay. It targets areas of spatiotemporal dispersion (STD) in the atria as potential AF drivers. STD stands for a delay of the cardiac activation observed in intracardiac electrograms (EGMs) across contiguous leads. In practice, interventional cardiologists localize STD sites visually using the PentaRay multipolar mapping catheter. This thesis aims to automatically characterize and identify ablation sites in STD-based ablation of persistent AF using machine learning (ML), including deep learning (DL) techniques. In the first part, EGM recordings are classified into STD vs. non-STD groups. However, the highly imbalanced class ratio of the dataset hampers classification performance. We tackle this issue by using adapted data augmentation techniques that help achieve good classification. The overall performance is high, with values of accuracy and AUC around 90%. First, two approaches are benchmarked: feature engineering and automatic feature extraction from a time series, called maximal voltage absolute values at any of the bipoles (VAVp). Statistical features are extracted and fed to ML classifiers, but no important dissimilarity is obtained between the STD and non-STD categories. Results show that the supervised classification of the raw VAVp time series itself into the same categories is promising, with values of accuracy, AUC, sensitivity and specificity around 90%. Second, the classification of raw multichannel EGM recordings is performed. Shallow convolutional arithmetic circuits are investigated for their promising theoretical interest, but experimental results on synthetic data are unsuccessful. Then, we move forward to more conventional supervised ML tools.
We design a selection of data representations adapted to different ML and DL models, and benchmark their performance in terms of classification and computational cost. Transfer learning is also assessed. The best performance is achieved with a convolutional neural network (CNN) model for classifying raw EGM matrices. The average performance over cross-validation reaches 94% accuracy and AUC, added to an F1-score of 60%. In the second part, EGM recordings acquired during mapping are labeled ablated vs. non-ablated according to their proximity to the ablation sites, then classified into the same categories. STD labels, previously defined by interventional cardiologists at the ablation procedure, are also aggregated as a prior probability in the classification task. Classification results on the test set show that a shallow CNN gives the best performance, with an F1-score of 76%. Aggregating the STD label does not help improve the model's performance. Overall, this work is among the first attempts to apply statistical analysis and ML tools to automatically identify successful ablation areas in STD-based ablation. By providing interventional cardiologists with a real-time objective measure of STD, the proposed solution offers the potential to improve the efficiency and effectiveness of this fully patient-tailored catheter ablation approach for treating persistent AF.
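The class-imbalance problem described in this abstract is typically tackled by augmenting the minority class. The sketch below is a naive stand-in (oversampling with jitter and amplitude scaling, invented parameters and toy data), not the adapted EGM-specific augmentation techniques of the thesis.

```python
import random

def augment_to_balance(X, y, minority=1, noise=0.02, seed=0):
    """Oversample the minority class of a binary time-series dataset with
    jitter and random amplitude scaling until both classes have equal
    counts. A naive illustrative scheme only."""
    rng = random.Random(seed)
    maj = [x for x, lbl in zip(X, y) if lbl != minority]
    mino = [x for x, lbl in zip(X, y) if lbl == minority]
    X_aug, y_aug = list(X), list(y)
    added = 0
    while len(mino) + added < len(maj):
        base = rng.choice(mino)
        scale = rng.uniform(0.9, 1.1)   # random amplitude scaling
        X_aug.append([scale * v + rng.gauss(0.0, noise) for v in base])
        y_aug.append(minority)
        added += 1
    return X_aug, y_aug

# Demo: 8 "non-STD" (label 0) vs only 2 "STD" (label 1) toy series.
X = [[0.0, 0.1, 0.2, 0.1, 0.0]] * 8 + [[1.0, 0.9, 1.1, 0.8, 1.0]] * 2
y = [0] * 8 + [1] * 2
X_bal, y_bal = augment_to_balance(X, y)
```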
APA, Harvard, Vancouver, ISO, and other styles
22

MEDEIROS, Rex Antonio da Costa. "Zero-Error capacity of quantum channels." Universidade Federal de Campina Grande, 2008. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1320.

Full text
Abstract:
In this thesis, the zero-error capacity of discrete memoryless channels is generalised to quantum channels. A new capacity for the transmission of classical information through quantum channels is proposed. The zero-error capacity of quantum channels (CEZQ) is defined as the maximum amount of information per channel use that can be sent through a noisy quantum channel with an error probability equal to zero. The communication protocol restricts codewords to tensor products of input quantum states, while collective measurements across several channel outputs are allowed; the protocol is therefore similar to the Holevo-Schumacher-Westmoreland protocol. The problem of finding the CEZQ is reformulated using elements of graph theory. This equivalent definition is used to prove properties of families of quantum states and measurements that attain the CEZQ. It is shown that the capacity of a quantum channel in a Hilbert space of dimension d can always be attained using families composed of at most d pure states. Regarding measurements, collective von Neumann measurements are shown to be necessary and sufficient to attain the capacity. It is discussed whether the CEZQ is a non-trivial generalisation of the classical zero-error capacity, where non-trivial refers to the existence of quantum channels for which the CEZQ can only be attained using families of non-orthogonal quantum states and codes of length two or more. The CEZQ of some quantum channels is investigated. It is shown that the problem of computing the CEZQ of classical-quantum channels is purely classical. In particular, a quantum channel is exhibited for which it is conjectured that the CEZQ can only be attained using a family of non-orthogonal quantum states. If the conjecture is true, it is possible to compute the exact value of the capacity and to construct a quantum block code that attains it. Finally, it is shown that the CEZQ is upper bounded by the Holevo-Schumacher-Westmoreland capacity.
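For background, the classical zero-error capacity that this thesis generalises is itself graph-theoretic: C0 = sup_n (1/n) log2 α(G^⊠n), where G is the channel's confusability graph, α the independence number, and ⊠ the strong product. The sketch below checks the textbook pentagon-channel facts (α(C5) = 2, and a 5-word independent set in C5 ⊠ C5, giving C0 = ½ log2 5 by Lovász); it illustrates only the classical notion, not the quantum CEZQ.

```python
from itertools import combinations

def c5_adjacent(a, b):
    """Adjacency in the 5-cycle C5 (the pentagon channel's
    confusability graph)."""
    return (a - b) % 5 in (1, 4)

def strong_product_adjacent(u, v):
    """Adjacency of u = (u1, u2), v = (v1, v2) in C5 box C5: each
    coordinate equal or adjacent, and u != v."""
    if u == v:
        return False
    ok1 = u[0] == v[0] or c5_adjacent(u[0], v[0])
    ok2 = u[1] == v[1] or c5_adjacent(u[1], v[1])
    return ok1 and ok2

def alpha_c5():
    """Brute-force independence number of C5."""
    best = 1
    for k in (2, 3):
        for s in combinations(range(5), k):
            if all(not c5_adjacent(a, b) for a, b in combinations(s, 2)):
                best = max(best, k)
    return best

alpha_c5_value = alpha_c5()

# The known optimal length-2 zero-error code for the pentagon channel:
pentagon_code = [(0, 0), (1, 2), (2, 4), (3, 1), (4, 3)]
code_is_independent = all(not strong_product_adjacent(u, v)
                          for u, v in combinations(pentagon_code, 2))
```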
APA, Harvard, Vancouver, ISO, and other styles
23

Tang, Herbert Hoi Chi. "Bayesian Analysis of Intratumoural Oxygen Data." Thesis, 2009. http://hdl.handle.net/10012/4643.

Full text
Abstract:
There is now ample evidence to support the notion that a lack of oxygen (hypoxia) within the tumour adversely affects the outcome of radiotherapy and whether a patient is able to remain disease free. Thus, there is increasing interest in accurately determining oxygen concentration levels within a tumour. Hypoxic regions arise naturally in cancerous tumours because of their abnormal vasculature, and it is believed that oxygen is necessary in order for radiation to be effective in killing cancer cells. One method of measuring oxygen concentration within a tumour is the Eppendorf polarographic needle electrode, a method favoured by many clinical researchers because it is the only device that is inserted directly into the tumour and reports its findings in terms of oxygen partial pressure (PO2). Unfortunately, there are often anomalous readings in the Eppendorf measurements (negative and extremely high values) and there is little consensus as to how best to interpret the data. In this thesis, Bayesian methods are applied to estimate two measures commonly used in the current literature to quantify oxygen content within a tumour: the median PO2, and the Hypoxic Proportion (HP5), the percentage of readings less than 5 mmHg. The results will show that Bayesian methods of parameter estimation are able to reproduce the standard estimate for HP5 while providing an additional piece of information, the error bar, that quantifies how uncertain we believe our estimate to be. Furthermore, using the principle of Maximum Entropy, we will estimate the true median PO2 of the distribution instead of simply relying on the sample median, a value which may or may not be an accurate indication of the actual median PO2 inside the tumour. The advantage of the Bayesian method is that it takes advantage of probability theory and presents its results in the form of probability density functions.
These probability density functions provide us with more information about the desired quantity than the single number that is produced in the current literature and allows us to make more accurate and informative statements about the measure of hypoxia that we are trying to estimate.
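The Beta-Binomial computation behind a Bayesian HP5 estimate can be sketched as follows. The uniform prior and the made-up PO2 readings are our assumptions for illustration, not the thesis's exact model or data.

```python
import math

def hp5_posterior(readings, threshold=5.0):
    """Posterior for HP5 (probability that a reading is below `threshold`
    mmHg) under a uniform Beta(1, 1) prior: with k of n readings below
    the threshold, the posterior is Beta(k + 1, n - k + 1). Returns its
    mean and standard deviation -- the 'error bar' that a bare point
    estimate lacks."""
    n = len(readings)
    k = sum(1 for r in readings if r < threshold)
    a, b = k + 1, n - k + 1
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

# Made-up PO2 readings in mmHg (not data from the thesis):
readings = [2.1, 4.8, 7.5, 12.0, 3.3, 25.0, 6.1, 1.9]
mean, sd = hp5_posterior(readings)
```

Here 4 of 8 readings fall below 5 mmHg, so the posterior mean matches the standard HP5 estimate of 50%, but the posterior also quantifies the uncertainty of that figure.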
APA, Harvard, Vancouver, ISO, and other styles
24

Carvalho, Thiago Cardoso. "Development of an inhalational formulation of Coenzyme Q₁₀ to treat lung malignancies." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-12-4798.

Full text
Abstract:
Cancer is the second leading cause of death in the United States, and the lungs are among its most common sites of onset, with very low long-term survival rates. Chemotherapy plays a significant role in lung cancer treatment, and pulmonary delivery may be a potential route for anticancer drug delivery to treat lung tumors. Coenzyme Q₁₀ (CoQ₁₀) is a poorly water-soluble compound that is being investigated for the treatment of carcinomas. In this work, we hypothesize that formulations of CoQ₁₀ may be developed for pulmonary delivery with a satisfactory pharmacokinetic profile that will have the potential to improve the pharmacodynamic response when treating lung malignancies. The formulation design was to use a vibrating-mesh nebulizer to aerosolize aqueous dispersions of CoQ₁₀ stabilized by phospholipids physiologically found in the lungs. In the first study, a method was developed to measure the surface tension of liquids, a physicochemical property that has been shown to influence the aerosol output characteristics of vibrating-mesh nebulizers. Subsequently, this method was used, together with analysis of particle size distribution, zeta potential, and rheology, to further evaluate the factors influencing the capability of this nebulizer system to continuously and steadily aerosolize formulations of CoQ₁₀ prepared with high-pressure homogenization. The aerosolization profile (nebulization performance and in vitro drug deposition of nebulized droplets) of formulations prepared with soybean lecithin, dimyristoylphosphatidylcholine (DMPC), dipalmitoylphosphatidylcholine (DPPC) and distearoylphosphatidylcholine (DSPC) was evaluated. The rheological behavior of these dispersions was found to be the factor that may be indicative of the aerosolization output profile. Finally, the pulmonary deposition and systemic distribution of CoQ₁₀ prepared as DMPC, DPPC, and DSPC dispersions were investigated in vivo in mice.
It was found that high drug amounts were deposited and retained in the mouse lungs for at least 48 hours post nebulization. Systemic distribution was not observed and deposition in the nasal cavity occurred at a lower scale than in the lungs. This body of work provides evidence that CoQ₁₀ may be successfully formulated as dispersions to be aerosolized using vibrating-mesh nebulizers and achieve high drug deposition in the lungs during inhalation.
APA, Harvard, Vancouver, ISO, and other styles
25

Huang, Jun-Fu, and 黃俊夫. "The Use of Surfactants with High Electrolyte Tolerance for Ultralow Interfacial Tension and Maximum Solubilization Processes." Thesis, 1993. http://ndltd.ncl.edu.tw/handle/55244249809498955750.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Линник, Дмитро Олександрович. "Research and development of a device for controlling the power of photovoltaic systems." Master's thesis, 2020. https://dspace.znu.edu.ua/jspui/handle/12345/4870.

Full text
Abstract:
Линник Д. О. Research and development of a device for controlling the power of photovoltaic systems: master's qualification thesis, speciality 153 "Micro- and Nanosystem Engineering" / supervisor Г. Г. Коломоєць. Zaporizhzhia : ZNU, 2021. 93 pp.
UA : A device for controlling the power of photovoltaic systems with high energy efficiency has been developed.
EN : A device for controlling the power of photovoltaic systems, which has high energy efficiency, is developed.
APA, Harvard, Vancouver, ISO, and other styles
27

Pilote, Bruno. "Relation between an inappropriate blood pressure response during a maximal treadmill exercise test and ambulatory blood pressure monitoring in type 2 diabetics." 2004. http://proquest.umi.com/pqdweb?did=885673081&sid=20&Fmt=2&clientId=9268&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Scoleri, Tony. "Fundamental numerical schemes for parameter estimation in computer vision." 2008. http://hdl.handle.net/2440/50726.

Full text
Abstract:
An important research area in computer vision is parameter estimation. Given a mathematical model and a sample of image measurement data, key parameters are sought to encapsulate geometric properties of a relevant entity. An optimisation problem is often formulated in order to find these parameters. This thesis presents an elaboration of fundamental numerical algorithms for estimating parameters of multi-objective models of importance in computer vision applications. The work examines ways to solve unconstrained and constrained minimisation problems from the viewpoints of theory, computational methods, and numerical performance. The research starts by considering a particular form of multi-equation constraint function that characterises a wide class of unconstrained optimisation tasks. Increasingly sophisticated cost functions are developed within a consistent framework, ultimately resulting in the creation of a new iterative estimation method. The scheme operates in a maximum likelihood setting and yields near-optimal estimates of the parameters. Salient features of the method are that it has simple update rules and exhibits fast convergence. Then, to accommodate models with functional dependencies, two variants of this initial algorithm are proposed. These methods are improved again by reshaping the objective function in a way that presents the original estimation problem in a reduced form. This procedure leads to a novel algorithm with enhanced stability and convergence properties. To extend the capacity of these schemes to deal with constrained optimisation problems, several a posteriori correction techniques are proposed to impose the so-called ancillary constraints. This work culminates in two methods which can tackle ill-conditioned constrained functions. The combination of the previous unconstrained methods with these post-hoc correction schemes provides an array of powerful constrained algorithms. 
The practicality and performance of the methods are evaluated on two specific applications: one is planar homography matrix computation, and the other is trifocal tensor estimation. In the case of fitting a homography to image data, only the unconstrained algorithms are necessary. For the problem of estimating a trifocal tensor, significant work is done first on expressing sets of usable constraints, especially the ancillary constraints which are critical to ensure that the computed object conforms to the underlying geometry. Evidently here, the post-correction schemes must be incorporated in the computational mechanism. For both of these example problems, the performance of the unconstrained and constrained algorithms is compared to existing methods. Experiments reveal that the new methods match a state-of-the-art technique in accuracy but surpass it in execution speed.
Thesis (Ph.D.) - University of Adelaide, School of Mathematical Sciences, Discipline of Pure Mathematics, 2008
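The homography-fitting application mentioned in this abstract has a well-known unconstrained baseline, the direct linear transform (DLT). The sketch below illustrates that textbook baseline only, not the thesis's own iterative schemes; all identifiers and the test correspondences are ours, purely for illustration:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the direct
    linear transform: stack two linear constraints per correspondence
    and take the right singular vector of the smallest singular value."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix scale (and sign) so H[2, 2] == 1

# Four synthetic correspondences generated from a known homography
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [0.001, 0.002, 1.0]])
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = []
for x, y in src:
    p = H_true @ np.array([x, y, 1.0])
    dst.append((p[0] / p[2], p[1] / p[2]))
H_est = dlt_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))
```

With exact correspondences the eight-row system has a one-dimensional null space and the SVD recovers the homography up to scale; the thesis's constrained algorithms refine such algebraic estimates toward the maximum likelihood solution.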
APA, Harvard, Vancouver, ISO, and other styles
29

Ravele, Thakhani. "Medium term load forecasting in South Africa using Generalized Additive models with tensor product interactions." Diss., 2018. http://hdl.handle.net/11602/1165.

Full text
Abstract:
MSc (Statistics)
Department of Statistics
Forecasting of electricity peak demand levels is important for decision makers in Eskom. The overall objective of this study was to develop medium-term load forecasting models which will help decision makers in Eskom with planning the operations of the utility company. A frequency table of hourly demand was compiled, and the results show that most daily peak loads occurred at 19:00 and 20:00 over the period 2009 to 2013. The study used generalised additive models with and without tensor product interactions to forecast electricity demand at 19:00 and 20:00, including daily peak electricity demand. The least absolute shrinkage and selection operator (Lasso) and Lasso via hierarchical interactions were used for variable selection to increase model interpretability by eliminating irrelevant variables that are not associated with the response variable; in this way overfitting is also reduced. The parameters of the developed models were estimated using restricted maximum likelihood and penalised regression. The best models were selected based on the smallest values of the Akaike information criterion (AIC), Bayesian information criterion (BIC) and generalised cross validation (GCV), along with the highest adjusted R2. Forecasts from the best models with and without tensor product interactions were evaluated using the mean absolute percentage error (MAPE), mean absolute error (MAE) and root mean square error (RMSE). Operational forecasting was proposed to forecast the demand at hour 19:00 with unknown predictor variables. Empirical results from this study show that modelling hours individually during the peak period results in more accurate peak forecasts compared to forecasting daily peak electricity demand. The performance of the proposed models for hour 19:00 was compared, and the generalised additive model with tensor product interactions was found to be the best-fitting model.
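The forecast-accuracy measures named in this abstract (MAPE, MAE and RMSE) have standard definitions; a minimal sketch follows, with the function name and the demand figures ours, purely hypothetical:

```python
import math

def forecast_errors(actual, forecast):
    """Compute MAPE (%), MAE and RMSE for paired demand series."""
    n = len(actual)
    mape = 100.0 / n * sum(abs((a - f) / a) for a, f in zip(actual, forecast))
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / n
    rmse = math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)
    return mape, mae, rmse

# Illustrative hourly peak demand (MW) versus model forecasts
actual = [34000.0, 35500.0, 36200.0]
forecast = [33500.0, 36000.0, 36100.0]
print(forecast_errors(actual, forecast))
```

MAPE is scale-free, which is why it is commonly reported alongside the scale-dependent MAE and RMSE when comparing load-forecasting models.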
NRF
APA, Harvard, Vancouver, ISO, and other styles