Dissertations / Theses on the topic 'Dose Computation'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Dose Computation.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Jung, Haejae. "Algorithms for external beam dose computation." [Florida] : State University System of Florida, 2000. http://etd.fcla.edu/etd/uf/2000/ane5955/etd.pdf.
Title from first page of PDF file. Document formatted into pages; contains xii, 143 p.; also contains graphics. Vita. Includes bibliographical references (p. 140-142).
Fox, Timothy Harold. "Computation and optimization of dose distributions for rotational stereotactic radiosurgery." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/32843.
Inacio, Eloïse. "Méthodes numériques en imagerie médicale pour l'évaluation de dose per-opératoire en ablation par électroporation." Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0474.
As life expectancy rises, cancer has tragically become one of the world’s leading causes of death. Among the most challenging cancers are deep-seated tumors, which are difficult to treat due to their location in vital organs like the liver or the pancreas. A promising method to tackle these tumors is electroporation ablation, which uses electric fields to create pores in the cell membranes of tumor cells. When applied with high intensity, this results in irreversible electroporation, leading to cell death without damaging nearby structures. However, electroporation requires precise planning and real-time adaptation due to its complexity. This involves numerical tools to analyze medical images and estimate the treatment area. The aim of this work is to provide such tools, analysing medical images, to per-operatively estimate the treatment area so that the interventional radiologists may adapt their approach as they are performing the procedure. More specifically, we tackle the localisation of the electrode by introducing deep learning in the existing pipeline, and the registration of the multiple scans captured during the intervention with novel auto-adaptive boundary conditions. Both computer vision tasks are crucial for a precise estimation of the electric field and need to be solved in near real time to be practical in clinical settings. These advancements in computer vision and image processing contribute to more accurate electric field estimation and improve the overall effectiveness of the procedure, leading to better patient outcomes for those battling deep-seated cancers.
Vautrin, Mathias. "Planification de traitement en radiothérapie stéréotaxique par rayonnement synchrotron. Développement et validation d'un module de calcul de dose par simulations Monte Carlo." PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00641325.
Nygren, Nelly. "Optimization of the Gamma Knife Treatment Room Design." Thesis, KTH, Fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300904.
Radiation safety is an important aspect in the construction of treatment rooms for radiation therapy systems. The radiation levels to which healthcare staff and the general public may be exposed outside the treatment room are regulated by authorities and affect what wall thickness is needed and which locations are suitable for placing the systems. Several methods for radiation shielding calculations exist, but they are not well suited to the stereotactic radiosurgery system Leksell Gamma Knife, since it has built-in radiation shielding. The built-in shielding means that the radiation field around the Gamma Knife is anisotropic and generally has lower energy than the primary radiation from the system's cobalt sources. Simplifications made regarding the radiation field in several existing shielding calculation methods can lead to excessively thick shielding being used, or limit the number of suitable locations for the system. In this project, a dose calculation algorithm was developed that uses data generated through Monte Carlo simulations in two steps. The algorithm uses a phase space to describe the radiation field around the Gamma Knife in detail. Information about individual photons in the field is then used in combination with a generated library of data describing the dose contribution a photon makes outside the treatment room, based on the photon's energy and the thickness of the walls. The dose calculation algorithm is fast enough to be integrated into optimization processes where it is used iteratively while room design parameters are varied. In this report, a case is demonstrated with a room of fixed size, where the position of the Gamma Knife in the room and the thicknesses of the walls are varied. The purpose of the optimization in this example is to find the room design that, with the smallest wall thicknesses, results in acceptable radiation levels outside the room. The results indicate that the dose calculation algorithm can likely be used in more complex optimizations with more design variables and more advanced design goals.
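As a loose illustration of the two-step scheme described in this abstract (a phase-space description of the photons leaving the unit, combined with a precomputed library of per-photon dose contributions behind a wall), the Python sketch below sums simple exponential transmission estimates over a handful of phase-space photons. The photon list, the attenuation table and the energy-weighted "dose" proxy are invented for illustration and are not taken from the thesis.

```python
import math

# Illustrative phase-space sample: (energy in MeV, statistical weight, direction cosine toward the wall)
phase_space = [
    (1.25, 1.0, 0.9),
    (0.35, 0.7, 0.4),
    (0.66, 0.5, 0.8),
]

# Hypothetical linear attenuation coefficients (1/cm) for concrete, keyed by energy bin (MeV)
MU_CONCRETE = {0.25: 0.28, 0.5: 0.20, 0.75: 0.17, 1.0: 0.15, 1.25: 0.14}

def nearest_bin(energy):
    """Pick the closest tabulated energy bin."""
    return min(MU_CONCRETE, key=lambda e: abs(e - energy))

def dose_outside_room(photons, wall_thickness_cm):
    """Sum an exponential-transmission dose proxy over the phase-space photons."""
    total = 0.0
    for energy, weight, cos_theta in photons:
        if cos_theta <= 0.0:            # photon not headed toward this wall
            continue
        path = wall_thickness_cm / cos_theta                 # slant path through the wall
        mu = MU_CONCRETE[nearest_bin(energy)]
        total += weight * energy * math.exp(-mu * path)      # transmitted "dose" proxy
    return total

if __name__ == "__main__":
    for t in (20.0, 40.0, 60.0):
        print(f"wall {t:4.0f} cm -> relative dose {dose_outside_room(phase_space, t):.3e}")
```

In an optimization loop as described above, a routine of this kind would be called repeatedly while the wall thicknesses and the unit position are varied.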
Morén, Björn. "Mathematical Modelling of Dose Planning in High Dose-Rate Brachytherapy." Licentiate thesis, Linköpings universitet, Optimeringslära, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154966.
Aspradakis, Maria M. "A study to assess and improve dose computations in photon beam therapy." Thesis, University of Edinburgh, 1997. http://hdl.handle.net/1842/21334.
OLIVEIRA JUNIOR, Wilson Rosa de. "Turing's analysis of computation and artificial neural network." Universidade Federal de Pernambuco, 2004. https://repositorio.ufpe.br/handle/123456789/1970.
Inspired by a suggestion of McCulloch and Pitts in their pioneering work, a simulation of Turing Machines (TMs) by Artificial Neural Networks (ANNs) is presented. Unlike previous works, this simulation agrees with the correct interpretation of Turing's analysis of computation; it is compatible with current approaches to analysing cognition as an interactive agent-environment process; and it is physically realizable, since it does not use connection weights of unlimited precision. A complete description of an implementation of a universal TM in a sigmoidal recurrent ANN is given. The tape, an infinite resource, is left out of the encoding as a non-intrinsic external feature. The resulting network is called a Neural Turing Machine. The classical model of computation, Turing Machine = Tape + Finite State Automaton (FSA), is replaced by the neural model of computation, Neural Turing Machine (NTM) = Tape + Artificial Neural Network (ANN). Arguments for the physical and cognitive plausibility of this approach are provided and its mathematical consequences are investigated. It is well known in the theoretical neurocomputing community that an arbitrary FSA cannot be implemented in an ANN when noise or limited precision is taken into account: under these conditions, analog systems in general, and ANNs in particular, are computationally equivalent to Definite Automata, a very restricted class of FSA. Among the main contributions of the proposed approach is the definition of a new machine model, the Definite Turing Machine (DTM), which arises when noise is taken into account. This result is reflected in the second equation above becoming Neural Turing Machine with noise = Tape + ANN with noise, with the corresponding equation Definite Turing Machine = Tape + Definite Finite Automata. The investigation of the computational power of Definite Turing Machines is another important contribution of the thesis. It is proved that they compute the class of elementary functions (Brainerd & Landweber, 1974) of Recursion Theory.
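To make the decomposition "Turing Machine = Tape + Finite State Automaton" used above concrete, here is a minimal, self-contained Python sketch of a Turing machine whose finite control is just a transition table acting on an external tape; the toy transition table (appending one symbol to a unary string) is invented for illustration and has nothing to do with the neural simulation developed in the thesis.

```python
# Minimal Turing machine: finite control (a transition table) acting on an external tape.
# Toy example: appends one '1' to a unary string and halts.
from collections import defaultdict

TRANSITIONS = {
    # (state, symbol) -> (new_state, symbol_to_write, head_move)
    ("scan", "1"): ("scan", "1", +1),   # move right over the existing 1s
    ("scan", "_"): ("halt", "1", 0),    # write one more 1 at the first blank, then halt
}

def run(tape_string, state="scan", max_steps=1000):
    tape = defaultdict(lambda: "_", enumerate(tape_string))  # unbounded tape, blank = "_"
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, symbol, move = TRANSITIONS[(state, tape[head])]
        tape[head] = symbol
        head += move
    cells = [tape[i] for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip("_")

print(run("111"))  # -> "1111"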
Palit, Robin. "Computational Tools and Methods for Objective Assessment of Image Quality in X-Ray CT and SPECT." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/268492.
Moura, Augusto Fontan. "A computational study of the airflow at the intake region of scramjet engines." Instituto Tecnológico de Aeronáutica, 2014. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=2973.
Campanelli, Henrique Barcellos. "Avaliação do sistema computadorizado de planejamento radioterápico XiO 5.10 – aspectos funcionais e avanços tecnológicos para melhoria da resposta terapêutica dos tratamentos." Universidade Estadual Paulista (UNESP), 2018. http://hdl.handle.net/11449/153158.
This paper presents a study of radiotherapy treatment plans produced with the XiO 5.10 treatment planning system (TPS) used in radiotherapy centers. The clinical protocols of radiotherapy were analyzed through bibliographic research in scientific articles and texts produced by societies related to the area of radiation oncology. Technical visits to radiotherapy services were performed for a better understanding of the differences between the radiotherapy planning systems applied. The research verified the response of three distinct computational algorithms for radiation dose calculation: Superposition, Convolution and Fast Superposition, available in the TPS XiO® of Elekta Medical Systems. For this purpose, the dose distribution parameters were analyzed for 22 radiotherapy plans performed in a private radiotherapy clinic: 1 patient with cerebral metastasis, 3 patients with head and neck tumors, 9 with breast cancer and 9 with prostate tumors. The influence of the heterogeneity of the irradiated tissue was also quantified through the XiO heterogeneity correction methods. Regardless of the correction of tissue heterogeneity, the Convolution algorithm tended, in general, to underestimate the dose in the PTV when compared to the other two dose calculation algorithms. No significant differences were observed in the radiation dose values calculated for the PTV with the other two algorithms; however, the Superposition method provides a slight reduction of computational processing time. This work aims to contribute to a better understanding of the TPS XiO calculation approach and thus to benefit the users of computerized planning during the analysis of the quality indicators of the treatment plan.
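For orientation, the Convolution and Superposition algorithms compared above are both instances of the convolution/superposition dose model, in which the dose is obtained by convolving the total energy released per unit mass (TERMA) with an energy-deposition kernel; the expression below is the generic textbook form, not the specific implementation in XiO.

```latex
% Convolution/superposition dose model (textbook form):
% dose at r = TERMA released at r' convolved with an energy-deposition kernel K.
D(\mathbf{r}) = \int T(\mathbf{r}')\, K(\mathbf{r} - \mathbf{r}')\, d^3 r'
```

Superposition-type algorithms additionally scale the kernel by the radiological path length between the interaction and deposition points, which is why they handle tissue heterogeneity better than plain convolution.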
Lee, Choonik. "Development of the voxel computational phantoms of pediatric patients and their application to organ dose assessment." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0013660.
Staton, Robert J. "Organ dose assessment in pediatric fluoroscopy and CT via a tomographic computational phantom of the newborn patient." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0013050.
Perin, Lucas Pandolfo. "Formulas for p-th root computations in finite fields of characteristic p using polynomial basis." reponame:Repositório Institucional da UFSC, 2016. https://repositorio.ufsc.br/xmlui/handle/123456789/162861.
Abstract: Efficient cube root computations in extension fields of characteristic three have been studied, in part motivated by pairing-based cryptography implementations. Additionally, recent studies have emerged on the computation of p-th roots in finite fields of characteristic p, where p is prime. These contributions have either considered a fixed characteristic for the extension field or irreducible polynomials with few nonzero terms. We provide new families of irreducible polynomials over 𝔽p, taking into account polynomials with k ≥ 2 nonzero terms and p ≥ 3. Moreover, for the particular case p = 3, we slightly improve some previous results and we provide new extensions where efficient cube root computations are possible.
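As background, the reason p-th roots are well defined at all here is that the Frobenius map is a field automorphism in characteristic p; the identity below is standard, and work such as this thesis looks for formulas cheaper than this single large exponentiation.

```latex
% In F_{p^m} the Frobenius map x -> x^p is an automorphism, hence every element
% has a unique p-th root obtained by one exponentiation:
\sqrt[p]{x} = x^{\,p^{m-1}} \qquad \text{for } x \in \mathbb{F}_{p^m},
% e.g. cube roots in characteristic three: x^{1/3} = x^{3^{m-1}}.
```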
Pelz, Philipp M. [Verfasser], and R. J. Dwayne [Akademischer Betreuer] Miller. "Low-dose computational phase contrast transmission electron microscopy via electron ptychography / Philipp M. Pelz ; Betreuer: R.J. Dwayne Miller." Hamburg : Staats- und Universitätsbibliothek Hamburg, 2018. http://d-nb.info/1173899243/34.
Nelms, Mark David. "Development of in silico models to support repeat dose safety assessment of cosmetic ingredients to humans." Thesis, Liverpool John Moores University, 2015. http://researchonline.ljmu.ac.uk/4424/.
Moignier, Alexandra. "Dosimétrie cardiovasculaire à l’aide de fantômes numériques hybrides en radiothérapie externe." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112309/document.
Cardiovascular diseases following radiotherapy are major secondary late effects raising questions among the scientific community, especially regarding the dose-effect relationship and confounding risk factors (chemotherapy, cholesterolemia, age at treatment, blood pressure, ...). Post-radiation coronary diseases are one of the main causes of cardiac morbidity. Some approximations are made when coronary doses due to radiotherapy are estimated, especially regarding the morphology. For retrospective studies with old medical records, only radiographs are usually available, sometimes with some contours made with a simulator. For recent medical records, CT scans displaying the anatomy in 3D are used for radiotherapy simulation but do not allow coronary artery visualization due to low resolution and contrast. Currently, coronary doses are barely assessed in clinical practice, and when it is done, anatomical prior knowledge is generally used. This thesis proposes an original approach based on hybrid computational phantoms to study coronary artery doses following radiotherapy for left-side breast cancer and Hodgkin lymphoma. During the thesis, a method inserting hybrid computational phantoms in DICOM format into the treatment planning system has been developed and validated. It has been adapted and tested in conditions where only radiographs provide anatomical information, as with old medical records for left-side breast radiotherapy. The method has also been adapted to perform precise dose reconstructions to the coronary arteries for patients treated for mediastinal Hodgkin lymphoma and diagnosed with coronary stenosis through a coroscanner. A case-control study was carried out, and the risk of stenosis on a coronary artery segment was assessed to be multiplied by 1.049 for each additional gray of median dose to that segment. For recent medical records, coronary dose uncertainties related to an approach based on anatomical prior knowledge were estimated for left-side breast radiotherapy by simulating different realistic coronary artery topologies in a single representative thorax anatomy and calculating doses due to beam sets with and without irradiation of the internal mammary chain. The inter-topology variability of the mean dose to the most irradiated coronary artery, the left descending coronary artery, was assessed at 35% and 19% with and without internal mammary chain irradiation, respectively; it was 76% and 49%, respectively, considering the dose to the most irradiated 2% of this coronary artery volume. Finally, an order of magnitude of the differences between measurements with radiochromic films and dose calculations by the ISOgray treatment planning system in the peripheral field area has been estimated for both a simple configuration (parallelepiped physical phantom, homogeneous media, open square field) and a complex configuration (anthropomorphic physical phantom, heterogeneous media, rectangular tangential beams with wedge filter). These differences were judged significant essentially around the geometric border of the irradiation field for both configurations.
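To put the reported factor of 1.049 per gray in perspective, a multiplicative reading of the estimate implies, for example, that a coronary segment with a median dose of 10 Gy has its stenosis risk scaled by roughly 1.6; this back-of-the-envelope calculation assumes the per-gray factor compounds multiplicatively, as in the logistic models underlying such case-control estimates.

```latex
% Compounding the per-gray factor over a 10 Gy median segment dose:
1.049^{10} = e^{10 \ln 1.049} \approx e^{0.478} \approx 1.61
```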
Silva, André Carvalho 1987. "Sobre a caracterização de grafos de visibilidade de leques convexos." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275633.
Abstract: The (vertex) visibility graph of a polygon is a graph that gathers all the visibility information among the vertices of the polygon. Three relevant problems related to visibility graphs are: characterization, recognition and reconstruction. Characterization calls for a set of necessary and sufficient conditions that every visibility graph must satisfy. Recognition deals with the problem of determining whether a given graph is the visibility graph of some polygon. Reconstruction asks for the generation of a polygon whose visibility graph is isomorphic to a given visibility graph. In this work, we study these problems on a restricted class of polygons, namely, convex fans: polygons that contain a convex vertex in its kernel. This work is comprised of two main results. The first one presents three necessary conditions that visibility graphs of convex fans must satisfy. We also show that those conditions are necessary and sufficient for visibility graphs of convex pseudo-fans. As a byproduct, we show that we can construct the vertex-edge visibility graph of a convex fan in general position from its vertex visibility graph. In the second major result, we show that we can reduce the reconstruction problem of unimonotone polygons to the same problem for convex fans
De Pietri, Marco. "Development of a Human Unstructured Mesh Model Based on CT Scans for Dose Calculation in Medical Radiotherapy." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.
Lima, Alessandro Alberto de. "Estudo numérico do escoamento ao redor de cilindros flexíveis." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3150/tde-07112011-122650/.
In this work the dynamic response of a high-aspect-ratio flexible cylinder due to vortex shedding is numerically investigated. The model is divided into two-dimensional sections along the riser length. The discrete vortex method (DVM) is employed for the assessment of the hydrodynamic forces acting on these two-dimensional sections. The hydrodynamic sections are solved independently, and the coupling among the sections is taken into account by the solution of the structure in the time domain by the finite element method implemented in the Anflex code (Mourelle et al., 2001). Parallel processing is employed to improve the performance of the method. A master-slave approach via MPI (Message Passing Interface) is used to exploit the parallelism of the present code. The riser sections are equally divided among the nodes of the cluster. Each node solves the hydrodynamic sections assigned to it. The forces acting on the sections are then passed to the master processor, which is responsible for the calculation of the displacement of the whole structure. One of the main contributions of the present work is the possibility of simulating the flow around flexible cylinders in the post-critical regime and around bundles of risers.
Kyffin, J. A. "Establishing species-specific 3D liver microtissues for repeat dose toxicology and advancing in vitro to in vivo translation through computational modelling." Thesis, Liverpool John Moores University, 2018. http://researchonline.ljmu.ac.uk/9707/.
Popov, G. F., S. I. Savan, R. V. Lazurik, and A. V. Pochynok. "Selection of calculation methods for the analysis of absorbed depth-dose distributions of electron beams." Thesis, Sumy State University, 2016. http://essuir.sumdu.edu.ua/handle/123456789/46936.
Carmo, Bruno Souza. "Estudo numérico do escoamento ao redor de cilindros alinhados." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/3/3132/tde-21072005-144943/.
This work deals with the incompressible flow around pairs of rigid and immovable circular cylinders in tandem arrangements. There are two goals in this research: the first one is to find causality relationships between physical characteristics of the flow and the changes that are observed in the forces and in the flow field with the variation of the Reynolds number (Re) and the distance between the bodies; and the second one is to comprehend the mutual influence between three-dimensional structures and interference. The spectral element method was employed to carry out two- and three-dimensional simulations of the flow. The centre-to-centre distance (lcc) of the investigated configurations varies between 1.5 and 8 diameters, and they are compared to the isolated cylinder case. The Re range goes from 160 to 320, covering the transition in the wake. We focused on the small-scale instabilities (modes A and B). Data on the Strouhal number, mean drag coefficient, RMS of the lift coefficient and axial correlation are presented. With the aid of flow visualizations, we propose mechanisms to explain the interference phenomenon, which is reflected in the behaviour of the plotted quantities. The results show that two-dimensional simulations are not sufficient to predict the (Re, lcc) pair corresponding to the drag inversion point. We also verified that, in the cases where lcc is lower than the critical spacing, the transition in the wake happens in a way different from the one observed in the flow around a single cylinder.
Masuero, Joao Ricardo. "Computação paralela na análise de problemas de engenharia utilizando o Método dos Elementos Finitos." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2009. http://hdl.handle.net/10183/16874.
Analysis and development of distributed-memory parallel algorithms for the solution of Solid Mechanics, Fluid Mechanics and Fluid-Structure Interaction problems using the Finite Element Method is the main goal of this work. Two processes for mesh partitioning and task division were developed, based on the Stripwise Partitioning and Recursive Coordinate Bisection methods, but applied not over the mesh geometry but over the resulting system of equations, through a nodal ordering algorithm for system bandwidth minimization. To schedule the communication tasks in scenarios where each processor must exchange data with all others in the cluster, a simple and generic algorithm based on a circular and alternating ordering was developed. The algorithms selected to be parallelized were of iterative type due to their suitability for distributed-memory parallelism. Parallel codes were developed for the Conjugate Gradient Method (for Solid Mechanics analysis), for the explicit one-step scheme of the Taylor-Galerkin method (for transonic and supersonic compressible flow analysis), for the explicit two-step scheme of the Taylor-Galerkin method (for subsonic incompressible flow analysis) and for a Fluid-Structure Interaction algorithm using a coupling model based on a partitioned scheme, in which the explicit two-step Taylor-Galerkin scheme was employed for the fluid and the implicit Newmark algorithm for the structure. Several configurations were tested for three-dimensional problems using tetrahedral and hexahedral elements in uniform and nonuniform clusters and grids, with several mesh sizes, numbers of computers and network speeds.
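Since the abstract lists a parallel Conjugate Gradient solver among the developed codes, a compact serial reference version of the algorithm is sketched below in Python/NumPy for orientation; the distributed version described in the thesis would split the matrix-vector products and dot products across MPI processes, which this sketch deliberately does not attempt.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A (serial reference version)."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # update search direction
        rs_old = rs_new
    return x

# Small self-test on a random symmetric positive-definite system
n = 50
M = np.random.rand(n, n)
A = M @ M.T + n * np.eye(n)
b = np.random.rand(n)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```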
Mostrag-Szlichtyng, A. S. "Development of knowledge within a chemical-toxicological database to formulate novel computational approaches for predicting repeated dose toxicity of cosmetics-related compounds." Thesis, Liverpool John Moores University, 2017. http://researchonline.ljmu.ac.uk/6798/.
Silva, Vinicius Girardi. "Estudo numérico da vibração induzida por vórtices em um corpo cilíndrico." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/3/3150/tde-26062014-112828/.
The objective of this work is to study, through numerical simulations, the fluid-structure interaction of a cylinder which is free to oscillate in the cross-flow direction, aiming to better comprehend a phenomenon that frequently appears in offshore structures, mainly in pipelines for oil extraction called risers. The studied condition is a flow at Reynolds number 10000 around a cylinder with a mass ratio of 3.3 and a damping factor of 0.0026. Under these conditions, multiple models are created to represent the reduced velocity range where the synchronization phenomenon happens. Those parameters were chosen due to the availability of experimental data in the literature, which allows the comparison between simulations and experiments presented at the end of the work. Results such as the mean drag coefficient and the RMS of the lift coefficient are also compared with the experimental data available in the literature, and an analysis of the flow in the wake region is carried out with the intention of identifying the patterns found in this type of case.
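For reference, the dimensionless parameters quoted above are conventionally defined as follows, where U is the free-stream velocity, D the cylinder diameter, f_n the natural frequency of the elastically mounted cylinder, ν the kinematic viscosity, m the oscillating mass and m_d the mass of displaced fluid; the exact normalization adopted in the thesis may differ slightly.

```latex
V_r = \frac{U}{f_n D}, \qquad
m^* = \frac{m}{m_d}, \qquad
\mathrm{Re} = \frac{U D}{\nu}
```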
Isler, João Anderson. "Computational study in fluid mechanics of bio-inspired geometries: constricted channel and paediatric ventricular assist device." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3150/tde-18072018-140712/.
Numerical modelling and simulation are powerful tools for analysis and design, and with the improvement of computational power and numerical methods, they are being applied to complex phenomena and systems. This work shows examples of applications of a sophisticated numerical method, the spectral/hp element method, to the study of the flow inside complex bio-inspired geometries. The two topics investigated are: fluid dynamic instabilities in a constricted channel and the flow inside a paediatric ventricular assist device. The constricted channel is an idealized model of a nasal cavity, which is characterized by complex airway channels, and it also resembles a human artery in the presence of atherosclerotic plaques. The paediatric ventricular assist device is a real device, designed by the Bioengineering research group of the Heart Institute of the University of São Paulo Medical School, which works as a pump that assists the left ventricle of patients awaiting transplantation. Therefore, the goal of this thesis is to contribute to the understanding of flows in biological and bio-inspired geometries using computational tools. Linear and nonlinear stability analyses were carried out for the constricted channel. Three different flow regimes were employed: symmetric steady flow, which is stable for low Reynolds numbers; asymmetric flow, which results from the first bifurcation of the symmetric flow; and pulsatile flow. Direct stability analysis was performed to determine the unstable regions in each flow regime. The physical mechanisms behind the transition process were studied by means of direct numerical simulation in order to characterize the bifurcations. Since the bifurcations had a subcritical behaviour, the relevance of non-normal growth in these flows was assessed. Thus, the dependence on phase, Reynolds number and wavenumber of the three-dimensional mode was extensively investigated in stable regions for the three flow regimes. Convective instabilities were also studied in order to understand the physical mechanisms that drive the optimal modes to their maximum growth, and different convective mechanisms were found. The flow inside the paediatric ventricular assist device was analysed by means of three-dimensional numerical simulations. A computational model based on special boundary conditions was developed to model the pulsatile flow. In this model, the opening and closing of the mitral valve and the diaphragm were represented with specially designed boundary conditions. The driving force and the flow direction of the diaphragm were defined by a velocity distribution on the diaphragm wall, and the opening and closing of the mitral valve were performed by a velocity wave function that goes to zero during the systolic period. Flow patterns, velocity fields and shear stress over time were analysed to assess the device's performance.
Silva, Fernanda Martins da. "Pensamento computacional : uma análise dos documentos oficiais e das questões de Matemática dos vestibulares." Bauru, 2020. http://hdl.handle.net/11449/192267.
Abstract: This dissertation aims to investigate the potential skills of Computational Thinking in the context of Mathematics Education in the area of Mathematics questions regarding the admission exams of the São Paulo state universities, the ENEM and the official documents. In addition, it intends to contribute to the discussion about the role, or not, of Computational Thinking in Basic Education, because many skills are close to mathematical concepts and the process of teaching and learning through Computational Thinking can contribute to the performance of Basic Education students. The importance of seeking an improvement in student performance is based on the assessment carried out by the International Student Assessment Program (PISA 2015), which pointed out that students face great difficulty in issues that consider basic guidelines and formulas, the majority of whom are attending High School and trying to join Higher Education. Based on that, this research explores arguments to justify the inclusion of Computational Thinking skills in Basic Education, specifically in the Mathematics subject. Being a qualitative research, this investigation used the Content Analysis methodology to analyze the official documents as a complement to the potential Computational Thinking skills explored by them. Mathematics questions of the admission exams to join public Higher Education, such as ENEM and the admission exams of state universities of São Paulo, such as USP, UNESP and UNICAMP, for the y... (Complete abstract click electronic access below)
Miranda, Alberto Alexandre Assis. "Grafos pfaffianos e problemas relacionados." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275836.
Abstract: The area of Pfaffian graphs contains many open problems. In this thesis, we solve two problems related to Pfaffian graphs. The first result is a polynomial time algorithm to recognize near-bipartite Pfaffian graphs. Moreover, we extend this algorithm and the characterization of near-bipartite Pfaffian graphs to the class of half-bipartite graphs. The second result is obtaining several basic structural results concerning k-Pfaffian graphs. Using these results, we obtained a counter-example to Norine's conjecture, which states that the Pfaffian number of a graph is always a power of four: we present a graph whose Pfaffian number is 6
Lautenschlager, Willian Wagner. "Um modelo estocástico de simulação da dinâmica dos queratinócitos, melanócitos e melanomas no desenvolvimento dos tumores." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/100/100132/tde-21082017-174520/.
During the last decades, tumor biology research using new techniques of molecular biology has produced a profusion of information that has enabled and motivated the development of new mathematical models dedicated to analyzing various aspects of the growth and proliferation of cell populations. Some of these models have been devoted to the description and analysis of the steady state of the development of a cell population under chemical conditions that, in theory, promote the acceleration or deceleration of tumor cell population growth. However, these studies have not yet analyzed the temporal dynamics of the growth of a tumor cell population. One of the difficulties is establishing the interactions between cells of multiple types that describe this dynamics. Our work fills this gap: this dissertation presents the model we developed to simulate the growth and proliferation dynamics of melanoma (a cancer of low incidence but extremely high lethality) and the results obtained through simulations of this computational model.
Jönsson, Carl Axel, and Emil Tarukoski. "How does an appointed CEO influence the stock price? : A Multiple Regression Approach." Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209788.
When a listed company changes its CEO, the stock market will react in a positive or negative way. This thesis uses multiple regression analysis to examine which characteristics of the new CEO can trigger positive or negative reactions from the stock market, both over one day and over one year. The mathematical results are compared with professional opinions on what defines an optimal CEO. The inefficient properties of the financial market combined with the complexity of stocks make the mathematical results largely insignificant. The only correlations found were a positive correlation for highly paid CEOs and a negative correlation for internally recruited CEOs. The thesis concludes that an optimal CEO is defined by their leadership abilities and not by their personal background.
Romeiro, Lauro Correa. "Estudo termodinâmico da influência dos microelementos (V, Nb, Al e Ti) no crescimento dos grãos em aços forjados a quente." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/75882.
Thermodynamic studies of nitrides, carbides and carbonitrides of the microalloying elements V, Nb, Al and Ti were performed using the FactSage program databases, in order to verify their potential for grain-growth inhibition over a range of temperatures in hot-forged steels. A procedure to verify the solubility of compounds of these elements was developed by means of FactSage, and more complex cases less discussed in the literature were studied, such as the combined effect of two microalloying elements in steel and the behavior of carbonitrides at high temperatures with varying carbon and nitrogen contents. The studies demonstrate that titanium nitrides have high stability in austenite at elevated temperatures; therefore they are potentially highly effective in grain size control. Niobium carbides also present effective stability at high temperatures, as do the carbonitrides of these two elements. A good correlation was verified between the solubility products obtained by experimental methods and by the thermodynamic calculations presented in the literature. The software also makes a valuable contribution to studies of the precipitation and dissolution of nitrides, carbides and carbonitrides in austenite, thus aiding the selection of steels and/or the modification of the chemical composition of standardized steels by means of microalloying additions, and providing a suitable choice of carbon and nitrogen content when one seeks to control grain growth at elevated temperatures. Such results can help indirectly in studies aimed at eliminating conventional heat treatment methods by using microalloyed steels with controlled cooling and low-alloy steels quenched directly after forging, reducing overall costs. Besides these considerations, the results presented and the procedure developed can be useful in other areas where grain growth is an important factor, such as hot rolling and carburizing at high temperature.
Vatanabe, Sandro Luis. "Estudo de viabilidade de atuadores piezelétricos bilaminares para bombeamento de líquidos." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-09022009-181845/.
Flow pumps, in addition to traditional applications in Engineering, are important tools in areas such as Bioengineering, applied to blood pumping and the dosage of medicine and chemical reagents, and in the field of thermal management solutions for electronic devices. Many of the new principles in flow pump development are based on the use of piezoelectric actuators. These actuators present some advantages over other commonly applied types, for example, miniaturization potential, lower noise generation and fewer moving parts. Flow pumps based on undulatory and oscillatory movements, such as fish swimming, stand out among the various types of piezoelectric flow pumps. It is well known that fish swimming does not cause the death of microorganisms, which makes this principle applicable in Biotechnology, for example. Thus, the objective of this work is to study parallel-cascade configurations of bimorph piezoelectric actuators for liquid pumping based on the oscillatory principle, in order to obtain higher flow rates and pressures. The scope of this work includes structural analyses of bimorph piezoelectric actuators, fluid flow simulations, and the construction of prototypes for result validation. First, the behavior of a single bimorph piezoelectric actuator oscillating in a viscous fluid (water) is investigated to better understand the working principle used in this work. The study of a single piezoelectric actuator was used as a reference for the other proposed parallel-cascade configurations of actuators. It is expected that parallel actuators achieve higher flow rates, while actuators in series achieve higher pressures. The methods employed are presented and the obtained results are discussed, analyzing the principle and the related physical phenomena.
Peres, Jose Carlos Gonçalves. "Análise de microrreatores usando a fluidodinâmica computacional." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3137/tde-16072018-152921/.
Miniaturized reaction vessels are drawing the attention of the chemical industry because they promote better mass and heat transfer and also enhance process safety. To understand the influence of each element of a microreactor on the velocity field of the equipment and the corresponding mixing processes, several microdevices were simulated using computational fluid dynamics: an assembly of two channels, a T-junction, 30 channels in a serpentine assembly and a full microreactor. The cross section of the devices is 100 - 300 µm wide and the length of the channels varies between 3000 and 25190 µm. Computational domains were discretized using hexahedral meshes and steady-state velocity fields were computed considering laminar flow for flow rates between 12.5 and 2000 µL min-1. Mixing was evaluated by injecting inert tracers and monitoring their distribution. Simulations were validated against experimental micro particle image velocimetry data. Velocities throughout the devices are relatively high despite the small dimensions of the cross sections and the small flow rates. Experimental images of the flow elucidated the parabolic shape of the velocity profile and its distortion on curved segments caused by centrifugal forces, matching predictions of the computational model. Tracer maps indicated that secondary flows play an important role in mixing streams perpendicular to the main flow direction. This study emphasizes the use of computational fluid dynamics as a tool for understanding the flow throughout microdevices and supporting their design.
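As a rough order-of-magnitude check on the statement that velocities are comparatively high in spite of the small cross sections, assume water flowing in a 200 µm × 200 µm channel at 1000 µL/min (both values chosen for illustration from the ranges quoted above):

```latex
% Illustrative estimate for water in a 200 um x 200 um channel at 1000 uL/min:
u = \frac{Q}{A} = \frac{1.67\times10^{-8}\ \mathrm{m^3/s}}{(2\times10^{-4}\ \mathrm{m})^2} \approx 0.42\ \mathrm{m/s},
\qquad
\mathrm{Re} = \frac{\rho\, u\, D_h}{\mu} \approx \frac{1000 \cdot 0.42 \cdot 2\times10^{-4}}{1\times10^{-3}} \approx 83
```

so the flow stays well inside the laminar regime assumed in the simulations, even though the velocity itself is appreciable.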
Leão, Cedric Rocha. "Propriedades eletrônicas de nanofios semicondutores." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-19112008-104834/.
We have performed an extensive study on the electronic and structural properties of silicon nanowires (NWs) using parameter-free computational simulations (DFT). We show that in Si NWs, surfaces whose atoms are connected to inner ones perpendicularly to the wire axes become electronically inactive at the band edges. However, when these bonds are oriented along the growth axes the surface states contribute significantly to the formation of the HOMO and LUMO, even for relatively large wires (diameters > 30 Å). This is the dimension of the smallest experimental as-grown wires. These effects are caused by the fact that the electronic wave function is confined in the two directions perpendicular to the wire axis but not along it. Therefore, these conclusions can be extended to other types of semiconductor NWs, grown along different directions, with different facets and even surface reconstructions. These results can be used to guide actual implementations of NW-based chemical and biological sensors, in a fashion that is now being followed by experimentalists. Following this work, we have investigated the electronic transport in these NWs with an NH2 radical adsorbed on different types of facets. These investigations not only confirm our previous conclusions but also indicate different effects associated with impurities adsorbed on distinct active surfaces. In some cases, the impurity level induces scattering centres that reduce the transport in a uniform way, whereas on other types of facets the decrease in the electronic transport is sharp, suggesting the occurrence of Fano resonance.
Lamarre, Greg Brian. "Experimental and computational determination of radiation dose rates in the SLOWPOKE-2 research reactor at the Royal Military College of Canada/Collège militaire royal du Canada." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/Mq44912.pdf.
Bispo, Danilo Gustavo. "Dos fundamentos da matemática ao surgimento da teoria da computação por Alan Turing." Pontifícia Universidade Católica de São Paulo, 2013. https://tede2.pucsp.br/handle/handle/13286.
In this paper, in order to contextualize the influences involved in the emergence of Alan Turing's theory of computability, I initially present a history of some problems that mobilized mathematicians in the early twentieth century. Chapter 1 gives an overview of the emergence of the formalist ideology conceived by the mathematician David Hilbert in the early twentieth century. The aim of formalism was to ground elementary mathematics on the axiomatic method, eliminating contradictions and paradoxes from its theories. Although Hilbert did not achieve full success with his program, it is shown how his ideas influenced the development of Turing's theory of computation. The theory that Turing proposes concerns a decision procedure, a method that analyzes any arbitrary formula of logic and determines whether it is provable or not. Turing proves that no general decision procedure can exist. The primary source used is the paper On Computable Numbers, with an Application to the Entscheidungsproblem. Chapter 2 presents the main sections of Turing's paper, exploring some of its concepts. The work concludes with a critique of this classic text in the history of mathematics, based on the historiographical proposals presented in the first chapter.
Andrade, Luiz Fernando de Souza. "Animação de jatos oscilantes em fluidos viscosos usando SPH em GPU." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-08082014-113954/.
In recent years, the study of methods for animating fluid flow has been an area of intense research in Computer Graphics. The main objective of this project is to develop new techniques based on the CUDA GPGPU architecture to simulate the flow of non-Newtonian fluids, such as viscoelastic and viscoplastic fluids. Instead of traditional mesh-based methods (finite differences and finite elements), these techniques are based on a Lagrangian discretization of the governing equations of these fluids through the mesh-free method known as SPH (Smoothed Particle Hydrodynamics).
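For readers unfamiliar with SPH, the heart of the method is a kernel-weighted summation over neighbouring particles; the density estimate below is the generic SPH interpolation formula (m_j is the particle mass and W a smoothing kernel of support radius h), not a detail specific to this dissertation.

```latex
\rho_i = \sum_j m_j \, W\!\left(\lVert \mathbf{r}_i - \mathbf{r}_j \rVert,\, h\right)
```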
Moraes, Herickson Faria de. "Análise numérica do escoamento da mistura de ar e EGR em coletores de admissão com misturador do tipo pré-câmara e tipo anel." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/263398.
Abstract: The standards which define the limits of pollutant emissions produced by automotive vehicles have become increasingly restrictive, especially for diesel engines. To meet these standards, automotive companies have developed several technologies to reduce and control the pollutant emissions resulting from the combustion process in engines. Among these technologies is the exhaust gas recirculation system, usually known in the industry as the EGR system. This system is very functional; however, the success of its operation is quite sensitive to the boundary conditions that surround its operation. One of these boundary conditions is the homogeneity of the air and EGR gas mixture provided to the engine cylinders. The more homogeneous this mixture is, the more efficient the EGR system operation will be and the lower the pollutant emissions generated by the engine. This investigation aimed to numerically evaluate two types of devices responsible for the mixing process, the pre-chamber mixer and the ring mixer. Computational Fluid Dynamics (CFD) techniques were used to develop the analyses. The computational code used is FLUENT® 6.3.26, which is based on the finite volume method to solve the transport equations of fluid mechanics. The boundary conditions of the problems were defined based on the test cycles to which the engine is submitted for evaluation by the regulatory agencies. As steady-state conditions were adopted for the numerical analyses, qualitative results of the flow and of the mixing process of air and EGR gas are presented (in the real phenomenon the boundary conditions vary according to the engine operation, i.e., a transient regime). From the analysis of the results it is possible to clearly and objectively assess the efficiency of each device and identify which one proved to be the best solution for the problem.
Master's
Engines
Master in Automotive Engineering
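As an illustration of how the mixture homogeneity discussed in the abstract above can be quantified, the sketch below computes an area-weighted uniformity index over a cross-section upstream of the intake ports; this is a common choice in mixer studies, not necessarily the metric adopted in the thesis, and the sample values are hypothetical.

import numpy as np

def uniformity_index(mass_fraction, area):
    """Area-weighted uniformity index of a scalar (e.g. EGR mass fraction)
    on the cells of a cross-section plane: 1.0 means perfectly mixed."""
    y = np.asarray(mass_fraction, dtype=float)
    a = np.asarray(area, dtype=float)
    y_mean = np.sum(y * a) / np.sum(a)        # area-weighted mean fraction
    spread = np.sum(np.abs(y - y_mean) * a)   # area-weighted absolute deviation
    return 1.0 - spread / (2.0 * y_mean * np.sum(a))

# Hypothetical cell values exported from a plane just upstream of the cylinders
y_egr = [0.18, 0.22, 0.20, 0.15, 0.25]        # EGR mass fraction per cell
cell_area = [1.0, 1.0, 1.2, 0.8, 1.0]         # cell face areas (arbitrary units)
print(f"Uniformity index: {uniformity_index(y_egr, cell_area):.3f}")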
Santos, Ivanildo Antonio dos. "Avaliação dos diagramas de fase do sistema LiF-GdF3 - LuF3 utilizando termodinâmica computacional." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/85/85134/tde-08032013-151744/.
Full textIn this work, a study was carried out that allowed the thermodynamic optimization of the binary sections belonging to the ternary phase diagram of the LiF-GdF3-LuF3 system; the FactSage software was used for the computational simulation. The melting behavior of mixtures of these compounds was thus elucidated, which represents an innovative contribution to the knowledge of the physical and chemical properties of these materials. In particular, the composition ranges in which the solid solutions LiGdxLu1-xF4 and GdxLu1-xF3 can be obtained directly from the liquid phase were determined. The three binary sections, LiF-GdF3, LiF-LuF3 and GdF3-LuF3, were evaluated using differential scanning calorimetry to obtain more accurate temperature-versus-composition data, since it was possible to minimize contamination of the samples by oxygen compounds. The heat capacity and other calorimetric data were determined experimentally and compared with those cited in the literature. The excess Gibbs free energy terms for the solution phases, which describe the non-ideal interaction effects between the two fluorides in these phases, were expressed by the Redlich-Kister polynomial model. Finally, the solidification path in the ternary phase diagram LiF-GdF3-LuF3 could be extrapolated according to the Kohler-Toop formalism. Thus, for the first time, the interaction among the compounds LiF, GdF3 and LuF3 in the ternary system was determined.
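For context, the Redlich-Kister expansion mentioned in the abstract above expresses the excess Gibbs energy of a binary solution phase as a polynomial in the composition difference; the generic form is shown below (the optimized interaction parameters for the LiF-GdF3-LuF3 sections are those reported in the thesis and are not reproduced here):

\[ G^{\mathrm{ex}} = x_A\,x_B \sum_{k=0}^{n} L_k\,(x_A - x_B)^k, \qquad L_k = a_k + b_k\,T, \]

where x_A and x_B are the mole fractions of the two end members and the L_k are temperature-dependent interaction coefficients fitted during the optimization.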
Carvalho, Heber Lombardi de. "Método de análise para a coordenação dos processos de produção sob a ótica de redes de inovação colaborativas apoiado por agente inteligente evolutivo." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/18/18156/tde-15012013-103000/.
Full textIn the context of industrial engineering, this research examines the coordination of production processes. The main concerns of planning managers and controllers are demand oscillation and deviations from budgeted production costs. There is a gap between two processes: the controlling analysis, aimed at actual costs, and the planning analysis, aimed at demand. The motivation of this work is the asymmetric use of the same data set from different perspectives but with similar goals. This makes it possible to formulate the hypothesis that structured data can be analysed on the conceptual basis used to study collaborative networks. The goal is to establish a method of analysis for the coordination of production processes, systematized from the perspective of collaborative innovation networks, in which evolutionary concepts are combined with an intelligent-agent application. The literature review covers the coordination of production processes, innovation concepts and production function concepts. The research method is designed to relate variables belonging to internal processes to external-process variables of the principal network node, an approach carried out on the analytical basis of network theory, which constitutes an innovation of this work. A commercial technological application is not enough to mine data sets from such a dynamic, change-oriented environment. The DAMICORE algorithm, built on evolutionary concepts from biology, can find correlated nodes, validated with the field data. The new method of analysis for the coordination of production processes is adjusted in a pilot project and then replicated in twenty-one networks, with remarkable results when compared to the current method. The research creates a new paradigm for process analysis and demonstrates the representational and associative power of network process variables when they rest on a conceptual basis validated by experts.
Ribeiro, Gabriel Henrique de Souza. "Desenvolvimento de ferramentas computacionais para a simulação do fenômeno de cravação de estacas torpedos pelo método de partículas Moving Particle Semi-implicit (MPS)." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3135/tde-07032019-114456/.
Full textThis work aims to develop computational tools to simulate and analyse torpedo anchor penetration into marine soil. The approach is based on the Moving Particle Semi-Implicit (MPS) method. Because it is a meshless method, it is extremely flexible for modelling fluid-solid interaction with fragmentation or merging of the free surface and large displacements or deformations of solids, phenomena present in the torpedo anchor impact. Two challenges were identified: the modelling of soils as non-Newtonian fluids and the determination of the viscous drag on solid surfaces. The non-Newtonian fluid modelling was based on the Power Law, Bingham and Herschel-Bulkley models. The viscous drag was evaluated by determining the velocity gradient in the wall-normal direction through polynomial regression over the fluid particles near the solid wall. For simplicity, this work adopts the hypothesis that the velocity variation in the wall-tangential direction is much smaller than its variation in the normal direction. The proposed technique, as well as the flow of non-Newtonian fluids, was validated by comparing the results of flow simulations with predefined geometries against the expected analytical solutions. As an example of the application of the computational tools developed, a simplified case of torpedo penetration was simulated, evaluating the anchor displacement and the shear stresses to which it is subjected.
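As a reference for the non-Newtonian soil models named in the abstract above, the Herschel-Bulkley constitutive law, which contains the Bingham and Power Law models as special cases, can be written as:

\[ \tau = \tau_0 + K\,\dot{\gamma}^{\,n} \quad (\tau > \tau_0), \qquad \mu_{\mathrm{app}} = \frac{\tau_0}{\dot{\gamma}} + K\,\dot{\gamma}^{\,n-1}, \]

where \tau_0 is the yield stress, K the consistency index, n the flow behaviour index and \dot{\gamma} the shear rate; n = 1 recovers the Bingham model and \tau_0 = 0 the Power Law model.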
Grilo, Alex Bredariol 1987. "Computação quântica e teoria de computação." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275508.
Full textDissertation (Master's) - Universidade Estadual de Campinas, Instituto de Computação
Resumo: Quantum Computing is a relatively recent and little-known topic, especially within Computer Science. Its study arose from physicists' attempts to simulate systems governed by Quantum Mechanics on classical computers, which is conjectured to be infeasible. A new computational model that uses the quantum structure of matter to compute was therefore theorized to overcome this limitation. The main objective of this work is to study the influence of Quantum Computing on the Theory of Computation. To that end, the basics of Quantum Mechanics are first presented in a language aimed at theoretical computer scientists with no previous knowledge of the area, removing the initial barrier to the subject. Next, innovations in the Theory of Computation originating from Quantum Computing are presented. We begin with the main quantum algorithms developed to date, which were the first steps towards demonstrating the possible computational superiority of the new model. Among these algorithms we present the famous Shor algorithm, which factors numbers in polynomial time. In addition, more advanced and current topics in quantum computability and complexity were studied. Regarding quantum automata, aspects of a model that mixes classical and quantum states were studied, focusing on comparing its computational power with that of classical finite automata. From the point of view of complexity classes, we address the question of whether, for languages in the class QMA, the quantum analogue of NP, zero error probability can be achieved in the acceptance of positive instances.
Abstract: Quantum Computing is a relatively new area and not yet well known, particularly among computer scientists. It emerged as physicists tried to simulate quantum systems efficiently with classical computers, which is conjectured to be impossible. A new computational model that uses the quantum structure of matter to perform computations was therefore theorized. In this work we study the influences of Quantum Computing on Theoretical Computer Science. To achieve this goal, we start by presenting the basics of Quantum Computing to Theoretical Computer Science readers with no previous knowledge of the area, removing any initial barriers to a clean understanding of the topic. We then show innovations in Theoretical Computer Science introduced by Quantum Computing, starting with the main quantum algorithms, which exemplify the advantages of the new computational model. Among these algorithms we present the Shor algorithm, which factors numbers in polynomial time. We follow with more advanced topics in quantum computability and complexity. We study quantum finite automaton models that work with quantum and classical states, focusing on comparing their computational power with that of deterministic finite automata. In complexity theory, we study the question of whether, for languages in QMA, the quantum analogue of NP, zero error probability can be achieved on yes-instances.
Master's
Computer Science
Master in Computer Science
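The abstract above cites Shor's factoring algorithm; the sketch below shows only its classical number-theoretic skeleton, with the order-finding step (the part a quantum computer accelerates) replaced by brute force, so it is illustrative rather than efficient.

from math import gcd
from random import randrange

def order(a, n):
    """Multiplicative order of a modulo n, found by brute force here;
    in Shor's algorithm this is what the quantum subroutine computes."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, max_tries=50):
    """Classical skeleton of Shor's factoring algorithm for small composite n."""
    for _ in range(max_tries):
        a = randrange(2, n)
        d = gcd(a, n)
        if d > 1:                 # lucky pick already shares a factor with n
            return d, n // d
        r = order(a, n)
        if r % 2:                 # an even order is required
            continue
        y = pow(a, r // 2, n)
        if y == n - 1:            # a^(r/2) = -1 (mod n) gives no information
            continue
        p = gcd(y - 1, n)
        if 1 < p < n:
            return p, n // p
    return None

print(shor_classical(15))         # e.g. (3, 5)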
Costa, Isis Santos. "Modelagem do escoamento em reator catalítico de membrana cerâmica para hidrogenação parcial trifásica." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3137/tde-20032012-160003/.
Full textThis study focused on the development of a model for the flow in a membrane reactor of the active-contactor type, approached through computational fluid dynamics (CFD) using the commercial code ANSYS FLUENT. The model included the entire membrane module, consisting of a tubular membrane and a metal shell. The model reaction studied was the partial hydrogenation of 1,5-cyclooctadiene, initiated by pumping the reaction mixture, dissolved in n-heptane, through the membrane from the ends of the tube. As catalyst, the study considered Pd nanoparticles impregnated in the membrane. The porous medium was approximated by a granular bed represented by the Ergun equation, having as parameters the porosity and the grain size of the alpha-Al2O3 membrane. The grain size was taken as equivalent to the particle diameter determined with the open-source stereology software ImageJ, from the US National Institutes of Health. The turbulence model used was the RNG k-epsilon. A sensitivity study included flow simulations neglecting and including reactions, velocity variation, changes to the flow outlet, and activation of the turbulence model in the porous medium. Simulations of structural defects in the membrane were performed, defining regions of altered porosity with and without loss of azimuthal uniformity. The conclusion was that the presence of structural defects affecting the azimuthal uniformity of the membrane can result in marked alteration of the flow regime in CMRs.
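The Ergun equation used for the porous membrane in the abstract above relates the pressure drop per unit length to the superficial velocity; a standard form, and the porous-media resistance coefficients usually derived from it for FLUENT-type solvers, are shown below (the specific porosity and grain size values are those determined in the thesis):

\[ \frac{\Delta p}{L} = \frac{150\,\mu\,(1-\varepsilon)^2}{\varepsilon^3\,d_p^2}\,v \;+\; \frac{1.75\,\rho\,(1-\varepsilon)}{\varepsilon^3\,d_p}\,v^2, \]

from which the viscous and inertial resistance inputs are commonly taken as \( 1/\alpha = 150\,(1-\varepsilon)^2/(\varepsilon^3 d_p^2) \) and \( C_2 = 3.5\,(1-\varepsilon)/(\varepsilon^3 d_p) \), with \varepsilon the porosity, d_p the grain diameter and v the superficial velocity.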
Akwa, João Vicente. "Análise aerodinâmica de turbinas eólicas Savonius empregando dinâmica dos fluidos computacional." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2010. http://hdl.handle.net/10183/26532.
Full textThis research work presents a discussion of basic concepts, the methodology and the results of numerical simulations, based on the Finite Volume Method, of the air flow through several configurations of Savonius wind turbines, with and without stators, both in operation and under static conditions such as those found at self-start. Results for different computational domains, as well as alternative spatial and temporal discretizations, are compared in order to show their influence on the values obtained from the computational analysis of the turbines under study. In the numerical simulations, performed with the commercial software Star-CCM+, the continuity equation and the Reynolds-Averaged Navier-Stokes equations were solved together with the equations of an appropriate turbulence model, so that the pressure and velocity fields could be found. The calculations used a domain containing a moving-mesh region, in which the rotor was inserted. In each simulation, the rotational rate of the moving-mesh region was specified so as to vary the tip speed ratio of the rotor. By integrating the forces arising from pressure gradients and from viscous friction on the rotor blades, the moment coefficient was obtained in each simulation. The moment and the forces acting on the rotor were obtained similarly. With these data, other parameters, such as the power and the power coefficient of the wind rotor, could be obtained. Analyses of the principal performance parameters of the Savonius wind rotor were performed and indicated good agreement with experimental results and numerical simulations performed by other authors. The simulation results are quite representative of the phenomenon analyzed.
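The performance parameters extracted in the abstract above are conventionally nondimensionalized as follows (R is the rotor radius, A the swept area, V_0 the free-stream wind speed, T the shaft moment and \omega the rotational speed):

\[ \lambda = \frac{\omega R}{V_0}, \qquad C_T = \frac{T}{\tfrac{1}{2}\,\rho\,A\,R\,V_0^2}, \qquad C_P = \frac{T\,\omega}{\tfrac{1}{2}\,\rho\,A\,V_0^3} = \lambda\,C_T. \]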
Nogueira, Leon White. "Otimização acústica e análise numérica do escoamento ao redor de um conjunto cilindro-placa separadora." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/3/3150/tde-13072016-150044/.
Full textComputational aeroacoustics simulations require a considerable amount of time, which makes the comparison of a large number of different geometric designs a difficult task. The goal of the present study is to provide a suitable methodology for aeroacoustic optimization. By means of numerical analyses using computational fluid dynamics tools, the application of a detached splitter plate as a passive control method for the turbulent wake of a circular cylinder was investigated. The noise radiated by the interaction between the flow and both bodies was evaluated using computational aeroacoustics tools based on the Ffowcs-Williams and Hawkings method. Various design optimization methodologies were applied to this flow in order to reach a possible optimal configuration, i.e., one capable of reducing the far-field noise level without increasing the aerodynamic forces. Using a multidisciplinary optimization tool, it was possible to evaluate the behavior of heuristic optimization algorithms and the major advantage of algorithms based on response surface methods when applied to a nonlinear aeroacoustic problem, since they require a smaller number of calculated designs to reach the optimal configuration. In addition, it was possible to identify and group the outcomes into 5 clusters based on their geometric parameters, overall sound pressure level and drag coefficient, confirming the efficiency of long detached splitter plates placed close to the cylinder in stabilizing the turbulent wake, whereas positioning the splitter plate at a distance larger than a critical gap increased the overall sound pressure level radiated, due to the formation of vortices in the gap.
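The overall sound pressure level used as an objective in the abstract above follows the usual definition, computed from the root-mean-square acoustic pressure predicted by the Ffowcs-Williams and Hawkings analogy at the far-field observer:

\[ \mathrm{OASPL} = 20\,\log_{10}\!\left(\frac{p_{\mathrm{rms}}}{p_{\mathrm{ref}}}\right)\ \mathrm{dB}, \qquad p_{\mathrm{ref}} = 20\ \mu\mathrm{Pa}. \]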
Andrade, Thiago Menezes de. "Compartimentação de edifícios para a segurança contra incêndio." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3146/tde-24092018-090530/.
Full textMethods for designing a building with adequate fire safety are poorly studied and applied in Brazil. This lack of study is even more evident for compartmentation, an underdeveloped topic not yet standardized by ABNT. Compartmentation, especially vertical compartmentation, is critical to life safety, since it minimizes the spread of fire between building floors, and to the validity of standardized design procedures for structures in fire, because vertical compartmentation is an important hypothesis of those design methods. Currently, compartmentation requirements are present only in the Technical Instructions of the state fire departments, varying from state to state. This work presents a study on compartmentation, its influence on the fire safety of buildings, and a comparison of how countries on four different continents approach this subject. After a comparative study of the approaches and standardization in countries such as Brazil, Portugal, England, Hong Kong and the United States, computer simulations based on the finite volume method and computational fluid dynamics were carried out in order to assess in detail the influence of compartmentation requirements on real fire spread. Ultimately, the main focus of this work is to demonstrate that Computational Fluid Dynamics (CFD) can be used to assist the study of fire and to allow a more in-depth analysis of the subject.
Reiser, Renata Hax Sander. "A categoria computável dos espaços coerentes gerados por conjuntos básicos com aplicação em análise real." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1997. http://hdl.handle.net/10183/18235.
Full textIn this work the Coherence Spaces Generated by Basic Sets with additional structure are studied. By additional structure one means an algebraic, topological and logical structure with a pointwise order and a measure system. These spaces, denoted by A, form a subcategory of the category of Coherence Spaces, whose objects, ordered by inclusion, are coherent sets formed by the induced web coherence relation. The morphisms of this category are the functions of objects generated by basic functions. The algebraic and relational properties of these basic functions - external to the construction process - are propagated and strongly influence the verification of the internal properties of the functions of objects. However, this research is not a categorical study. The methodology uses the simple and intuitive language of Set Theory, which allows the visualization and analysis of the existing relationships, not only among the morphisms of the total and partial objects of this category, but also among their structures or pre-structures, represented by the token functions and basic functions. It is shown that the functions of objects are total and well defined; they are also monotone and continuous. However, the stability and linearity of the functions of objects depend on whether the basic functions are injective. One of the most important features of this construction is the development of a linear representation system for the locally linear functions, through the definition of a coherence space A*, generated by the subweb product. In this space the functions of objects are linear and therefore are morphisms of the category of Coherence Spaces. Moreover, it is proved that A* is isomorphic to the coherence space generated by the directed product of the subspaces, denoted by ΠA. Then, for each transformation defined for a structured data type over a denumerable basic set, there exists a related linear representation. The existence of a linear representation for elementary functions guarantees the existence of a linear representation for other derived functions. As an application of this construction, the Coherence Space of Rational Intervals, denoted by IIQ, is introduced. In order to show an application compatible with a computational approach, especially for real analysis, each elementary real function is identified with a linear function of objects, defined from the related elementary rational function. Among the functions analyzed are the exponential, logarithmic, power, extended power, root, trigonometric (sine, cosine and tangent, and their related inverses) and polynomial functions. It is proved that all of these functions of objects are total and well defined. Moreover, they either belong to the category COPS-LIN of coherence spaces or have a linear representation in that category. It is also possible to define a related projection pair for each one of them.
Hodapp, Maximilian Joachim. "Modelagem e simulação de um leito fluidizado : um estudo comparativo." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/266452.
Full textDissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Química
Resumo: The objective of this work was a comparative study of two modeling approaches for representing a gas-solid flow. In the first, a drag correlation recently presented in the literature, based on lattice Boltzmann simulations and applied to gas-solid flows, was evaluated. In the second, the effect of varying the specularity coefficient in the wall boundary condition on the velocity profiles of the particulate phase was observed. Great attention has been given to the mathematical modeling of multiphase flows, especially gas-solid flows, since several industrial processes rely on this operation. However, analytical models that include all the mass, energy and momentum transfer phenomena are not available, mainly because of the great complexity of the phenomena involved. In this respect, computational fluid dynamics (CFD) has proved to be a good alternative for the study of complex systems, with several studies applying this technique, not only in engineering, published every year. To validate the results obtained by this numerical method, a laboratory-scale case study was chosen. The commercial software packages ANSYS CFX 10 and FLUENT 6.3 were used for problem definition, solution and post-processing. The results obtained were compared with the numerical data of a Master's work from the PQGe group, as well as with experimental data from the literature. The results for the first approach showed no improvement over other models of inter- and intra-particle forces, besides requiring longer computational time. The second approach yielded adequate values for the coefficient studied.
Abstract: The aim of this work was a comparative study of two different modeling approaches to represent a gas-solid flow. The first approach consists of a new drag correlation presented in the literature. This relation was obtained through lattice Boltzmann simulations of gas-solid flow and thus does not depend on empirical data. The second examined the effects of varying the specularity coefficient in the wall boundary condition. Multiphase flow modeling is gathering great attention, especially for gas-solid flows, due to its importance in industrial processes. However, analytical models that take into account the mass, momentum and energy transfers are not available, mainly because of the complexities involved in such systems. Computational Fluid Dynamics (CFD) has therefore proved to be a viable alternative, with a large number of scientific works published in recent years. In order to validate the results, a comparison with other simulations using different modeling, done by another member of the PQGe laboratory, and with experimental data was carried out. The commercial software packages ANSYS CFX 10 and FLUENT 6.3 were used to define and numerically solve the problem, as well as to post-process the results. For the first approach, the comparison showed that the studied drag correlation gave no improvement over the other two models analyzed. A longer computational time was also required, which cannot be ignored as an important parameter in CFD simulations. As for the second approach, it was possible to obtain adequate values for the specularity coefficient.
Master's
Chemical Process Development
Master in Chemical Engineering
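For readers unfamiliar with the specularity coefficient varied in the second approach of the abstract above: in Johnson-and-Jackson-type partial-slip wall conditions for the solid phase, the wall shear stress scales with the specularity coefficient \phi and the particle slip velocity, in a form commonly cited as follows (quoted here only as an indicative expression; the exact prefactor should be checked against the solver documentation):

\[ \vec{\tau}_{s,w} = -\,\frac{\sqrt{3}\,\pi}{6}\,\phi\,\frac{\alpha_s}{\alpha_{s,\max}}\,\rho_s\,g_0\,\sqrt{\Theta_s}\;\vec{u}_{s,\mathrm{slip}}, \]

so that \phi = 0 corresponds to free slip and larger \phi brings the particulate phase closer to a no-slip condition at the wall.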
Almeida, André Silva. "Uma análise dos leilões de partilha de produção do pré-sal através de simulação computacional." reponame:Repositório Institucional do FGV, 2013. http://hdl.handle.net/10438/11369.
Full text
This work considers the application of computer simulation as a method of deepening the study of auction mechanisms applied to the allocation of oil exploitation rights in the pre-salt layer. The pre-salt layer is located off the Brazilian coast and presents great potential in terms of barrels of oil equivalent. Based on experimental data, the bid function was estimated as an exponential function and applied to the computationally created participants. Considering all features and parameters of the experiments, the simulation makes it possible to reproduce the auction model without incurring the cost of running new auction sessions with real participants. The auction models studied were the first-price sealed-bid auction and the second-price sealed-bid auction. The results show that first-price sealed-bid auctions are less risky than second-price sealed-bid auctions; the Revenue Equivalence Principle holds in symmetric auctions; observed efficiency is lower in asymmetric auctions; compared with the first-price auction, the second-price auction presents a tradeoff between efficiency and government revenue; and, considering participant learning, no significant changes in the analyzed statistics were observed as participants became more experienced.
This work consists of the application of computer simulation as a method of deepening the study of auction mechanisms applied to the allocation of exploration rights for the oil reserves of the pre-salt layer. The pre-salt layer is located off the Brazilian coast and presents great potential in terms of oil and gas reserves. The bid function applied to the computationally created participants was estimated from experimental data and follows an exponential function. The simulation makes it possible to reproduce the auction model, considering all the characteristics and parameters of the experiments, without incurring the cost of running new auction sessions with real participants. The auctions studied were the first-price private-values auction and the second-price private-values auction. The results show that the first-price private-values auction is less risky than the second-price private-values auction; under symmetry, the Revenue Equivalence Principle holds; observed efficiency is lower in asymmetric auctions; compared with the first-price auction, the second-price auction presents a tradeoff between efficiency and government revenue; and, considering participant learning, no significant changes in the analyzed statistics are observed as participants become more experienced.
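A minimal sketch of the kind of Monte Carlo comparison described above, assuming a textbook symmetric setting with uniform private values and the risk-neutral equilibrium bid (not the exponential bid function estimated experimentally in the thesis):

import numpy as np

rng = np.random.default_rng(0)

def simulate(n_bidders=4, n_auctions=20_000):
    """Monte Carlo revenue comparison of first- and second-price sealed-bid
    auctions with symmetric bidders and private values drawn from U(0, 1)."""
    values = rng.uniform(0.0, 1.0, size=(n_auctions, n_bidders))
    # First price: symmetric risk-neutral equilibrium bid b(v) = v (n - 1) / n
    fp_revenue = (values * (n_bidders - 1) / n_bidders).max(axis=1)
    # Second price: truthful bidding, the winner pays the second-highest value
    sp_revenue = np.sort(values, axis=1)[:, -2]
    return fp_revenue, sp_revenue

fp, sp = simulate()
print(f"First price : mean revenue {fp.mean():.4f}, std {fp.std():.4f}")
print(f"Second price: mean revenue {sp.mean():.4f}, std {sp.std():.4f}")
# The means should be close (revenue equivalence), while the second-price
# revenue is more dispersed, consistent with it being the riskier format.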