Dissertations / Theses on the topic 'Software reconstruction'

Consult the top 50 dissertations / theses for your research on the topic 'Software reconstruction.'

1

Knodel, Jens. "Process models for the reconstruction of software architecture views." [S.l. : s.n.], 2002. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10252225.

2

Collins, Anthony Leslie. "The tomographic reconstruction of holographic interferograms." Thesis, City University London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287671.

3

Strother, Philip David. "Design and application of the reconstruction software for the BaBar calorimeter." Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314229.

4

Taylor, Ian James. "Development of T2K 280m near detector software for muon and photon reconstruction." Thesis, Imperial College London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.505000.

5

Dikmen, Mehmet. "3d Face Reconstruction Using Stereo Vision." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607543/index.pdf.

Abstract:
3D face modeling is currently a popular area in Computer Graphics and Computer Vision. Many techniques have been introduced for this purpose, such as using one or more cameras, 3D scanners, and many other systems of sophisticated hardware with related software. But the main goal is to find a good balance between visual realism and the cost of the system. In this thesis, reconstruction of a 3D human face from a pair of stereo cameras is studied. Unlike many other systems, facial feature points are obtained automatically from two photographs with the help of a dot pattern projected on the object's face. It is seen that using the projected pattern also provided enough feature points to derive the 3D face roughly. These points are then used to fit a generic face mesh for a more realistic model. To cover this 3D model, a single texture image is generated from the initial stereo photographs.
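
The stereo step can be illustrated with a short numpy sketch (not code from the thesis): given two calibrated camera projection matrices and one matched pair of image points, a 3D point is recovered by linear (DLT) triangulation. The projection matrices and pixel coordinates below are hypothetical placeholders.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) pixel coordinates of the same feature in each image.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Hypothetical calibration: two identical cameras 10 cm apart looking down the Z axis.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

print(triangulate(P1, P2, (400.0, 260.0), (240.0, 260.0)))
```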
6

Pardoe, Andrew Charles. "Neural network image reconstruction for nondestructive testing." Thesis, University of Warwick, 1996. http://wrap.warwick.ac.uk/44616/.

Abstract:
Conventional image reconstruction of advanced composite materials using ultrasound tomography is computationally expensive, slow and unreliable. A neural network system is proposed which would permit the inspection of large composite structures, increasingly important for the aerospace industry. It uses a tomographic arrangement, whereby a number of ultrasonic transducers are positioned along the edges of a square, referred to as the sensor array. Two configurations of the sensor array are utilized. The first contains 16 transducers, 4 of which act as receivers of ultrasound, and the second contains 40 transducers, 8 of which act as receivers. The sensor array has required the development of instrumentation to generate and receive ultrasonic signals, multiplex the transmitting transducers, and store the numerous waveforms generated for each tomographic scan. The first implementation of the instrumentation required manual operation; however, to increase the amount of data available, the second implementation was automated.
7

Ren, Yuheng. "Implicit shape representation for 2D/3D tracking and reconstruction." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:c70dc663-ee7c-4100-b492-3a85bf8640d1.

Abstract:
This thesis develops and describes methods for real-time tracking, segmentation and 3-dimensional (3D) model acquisition, in the context of developing games for stroke patients that are rehabilitating at home. Real-time tracking and reconstruction of a stroke patient's feet, hands and the control objects that they are touching can enable not only the graphical visualization of the virtual avatar in the rehabilitation games, but also permits measurement of the patient's performance. Depth or combined colour and depth imagery from a Kinect sensor is used as input data. The 3D signed distance function (SDF) is used as implicit shape representation, and a series of probabilistic graphical models are developed for the problem of model-based 3D tracking, simultaneous 3D tracking and reconstruction and 3D tracking of multiple objects with identical appearance. The work is based on the assumption that the observed imagery is generated jointly by the pose(s) and the shape(s). The depth of each pixel is randomly and independently sampled from the likelihood of the pose(s) and the shape(s). The pose tracking and 3D shape reconstruction problems are then cast as the maximum likelihood (ML) or maximum a posteriori (MAP) estimate of the pose(s) or 3D shape. This methodology first leads to a novel probabilistic model for tracking rigid 3D objects with only depth data. For a known 3D shape, optimization aims to find the optimal pose that back-projects all object region pixels onto the zero level set of the 3D shape, thus effectively maximising the likelihood of the pose. The method is extended to consider colour information for more robust tracking in the presence of outliers and occlusions. Initialised with a coarse 3D model, the extended method is also able to simultaneously reconstruct and track an unknown 3D object in real time. Finally, the concept of 'shape union' is introduced to solve the problem of tracking multiple 3D objects with identical appearance. This is formulated as the minimum value of all SDFs in camera coordinates, which (i) leads to a per-pixel soft membership weight for each object, thus providing an elegant solution for the data association in multi-target tracking, and (ii) allows probabilistic physical constraints that avoid collisions between objects to be naturally enforced. The thesis also explores the possibility of using implicit shape representation for online shape learning. We use the harmonics of the 2D discrete cosine transform (DCT) to represent 2D shapes. High frequency harmonics are decoupled from low ones to represent the coarse information and the details of the 2D shape. A regression model is learnt online to model the relationship between the high and low frequency harmonics using Locally Weighted Projection Regression (LWPR). We have demonstrated that the learned regression model is able to detect occlusions and recover the complete shape.
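
As a toy illustration of the 'shape union' idea (a sketch under assumed inputs, not the thesis implementation), the following numpy snippet takes the signed distances of a few 3D points to two hypothetical objects, forms the pointwise minimum, and derives per-point soft membership weights; the smoothing temperature is an arbitrary illustrative choice.

```python
import numpy as np

def shape_union_membership(sdf_values, temperature=0.01):
    """sdf_values : array of shape (n_objects, n_points), signed distance
    of each back-projected point to each object's surface (negative inside).

    Returns the pointwise minimum of the SDFs (the 'shape union') and a
    soft membership weight of every point for every object, of the kind
    used for data association in multi-object tracking.
    """
    union = sdf_values.min(axis=0)
    # Soft-min weights: a point close to one object's surface gets a weight
    # near 1 for that object and near 0 for the others.
    w = np.exp(-(sdf_values - union) / temperature)
    return union, w / w.sum(axis=0, keepdims=True)

# Toy scene: two unit spheres centred 1.5 apart, evaluated at three sample points.
pts = np.array([[0.0, 0.0, 0.0], [0.75, 0.0, 0.0], [1.5, 0.0, 0.0]])
sdf_a = np.linalg.norm(pts - np.array([0.0, 0.0, 0.0]), axis=1) - 1.0
sdf_b = np.linalg.norm(pts - np.array([1.5, 0.0, 0.0]), axis=1) - 1.0
union, weights = shape_union_membership(np.stack([sdf_a, sdf_b]))
print(union)
print(weights)
```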
8

Yamada, Randy Matthew. "Identification of Interfering Signals in Software Defined Radio Applications Using Sparse Signal Reconstruction Techniques." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/50609.

Abstract:
Software-defined radios have the agility and flexibility to tune performance parameters, allowing them to adapt to environmental changes, adapt to desired modes of operation, and provide varied functionality as needed. Traditional software-defined radios use a combination of conditional processing and software-tuned hardware to enable these features and will critically sample the spectrum to ensure that only the required bandwidth is digitized. While flexible, these systems are still constrained to perform only a single function at a time and digitize a single frequency sub-band at a time, possibly limiting the radio's effectiveness.
Radio systems commonly tune hardware manually or use software controls to digitize sub-bands as needed, critically sampling those sub-bands according to the Nyquist criterion. Recent technology advancements have enabled efficient and cost-effective over-sampling of the spectrum, allowing all bandwidths of interest to be captured for processing simultaneously, a process known as band-sampling. Simultaneous access to measurements from all of the frequency sub-bands enables both awareness of the spectrum and seamless operation between radio applications, which is critical to many applications. Further, information about the spectral content of each sub-band may be obtained from measurements of other sub-bands, which could improve performance in applications such as detecting the presence of interference in weak-signal measurements.
This thesis presents a new method for confirming the source of detected energy in weak signal measurements by sampling them directly, then estimating their expected effects.  First, we assume that the detected signal is located within the frequency band as measured, and then we assume that the detected signal is, in fact, interference perceived as a result of signal aliasing.  By comparing the expected effects to the entire measurement and assuming the power spectral density of the digitized bandwidth is sparse, we demonstrate the capability to identify the true source of the detected energy.  We also demonstrate the ability of the method to identify interfering signals not by explicitly sampling them, but rather by measuring the signal aliases that they produce.  Finally, we demonstrate that by leveraging techniques developed in the field of Compressed Sensing, the method can recover signal aliases by analyzing less than 25 percent of the total spectrum.
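
The ambiguity the method resolves can be illustrated with a short, generic folding calculation (made-up numbers, not the thesis's identification method): under ideal sampling, an out-of-band interferer and a genuine in-band signal can land on exactly the same apparent frequency.

```python
def alias_frequency(f_signal_hz, f_sample_hz):
    """Apparent frequency of a real tone after ideal sampling at f_sample_hz."""
    f = f_signal_hz % f_sample_hz        # fold into [0, fs)
    return min(f, f_sample_hz - f)       # fold into [0, fs/2]

# A hypothetical interferer at 74 MHz, sampled at 40 MSps, aliases to 6 MHz and is
# indistinguishable from a genuine 6 MHz signal until its expected effects are modelled.
print(alias_frequency(74e6, 40e6) / 1e6)   # -> 6.0
print(alias_frequency(6e6, 40e6) / 1e6)    # -> 6.0
```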
Master of Science
9

Andersson, Sebastian. "Implementation of a reconstruction software and image quality assessment tool for a micro-CT system." Thesis, KTH, Fysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-183147.

10

Gryshkov, O. P., M. Y. Tymkovych, О. Г. Аврунін, and B. Glasmacher. "Experience of development and use of specialized software intended for automated analysis of alginate structures." Thesis, ХНУРЕ, 2019. http://openarchive.nure.ua/handle/document/8374.

Abstract:
The work is devoted to the problem of three-dimensional reconstruction of alginate capsules and their subsequent analysis. The results of the software are shown and the main stages of its work are described. These include image processing operations such as filtering, segmentation, morphological operations, classification, and construction of a Hough space with a subsequent analysis stage.
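
A minimal OpenCV sketch of a pipeline of this kind (filtering, segmentation, morphology, and a Hough-space search for roughly circular capsule cross-sections); the file name and all parameter values are placeholders rather than settings from the described software.

```python
import cv2
import numpy as np

img = cv2.imread("capsule_slice.png", cv2.IMREAD_GRAYSCALE)            # placeholder image
blur = cv2.medianBlur(img, 5)                                           # filtering
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # segmentation
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))    # morphology

# Hough space for circles: capsule cross-sections appear as bright disks in the slice.
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                           param1=100, param2=30, minRadius=10, maxRadius=80)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"capsule candidate at ({x}, {y}), radius {r}px")
```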
11

Lotfy, M. Y. "Stereoscopic image feature matching during endoscopic procedure." Thesis, Boston, USA, 2020. http://openarchive.nure.ua/handle/document/11836.

Abstract:
This research work describes the developed software for endoscopic image processing. Pairs of corresponding points, which can later be used for three-dimensional reconstruction, were computed. The number of points per frame is not large enough, so the 3D reconstruction should use the entire video stream. To increase the number of points, a study on tuning the detector parameters should also be conducted. The study tested the stage of finding matches on stereo pairs of endoscopic images.
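
For illustration, a compact OpenCV sketch of a stereo matching stage of this kind, using ORB features and a ratio test; the image file names, detector choice and thresholds are assumptions, since the cited work does not specify them here.

```python
import cv2

left = cv2.imread("endo_left.png", cv2.IMREAD_GRAYSCALE)    # placeholder frames
right = cv2.imread("endo_right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)        # detector parameters strongly affect the point count
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des_l, des_r, k=2)

# Lowe ratio test keeps only distinctive correspondences; the survivors are the
# candidate point pairs that a later 3D reconstruction step would triangulate.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
pairs = [(kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt) for m in good]
print(f"{len(pairs)} corresponding point pairs")
```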
12

Wilhelm, Andreas Johannes [Verfasser], Hans Michael [Akademischer Betreuer] Gerndt, Hans Michael [Gutachter] Gerndt, and Felix [Gutachter] Wolf. "Interactive Software Parallelization Based on Hybrid Analysis and Software Architecture Reconstruction / Andreas Johannes Wilhelm ; Gutachter: Hans Michael Gerndt, Felix Wolf ; Betreuer: Hans Michael Gerndt." München : Universitätsbibliothek der TU München, 2019. http://d-nb.info/1185637990/34.

13

Judeh, Thair. "SEA: a novel computational and GUI software pipeline for detecting activated biological sub-pathways." ScholarWorks@UNO, 2011. http://scholarworks.uno.edu/td/463.

Abstract:
With the ever-increasing amount of high-throughput molecular profile data, biologists need versatile tools that enable them to quickly and succinctly analyze their data. Furthermore, pathway databases have grown increasingly robust, with the KEGG database at the forefront. Previous tools have color-coded the genes on different pathways using differential expression analysis. Unfortunately, they do not adequately capture the relationships of the genes amongst one another. Structure Enrichment Analysis (SEA) thus seeks to take biological analysis to the next level. SEA accomplishes this goal by highlighting for users the sub-pathways of biological pathways that best correspond to their molecular profile data in an easy-to-use GUI.
14

Zapalowski, Vanius. "Evaluation of code-based information to architectural module identification." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/94691.

Abstract:
Software architecture plays an important role in software development, and when explicitly documented, it allows understanding an implemented system and reasoning about how non-functional requirements are addressed. In spite of that, many developed systems lack proper architecture documentation, and if it exists, it may be outdated due to software evolution. The process of recovering the architecture of a system depends mainly on developers' knowledge, requiring a manual inspection of the source code. Research on architecture recovery provides support to this process. Most of the existing approaches are based on architectural element dependencies, architectural patterns or source code semantics, but even though they help identifying architectural modules, the obtained results must be significantly improved to be considered reliable. We thus aim to support this task by the exploitation of different code-oriented information and machine learning techniques. Our work consists of an analysis, involving five case studies, of the usefulness of adopting a set of code-level characteristics (or features, in the machine learning terminology) to group elements into architectural modules. The characteristics, mainly source code metrics, that affect the identification of what role software elements play in software architecture are unknown. Then, we evaluate the relationship between different sets of characteristics and the accuracy achieved by an unsupervised algorithm, Expectation Maximization, in identifying architectural modules. Consequently, we are able to understand which of those characteristics reveal information about the source code structure. By the use of code-oriented information, our approach achieves a significant average accuracy, which indicates the importance of the selected information to recover software architecture. Additionally, we provide a tool to support research on architecture recovery, providing software architecture measurements and visualizations. It presents comparisons between predicted architectures and concrete architectures.
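
The grouping step can be sketched with scikit-learn, whose GaussianMixture model is fitted by Expectation Maximization; the feature matrix below (one row per code element, with invented columns such as fan-in, fan-out and LOC) is purely illustrative and does not reproduce the study's feature sets or data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Hypothetical code-level features per class/element: [fan_in, fan_out, LOC, n_methods]
features = np.array([
    [12, 2, 350, 20],   # looks like a core/domain class
    [10, 3, 410, 25],
    [1, 9, 120, 6],     # looks like a UI/controller class
    [2, 8, 150, 7],
    [0, 1, 40, 3],      # looks like a utility class
    [1, 1, 55, 4],
], dtype=float)

X = StandardScaler().fit_transform(features)

# EM-based clustering into a chosen number of architectural modules.
gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
modules = gmm.fit_predict(X)
print(modules)  # cluster label per element; accuracy would be judged against a reference architecture
```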
15

Krogmann, Klaus [Verfasser], and R. [Akademischer Betreuer] Reussner. "Reconstruction of Software Component Architectures and Behaviour Models using Static and Dynamic Analysis / Klaus Krogmann ; Betreuer: R. Reussner." Karlsruhe : KIT Scientific Publishing, 2012. http://d-nb.info/1184493901/34.

16

Wu, Qing Hua. "Image segmentation and reconstruction based on graph cuts and texton mask." Thesis, University of Macau, 2007. http://umaclib3.umac.mo/record=b1677228.

17

Hauth, Thomas [Verfasser], and G. [Akademischer Betreuer] Quast. "New Software Techniques in Particle Physics and Improved Track Reconstruction for the CMS Experiment / Thomas Hauth. Betreuer: G. Quast." Karlsruhe : KIT-Bibliothek, 2014. http://d-nb.info/1066737037/34.

18

Gessinger-Befurt, Paul [Verfasser]. "Development and improvement of track reconstruction software and search for disappearing tracks with the ATLAS experiment / Paul Gessinger-Befurt." Mainz : Universitätsbibliothek der Johannes Gutenberg-Universität Mainz, 2021. http://d-nb.info/1233783203/34.

19

Noschinski, Leonie. "Validierung einer neuen Software für halbautomatische Volumetrie – ist diese besser als manuelle Messungen?" Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-210703.

Abstract:
This study compared a manual program for liver volumetry with a semiautomated software package. The hypothesis was that the software would be faster, more accurate and less dependent on the evaluator's experience. Materials and Methods: Ten patients undergoing hemihepatectomy were included in this IRB-approved study after written informed consent. All patients underwent a preoperative abdominal CT scan, which was used for whole-liver volumetry and volume prediction for the liver part to be resected. Two different software packages were used: 1) manual method: the borders of the liver had to be defined per slice by the user; 2) semiautomated software: automatic identification of the liver volume with manual assistance for definition of the Couinaud segments. Measurements were done by six observers with different experience levels. Water displacement volumetry immediately after partial liver resection served as the gold standard. The resected part was examined with a CT scan after displacement volumetry. Results: Volumetry of the resected liver scan showed excellent correlations with water displacement volumetry (manual: ρ=0.997; semiautomated software: ρ=0.995). The difference between the predicted volume and the real volume was significantly smaller with the semiautomated software than with the manual method (33 % vs. 57 %, p=0.002). The semiautomated software was almost four times faster for volumetry of the whole liver. Conclusion: Both methods for liver volumetry give an estimated liver volume close to the real one. The tested semiautomated software is faster, more accurate in predicting the volume of the resected liver part, gives more reproducible results and is less dependent on the user's experience.
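
A small SciPy sketch of this kind of agreement analysis (Spearman correlation against the gold standard, plus relative prediction error); all volume values are invented placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical volumes in millilitres for a few resected liver specimens.
water_displacement = np.array([812, 655, 990, 540, 760])   # gold standard
software_volume    = np.array([830, 640, 1005, 555, 748])  # volumetry on the resectate CT
predicted_resected = np.array([900, 720, 1150, 600, 850])  # preoperative prediction

rho, p = spearmanr(software_volume, water_displacement)
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")

# Relative prediction error, the quantity compared between the two programs (33 % vs. 57 %).
rel_err = np.abs(predicted_resected - water_displacement) / water_displacement
print(f"mean relative error = {rel_err.mean():.1%}")
```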
20

Fredriksson, Mattias. "Tree structured neural network hierarchy for synthesizing throwing motion." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20812.

Abstract:
Realism in animation sequences requires movements to be adapted to changing environments within the virtual world. To enhance the visual experience of animated characters, research is focused on recreating realistic character movement adapted to the surrounding environment within the character's world. Existing methods applied to the problem of controlling character animations are often poorly suited to it, as they focus on modifying and adapting static sequences, favoring responsiveness and reaching the motion objective rather than realism in the characters' movements. Algorithms for synthesizing motion sequences can bridge the gap between motion quality and responsiveness, and recent methods have opened the possibility of recreating specific motions and movement patterns. The effectiveness of proposed methods to synthesize motion can however be questioned, particularly due to the sparsity and quality of evaluations between methods, an issue which is further complicated by variations in the learning tasks and motion data used to train the models. Rather than directly proposing a new synthesis method, focus is put on refuting existing methods by applying them to the task of synthesizing objective-oriented motion involving the action of throwing a ball. To achieve this goal, two experiments are designed. The first experiment evaluates whether a phase-functioned neural network (PFNN) model based on absolute joint configurations can generate objective-oriented motion. To achieve this objective, a separate approach utilizing a hierarchy of phase-function networks is designed and implemented. By comparing the application of the two methods on the learning task, the proposed hierarchy model showed significant improvement regarding the ability to fit generated motion to intended end-effector trajectories. To be able to refute the idea of using dense feed-forward neural networks, a second experiment is performed comparing PFNN and feed-forward based network hierarchies. The outcome of the experiment shows significant differences in favor of the hierarchy model utilizing phase-function networks. To facilitate experimentation, objective-oriented motion data for training the network models are obtained by researching and implementing methods for processing optical motion capture data over repeated practices of over-arm ball throws. The contribution is then threefold: creation of a dataset containing motion sequences of ball-throwing actions, evaluation of PFNN on the task of learning sequences of objective-oriented motion, and definition of a hierarchy-based neural network model applicable to the motion synthesis task.
21

Sumer, Emre. "Automatic Reconstruction Of Photorealistic 3-d Building Models From Satellite And Ground-level Images." Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613131/index.pdf.

Abstract:
This study presents an integrated framework for the automatic generation of the photorealistic 3-d building models from satellite and ground-level imagery. First, the 2-d building patches and the corresponding footprints are extracted from a high resolution imagery using an adaptive fuzzy-genetic algorithm approach. Next, the photorealistic facade textures are automatically extracted from the single ground-level building images using a developed approach, which includes facade image extraction, rectification, and occlusion removal. Finally, the textured 3-d building models are generated automatically by mapping the corresponding textures onto the facades of the models. The developed 2-d building extraction and delineation approach was implemented on a selected urban area of the Batikent district of Ankara, Turkey. The building regions were extracted with an approximate detection rate of 93%. Moreover, the overall delineation accuracy was computed to be 3.9 meters. The developed concept for facade image extraction was tested on two distinct datasets. The facade image extraction accuracies were computed to be 82% and 81% for the Batikent and eTrims datasets, respectively. As to rectification results, 60% and 80% of the facade images provided errors under ten pixels for the Batikent and eTrims datasets, respectively. In the evaluation of occlusion removal, the average scores were computed to be 2.58 and 2.28 for the Batikent and eTrims datasets, respectively. The scores are ranked between 1 (Excellent) to 6 (Unusable). The modeling of the total 110 single buildings with the photorealistic textures took about 50 minutes of processor running time and yielded a satisfactory level of accuracy.
22

Le, Borgne Alexandre. "ARIANE : Automated Re-Documentation to Improve software Architecture uNderstanding and Evolution." Thesis, IMT Mines Alès, 2020. http://www.theses.fr/2020EMAL0001.

Abstract:
All along its life-cycle, a software system may be subject to numerous changes that may affect its coherence with its original documentation. Moreover, despite the general agreement that up-to-date documentation is a great help to record design decisions all along the software life-cycle, software documentation is often outdated. Architecture models are one of the major documentation pieces. Ensuring coherence between them and other models of the software (including code) during software evolution (co-evolution) is a strong asset to software quality. Additionally, understanding a software architecture is highly valuable in terms of reuse, evolution and maintenance capabilities. For that reason, re-documenting software becomes essential for easing the understanding of software architectures. However, architectures are rarely available and many research works aim at automatically recovering software architectures from code. Yet, most of the existing re-documenting approaches do not perform a strict reverse-documenting process to re-document architectures "as they are implemented", but instead perform re-engineering by clustering code into new components. Thus, this thesis proposes a framework for re-documenting architectures as they have been designed and implemented, to provide support for analyzing architectural decisions. This re-documentation is performed from the analysis of both object-oriented code and project deployment descriptors. The re-documentation process targets the Dedal architecture language, which is especially tailored for managing and driving software evolution. Another highly important aspect of software documentation relates to the way concepts are versioned. Indeed, in many approaches and current version control systems such as GitHub, files are versioned in an agnostic manner. This way of versioning keeps track of any file history. However, no information can be provided on the nature of the new version, especially regarding software backward-compatibility with previous versions. This thesis thus proposes a formal way to version software architectures, based on the use of the Dedal architecture description language, which provides a set of formal properties. It enables versions to be automatically analyzed in terms of substitutability and version propagation, and proposes an automatic way of incrementing version tags so that their semantics correspond to the actual evolution impact. By proposing such a formal approach, this thesis intends to prevent software drift and erosion. This thesis also proposes an empirical study based on both the re-documenting and versioning processes, applied to numerous versions of an enterprise project taken from GitHub.
23

Sun, Yi-Ran. "Generalized Bandpass Sampling Receivers for Software Defined Radio." Doctoral thesis, Stockholm, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4009.

24

Björklund, Daniel. "Implementation of a Software-Defined Radio Transceiver on High-Speed Digitizer/Generator SDR14." Thesis, Linköpings universitet, Elektroniksystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-78213.

Abstract:
This thesis describes the specification, design and implementation of a software-defined radio system on a two-channel 14-bit digitizer/generator. The multi-stage interpolations and decimations required to operate two analog-to-digital converters at 800 megasamples per second (MSps) and two digital-to-analog converters at 1600 MSps from a 25 MSps software-side interface were designed and implemented. Quadrature processing was used throughout the system, and a combination of fine-tunable low-rate mixers and coarse high-rate mixers was implemented to allow frequency translation across the entire first Nyquist band of the converters. Various reconstruction filter designs for the transmitter side were investigated and a cheap implementation was achieved through the use of programmable base-band filters and polynomial approximation.
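
The quadrature mixing and multi-stage rate change described above can be illustrated with a much-simplified SciPy sketch; the rates, channel frequency and decimation factors are arbitrary illustrative values, far below the SDR14's actual converter rates.

```python
import numpy as np
from scipy.signal import decimate

fs = 800e3           # illustrative input rate; the actual hardware samples at 800 MSps
f_c = 200e3          # hypothetical centre frequency of the wanted channel
n = np.arange(2**14)
x = np.cos(2 * np.pi * (f_c + 5e3) * n / fs)     # a test tone 5 kHz above the channel centre

# Quadrature mix: multiply by a complex exponential so the channel lands at 0 Hz.
lo = np.exp(-2j * np.pi * f_c * n / fs)
i, q = np.real(x * lo), np.imag(x * lo)

# Multi-stage decimation (8 x 4 = 32) rather than one large step, as in practical front ends.
for factor in (8, 4):
    i = decimate(i, factor, ftype="fir")
    q = decimate(q, factor, ftype="fir")

baseband = i + 1j * q
print(fs / 32, len(baseband))   # 25 kSps complex baseband containing the 5 kHz offset tone
```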
25

Abu-Al-Saud, Wajih Abdul-Elah. "Efficient Wideband Digital Front-End Transceivers for Software Radio Systems." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5257.

Abstract:
Software radios (SWR) have been proposed for wireless communication systems to enable them to operate according to incompatible wireless communication standards by implementing most analog functions in the digital section on software-reprogrammable hardware. However, this significantly increases the required computations for SWR functionality, mainly because of the digital front-end computationally intensive filtering functions, such as sample rate conversion (SRC), channelization, and equalization. For increasing the computational efficiency of SWR systems, two new SRC methods with better performance than conventional SRC methods are presented. In the first SRC method, we modify the conventional CIC filters to enable them to perform SRC on slightly oversampled signals efficiently. We also describe a SRC method with high efficiency for SRC by factors greater than unity at which SRC in SWR systems may be computationally demanding. This SRC method efficiently increases the sample rate of wideband signals, especially in SWR base station transmitters, by applying Lagrange interpolation for evaluating output samples hierarchically using a low-rate signal that is computed with low cost from the input signal. A new channelizer/synthesizer is also developed for extracting/combining frequency multiplexed channels in SWR transceivers. The efficiency of this channelizer/synthesizer, which uses modulated perfect reconstruction (PR) filter banks, is higher than polyphase filter banks (when applicable) for processing few channels, and significantly higher than discrete filter banks for processing any number of variable-bandwidth channels where polyphase filter banks are inapplicable. Because the available methods for designing modulated PR filter banks are inapplicable due to the required number of subchannels and stopband attenuation of the prototype filters, a new design method for these filter banks is introduced. This method is reliable and significantly faster than the existing methods. Modulated PR filter banks are also considered for implementing a frequency-domain block blind equalizer capable of equalizing SWR signals transmitted though channels with long impulse responses and severe intersymbol interference (ISI). This blind equalizer adapts by using separate sets of weights to correct for the magnitude and phase distortion of the channel. The adaptation of this blind equalizer is significantly more reliable and its computational requirements increase at a lower rate compared to conventional time-domain equalizers making it efficient for equalizing long channels that exhibit severe ISI.
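
As a generic illustration of Lagrange-interpolation sample-rate conversion (a textbook sketch, not the hierarchical method proposed in the dissertation), the snippet below resamples a signal by an arbitrary factor using 4-point (cubic) Lagrange interpolation between input samples.

```python
import numpy as np

def lagrange_src(x, ratio):
    """Resample x at `ratio` times the input rate using 4-point Lagrange interpolation."""
    n_out = int(len(x) * ratio)
    t = np.arange(n_out) / ratio          # output instants in units of input samples
    k = np.floor(t).astype(int)           # index of the sample just before each instant
    mu = t - k                            # fractional position in [0, 1)
    x = np.pad(x, (1, 2), mode="edge")    # guard samples so k-1 .. k+2 always exist
    k = k + 1
    xm1, x0, x1, x2 = x[k - 1], x[k], x[k + 1], x[k + 2]
    # Classic third-order Lagrange fractional-delay interpolation.
    return (-mu * (mu - 1) * (mu - 2) / 6 * xm1
            + (mu + 1) * (mu - 1) * (mu - 2) / 2 * x0
            - (mu + 1) * mu * (mu - 2) / 2 * x1
            + (mu + 1) * mu * (mu - 1) / 6 * x2)

fs_in = 48.0
sig = np.sin(2 * np.pi * 3.0 * np.arange(64) / fs_in)   # 3 Hz tone sampled at 48 Hz
up = lagrange_src(sig, 125 / 48)                         # convert 48 -> 125 samples/s
print(len(sig), "->", len(up))
```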
26

Oudot, Steve Y. "Échantillonnage et maillage de surfaces avec garanties." Phd thesis, Ecole Polytechnique X, 2005. http://tel.archives-ouvertes.fr/tel-00338378.

Abstract:
Over the last decade, a whole theory of sampling for smooth surfaces has emerged and developed. The goal was to find sampling conditions that guarantee a good reconstruction of a smooth surface S from a finite subset E of points of S. Among these conditions, one of the most important is undoubtedly the ε-sampling condition, introduced by Amenta and Bern, which requires every point p of S to be at distance from E at most ε times lfs(p), where lfs(p) denotes the distance from p to the medial axis of S. Amenta and Bern showed that, from the Delaunay triangulation of an ε-sample E, it is possible to extract a piecewise-linear surface that approximates S both topologically (isotopy) and geometrically (Hausdorff distance). However, two crucial questions remained open: how to check whether a given point set is an ε-sample, and how to construct ε-samples of a given smooth surface. Moreover, the sampling conditions proposed so far offered guarantees only in the smooth case, since lfs vanishes at points where the surface is not differentiable. In this thesis, we introduce the concept of a loose ε-sample, which can be seen as a weak version of the notion of ε-sample. The major advantage of loose ε-samples over classical ε-samples is that they are easier to check and to construct. More precisely, checking whether a finite point set is a loose ε-sample amounts to checking whether the radii of a finite number of balls are small enough. When the surface S is smooth, we show that ε-samples are loose ε-samples and conversely, provided that ε is small enough. It follows that loose ε-samples offer the same topological and geometric guarantees as ε-samples. We then extend our results to the case where the sampled surface is non-smooth, by introducing a new quantity, called the Lipschitz radius, which plays a role similar to that of lfs in the smooth case but turns out to be well defined and positive on a much larger class of objects. More precisely, it characterizes the class of Lipschitz surfaces, which includes in particular all piecewise-smooth surfaces for which the variation of the normals around singular points is not too large. Our main result is that, if S is a Lipschitz surface and E a finite set of points of S such that every point of S is at distance from E at most a fraction of the Lipschitz radius of S, then we obtain the same kind of guarantees as in the smooth case, namely: the Delaunay triangulation of E restricted to S is a manifold isotopic to S and at Hausdorff distance O(ε) from S, provided that its facets are not too flat. We also extend this result to loose samples. Finally, we give optimal bounds on the size of such samples. To show the practical interest of loose samples, we then present a very simple algorithm capable of constructing certified surface meshes. Given a compact Lipschitz surface S without boundary and a positive parameter ε, the algorithm generates a loose ε-sample E of S of optimal size, together with a triangular mesh extracted from the Delaunay triangulation of E.
Thanks to our theoretical results, we can guarantee that this triangular mesh is a good approximation of S, both topologically and geometrically, under reasonable assumptions on the input parameter ε. A remarkable aspect of the algorithm is that S needs to be known only through an oracle capable of detecting the intersection points of any segment with the surface. This makes the algorithm generic enough to be used in many practical contexts and on a wide range of surfaces. We illustrate this genericity through a series of applications: meshing implicit surfaces, remeshing polyhedra, probing unknown surfaces, and meshing volumes.
27

Bismack, Brian James. "Implementation of the Dosimetry Check Software Package in Computing 3D Patient Exit Dose Through Generation of a Deconvolution Kernel to be Used for Patients’ IMRT Treatment Plan QA." University of Toledo Health Science Campus / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=mco1290456365.

28

Mikulášková, Eliška. "Technika zatáčení řidičů a možnosti vozidel v aplikaci software pro analýzu nehod." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2018. http://www.nusl.cz/ntk/nusl-382228.

Abstract:
The diploma thesis deals with the turning technique of ordinary drivers and the cornering capabilities of cars. The thesis examines the behavior of vehicles and drivers when cornering, in forward and reverse driving. The aim of the thesis is to analyze the turning technique of vehicles and ordinary drivers and to apply the acquired data in simulation programs for road accident analysis. The individual objectives of the thesis are a theoretical description of the given problem and experimental verification of the radii, angles and time course of turning of different vehicles and ordinary drivers in forward and reverse driving under comparable conditions, and a description of their mutual relations.
29

Нещерет, Марина Олександрівна. "Environmental impact assessment of the reconstruction of the car M-14 road on the Kherson-Mariupol section." Thesis, Національний авіаційний університет, 2020. https://er.nau.edu.ua/handle/NAU/44918.

Abstract:
The work is published in accordance with the rector's order No. 008/od of 21.01.2020, "On checking qualification works for academic plagiarism in the 2019-2020 academic year." Thesis supervisor: Tamara Viktorivna Dudar, Associate Professor of the Department of Ecology, Candidate of Geological and Mineralogical Sciences.
Object of research: reconstruction of the M-14 road on the Kherson-Mariupol section. Aim of the work: to analyze the possible effects of the M-14 road reconstruction on all components of the environment on the Kherson-Mariupol section, and to assess the negative impacts on the air environment. Methods of research: mathematical calculations, analysis and synthesis of information, computer software processing, and geospatial analysis (Google Earth maps).
30

Нещерет, Марина Олександрівна. "Environmental impact assessment of the reconstruction of the car M-14 road on the Kherson-Mariupol section." Thesis, Національний авіаційний університет, 2020. https://er.nau.edu.ua/handle/NAU/49683.

Abstract:
The work is published in accordance with the rector's order No. 008/od of 21.01.2020, "On checking qualification works for academic plagiarism in the 2019-2020 academic year." Thesis supervisor: Tamara Viktorivna Dudar, Associate Professor of the Department of Ecology, Candidate of Geological and Mineralogical Sciences.
Object of research: reconstruction of the M-14 road on the Kherson-Mariupol section. Aim of the work: to analyze the possible effects of the M-14 road reconstruction on all components of the environment on the Kherson-Mariupol section, and to assess the negative impacts on the air environment. Methods of research: mathematical calculations, analysis and synthesis of information, computer software processing, and geospatial analysis (Google Earth maps).
31

Bayir, Murat Ali. "A New Reactive Method For Processing Web Usage Data." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12607323/index.pdf.

Abstract:
In this thesis, a new reactive session reconstruction method, 'Smart-SRA', is introduced. Web usage mining is a type of web mining which exploits data mining techniques to discover valuable information from the navigations of Web users. As in classical data mining, data processing and pattern discovery are the main issues in web usage mining. The first phase of web usage mining is the data processing phase, including session reconstruction. Session reconstruction is the most important task of web usage mining since it directly and significantly affects the quality of the frequent patterns extracted at the final step. Session reconstruction methods can be classified into two categories, namely 'reactive' and 'proactive', with respect to the data source and the data processing time. If the user requests are processed after the server handles them, the technique is called 'reactive', while in 'proactive' strategies this processing occurs during the interactive browsing of the web site. Smart-SRA is a reactive session reconstruction technique which uses web log data and the site topology. In order to compare Smart-SRA with previous reactive methods, a web agent simulator has been developed. Our agent simulator models the behavior of web users and generates web user navigations as well as the log data kept by the web server. In this way, the actual user sessions are known and the success of different techniques can be compared. In this thesis, it is shown that the sessions generated by Smart-SRA are more accurate than the sessions constructed by previous heuristics.
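
For context, here is a minimal sketch of the classical reactive time-oriented heuristic that methods such as Smart-SRA are measured against: requests of the same user are split into sessions whenever the gap between consecutive requests exceeds a threshold. The log records are invented, and this is the baseline heuristic, not the Smart-SRA algorithm, which additionally exploits the site topology.

```python
from datetime import datetime, timedelta

# Hypothetical server log: (user/IP, timestamp, requested page).
log = [
    ("10.0.0.7", datetime(2007, 3, 1, 10, 0, 5), "/index.html"),
    ("10.0.0.7", datetime(2007, 3, 1, 10, 1, 40), "/products.html"),
    ("10.0.0.7", datetime(2007, 3, 1, 11, 2, 0), "/index.html"),   # long gap -> new session
    ("10.0.0.7", datetime(2007, 3, 1, 11, 3, 10), "/contact.html"),
]

def time_oriented_sessions(log, max_gap=timedelta(minutes=30)):
    """Split one user's requests into sessions on time gaps larger than max_gap.
    (A real implementation would first group requests by user/IP.)"""
    sessions, current, last_time = [], [], None
    for user, ts, page in sorted(log, key=lambda r: r[1]):
        if current and ts - last_time > max_gap:
            sessions.append(current)
            current = []
        current.append(page)
        last_time = ts
    if current:
        sessions.append(current)
    return sessions

print(time_oriented_sessions(log))
# [['/index.html', '/products.html'], ['/index.html', '/contact.html']]
# A topology-aware method would additionally split a session when no hyperlink
# connects consecutive pages.
```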
32

Reiss, Mário Luiz Lopes. "Reconstrução tridimensional digital de objetos à curta distância por meio de luz estruturada." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2007. http://hdl.handle.net/10183/10072.

Abstract:
The purpose of this work is to present the structured light system that was developed. The system, named Scan3DSL, is based on off-the-shelf digital cameras and a projector of patterns. The mathematical model for 3D reconstruction is based on the parametric equation of the projected straight line combined with the collinearity equations. A pattern codification strategy was developed to allow fully automatic pattern recognition. A calibration methodology enables the determination of the direction vector of each pattern and the coordinates of the perspective centre of the pattern projector. The calibration process is carried out with the acquisition of several images of a flat surface from different distances and orientations. Several processes were combined to provide a reliable solution for pattern location. In order to assess the accuracy and the potential of the methodology, a prototype was built, integrating a projector of patterns and a digital camera in a single mount. The experiments using surfaces reconstructed from real data indicated that a relative accuracy of 0.2 mm in depth could be achieved, with a processing time of less than 10 seconds.
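
A simplified numpy sketch of the geometric core described above: each coded pattern defines a parametric ray from the projector's perspective centre, and the 3D point is taken where that ray passes closest to the camera ray through the observed image point (the collinearity condition). All coordinates below are invented for illustration.

```python
import numpy as np

def closest_point_between_rays(p0, d0, p1, d1):
    """Midpoint of the shortest segment between rays p0 + t*d0 and p1 + s*d1."""
    d0, d1 = d0 / np.linalg.norm(d0), d1 / np.linalg.norm(d1)
    w = p0 - p1
    a, b, c = d0 @ d0, d0 @ d1, d1 @ d1
    d, e = d0 @ w, d1 @ w
    denom = a * c - b * b                      # ~0 only if the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return 0.5 * ((p0 + t * d0) + (p1 + s * d1))

# Hypothetical calibration: projector centre, direction of one coded pattern ray,
# camera centre, and the viewing ray through the pixel where that pattern was detected.
projector_c = np.array([0.20, 0.0, 0.0])
pattern_dir = np.array([-0.15, 0.02, 1.0])
camera_c = np.array([0.0, 0.0, 0.0])
pixel_ray = np.array([0.05, 0.02, 1.0])

print(closest_point_between_rays(projector_c, pattern_dir, camera_c, pixel_ray))
```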
33

Lahouli, Rihab. "Etude et conception de convertisseur analogique numérique large bande basé sur la modulation sigma delta." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0074/document.

Abstract:
The work presented in this Ph.D. dissertation deals with the design of a wideband and accurate Analog-to-Digital Converter (ADC) able to digitize signals of different wireless communications standards. It thereby responds to the Software Defined Radio (SDR) concept. The purpose is reconfigurability by software and integrability of the multistandard radio terminal. Oversampling ΣΔ (sigma-delta) ADCs are interesting candidates in this context of multistandard SDR reception thanks to their high accuracy. Although they present a limited operating bandwidth, it is possible to use them in a parallel architecture so that the bandwidth is extended. Therefore, we propose in this work the design and implementation of a parallel frequency band decomposition (FBD) ADC based on discrete-time ΣΔ modulators in an SDR receiver handling E-GSM, UMTS and IEEE802.11a standard signals. The novelty of this proposed architecture is its programmability: according to the selected standard, digitization is carried out by activating only the required branches, with specified sub-bandwidths and sampling frequency. In addition, the frequency division plan is non-uniform. After validation of the theoretical design by simulation, the overall baseband stage was designed. The results of this study led to a single passive 6th-order Butterworth anti-aliasing filter (AAF), permitting the elimination of the automatic gain control (AGC) circuit, which is an analog component. The FBD architecture requires digital processing able to recombine the parallel branch output signals in order to reconstruct the final output signal. An optimized design of this digital reconstruction stage has been proposed. Synthesis of the baseband stage revealed ΣΔ modulator stability problems. To deal with this problem, a solution based on a non-unitary signal transfer function (STF) was elaborated. Phase mismatches were also observed in the recombined output signal and were corrected in the digital stage. The analytic study and system-level design were completed by an implementation of the parallel ADC digital reconstruction stage. Two design flows were considered, one associated with the FPGA and another independent of the chosen target (standard VHDL). The proposed architecture was validated on a Xilinx VIRTEX6 FPGA. A dynamic range over 74 dB was measured for the UMTS use case, which meets the dynamic range required by that standard.
34

Alliez, Pierre. "Approches variationnelles pour le traitement numérique de la géométrie." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00434316.

Abstract:
This habilitation thesis presents a synthesis of contributions in the field of digital geometry processing, in the form of concepts and algorithms for surface reconstruction, surface approximation, quadrangle remeshing of surfaces, and mesh generation.
APA, Harvard, Vancouver, ISO, and other styles
35

Strand, Mathias. "Standardisering av processer och aktiviteter inom kontrollanläggningar och elmontage." Thesis, KTH, Data- och elektroteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169324.

Full text
Abstract:
In this thesis, a study was carried out at the consulting company ÅF of their working methods for documentation and control facilities, to investigate whether there was potential for efficiency improvements. The investigation largely consisted of interviews with consultants working on control facilities and electrical installation at substations. The consultants worked as electrical designers and produced drawings mainly for control equipment. The results of the interviews were analyzed to draw conclusions about efficiency potentials within the business. Different offices were examined and working methods varied between them. One difference was the CAD software used, and one suggestion was to standardize on the same program. Efficiency could also be improved by re-using circuit diagrams from previous projects to some extent; another suggestion was therefore to establish databases where circuit diagrams can be gathered and shared between the offices.
APA, Harvard, Vancouver, ISO, and other styles
36

Kizilgul, Serdar A. "Study of Pion Photo-Production Using a TPC Detector to Determine Beam Asymmetries from Polarized HD." Ohio University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1210629380.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Miranda, Geraldo Elias. "Avaliação da acurácia e da semelhança da reconstrução facial forense computadorizada tridimensional e variação facial fotoantropométrica intraindivíduo." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/23/23153/tde-05112018-125105/.

Full text
Abstract:
This thesis contains three chapters. The aim of the first chapter was to evaluate the accuracy and recognition level of three-dimensional (3D) computerized forensic craniofacial reconstruction (CCFR) performed in a blind test with open-source software, using computed tomography (CT) data from live subjects. The CCFRs were completed in Blender® with 3D models obtained from the CT data and templates from the MakeHuman® program. Accuracy was evaluated in CloudCompare® by geometric comparison of the CCFR to the subject's 3D face model (obtained from the CT data), while the recognition level was assessed with Picasa® using standardized frontal photographs. Of all the points forming each CCFR model, 63.20% to 73.67% lay within -2.5 <= x <= 2.5 mm of the skin surface, and the average distances ranged from -1.66 to 0.33 mm. Two of the four CCFRs were correctly matched by the Picasa® tool. Free software is therefore capable of producing 3D CCFRs with plausible levels of accuracy and recognition, which indicates their value for forensic applications. The other two chapters deal with facial comparison and aimed to evaluate the metrical stability of an individual's face through photographs taken five years apart. This longitudinal study used standardized frontal photographs of 666 adults divided by sex and age group. Using the SAFF 2D® software, 32 landmarks were positioned, and their coordinates were used to calculate 40 measurements, 20 horizontal and 20 vertical. Each measurement was divided by the iris diameter to obtain iridian ratios. Most of the ratios did not vary in a statistically significant way; the ratios with the greatest variation were those of the nose and mouth regions. When age groups are compared with each other, most ratios differ, showing the influence of age on facial dimensions. When stability is compared within each sex, some ratios decreased and others increased in both sexes, while others varied only in females or only in males. When the sexes are compared, most ratios differ, showing the sexual dimorphism of facial measurements. The face undergoes metrical changes throughout life, mainly in the nose and mouth regions, with the greatest differences after age 60. In addition, some facial measurements are more influenced by sex than others. Nevertheless, most of the measurements remained relatively stable over a five-year period with respect to both sex and age.
APA, Harvard, Vancouver, ISO, and other styles
38

Buckland, Philip. "The development and implementation of software for palaeoenvironmental and palaeoclimatological research : the Bugs Coleopteran Ecology Package (BugsCEP)." Doctoral thesis, Umeå University, Archaeology and Sami Studies, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1105.

Full text
Abstract:

This thesis documents the development and application of a unique database orientated software package, BugsCEP, for environmental and climatic reconstruction from fossil beetle (Coleoptera) assemblages. The software tools are described, and the incorporated statistical methods discussed and evaluated with respect to both published modern and fossil data, as well as the author’s own investigations.

BugsCEP consists of a reference database of ecology and distribution data for over 5 800 taxa, and includes temperature tolerance data for 436 species. It also contains abundance and summary data for almost 700 sites - the majority of the known Quaternary fossil coleopteran record of Europe. Sample based dating evidence is stored for a large number of these sites, and the data are supported by a bibliography of over 3 300 sources. Through the use of built in statistical methods, employing a specially developed habitat classification system (Bugs EcoCodes), semi-quantitative environmental reconstructions can be undertaken, and output graphically, to aid in the interpretation of sites. A number of built in searching and reporting functions also increase the efficiency with which analyses can be undertaken, including the facility to list the fossil record of species found by searching the ecology and distribution data. The existing Mutual Climatic Range (MCR) climate reconstruction method is implemented and improved upon in BugsCEP, as BugsMCR, which includes predictive modelling and the output of graphs and climate space maps.
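A minimal sketch of the Mutual Climatic Range idea, assuming a single climate variable: the reconstructed range is the overlap of the tolerance envelopes of all species present in a sample. The species names and tolerance values below are made up; BugsMCR works with calibrated tolerance data and a two-dimensional climate space.

```python
# Hedged sketch of the Mutual Climatic Range (MCR) principle: the climate
# reconstruction is the intersection of the tolerance ranges of all species
# in a fossil assemblage.  Species and ranges here are invented.

# (low, high) = tolerated range of mean temperature of the warmest month (°C)
tolerances = {
    "Species A": (8.0, 17.0),
    "Species B": (10.0, 21.0),
    "Species C": (9.5, 15.0),
}

lower = max(lo for lo, hi in tolerances.values())
upper = min(hi for lo, hi in tolerances.values())

if lower <= upper:
    print(f"Mutual climatic range for TMax: {lower}-{upper} °C")
else:
    print("No overlap: the assemblage has no mutual climatic range")
```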

The evaluation of the software demonstrates good performance when compared to existing interpretations. The standardization method employed in habitat reconstructions, designed to enable the inter-comparison of samples and sites without the interference of differing numbers of species and individuals, also appears to be robust and effective. Quantitative climate reconstructions can be easily undertaken from within the software, as well as an amount of predictive modelling. The use of jackknifing variants as an aid to the interpretation of climate reconstructions is discussed, and suggested as a potential indicator of reliability. The combination of the BugStats statistical system with an enhanced MCR facility could be extremely useful in increasing our understanding of not only past environmental and climate change, but also the biogeography and ecology of insect populations in general.

BugsCEP is the only available software package integrating modern and fossil coleopteran data, and the included reconstruction and analysis tools provide a powerful resource for research and teaching in palaeo-environmental science. The use of modern reference data also makes the package potentially useful in the study of present day insect faunas, and the effects of climate and environmental change on their distributions. The reconstruction methods could thus be inverted, and used as predictive tools in the study of biodiversity and the implications of sustainable development policies on present day habitats.

BugsCEP can be downloaded from http://www.bugscep.com

APA, Harvard, Vancouver, ISO, and other styles
39

Hunt, Cahill. "Developing an efficient method for generating facial reconstructions using photogrammetry and open source 3D/CAD software." Thesis, Hunt, Cahill (2017) Developing an efficient method for generating facial reconstructions using photogrammetry and open source 3D/CAD software. Masters by Coursework thesis, Murdoch University, 2017. https://researchrepository.murdoch.edu.au/id/eprint/39826/.

Full text
Abstract:
The identification of deceased individuals is important to society, as it not only allows criminal investigations into suspicious deaths to progress, but also enables legal matters to be resolved and brings closure to the families affected by the death. When a corpse is skeletonized or heavily burned, or the soft tissue has degraded to the point that other professionals cannot obtain information about the deceased, a forensic anthropologist or odontologist is often tasked with identification. A variety of methods enable forensic anthropologists to achieve identification, including comparison of non-imaged records, craniofacial superimposition and comparative radiography. Facial reconstruction can also be used when no ante-mortem information about the deceased is available or when law enforcement has no indication of who the deceased person might be. Facial reconstruction is traditionally a manual method; however, with recent advances in photogrammetry and in three-dimensional and computer-aided design modelling software, the process can be performed in a virtual space. The purpose of this literature review is to identify an efficient, low-cost method of generating facial reconstructions using photogrammetry and open-source three-dimensional and computer-aided design software.
APA, Harvard, Vancouver, ISO, and other styles
40

Goret, Gael. "Recalage flexible de modèles moléculaires dans les reconstructions 3D de microscopie électronique." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00631858.

Full text
Abstract:
Today, macromolecular crystallography routinely produces molecular models at atomic resolution. However, this technique is particularly difficult to apply to large complexes. Electron microscopy, on the other hand, can visualize large particles under conditions close to those in vivo, but the resolution of the resulting three-dimensional reconstructions generally rules out their direct interpretation in terms of molecular structures, a step necessary for understanding the underlying biological problems. It is therefore natural to try to combine the information provided by these two techniques to characterize the structure of macromolecular assemblies. The idea is to position the molecular models determined by crystallography inside the 3D reconstructions obtained by electron microscopy, and to compare the electron density associated with the 3D reconstruction with an electron density computed from the models. The numerical problem lies in determining and optimizing the variables that specify the positions of the models, treated as rigid bodies, within the assembly. This simple idea led to the development of a method called fitting (recalage). The goal of this thesis was to provide biologists with a tool, based on this fitting method, that allows them to build pseudo-molecular models of the assemblies imaged by electron microscopy. The resulting software, named VEDA, is a user-friendly graphical environment that supports flexible fitting and includes an efficient computational engine (fast calculation, handling of complex symmetries, use of large volumes, etc.). Tested on dozens of real cases, VEDA is now fully functional and is used by a growing number of researchers in France and abroad, who praise its ease of use, stability, speed and quality of results.
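A toy sketch of the scoring step behind such rigid-body fitting, under the assumption that a model density is simulated on a grid and compared with the experimental map by normalized cross-correlation; this illustrates the principle only and is not the VEDA algorithm.

```python
# Hedged sketch of a fitting score: simulate a density from atomic
# coordinates and compare it with an "experimental" EM map by normalized
# cross-correlation.  All data below are synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_density(coords, shape, voxel=2.0, sigma=1.5):
    """Place unit masses at atom positions on a grid and blur to map resolution."""
    grid = np.zeros(shape)
    for x, y, z in coords:
        i, j, k = (int(round(c / voxel)) for c in (x, y, z))
        if 0 <= i < shape[0] and 0 <= j < shape[1] and 0 <= k < shape[2]:
            grid[i, j, k] += 1.0
    return gaussian_filter(grid, sigma)

def correlation(map_a, map_b):
    a = (map_a - map_a.mean()) / map_a.std()
    b = (map_b - map_b.mean()) / map_b.std()
    return float((a * b).mean())

rng = np.random.default_rng(0)
atoms = rng.uniform(10, 50, size=(100, 3))              # model coordinates (Å)
em_map = simulate_density(atoms + 1.0, (32, 32, 32))    # pretend experimental map
score = correlation(simulate_density(atoms, (32, 32, 32)), em_map)
print("placement score:", score)                        # higher = better fit
```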
APA, Harvard, Vancouver, ISO, and other styles
41

Urbano, Díaz Elisa. "Análisis de un patrón de relación conflictiva entre padres e hijos desde una perspectiva relacional: Proceso reconstructivo con una nueva estructuración del tiempo." Doctoral thesis, Universitat Ramon Llull, 2013. http://hdl.handle.net/10803/108092.

Full text
Abstract:
The problem investigated is the use of the limited time available for the family nucleus, which requires a conscious distribution of time oriented towards specific goals. We examine the interaction in a single videotaped family case, using Grounded Theory methodology and the Atlas.ti software. We apply Transactional Analysis, observing what causes problems, how values are transmitted and limits imposed, whether the format used has been effective and, compared with the theory, why. We identify the problem areas and then carry out a psychological intervention. The aim is to create a system for educating in the four proposed core values, based on a time structure organized around three axes: time, communication and values.
APA, Harvard, Vancouver, ISO, and other styles
42

Guo, We-Ker, and 郭韋克. "3-D Model Reconstruction and Pre-Processing Software Development for Finite Element Analysis." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/63655703945627511332.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Mechanical Engineering (Master's and Doctoral Program)
91
With the rapid development of computer technology, more and more CAD/CAM/CAE tools are used in product design. However, the model construction capability of CAE tools is often not powerful or convenient enough for building complex models, so CAD tools are used to construct the solid models of products instead. In this way the model construction time is reduced and the total analysis time is shorter. This thesis therefore focuses on solid model reconstruction and solid mesh generation. For solid model reconstruction, the standard data exchange format for geometry models created by CAD tools, STEP, is imported and the model is reconstructed. For solid mesh generation, tetrahedral and hexahedral mesh generation techniques are discussed; the generated solid mesh is better suited to further computer simulation. With these two functions, the difficulty of pre-processing is reduced and the efficiency and accuracy of the simulation are much improved.
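As an illustration of the kind of element-level check a mesh pre-processor performs, the sketch below computes the volume and a crude shape-quality ratio for tetrahedral elements. The node coordinates and connectivity are placeholders, not output from the software developed in the thesis.

```python
# Hedged sketch of a basic tetrahedral-element check: signed volume and a
# simple scale-invariant shape-quality measure.  Mesh data are invented.
import numpy as np

nodes = np.array([            # x, y, z coordinates of mesh nodes
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 1.0, 1.0],
])
tets = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])   # 4 node indices per element

def tet_volume(p):
    a, b, c, d = p
    return np.linalg.det(np.stack([b - a, c - a, d - a])) / 6.0

for elem in tets:
    p = nodes[elem]
    vol = tet_volume(p)
    edges = [p[i] - p[j] for i in range(4) for j in range(i + 1, 4)]
    rms_edge = np.sqrt(np.mean([np.dot(e, e) for e in edges]))
    quality = abs(vol) / rms_edge**3        # crude shape ratio; near 0 = degenerate
    print(f"element {elem}: volume={vol:.4f}, quality={quality:.4f}")
```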
APA, Harvard, Vancouver, ISO, and other styles
43

Kuo, Tai-Hong, and 郭泰宏. "The Development of Medical Image Software – Fundamental Interface and Three-dimensional Solid Modeling Reconstruction." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/93134433261972020216.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Mechanical Engineering (Master's and Doctoral Program)
90
In the last decade, great progress in computer techniques, combined with medical CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) equipment, has improved human welfare. The capabilities of computer graphics technology have been greatly enhanced, so three-dimensional physical models created by laser-sintering machines are now widely used. The aim of this research is to develop computer-assisted medical image software. To help medical doctors preview sectional images from any angle, we reconstruct virtual 3D models on screen from the patient's tomography data. This fundamental research involves techniques from medical radiation and nuclear systems as well as mechanical laser-sintering systems. The STL (Stereo-Lithography) file format is generated by our software, and the models produced by the RP (Rapid Prototyping) machine are used to assist surgeons in surgical planning, pre-operative simulation and shaped implants. These outcomes can reduce complex steps in surgical operations and improve the probability of surgical success. Meanwhile, we also investigate the errors introduced by the reconstruction process and develop a refined method to reduce them. Based on the experimental results, we suggest that the CT scan spacing should be close to the pixel size of the CT image in order to improve accuracy. Furthermore, we develop a simple compensation method to mitigate the problems caused by a large scan step: virtual pixels are interpolated in order to produce a smooth model.
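The interpolation of virtual slices mentioned at the end of the abstract can be illustrated as follows, assuming simple linear interpolation along the scan axis so that the voxels become roughly isotropic; the array sizes and spacings are placeholders, not parameters from the thesis.

```python
# Hedged sketch of slice interpolation: when the CT slice spacing is much
# larger than the in-plane pixel size, intermediate "virtual" slices can be
# interpolated before surface reconstruction.  Data below are synthetic.
import numpy as np
from scipy.ndimage import zoom

# Volume of 10 measured slices, each 128 x 128 pixels.
volume = np.random.rand(10, 128, 128).astype(np.float32)

pixel_size = 0.5   # mm, in-plane pixel size (assumed)
slice_step = 2.0   # mm, distance between measured slices (assumed)

# Linearly interpolate along the slice axis so voxels become ~isotropic.
factor = slice_step / pixel_size
iso_volume = zoom(volume, zoom=(factor, 1.0, 1.0), order=1)
print(volume.shape, "->", iso_volume.shape)   # (10, 128, 128) -> (40, 128, 128)
```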
APA, Harvard, Vancouver, ISO, and other styles
44

Almeida, Vítor Miguel Amorim de. "3D reconstruction through photographs." Master's thesis, 2014. http://hdl.handle.net/10400.13/1057.

Full text
Abstract:
Humans perceive three dimensions; our world is three-dimensional, and it is becoming increasingly digital too. We feel the need to capture and preserve our existence in digital form, perhaps because of our own mortality. We also need to reproduce objects, or create small identical copies, in order to prototype, test or study them. Some objects have been lost over time and are only accessible through old photographs. With robust model generation from photographs we can exploit one of the largest human data sets and reproduce real-world objects both digitally and, with printers, physically. What is the current state of development in three-dimensional reconstruction from photographs, both in the commercial world and in the open-source world? And what tools are available for a developer to build his own reconstruction software? To answer these questions several pieces of software were tested, from full commercial packages to small open-source projects, including libraries aimed at computer vision. To bring the 3D models into the real world, a 3D printer was built, tested and analyzed, and its problems and weaknesses evaluated. Lastly, using a computer vision library, a small piece of software with limited capabilities was developed.
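A hedged sketch of a minimal two-view reconstruction step of the kind such a small program might implement, assuming OpenCV as the computer vision library (the abstract does not name the library used) and placeholder image paths and camera intrinsics.

```python
# Hedged two-view reconstruction sketch: match features, estimate the
# essential matrix, recover relative pose and triangulate a sparse cloud.
# Image paths and the intrinsic matrix K are hypothetical placeholders.
import cv2
import numpy as np

K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])  # assumed intrinsics

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(4000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]

p1 = np.float32([k1[m.queryIdx].pt for m in matches])
p2 = np.float32([k2[m.trainIdx].pt for m in matches])

E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)

# Triangulate the matches into a sparse 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
points3d = (pts4d[:3] / pts4d[3]).T
print(points3d.shape, "3D points triangulated")
```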
APA, Harvard, Vancouver, ISO, and other styles
45

Gentiluomo, Gina Marie. "The reproducibility of incomplete skulls using freeform modeling plus software." Thesis, 2014. https://hdl.handle.net/2144/15381.

Full text
Abstract:
As early as 1883, forensic artists and forensic anthropologists have utilized forensic facial reconstruction in the attempt to identify skulls from decomposed remains. Common knowledge dictates that in order to complete identification from the skull with facial reconstruction, the splanchnocranium (also known as the viscerocranium or facial portion of the skull) needs to still be intact. However, there has been very little research conducted (Colledge 1996; Ismail 2008; Wilkinson and Neave 2001) to determine the minimal amount of intact skull that can be present for a reconstruction to still be possible and accurate. Accordingly, in the present study, the researcher attempted to prove that a skull with significant damage to the splanchnocranium could be repaired and facially reconstructed to bear a likeness to the original skull and face. Utilizing FreeForm Modeling Plus Software, version 11.0 (Geomagic Solutions - Andover, MA), in conjunction with the Phantom Desktop Haptic Device (Geomagic Solutions - Andover, MA), five CT scans of males between 19 and 40 years old and of varying ethnicities (four Caucasian and one Asian) were digitally altered to present significant skull damage to the splanchnocranium. The hard tissue digital images were repaired using the same software mentioned above and template skulls (i.e., superfluous CT scanned skulls of similar age, sex, and ancestry). The soft tissue digital images were facially reconstructed also utilizing the same software mentioned above and by following basic tissue depth charts/placement rules and guidelines for feature reconstruction. The reconstructed images were compared to their original CT scans in a side-by-side comparison. Assessors were given a rating scale rubric to fill out that asked them specific questions pertaining to both certain facial features and overall similarity between the original and reconstructed images. Two of the reconstructions each ranked an overall 29% "close resemblance" to their original counterparts, one was ranked an overall 71% "no resemblance" to its original counterpart, and the other three fell somewhere in the middle ("slight" or "approximate") in the rating scale. The results reflected a number of issues related to this project (i.e., the researcher's lack of artistic skill) and to facial reconstruction in general (i.e., tissue depth measurement charts) and showed that while it is not impossible to reconstruct skulls that had been damaged in some capacity, the accuracy of the resulting facial reconstruction is questionable. Future studies would benefit from using an artist to reconstruct the images rather than someone with little to no experience in the field, a larger sample size consisting of one ancestry to avoid the cross-race effect, a comparison of the original skull to the repaired one utilizing Geomagic Qualify (Geomagic Solutions - Andover, MA) to glean an overall view of the project's accuracy, and utilization of a photo lineup as the method of comparison in addition to a side-by-side comparison to give a more realistic feel to the comparison process.
APA, Harvard, Vancouver, ISO, and other styles
46

Chang, Tsai-Rong, and 張財榮. "The Study of 3D Medical Image Reconstruction and It''''s Application Using VRML Software-Component." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/03778989455274082016.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering
87
Medicine is an extremely challenging field of research which has been, more than any other discipline, of fundamental importance to human existence. The variety and inherent complexity of its unsolved problems has made it a major driving force for many natural and engineering sciences; hence, from the early days of computer graphics, the medical field has been one of its most important application areas, with an enduring supply of exciting research challenges. Physicians usually rely on various imaging modalities, such as X-ray, computed tomography, magnetic resonance imaging and nuclear scans, to generate sequences of 2D medical images for diagnosis and for rehearsing operations. Nonetheless, the amount of information these images provide is limited, and the most effective judgement cannot always be made from them. 3D imaging is a new research direction for computer graphics, both for data analysis and for surgical simulation. Furthermore, the increasing popularity of the World Wide Web has shortened the distance between people, and telemedicine systems, which incorporate modern computer technology, are becoming a new diagnostic pattern in medicine. This thesis not only discusses 3D model reconstruction from medical images, but also explores the use of VRML software components to display 3D medical models. The scene graph of the 3D virtual environment can be accessed from commonly used programming languages (Visual C++, Visual Basic, C++ Builder, Delphi, etc.) or from platform-independent languages (Java, JavaScript, etc.). By applying the proposed technique we can integrate 3D medical imaging into the web and bring telemedicine into the world of virtual reality, so that high-quality medical services can also be provided through remote virtual operations.
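As a small illustration of how a reconstructed mesh could be handed to VRML software components, the sketch below writes a minimal VRML 2.0 scene containing an IndexedFaceSet; the tiny tetrahedral mesh is a placeholder, not a reconstructed medical model.

```python
# Hedged sketch: export a triangle mesh as a minimal VRML 2.0 (.wrl) scene
# that a VRML component or browser plug-in could display.  The mesh data
# below are placeholders.
vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]   # triangle vertex indices

points = ", ".join(f"{x} {y} {z}" for x, y, z in vertices)
indices = ", ".join(f"{a}, {b}, {c}, -1" for a, b, c in faces)  # -1 ends each face

vrml = f"""#VRML V2.0 utf8
Shape {{
  appearance Appearance {{ material Material {{ diffuseColor 0.9 0.8 0.7 }} }}
  geometry IndexedFaceSet {{
    coord Coordinate {{ point [ {points} ] }}
    coordIndex [ {indices} ]
  }}
}}
"""

with open("model.wrl", "w") as f:
    f.write(vrml)
```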
APA, Harvard, Vancouver, ISO, and other styles
47

Krogmann, Klaus [Verfasser]. "Reconstruction of software component architectures and behaviour models using static and dynamic analysis / von Klaus Krogmann." 2010. http://d-nb.info/1010373587/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, Huang-Tsu, and 黃祖揚. "A Study of Performance of Accident Reconstruction Software-Using PC-Crash and HVE Programs as Example." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/06847498202905342715.

Full text
Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Traffic Engineering and Management
96
Traffic accident reconstruction software can visually show the course of a traffic accident and provide scientific data to users and other interested parties, so that the course of an accident can be understood easily and quickly and its causes can ultimately be determined. In Taiwan, two accident simulation packages, PC-Crash and HVE, are comparatively widely used. The aim of this study is therefore to fully understand the fundamental principles of both packages, collect data from full-scale dynamic vehicle tests (braking distance, skid-mark distance, two-car collisions, impacts with a fixed barrier, and so on), feed these data into PC-Crash and HVE, and work out the differences between the two packages. The results show that (1) the error rates for braking distance and skid-mark distance are less than 6% for both packages; (2) in the two-car collision and barrier collision simulations, HVE reproduces vehicle damage better than PC-Crash, whereas PC-Crash predicts the collision velocity change (ΔV) better than HVE. Furthermore, this study uses HVE-EDCRASH to reconstruct the impact speeds and ΔVs of two cars before a collision on the basis of the vehicle damage and the relative vehicle positions after the collision. The prediction accuracy for oblique collisions is substantially better than for collinear collisions of two cars, where the error reaches up to 50%.
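A worked example of the quantity being compared, assuming a collinear, perfectly plastic two-car impact: the velocity change (ΔV) of each vehicle follows from conservation of momentum. The masses and speeds are invented numbers, not data from the full-scale tests.

```python
# Hedged worked example: delta-V of each vehicle in a collinear, perfectly
# plastic two-car impact, from conservation of momentum.  Values are invented.
m1, v1 = 1500.0, 15.0   # kg, m/s  (striking car)
m2, v2 = 1200.0, 0.0    # kg, m/s  (struck car, initially at rest)

v_common = (m1 * v1 + m2 * v2) / (m1 + m2)   # common post-impact velocity
dv1 = v_common - v1                          # delta-V of vehicle 1
dv2 = v_common - v2                          # delta-V of vehicle 2

print(f"common velocity: {v_common:.2f} m/s")
print(f"delta-V car 1: {dv1:+.2f} m/s, delta-V car 2: {dv2:+.2f} m/s")
```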
APA, Harvard, Vancouver, ISO, and other styles
49

Dias, António Carlos Fortuna Ribeiro. "Development and implementation of bioinformatics tools for the reconstruction of GiSMos." Master's thesis, 2017. http://hdl.handle.net/1822/56103.

Full text
Abstract:
Master's dissertation in Bioinformatics
The reconstruction of Genome-Scale Metabolic Models (GiSMos) is a rapidly growing methodology that makes it possible to develop models for in silico prediction of an organism's phenotypic response to environmental changes and genetic modifications. These predictions allow in vivo experiments to focus on the approaches that should, in theory, give the best results, thereby reducing the large amounts of time and money spent on laboratory experiments. GiSMos are a mathematical representation of an organism's genome in the form of metabolic networks. Because of the large number of compounds involved in many different reactions and pathways, these networks can be very complex, and handling all of this data manually is not easy. Several bioinformatics software packages have been developed to improve the procedure by automating many operations in the reconstruction process. Metabolic Models Reconstruction Using Genome-Scale Information (merlin) is one such tool; its philosophy is to provide an intuitive and powerful graphical environment for annotating data on key metabolic components and building a complete genome-scale model from it. While it already offers a wide range of tools, it is still under development. An analysis of its functioning identified several opportunities for improvement, mainly in existing operations, as well as important features for the reconstruction of GiSMos that were still missing. This work details the results of this analysis and the improvements made to enrich merlin's toolbox.
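As a hint of the in silico predictions such genome-scale models enable, the sketch below runs a flux balance analysis on a toy three-reaction network with scipy; the network is invented for illustration and is unrelated to merlin or any reconstructed model.

```python
# Hedged sketch of flux balance analysis (FBA): maximize a biomass reaction
# subject to steady-state mass balance S·v = 0 and flux bounds.  The tiny
# network is invented for illustration.
import numpy as np
from scipy.optimize import linprog

# Columns: uptake, reaction A->B, biomass (consumes B).  Rows: metabolites A, B.
S = np.array([
    [1, -1,  0],   # A: produced by uptake, consumed by A->B
    [0,  1, -1],   # B: produced by A->B, consumed by biomass
])
bounds = [(0, 10), (0, 1000), (0, 1000)]    # flux bounds for each reaction

# linprog minimizes, so negate the biomass objective to maximize it.
c = np.array([0, 0, -1])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x, "max biomass flux:", -res.fun)
```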
APA, Harvard, Vancouver, ISO, and other styles
50

Correia, Carlos Manuel Leitão. "Avaliação de software de reconstrução ótica em tomografia difusa para geometria de transiluminação." Master's thesis, 2019. http://hdl.handle.net/10316/88061.

Full text
Abstract:
Dissertation for the Integrated Master's degree in Biomedical Engineering, presented to the Faculdade de Ciências e Tecnologia
The main goal of this project was to evaluate an optical reconstruction software package for diffuse tomography, TOAST (Time-Resolved Optical Absorption and Scatter Tomography), when applied to a transillumination geometry. With that in mind, we studied and adapted a demo from the TOAST++ package available online, based on a cylindrical geometry whose sample consists of a set of spheres with distinct optical properties, and adapted the problem to a laminar geometry with a single centered sphere. TOAST simulations require representing the sample by a mesh of nodes carrying the optical properties of the medium (absorption coefficient, diffusion coefficient and refractive index) and a mesh of tetrahedral elements created from those nodes with the mesh generator TetGen. Performance was evaluated through four sets of simulations: a preliminary set studying the impact of the sphere size, the mesh density and the detector width on the reconstruction; a second set studying how the results depend on the initial values of the optical properties assigned to the reconstruction process; a third set analyzing the information loss caused by adopting a laminar geometry, by comparison with a cylindrical-geometry simulation; and a final set designed to evaluate the linearity of the reconstruction with respect to the optical properties of the object. The results show that denser sample meshes and larger spherical objects yield higher reconstruction quality. The reconstructed distributions of optical properties depend strongly on the optical properties chosen to initialize the reconstruction, and the transillumination geometry is less able to reconstruct the shape of the object than the cylindrical geometry. In conclusion, the software tends to have difficulty reconstructing small objects, a good reconstruction requires the object to be represented by a significant number of nodes, and the transillumination geometry has lower reconstruction fidelity because the projections are limited to the illumination and detection planes.
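A small illustration of the optical properties carried by each node of the TOAST mesh, assuming the commonly used diffusion-approximation relation D = 1/(3(μa + μs′)); the coefficient values are typical soft-tissue orders of magnitude, not those used in the dissertation.

```python
# Hedged illustration of node-level optical properties in diffuse optical
# tomography.  One common convention defines the diffusion coefficient as
# D = 1 / (3 (mu_a + mu_s')); values below are only typical magnitudes.
import math

mu_a = 0.01          # absorption coefficient, mm^-1 (assumed)
mu_s_prime = 1.0     # reduced scattering coefficient, mm^-1 (assumed)
n = 1.4              # refractive index of the medium (assumed)

D = 1.0 / (3.0 * (mu_a + mu_s_prime))                   # diffusion coefficient, mm
mu_eff = math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))    # effective attenuation, mm^-1

print(f"D = {D:.4f} mm, mu_eff = {mu_eff:.4f} mm^-1, n = {n}")
```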
APA, Harvard, Vancouver, ISO, and other styles