Dissertations on the topic "Software reconstruction"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 dissertations for research on the topic "Software reconstruction."
Next to every work in the list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.
You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the work's metadata.
Browse dissertations from many different disciplines and compile your bibliography correctly.
Knodel, Jens. "Process models for the reconstruction of software architecture views." [S.l. : s.n.], 2002. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10252225.
Collins, Anthony Leslie. "The tomographic reconstruction of holographic interferograms." Thesis, City University London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287671.
Strother, Philip David. "Design and application of the reconstruction software for the BaBar calorimeter." Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314229.
Taylor, Ian James. "Development of T2K 280m near detector software for muon and photon reconstruction." Thesis, Imperial College London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.505000.
Повний текст джерелаDikmen, Mehmet. "3d Face Reconstruction Using Stereo Vision." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607543/index.pdf.
[…]'s face. It is seen that using the projection pattern also provided enough feature points to derive the 3D face roughly. These points are then used to fit a generic face mesh for a more realistic model. To cover this 3D model, a single texture image is generated from the initial stereo photographs.
Pardoe, Andrew Charles. "Neural network image reconstruction for nondestructive testing." Thesis, University of Warwick, 1996. http://wrap.warwick.ac.uk/44616/.
Ren, Yuheng. "Implicit shape representation for 2D/3D tracking and reconstruction." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:c70dc663-ee7c-4100-b492-3a85bf8640d1.
Повний текст джерелаYamada, Randy Matthew. "Identification of Interfering Signals in Software Defined Radio Applications Using Sparse Signal Reconstruction Techniques." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/50609.
Radio systems commonly tune hardware manually or use software controls to digitize sub-bands as needed, critically sampling those sub-bands according to the Nyquist criterion. Recent technology advancements have enabled efficient and cost-effective over-sampling of the spectrum, allowing all bandwidths of interest to be captured for processing simultaneously, a process known as band-sampling. Simultaneous access to measurements from all of the frequency sub-bands enables both awareness of the spectrum and seamless operation between radio applications, which is critical to many applications. Further, more information may be obtained for the spectral content of each sub-band from measurements of other sub-bands that could improve performance in applications such as detecting the presence of interference in weak signal measurements.
This thesis presents a new method for confirming the source of detected energy in weak signal measurements by sampling them directly, then estimating their expected effects. First, we assume that the detected signal is located within the frequency band as measured, and then we assume that the detected signal is, in fact, interference perceived as a result of signal aliasing. By comparing the expected effects to the entire measurement and assuming the power spectral density of the digitized bandwidth is sparse, we demonstrate the capability to identify the true source of the detected energy. We also demonstrate the ability of the method to identify interfering signals not by explicitly sampling them, but rather by measuring the signal aliases that they produce. Finally, we demonstrate that by leveraging techniques developed in the field of Compressed Sensing, the method can recover signal aliases by analyzing less than 25 percent of the total spectrum.
Master of Science
Andersson, Sebastian. "Implementation of a reconstruction software and image quality assessment tool for a micro-CT system." Thesis, KTH, Fysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-183147.
Gryshkov, O. P., M. Y. Tymkovych, О. Г. Аврунін, and B. Glasmacher. "Experience of development and use of specialized software intended for automated analysis of alginate structures." Thesis, ХНУРЕ, 2019. http://openarchive.nure.ua/handle/document/8374.
Lotfy, M. Y. "Stereoscopic image feature matching during endoscopic procedure." Thesis, Boston, USA, 2020. http://openarchive.nure.ua/handle/document/11836.
Wilhelm, Andreas Johannes [Verfasser], Hans Michael [Akademischer Betreuer] Gerndt, Hans Michael [Gutachter] Gerndt, and Felix [Gutachter] Wolf. "Interactive Software Parallelization Based on Hybrid Analysis and Software Architecture Reconstruction / Andreas Johannes Wilhelm ; Gutachter: Hans Michael Gerndt, Felix Wolf ; Betreuer: Hans Michael Gerndt." München : Universitätsbibliothek der TU München, 2019. http://d-nb.info/1185637990/34.
Judeh, Thair. "SEA: a novel computational and GUI software pipeline for detecting activated biological sub-pathways." ScholarWorks@UNO, 2011. http://scholarworks.uno.edu/td/463.
Zapalowski, Vanius. "Evaluation of code-based information to architectural module identification." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/94691.
Software architecture plays an important role in software development, and when explicitly documented, it allows understanding an implemented system and reasoning about how non-functional requirements are addressed. In spite of that, many developed systems lack proper architecture documentation, and where it exists, it may be outdated due to software evolution. The process of recovering the architecture of a system depends mainly on developers' knowledge, requiring a manual inspection of the source code. Research on architecture recovery provides support to this process. Most of the existing approaches are based on architectural element dependencies, architectural patterns, or source code semantics, but even though they help identify architectural modules, the obtained results must be significantly improved to be considered reliable. We thus aim to support this task by the exploitation of different code-oriented information and machine learning techniques. Our work consists of an analysis, involving five case studies, of the usefulness of adopting a set of code-level characteristics (or features, in the machine learning terminology) to group elements into architectural modules. The characteristics, mainly source code metrics, that affect the identification of what role software elements play in software architecture are unknown. We therefore evaluate the relationship between different sets of characteristics and the accuracy achieved by an unsupervised algorithm, Expectation Maximization, in identifying architectural modules. Consequently, we are able to understand which of those characteristics reveal information about the source code structure. By the use of code-oriented information, our approach achieves a significant average accuracy, which indicates the importance of the selected information to recover software architecture. Additionally, we provide a tool to support research on architecture recovery, providing software architecture measurements and visualizations. It presents comparisons between predicted architectures and concrete architectures.
Krogmann, Klaus [Verfasser], and R. [Akademischer Betreuer] Reussner. "Reconstruction of Software Component Architectures and Behaviour Models using Static and Dynamic Analysis / Klaus Krogmann ; Betreuer: R. Reussner." Karlsruhe : KIT Scientific Publishing, 2012. http://d-nb.info/1184493901/34.
Wu, Qing Hua. "Image segmentation and reconstruction based on graph cuts and texton mask." Thesis, University of Macau, 2007. http://umaclib3.umac.mo/record=b1677228.
Hauth, Thomas [Verfasser], and G. [Akademischer Betreuer] Quast. "New Software Techniques in Particle Physics and Improved Track Reconstruction for the CMS Experiment / Thomas Hauth. Betreuer: G. Quast." Karlsruhe : KIT-Bibliothek, 2014. http://d-nb.info/1066737037/34.
Gessinger-Befurt, Paul [Verfasser]. "Development and improvement of track reconstruction software and search for disappearing tracks with the ATLAS experiment / Paul Gessinger-Befurt." Mainz : Universitätsbibliothek der Johannes Gutenberg-Universität Mainz, 2021. http://d-nb.info/1233783203/34.
Noschinski, Leonie. "Validierung einer neuen Software für halbautomatische Volumetrie – ist diese besser als manuelle Messungen?" Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-210703.
The aim of this study was to compare a manual method for liver volumetry with a semi-automatic software tool. The hypothesis to be tested was that the semi-automatic software is superior in terms of speed, accuracy, and independence from the evaluator's experience. Material and methods: The study was approved by the ethics committee, and informed consent was obtained from all patients. Ten patients who underwent hemihepatectomy were included in the study. A preoperative CT scan was acquired and used both for volumetry of the whole liver and for determining the volume of the resected specimen. Two different programs were used for the volumetry: 1) a manual method, in which the liver boundaries had to be defined by the evaluator in every slice; 2) a semi-automatic software tool with automatic detection of the liver volume and manual definition of the liver segments according to Couinaud. The measurements were performed by six evaluators with different levels of experience. Displacement volumetry of the liver resectate, carried out in the operating room immediately after resection, served as the gold standard. A CT scan of the resectate was subsequently acquired as well. Results: The results of the postoperative CT scan correlated strongly with the results of the displacement volumetry (manual: ρ=0.997; semi-automatic software: ρ=0.995). With the semi-automatic software, the differences between the predicted and the actual volume were significantly smaller (33% vs. 57%, p=0.002). In addition, the semi-automatic software delivered the whole-liver volumes 3.9 times faster. Conclusion: Both methods allow a very good estimation of the liver volume. However, the tested semi-automatic software predicts the liver volume faster and the resectate volume more accurately, and is also less dependent on the evaluator's experience.
Fredriksson, Mattias. "Tree structured neural network hierarchy for synthesizing throwing motion." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20812.
Sumer, Emre. "Automatic Reconstruction Of Photorealistic 3-d Building Models From Satellite And Ground-level Images." Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613131/index.pdf.
Le Borgne, Alexandre. "ARIANE : Automated Re-Documentation to Improve software Architecture uNderstanding and Evolution." Thesis, IMT Mines Alès, 2020. http://www.theses.fr/2020EMAL0001.
All along its life-cycle, a software system may be subject to numerous changes that may affect its coherence with its original documentation. Moreover, despite the general agreement that up-to-date documentation is a great help in recording design decisions all along the software life-cycle, software documentation is often outdated. Architecture models are one of the major documentation pieces. Ensuring coherence between them and other models of the software (including code) during software evolution (co-evolution) is a strong asset to software quality. Additionally, understanding a software architecture is highly valuable in terms of reuse, evolution, and maintenance capabilities. For that reason, re-documenting software becomes essential for easing the understanding of software architectures. However, architectures are rarely available, and many research works aim at automatically recovering software architectures from code. Yet, most of the existing re-documenting approaches do not perform a strict reverse-documenting process to re-document architectures "as they are implemented", and instead perform re-engineering by clustering code into new components. Thus, this thesis proposes a framework for re-documenting architectures as they have been designed and implemented, to provide support for analyzing architectural decisions. This re-documentation is performed from the analysis of both object-oriented code and project deployment descriptors. The re-documentation process targets the Dedal architecture language, which is especially tailored for managing and driving software evolution. Another highly important aspect of software documentation relates to the way concepts are versioned. Indeed, in many approaches and actual version control systems such as GitHub, files are versioned in an agnostic manner. This way of versioning keeps track of any file history. However, no information can be provided on the nature of the new version, especially regarding backward-compatibility with previous versions. This thesis thus proposes a formal way to version software architectures, based on the use of the Dedal architecture description language, which provides a set of formal properties. It makes it possible to automatically analyze versions in terms of substitutability and version propagation, and proposes an automatic way of incrementing version tags so that their semantics correspond to the actual evolution impact. By proposing such a formal approach, this thesis intends to prevent software drift and erosion. This thesis also presents an empirical study of both the re-documenting and versioning processes on numerous versions of an enterprise project taken from GitHub.
Sun, Yi-Ran. "Generalized Bandpass Sampling Receivers for Software Defined Radio." Doctoral thesis, Stockholm, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4009.
Björklund, Daniel. "Implementation of a Software-Defined Radio Transceiver on High-Speed Digitizer/Generator SDR14." Thesis, Linköpings universitet, Elektroniksystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-78213.
Abu-Al-Saud, Wajih Abdul-Elah. "Efficient Wideband Digital Front-End Transceivers for Software Radio Systems." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5257.
Oudot, Steve Y. "Échantillonnage et maillage de surfaces avec garanties." Phd thesis, Ecole Polytechnique X, 2005. http://tel.archives-ouvertes.fr/tel-00338378.
Bismack, Brian James. "Implementation of the Dosimetry Check Software Package in Computing 3D Patient Exit Dose Through Generation of a Deconvolution Kernel to be Used for Patients’ IMRT Treatment Plan QA." University of Toledo Health Science Campus / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=mco1290456365.
Mikulášková, Eliška. "Technika zatáčení řidičů a možnosti vozidel v aplikaci software pro analýzu nehod." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2018. http://www.nusl.cz/ntk/nusl-382228.
Нещерет, Марина Олександрівна. "Environmental impact assessment of the reconstruction of the car M-14 road on the Kherson-Mariupol section." Thesis, Національний авіаційний університет, 2020. https://er.nau.edu.ua/handle/NAU/44918.
Object of research – M-14 road reconstruction on the Kherson-Mariupol section. Aim of work – to analyze possible effects of the M-14 road reconstruction on all components of the environment on the Kherson-Mariupol section; to assess the negative impacts on the air environment. Methods of research: mathematical calculations, analysis and synthesis of information, computer software processing, geospatial analysis (Google Earth maps).
Нещерет, Марина Олександрівна. "Environmental impact assessment of the reconstruction of the car M-14 road on the Kherson-Mariupol section." Thesis, Національний авіаційний університет, 2020. https://er.nau.edu.ua/handle/NAU/49683.
Object of research – M-14 road reconstruction on the Kherson-Mariupol section. Aim of work – to analyze possible effects of the M-14 road reconstruction on all components of the environment on the Kherson-Mariupol section; to assess the negative impacts on the air environment. Methods of research: mathematical calculations, analysis and synthesis of information, computer software processing, geospatial analysis (Google Earth maps).
Bayir, Murat Ali. "A New Reactive Method For Processing Web Usage Data." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12607323/index.pdf.
In this thesis, a new session reconstruction method called 'Smart-SRA' is introduced. Web usage mining is a type of web mining, which exploits data mining techniques to discover valuable information from the navigations of Web users. As in classical data mining, data processing and pattern discovery are the main issues in web usage mining. The first phase of web usage mining is the data processing phase, which includes session reconstruction. Session reconstruction is the most important task of web usage mining, since it directly and significantly affects the quality of the frequent patterns extracted in the final step. Session reconstruction methods can be classified into two categories, namely 'reactive' and 'proactive', with respect to the data source and the data processing time. If the user requests are processed after the server handles them, the technique is called 'reactive', while in 'proactive' strategies this processing occurs during the interactive browsing of the web site. Smart-SRA is a reactive session reconstruction technique which uses web log data and the site topology. In order to compare Smart-SRA with previous reactive methods, a web agent simulator has been developed. Our agent simulator models the behavior of web users and generates web user navigations as well as the log data kept by the web server. In this way, the actual user sessions are known and the success of different techniques can be compared. In this thesis, it is shown that the sessions generated by Smart-SRA are more accurate than the sessions constructed by previous heuristics.
Reiss, Mário Luiz Lopes. "Reconstrução tridimensional digital de objetos à curta distância por meio de luz estruturada." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2007. http://hdl.handle.net/10183/10072.
The purpose of this work is to present the structured light system that was developed. The system, named Scan3DSL, is based on off-the-shelf digital cameras and a pattern projector. The mathematical model for 3D reconstruction is based on the parametric equation of the projected straight line combined with the collinearity equations. A pattern codification strategy was developed to allow fully automatic pattern recognition. A calibration methodology enables the determination of the direction vector of each pattern and the coordinates of the perspective centre of the pattern projector. The calibration processes are carried out with the acquisition of several images of a flat surface from different distances and orientations. Several processes were combined to provide a reliable solution for pattern location. In order to assess the accuracy and the potential of the methodology, a prototype was built, integrating a pattern projector and a digital camera in a single mount. Experiments on reconstructed surfaces with real data indicated that a relative accuracy of 0.2 mm in depth could be achieved, with a processing time of less than 10 seconds.
Lahouli, Rihab. "Etude et conception de convertisseur analogique numérique large bande basé sur la modulation sigma delta." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0074/document.
The work presented in this Ph.D. dissertation deals with the design of a wideband and accurate Analog-to-Digital Converter (ADC) able to digitize signals of different wireless communication standards, thereby responding to the Software Defined Radio (SDR) concept. The purpose is reconfigurability by software and integrability of the multistandard radio terminal. Oversampling (Sigma Delta) ADCs have been interesting candidates in this context of multistandard SDR reception thanks to their high accuracy. Although they present limited operating bandwidth, it is possible to use them in a parallel architecture so that the bandwidth is extended. Therefore, we propose in this work the design and implementation of a parallel frequency band decomposition ADC based on discrete-time Sigma Delta modulators in an SDR receiver handling E-GSM, UMTS and IEEE 802.11a standard signals. The novelty of this proposed architecture is its programmability: according to the selected standard, digitization is performed by activating only the required branches, with specified sub-bandwidths and sampling frequency. In addition, the frequency division plan is non-uniform. After validation of the theoretical design by simulation, the overall baseband stage was designed. The results of this study led to a single passive 6th-order Butterworth anti-aliasing filter (AAF), permitting the elimination of the automatic gain control (AGC) circuit, which is an analog component. The FBD architecture requires digital processing able to recombine the output signals of the parallel branches in order to reconstruct the final output signal. An optimized design of this digital signal reconstruction stage has been proposed. Synthesis of the baseband stage revealed modulator stability problems. To deal with this problem, a solution based on non-unitary STFs was elaborated. Indeed, phase mismatches appeared in the recombined output signal, and they were corrected in the digital stage. The analytic study and system-level design were completed by an implementation of the digital reconstruction stage of the parallel ADC. Two design flows were considered, one associated with the FPGA and another independent of the chosen target (standard VHDL). The proposed architecture was validated using a Xilinx VIRTEX6 FPGA target. A dynamic range of over 74 dB was measured for the UMTS use case, which meets the dynamic range required by this standard.
Alliez, Pierre. "Approches variationnelles pour le traitement numérique de la géométrie." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00434316.
Strand, Mathias. "Standardisering av processer och aktiviteter inom kontrollanläggningar och elmontage." Thesis, KTH, Data- och elektroteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169324.
In this thesis, a study was carried out for the consulting company ÅF on their working methods in documentation and control facilities, to investigate whether there was any potential for efficiency improvements. The investigation largely involved interviews with consultants working on control facilities at substations. The consultants worked as electrical designers and produced drawings, mainly for control equipment. The results of the interviews were analyzed to draw conclusions about the efficiency potential within the business. Different offices in the business were examined, and the work approach varied between them. One difference was the CAD software used, and one suggestion was to use the same program everywhere. Efficiency improvement potential was also found in re-using electrical schematics from previous projects to some extent, and another suggestion was to establish databases where electrical schematics can be gathered and shared between the different offices.
Kizilgul, Serdar A. "Study of Pion Photo-Production Using a TPC Detector to Determine Beam Asymmetries from Polarized HD." Ohio University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1210629380.
Miranda, Geraldo Elias. "Avaliação da acurácia e da semelhança da reconstrução facial forense computadorizada tridimensional e variação facial fotoantropométrica intraindivíduo." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/23/23153/tde-05112018-125105/.
This thesis contains three chapters. The aim of the first chapter was to evaluate the accuracy and recognition level of three-dimensional (3D) computerized forensic craniofacial reconstruction (CCFR) performed in a blind test with open-source software, using computed tomography data from live subjects. The CCFRs were completed using Blender® with 3D models obtained from the computed tomography data and templates from the MakeHuman® program. The evaluation of accuracy was carried out in CloudCompare®, by geometric comparison of the CCFR with the subject's 3D face model (obtained from the CT data). Recognition was assessed using Picasa® with a standardized frontal photograph. The results are presented for all the points that form the CCFR model, with an average of between 63.20% and 73.67% of points per comparison lying within -2.5 <= x <= 2.5 mm of the skin surface, and average distances from 0.33 to 1.66 mm. Two of the four CCFRs were correctly matched by the Picasa® tool. Free software programs are capable of producing 3D CCFRs with plausible levels of accuracy and recognition, which indicates their value for forensic applications. The other two chapters study facial comparison and aimed to evaluate the metrical stability of an individual's face in photographs taken within a five-year interval. It is a longitudinal study composed of standard frontal photographs of 666 adults divided by sex and age group. Using the SAFF 2D® software, 32 landmarks were positioned, whose coordinates were used to calculate 40 measurements, 20 horizontal and 20 vertical. Each of these measurements was divided by the iris diameter, and iridian ratios were thus obtained. The results showed that most of the ratios did not undergo statistically significant variations. The ratios with the greatest variation across the age groups were those of the nose and mouth regions. When the age groups are compared with each other, the great majority of the ratios differ, showing the influence of age on facial dimensions. Regarding stability with respect to sex, some ratios decreased and others increased in both sexes, while other ratios varied only in females or only in males. When the sexes were compared, the majority of the ratios differed, showing sexual dimorphism of the facial measures. The face undergoes metrical alterations throughout life, mainly in the region of the nose and mouth, with the greatest differences seen in those aged 60 years and older. In addition, some facial measures are more influenced by sex than others. However, most of the measures remained relatively stable within a period of five years in both sexes and all age groups.
Buckland, Philip. "The development and implementation of software for palaeoenvironmental and palaeoclimatological research : the Bugs Coleopteran Ecology Package (BugsCEP)." Doctoral thesis, Umeå University, Archaeology and Sami Studies, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1105.
This thesis documents the development and application of a unique database orientated software package, BugsCEP, for environmental and climatic reconstruction from fossil beetle (Coleoptera) assemblages. The software tools are described, and the incorporated statistical methods discussed and evaluated with respect to both published modern and fossil data, as well as the author's own investigations.
BugsCEP consists of a reference database of ecology and distribution data for over 5 800 taxa, and includes temperature tolerance data for 436 species. It also contains abundance and summary data for almost 700 sites - the majority of the known Quaternary fossil coleopteran record of Europe. Sample based dating evidence is stored for a large number of these sites, and the data are supported by a bibliography of over 3 300 sources. Through the use of built in statistical methods, employing a specially developed habitat classification system (Bugs EcoCodes), semi-quantitative environmental reconstructions can be undertaken, and output graphically, to aid in the interpretation of sites. A number of built in searching and reporting functions also increase the efficiency with which analyses can be undertaken, including the facility to list the fossil record of species found by searching the ecology and distribution data. The existing Mutual Climatic Range (MCR) climate reconstruction method is implemented and improved upon in BugsCEP, as BugsMCR, which includes predictive modelling and the output of graphs and climate space maps.
The evaluation of the software demonstrates good performance when compared to existing interpretations. The standardization method employed in habitat reconstructions, designed to enable the inter-comparison of samples and sites without the interference of differing numbers of species and individuals, also appears to be robust and effective. Quantitative climate reconstructions can be easily undertaken from within the software, as well as an amount of predictive modelling. The use of jackknifing variants as an aid to the interpretation of climate reconstructions is discussed, and suggested as a potential indicator of reliability. The combination of the BugStats statistical system with an enhanced MCR facility could be extremely useful in increasing our understanding of not only past environmental and climate change, but also the biogeography and ecology of insect populations in general.
BugsCEP is the only available software package integrating modern and fossil coleopteran data, and the included reconstruction and analysis tools provide a powerful resource for research and teaching in palaeo-environmental science. The use of modern reference data also makes the package potentially useful in the study of present day insect faunas, and the effects of climate and environmental change on their distributions. The reconstruction methods could thus be inverted, and used as predictive tools in the study of biodiversity and the implications of sustainable development policies on present day habitats.
BugsCEP can be downloaded from http://www.bugscep.com
Hunt, Cahill. "Developing an efficient method for generating facial reconstructions using photogrammetry and open source 3D/CAD software." Masters by Coursework thesis, Murdoch University, 2017. https://researchrepository.murdoch.edu.au/id/eprint/39826/.
Goret, Gael. "Recalage flexible de modèles moléculaires dans les reconstructions 3D de microscopie électronique." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00631858.
Urbano Díaz, Elisa. "Análisis de un patrón de relación conflictiva entre padres e hijos desde una perspectiva relacional: Proceso reconstructivo con una nueva estructuración del tiempo." Doctoral thesis, Universitat Ramon Llull, 2013. http://hdl.handle.net/10803/108092.
The problem under research is the use of the limited time available for the family nucleus. It requires a conscious distribution, oriented towards specific objectives. We examined the interaction in a single videotaped family case, using Grounded Theory methodology and the Atlas.ti software. We applied Transactional Analysis, observing what causes problems, how values are transmitted, how limits are imposed, whether the format used has been effective and, compared with the theory, for what reason. We identified the problem areas and subsequently carried out a psychological intervention. We aim to create a system of education in the four proposed core values, based on the use of a time structure built on three axes: time, communication, and values.
Guo, We-Ker, and 郭韋克. "3-D Model Reconstruction and Pre-Processing Software Development for Finite Element Analysis." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/63655703945627511332.
National Cheng Kung University
Department of Mechanical Engineering (Master's and PhD Programs)
91
With the rapid development of computer technology, more and more CAD/CAM/CAE tools are used in product design. However, the model construction capability of CAE tools is often not powerful and convenient enough for building complex models. Therefore, CAD tools are used instead to construct the solid models of products. In this way, model construction takes less time and the total analysis time is shorter. This thesis therefore focuses on solid model reconstruction and solid mesh generation. For the solid model reconstruction function, STEP files, the standard data exchange format for geometry models created by CAD tools, are imported and reconstructed. As for solid mesh generation, tetrahedral and hexahedral mesh generation techniques are discussed. The generated solid mesh is more suitable for further computer simulation. With these two functions, the difficulty of pre-processing can be reduced, and the efficiency and accuracy of simulation are much improved.
Kuo, Tai-Hong, and 郭泰宏. "The Development of Medical Image Software – Fundamental Interface and Three-dimensional Solid Modeling Reconstruction." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/93134433261972020216.
National Cheng Kung University
Department of Mechanical Engineering (Master's and PhD Programs)
90
In the last decade, great progress in computer technology, combined with medical CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) equipment, has improved human welfare. The capability of computer graphics technology has been greatly enhanced, and three-dimensional physical models created by laser sintering machines are now widely used. The aim of this research is to develop computer-assisted medical imaging software. To help medical doctors preview sectional images from any angle, we reconstruct virtual 3D models on screen based on images from the patient's tomography data. This fundamental research involves techniques from medical radiation and nuclear systems as well as mechanical laser sintering systems. The STL (Stereo-Lithography) file format is generated by our software. The models produced by the RP (Rapid Prototyping) machine are used to assist surgeons in surgical planning, pre-operative simulation, and shaped implants. With these outcomes, we can efficiently reduce complexity in surgical operations and improve the probability of success in surgery. Meanwhile, we also investigate the errors produced by the reconstruction process, and a refined method has been developed to reduce them. Based on the experimental outcomes, we suggest that the CT scan spacing should be close to the pixel size of the CT image in order to improve accuracy. Furthermore, we have developed a simple compensation method to amend the problems caused by a large scan step, in which virtual pixels are interpolated in order to produce smooth models.
Almeida, Vítor Miguel Amorim de. "3D reconstruction through photographs." Master's thesis, 2014. http://hdl.handle.net/10400.13/1057.
Gentiluomo, Gina Marie. "The reproducibility of incomplete skulls using freeform modeling plus software." Thesis, 2014. https://hdl.handle.net/2144/15381.
Chang, Tsai-Rong, and 張財榮. "The Study of 3D Medical Image Reconstruction and Its Application Using VRML Software-Component." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/03778989455274082016.
National Cheng Kung University
Department of Electrical Engineering
87
Medicine is an extremely challenging field of research, which has been, more than any other discipline, of fundamental importance to human existence. The variety and inherent complexity of its unsolved problems have made it a major driving force for many natural and engineering sciences. Hence, from the early days of computer graphics, the medical field has been one of its most important application areas, with an enduring supply of exciting research challenges. Physicians usually rely on various imaging modalities, such as X-ray, Computed Tomography, Magnetic Resonance Imaging and Nuclear Scans, to generate sequences of 2D medical images that aid diagnosis and the rehearsal of operations. Nonetheless, the amount of information provided is limited, so the most effective judgement cannot always be made. 3D stereo imaging is a new research field for computer graphics, in both data analysis and surgical simulation. Furthermore, the increasing popularity of the World Wide Web has indirectly shortened the distance between people, and telemedicine systems, which incorporate modern computer technology, are becoming a new diagnostic pattern in medicine. This thesis not only discusses 3D model reconstruction techniques based on medical images, but also seeks an approach to exhibiting 3D medical models using VRML software components. The scene graph of the 3D virtual environment can be accessed by commonly used programming languages (Visual C++, Visual Basic, C++ Builder, Delphi, etc.) or well-known platform-independent programming languages (Java, JavaScript, etc.). By applying the technique proposed in this thesis, we can integrate 3D medical imaging into the web and have telemedicine step into the world of virtual reality, so that high-quality medical services can also be attained through remote virtual operation.
Krogmann, Klaus [Verfasser]. "Reconstruction of software component architectures and behaviour models using static and dynamic analysis / von Klaus Krogmann." 2010. http://d-nb.info/1010373587/34.
Yang, Huang-Tsu, and 黃祖揚. "A Study of Performance of Accident Reconstruction Software-Using PC-Crash and HVE Programs as Example." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/06847498202905342715.
Feng Chia University
Graduate Institute of Traffic Engineering and Management
96
Traffic accident reconstruction software can visually show the course of a traffic accident and provide scientific data to users and other interested parties, so that they can easily and quickly understand how an accident unfolded. Finally, the causes of a traffic accident can be determined. In Taiwan, two accident simulation packages, PC-Crash and HVE, are in comparatively wide use. The aim of this study is therefore to fully understand the fundamental principles of the two simulation packages, and then to collect data from full-scale dynamic vehicle tests, such as braking distance, skid mark distance, two-car collisions, and impacts with a fixed barrier. We fed these data into PC-Crash and HVE and determined the differences between the two packages. The results show that (1) the error rates for braking distance and skid mark distance are below 6% for both packages; (2) in the two-car collision and barrier collision simulations, HVE produces better results than PC-Crash in simulating vehicle damage, but PC-Crash predicts the velocity change (ΔV) of a collision better than HVE. Furthermore, this study used the HVE-EDCRASH module to reconstruct the impact speeds and ΔVs of two cars before a collision on the basis of vehicle damage and the relative vehicle positions after the collision. The results show that the prediction accuracy for oblique collisions is substantially better than for collinear collisions of two cars, where the error reaches up to 50%.
Dias, António Carlos Fortuna Ribeiro. "Development and implementation of bioinformatics tools for the reconstruction of GiSMos." Master's thesis, 2017. http://hdl.handle.net/1822/56103.
The reconstruction of Genomic-Scale Metabolic Models (GiSMos) is a rapidly growing methodology, which allows the development of models that can be used to make in silico predictions of the phenotypical response of an organism to environmental changes and genetic modifications. These predictions allow in vivo experiments to focus on methodologies that will, theoretically, present better results, thus reducing the high costs in time and money spent on laboratory experiments. GiSMos are a mathematical representation of the organism's genome, in the form of metabolic networks. As complex as these can be, because of the large number of compounds involved in many different reactions and pathways, handling all such data is not easily done manually. Several bioinformatics software packages have been developed with the aim of improving this procedure by automating many operations in the reconstruction process. Metabolic Models Reconstruction Using Genome-Scale Information (merlin) is one such tool, following a philosophy that thrives on providing an intuitive and powerful graphical environment for annotating data on key metabolic components and building a complete genome-scale model. While it already encompasses a wide range of tools, it is still under development. Upon analyzing its functioning, several improvement opportunities were identified, mainly in existing operations. Moreover, important features missing for the reconstruction of GiSMos were identified as well. This work details the results of this analysis and the improvements made to enrich merlin's toolbox.
Correia, Carlos Manuel Leitão. "Avaliação de software de reconstrução ótica em tomografia difusa para geometria de transiluminação." Master's thesis, 2019. http://hdl.handle.net/10316/88061.
The main goal of this project was to evaluate an optical reconstruction software package for diffuse tomography, TOAST (Time-Resolved Optical Absorption and Scatter Tomography), when applied to a transillumination geometry. With that in mind, we studied and adapted a demo from the TOAST++ package available online, based on a cylindrical geometry whose sample consists of a set of spheres with distinct optical properties. We adapted the problem to a laminar geometry with just one centered sphere. TOAST simulations require representing the sample by a mesh of nodes which carry the optical properties of the medium (absorption coefficient, diffusion coefficient, and refractive index), and a mesh of tetrahedral elements created from those nodal points by a mesh generation program, TetGen. The performance evaluation was done through four sets of simulations: a preliminary set whose goal was to study the impact of the sphere dimension, the mesh density, and the detector width on the reconstruction; a second set to study the dependence of the results on the initial values of the optical properties assigned for the reconstruction process; a third set consisting of an analysis of the information loss associated with adopting a laminar geometry, comparing it with the result of a cylindrical-geometry simulation; and a final set of tests conceived to evaluate the linearity of the reconstruction with respect to the optical properties of the object. The results show that denser sample meshes and larger spherical objects yield higher reconstruction quality. The obtained distributions of optical properties are extremely dependent on the optical properties defined for the reconstruction process, and the transillumination geometry has less capability to reconstruct the shape of the object than the cylindrical geometry. In conclusion, the software tends to have difficulty reconstructing small objects; to ensure a better reconstruction, the object has to be represented by a significant number of nodes; and the transillumination geometry has lower reconstruction fidelity due to the limitation of the projections to the illumination and detection planes.