Dissertations on the topic "Reconstruction intelligente"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 dissertations for research on the topic "Reconstruction intelligente".
Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read its online abstract, where these are available in the work's metadata.
Browse dissertations from a wide range of disciplines and assemble your bibliography correctly.
Bonvard, Aurélien. „Algorithmes de détection et de reconstruction en aveugle de code correcteurs d'erreurs basés sur des informations souples“. Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0178.
Recent decades have seen the rise of digital communications. This has led to a proliferation of communication standards, requiring greater adaptability from communication systems. One way to make these systems more flexible is to design an intelligent receiver able to retrieve all the parameters of the transmitter from the received signal alone. In this manuscript, we are interested in the blind identification of error-correcting codes. We propose original methods based on the computation of Euclidean distances between noisy symbol sequences. First, a classification algorithm detects the presence of a code and then identifies its codeword length. A second algorithm, based on the number of collisions, identifies the length of the information words. We then propose another method using minimum Euclidean distances to identify the length of block codes. Finally, a method for reconstructing the dual code of an error-correcting code is presented.
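The distance-based idea in this abstract can be illustrated with a toy sketch (an illustration of the general principle, not Bonvard's actual algorithm): when a noisy symbol stream is split into blocks of the true codeword length, repeated codewords produce block pairs whose Euclidean distance is only noise-deep, while any symbol mismatch between two ±1 blocks contributes a distance of at least 2.

```python
import numpy as np

def min_block_distance(seq, block_len):
    """Split a symbol stream into blocks of block_len and return the
    minimum pairwise Euclidean distance between blocks."""
    n_blocks = len(seq) // block_len
    blocks = np.asarray(seq)[:n_blocks * block_len].reshape(n_blocks, block_len)
    best = np.inf
    for i in range(n_blocks):
        d = np.linalg.norm(blocks[i + 1:] - blocks[i], axis=1)
        if len(d):
            best = min(best, d.min())
    return best

# Toy stream: 30 noisy BPSK codewords of length 8 drawn from a 16-word codebook.
rng = np.random.default_rng(0)
codebook = rng.choice([-1.0, 1.0], size=(16, 8))
stream = (codebook[rng.integers(0, 16, size=30)]
          + 0.05 * rng.normal(size=(30, 8))).ravel()
# At the true block length, at least two blocks must share a codeword
# (30 draws from only 16 words), so the minimum distance is noise-level,
# far below the >= 2 gap that a single symbol mismatch would cause.
print(min_block_distance(stream, 8))
```

Scanning this statistic over candidate block lengths is one simple way to flag the codeword length, which is the spirit (though not the detail) of the classification algorithm described above.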
El, Hage Josiana. „Smart Reconstruction after a natural or man-made disaster : Feedback, methodology, and application to the Beirut Harbor Disaster“. Electronic Thesis or Diss., Université de Lille (2022-....), 2024. http://www.theses.fr/2024ULILN015.
The objective of this study is to develop a smart framework for the post-disaster reconstruction of buildings, with a focus on the Beirut explosion as a case study, chosen for its complex geopolitical context, extensive damage, and socio-economic crises. The study examines the physical, economic, and social dimensions in order to prioritize marginalized community groups in the recovery efforts and to advocate the "Build Back Better" approach, following the recommendations of the Sendai Framework for Disaster Risk Reduction. To attain these objectives, the thesis starts with a literature review (Chapter 1) to identify research gaps and existing post-disaster reconstruction frameworks. Drawing from this review, a research methodology is formulated to address these gaps, with emphasis on the city of Beirut in Lebanon (Chapter 2). It includes a study of the local context, the data analysis methods, and an analysis of the challenges facing post-disaster reconstruction, with a focus on Beirut. A comprehensive framework for assessing post-disaster buildings in Beirut following the explosion is then developed (Chapter 3), comprising 12 indicators spanning the physical attributes of each building and the socio-economic profile of its residents. This framework enables the calculation of a Priority Index for a large set of damaged buildings in Beirut (Chapter 4). The assessment helps decision-makers and stakeholders involved in the reconstruction process to manage and monitor building renovation projects while encouraging the engagement of the affected community. It prioritizes the most vulnerable individuals, thereby fostering a people-centric approach to recovery, underpinned by the principles of building back better and inclusivity. The data-based framework and results presented in this thesis are a step forward for the post-disaster reconstruction field.
However, this research has some limitations, including the reliance on crowdsourced data collection and limited public participation, the dynamics and complexity of the post-disaster context, and the focus on the building sector alone. Future research could focus on (i) considering all the sectors affected by the disaster, (ii) investigating social acceptance of participating in the data collection process, and (iii) diversifying the data collection sources.
Mallik, Mohammed Tariqul Hassan. „Electromagnetic Field Exposure Reconstruction by Artificial Intelligence“. Electronic Thesis or Diss., Université de Lille (2022-....), 2023. https://pepite-depot.univ-lille.fr/ToutIDP/EDENGSYS/2023/2023ULILN052.pdf.
The topic of exposure to electromagnetic fields has received much attention in light of the current deployment of the fifth-generation (5G) cellular network. Despite this, accurately reconstructing the electromagnetic field across a region remains difficult due to a lack of sufficient data. In situ measurements are of great interest, but their viability is limited, making it difficult to fully understand the field dynamics. Despite the great interest in localized measurements, there remain unmeasured regions that prevent them from providing a complete exposure map. Previous research explored reconstruction strategies from observations at localized sites or sensors distributed in space, using techniques based on geostatistics and Gaussian processes. In particular, recent initiatives have focused on the use of machine learning and artificial intelligence for this purpose. To overcome these problems, this work proposes new methodologies to reconstruct EMF exposure maps in a specific urban area in France. The main objective is to reconstruct exposure maps for electromagnetic waves from data provided by sensors distributed in space. We propose two machine-learning-based methodologies to estimate exposure to electromagnetic waves. In the first method, the exposure reconstruction problem is cast as an image-to-image translation task. First, the sensor data is converted into an image, and the corresponding reference image is generated using a ray-tracing-based simulator. We propose an adversarial network (cGAN) conditioned on the environment topology to estimate exposure maps from these images. The model is trained on sensor-map images, while an environment is given as conditional input to the cGAN model. Furthermore, electromagnetic field mapping based on the generative adversarial network is compared to simple kriging. The results show that the proposed method produces accurate estimates and is a promising solution for exposure map reconstruction.
However, producing reference data is a complex task, as it involves taking into account the number of active base stations of different technologies and operators whose network configuration is unknown, e.g., the powers and beams used by the base stations. Additionally, evaluating these maps requires time and expertise. To address these issues, we define the problem as a missing-data imputation task. The method we propose relies on training an infinite neural network to estimate exposure to electromagnetic fields. This is a promising solution for exposure map reconstruction that does not require large training sets. The proposed method is compared with other machine learning approaches based on UNet networks and conditional generative adversarial networks, with competitive results.
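The simple-kriging baseline mentioned in this abstract can be sketched in a few lines of NumPy (a generic Gaussian-process interpolation under a squared-exponential covariance; the sensor positions, values, and length scale below are hypothetical, not the thesis data):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=50.0):
    """Squared-exponential covariance between two sets of 2-D positions."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def krige(sensor_xy, sensor_vals, query_xy, length_scale=50.0, jitter=1e-8):
    """Simple-kriging-style interpolation: predict field values at query_xy
    from point measurements (zero-mean assumption for the residual field)."""
    K = rbf_kernel(sensor_xy, sensor_xy, length_scale) + jitter * np.eye(len(sensor_xy))
    k_star = rbf_kernel(query_xy, sensor_xy, length_scale)
    weights = np.linalg.solve(K, sensor_vals)
    return k_star @ weights

# Hypothetical sensor layout (positions in metres, exposure in V/m).
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
values = np.array([0.8, 1.2, 0.9, 1.1])
grid = np.array([[50.0, 50.0]])
est = krige(sensors, values, grid)
```

Because the kriging weights interpolate exactly at the measurement points (up to the jitter), the map reproduces the sensor readings where they exist and smooths between them elsewhere, which is precisely the behaviour the learned models above are compared against.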
Kentzoglanakis, Kyriakos. „Reconstructing gene regulatory networks : a swarm intelligence framework“. Thesis, University of Portsmouth, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.523619.
Zhao, Yu. „Channel Reconstruction for High-Rank User Equipment“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-256064.
In a fifth-generation (5G) massive multiple-input multiple-output radio network, channel state information plays a central role in algorithm design and system evaluation. However, acquiring channel state information consumes system resources (e.g., time and frequency), which in turn reduces link utilization, i.e., fewer resources remain for actual data transmission. This problem is more pronounced in scenarios where user equipment terminals have multiple antennas and it would be advantageous to obtain channel state information between the base station and the different user equipment antennas, e.g., for high-rank (multi-stream) transmission to that user equipment. In current industrial implementations, channel state information is obtained for only one of the user equipment's antennas so as not to waste system resources, which then limits the downlink transmission rank to 1. We therefore propose a method based on deep learning. In this thesis, a multilayer perceptron and a convolutional neural network are implemented. The data is generated by a MATLAB simulator using parameters provided by Huawei Technologies Co., Ltd. Finally, the model proposed in this project delivers the best performance compared to the baseline algorithms.
Elias, Rimon. „Towards obstacle reconstruction through wide baseline set of images“. Thesis, University of Ottawa (Canada), 2004. http://hdl.handle.net/10393/29104.
關福延 and Folk-year Kwan. „An intelligent approach to automatic medical model reconstruction from serial planar CT images“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31243216.
Gayed, Said Simone. „Skull reconstruction through shape completion“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24057/.
Papadopoulos, Georgios. „Towards a 3D building reconstruction using spatial multisource data and computational intelligence techniques“. Thesis, Limoges, 2019. http://www.theses.fr/2019LIMO0084/document.
Building reconstruction from aerial photographs and other multi-source urban spatial data is a task addressed by a range of automated and semi-automated methods, from point processes and classic image processing to laser scanning. In this thesis, an iterative relaxation system is developed based on the examination of the local context of each edge according to multiple spatial input sources (optical data, elevation data, shadow and foliage masks, as well as other pre-processed data, as elaborated in Chapter 6). All these multisource and multiresolution data are fused so that probable line segments or edges are extracted that correspond to prominent building boundaries. Two novel sub-systems have also been developed in this thesis. They were designed to provide additional, more reliable information regarding building contours in a future version of the proposed relaxation system. The first is a deep convolutional neural network (CNN) method for the detection of building borders. In particular, the network is based on the state-of-the-art super-resolution model SRCNN (Dong C. L., 2015). It accepts aerial photographs depicting densely populated urban areas as well as their corresponding digital elevation maps (DEM). Training is performed using three variations of this urban data set and aims at detecting building contours through a novel super-resolved heteroassociative mapping. Another innovation of this approach is the design of a modified custom loss layer named Top-N. In this variation, the mean square error (MSE) between the reconstructed output image and the provided ground truth (GT) image of building contours is computed on the 2N image pixels with the highest values. Assuming that most of the N contour pixels of the GT image are also among the top 2N pixels of the reconstruction, this modification balances the two pixel categories and improves the generalization behavior of the CNN model.
The experiments show that the Top-N cost function offers performance gains in comparison to standard MSE. Further improvement in the generalization ability of the network is achieved by using dropout. The second sub-system is a super-resolution deep convolutional network, which performs an enhanced-input associative mapping between input low-resolution and high-resolution images. This network has been trained with low-resolution elevation data and the corresponding high-resolution optical urban photographs. Such a resolution discrepancy between optical aerial/satellite images and elevation data is often the case in real-world applications. More specifically, low-resolution elevation data are combined with high-resolution optical aerial photographs with the aim of increasing the resolution of the elevation data. This is a unique super-resolution problem, for which it was found that many of the proposed general-image SR methods do not perform as well. The network, aptly named building super-resolution CNN (BSRCNN), is trained using patches extracted from the aforementioned data. Results show that, in comparison with a classic bicubic upscaling of the elevation data, the proposed implementation offers an important improvement, as attested by modified PSNR and SSIM metrics. In comparison, other proposed general-image SR methods performed worse than a standard bicubic up-scaler. Finally, the relaxation system fuses together all these multisource data sources, comprising pre-processed optical data, elevation data, foliage masks, shadow masks and other pre-processed data, in an attempt to assign confidence values to each pixel belonging to a building contour. Confidence is augmented or decremented iteratively until the MSE error falls below a specified threshold or a maximum number of iterations has been executed. The confidence matrix can then be used to extract the true building contours via thresholding.
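The Top-N idea, as described in this abstract, restricts the MSE to the 2N highest-valued pixels so that the few true contour pixels are not swamped by the empty background. A minimal sketch (assuming selection is done on the predicted image; the thesis's actual layer may select differently):

```python
import numpy as np

def top_n_mse(pred, gt, n):
    """MSE between prediction and ground truth, computed only on the 2N
    pixels where the prediction is largest (assumption: selection uses
    the predicted image)."""
    p = np.asarray(pred, dtype=float).ravel()
    g = np.asarray(gt, dtype=float).ravel()
    idx = np.argsort(p)[-2 * n:]          # indices of the 2N largest predicted values
    return float(np.mean((p[idx] - g[idx]) ** 2))

pred = np.array([[0.0, 1.0], [2.0, 3.0]])
gt = np.zeros((2, 2))
# With n=1 the loss uses only the two brightest predicted pixels (2.0 and 3.0),
# so the background zeros contribute nothing.
loss = top_n_mse(pred, gt, 1)
```

Compared with a full-image MSE, this keeps roughly half of the selected pixels as (presumed) contour pixels and half as hard negatives, which is the balancing effect the abstract describes.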
Hajjdiab, Hassan. „Vision-based localization, map building and obstacle reconstruction in ground plane environments“. Thesis, University of Ottawa (Canada), 2004. http://hdl.handle.net/10393/29109.
Steinhauer, H. Joe. „A representation scheme for description and reconstruction of object configurations based on qualitative relations /“. Linköping : Department of Computer and Information Science, Linköpings universitet, 2008. http://www.bibl.liu.se/liupubl/disp/disp2008/tek1204s.pdf.
Ravelomanantsoa, Andrianiaina. „Approche déterministe de l'acquisition comprimée et la reconstruction des signaux issus de capteurs intelligents distribués“. Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0136/document.
A wireless body area network (WBAN) is a new class of wireless networks dedicated to monitoring human physiological parameters. It consists of small electronic devices, also called nodes, attached to or implanted in the human body. Each node comprises one or more sensors which measure physiological signals, such as the electrocardiogram or body heat, and the characteristics of the surrounding environment. These nodes are subject to a significant energy constraint, since miniaturization has reduced the size of their batteries. One way to minimize energy consumption is to compress the sensed data before transmitting it wirelessly; indeed, research has shown that most of the available energy is consumed by the wireless transmitter. Conventional compression methods are not suitable for WBANs because they involve high computational power and increase energy consumption. To overcome these limitations, we use compressed sensing (CS) to compress and recover the sensed data. We propose a simple and efficient encoder to compress the data, and we introduce a new algorithm to reduce the complexity of the recovery process. A partnership with the TEA (Technologie Ergonomie Appliquées) company allowed us to evaluate the performance of the proposed method experimentally, using a numeric version of the encoder. We also developed and validated an analog version of the encoder.
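The compressed-sensing pipeline this abstract describes can be illustrated with a generic sketch: a sparse signal is compressed by a linear measurement matrix, and the decoder recovers it greedily. This is a textbook Orthogonal Matching Pursuit illustration, not the deterministic encoder or the low-complexity recovery algorithm developed in the thesis:

```python
import numpy as np

def omp(Phi, y, k, tol=1e-12):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y = Phi @ x."""
    residual = y.astype(float).copy()
    support, coef = [], np.array([])
    for _ in range(k):
        if np.linalg.norm(residual) < tol:
            break                                      # already explained the measurement
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated column
        support.append(j)
        A = Phi[:, support]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # re-fit on the current support
        residual = y - A @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# Tiny worked example: 4 atoms measured in 3 dimensions, one active atom.
Phi = np.array([[1.0, 0.0, 0.0, 1.0],
                [0.0, 1.0, 0.0, 1.0],
                [0.0, 0.0, 1.0, 1.0]])
Phi /= np.linalg.norm(Phi, axis=0)      # unit-norm columns
x_true = np.array([0.0, 0.0, 2.0, 0.0])
y = Phi @ x_true                        # compressed measurement: 3 numbers for 4 unknowns
x_hat = omp(Phi, y, k=2)
```

In a WBAN setting the energy saving comes from the encoder side: the node transmits only the short vector `y`, and all the recovery effort is pushed to the receiver.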
Tearse, Brandon. „Skald: Exploring Story Generation and Interactive Storytelling by Reconstructing Minstrel“. Thesis, University of California, Santa Cruz, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13423003.
Within the realm of computational story generation sits Minstrel, a decades old system which was once used to explore the idea that, under the correct conditions, novel stories can be generated by taking an existing story and replacing some of its elements with similar ones found in a different story. This concept would eventually fall within the bounds of a strategy known as Case-Based Reasoning (CBR), in which problems are solved by recalling solutions to past problems (the cases), and mutating the recalled cases in order to create an appropriate solution to the current problem. This dissertation uses a rational reconstruction of Minstrel called Minstrel Remixed, a handful of upgraded variants of Minstrel Remixed, and a pair of similar but unrelated storytelling systems, to explore various characteristics of Minstrel-style storytelling systems.
In the first part of this dissertation I define the class of storytelling systems that are similar to Minstrel. This definition allows me to compare the features of these systems and discuss the various strengths and weaknesses of the variants. Furthermore, I briefly describe the rational reconstruction of Minstrel and then provide a detailed overview of the inner workings of the resulting system, Minstrel Remixed.
Once Minstrel Remixed was complete, I chose to upgrade it in order to explore the set of stories that it could produce and the ways the system can be altered or reconfigured with the goal of intentionally influencing the set of possible outputs. This investigation resulted in two new storytelling systems called Conspiracy Forever and Problem Planets. The second portion of this dissertation discusses these systems as well as a number of discoveries about the strengths and weaknesses of Minstrel-style storytelling systems in general. More specifically, I discuss that: 1) a human reader's capacity for creating patterns out of an assortment of statements is incredibly useful, and output should be crafted to exploit this potential; 2) Minstrel-style storytelling tends to be amnesiac and does a poor job of creating long stories that remain cohesive; and 3) the domain that a storytelling system works from is incredibly important and must be well engineered. I continue by discussing the methods that I discovered for cleaning up and maintaining a domain, and conclude with a section covering interviews with other storytelling system creators about the strengths and weaknesses of their systems in light of my findings about Minstrel Remixed.
In the final portion of this document I create a framework of six interrelated attributes of stories (length, coherence, creativity, complexity, contextuality, and consolidation) and use this, along with the findings discussed in the first two portions of the dissertation, to examine the strengths and weaknesses of this class of CBR systems when applied to both static story generation and interactive storytelling. I discuss the finding that these systems seem to have a fixed amount of expressive power: although they can be tweaked to produce, for example, longer or more consolidated stories, these improvements always come with a reduction in complexity, coherence, or one of the other attributes. Further discussion of the output power of this class of storytelling systems revolves around the primary factor limiting their potential, namely the fact that they have no understanding of the symbols and patterns they manipulate. Finally, I introduce a number of strategies that I found to be fruitful for increasing the output power of the system and working around the lack of commonsense reasoning, chiefly improving the domain and adding new subsystems.
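The retrieve-and-adapt loop that defines CBR story generation, as summarized in this abstract, can be shown with a deliberately tiny toy (an illustration of the general CBR idea only, not Minstrel's actual knowledge structures or adaptation methods):

```python
# Toy case-based story generation: retrieve the most similar stored case,
# then adapt it by swapping in elements from the query.
CASES = [
    {"hero": "knight", "goal": "rescue", "obstacle": "dragon", "tool": "sword"},
    {"hero": "wizard", "goal": "find", "obstacle": "maze", "tool": "spell"},
]

def similarity(case, query):
    """Count the slots on which a stored case agrees with the query."""
    return sum(case.get(k) == v for k, v in query.items())

def adapt(query):
    """Retrieve the best-matching case and overwrite its slots with the query's."""
    best = max(CASES, key=lambda c: similarity(c, query))
    return {**best, **query}

# A query about a princess facing a dragon recalls the knight story and
# keeps its goal and tool while substituting the new hero.
story = adapt({"hero": "princess", "obstacle": "dragon"})
```

Even this toy exhibits the trade-off the dissertation documents: the output is only as good as the stored cases, and nothing in the mechanism understands what a "dragon" or a "sword" is.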
Fu, Bo. „Towards Intelligent Telerobotics: Visualization and Control of Remote Robot“. UKnowledge, 2015. http://uknowledge.uky.edu/cs_etds/40.
Lyubchyk, Leonid, Vladislav Kolbasin and Galina Grinberg. „Nonlinear dynamic system kernel based reconstruction from time series data“. Thesis, ТВіМС, 2015. http://repository.kpi.kharkov.ua/handle/KhPI-Press/36826.
Tanner, Michael. „BOR2G : Building Optimal Regularised Reconstructions with GPUs (in cubes)“. Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:1928c996-d913-4d7e-8ca5-cf247f90aa0f.
Lacroix, Marie. „Méthodes pour la reconstruction, l'analyse et l'exploitation de réseaux tridimensionnels en milieu urbain“. Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066001/document.
Disasters like the ones that happened in Ghislenghien (Belgium), Ludwigshafen (Germany), and Lyon (France) have been attributed to excavations in the vicinity of gas pipelines. Though pipelines are one of the safest means of conveying hazardous substances, many cases of damage to gas pipes are recorded in France each year. Most of them are due to works in the vicinity of the networks, and some illustrate the lack of reliability of the information provided. Concessionaires have to take stock of the situation and suggest areas of improvement, so that everyone can benefit from safer networks. To prevent such accidents, which involve both workers and the public, French authorities enforce two regulations: the DT/DICT reform, which aims at preventing damage to networks by making excavation work safer, and the Multifluide reform, which addresses the protection of networks against hazardous events. To avoid such accidents and other problems, it is necessary to acquire and control the 3D information concerning the different city networks, especially buried ones. Preventive strategies have to be adopted. That is why working on the networks, their visualization, and risk cartography, while taking data imprecision into account, is a timely and relevant line of research. The software applications I develop should help utility and construction contractors; they focus on the prevention of hazardous events through accurate data sets for users and consumers, the definition of a geomatics network, and methods such as triangulation, element modeling, geometrical calculations, artificial intelligence, and virtual reality.
Chen, Jiandan. „An Intelligent Multi Sensor System for a Human Activities Space---Aspects of Quality Measurement and Sensor Arrangement“. Doctoral thesis, Karlskrona : Blekinge Institute of Technology, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00487.
Skorburg, Joshua August. „Human Nature and Intelligence: The Implications of John Dewey's Philosophy“. University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1333663233.
Tsenoglou, Theocharis. „Intelligent pattern recognition techniques for photo-realistic 3D modeling of urban planning objects“. Thesis, Limoges, 2014. http://www.theses.fr/2014LIMO0075.
Realistic 3D modeling of buildings and other urban planning objects is an active research area in the fields of 3D city modeling, heritage documentation, virtual touring, urban planning, architectural design and computer gaming. The creation of such models very often requires merging data from diverse sources, such as optical images and laser-scan point clouds. To imitate the layouts, activities and functionalities of a real-world environment as realistically as possible, these models need to attain high photo-realistic quality and accuracy in terms of the surface texture (e.g. stone or brick walls) and morphology (e.g. windows and doors) of the actual objects. Image-based rendering is one way of meeting these requirements. It uses photos, taken either from ground level or from the air, to add texture to the 3D model, thus adding photo-realism. For full texture coverage of the large facades of 3D block models, images picturing the same facade need to be properly combined and correctly aligned with the side of the block. The pictures need to be merged appropriately so that the result presents no discontinuities, abrupt variations in lighting, or gaps. Because these images were taken, in general, under various viewing conditions (viewing angles, zoom factors, etc.), they exhibit different perspective distortions, scaling, brightness, contrast and color shading, and need to be corrected or adjusted. This process requires the extraction of key features from the visual content of the images. The aim of the proposed work is to develop methods based on computer vision and pattern recognition techniques to assist this process. In particular, we propose a method for extracting implicit lines from poor-quality images of buildings, including night views where only some lit windows are visible, in order to specify bundles of 3D parallel lines and their corresponding vanishing points.
Then, based on this information, one can achieve better merging of the images and better alignment of the images to the block facades. Another important application dealt with in this thesis is 3D modeling. We propose an edge-preserving interpolation, based on the mean-shift algorithm, that operates jointly on the optical and the elevation data. It succeeds in increasing the resolution of the elevation data (LiDAR) while improving the quality (i.e. straightness) of its edges. At the same time, the color homogeneity of the corresponding imagery is also improved. The reduction of color artifacts in the optical data and the improvement in the spatial resolution of the elevation data result in more accurate 3D building models. Finally, in the problem of building detection, applying the proposed mean-shift-based edge-preserving smoothing to increase the quality of aerial/color images improves the performance of binary building vs. non-building pixel classification.
Dubovský, Peter. „Hezekiah and the Assyrian spies : reconstruction of the neo-Assyrian intelligence services and its significance for 2 Kings 18-19 /“. Roma : Ed. Pontificio istituto biblico, 2006. http://catalogue.bnf.fr/ark:/12148/cb410178717.
Steinhauer, Heike Joe. „A Representation Scheme for Description and Reconstruction of Object Configurations Based on Qualitative Relations“. Doctoral thesis, Linköpings universitet, CASL - Cognitive Autonomous Systems Laboratory, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12446.
Der volle Inhalt der QuelleFofi, David. „Navigation d'un véhicule intelligent à l'aide d'un capteur de vision en lumière structurée et codée“. Phd thesis, Université de Picardie Jules Verne, 2001. http://tel.archives-ouvertes.fr/tel-00005452.
This thesis applies structured-light vision (a sensor consisting of a CCD camera and a light source) to mobile robot navigation. This led us to study various techniques and approaches from computer vision and image processing. First, we review the main coding schemes for structured light and its principal applications in robotics, medical imaging, and metrology, in order to identify the issues relevant to our intended use. Second, we propose an image-processing method for structured-light images whose goal is to extract the line segments of the image and decode the structuring pattern. We then detail a method for three-dimensional reconstruction from the uncalibrated sensor. Projecting a light pattern onto the environment imposes severe constraints on self-calibration techniques. It follows that the reconstruction must be performed in two stages, from a single view and a single projection. We describe the projective reconstruction method used in our experiments and give a method for upgrading from projective to Euclidean space. By exploiting the geometric relations generated by the projection of the light pattern, we show that it is possible to derive Euclidean constraints between scene points that are independent of the objects in the scene. We also propose a quantitative obstacle-detection technique that estimates a map of the free space observed by the robot. Finally, we present a complete study of the sensor in motion and derive from it an algorithm for estimating its displacement in the environment by matching the planes that compose that environment.
Ye, Mao. „MONOCULAR POSE ESTIMATION AND SHAPE RECONSTRUCTION OF QUASI-ARTICULATED OBJECTS WITH CONSUMER DEPTH CAMERA“. UKnowledge, 2014. http://uknowledge.uky.edu/cs_etds/25.
Alkindy, Bassam. „Combining approaches for predicting genomic evolution“. Thesis, Besançon, 2015. http://www.theses.fr/2015BESA2012/document.
In Bioinformatics, understanding how DNA molecules have evolved over time remains an open and complex problem. Algorithms have been proposed to solve this problem, but they are limited either to the evolution of a given character (for example, a specific nucleotide), or conversely focus on large nuclear genomes (several billion base pairs), the latter having known multiple recombination events; since the problem is NP-complete when one considers the set of all possible operations on these sequences, no solution exists at present. In this thesis, we tackle the problem of reconstructing ancestral DNA sequences by focusing on nucleotide chains of intermediate size that have experienced relatively little recombination over time: chloroplast genomes. We show that at this level the problem of ancestor reconstruction can be resolved, even when one considers the set of all complete chloroplast genomes currently available. We focus specifically on ancestral gene order and content, as well as the technical reconstruction problems this raises in the case of chloroplasts. We show how to obtain predictions of the coding sequences of sufficient quality to allow said reconstruction, and how to obtain a phylogenetic tree in agreement with the largest number of genes, on which we can then base our journey back in time, the latter being thus finalized. These methods, combining already available tools (whose quality has been assessed) with high performance computing, artificial intelligence, and bio-statistics, were applied to a collection of more than 450 chloroplast genomes.
Garreau, Mireille. „Signal, image et intelligence artificielle : application à la décomposition du signal électromyographique et à la reconstruction et l'étiquetage 3-D de structures vasculaires“. Rennes 1, 1988. http://www.theses.fr/1988REN10090.
Garreau, Mireille. „Signal, image et intelligence artificielle application à la décomposition du signal électromyographique et à la reconstruction et l'étiquetage 3-D de structures vasculaires /“. Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb376138204.
Chaib-Draa, Brahim. „Contribution à la résolution distribuée de problème : une approche basée sur les états intentionnels“. Valenciennes, 1990. https://ged.uphf.fr/nuxeo/site/esupversions/e6f0d4f6-4f91-4c3b-afb6-46782c867250.
Ozcelik, Furkan. „Déchiffrer le langage visuel du cerveau : reconstruction d'images naturelles à l'aide de modèles génératifs profonds à partir de signaux IRMf“. Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES073.
The great minds of humanity were always curious about the nature of mind, brain, and consciousness. Through physical and thought experiments, they tried to tackle challenging questions about visual perception. As neuroimaging techniques were developed, neural encoding and decoding techniques provided a profound understanding of how we process visual information. Advancements in Artificial Intelligence and Deep Learning have also influenced neuroscientific research. With the emergence of deep generative models like Variational Autoencoders (VAE), Generative Adversarial Networks (GAN), and Latent Diffusion Models (LDM), researchers have also used these models in neural decoding tasks such as visual reconstruction of perceived stimuli from neuroimaging data. The current thesis provides two frameworks in the above-mentioned area of reconstructing perceived stimuli from neuroimaging data, particularly fMRI data, using deep generative models. These frameworks focus on different aspects of the visual reconstruction task than their predecessors, and hence they may bring valuable outcomes for the studies that will follow. The first study of the thesis (described in Chapter 2) utilizes a particular generative model called IC-GAN to capture both semantic and realistic aspects of the visual reconstruction. The second study (described in Chapter 3) brings a new perspective on visual reconstruction by fusing decoded information from different modalities (e.g. text and image) using recent latent diffusion models. These studies achieve state-of-the-art results on their benchmarks by exhibiting high-fidelity reconstructions of different attributes of the stimuli. In both of our studies, we propose region-of-interest (ROI) analyses to understand the functional properties of specific visual regions using our neural decoding models.
Statistical relations between ROIs and decoded latent features show that while early visual areas carry more information about low-level features (which focus on layout and orientation of objects), higher visual areas are more informative about high-level semantic features. We also observed that generated ROI-optimal images, using these visual reconstruction frameworks, are able to capture functional selectivity properties of the ROIs that have been examined in many prior studies in neuroscientific research. Our thesis attempts to bring valuable insights for future studies in neural decoding, visual reconstruction, and neuroscientific exploration using deep learning models by providing the results of two visual reconstruction frameworks and ROI analyses. The findings and contributions of the thesis may help researchers working in cognitive neuroscience and have implications for brain-computer-interface applications
Lutz, Christian. „Analyse, stratégie thérapeutique et innovations technologiques lors de la stabilisation rotatoire du genou dans les reconstructions du ligament croisé antérieur“. Electronic Thesis or Diss., Strasbourg, 2024. http://www.theses.fr/2024STRAJ009.
Treatment of the rotational instability induced by rupture of the anterior cruciate ligament is a major challenge in knee ligament surgery. Combining lateral tenodesis with anterior cruciate ligament reconstruction improves this control compared to isolated intra-articular plasty. However, the orthopaedic community is not unanimous about the use of lateral tenodesis, and interest in it motivated this anatomical, biomechanical, and clinical research project. Anatomically and biomechanically, rotational control of the knee is ensured by the anterior cruciate ligament and the anterolateral ligament. Technically, lateral tenodesis must respect precise criteria to restore the function of the anterolateral ligament, via the concept of favorable anisometry. Clinically, this additional lateral plasty enhances rotational stability. This combination of ligament reconstructions has increased the complexity of surgical procedures and spurred further research into innovative technologies to enhance accuracy and deliver more personalized surgery.
Zinsou, Omer. „Etude et mise en oeuvre d'un modeleur surfacique d'objets tridimensionnels : intégration dans une base de données relationnelle“. Compiègne, 1988. http://www.theses.fr/1988COMPD135.
Nogueira, Sergio. „Localisation de mobiles par construction de modèles en 3D en utilisant la stéréovision“. Phd thesis, Université de Technologie de Belfort-Montbeliard, 2009. http://tel.archives-ouvertes.fr/tel-00596948.
Fleute, Markus. „Shape reconstruction for computer assisted surgery based on non-rigid registration of statistical models with intra-operative point data and X-ray images“. Université Joseph Fourier (Grenoble), 2001. http://tel.archives-ouvertes.fr/tel-00005365.
This thesis addresses the problem of reconstructing 3D anatomical surfaces based on intra-operatively acquired sparse scattered point data and a few calibrated X-ray images. The approach consists in matching the data with a statistical deformable shape model, thus incorporating a priori knowledge into the reconstruction process (...). It is further shown that hybrid matching, combining both 3D/3D and 3D/2D registration, might be an interesting option for certain computer-assisted surgery applications.
Figueroa, Teodora Pinheiro. „Estudo sobre a viabilidade da tomografia eletromagnética na medição do perfil de velocidades de escoamentos monofásicos em dutos“. Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/18/18135/tde-20122006-103316/.
This work presents a prospective study on the development of an intelligent electromagnetic flow meter intended to determine the flow rate based on the reconstruction of the velocity profile using tomographic techniques. As a result, the flow meter is able to correct the flow measurement through the integration of the correct velocity profile produced by tomography. The tomographic reconstruction technique is based on the definition of an error functional generated from the difference between voltages simulated numerically for an experimental condition, in which the parameters defining the velocity within the pipe are known, and voltages simulated numerically for approximations of these parameters. In this work the physical model of the electromagnetic flow meter is based on a number of electrodes flush-mounted on the pipe walls under a specific excitation strategy, without electrical current injection and assuming a uniform magnetic field. Expanding the error functional over a set of known functions generates an error surface. The pathological characteristics of this surface demand other types of optimization techniques: traditional techniques are not viable, since the search stops at the first local minimum. This convergence to local minima is explained by the presence of flat regions and valleys containing several local minima around the global minimum point (the point corresponding to the optimal velocity parameters). For this reason, techniques based on evolutionary algorithms are tested and presented for a series of cases, demonstrating the usefulness of our research.
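The abstract above attributes the failure of classical optimization to flat regions and valleys with many local minima, which is exactly the setting where evolutionary search helps. As an illustrative sketch only (the error function, bounds, and GA settings below are invented for demonstration and are not taken from the thesis), a minimal genetic algorithm minimizing a multimodal error functional:

```python
import random
from math import cos

def error(v):
    # Hypothetical multimodal "error functional": a quadratic valley with
    # ripples creating many local minima around the global minimum at v = 2.0.
    return (v - 2.0) ** 2 + 0.3 * (1 - cos(8 * (v - 2.0)))

def ga_minimize(f, lo, hi, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a + b) / 2.0                  # arithmetic crossover
            child += rng.gauss(0.0, 0.1)           # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = survivors + children
    return min(pop, key=f)

best = ga_minimize(error, -10.0, 10.0)
print(best)
```

Truncation selection plus arithmetic crossover lets the population average across neighbouring basins, which is what allows it to escape the local minima that halt a pure descent method.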
Wang, Chen. „Large-scale 3D environmental modelling and visualisation for flood hazard warning“. Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/3350.
Goulart, José Henrique De Morais. „Estimation de modèles tensoriels structurés et récupération de tenseurs de rang faible“. Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4147/document.
In the first part of this thesis, we formulate two methods for computing a canonical polyadic decomposition having linearly structured matrix factors (such as, e.g., Toeplitz or banded factors): a general constrained alternating least squares (CALS) algorithm and an algebraic solution for the case where all factors are circulant. Exact and approximate versions of the former method are studied. The latter method relies on a multidimensional discrete-time Fourier transform of the target tensor, which leads to a system of homogeneous monomial equations whose resolution provides the desired circulant factors. Our simulations show that combining these approaches yields a statistically efficient estimator, which is also true for other combinations of CALS in scenarios involving non-circulant factors. The second part of the thesis concerns low-rank tensor recovery (LRTR) and, in particular, the tensor completion (TC) problem. We propose an efficient algorithm, called SeMPIHT, employing sequentially optimal modal projections as its hard thresholding operator. A performance bound is then derived under usual restricted isometry conditions, which however yield suboptimal sampling bounds; yet, our simulations suggest SeMPIHT obeys optimal sampling bounds for Gaussian measurements. Step size selection and gradual rank increase heuristics are also elaborated to improve performance. We also devise an imputation scheme for TC based on soft thresholding of a Tucker model core and illustrate its utility in completing real-world road traffic data acquired by an intelligent transportation system.
Giraldo, Zuluaga Jhony Heriberto. „Graph-based Algorithms in Computer Vision, Machine Learning, and Signal Processing“. Electronic Thesis or Diss., La Rochelle, 2022. http://www.theses.fr/2022LAROS037.
Graph representation learning and its applications have gained significant attention in recent years. Notably, Graph Neural Networks (GNNs) and Graph Signal Processing (GSP) have been extensively studied. GNNs extend the concepts of convolutional neural networks to non-Euclidean data modeled as graphs. Similarly, GSP extends the concepts of classical digital signal processing to signals supported on graphs. GNNs and GSP have numerous applications such as semi-supervised learning, point cloud semantic segmentation, prediction of individual relations in social networks, modeling proteins for drug discovery, and image and video processing. In this thesis, we propose novel approaches in video and image processing, GNNs, and recovery of time-varying graph signals. Our main motivation is to use the geometrical information that we can capture from the data to avoid data-hungry methods, i.e., learning with minimal supervision. All our contributions rely heavily on the developments of GSP and spectral graph theory. In particular, the sampling and reconstruction theory of graph signals plays a central role in this thesis. The main contributions of this thesis are summarized as follows: 1) we propose new algorithms for moving object segmentation using concepts of GSP and GNNs, 2) we propose a new algorithm for weakly-supervised semantic segmentation using hypergraph neural networks, 3) we propose and analyze GNNs using concepts from GSP and spectral graph theory, and 4) we introduce a novel algorithm based on the extension of a Sobolev smoothness function for the reconstruction of time-varying graph signals from discrete samples.
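Contribution 4 above rests on the sampling-and-reconstruction theory of graph signals. As a toy sketch (the path graph, signal, sample set, and step size are invented for illustration; the thesis extends a Sobolev smoothness term to time-varying signals), missing vertex values of a smooth graph signal can be recovered from a few samples by minimizing the Laplacian quadratic form while keeping the sampled values fixed:

```python
# Toy graph-signal reconstruction: fill in unsampled vertices of a path
# graph by minimizing the Dirichlet energy sum_(a,b) (x[a]-x[b])^2
# with projected gradient descent (sampled values stay fixed).
edges = [(i, i + 1) for i in range(9)]          # path graph on 10 nodes
true = [i / 9.0 for i in range(10)]             # smooth ground-truth signal
samples = {0: true[0], 4: true[4], 9: true[9]}  # known vertex values

x = [0.0] * 10
for v, val in samples.items():
    x[v] = val

for _ in range(2000):
    # gradient of the Dirichlet energy: each edge pulls endpoints together
    grad = [0.0] * 10
    for a, b in edges:
        d = x[a] - x[b]
        grad[a] += 2 * d
        grad[b] -= 2 * d
    for v in range(10):
        if v not in samples:                    # projection: keep samples fixed
            x[v] -= 0.1 * grad[v]

err = max(abs(x[v] - true[v]) for v in range(10))
print(err)  # small: a linear signal is exactly Dirichlet-minimal here
```

The smoothness prior does all the work: between fixed samples the minimizer is the piecewise-linear interpolant, which here coincides with the ground truth.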
Nagel, Kristine Susanne. „Using Availability Indicators to Enhance Context-Aware Family Communication Applications“. Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11547.
Fan, Mingdong. „THREE INITIATIVES ADDRESSING MRI PROBLEMS“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1585863940821908.
Charvát, Michal. „System for People Detection and Localization Using Thermal Imaging Cameras“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-432478.
Zhang, Zhongfei. „Three-dimensional reconstruction under varying constraints on camera geometry for robotic navigation scenarios“. 1996. https://scholarworks.umass.edu/dissertations/AAI9619460.
Lu, Cheng-Chung, und 呂正中. „A Study of Intelligent Resource Allocating Decision Model of Recovery Planning In a Disaster Reconstruction“. Thesis, 2006. http://ndltd.ncl.edu.tw/handle/42387641116555572946.
Fu Jen Catholic University
Department of Information Management
94
Because disasters occur in ever shorter cycles and are increasingly unpredictable, prevention mechanisms alone are insufficient, and reconstruction and recovery systems must be relied upon. This study therefore focuses on recovery planning for the different disaster areas affected by a calamity. By combining information technology and quantitative methods, a decision model is developed to help decision-makers judge the priority and severity of disaster areas and allocate an appropriate quantity of resources from providers to disaster areas under conditions where resources are limited and spatially scattered. Moreover, the recovery plan is treated as a long-term, interactive decision-making process rather than a one-time decision. Critical factors identified from the literature are used to construct an analytic hierarchy process (AHP), a multiple-criteria decision-making method that helps decision-makers judge severity and obtain the weight of each disaster area. Several kinds of data, such as the resource demand of each disaster area, the resource supply of each provider, and the distribution and routing times from sources to destinations, are provided in a prototype system. A multi-objective recovery and allocation decision model is solved using these parameters and the well-known software Lingo 9.0 to allocate recovery resources effectively and optimally in disaster reconstruction. Further, an evolutionary prototyping method is used to develop a web-based prototype system for managerial analysis. Finally, several experimental designs and simulation settings are implemented to obtain management implications and guidelines; what-if analysis with the model can improve decision quality and shorten decision-making time.
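The AHP step described above reduces to computing the principal eigenvector of a pairwise-comparison matrix, whose normalized entries are the severity weights of the disaster areas. A minimal sketch (the 3-area comparison matrix below is invented for illustration, not taken from the thesis), using power iteration:

```python
# AHP priority weights via power iteration on a pairwise-comparison
# matrix. Entry A[i][j] expresses how much more severe area i is than
# area j on the usual 1-9 scale; A[j][i] is its reciprocal.
A = [
    [1.0,     3.0,     5.0],   # area 0 compared to areas 0, 1, 2
    [1.0 / 3, 1.0,     2.0],
    [1.0 / 5, 1.0 / 2, 1.0],
]

w = [1.0 / 3] * 3                     # start from uniform weights
for _ in range(50):                   # power iteration toward the
    w = [sum(A[i][j] * w[j] for j in range(3)) for i in range(3)]
    s = sum(w)                        # ...principal eigenvector
    w = [x / s for x in w]            # normalize so weights sum to 1

print([round(x, 3) for x in w])       # severity weights, largest first
```

The resulting vector ranks the areas by severity; in the full model these weights would then feed the multi-objective allocation as coefficients of each area's objective term.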
Wang, Chien-Min, und 王建民. „The Research of Intelligent Information Integration of Creative Concepts: The Reconstruction of Elegance for Ancient Taijiang“. Thesis, 2012. http://ndltd.ncl.edu.tw/handle/75866051038735733472.
Nan Jeon Institute of Technology
In-service Master's Program, Graduate Institute of Engineering Technology
100
In the 21st century, because information is easily accessed over the internet, the government is actively promoting the features of local areas and further improving the environment and humanities through information technology. Recently, a program named "I236 life technology plan" was proposed, in the hope that its implementation can improve people's quality of life. Furthermore, through applications of information technology combining hardware and software, the features of each local area can be presented to anyone interested in learning more about them. The scope of this research is the re-establishment of the picture of ancient Taijiang. Although the government designated the Sicao Wildlife Refuge as Taijiang National Park a few years ago to maintain its historical landscapes, some characteristic places such as Anping Castle and Fort Provintia (Sakam Tower) could not be included in the park. Taijiang is the place where people who migrated from mainland China, crossing the dangerous "black ditch", settled and prospered during the periods of the Dutch occupation, Koxinga, and the Qing dynasty. The purpose of this research is to overcome the obstacles of reality and life through intelligent information integration of creative concepts, achieve a balance between reality and history, and present the whole picture of ancient Taijiang.
By looking back chronologically through the Dutch occupation, Koxinga, and Qing dynasty periods, we identified four characteristic areas established under these humanistic backgrounds to build up the picture of ancient Taijiang: Anping Castle, the trading center during the Dutch occupation; Fort Provintia, the city hall during the Koxinga period; the Luerhmen house, the transportation center during the early Qing dynasty; and the Eternal Golden Castle, the military center during the late Qing dynasty. Using a combination of digital information technologies such as a creative eBook, movie clips, and on-site photo exploration, we integrate the parts missing from the current Taijiang National Park into a whole picture of ancient Taijiang. As a result, we hope people will be able to know more about ancient Taijiang through the work done in this research.
„A Lagrangian reconstruction of a class of local search methods“. 1998. http://library.cuhk.edu.hk/record=b5889537.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1998.
Includes bibliographical references (leaves 105-112).
Abstract also in Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Constraint Satisfaction Problems --- p.2
Chapter 1.2 --- Constraint Satisfaction Techniques --- p.2
Chapter 1.3 --- Motivation of the Research --- p.4
Chapter 1.4 --- Overview of the Thesis --- p.5
Chapter 2 --- Related Work --- p.7
Chapter 2.1 --- Min-conflicts Heuristic --- p.7
Chapter 2.2 --- GSAT --- p.8
Chapter 2.3 --- Breakout Method --- p.8
Chapter 2.4 --- GENET --- p.9
Chapter 2.5 --- E-GENET --- p.9
Chapter 2.6 --- DLM --- p.10
Chapter 2.7 --- Simulated Annealing --- p.11
Chapter 2.8 --- Genetic Algorithms --- p.12
Chapter 2.9 --- Tabu Search --- p.12
Chapter 2.10 --- Integer Programming --- p.13
Chapter 3 --- Background --- p.15
Chapter 3.1 --- GENET --- p.15
Chapter 3.1.1 --- Network Architecture --- p.15
Chapter 3.1.2 --- Convergence Procedure --- p.18
Chapter 3.2 --- Classical Optimization --- p.22
Chapter 3.2.1 --- Optimization Problems --- p.22
Chapter 3.2.2 --- The Lagrange Multiplier Method --- p.23
Chapter 3.2.3 --- Saddle Point of Lagrangian Function --- p.25
Chapter 4 --- Binary CSP's as Zero-One Integer Constrained Minimization Problems --- p.27
Chapter 4.1 --- From CSP to SAT --- p.27
Chapter 4.2 --- From SAT to Zero-One Integer Constrained Minimization --- p.29
Chapter 5 --- A Continuous Lagrangian Approach for Solving Binary CSP's --- p.33
Chapter 5.1 --- From Integer Problems to Real Problems --- p.33
Chapter 5.2 --- The Lagrange Multiplier Method --- p.36
Chapter 5.3 --- Experiment --- p.37
Chapter 6 --- A Discrete Lagrangian Approach for Solving Binary CSP's --- p.39
Chapter 6.1 --- The Discrete Lagrange Multiplier Method --- p.39
Chapter 6.2 --- Parameters of CSVC --- p.43
Chapter 6.2.1 --- Objective Function --- p.43
Chapter 6.2.2 --- Discrete Gradient Operator --- p.44
Chapter 6.2.3 --- Integer Variables Initialization --- p.45
Chapter 6.2.4 --- Lagrange Multipliers Initialization --- p.46
Chapter 6.2.5 --- Condition for Updating Lagrange Multipliers --- p.46
Chapter 6.3 --- A Lagrangian Reconstruction of GENET --- p.46
Chapter 6.4 --- Experiments --- p.52
Chapter 6.4.1 --- Evaluation of LSDL(genet) --- p.53
Chapter 6.4.2 --- Evaluation of Various Parameters --- p.55
Chapter 6.4.3 --- Evaluation of LSDL(max) --- p.63
Chapter 6.5 --- Extension of LSDL --- p.66
Chapter 6.5.1 --- Arc Consistency --- p.66
Chapter 6.5.2 --- Lazy Arc Consistency --- p.67
Chapter 6.5.3 --- Experiments --- p.70
Chapter 7 --- Extending LSDL for General CSP's: Initial Results --- p.77
Chapter 7.1 --- General CSP's as Integer Constrained Minimization Problems --- p.77
Chapter 7.1.1 --- Formulation --- p.78
Chapter 7.1.2 --- Incompatibility Functions --- p.79
Chapter 7.2 --- The Discrete Lagrange Multiplier Method --- p.84
Chapter 7.3 --- A Comparison between the Binary and the General Formulation --- p.85
Chapter 7.4 --- Experiments --- p.87
Chapter 7.4.1 --- The N-queens Problems --- p.89
Chapter 7.4.2 --- The Graph-coloring Problems --- p.91
Chapter 7.4.3 --- The Car-Sequencing Problems --- p.92
Chapter 7.5 --- Inadequacy of the Formulation --- p.94
Chapter 7.5.1 --- Insufficiency of the Incompatibility Functions --- p.94
Chapter 7.5.2 --- Dynamic Illegal Constraint --- p.96
Chapter 7.5.3 --- Experiments --- p.97
Chapter 8 --- Concluding Remarks --- p.100
Chapter 8.1 --- Contributions --- p.100
Chapter 8.2 --- Discussions --- p.102
Chapter 8.3 --- Future Work --- p.103
Bibliography --- p.105
„Low to High Dimensional Modality Reconstruction Using Aggregated Fields of View“. Master's thesis, 2019. http://hdl.handle.net/2286/R.I.54924.
Dissertation/Thesis
Master's Thesis, Computer Engineering, 2019
„Locally Adaptive Stereo Vision Based 3D Visual Reconstruction“. Doctoral diss., 2017. http://hdl.handle.net/2286/R.I.44195.
Dissertation/Thesis
Doctoral Dissertation, Electrical Engineering, 2017
Schöning, Julius. „Interactive 3D Reconstruction“. Doctoral thesis, 2018. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2018052317188.
Yeh, Fang-Tzu, und 葉芳慈. „Three-dimensional Reconstruction System of Intelligent Automatic Detection of Nasal Vestibule and Nasal Septum in Computed Tomography Images“. Thesis, 2012. http://ndltd.ncl.edu.tw/handle/16376431035084518654.
National Taiwan University of Science and Technology
Graduate Institute of Automation and Control
100
This study, "Three-dimensional Reconstruction System of Intelligent Automatic Detection of Nasal Vestibule and Nasal Septum in Computed Tomography Images", combines image processing technology for capturing the computed tomography image signal with a back-propagation network for automatically extracting the nasal vestibule and nasal septum areas in computed tomography images. It then reconstructs three-dimensional images by combining the two areas with the skull and nose to measure three-dimensional information. Present medical diagnosis often relies on manually selecting reference areas in computed tomography images and using software to conduct three-dimensional reconstruction and measurement of the selected area for pre-operative judgment. Therefore, this study developed a three-dimensional reconstruction system that intelligently and automatically detects the nasal vestibule and nasal septum in computed tomography images. The proposed system employs image processing technology combined with a back-propagation network to segment the nasal vestibule and nasal septum areas, marking each computed tomography image individually for three-dimensional reconstruction of those areas. Finally, representative points of three operationally risky areas, namely the brain, the inner side of the eye rim, and the lower edge of the eye rim, were marked in order to measure the distances between intranasal structures and the marked points. The system can assist doctors in pre-operative analysis and judgment with more nasal information, reducing errors caused by human factors. The overall detection rate of the proposed system reached 99.7%.
The three-dimensional image presentation combined with the skull and nose has been confirmed as a valuable reference by doctors of the Department of Otolaryngology - Head and Neck Surgery at Tri-Service General Hospital. The findings can facilitate doctors' pre-operative diagnosis and judgment, and help improve medical quality and the development of the medical industry.