To see the other types of publications on this topic, follow the link: Reconstruction intelligente.

Dissertations on the topic "Reconstruction intelligente"

Consult the top 50 dissertations for research on the topic "Reconstruction intelligente".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read its online abstract, provided the corresponding parameters are available in the work's metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Bonvard, Aurélien. „Algorithmes de détection et de reconstruction en aveugle de code correcteurs d'erreurs basés sur des informations souples“. Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0178.

Abstract:
Recent decades have seen the rise of digital communications. This has led to a proliferation of communication standards, requiring greater adaptability of communication systems. One way to make these systems more flexible is to design an intelligent receiver able to retrieve all the parameters of the transmitter from the received signal. In this manuscript, we are interested in the blind identification of error-correcting codes. We propose original methods based on the calculation of Euclidean distances between noisy symbol sequences. First, a classification algorithm detects the presence of a code and then identifies its codeword length. A second algorithm, based on the number of collisions, identifies the length of the information words. We then propose another method that uses minimum Euclidean distances to identify the length of a block code. Finally, a method for reconstructing the dual code of an error-correcting code is presented.
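As a rough illustration of the distance-based detection idea, the sketch below scores candidate codeword lengths by nearest-neighbour Euclidean distances between blocks of noisy symbols; block synchronisation, the toy codebook, and all names are assumptions for the example, not the thesis's algorithm.

```python
import numpy as np

def score_codeword_length(symbols, n_candidates):
    """For each candidate codeword length n, split the noisy symbol
    stream into length-n blocks and measure the mean distance of each
    block to its nearest neighbour. A true codeword length tends to
    produce abnormally small nearest-neighbour distances, because the
    blocks are then noisy copies of a finite set of codewords."""
    scores = {}
    for n in n_candidates:
        m = len(symbols) // n
        blocks = symbols[:m * n].reshape(m, n)
        # pairwise Euclidean distances between all blocks
        d = np.linalg.norm(blocks[:, None, :] - blocks[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        scores[n] = d.min(axis=1).mean()
    return scores

# toy usage: repetitions of a small codebook plus Gaussian noise
rng = np.random.default_rng(0)
codebook = rng.choice([-1.0, 1.0], size=(8, 16))      # 8 codewords, n = 16
stream = codebook[rng.integers(0, 8, 200)].ravel()
stream += 0.3 * rng.normal(size=stream.size)
print(score_codeword_length(stream, [12, 14, 16, 18]))  # minimum near n = 16
```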
2

El, Hage Josiana. „Smart Reconstruction after a natural or man-made disaster : Feedback, methodology, and application to the Beirut Harbor Disaster“. Electronic Thesis or Diss., Université de Lille (2022-....), 2024. http://www.theses.fr/2024ULILN015.

Abstract:
The objective of this study is to develop a smart framework for post-disaster reconstruction of buildings, with a focus on the Beirut explosion as a case study, due to its complex geopolitical context, extensive damage, and socio-economic crises. The study spans physical, economic, and social dimensions to prioritize marginalized community groups in the recovery efforts and advocate for the "Build-Back-Better" approach, following the recommendations of the Sendai Framework for Disaster Risk Reduction. To attain these objectives, the thesis starts with a literature review (Chapter 1) to identify research gaps and existing post-disaster reconstruction frameworks. Drawing from this review, a research methodology is formulated to address these gaps, with emphasis on the city of Beirut in Lebanon (Chapter 2). It includes a study of the local context, the data analysis methods, and an understanding of the challenges facing post-disaster reconstruction, with a focus on Beirut. A comprehensive framework for assessing post-disaster buildings in Beirut following the explosion is developed (Chapter 3), comprising 12 indicators spanning the physical attributes of each building and the socio-economic profile of its residents. This framework facilitates the calculation of a Priority Index for a large set of damaged buildings in Beirut (Chapter 4). The assessment assists decision-makers and stakeholders involved in the reconstruction process in managing and monitoring building renovation projects while encouraging the engagement of the affected community. It prioritizes the most vulnerable individuals, thereby fostering a people-centric approach to recovery, underpinned by the principles of Build-Back-Better and inclusivity. The data-based framework and results presented in this thesis form a step forward in the post-disaster reconstruction field. However, this research shows some limitations, including the collection of data via crowdsourcing and the lack of public participation, the dynamics and complexity of the post-disaster context, and the focus on the building sector only. Future research could focus on (i) considering all the sectors affected by the disaster, (ii) investigating social acceptance of participating in the data collection process, and (iii) diversifying the data collection sources.
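To make the Priority Index idea concrete, here is a minimal sketch of a weighted sum of min-max-normalised indicators; the three indicator names and the weights are hypothetical stand-ins for the twelve indicators the thesis actually defines.

```python
import numpy as np

# Hypothetical indicators and weights, illustrative only; the thesis
# combines 12 indicators covering building damage and residents'
# socio-economic vulnerability.
INDICATORS = ["damage_level", "occupants", "income_vulnerability"]
WEIGHTS = np.array([0.5, 0.3, 0.2])  # must sum to 1

def priority_index(buildings):
    """buildings: array of shape (n_buildings, n_indicators), raw scores.
    Min-max normalise each indicator, then take a weighted sum, so that
    higher values flag buildings to renovate first."""
    x = np.asarray(buildings, dtype=float)
    lo, hi = x.min(axis=0), x.max(axis=0)
    normalised = (x - lo) / np.where(hi > lo, hi - lo, 1.0)
    return normalised @ WEIGHTS

print(priority_index([[3, 10, 0.8], [1, 2, 0.1], [2, 6, 0.9]]))
```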
3

Mallik, Mohammed Tariqul Hassan. „Electromagnetic Field Exposure Reconstruction by Artificial Intelligence“. Electronic Thesis or Diss., Université de Lille (2022-....), 2023. https://pepite-depot.univ-lille.fr/ToutIDP/EDENGSYS/2023/2023ULILN052.pdf.

Abstract:
The topic of exposure to electromagnetic fields has received much attention in light of the current deployment of the fifth-generation (5G) cellular network. Despite this, accurately reconstructing the electromagnetic field across a region remains difficult due to a lack of sufficient data. In situ measurements are of great interest, but their viability is limited, making it difficult to fully understand the field dynamics. Despite the great interest in localized measurements, there are still untested regions that prevent them from providing a complete exposure map. Research has explored reconstruction strategies from observations at certain localized sites or from sensors distributed in space, using techniques based on geostatistics and Gaussian processes. In particular, recent initiatives have focused on the use of machine learning and artificial intelligence for this purpose. To overcome these problems, this work proposes new methodologies to reconstruct EMF exposure maps in a specific urban area in France. The main objective is to reconstruct electromagnetic-wave exposure maps from data provided by sensors distributed in space. We propose two machine-learning methodologies to estimate exposure to electromagnetic waves. In the first method, the exposure reconstruction problem is cast as an image-to-image translation task. First, the sensor data are converted into an image, and the corresponding reference image is generated using a ray-tracing-based simulator. We propose a conditional generative adversarial network (cGAN), conditioned on the topology of the environment, to estimate exposure maps from these images. The model is trained on sensor-map images while the environment is given as conditional input to the cGAN. Furthermore, electromagnetic field mapping based on the generative adversarial network is compared to simple Kriging. The results show that the proposed method produces accurate estimates and is a promising solution for exposure-map reconstruction. However, producing reference data is a complex task, as it requires accounting for the number of active base stations of different technologies and operators, whose network configuration (e.g., the powers and beams used by the base stations) is unknown. Additionally, evaluating these maps requires time and expertise. To address these issues, we define the problem as a missing-data imputation task. The proposed method trains an infinite-width neural network to estimate exposure to electromagnetic fields. It is a promising solution for exposure-map reconstruction that does not require large training sets. The proposed method is compared with other machine-learning approaches based on UNet networks and conditional generative adversarial networks, with competitive results.
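Since the thesis uses simple Kriging as the baseline for its learned exposure maps, a minimal simple-Kriging interpolator is sketched below; the Gaussian covariance model, its parameters, and the zero-mean assumption are illustrative choices, not the thesis's configuration.

```python
import numpy as np

def simple_kriging(xy_obs, z_obs, xy_query, sill=1.0, length=50.0, nugget=1e-6):
    """Minimal simple-Kriging baseline with a Gaussian covariance model.
    The covariance parameters (sill, length) are illustrative; in practice
    they are fitted to an empirical variogram. Assumes a known mean of 0."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-(d / length) ** 2)
    K = cov(xy_obs, xy_obs) + nugget * np.eye(len(xy_obs))
    k = cov(xy_obs, xy_query)
    weights = np.linalg.solve(K, k)          # (n_obs, n_query)
    return weights.T @ z_obs                 # kriged field values

# usage: interpolate exposure (V/m) at grid points from scattered sensors
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
field = np.array([0.8, 1.2, 0.5])
grid = np.array([[50.0, 50.0], [10.0, 90.0]])
print(simple_kriging(sensors, field, grid))
```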
4

Kentzoglanakis, Kyriakos. „Reconstructing gene regulatory networks : a swarm intelligence framework“. Thesis, University of Portsmouth, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.523619.

5

Zhao, Yu. „Channel Reconstruction for High-Rank User Equipment“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-256064.

Abstract:
In a fifth-generation (5G) massive Multiple-Input Multiple-Output (MIMO) radio network, Channel State Information (CSI) plays a central role in algorithm design and system evaluation. However, acquiring CSI consumes system resources (e.g., time and frequency), which in turn decreases link utilization, i.e., fewer resources are left for actual data transmission. This problem is more apparent when User Equipment terminals have multiple antennas and it would be beneficial to obtain CSI between the Base Station and the different User Equipment antennas, e.g., for high-rank (multi-stream) transmission towards this User Equipment. Typically, in current industrial implementations, in order not to waste system resources, CSI is obtained for only one of the User Equipment antennas, which limits the downlink transmission rank to 1. Hence, we propose a method based on deep learning. In this thesis, a multi-layer perceptron and a convolutional neural network are implemented. Data are generated by a MATLAB simulator using parameters provided by Huawei Technologies Co., Ltd. Finally, the model proposed by this project provides the best performance compared to the baseline algorithms.
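A minimal sketch of the deep-learning approach described, assuming a small PyTorch multi-layer perceptron that maps the CSI measured on one UE antenna to an estimate for another antenna; the dimensions and architecture are assumptions, not the thesis's model.

```python
import torch
import torch.nn as nn

N_SUBCARRIERS = 64  # CSI vector length per antenna (real/imag stacked)

# Small MLP: CSI of antenna 0 in, estimated CSI of antenna 1 out.
model = nn.Sequential(
    nn.Linear(2 * N_SUBCARRIERS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 2 * N_SUBCARRIERS),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(csi_antenna0, csi_antenna1):
    """One supervised step on simulator-generated channel pairs."""
    optimiser.zero_grad()
    loss = loss_fn(model(csi_antenna0), csi_antenna1)
    loss.backward()
    optimiser.step()
    return loss.item()

# usage with stand-in random data shaped like simulator output
x = torch.randn(32, 2 * N_SUBCARRIERS)
y = torch.randn(32, 2 * N_SUBCARRIERS)
print(train_step(x, y))
```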
6

Elias, Rimon. „Towards obstacle reconstruction through wide baseline set of images“. Thesis, University of Ottawa (Canada), 2004. http://hdl.handle.net/10393/29104.

Abstract:
In this thesis, we handle the problem of extracting 3D information from multiple images of a robotic work site in the context of teleoperation. A human operator determines the virtual path of a robotic vehicle, and our mission is to provide him with the sequence of images that would be seen by the teleoperated robot moving along this path. The environment in which the robotic vehicle moves has a planar ground surface. In addition, a set of wide-baseline images is available for the work site, which implies that a small number of points may be visible in more than two views. Moreover, the camera parameters are known only approximately: according to the sensor error margins, the parameters read lie within some range. Obstacles of different shapes are present in such an environment. In order to generate the sequence, the ground plane as well as the obstacles must be represented. The perspective image of the ground plane can be obtained through a homography matrix, computed from the virtual camera parameters and the overhead view of the work site. To represent obstacles, we suggest different methods, both volumetric and planar. Our algorithm for representing obstacles starts with detecting junctions, using a new fast junction detection operator we propose. This operator provides the location of each junction as well as the orientations of the edges surrounding it. Junctions belonging to the obstacles are distinguished from those belonging to the ground plane by calculating the inter-image homography matrices. Fundamental matrices relating the images can be estimated roughly from the available camera parameters, and strips surrounding epipolar lines are used as the search range for detecting possible matches. We introduce a novel homographic correlation method to be applied among candidates by reconstructing the planes of the junctions in space. Two versions of homographic correlation, based on SAD and VNC, are proposed; both achieve matching results that outperform non-homographic correlation. The match set is then turned into a set of 3D points through triangulation. At this point, we propose a hierarchical structure to cluster points in space, which results in bounding boxes containing obstacles. A more accurate volumetric representation of each obstacle can be achieved through a voxelization approach. Another suggested representation treats obstacles as planar patches, via mapping between original and synthesized images. Finally, the steps of the different algorithms presented throughout the thesis are supported by examples that show the usefulness of our approaches.
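The following sketch illustrates the homographic correlation idea under stated assumptions: image B is first warped into image A's frame by the plane-induced homography, then a SAD score is computed between windows around a junction. OpenCV is used only for the warp; the function names and window size are hypothetical.

```python
import cv2
import numpy as np

def homographic_sad(img_a, img_b, H_ab, corner_a, size=15):
    """SAD correlation after homographic alignment.
    H_ab maps image A coordinates to image B coordinates, so image B is
    warped by the inverse homography to be viewed in A's frame before
    the two windows around `corner_a` are compared."""
    h, w = img_a.shape[:2]
    b_in_a = cv2.warpPerspective(img_b, np.linalg.inv(H_ab), (w, h))
    x, y = corner_a
    r = size // 2
    patch_a = img_a[y - r:y + r + 1, x - r:x + r + 1].astype(np.float32)
    patch_b = b_in_a[y - r:y + r + 1, x - r:x + r + 1].astype(np.float32)
    return float(np.abs(patch_a - patch_b).sum())  # lower = better match
```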
7

關福延 and Folk-year Kwan. „An intelligent approach to automatic medical model reconstruction from serial planar CT images“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31243216.

8

Gayed, Said Simone. „Skull reconstruction through shape completion“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24057/.

Abstract:
In this study, we present a shape completion approach to skull reconstruction. Our final goal is to reconstruct the complete mesh of a skull starting from its defective point cloud. Our approach is based on an existing deep neural network, suitably modified and trained to reconstruct a complete 3D point cloud from an incomplete one. The completed point clouds are then processed through a multi-step pipeline in order to reconstruct the original skull surface. Moreover, we analyze and refine the Sant'Orsola skull dataset, designing functional pipelines for its processing. On the test set, the proposed approach is able to complete missing areas effectively, reaching high accuracy in terms of the predicted point locations and a good qualitative approximation of the complete skull.
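As an illustrative accuracy metric for completed point clouds (not necessarily the exact metric used in the thesis), the symmetric Chamfer distance can be computed as follows.

```python
import numpy as np

def chamfer_distance(pred, target):
    """Symmetric Chamfer distance between two point clouds: for each
    point, the distance to its nearest neighbour in the other cloud,
    averaged in both directions. pred: (n, 3), target: (m, 3)."""
    d = np.linalg.norm(pred[:, None, :] - target[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# usage: a perfect prediction gives distance 0
pts = np.random.rand(100, 3)
print(chamfer_distance(pts, pts))  # 0.0
```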
9

Papadopoulos, Georgios. „Towards a 3D building reconstruction using spatial multisource data and computational intelligence techniques“. Thesis, Limoges, 2019. http://www.theses.fr/2019LIMO0084/document.

Abstract:
Building reconstruction from aerial photographs and other multi-source urban spatial data is a task endeavored using a plethora of automated and semi-automated methods ranging from point processes and classic image processing to laser scanning. In this thesis, an iterative relaxation system is developed based on the examination of the local context of each edge according to multiple spatial input sources (optical, elevation, shadow and foliage masks, as well as other pre-processed data, as elaborated in Chapter 6). All these multisource and multiresolution data are fused so that probable line segments or edges are extracted that correspond to prominent building boundaries. Two novel sub-systems have also been developed in this thesis. They were designed with the purpose of providing additional, more reliable information regarding building contours in a future version of the proposed relaxation system. The first is a deep convolutional neural network (CNN) method for the detection of building borders. In particular, the network is based on the state-of-the-art super-resolution model SRCNN (Dong C. L., 2015). It accepts aerial photographs depicting densely populated urban areas as well as their corresponding digital elevation maps (DEM). Training is performed using three variations of this urban data set and aims at detecting building contours through a novel super-resolved heteroassociative mapping. Another innovation of this approach is the design of a modified custom loss layer named Top-N. In this variation, the mean square error (MSE) between the reconstructed output image and the provided ground-truth (GT) image of building contours is computed on the 2N image pixels with the highest values. Assuming that most of the N contour pixels of the GT image are also in the top 2N pixels of the reconstruction, this modification balances the two pixel categories and improves the generalization behavior of the CNN model. It is shown in the experiments that the Top-N cost function offers performance gains in comparison to standard MSE. Further improvement in the generalization ability of the network is achieved by using dropout. The second sub-system is a super-resolution deep convolutional network, which performs an enhanced-input associative mapping between input low-resolution and high-resolution images. This network has been trained with low-resolution elevation data and the corresponding high-resolution optical urban photographs. Such a resolution discrepancy between optical aerial/satellite images and elevation data is often the case in real-world applications. More specifically, low-resolution elevation data augmented by high-resolution optical aerial photographs are used with the aim of increasing the resolution of the elevation data. This is a unique super-resolution problem where it was found that many of the proposed general-image SR methods do not perform as well; in fact, they performed more poorly than a standard bicubic up-scaler. The network, aptly named building super-resolution CNN (BSRCNN), is trained using patches extracted from the aforementioned data. Results show that, in comparison with a classic bicubic upscale of the elevation data, the proposed implementation offers important improvement, as attested by modified PSNR and SSIM metrics. Finally, the relaxation system fuses together all these multisource data sources, comprising pre-processed optical data, elevation data, foliage masks, shadow masks and other pre-processed data, in an attempt to assign confidence values to each pixel belonging to a building contour. Confidence is augmented or decremented iteratively until the MSE error falls below a specified threshold or a maximum number of iterations has been executed. The confidence matrix can then be used to extract the true building contours via thresholding.
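A minimal sketch of the Top-N idea is given below, assuming the 2N pixels are selected from the reconstruction (one plausible reading of the description) and that N is fixed per batch; it is an interpretation, not the author's implementation.

```python
import torch

def top_n_mse(pred, gt, n_contour):
    """Top-N loss sketch: compute the MSE only over the 2N pixels of
    each reconstructed image with the highest values, so the few bright
    contour pixels are not swamped by the dark background.
    `n_contour` (N) is the expected number of contour pixels."""
    b = pred.shape[0]
    pred_flat = pred.reshape(b, -1)
    gt_flat = gt.reshape(b, -1)
    # indices of the 2N strongest responses in the reconstruction
    idx = pred_flat.topk(2 * n_contour, dim=1).indices
    diff = pred_flat.gather(1, idx) - gt_flat.gather(1, idx)
    return (diff ** 2).mean()
```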
10

Hajjdiab, Hassan. „Vision-based localization, map building and obstacle reconstruction in ground plane environments“. Thesis, University of Ottawa (Canada), 2004. http://hdl.handle.net/10393/29109.

Abstract:
The work described in this thesis develops the theory of 3D obstacle reconstruction and map building in the context of a robot, or a team of robots, equipped with a single on-board camera. The study comprises several problems representing the different phases of the actions taken by the robot. The thesis first studies the problem of image matching for wide-baseline images taken by moving robots: the ground plane is detected, and the inter-image homography induced by the ground plane is calculated. A novel technique for ground plane matching is introduced using the overhead-view transformation. The thesis then studies the simultaneous localization and map building (SLAM) problem for a team of robots collaborating in the same work site, and introduces a vision-based technique to solve it. The third problem studied is the 3D reconstruction of the obstacles lying on the ground surface, for which a geometric/variational level-set method is proposed.
11

Steinhauer, H. Joe. „A representation scheme for description and reconstruction of object configurations based on qualitative relations /“. Linköping : Department of Computer and Information Science, Linköpings universitet, 2008. http://www.bibl.liu.se/liupubl/disp/disp2008/tek1204s.pdf.

12

Ravelomanantsoa, Andrianiaina. „Approche déterministe de l'acquisition comprimée et la reconstruction des signaux issus de capteurs intelligents distribués“. Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0136/document.

Abstract:
A wireless body area network (WBAN) is a new class of wireless networks dedicated to monitoring human physiological parameters. It consists of small electronic devices, also called nodes, attached to or implanted in the human body. Each node comprises one or more sensors which measure physiological signals, such as the electrocardiogram or body heat, and the characteristics of the surrounding environment. These nodes are mainly subject to a significant energy constraint, since miniaturization has reduced the size of their batteries. A solution to minimize the energy consumption is to compress the sensed data before wirelessly transmitting them. Indeed, research has shown that most of the available energy is consumed by the wireless transmitter. Conventional compression methods are not suitable for WBANs because they involve a high computational power and increase the energy consumption. To overcome these limitations, we use compressed sensing (CS) to compress and recover the sensed data. We propose a simple and efficient encoder to compress the data. We also introduce a new algorithm to reduce the complexity of the recovery process. A partnership with the TEA (Technologie Ergonomie Appliquées) company allowed us to experimentally evaluate the performance of the proposed method, during which a numeric version of the encoder was used. We also developed and validated an analog version of the encoder using standard components.
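For orientation, here is a generic compressed-sensing round trip: a simple binary sensing matrix as the encoder and Orthogonal Matching Pursuit for recovery. The thesis's deterministic encoder and reduced-complexity reconstruction are variants of this idea; the code below is a textbook baseline, not their algorithm.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: recover a sparse signal x from
    compressed measurements y = Phi @ x by greedily picking the column
    most correlated with the residual, then re-solving least squares."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_s
    x = np.zeros(Phi.shape[1])
    x[support] = x_s
    return x

# usage: compress a 3-sparse signal of length 128 into 32 measurements
rng = np.random.default_rng(1)
n, m = 128, 32
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # simple binary encoder
x_true = np.zeros(n); x_true[[5, 40, 90]] = [1.0, -0.7, 0.5]
x_hat = omp(Phi, Phi @ x_true, sparsity=3)
print(np.allclose(x_hat, x_true, atol=1e-6))
```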
13

Tearse, Brandon. „Skald| Exploring Story Generation and Interactive Storytelling by Reconstructing Minstrel“. Thesis, University of California, Santa Cruz, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13423003.

Abstract:

Within the realm of computational story generation sits Minstrel, a decades-old system which was once used to explore the idea that, under the correct conditions, novel stories can be generated by taking an existing story and replacing some of its elements with similar ones found in a different story. This concept would eventually fall within the bounds of a strategy known as Case-Based Reasoning (CBR), in which problems are solved by recalling solutions to past problems (the cases), and mutating the recalled cases in order to create an appropriate solution to the current problem. This dissertation uses a rational reconstruction of Minstrel called Minstrel Remixed, a handful of upgraded variants of Minstrel Remixed, and a pair of similar but unrelated storytelling systems, to explore various characteristics of Minstrel-style storytelling systems.

In the first part of this dissertation I define the class of storytelling systems that are similar to Minstrel. This definition allows me to compare the features of these systems and discuss the various strengths and weaknesses of the variants. Furthermore, I briefly describe the rational reconstruction of Minstrel and then provide a detailed overview of the inner workings of the resulting system, Minstrel Remixed.

Once Minstrel Remixed was complete, I chose to upgrade it in order to explore the set of stories that it could produce and ways to alter or reconfigure the system with the goal of intentionally influencing the set of possible outputs. This investigation resulted in two new storytelling systems called Conspiracy Forever and Problem Planets. The second portion of this dissertation discusses these systems as well as a number of discoveries about the strengths and weaknesses of Minstrel-style storytelling systems in general. More specifically, I discuss that 1) a human reader's capacity for creating patterns out of an assortment of statements is incredibly useful, and output should be crafted to use this potential; 2) Minstrel-style storytelling tends to be amnesiac and does a poor job of creating long stories that remain cohesive; and 3) the domain that a storytelling system works from is incredibly important and must be well engineered. I continue by discussing the methods I discovered for cleaning up and maintaining a domain, and conclude with a section covering interviews with other storytelling-system creators about the strengths and weaknesses of their systems in light of my findings about Minstrel Remixed.

In the final portion of this document I create a framework of six interrelated attributes of stories (length, coherence, creativity, complexity, contextuality, and consolidation) and use it, along with the lessons discussed in the first two portions of the dissertation, to discuss the strengths and weaknesses of this class of CBR systems when applied to both static story generation and interactive storytelling. I discuss the finding that these systems seem to have a certain amount of power: although they can be tweaked to produce, for example, longer or more consolidated stories, these improvements always come with a reduction in complexity, coherence, or one of the other attributes. Further discussion of the output power of this class of storytelling systems revolves around the primary factor limiting their potential, namely the fact that they have no understanding of the symbols and patterns they manipulate. Finally, I introduce a number of strategies that I found fruitful for increasing the output power of the system and working around the lack of commonsense reasoning, chiefly improving the domain and adding new subsystems.

14

Fu, Bo. „Towards Intelligent Telerobotics: Visualization and Control of Remote Robot“. UKnowledge, 2015. http://uknowledge.uky.edu/cs_etds/40.

Abstract:
Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling or reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore ways of teleoperating using mixed-reality techniques. I propose a new type of display, the hybrid-reality display (HRD) system, which utilizes a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach lies in the fact that no wearable device is needed, providing minimal intrusiveness and accommodating the users' eyes during focusing; the field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents, incidents, and user research in military reconnaissance and similar settings, where teleoperation is compromised by the keyhole effect resulting from the limited field of view of the reference. The technical contribution of the proposed HRD system is the multi-system calibration, which mainly involves the motion sensor, projector, cameras, and robotic arm; given the purpose of the system, the calibration accuracy must be within millimeter level. Follow-up research on the HRD focused on high-accuracy 3D reconstruction of the replica via commodity devices, for better alignment of the video frames. Conventional 3D scanners either lack depth resolution or are very expensive, so we propose a structured-light-scanning 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection; extensive user studies prove the performance of our proposed algorithm. To compensate for the lack of synchronization between the local and remote stations, due to the latency introduced during data sensing and communication, a 1-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a linear equation group with a smoothing coefficient ranging from 0 to 1, and this predictive control algorithm can be further formulated by optimizing a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen, enabling two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts, such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that utilizes an auxiliary color+depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image.
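A toy sketch of the 1-step-ahead prediction described above, blending the delayed remote state with the operator command through a smoothing coefficient in [0, 1]; the blending form and the names are illustrative assumptions, not the thesis's formulation.

```python
def predict_next(remote_state: float, command: float, alpha: float) -> float:
    """alpha = 0 trusts only the delayed remote feedback;
    alpha = 1 trusts only the local operator command."""
    assert 0.0 <= alpha <= 1.0
    return alpha * command + (1.0 - alpha) * remote_state

# usage: operator commands 10.0 while the last confirmed state is 8.0
print(predict_next(8.0, 10.0, alpha=0.6))  # displayed pose: 9.2
```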
15

Lyubchyk, Leonid, Vladislav Kolbasin und Galina Grinberg. „Nonlinear dynamic system kernel based reconstruction from time series data“. Thesis, ТВіМС, 2015. http://repository.kpi.kharkov.ua/handle/KhPI-Press/36826.

Abstract:
A unified approach to the design of recurrent kernel identification algorithms is proposed. In order to fix the auxiliary vector dimension, a reduced-order model kernel method is proposed, and the corresponding recurrent identification algorithms are designed.
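To convey the kernel idea behind such identification schemes, the sketch below fits batch kernel ridge regression on a time-delay embedding of the series and predicts one step ahead; the recurrent and reduced-order aspects of the abstract are not reproduced here, and all names are illustrative.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row vectors of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(series, order=3, lam=1e-3):
    """Embed the series with `order` delays, fit kernel ridge regression
    to map each window to the next sample, and predict one step ahead."""
    X = np.array([series[i:i + order] for i in range(len(series) - order)])
    y = series[order:]
    K = rbf(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    last = series[-order:][None, :]
    return float((rbf(last, X) @ alpha)[0])  # one-step-ahead prediction

t = np.linspace(0, 8 * np.pi, 400)
print(fit_predict(np.sin(t)))  # close to the sine's next sample
```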
16

Tanner, Michael. „BOR2G : Building Optimal Regularised Reconstructions with GPUs (in cubes)“. Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:1928c996-d913-4d7e-8ca5-cf247f90aa0f.

Abstract:
Robots require high-quality maps - internal representations of their operating workspace - to localise, path plan, and perceive their environment. Until recently, these maps were restricted to sparse, 2D representations due to computational, memory, and sensor limitations. With the widespread adoption of high-quality sensors and graphics processors for parallel processing, these restrictions no longer apply: dense 3D maps are feasible to compute in real time (i.e., at the input sensor's frame rate). This thesis presents the theory and system to create large-scale dense 3D maps (i.e., to reconstruct continuous surface models) using only sensors found on modern autonomous automobiles: 2D laser, 3D laser, and cameras. In contrast to active RGB-D cameras, passive cameras produce noisy surface observations and must be regularised in both 2D and 3D to create accurate reconstructions. Unfortunately, straightforward application of 3D regularisation causes undesired surface interpolation and extrapolation in regions unexplored by the robot. We propose a method to overcome this challenge by informing the regulariser of the specific subsets of 3D surfaces upon which to operate. When combined with a compressed voxel grid data structure, we demonstrate our system fusing data from both laser and camera sensors to reconstruct 7.3 km of urban environments. We evaluate the quantitative performance of our proposed method on synthetic and real-world datasets - including Stanford's Burghers of Calais, the University of Oxford's RobotCar and Dense Reconstruction datasets, and the Karlsruhe Institute of Technology's KITTI - compared to ground-truth laser data. With only stereo camera inputs, our regulariser reduces the 3D reconstruction metric error by between 27% and 36%, with a final median accuracy ranging between 4 cm and 8 cm. Furthermore, by augmenting our system with object detection, we remove ephemeral objects (e.g., automobiles, bicycles, and pedestrians) from the input sensor data and target our regulariser to interpolate the occluded urban surfaces. Augmented with Kernel Conditional Density Estimation, our regulariser creates reconstructions with median errors between 5.64 cm and 9.24 cm. Finally, we present a machine-learning pipeline that learns, in an automatic fashion, to recognise the errors in dense reconstructions. Our system trains on image and laser data from a 3.8 km urban sequence. Using a separate 2.2 km urban sequence, our pipeline consistently identifies error-prone regions in the image-based dense reconstruction.
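As a toy stand-in for the regularised reconstruction described (not the thesis's GPU implementation), the following denoises a depth map by gradient descent on a data-fidelity term plus a smoothed total-variation prior, restricted by a mask to observed regions so that unexplored areas are not interpolated.

```python
import numpy as np

def tv_regularise(depth, mask, lam=0.1, eps=1e-3, iters=200, step=0.2):
    """Minimise 0.5*||d - depth||^2 + lam * TV(d) by gradient descent,
    updating only pixels where `mask` is 1 (observed surface)."""
    d = depth.copy()
    for _ in range(iters):
        gx = np.diff(d, axis=1, append=d[:, -1:])
        gy = np.diff(d, axis=0, append=d[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        # divergence of the normalised gradient field (TV subgradient)
        div = (gx / mag - np.roll(gx / mag, 1, axis=1)
               + gy / mag - np.roll(gy / mag, 1, axis=0))
        grad = (d - depth) - lam * div
        d -= step * grad * mask          # update only observed pixels
    return d

noisy = np.ones((64, 64)) + 0.05 * np.random.randn(64, 64)
print(tv_regularise(noisy, np.ones_like(noisy)).std() < noisy.std())
```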
17

Lacroix, Marie. „Méthodes pour la reconstruction, l'analyse et l'exploitation de réseaux tridimensionnels en milieu urbain“. Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066001/document.

Abstract:
Disasters like the ones that happened in Ghislenghien (Belgium), Ludwigshafen (Germany), or Lyon (France) have been attributed to excavations in the vicinity of gas pipelines. Though pipes are one of the safest methods of transportation for hazardous substances, each year many cases of damage to gas pipes are recorded in France. Most of them are due to works in the vicinity of the networks, and some illustrate the lack of reliability of the provided information. Concessionaires have to take stock of the situation and suggest areas of improvement, so that everyone can benefit from safer networks. To prevent such accidents, which involve workers and the public, the French authorities enforce two regulations: DT/DICT, which secures excavation works in the vicinity of networks, and Multifluide, which is concerned with securing hazardous networks against random events. Avoiding such accidents requires acquiring and controlling 3D information about the different city networks, especially buried ones, and preventive strategies have to be adopted. That is why working on networks, their visualization, and risk cartography, taking data fuzziness into account, is a recent and appropriate line of research. The software applications I develop should help utility and construction contractors focus on the prevention of hazardous events, thanks to accurate data sets for users and consumers, the definition of a geomatics network, and methods such as triangulation, element modeling, geometrical calculations, artificial intelligence, and virtual reality.
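One elementary geometric building block for such safety tooling is checking the clearance between a planned excavation point and a buried pipe segment; the sketch below is a generic illustration with assumed coordinates and buffer value, not the author's software.

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab, all 2D tuples, in metres
    in a local projected frame."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

SAFETY_BUFFER_M = 1.5  # illustrative clearance, not a regulatory value
dig_site = (12.0, 3.0)
pipe = ((0.0, 0.0), (30.0, 0.0))
d = point_segment_distance(dig_site, *pipe)
print(f"{d:.2f} m from pipe -> {'OK' if d > SAFETY_BUFFER_M else 'TOO CLOSE'}")
```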
18

Chen, Jiandan. „An Intelligent Multi Sensor System for a Human Activities Space---Aspects of Quality Measurement and Sensor Arrangement“. Doctoral thesis, Karlskrona : Blekinge Institute of Technology, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00487.

Abstract:
In our society with its aging population, the design and implementation of a high-performance distributed multi-sensor and information system for autonomous physical services becomes more and more important. In line with this, this thesis proposes an Intelligent Multi-Sensor System, IMSS, that surveys a human activities space to detect and identify a target for a specific service. The subject of this thesis covers three main aspects related to the set-up of an IMSS: an improved depth measurement and reconstruction method and its related uncertainty, a surveillance and tracking algorithm, and a way to validate and evaluate the proposed methods and algorithms. The thesis discusses how a model of the depth spatial quantisation uncertainty can be implemented to optimize the configuration of a sensor system so as to capture information about the target objects and their environment with the required specifications. The thesis introduces a dithering algorithm which significantly reduces the depth reconstruction uncertainty; the algorithm is implemented on a sensor-shifted stereo camera, thus simplifying depth reconstruction without compromising the common stereo field of view. To track multiple targets continuously, the Gaussian Mixture Probability Hypothesis Density, GM-PHD, algorithm is implemented with the help of vision and Radio Frequency Identification, RFID, technologies. The performance of the tracking algorithm in a vision system is evaluated with a circular-motion test signal. The thesis introduces constraints on the target space, the stereo-pair characteristics, and the depth reconstruction accuracy to optimize the vision system and to control the performance of surveillance and 3D reconstruction through integer linear programming. The human being within the activity space is modelled as a tetrahedron, and a field of view in spherical coordinates is used in the control algorithms. In order to integrate human behaviour and perception into a technical system, the proposed adaptive measurement method makes use of the Fuzzily Defined Variable, FDV. The FDV approach enables an estimation of a quality index, based on qualitative and quantitative factors, for image quality evaluation using a neural network. The thesis consists of two parts: Part I gives an overview of the applied theory and research methods used, and Part II comprises the eight papers included in the thesis.
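The depth spatial quantisation uncertainty mentioned above can be illustrated with the standard stereo relation z = f·b/d: a one-level disparity change produces a depth error that grows roughly with z². The sketch below uses assumed camera parameters, not values from the thesis.

```python
def depth_quantisation_error(z, focal_px=800.0, baseline_m=0.12, delta_d=1.0):
    """Approximate depth uncertainty at range z (metres) caused by one
    disparity quantisation level: since z = f*b/d, |dz| = z**2 * dd / (f*b)."""
    return (z ** 2) * delta_d / (focal_px * baseline_m)

for z in (1.0, 2.0, 4.0):   # error roughly quadruples when distance doubles
    print(z, depth_quantisation_error(z))
```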
19

Skorburg, Joshua August. „Human Nature and Intelligence: The Implications of John Dewey's Philosophy“. University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1333663233.

21

Lacroix, Marie. „Méthodes pour la reconstruction, l'analyse et l'exploitation de réseaux tridimensionnels en milieu urbain“. Electronic Thesis or Diss., Paris 6, 2017. http://www.theses.fr/2017PA066001.

Der volle Inhalt der Quelle
Annotation:
Des catastrophes comme celles de Ghislenghien (Belgique), Ludwigshafen (Allemagne), ou Lyon (France), ont été attribuées à des travaux à proximité de réseaux de gaz. Bien que les canalisations soient une des méthodes les plus sures de transport pour les substances dangereuses, chaque année plusieurs cas d'accidents sont enregistrés en France. La plupart d'entre eux sont attribués à des travaux à proximité des réseaux et certains illustrent le manque de fiabilité des informations fournies. Pour prévenir de tels accidents qui impliquent les ouvriers et le public, les autorités françaises ont mis en place deux réglementations : DT-DICT : pour la sécurisation des réseaux à proximité d'excavation ; Multifluide : pour celle des réseaux dangereux lors d'événements aléatoires. Eviter de tels accidents nécessite d'acquérir et de contrôler des informations 3D concernant les différents réseaux urbains, et particulièrement ceux enterrés. Des stratégies de prévention doivent alors être adoptées. Voilà pourquoi travailler sur les réseaux et leur visualisation et la cartographie des risques, en prenant en compte le flou, est une recherche récente et importante. Les applications logicielles que je développe devraient aider les services publics et les entrepreneurs à se concentrer sur la prévention des événements dangereux grâce à des ensembles de données précises pour les utilisateurs, la définition d'un réseau de géomatique, mais aussi des méthodes telles que la triangulation, la modélisation par éléments, les calculs géométriques, l'intelligence artificielle, la réalité virtuelle
Disasters like the ones that happened in Ghislenghien (Belgium), Ludwigshafen (Germany), or Lyon (France) have been attributed to excavations in the vicinity of gas pipelines. Though pipes are one of the safest means of transporting hazardous substances, many cases of damage to gas pipes are recorded in France each year. Most of them are due to works in the vicinity of the networks, and some illustrate the lack of reliability of the provided information. Concessionaires have to take stock of the situation and suggest areas of improvement, so that everyone can benefit from safer networks. To prevent such accidents, which involve workers and the public, the French authorities enforce two regulations: DT/DICT, a reform aimed at preventing network damage by securing excavation work, and Multifluide, a reform concerned with securing hazardous networks against random events. So, to avoid such accidents and other problems, it is necessary to acquire and control 3D information about the different city networks, especially buried ones. Preventive strategies have to be adopted. That is why working on networks, their visualization, and risk cartography, taking uncertainty into account, is a recent and relevant line of research. The software applications I develop should help utility and construction contractors focus on the prevention of hazardous events thanks to accurate data sets for users and consumers, the definition of a geomatics network, and also methods such as triangulation, element modeling, geometrical calculations, Artificial Intelligence, and Virtual Reality
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Tsenoglou, Theocharis. „Intelligent pattern recognition techniques for photo-realistic 3D modeling of urban planning objects“. Thesis, Limoges, 2014. http://www.theses.fr/2014LIMO0075.

Der volle Inhalt der Quelle
Annotation:
La modélisation 3D réaliste des bâtiments et d'autres objets de planification urbaine est un domaine de recherche actif dans les domaines de la modélisation 3D de villes, de la documentation du patrimoine, du tourisme virtuel, de la planification urbaine, de la conception architecturale et des jeux vidéo. La création de ces modèles nécessite très souvent la fusion de données provenant de diverses sources telles que les images optiques et les nuages de points issus de numérisations laser. Pour imiter de façon aussi réaliste que possible l'agencement, les activités et les fonctionnalités d'un environnement du monde réel, ces modèles doivent atteindre une qualité photo-réaliste et une précision élevées en termes de texture de surface (par exemple murs en pierre ou en brique) et de morphologie (par exemple fenêtres et portes) des objets réels. Le rendu à base d'images est une alternative pour répondre à ces exigences. Il utilise des photos, prises soit au niveau du sol, soit depuis les airs, pour ajouter de la texture au modèle 3D, y apportant ainsi du photo-réalisme. Pour un habillage complet en texture des grandes façades des modèles de blocs 3D, les images représentant la même façade doivent être convenablement combinées et correctement alignées avec le côté du bloc. Les photos doivent être fusionnées de manière appropriée afin que le résultat ne présente pas de discontinuités, de brusques variations d'éclairage ou des lacunes. Parce que ces images ont été prises, en général, dans des conditions de prise de vue différentes (angles de vue, facteurs de zoom, etc.), elles présentent des distorsions de perspective et des différences d'échelle, de luminosité, de contraste et de nuances de couleur, et doivent donc être corrigées ou ajustées. Ce processus nécessite l'extraction de caractéristiques clés du contenu visuel des images. Le but du travail proposé est de développer des méthodes fondées sur la vision par ordinateur et les techniques de reconnaissance des formes afin d'assister ce processus. En particulier, nous proposons une méthode pour extraire les lignes implicites à partir d'images de mauvaise qualité des bâtiments, y compris des vues de nuit où seules quelques fenêtres éclairées sont visibles, afin de déterminer des faisceaux de lignes parallèles 3D et leurs points de fuite correspondants. Puis, sur la base de ces informations, on peut parvenir à une meilleure fusion des images et à un meilleur alignement des images sur les façades des blocs
Realistic 3D modeling of buildings and other urban planning objects is an active research area in the field of 3D city modeling, heritage documentation, virtual touring, urban planning, architectural design and computer gaming. The creation of such models very often requires merging of data from diverse sources such as optical images and laser scan point clouds. To imitate as realistically as possible the layouts, activities and functionalities of a real-world environment, these models need to attain high photo-realistic quality and accuracy in terms of the surface texture (e.g. stone or brick walls) and morphology (e.g. windows and doors) of the actual objects. Image-based rendering is an alternative for meeting these requirements. It uses photos, taken either from ground level or from the air, to add texture to the 3D model, thus adding photo-realism. For full texture covering of large facades of 3D block models, images picturing the same façade need to be properly combined and correctly aligned with the side of the block. The pictures need to be merged appropriately so that the result does not present discontinuities, abrupt variations in lighting or gaps. Because these images were taken, in general, under various viewing conditions (viewing angles, zoom factors, etc.), they exhibit different perspective distortions, scaling, brightness, contrast and color shadings, and need to be corrected or adjusted. This process requires the extraction of key features from the visual content of the images. The aim of the proposed work is to develop methods based on computer vision and pattern recognition techniques in order to assist this process. In particular, we propose a method for extracting implicit lines from poor quality images of buildings, including night views where only some lit windows are visible, in order to specify bundles of 3D parallel lines and their corresponding vanishing points. Then, based on this information, one can achieve better merging of the images and better alignment of the images to the block façades. Another important application dealt with in this thesis is that of 3D modeling. We propose an edge-preserving interpolation, based on the mean shift algorithm, that operates jointly on the optical and the elevation data. It succeeds in increasing the resolution of the elevation data (LiDAR) while improving the quality (i.e. straightness) of their edges. At the same time, the color homogeneity of the corresponding imagery is also improved. The reduction of color artifacts in the optical data and the improvement in the spatial resolution of elevation data results in more accurate 3D building models. Finally, in the problem of building detection, the application of the proposed mean shift-based edge-preserving smoothing for increasing the quality of aerial/color images improves the performance of binary building vs. non-building pixel classification
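A standard building block for the vanishing-point step described above is the least-squares intersection of a line bundle in homogeneous coordinates: each image line through two points is their cross product, and the bundle's vanishing point is the direction that best annihilates all line equations. A minimal sketch with made-up segment endpoints (not the extraction method of the thesis):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(segments):
    """Least-squares intersection of a bundle of lines: the right singular
    vector of the stacked line equations with the smallest singular value."""
    lines = np.array([line_through(p, q) for p, q in segments])
    _, _, vt = np.linalg.svd(lines)
    vp = vt[-1]
    # Assumes the bundle is not parallel in the image (vp[2] != 0).
    return vp[:2] / vp[2]

# Hypothetical segments from lit windows, all from one bundle of 3D parallels.
segments = [((10, 100), (200, 120)),
            ((12, 200), (205, 210)),
            ((8, 300), (198, 295))]
print(vanishing_point(segments))
```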
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Dubovský, Peter. „Hezekiah and the Assyrian spies : reconstruction of the neo-Assyrian intelligence services and its significance for 2 Kings 18-19 /“. Roma : Ed. Pontificio istituto biblico, 2006. http://catalogue.bnf.fr/ark:/12148/cb410178717.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Steinhauer, Heike Joe. „A Representation Scheme for Description and Reconstruction of Object Configurations Based on Qualitative Relations“. Doctoral thesis, Linköpings universitet, CASL - Cognitive Autonomous Systems Laboratory, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12446.

Der volle Inhalt der Quelle
Annotation:
One reason Qualitative Spatial Reasoning (QSR) is becoming increasingly important to Artificial Intelligence (AI) is the need for a smooth ‘human-like’ communication between autonomous agents and people. The selected, yet general, task motivating the work presented here is the scenario of an object configuration that has to be described by an observer on the ground using only relational object positions. The description provided should enable a second agent to create a map-like picture of the described configuration in order to recognize the configuration on a representation from the survey perspective, for instance on a geographic map or in the landscape itself while observing it from an aerial vehicle. Either agent might be an autonomous system or a person. Therefore, the particular focus of this work lies on the necessity to develop description and reconstruction methods that are cognitively easy to apply for a person. This thesis presents the representation scheme QuaDRO (Qualitative Description and Reconstruction of Object configurations). Its main contributions are a specification and qualitative classification of information available from different local viewpoints into nine qualitative equivalence classes. This classification allows the preservation of information needed for reconstruction into a global frame of reference. The reconstruction takes place in an underlying qualitative grid with adjustable granularity. A novel approach for representing objects of eight different orientations by two different frames of reference is used. A substantial contribution to alleviate the reconstruction process is that new objects can be inserted anywhere within the reconstruction without the need for backtracking or re-reconstruction. In addition, an approach to reconstruct configurations from underspecified descriptions using conceptual neighbourhood-based reasoning and coarse object relations is presented.
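As a rough illustration of the kind of qualitative classification such a scheme relies on, the sketch below maps a metric relative position onto a small set of qualitative direction classes. The nine classes here are an illustrative stand-in, not QuaDRO's actual equivalence classes:

```python
def qualitative_relation(observer, target, eps=1.0):
    """Map a metric offset to one of nine qualitative classes:
    {left, same, right} x {front, same, back}."""
    dx = target[0] - observer[0]
    dy = target[1] - observer[1]
    horiz = "same" if abs(dx) <= eps else ("right" if dx > 0 else "left")
    depth = "same" if abs(dy) <= eps else ("front" if dy > 0 else "back")
    return horiz, depth

# Hypothetical configuration: two objects seen from the observer at the origin.
print(qualitative_relation((0, 0), (5, -2)))    # ('right', 'back')
print(qualitative_relation((0, 0), (0.5, 9)))   # ('same', 'front')
```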
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Fofi, David. „Navigation d'un véhicule intelligent à l'aide d'un capteur de vision en lumière structurée et codée“. Phd thesis, Université de Picardie Jules Verne, 2001. http://tel.archives-ouvertes.fr/tel-00005452.

Der volle Inhalt der Quelle
Annotation:
The work presented in this thesis aims to apply structured-light vision (a sensor combining a CCD camera and a light source) to the navigation of mobile robots. This led us to study various techniques and approaches from computer vision and image processing. First, we review the main types of structured-light coding and the main applications structured light finds in robotics, medical imaging and metrology, in order to identify the issues raised by the use that interests us. Second, we propose a processing method for structured-light images whose goal is to extract the segments of the image and decode the structuring pattern. We then detail a method for three-dimensional reconstruction from the uncalibrated sensor. Projecting a light pattern onto the environment imposes severe constraints on self-calibration techniques. It follows that the reconstruction must be carried out in two stages, from a single view and a single projection. We describe the projective reconstruction method used in our experiments and give a method for upgrading from projective to Euclidean space. By exploiting the geometric relations generated by the projection of the light pattern, we show that it is possible to find Euclidean constraints between the points of the scene that are independent of the objects in the scene. We also propose a quantitative obstacle-detection technique that estimates the map of the free space observed by the robot. Finally, we present a complete study of the sensor in motion and derive an algorithm that estimates its displacement in the environment from the matching of the planes that compose it.
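Once the sensor is calibrated, structured-light depth recovery of the kind described above reduces to intersecting each camera ray with the decoded light plane. A minimal sketch, with intrinsics and plane parameters that are entirely hypothetical:

```python
import numpy as np

def triangulate(pixel, K, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with the light plane
    {x : n.x + d = 0}; the camera sits at the origin looking down +z."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    t = -plane_d / (plane_n @ ray)    # solve n.(t * ray) + d = 0
    return t * ray                    # 3D point in camera coordinates

K = np.array([[700.0, 0, 320],        # hypothetical camera intrinsics
              [0, 700.0, 240],
              [0, 0, 1]])
n = np.array([1.0, 0.0, -0.2])        # hypothetical calibrated plane normal
d = 0.5                               # plane offset (metres)
print(triangulate((350, 250), K, n, d))
```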
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Ye, Mao. „MONOCULAR POSE ESTIMATION AND SHAPE RECONSTRUCTION OF QUASI-ARTICULATED OBJECTS WITH CONSUMER DEPTH CAMERA“. UKnowledge, 2014. http://uknowledge.uky.edu/cs_etds/25.

Der volle Inhalt der Quelle
Annotation:
Quasi-articulated objects, such as human beings, are among the most commonly seen objects in our daily lives. Extensive research has been dedicated to 3D shape reconstruction and motion analysis for this type of object for decades. A major motivation is their wide range of applications, such as in entertainment, surveillance and health care. Most existing studies relied on one or more regular video cameras. In recent years, commodity depth sensors have become more and more widely available. The geometric measurements delivered by the depth sensors provide significantly valuable information for these tasks. In this dissertation, we propose three algorithms for monocular pose estimation and shape reconstruction of quasi-articulated objects using a single commodity depth sensor. These three algorithms achieve shape reconstruction with increasing levels of granularity and personalization. We then further develop a method for highly detailed shape reconstruction based on our pose estimation techniques. Our first algorithm takes advantage of a motion database acquired with an active marker-based motion capture system. This method combines pose detection through nearest neighbor search with pose refinement via non-rigid point cloud registration. It is capable of accommodating different body sizes and achieves more than twice the accuracy of a previous state of the art on a publicly available dataset. The above algorithm performs frame-by-frame estimation and therefore is less prone to tracking failure. Nonetheless, it does not guarantee temporal consistency of either the skeletal structure or the shape, which could be problematic for some applications. To address this problem, we develop a real-time model-based approach for quasi-articulated pose and 3D shape estimation based on the Iterative Closest Point (ICP) principle with several novel constraints that are critical for the monocular scenario. In this algorithm, we further propose a novel method for automatic body size estimation that enables it to accommodate different subjects. Due to its local-search nature, the ICP-based method can be trapped in local minima in the case of some complex and fast motions. To address this issue, we explore the potential of using a statistical model for soft point-correspondence association. Towards this end, we propose a unified framework based on the Gaussian Mixture Model for joint pose and shape estimation of quasi-articulated objects. This method achieves state-of-the-art performance on various publicly available datasets. Based on our pose estimation techniques, we then develop a novel framework that achieves highly detailed shape reconstruction by only requiring the user to move naturally in front of a single depth sensor. Our experiments demonstrate reconstructed shapes with rich geometric details for various subjects with different apparel. Last but not least, we explore the applicability of our method in two real-world applications. First of all, we combine our ICP-based method with cloth simulation techniques for virtual try-on. Our system delivers the first promising 3D-based virtual clothing system. Secondly, we explore the possibility of extending our pose estimation algorithms to assist physical therapists in identifying their patients' movement dysfunctions that are related to injuries. Our preliminary experiments have demonstrated promising results by comparison with a gold-standard active marker-based commercial system.
Throughout the dissertation, we develop various state-of-the-art algorithms for pose estimation and shape reconstruction of quasi-articulated objects by leveraging the geometric information from depth sensors. We also demonstrate their great potential for different real-world applications.
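As background for the second algorithm, the bare ICP principle alternates nearest-neighbour correspondence with the best rigid alignment in the least-squares sense (Kabsch solution via SVD). The sketch below is the generic rigid-point-cloud version, not the constrained articulated variant developed in the dissertation:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                   # nearest-neighbour matches
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Toy data: a random cloud and a rotated, translated copy of it.
rng = np.random.default_rng(0)
dst = rng.normal(size=(200, 3))
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
src = dst @ Rz.T + np.array([0.2, -0.1, 0.05])
print(np.abs(icp(src, dst) - dst).max())           # should shrink towards 0
```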
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Alkindy, Bassam. „Combining approaches for predicting genomic evolution“. Thesis, Besançon, 2015. http://www.theses.fr/2015BESA2012/document.

Der volle Inhalt der Quelle
Annotation:
En bio-informatique, comprendre comment les molécules d'ADN ont évolué au cours du temps reste un problème ouvert et complexe. Des algorithmes ont été proposés pour résoudre ce problème, mais ils se limitent soit à l'évolution d'un caractère donné (par exemple, un nucléotide précis), ou se focalisent a contrario sur de gros génomes nucléaires (plusieurs milliards de paires de base), ces derniers ayant connu de multiples événements de recombinaison ; le problème étant NP-complet quand on considère l'ensemble de toutes les opérations possibles sur ces séquences, aucune solution n'existe à l'heure actuelle. Dans cette thèse, nous nous attaquons au problème de reconstruction des séquences ADN ancestrales en nous focalisant sur des chaînes nucléotidiques de taille intermédiaire, et ayant connu assez peu de recombinaison au cours du temps : les génomes de chloroplastes. Nous montrons qu'à cette échelle le problème de la reconstruction d'ancêtres peut être résolu, même quand on considère l'ensemble de tous les génomes chloroplastiques complets actuellement disponibles. Nous nous concentrons plus précisément sur l'ordre et le contenu ancestral en gènes, ainsi que sur les problèmes techniques que cette reconstruction soulève dans le cas des chloroplastes. Nous montrons comment obtenir une prédiction des séquences codantes d'une qualité telle qu'elle permette ladite reconstruction, puis comment obtenir un arbre phylogénétique en accord avec le plus grand nombre possible de gènes, sur lesquels nous pouvons ensuite appuyer notre remontée dans le temps, cette dernière étant en cours de finalisation. Ces méthodes, combinant l'utilisation d'outils déjà disponibles (dont la qualité a été évaluée) à du calcul haute performance, de l'intelligence artificielle et de la bio-statistique, ont été appliquées à une collection de plus de 450 génomes chloroplastiques
In Bioinformatics, understanding how DNA molecules have evolved over time remains an open and complex problem. Algorithms have been proposed to solve this problem, but they are limited either to the evolution of a given character (for example, a specific nucleotide), or conversely focus on large nuclear genomes (several billion base pairs), the latter having known multiple recombination events; the problem is NP-complete when one considers the set of all possible operations on these sequences, and no solution exists at present. In this thesis, we tackle the problem of reconstruction of ancestral DNA sequences by focusing on nucleotide chains of intermediate size that have experienced relatively little recombination over time: chloroplast genomes. We show that at this scale the problem of the reconstruction of ancestors can be resolved, even when considering the set of all complete chloroplast genomes currently available. We focus specifically on the ancestral gene order and content, as well as the technical problems this reconstruction raises in the case of chloroplasts. We show how to obtain a prediction of the coding sequences of a quality sufficient to allow said reconstruction, and how to obtain a phylogenetic tree in agreement with the largest possible number of genes, on which we can then base our journey back in time, the latter being finalized. These methods, combining the use of tools already available (the quality of which has been assessed) with high-performance computing, artificial intelligence and bio-statistics, were applied to a collection of more than 450 chloroplast genomes
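A classic building block for the ancestral gene-content question is small-parsimony reconstruction, where each gene's presence or absence is propagated up the phylogenetic tree with Fitch's algorithm. A toy sketch on a hypothetical four-taxon tree; the actual pipeline over complete chloroplast genomes is far more involved:

```python
def fitch_up(tree, states, node="root"):
    """Bottom-up Fitch pass: each node receives a set of candidate states.
    `tree` maps internal nodes to children; `states` holds leaf states,
    here 1/0 for presence/absence of one gene."""
    if node not in tree:                       # leaf
        return {states[node]}
    left, right = (fitch_up(tree, states, c) for c in tree[node])
    inter = left & right
    return inter if inter else left | right    # intersection, else union

# Hypothetical tree of four chloroplast genomes, for a single gene.
tree = {"root": ["a1", "a2"], "a1": ["sp1", "sp2"], "a2": ["sp3", "sp4"]}
presence = {"sp1": 1, "sp2": 1, "sp3": 1, "sp4": 0}
print(fitch_up(tree, presence))                # {1}: gene inferred ancestral
```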
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Garreau, Mireille. „Signal, image et intelligence artificielle : application à la décomposition du signal électromyographique et à la reconstruction et l'étiquetage 3-D de structures vasculaires“. Rennes 1, 1988. http://www.theses.fr/1988REN10090.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Garreau, Mireille. „Signal, image et intelligence artificielle application à la décomposition du signal électromyographique et à la reconstruction et l'étiquetage 3-D de structures vasculaires /“. Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb376138204.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Chaib-Draa, Brahim. „Contribution à la résolution distribuée de problème : une approche basée sur les états intentionnels“. Valenciennes, 1990. https://ged.uphf.fr/nuxeo/site/esupversions/e6f0d4f6-4f91-4c3b-afb6-46782c867250.

Der volle Inhalt der Quelle
Annotation:
The objective of this thesis is to develop rational interaction between intelligent systems, particularly when they have to solve a given task jointly. First, the problems inherent in the distributed solving of a given task are examined. At the heart of these problems lie, on the one hand, the difficulty of devising an organizational structure and its dynamics for cooperating intelligent systems and, on the other hand, the difficulty for these systems of knowing their degree of cooperation and their information-exchange policy. To address these problems, the first stage of this work explores the formalization of the basic principles that allow an intelligent system to behave rationally. Once the study of the basic principles is complete, the second stage develops an original method for planning actions in a multi-agent environment. A plan is thus no longer considered as a sequence of actions to be carried out in order to reach a given goal, but as a mental process involving the beliefs, commitments and intentions of each intelligent system. This form of planning then serves as the foundation for rational interaction between intelligent systems, in which communications expressed in a formal language and based on intentional states take place. Finally, this work is completed by the modeling, simulation and evaluation of cooperative strategies between autonomous intelligent systems, on a concrete example drawn from air traffic control.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Ozcelik, Furkan. „Déchiffrer le langage visuel du cerveau : reconstruction d'images naturelles à l'aide de modèles génératifs profonds à partir de signaux IRMf“. Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES073.

Der volle Inhalt der Quelle
Annotation:
Les grands esprits de l'humanité ont toujours été curieux de la nature de l'esprit, du cerveau et de la conscience. Par le biais d'expériences physiques et mentales, ils ont tenté de répondre à des questions difficiles sur la perception visuelle. Avec le développement des techniques de neuro-imagerie, les techniques de codage et de décodage neuronaux ont permis de mieux comprendre la manière dont nous traitons les informations visuelles. Les progrès réalisés dans les domaines de l'intelligence artificielle et de l'apprentissage profond ont également influencé la recherche en neuroscience. Avec l'émergence de modèles génératifs profonds tels que les autoencodeurs variationnels (VAE), les réseaux adversariaux génératifs (GAN) et les modèles de diffusion latente (LDM), les chercheurs ont également utilisé ces modèles dans des tâches de décodage neuronal telles que la reconstruction visuelle des stimuli perçus à partir de données de neuro-imagerie. La présente thèse fournit deux bases théoriques dans le domaine de la reconstruction des stimuli perçus à partir de données de neuro-imagerie, en particulier les données IRMf, en utilisant des modèles génératifs profonds. Ces bases théoriques se concentrent sur des aspects différents de la tâche de reconstruction visuelle que leurs prédécesseurs, et donc ils peuvent apporter des résultats précieux pour les études qui suivront. La première étude dans la thèse (décrite au chapitre 2) utilise un modèle génératif particulier appelé IC-GAN pour capturer les aspects sémantiques et réalistes de la reconstruction visuelle. La seconde étude (décrite au chapitre 3) apporte une nouvelle perspective sur la reconstruction visuelle en fusionnant les informations décodées à partir de différentes modalités (par exemple, le texte et l'image) en utilisant des modèles de diffusion latente récents. Ces études sont à la pointe de la technologie dans leurs domaines de référence en présentant des reconstructions très fidèles des différents attributs des stimuli. Dans nos deux études, nous proposons des analyses de régions d'intérêt (ROI) pour comprendre les propriétés fonctionnelles de régions visuelles spécifiques en utilisant nos modèles de décodage neuronal. Les relations statistiques entre les régions d'intérêt et les caractéristiques latentes décodées montrent que les zones visuelles précoces contiennent plus d'informations sur les caractéristiques de bas niveau (qui se concentrent sur la disposition et l'orientation des objets), tandis que les zones visuelles supérieures sont plus informatives sur les caractéristiques sémantiques de haut niveau. Nous avons également observé que les images optimales de ROI générées à l'aide de nos techniques de reconstruction visuelle sont capables de capturer les propriétés de sélectivité fonctionnelle des ROI qui ont été examinées dans de nombreuses études antérieures dans le domaine de la recherche neuroscientifique. Notre thèse tente d'apporter des informations précieuses pour les études futures sur le décodage neuronal, la reconstruction visuelle et l'exploration neuroscientifique à l'aide de modèles d'apprentissage profond en fournissant les résultats de deux bases théoriques de reconstruction visuelle et d'analyses de ROI. Les résultats et les contributions de la thèse peuvent aider les chercheurs travaillant dans le domaine des neurosciences cognitives et avoir des implications pour les applications d'interface cerveau-ordinateur
The great minds of humanity were always curious about the nature of mind, brain, and consciousness. Through physical and thought experiments, they tried to tackle challenging questions about visual perception. As neuroimaging techniques were developed, neural encoding and decoding techniques provided a profound understanding of how we process visual information. Advancements in Artificial Intelligence and Deep Learning have also influenced neuroscientific research. With the emergence of deep generative models like Variational Autoencoders (VAE), Generative Adversarial Networks (GAN) and Latent Diffusion Models (LDM), researchers also used these models in neural decoding tasks such as visual reconstruction of perceived stimuli from neuroimaging data. The current thesis provides two frameworks in the above-mentioned area of reconstructing perceived stimuli from neuroimaging data, particularly fMRI data, using deep generative models. These frameworks focus on different aspects of the visual reconstruction task than their predecessors, and hence they may bring valuable outcomes for the studies that will follow. The first study of the thesis (described in Chapter 2) utilizes a particular generative model called IC-GAN to capture both semantic and realistic aspects of the visual reconstruction. The second study (described in Chapter 3) brings a new perspective on visual reconstruction by fusing decoded information from different modalities (e.g. text and image) using recent latent diffusion models. These studies achieve state-of-the-art results on their benchmarks by exhibiting high-fidelity reconstructions of different attributes of the stimuli. In both of our studies, we propose region-of-interest (ROI) analyses to understand the functional properties of specific visual regions using our neural decoding models. Statistical relations between ROIs and decoded latent features show that while early visual areas carry more information about low-level features (which focus on layout and orientation of objects), higher visual areas are more informative about high-level semantic features. We also observed that ROI-optimal images generated with these visual reconstruction frameworks are able to capture the functional selectivity properties of the ROIs that have been examined in many prior studies in neuroscientific research. Our thesis attempts to bring valuable insights for future studies in neural decoding, visual reconstruction, and neuroscientific exploration using deep learning models by providing the results of two visual reconstruction frameworks and ROI analyses. The findings and contributions of the thesis may help researchers working in cognitive neuroscience and have implications for brain-computer-interface applications.
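The decoding stage common to both frameworks, mapping fMRI voxel patterns to the latent features of a generative model, is typically a regularized linear regression. A minimal sketch with synthetic data standing in for voxel responses and latent vectors; the actual frameworks decode into IC-GAN and latent-diffusion feature spaces:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels, n_latents = 600, 1000, 64

# Synthetic stand-ins: latents of seen images and noisy voxel responses.
Z = rng.normal(size=(n_trials, n_latents))
W = rng.normal(size=(n_latents, n_voxels)) / np.sqrt(n_latents)
X = Z @ W + 0.5 * rng.normal(size=(n_trials, n_voxels))    # fMRI-like data

X_tr, X_te, Z_tr, Z_te = train_test_split(X, Z, random_state=0)
decoder = Ridge(alpha=100.0).fit(X_tr, Z_tr)               # voxels -> latents
Z_hat = decoder.predict(X_te)                              # fed to the generator
print("decoding r:", np.corrcoef(Z_hat.ravel(), Z_te.ravel())[0, 1])
```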
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Lutz, Christian. „Analyse, stratégie thérapeutique et innovations technologiques lors de la stabilisation rotatoire du genou dans les reconstructions du ligament croisé antérieur“. Electronic Thesis or Diss., Strasbourg, 2024. http://www.theses.fr/2024STRAJ009.

Der volle Inhalt der Quelle
Annotation:
Le contrôle du ressaut rotatoire induit par la rupture du ligament croisé antérieur est un enjeu majeur de la chirurgie ligamentaire du genou. L’association d’une ténodèse latérale à la reconstruction du ligament croisé antérieur améliore ce contrôle comparativement à une plastie intra-articulaire isolée. Pour autant, l’utilisation de ces ténodèses ne fait pas l’unanimité au sein de la communauté orthopédique. Leur intérêt a été à l’origine de ce projet de recherche anatomique, biomécanique et clinique. Au niveau anatomique et biomécanique, le contrôle rotatoire du genou est assuré par le ligament croisé antérieur et le ligament antéro-latéral. Au niveau technique, la réalisation de ténodèses latérales doit respecter des critères précis pour restituer la fonction du ligament antéro-latéral via le concept d’anisométrie favorable. Au niveau clinique, le contrôle du ressaut est amélioré par cette plastie latérale additionnelle. Cette association de plasties ligamentaires a rendu la chirurgie plus complexe et ouvert la voie à un autre projet de recherche sur l’utilisation de technologies innovantes pour améliorer la précision et la personnalisation du geste chirurgical
Treatment of the rotational instability induced by rupture of the anterior cruciate ligament is a major challenge in knee ligament surgery. Combining lateral tenodesis with anterior cruciate ligament reconstruction improves this control compared to isolated intra-articular plasty. However, the orthopaedic community is not unanimous about the use of lateral tenodesis. Interest in these tenodeses gave rise to this anatomical, biomechanical and clinical research project. Anatomically and biomechanically, rotational control of the knee is ensured by the anterior cruciate ligament and the anterolateral ligament. Technically, lateral tenodesis must respect precise criteria to restore the function of the anterolateral ligament, via the concept of favorable anisometry. Clinically, this additional lateral plasty enhances rotational stability. This combination of ligament reconstructions has increased the complexity of surgical procedures and spurred further research into using innovative technologies to improve accuracy and make surgery more personalized
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Zinsou, Omer. „Etude et mise en oeuvre d'un modeleur surfacique d'objets tridimensionnels : intégration dans une base de données relationnelle“. Compiègne, 1988. http://www.theses.fr/1988COMPD135.

Der volle Inhalt der Quelle
Annotation:
This work proposes a global, interactive system, called a surface modeler, that approaches the modeling of complex objects in a more guided and functional way, in contrast to most conventional systems, which are sequential and automatic. A set of contours (planar or non-planar, open or closed, parallel or not) and B-spline theory are used for the interactive creation of new shapes and the interactive reconstruction of existing ones.
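A sketch of the B-spline machinery such a surface modeler rests on, shown for a single cubic B-spline contour via SciPy; the modeler itself builds surfaces from sets of such contours, and the control points below are hypothetical:

```python
import numpy as np
from scipy.interpolate import BSpline

ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.5], [4.0, 0.5], [5.0, 1.5]])
k = 3                                          # cubic
# Clamped knot vector: the curve starts and ends at the end control points.
knots = np.concatenate(([0] * k, np.linspace(0, 1, len(ctrl) - k + 1), [1] * k))
curve = BSpline(knots, ctrl, k)

t = np.linspace(0, 1, 9)
print(curve(t))    # points along the contour, ready for surface lofting
```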
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Nogueira, Sergio. „Localisation de mobiles par construction de modèles en 3D en utilisant la stéréovision“. Phd thesis, Université de Technologie de Belfort-Montbeliard, 2009. http://tel.archives-ouvertes.fr/tel-00596948.

Der volle Inhalt der Quelle
Annotation:
The work presented in this thesis contributes to localization systems for a mobile robot using stereovision. This work is part of a collaboration between LORIA-INRIA in Nancy and the SeT laboratory at UTBM. The proposed approach is split into two stages. The first stage is a learning phase that builds a 3D model of the navigation environment. The second stage is devoted to localizing the vehicle with respect to the 3D model. The goal of the learning phase is to build a three-dimensional model from interest points that can be matched under various geometric constraints (translation, rotation, scale change) and/or illumination changes. To meet all these constraints, we use the SIFT (Scale Invariant Feature Transform) method, which allows matching between distant views. These interest points are described by numerous attributes that make them very useful features for robust localization. After matching these points, a three-dimensional model is built using an incremental method. Positions are adjusted in order to discard possible drift. The localization phase consists of determining the position of the vehicle with respect to the 3D model representing the navigation environment. It matches the 3D points reconstructed from one pose of the stereoscopic sensor with the 3D points of the model. This matching is performed via the interest points extracted with the SIFT method. The proposed approach was evaluated using a simulation platform that simulates a stereoscopic sensor mounted on a vehicle navigating a virtual 3D environment. In addition, the localization system was tested on the instrumented vehicle of the SeT laboratory to evaluate its performance under real operating conditions.
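A sketch of the SIFT matching step on which both the model-building and the localization phases rely, using OpenCV with the usual Lowe ratio test; the image paths are placeholders:

```python
import cv2

def sift_matches(img1_path, img2_path, ratio=0.75):
    """Match SIFT keypoints between two views with Lowe's ratio test."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    pts1 = [kp1[m.queryIdx].pt for m in good]   # matched pixel coordinates,
    pts2 = [kp2[m.trainIdx].pt for m in good]   # ready for pose estimation
    return pts1, pts2

pts1, pts2 = sift_matches("left_view.png", "right_view.png")  # placeholder files
print(len(pts1), "matches")
```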
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Fleute, Markus. „Shape reconstruction for computer assisted surgery based on non-rigid registration of statistical models with intra-operative point data and X-ray images“. Université Joseph Fourier (Grenoble), 2001. http://tel.archives-ouvertes.fr/tel-00005365.

Der volle Inhalt der Quelle
Annotation:
L'objectif de cette thèse est la reconstruction de surfaces anatomiques à partir d'un nombre restreint de radiographies et de points acquis en phase per-opératoire. L'approche proposée repose sur une mise en correspondance des données avec un modèle déformable statistique afin d'incorporer de la connaissance a priori sur la forme de l'objet à reconstruire. L'élaboration d'un tel modèle statistique nécessite l'analyse de forme dans une population donnée. Pour cette analyse un modèle générique de l'objet est utilisé afin d'effectuer simultanément la segmentation des structures et la mise en correspondance de points appariés dans un ensemble d'examens tomodensitométriques. La reconstruction à partir d'un nuage de points est effectuée par une méthode de recalage 3D/3D non rigide. L'application de cette technique d'interpolation et d'extrapolation de données incomplètes est montrée dans un système pour la reconstruction du ligament croisé antérieur. Pour la reconstruction à partir de radiographies une méthode de recalage 3D/2D non rigide est proposée afin de mettre en correspondance le modèle statistique avec les contours de l'objet segmenté dans les radiographies calibrées. Des expérimentations ont été effectuées avec un modèle statistique de vertèbres lombaires, en vue de l'application clinique du vissage pédiculaire. De plus il est montré que la mise en correspondance hybride combinant le recalage 3D/3D et le recalage 3D/2D pourrait être une option intéressante pour certaines applications dans le domaine des Gestes Médicaux Chirurgicaux Assistés par Ordinateur
This thesis addresses the problem of reconstructing 3D anatomical surfaces based on intra-operatively acquired sparse scattered point data and a few calibrated X-ray images. The approach consists in matching the data with a statistical deformable shape model, thus incorporating a priori knowledge into the reconstruction process (...). It is further shown that hybrid matching combining both 3D/3D and 3D/2D registration might be an interesting option for certain Computer Assisted Surgery applications
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Figueroa, Teodora Pinheiro. „Estudo sobre a viabilidade da tomografia eletromagnética na medição do perfil de velocidades de escoamentos monofásicos em dutos“. Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/18/18135/tde-20122006-103316/.

Der volle Inhalt der Quelle
Annotation:
Este trabalho apresenta um estudo prospectivo referente ao desenvolvimento de um medidor eletromagnético inteligente de vazão, cuja finalidade é determinar a vazão de escoamento a partir da reconstrução do perfil de velocidade utilizando técnicas tomográficas. Em conseqüência disso, o medidor de vazão será capaz de corrigir a vazão dada, através da integração do perfil de velocidade correto reconstruído por tomografia. A técnica de reconstrução tomográfica utilizada é baseada na construção de um funcional de erro, gerado a partir da diferença entre voltagens simuladas numericamente para uma condição experimental, conhecidos os parâmetros determinantes da velocidade no interior da tubulação, e voltagens aproximadas simuladas numericamente para aproximações destes parâmetros. Neste trabalho, o modelo físico do medidor eletromagnético de vazão é baseado em um número de eletrodos colocados sobre as paredes do tubo sob uma estratégia de excitação específica, sem injeção de corrente, considerando o campo magnético uniforme. A partir da expansão do funcional de erro, sobre um conjunto de funções conhecidas, uma superfície de erro é gerada. As características da patologia desta superfície requerem outros tipos de técnicas de otimização. Técnicas tradicionais de otimização não são viáveis, pois o processo de busca pára no primeiro mínimo local encontrado. Essa convergência para mínimos locais é justificada devido à presença de regiões planas e vales apresentando vários mínimos locais circundando o ponto de mínimo global (ou ponto referente aos parâmetros ótimos da velocidade). Em vista da ocorrência deste fato, técnicas baseadas em algoritmos evolucionários são testadas e apresentadas para uma série de casos demonstrando a praticidade de nossa pesquisa.
This work presents a prospective study on the development of an intelligent electromagnetic flow meter intended to determine the flow rate from the reconstruction of the velocity profile using tomographic techniques. As a result, the flow meter will be able to correct the flow-rate measurement by integrating the correct velocity profile reconstructed by tomography. The tomographic reconstruction technique used is based on the definition of an error functional generated from the difference between voltages simulated numerically for an experimental condition, the parameters defining the velocity inside the pipe being known, and approximate voltages simulated numerically for approximations of these parameters. In this work the physical model of the electromagnetic flow meter is based on a number of electrodes flush-mounted on the pipe walls under a specific excitation strategy, without current injection and with the magnetic field assumed uniform. From the expansion of the error functional over a set of known functions, an error surface is generated. The pathological characteristics of this surface call for other types of optimization techniques. Traditional optimization techniques are not viable since the search stops at the first local minimum found. This convergence to local minima is explained by the presence of flat regions and valleys with several local minima surrounding the global minimum point (the point corresponding to the optimum velocity parameters). In view of this, techniques based on evolutionary algorithms are tested and presented for a series of cases, demonstrating the usefulness of our research.
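A sketch of the optimization strategy described above: an error functional between measured and simulated electrode voltages, minimized with an evolutionary algorithm. The forward model below is a deliberately simple stand-in for the real electromagnetic simulation, and all parameters are hypothetical:

```python
import numpy as np
from scipy.optimize import differential_evolution

angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)   # 16 wall electrodes

def forward(params):
    """Toy forward model: electrode voltages induced by a velocity profile
    parameterized by (mean velocity, asymmetry)."""
    v_mean, asym = params
    return v_mean * (1.0 + asym * np.cos(angles))

true_params = (2.0, 0.3)
measured = forward(true_params) + 0.01 * np.random.default_rng(1).normal(size=16)

def error_functional(params):
    return np.sum((forward(params) - measured) ** 2)

result = differential_evolution(error_functional,
                                bounds=[(0.1, 5.0), (-1.0, 1.0)], seed=0)
print(result.x)    # close to (2.0, 0.3) despite the rugged error surface
```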
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Wang, Chen. „Large-scale 3D environmental modelling and visualisation for flood hazard warning“. Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/3350.

Der volle Inhalt der Quelle
Annotation:
3D environment reconstruction has received great interest in recent years in areas such as city planning, virtual tourism and flood hazard warning. With the rapid development of computer technologies, it has become possible and necessary to develop new methodologies and techniques for real-time simulation in virtual environment applications. This thesis proposes a novel dynamic simulation scheme for flood hazard warning. The work consists of three main parts: digital terrain modelling; 3D environmental reconstruction and system development; and flood simulation models. The digital terrain model is constructed using real-world GIS measurement data, in terms of digital elevation data and satellite image data. An NTSP algorithm is proposed for assessing very large data sets, terrain modelling and visualisation. A pyramidal data arrangement structure is used for dealing with the requirements of terrain details at different resolutions. The 3D environmental reconstruction system is made up of environmental image segmentation for object identification, a new shape match method and an intelligent reconstruction system. The active contours-based multi-resolution vector-valued framework and the multi-seed region growing method are both used for extracting necessary objects from images. The shape match method is used with a template in the spatial domain for detailed small-scale 3D urban environment reconstruction. The intelligent reconstruction system is designed to recreate the whole model based on specific features of objects for large-scale environment reconstruction. This study then proposes a new flood simulation scheme, which is an important application of the 3D environmental reconstruction system. Two new flooding models have been developed. The first one is a flood-spreading model, which is useful for large-scale flood simulation. It consists of flooding image spatial segmentation, a water level calculation process, a standard gradient descent method for energy minimization, a flood region search and a merge process. The second is a finite-volume hydrodynamic model built from the shallow water equations, which is useful for urban-area flood simulation. The proposed 3D urban environment reconstruction system was tested on our simulation platform. The experiment results indicate that this method is capable of dealing with complicated and high-resolution region reconstruction, which is useful for many applications. When testing the 3D flood simulation system, the simulation results are very close to the real flood situation, and the method simulates the inundation area faster and more accurately than conventional flood simulation models
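A sketch of the flood-spreading idea in its simplest form: given a terrain elevation grid and a water level, grow the inundated region from a source cell over all connected cells lying below that level. The thesis couples this with image segmentation and an energy-minimization step:

```python
import numpy as np
from collections import deque

def flood_region(dem, source, water_level):
    """4-connected region growing: cells reachable from `source`
    whose ground elevation lies below `water_level`."""
    flooded = np.zeros(dem.shape, dtype=bool)
    queue = deque([source])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]):
            continue
        if flooded[r, c] or dem[r, c] >= water_level:
            continue
        flooded[r, c] = True
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return flooded

dem = np.array([[3, 3, 4, 5, 6],     # hypothetical elevation grid (metres)
                [2, 2, 3, 5, 6],
                [1, 1, 2, 4, 6],
                [2, 2, 3, 5, 6],
                [3, 3, 4, 5, 6]], float)
print(flood_region(dem, source=(2, 0), water_level=2.5).astype(int))
```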
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Goulart, José Henrique De Morais. „Estimation de modèles tensoriels structurés et récupération de tenseurs de rang faible“. Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4147/document.

Der volle Inhalt der Quelle
Annotation:
Dans la première partie de cette thèse, on formule deux méthodes pour le calcul d'une décomposition polyadique canonique avec facteurs matriciels linéairement structurés (tels que des facteurs de Toeplitz ou en bande): un algorithme de moindres carrés alternés contraint (CALS) et une solution algébrique dans le cas où tous les facteurs sont circulants. Des versions exacte et approchée de la première méthode sont étudiées. La deuxième méthode fait appel à la transformée de Fourier multidimensionnelle du tenseur considéré, ce qui conduit à la résolution d'un système d'équations monomiales homogènes. Nos simulations montrent que la combinaison de ces approches fournit un estimateur statistiquement efficace, ce qui reste vrai pour d'autres combinaisons de CALS dans des scénarios impliquant des facteurs non-circulants. La seconde partie de la thèse porte sur la récupération de tenseurs de rang faible et, en particulier, sur le problème de reconstruction tensorielle (TC). On propose un algorithme efficace, noté SeMPIHT, qui emploie des projections séquentiellement optimales par mode comme opérateur de seuillage dur. Une borne de performance est dérivée sous des conditions d'isométrie restreinte habituelles, ce qui fournit des bornes d'échantillonnage sous-optimales. Cependant, nos simulations suggèrent que SeMPIHT obéit à des bornes optimales pour des mesures Gaussiennes. Des heuristiques de sélection du pas et d'augmentation graduelle du rang sont aussi élaborées dans le but d'améliorer sa performance. On propose aussi un schéma d'imputation pour TC basé sur un seuillage doux du coeur du modèle de Tucker et son utilité est illustrée avec des données réelles de trafic routier
In the first part of this thesis, we formulate two methods for computing a canonical polyadic decomposition having linearly structured matrix factors (such as, e.g., Toeplitz or banded factors): a general constrained alternating least squares (CALS) algorithm and an algebraic solution for the case where all factors are circulant. Exact and approximate versions of the former method are studied. The latter method relies on a multidimensional discrete-time Fourier transform of the target tensor, which leads to a system of homogeneous monomial equations whose resolution provides the desired circulant factors. Our simulations show that combining these approaches yields a statistically efficient estimator, which is also true for other combinations of CALS in scenarios involving non-circulant factors. The second part of the thesis concerns low-rank tensor recovery (LRTR) and, in particular, the tensor completion (TC) problem. We propose an efficient algorithm, called SeMPIHT, employing sequentially optimal modal projections as its hard thresholding operator. Then, a performance bound is derived under usual restricted isometry conditions, which however yield suboptimal sampling bounds. Yet, our simulations suggest SeMPIHT obeys optimal sampling bounds for Gaussian measurements. Step size selection and gradual rank increase heuristics are also elaborated in order to improve performance. We also devise an imputation scheme for TC based on soft thresholding of a Tucker model core and illustrate its utility in completing real-world road traffic data acquired by an intelligent transportation system
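A sketch of the hard-thresholding principle behind SeMPIHT, shown in the simpler matrix-completion setting: a gradient step on the observed entries followed by projection onto rank-r matrices via truncated SVD. SeMPIHT itself replaces this projection with sequentially optimal modal projections of a tensor:

```python
import numpy as np

def iht_complete(Y, mask, rank, iters=200, step=1.0):
    """Iterative hard thresholding for matrix completion.
    Y holds observed values (zero elsewhere); mask is True where observed."""
    X = np.zeros_like(Y)
    for _ in range(iters):
        G = mask * (Y - X)                         # gradient of the data term
        U, s, Vt = np.linalg.svd(X + step * G, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r projection
    return X

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 30))   # rank-4 ground truth
mask = rng.random(A.shape) < 0.5                          # observe half the entries
X = iht_complete(mask * A, mask, rank=4)
err = np.linalg.norm((~mask) * (X - A)) / np.linalg.norm((~mask) * A)
print("relative error on unseen entries:", err)
```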
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Giraldo, Zuluaga Jhony Heriberto. „Graph-based Algorithms in Computer Vision, Machine Learning, and Signal Processing“. Electronic Thesis or Diss., La Rochelle, 2022. http://www.theses.fr/2022LAROS037.

Der volle Inhalt der Quelle
Annotation:
L'apprentissage de la représentation graphique et ses applications ont suscité une attention considérable ces dernières années. En particulier, les Réseaux Neuronaux Graphiques (RNG) et le Traitement du Signal Graphique (TSG) ont été largement étudiés. Les RNGs étendent les concepts des réseaux neuronaux convolutionnels aux données non euclidiennes modélisées sous forme de graphes. De même, le TSG étend les concepts du traitement classique des signaux numériques aux signaux supportés par des graphes. Les RNGs et TSG ont de nombreuses applications telles que l'apprentissage semi-supervisé, la segmentation sémantique de nuages de points, la prédiction de relations individuelles dans les réseaux sociaux, la modélisation de protéines pour la découverte de médicaments, le traitement d'images et de vidéos. Dans cette thèse, nous proposons de nouvelles approches pour le traitement des images et des vidéos, les RNGs, et la récupération des signaux de graphes variant dans le temps. Notre principale motivation est d'utiliser l'information géométrique que nous pouvons capturer à partir des données pour éviter les méthodes avides de données, c'est-à-dire l'apprentissage avec une supervision minimale. Toutes nos contributions s'appuient fortement sur les développements de la TSG et de la théorie spectrale des graphes. En particulier, la théorie de l'échantillonnage et de la reconstruction des signaux de graphes joue un rôle central dans cette thèse. Les principales contributions de cette thèse sont résumées comme suit : 1) nous proposons de nouveaux algorithmes pour la segmentation d'objets en mouvement en utilisant les concepts de la TSG et des RNGs, 2) nous proposons un nouvel algorithme pour la segmentation sémantique faiblement supervisée en utilisant des réseaux de neurones hypergraphiques, 3) nous proposons et analysons les RNGs en utilisant les concepts de la TSG et de la théorie des graphes spectraux, et 4) nous introduisons un nouvel algorithme basé sur l'extension d'une fonction de lissage de Sobolev pour la reconstruction de signaux graphiques variant dans le temps à partir d'échantillons discrets
Graph representation learning and its applications have gained significant attention in recent years. Notably, Graph Neural Networks (GNNs) and Graph Signal Processing (GSP) have been extensively studied. GNNs extend the concepts of convolutional neural networks to non-Euclidean data modeled as graphs. Similarly, GSP extends the concepts of classical digital signal processing to signals supported on graphs. GNNs and GSP have numerous applications such as semi-supervised learning, point cloud semantic segmentation, prediction of individual relations in social networks, modeling proteins for drug discovery, image, and video processing. In this thesis, we propose novel approaches in video and image processing, GNNs, and recovery of time-varying graph signals. Our main motivation is to use the geometrical information that we can capture from the data to avoid data hungry methods, i.e., learning with minimal supervision. All our contributions rely heavily on the developments of GSP and spectral graph theory. In particular, the sampling and reconstruction theory of graph signals play a central role in this thesis. The main contributions of this thesis are summarized as follows: 1) we propose new algorithms for moving object segmentation using concepts of GSP and GNNs, 2) we propose a new algorithm for weakly-supervised semantic segmentation using hypergraph neural networks, 3) we propose and analyze GNNs using concepts from GSP and spectral graph theory, and 4) we introduce a novel algorithm based on the extension of a Sobolev smoothness function for the reconstruction of time-varying graph signals from discrete samples
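The Sobolev-based reconstruction of contribution 4 can be sketched in its static form: recover a graph signal from a few samples by solving min_x ||x_S - y||^2 + alpha * x^T (L + eps*I)^beta x, which has a closed-form solution. The graph, signal and hyperparameters below are toy choices, and the thesis extends this to time-varying signals:

```python
import numpy as np

def sobolev_reconstruct(L, sampled, y, alpha=1e-2, eps=0.1, beta=1):
    """Solve min_x ||x[sampled] - y||^2 + alpha * x^T (L + eps*I)^beta x."""
    n = L.shape[0]
    S = np.zeros((n, n))
    S[sampled, sampled] = 1.0                    # diagonal sampling operator
    Lb = np.linalg.matrix_power(L + eps * np.eye(n), beta)
    rhs = np.zeros(n)
    rhs[sampled] = y
    return np.linalg.solve(S + alpha * Lb, rhs)  # normal equations

# Path graph on 20 nodes carrying a slowly varying signal.
n = 20
A = np.diag(np.ones(n - 1), 1); A = A + A.T
L = np.diag(A.sum(1)) - A                        # combinatorial Laplacian
x_true = np.sin(np.linspace(0, np.pi, n))
sampled = np.arange(0, n, 4)                     # observe every 4th node
x_hat = sobolev_reconstruct(L, sampled, x_true[sampled])
print(np.abs(x_hat - x_true).max())
```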
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Nagel, Kristine Susanne. „Using Availability Indicators to Enhance Context-Aware Family Communication Applications“. Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11547.

Der volle Inhalt der Quelle
Annotation:
Family conversation between homes is difficult to initiate at mutually agreeable times as neither participant has exact knowledge of the other's activities or intentions. Whether calling to plan an important family gathering or simply to connect with family members, the question is: Is now a good time to call? People expect friends and family to learn their activity patterns and to minimize interruptions when calling. Can technology provide awareness cues to the caller, even prior to the initiation of the call? This research focuses on sampling the everyday activities of home life to determine environmental factors, which may serve as an indicator for availability. These external factors may be effective for identifying household routines of availability and useful in determining when to initiate conversation across homes. Several workplace studies have shown a person's interruptibility can be reliably assessed and modeled from specific environmental cues; this work looks for similar predictive power in the home. Copresence, location, and activity in the home were investigated as correlates to availability and for their effectiveness within the social protocol of family conversation. These studies indicate there are activities that can be sensed, either in real-time or over some time span, that correlate to self-reported availability. However, the type and amount of information shared is dependent upon individual preferences, social accessibility, and patterns of activities. This research shows friends and family can improve their predictions of when to call if provided additional context, and suggests that abstract representations of either routines or explicit availability status is sufficient and may be preferred by providers. Availability prediction is feasible in the home and useful to those outside the home, but the level of detail to provide in particular situations needs further study. This work has implications for the development of groupware systems, the automatic sensing of activity to deal with interruption, and activity recognition in the home.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Fan, Mingdong. „THREE INITIATIVES ADDRESSING MRI PROBLEMS“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1585863940821908.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Charvát, Michal. „System for People Detection and Localization Using Thermal Imaging Cameras“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-432478.

Der volle Inhalt der Quelle
Annotation:
In today's world there is an ever-increasing demand for reliable automated mechanisms for detecting and localizing people for various purposes -- from analyzing visitor movement in museums, through smart home control, to guarding dangerous areas such as railway station platforms. We present a method for detecting and localizing people using low-cost FLIR Lepton 3.5 thermal cameras and small Raspberry Pi 3B+ computers. This project, which follows up on the earlier bachelor's project "Detekce lidí v místnosti za použití nízkonákladové termální kamery" (Detection of people in a room using a low-cost thermal camera), newly supports modeling complex scenes with polygonal boundaries and multiple thermal cameras. In this work we present an improved control and capture library for the Lepton 3.5 camera; a new person-detection technique using the state-of-the-art YOLO (You Only Look Once) real-time object detector based on deep neural networks; a new automatically configurable thermal unit protected by a 3D-printed enclosure for safe handling; and, last but not least, detailed instructions for installing the detection system in a new environment together with further supporting tools and improvements. We demonstrate the results of the new system on an example analysis of people's movement in the National Museum in Prague.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Zhang, Zhongfei. „Three-dimensional reconstruction under varying constraints on camera geometry for robotic navigation scenarios“. 1996. https://scholarworks.umass.edu/dissertations/AAI9619460.

Der volle Inhalt der Quelle
Annotation:
3D reconstruction is an important research area in computer vision. Given the wide spectrum of camera geometry constraints, a general solution remains open. In this dissertation, the topic of 3D reconstruction is addressed under several special constraints on camera geometry, and the 3D reconstruction techniques developed under these constraints have been applied to a robotic navigation scenario. The robotic navigation problems addressed include automatic camera calibration, visual servoing for navigation control, obstacle detection, and 3D model acquisition and extension. The problem of visual servoing control is investigated under the assumption of a structured environment where parallel path boundaries exist. A visual servoing control algorithm has been developed based on geometric variables extracted from this structured environment. This algorithm has been used for both automatic camera calibration and navigation servoing control. Close to real-time performance is achieved. The problem of qualitative and quantitative obstacle detection is addressed with a proposal of three algorithms. The first two are purely qualitative in the sense that they only return yes/no answers. The third is quantitative in that it recovers height information for all the points in the scene. Three different constraints on camera geometry are employed. The first algorithm assumes known relative pose between cameras; the second algorithm is based on completely unknown camera relative pose; the third algorithm assumes partial calibration. Experimental results are presented for real and simulated data, and the performance of the three algorithms under different noise levels is compared in simulation. Finally, the problem of model acquisition and extension is studied by proposing a 3D reconstruction algorithm using homography mapping. It is shown that given four coplanar correspondences, 3D structures can be recovered up to two solutions and with only one uniform scale factor, which is the distance from the camera center to the 3D plane formed by the four 3D points corresponding to the given four correspondences in the two camera planes. It is also shown that this algorithm is optimal in terms of the minimum number of required correspondences and in terms of the assumption of internal calibration.
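The homography mapping at the core of the model-acquisition algorithm can be estimated from four coplanar correspondences with the Direct Linear Transform. A minimal sketch with hypothetical correspondences; the dissertation then decomposes the homography to recover structure up to the stated two-fold ambiguity and uniform scale:

```python
import numpy as np

def homography_dlt(pts1, pts2):
    """Direct Linear Transform: H with x2 ~ H x1, from >= 4 correspondences."""
    rows = []
    for (x, y), (u, v) in zip(pts1, pts2):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, float))
    H = Vt[-1].reshape(3, 3)      # null vector = stacked entries of H
    return H / H[2, 2]

# Four hypothetical coplanar points seen in two views.
pts1 = [(0, 0), (1, 0), (1, 1), (0, 1)]
pts2 = [(10, 12), (52, 15), (50, 58), (8, 55)]
H = homography_dlt(pts1, pts2)
p = H @ np.array([1.0, 0.0, 1.0])
print(p[:2] / p[2])               # maps back to ~ (52, 15)
```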
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Lu, Cheng-Chung, und 呂正中. „A Study of Intelligent Resource Allocating Decision Model of Recovery Planning in a Disaster Reconstruction“. Thesis, 2006. http://ndltd.ncl.edu.tw/handle/42387641116555572946.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
Fu Jen Catholic University
Department of Information Management
94 (ROC academic year, i.e. 2005/06)
Because disasters now occur on short cycles and are increasingly unpredictable, preventive mechanisms alone are insufficient, and reconstruction and recovery systems must carry much of the burden. This study therefore focuses on recovery planning for the different areas affected by a disaster. Combining information technology with quantitative methods, it develops a decision model that helps decision-makers rank disaster areas by severity and allocate an appropriate quantity of resources from providers to disaster areas when resources are limited and spatially scattered. Moreover, the recovery plan is treated as a long-term, interactive decision-making process rather than a one-time decision. Critical factors identified in the literature are used to construct an analytic hierarchy process (AHP), a multiple-criteria decision-making method that helps decision-makers judge the severity of, and obtain a weight for, each disaster area. A prototype system supplies several kinds of data, such as the resource demand of each disaster area, the resource supply of each provider, and the distribution and routing time from sources to destinations. A multi-objective recovery and allocation decision model is then solved, using these parameters and the optimization software Lingo 9.0, to allocate recovery resources effectively and optimally during disaster reconstruction. An evolutionary prototyping method is further used to develop a web-based prototype system for managerial analysis. Finally, several experimental designs and simulation settings are implemented to derive management implications and guidelines. What-if analysis based on the model can improve decision quality and shorten decision-making time.
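The AHP weighting step mentioned in the abstract follows a standard recipe: build a pairwise severity-comparison matrix, take its principal eigenvector as the weight vector, and check consistency. A minimal sketch with an invented 3-area matrix, not the study's survey data:

```python
# Sketch of the AHP weighting step: derive priority weights for
# disaster areas from a pairwise severity-comparison matrix.
# The 3x3 matrix below is an invented example, not the study's data.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],   # area 1 compared with areas 1..3
    [1/3, 1.0, 2.0],   # area 2
    [1/5, 1/2, 1.0],   # area 3
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # index of principal eigenvalue
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()                  # normalized priority weights

# Consistency check (random index RI = 0.58 for n = 3).
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
CR = CI / 0.58
print(weights, CR)  # CR < 0.1 indicates acceptable consistency
```

The resulting weights would then enter the multi-objective allocation model as severity coefficients for each disaster area.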
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Wang, Chien-Min, und 王建民. „The Research of Intelligent Information Integration of Creative Concepts:The Reconstruction of Elegance for Ancient Taijiang“. Thesis, 2012. http://ndltd.ncl.edu.tw/handle/75866051038735733472.

Der volle Inhalt der Quelle
Annotation:
碩士
南榮技術學院
工程科技研究所碩士在職專班
100
In the 21st century, with information easily accessible over the Internet, the government is actively using information technology to promote the features of local areas and to further improve the environment and its human dimension. Recently a programme named the "I236 life technology plan" was proposed, in the hope that its implementation will improve people's quality of life. Through applications of information technology that combine hardware and software, the features of each local area can be presented to anyone interested in learning more about them. The scope of this research is the re-creation of the picture of ancient Taijiang. Although the government designated the Sicao Wildlife Refuge as Taijiang National Park some years ago in an effort to preserve its historical landscapes, characteristic places such as Anping Castle and Fort Provintia (Sakam Tower) could not be included in the park. Taijiang is the place where people who migrated from mainland China, crossing the dangerous "black ditch", settled and prospered during the periods of Dutch occupation, Koxinga, and the Qing dynasty. The purpose of this research is to overcome the obstacles of present-day reality through the intelligent integration of creative concepts, to achieve a balance between the present and history, and thereby to present the whole picture of ancient Taijiang. Looking back chronologically through the Dutch occupation, Koxinga, and Qing dynasty periods, we identified four characteristic areas, each established against its humanities background, with which to build up the picture of ancient Taijiang: Anping Castle, the trading centre during the Dutch occupation; Fort Provintia, the city hall under Koxinga; the Luerhmen house, the transportation centre of the early Qing dynasty; and the Eternal Golden Castle, the military centre of the late Qing dynasty. Using a combination of digital information technologies such as a creative eBook, movie clips, and on-site photo exploration, we integrate the parts missing from the present Taijiang National Park into a complete picture of ancient Taijiang. We hope that through this work people will come to know more about ancient Taijiang.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

„A lagrangian reconstruction of a class of local search methods“. 1998. http://library.cuhk.edu.hk/record=b5889537.

Der volle Inhalt der Quelle
Annotation:
by Choi Mo Fung Kenneth.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1998.
Includes bibliographical references (leaves 105-112).
Abstract also in Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Constraint Satisfaction Problems --- p.2
Chapter 1.2 --- Constraint Satisfaction Techniques --- p.2
Chapter 1.3 --- Motivation of the Research --- p.4
Chapter 1.4 --- Overview of the Thesis --- p.5
Chapter 2 --- Related Work --- p.7
Chapter 2.1 --- Min-conflicts Heuristic --- p.7
Chapter 2.2 --- GSAT --- p.8
Chapter 2.3 --- Breakout Method --- p.8
Chapter 2.4 --- GENET --- p.9
Chapter 2.5 --- E-GENET --- p.9
Chapter 2.6 --- DLM --- p.10
Chapter 2.7 --- Simulated Annealing --- p.11
Chapter 2.8 --- Genetic Algorithms --- p.12
Chapter 2.9 --- Tabu Search --- p.12
Chapter 2.10 --- Integer Programming --- p.13
Chapter 3 --- Background --- p.15
Chapter 3.1 --- GENET --- p.15
Chapter 3.1.1 --- Network Architecture --- p.15
Chapter 3.1.2 --- Convergence Procedure --- p.18
Chapter 3.2 --- Classical Optimization --- p.22
Chapter 3.2.1 --- Optimization Problems --- p.22
Chapter 3.2.2 --- The Lagrange Multiplier Method --- p.23
Chapter 3.2.3 --- Saddle Point of Lagrangian Function --- p.25
Chapter 4 --- Binary CSP's as Zero-One Integer Constrained Minimization Problems --- p.27
Chapter 4.1 --- From CSP to SAT --- p.27
Chapter 4.2 --- From SAT to Zero-One Integer Constrained Minimization --- p.29
Chapter 5 --- A Continuous Lagrangian Approach for Solving Binary CSP's --- p.33
Chapter 5.1 --- From Integer Problems to Real Problems --- p.33
Chapter 5.2 --- The Lagrange Multiplier Method --- p.36
Chapter 5.3 --- Experiment --- p.37
Chapter 6 --- A Discrete Lagrangian Approach for Solving Binary CSP's --- p.39
Chapter 6.1 --- The Discrete Lagrange Multiplier Method --- p.39
Chapter 6.2 --- Parameters of CSVC --- p.43
Chapter 6.2.1 --- Objective Function --- p.43
Chapter 6.2.2 --- Discrete Gradient Operator --- p.44
Chapter 6.2.3 --- Integer Variables Initialization --- p.45
Chapter 6.2.4 --- Lagrange Multipliers Initialization --- p.46
Chapter 6.2.5 --- Condition for Updating Lagrange Multipliers --- p.46
Chapter 6.3 --- A Lagrangian Reconstruction of GENET --- p.46
Chapter 6.4 --- Experiments --- p.52
Chapter 6.4.1 --- Evaluation of LSDL(genet) --- p.53
Chapter 6.4.2 --- Evaluation of Various Parameters --- p.55
Chapter 6.4.3 --- Evaluation of LSDL(max) --- p.63
Chapter 6.5 --- Extension of LSDL --- p.66
Chapter 6.5.1 --- Arc Consistency --- p.66
Chapter 6.5.2 --- Lazy Arc Consistency --- p.67
Chapter 6.5.3 --- Experiments --- p.70
Chapter 7 --- Extending LSDL for General CSP's: Initial Results --- p.77
Chapter 7.1 --- General CSP's as Integer Constrained Minimization Problems --- p.77
Chapter 7.1.1 --- Formulation --- p.78
Chapter 7.1.2 --- Incompatibility Functions --- p.79
Chapter 7.2 --- The Discrete Lagrange Multiplier Method --- p.84
Chapter 7.3 --- A Comparison between the Binary and the General Formulation --- p.85
Chapter 7.4 --- Experiments --- p.87
Chapter 7.4.1 --- The N-queens Problems --- p.89
Chapter 7.4.2 --- The Graph-coloring Problems --- p.91
Chapter 7.4.3 --- The Car-Sequencing Problems --- p.92
Chapter 7.5 --- Inadequacy of the Formulation --- p.94
Chapter 7.5.1 --- Insufficiency of the Incompatibility Functions --- p.94
Chapter 7.5.2 --- Dynamic Illegal Constraint --- p.96
Chapter 7.5.3 --- Experiments --- p.97
Chapter 8 --- Concluding Remarks --- p.100
Chapter 8.1 --- Contributions --- p.100
Chapter 8.2 --- Discussions --- p.102
Chapter 8.3 --- Future Work --- p.103
Bibliography --- p.105
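Chapters 5 and 6 of the outline above centre on casting local search as discrete Lagrangian optimization: violated constraints enter a Lagrangian function, and multipliers grow whenever the search is stuck, reshaping the landscape much like GENET's convergence procedure. A minimal sketch of that idea on an invented toy CSP, not the thesis's exact LSDL formulation:

```python
# Sketch of a discrete Lagrangian local search for a binary CSP:
# minimize the number of violated constraints, with one Lagrange
# multiplier per constraint that grows whenever the search is stuck
# at a local minimum (cf. the breakout/DLM ideas in the outline).
# The toy problem and update rule are illustrative only.
import random

domains = {v: [0, 1, 2] for v in "abcd"}
constraints = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]  # not-equal

def violated(assign):
    return [c for c in constraints if assign[c[0]] == assign[c[1]]]

x = {v: random.choice(d) for v, d in domains.items()}
lam = {c: 1.0 for c in constraints}          # Lagrange multipliers

for _ in range(1000):
    if not violated(x):
        break
    # Discrete "gradient" step: best single-variable move w.r.t. L(x, lam).
    def L(assign):
        return sum(lam[c] for c in violated(assign))
    best = min(((v, d) for v in domains for d in domains[v]),
               key=lambda vd: L({**x, vd[0]: vd[1]}))
    if L({**x, best[0]: best[1]}) >= L(x):
        for c in violated(x):                # stuck: raise multipliers
            lam[c] += 1.0
    else:
        x[best[0]] = best[1]
print(x)
```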
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

„Low to High Dimensional Modality Reconstruction Using Aggregated Fields of View“. Master's thesis, 2019. http://hdl.handle.net/2286/R.I.54924.

Der volle Inhalt der Quelle
Annotation:
Autonomous systems out in the real world today deal with a slew of different data modalities to perform effectively in tasks ranging from navigation in complex maneuverable robots to identity verification in simpler static systems. The performance of such a system depends heavily on the continuous supply of data from all modalities, and it can face drastically increased risk when one or more modalities are lost in adverse scenarios such as hardware malfunction or hostile environmental conditions. This thesis investigates modality hallucination and its efficacy in mitigating the risks posed to the autonomous system. Modality hallucination is proposed as an effective way to ensure consistent modality availability and thereby reduce unfavorable consequences. While there has been significant research effort in high-to-low dimensional modality hallucination, such as RGB to depth, there is considerably less interest in the other direction (low-to-high dimensional modality prediction). This thesis demonstrates the effectiveness of low-to-high modality hallucination in reducing the uncertainty in the affected system while ensuring that the method remains task agnostic. A deep neural network based encoder-decoder architecture that aggregates multiple fields of view in its encoder blocks to recover the lost information of the affected modality from the extant modality is presented, with evidence of its efficacy. The hallucination process captures a non-linear mapping between the data modalities, and the learned mapping is used to aid the extant modality in mitigating the risk posed to the system in adverse scenarios involving modality loss. The results are compared with a well-known generative model built for the task of image translation, as well as an off-the-shelf semantic segmentation architecture re-purposed for hallucination. To validate the practicality of the hallucinated modality, extensive classification and segmentation experiments are conducted on the University of Washington depth image database (UWRGBD) and the New York University database (NYUD), demonstrating that hallucination indeed lessens the negative effects of modality loss.
Dissertation/Thesis
Masters Thesis Computer Engineering 2019
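The aggregated-fields-of-view idea named in the abstract can be sketched as parallel dilated convolutions whose outputs are fused, here in PyTorch. Layer sizes, dilation rates, and the depth-to-RGB toy direction are invented for illustration and are not the thesis's configuration:

```python
# Sketch of an encoder block that aggregates multiple fields of view
# via parallel dilated convolutions, in the spirit of the architecture
# described above. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class AggregatedFOVBlock(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different receptive field; concatenate and fuse.
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return torch.relu(self.fuse(feats))

# Toy low-to-high hallucination: depth (1 channel) -> RGB (3 channels).
encoder = AggregatedFOVBlock(1, 16)
decoder = nn.Conv2d(16, 3, kernel_size=3, padding=1)
depth = torch.randn(1, 1, 64, 64)
rgb_hat = decoder(encoder(depth))  # shape (1, 3, 64, 64)
```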
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

„Locally Adaptive Stereo Vision Based 3D Visual Reconstruction“. Doctoral diss., 2017. http://hdl.handle.net/2286/R.I.44195.

Der volle Inhalt der Quelle
Annotation:
Using stereo vision for 3D reconstruction and depth estimation has become a popular and promising research area, as it requires only a simple setup with passive cameras and a relatively efficient processing procedure. The work in this dissertation focuses on locally adaptive stereo vision methods and their application to different imaging setups and image scenes. Solder ball height and substrate coplanarity inspection is essential to detecting potential connectivity issues in semiconductor units. Current ball height and substrate coplanarity inspection tools are expensive and slow, which makes them difficult to use in a real-time manufacturing setting. In this dissertation, an automatic, stereo vision based, in-line ball height and coplanarity inspection method is presented. The proposed method includes an imaging setup together with a computer vision algorithm for reliable, in-line ball height measurement. The imaging setup and calibration, ball height estimation, and substrate coplanarity calculation are presented with novel stereo vision methods. The results of the proposed method are evaluated in a measurement capability analysis (MCA) procedure and compared with the ground truth obtained by an existing laser scanning tool and an existing confocal inspection tool. The proposed system outperforms existing inspection tools in terms of accuracy and stability. In a rectified stereo vision system, stereo matching methods can be categorized into global methods and local methods. Local stereo methods are more suitable for real-time processing, with accuracy competitive with global methods. This work proposes a stereo matching method based on sparse locally adaptive cost aggregation. In order to reduce outlier disparity values that correspond to mismatches, a novel sparse disparity subset selection method is proposed, assigning a significance status to candidate disparity values and selecting the significant disparity values adaptively. An adaptive guided filtering method using the disparity subset for refined cost aggregation and disparity calculation is demonstrated. The proposed stereo matching algorithm is tested on the Middlebury and the KITTI stereo evaluation benchmark images. A performance analysis of the proposed method in terms of the ℓ0 norm of the disparity subset is presented to demonstrate the achieved efficiency and accuracy.
Dissertation/Thesis
Doctoral Dissertation Electrical Engineering 2017
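As context for the local methods discussed above, a minimal rectified-stereo baseline with fixed-window cost aggregation can be built from OpenCV primitives. This is a generic stand-in, not the dissertation's locally adaptive algorithm; file names, focal length, and baseline are placeholders:

```python
# Sketch of a local stereo matching baseline on a rectified pair,
# using OpenCV's block matcher. Local methods aggregate matching
# costs over a window around each pixel; the dissertation's sparse,
# locally adaptive aggregation is not reproduced here.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # (placeholder files)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = matcher.compute(left, right).astype("float32") / 16.0  # fixed-point
# Invalid matches come back as negative disparities and should be masked.

# Depth from disparity for a rectified rig: Z = f * B / d, with focal
# length f (pixels) and baseline B (metres) assumed known.
f, B = 700.0, 0.1
depth = (f * B) / (disp + 1e-6)
```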
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Schöning, Julius. „Interactive 3D Reconstruction“. Doctoral thesis, 2018. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2018052317188.

Der volle Inhalt der Quelle
Annotation:
Applicable image-based reconstruction of three-dimensional (3D) objects offers many interesting industrial as well as private use cases, such as augmented reality, reverse engineering, 3D printing, and simulation tasks. Unfortunately, image-based 3D reconstruction is not yet applicable to these quite complex tasks, since the resulting 3D models are single, monolithic objects without any division into logical or functional subparts. This thesis aims at making image-based 3D reconstruction feasible such that captures from standard cameras can be used for creating functional 3D models. The research presented in the following does not focus on fine-tuning algorithms to achieve minor improvements, but evaluates the entire processing pipeline of image-based 3D reconstruction and contributes at four critical points where significant improvement can be achieved by advanced human-computer interaction: (i) As the starting point of any 3D reconstruction process, the object of interest (OOI) that should be reconstructed needs to be annotated. For this task, novel pixel-accurate OOI annotation as an interactive process is presented, and an appropriate software solution is released. (ii) To improve the interactive annotation process, traditional interface devices, like mouse and keyboard, are supplemented with human sensory data to achieve closer user interaction. (iii) In practice, a major obstacle is the so far missing standard file format for annotations, which has led to numerous proprietary solutions. Therefore, a uniform standard file format is implemented and used for prototyping the first gaze-improved computer vision algorithms. As a sideline of this research, analogies between the close interaction of humans with computer vision systems and 3D perception are identified and evaluated. (iv) Finally, to reduce the processing time of the underlying algorithms used for 3D reconstruction, the ability of artificial neural networks to reconstruct 3D models of unknown OOIs is investigated. Summarizing, the gained improvements show that applicable image-based 3D reconstruction is within reach, but nowadays only feasible with the support of human-computer interaction. Two software solutions, one for visual video analytics and one for spare part reconstruction, are implemented. In the future, automated 3D reconstruction that produces functional 3D models can be reached only when algorithms become capable of acquiring semantic knowledge. Until then, the world knowledge provided to the 3D reconstruction pipeline by human-computer interaction is indispensable.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Yeh, Fang-Tzu, und 葉芳慈. „Three-dimensional Reconstruction System of Intelligent Automatic Detection of Nasal Vestibule and Nasal Septum in Computed Tomography Images“. Thesis, 2012. http://ndltd.ncl.edu.tw/handle/16376431035084518654.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Taiwan University of Science and Technology
Graduate Institute of Automation and Control
100 (ROC academic year, i.e. 2011/12)
This study, entitled "Three-dimensional Reconstruction System of Intelligent Automatic Detection of Nasal Vestibule and Nasal Septum in Computed Tomography Images", combines image processing of the computed tomography signal with a back-propagation neural network to extract the nasal vestibule and nasal septum regions from computed tomography images automatically. It then reconstructs three-dimensional images by combining the two regions with the skull and nose in order to measure three-dimensional information. Current medical diagnosis often relies on manually selected regions in computed tomography images, with software performing three-dimensional reconstruction and measurement of the selected region to support pre-operative judgement. This study therefore developed a three-dimensional reconstruction system that detects the nasal vestibule and nasal septum in computed tomography images automatically. The proposed system employs image processing combined with a back-propagation network to segment the nasal vestibule and nasal septum regions, marking each computed tomography image individually for the three-dimensional reconstruction of those regions. Finally, representative points of three surgically risky areas, namely the brain, the inner side of the eye rim, and the lower edge of the eye rim, are marked so that distances between intranasal structures and the marked points can be measured. The system can assist doctors in pre-operative analysis and judgement with richer nasal information, reducing errors caused by human factors. The overall detection rate of the proposed system reached 99.7%. The three-dimensional presentation combined with the skull and nose has been confirmed as a valuable reference by doctors of the Department of Otolaryngology - Head and Neck Surgery at Tri-Service General Hospital. The findings can facilitate doctors' pre-operative diagnosis and judgement and help to improve medical quality and the development of the medical industry.
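The final reconstruction step described above, going from per-slice segmentations to a measurable 3D model, can be sketched with a generic surface-extraction routine. scikit-image's marching cubes and the random masks below are stand-ins: the study's actual segmentation comes from its back-propagation network, and the landmark coordinates are hypothetical:

```python
# Sketch: build a surface mesh from a stack of segmented CT slices and
# measure the distance from a marked risk point to the reconstructed
# surface. Masks and landmark are invented illustrative data.
import numpy as np
from skimage import measure

# masks: (num_slices, H, W) binary volume of the segmented region,
# e.g. the nasal septum; random data stands in for real segmentations.
masks = (np.random.rand(40, 128, 128) > 0.5).astype(np.uint8)

# spacing = (slice thickness, pixel height, pixel width) in mm, assumed
# values; needed so that distances come out in millimetres.
verts, faces, normals, values = measure.marching_cubes(
    masks, level=0.5, spacing=(1.0, 0.5, 0.5))

# Distance from a marked risk point (e.g. near the eye rim) to the mesh.
risk_point = np.array([20.0, 30.0, 30.0])   # hypothetical landmark (mm)
d = np.linalg.norm(verts - risk_point, axis=1).min()
print(f"minimum distance to reconstructed surface: {d:.1f} mm")
```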
APA, Harvard, Vancouver, ISO und andere Zitierweisen