Dissertations on the topic "3D data analysis"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for your research on the topic "3D data analysis".
Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically compose the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these details are available in the work's metadata.
Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.
Deighton, M. J. "3D texture analysis in seismic data." Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/842764/.
Madrigali, Andrea. "Analysis of Local Search Methods for 3D Data." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016.
Orriols, Majoral Xavier. "Generative Models for Video Analysis and 3D Range Data Applications." Doctoral thesis, Universitat Autònoma de Barcelona, 2004. http://hdl.handle.net/10803/3037.
The majority of problems in Computer Vision do not contain a direct relation between the stimuli provided by a general purpose sensor and their corresponding perceptual categories. A complex learning task must be involved in order to provide such a connection. In fact, the basic forms of energy and their possible combinations are few compared to the infinite possible perceptual categories corresponding to objects, actions, relations among objects, etc. Two main factors determine the level of difficulty of a specific problem: i) the different levels of information that are employed, and ii) the complexity of the model that is intended to explain the observations.
The choice of an appropriate representation for the data becomes significantly relevant when dealing with invariances, since these usually imply that the number of intrinsic degrees of freedom in the data distribution is lower than the number of coordinates used to represent it. Therefore, the decomposition into basic units (model parameters) and the change of representation allow a complex problem to be transformed into a manageable one. This simplification of the estimation problem has to rely on a proper mechanism for combining those primitives in order to give an optimal description of the global complex model. This thesis shows how Latent Variable Models reduce dimensionality, take into account the internal symmetries of a problem, provide a manner of dealing with missing data and make it possible to predict new observations.
The lines of research of this thesis are directed to the management of multiple data sources. More specifically, this thesis presents a set of new algorithms applied to two different areas in Computer Vision: i) video analysis and summarization, and ii) 3D range data. Both areas have been approached through the Generative Models framework, where similar protocols for representing data have been employed.
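To make the latent-variable idea in the abstract above concrete (dimensionality reduction plus reconstruction of observations), here is a minimal, generic sketch using a linear-Gaussian latent variable model from scikit-learn on synthetic data; the model choice, data and parameters are illustrative assumptions and are not taken from the thesis.

```python
# Generic latent-variable-model sketch (illustrative only, not the thesis' models):
# fit a linear-Gaussian latent variable model (factor analysis, closely related to
# probabilistic PCA) to 20-dimensional data that really has 3 degrees of freedom,
# project it to the latent space, and reconstruct ("predict") the observations.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 3))                  # 3 intrinsic degrees of freedom
mixing = rng.normal(size=(3, 20))                   # embedded in 20 observed dimensions
data = latent @ mixing + 0.1 * rng.normal(size=(500, 20))

model = FactorAnalysis(n_components=3, random_state=0)
codes = model.fit_transform(data)                   # low-dimensional representation
reconstruction = codes @ model.components_ + model.mean_
rmse = np.sqrt(np.mean((data - reconstruction) ** 2))
print(f"latent dimensions: {codes.shape[1]}, reconstruction RMSE: {rmse:.3f}")
```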
Qian, Zhongping. "Analysis of seismic anisotropy in 3D multi-component seismic data." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/3515.
Laha, Bireswar. "Immersive Virtual Reality and 3D Interaction for Volume Data Analysis." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/51817.
Ph. D.
Patel, Ankur. "3D morphable models : data pre-processing, statistical analysis and fitting." Thesis, University of York, 2011. http://etheses.whiterose.ac.uk/1576/.
Повний текст джерелаPolat, Songül. "Combined use of 3D and hyperspectral data for environmental applications." Thesis, Lyon, 2021. http://www.theses.fr/2021LYSES049.
Ever-increasing demands for solutions that describe our environment and the resources it contains require technologies that support efficient and comprehensive description, leading to a better understanding of content. Optical technologies, the combination of these technologies and effective processing are crucial in this context. The focus of this thesis lies on 3D scanning and hyperspectral technologies. Rapid developments in hyperspectral imaging are opening up new possibilities for better understanding the physical aspects of materials and scenes in a wide range of applications thanks to its high spatial and spectral resolution, while 3D technologies help to understand scenes in more detail by using geometrical, topological and depth information. The investigations of this thesis aim at the combined use of 3D and hyperspectral data and demonstrate the potential and added value of a combined approach by means of different applications. Special focus is given to the identification and extraction of features in both domains and the use of these features to detect objects of interest. More specifically, we propose different approaches to combining 3D and hyperspectral data depending on the HSI/3D technologies used and show how each sensor can compensate for the weaknesses of the other. Furthermore, a new shape- and rule-based method for the analysis of spectral signatures was developed and presented. Its strengths and weaknesses compared to existing approaches are discussed, and its outperformance of SVM methods is demonstrated on the basis of practical findings from the fields of cultural heritage and waste management. Additionally, a newly developed analytical method based on 3D and hyperspectral characteristics is presented. The evaluation of this methodology is based on a practical example from the field of WEEE and focuses on the separation of materials like plastics, PCBs and electronic components on PCBs. The results obtained confirm that an improvement in classification results could be achieved compared to previously proposed methods. The individual methods and processes developed in this thesis aim at general validity and simple transferability to any field of application.
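As a toy illustration of what a shape- and rule-based analysis of spectral signatures can look like, the sketch below classifies a single spectrum with hand-written rules on band ratios and an absorption-feature depth; the band positions, thresholds and class labels are invented for illustration and do not reproduce the method developed in the thesis.

```python
import numpy as np

def classify_spectrum(wavelengths_nm, reflectance):
    """Toy rule-based classifier for one spectral signature.
    All rules below (bands, ratios, thresholds, labels) are hypothetical."""
    def band(center_nm, half_width_nm=10.0):
        mask = np.abs(wavelengths_nm - center_nm) <= half_width_nm
        return float(np.mean(reflectance[mask]))

    red, nir, swir = band(660), band(860), band(1650)
    red_edge_ratio = (nir - red) / (nir + red + 1e-9)      # shape feature 1
    swir_absorption = 1.0 - swir / max(nir, 1e-9)          # shape feature 2

    if red_edge_ratio > 0.4:
        return "vegetation-like"
    if swir_absorption > 0.3:
        return "plastic-like (strong SWIR absorption)"
    return "other material"

# Example call on a synthetic, nearly flat spectrum
wl = np.arange(400.0, 2500.0, 10.0)
print(classify_spectrum(wl, np.full_like(wl, 0.3)))
```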
Landström, Anders. "Adaptive tensor-based morphological filtering and analysis of 3D profile data." Licentiate thesis, Luleå tekniska universitet, Signaler och system, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-26510.
Approved; 2012; 20121017 (andlan); LICENTIATE SEMINAR. Subject: Signal Processing. Examiner: Senior Lecturer Matthew Thurley, Institutionen för system- och rymdteknik, Luleå tekniska universitet. Discussant: Associate Professor Cris Luengo, Centre for Image Analysis, Uppsala. Time: Wednesday, 21 November 2012, 12:30. Place: A1545, Luleå tekniska universitet.
Cheewinsiriwat, Pannee. "Development of a 3D geospatial data representation and spatial analysis system." Thesis, University of Newcastle Upon Tyne, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.514467.
Повний текст джерелаVick, Louise Mary. "Evaluation of field data and 3D modelling for rockfall hazard analysis." Thesis, University of Canterbury. Geological Sciences, 2015. http://hdl.handle.net/10092/10845.
Повний текст джерелаXinyu, Chang. "Neuron Segmentation and Inner Structure Analysis of 3D Electron Microscopy Data." Kent State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=kent1369834525.
Повний текст джерелаRajamanoharan, Georgia. "Towards spatial and temporal analysis of facial expressions in 3D data." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/31549.
Повний текст джерелаCoban, Sophia. "Practical approaches to reconstruction and analysis for 3D and dynamic 3D computed tomography." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/practical-approaches-to-reconstruction-and-analysis-for-3d-and-dynamic-3d-computed-tomography(f34a2617-09f9-4c4e-9669-f86f6cf2bce5).html.
Повний текст джерелаTrapp, Matthias. "Analysis and exploration of virtual 3D city models using 3D information lenses." Master's thesis, Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2008/1393/.
This diploma thesis deals with real-time rendering techniques for 3D information lenses based on the focus-and-context metaphor. Their applicability to objects and structures of virtual 3D city models is analysed, designed, implemented and evaluated. In contrast to the application domain of 3D terrain models, focus-and-context visualization for virtual 3D city models has hardly been investigated, even though a targeted visualization of context-related data about objects is of great importance there for interactive exploration and analysis. Programmable graphics hardware allows the implementation of new lens techniques that aim to increase the perceptual and cognitive quality of the visualization compared with classical perspective projections. A selection of 3D information lenses is integrated into a 3D scene-graph system: • occlusion lenses modify the appearance of virtual 3D city model objects in order to resolve occlusions and thus ease navigation; • best-view lenses show city model objects in a priority-defined manner and convey meta-information of virtual 3D city models, thereby supporting their exploration and navigation; • colour and deformation lenses modify the appearance and geometry of 3D city model regions to enhance their perception. The techniques for 3D information lenses presented in this work and their application to virtual 3D city models illustrate their potential for interactive visualization and form a basis for further developments.
Böniger, Urs. "Attributes and their potential to analyze and interpret 3D GPR data." Phd thesis, Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2011/5012/.
Over the past decades, geophysical exploration methods have become widely used for the non-destructive or minimally invasive investigation of the shallow subsurface. Compared with the large number of other existing techniques, ground penetrating radar (GPR) provides, under favourable site conditions, the highest spatial resolution. GPR is an electromagnetic (EM) wave method based on the propagation, i.e. reflection, refraction and transmission, of high-frequency EM waves in the subsurface. While two-dimensional survey strategies are already widespread, interest in high-resolution, areal survey strategies that allow subsurface structures to be imaged in three dimensions is currently growing. A method similar in principle to GPR is reflection seismics, whose main application lies in hydrocarbon exploration. Over the last decade, the increasing demand for new oil and gas reservoirs, together with the need to exploit existing reservoirs optimally, led to the intensified use and development of so-called seismic attributes. Attributes represent a measure derived from the data that leads to an improved visual presentation or quantification of data properties relevant to the question at hand. Despite the success of attribute analysis in reservoir applications and the fundamental similarity of reflection-seismic and GPR data sets, attribute-based approaches have so far found little use in the GPR community. The aim of this work is to investigate the potential of attribute analysis for improved interpretation of GPR data, with an emphasis on applications from archaeology and engineering. The success of attributes in general, and of those accounting for neighbourhood relations in particular, is closely linked to the accuracy with which the measured data can be positioned in space. Prior to the actual attribute study, the possibilities for real-time kinematic positioning in GPR surveying were therefore investigated. I was able to show that combining modern self-tracking total stations with GPR instruments via powerful radio modems enables centimetre-accurate positioning. Experimental studies showed that the two potentially limiting factors, system-induced signal interference and data delay (so-called latency), can be neglected or corrected. In archaeology, the investigation of near-surface structures and their spatial form is important for optimizing planned excavations, and GPR has become one of the most widely used non-destructive geophysical methods in this field. Archaeological GPR data sets, however, are often highly complex, which can be attributed to the repeated anthropogenic use of the shallow subsurface. This work shows that using two different attributes describing the variability between neighbouring traces enables a considerably improved, problem-oriented interpretation.
Furthermore, I was able to show that an integrated evaluation of several data sets (in terms of both methodology and processing) can lead to a better-founded interpretation, for example when the data sets contain complementary information. In engineering, damage to or destruction of buried utility lines is a major source of financial loss. Polarization effects, i.e. changes of the signal amplitude as a function of acquisition and physical parameters, are a well-known phenomenon that has so far hardly been exploited in practice. This work shows how polarization effects can be used for improved interpretation. Combining geometric and physical attributes into a new, so-called depolarization attribute demonstrates how different utility-line types can be extracted and classified on the basis of their polarization characteristics. Further important physical characteristics of the GPR wavefield can be investigated with the matching pursuit technique, which has strongly influenced modern signal and image processing in recent years. In geophysics, matching pursuit has so far mainly been used for high-resolution time-frequency analysis. Using a modified tree-based matching pursuit algorithm, I demonstrate the further possibilities that such data decompositions open up for the processing and interpretation of GPR data. Overall, this work shows how modern surveying techniques and attribute-based analysis strategies can be used to acquire three-dimensional data effectively and accurately, and to interpret the resulting data sets efficiently and reliably.
Wright, Gabriel J. T. "Automated 3D echocardiography analysis : advanced methods and their evaluation on clinical data." Thesis, University of Oxford, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275378.
Повний текст джерелаLi, Tianyou. "3D Representation of EyeTracking Data : An Implementation in Automotive Perceived Quality Analysis." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291222.
The importance of perceived quality in the automotive industry has increased considerably in recent years. Since judgements of perceived quality are highly subjective, eye tracking is one of the best methods for capturing customers' subconscious visual activity while they interact with the product. This thesis aims to find a suitable solution for representing 3D eye-tracking data in order to further improve the validity and verification efficiency of perceived-quality analysis, and tries to answer the question: how can eye-tracking data be presented and integrated into the 3D automotive design workflow as material that enables designers to better understand their customers? In the study, a prototype system for car-interior inspection in a virtual reality (VR) showroom was built through an exploratory research process, including investigations of gaze-data acquisition in VR, eye-movement classification of the collected gaze data, and visualizations of the classified eye movements. The prototype system was then evaluated through comparisons between algorithms and feedback from the engineers who participated in the pilot study. As a result, a method combining I-VT (velocity-threshold identification) and DBSCAN (density-based spatial clustering of applications with noise) was implemented as the optimal algorithm for eye-movement classification. A modified heat map, a cluster plot and a convex-hull plot, together with textual information, were used to construct the complete visualization of the eye-tracking data. The prototype system enables automotive designers and engineers to examine both the customers' and their own visual behaviour in the virtual 3D showroom during a car inspection, followed by extraction and visualization of the collected gaze data. This thesis presents the research process, including an introduction to the relevant theory, the implementation of the prototype system and its results. Finally, strengths and weaknesses as well as future work on both the prototype solution and potential experimental use cases are discussed.
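A minimal sketch of the combination described above, velocity-threshold identification (I-VT) of fixation samples followed by DBSCAN clustering of the fixated 3D gaze points; the sampling rate, velocity threshold and DBSCAN parameters are assumed values rather than the ones tuned in the thesis.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def ivt_fixation_mask(gaze_xyz, timestamps, velocity_threshold=0.5):
    """Mark samples as fixations when the 3D gaze-point velocity (here in m/s,
    an assumed unit and threshold) stays below the threshold."""
    dt = np.maximum(np.diff(timestamps), 1e-6)
    speed = np.linalg.norm(np.diff(gaze_xyz, axis=0), axis=1) / dt
    return np.concatenate([[False], speed < velocity_threshold])

def cluster_fixations(gaze_xyz, fixation_mask, eps=0.05, min_samples=10):
    """Group fixation samples into gaze clusters / areas of interest."""
    points = gaze_xyz[fixation_mask]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return points, labels

# Synthetic example: 200 gaze samples at ~90 Hz drifting slowly in 3D
rng = np.random.default_rng(1)
t = np.arange(200) / 90.0
gaze = np.cumsum(rng.normal(scale=0.002, size=(200, 3)), axis=0)
mask = ivt_fixation_mask(gaze, t)
points, labels = cluster_fixations(gaze, mask)
print(f"{int(mask.sum())} fixation samples in {len(set(labels) - {-1})} cluster(s)")
```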
黃卓鴻 and Cheok-hung Wong. "An analysis of the use of an interactive 3D hypermedia paradigm for architecture." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31237824.
Повний текст джерелаAchanta, Leela Venkata Naga Satish. "Data extraction for scale factor determination used in 3D-photogrammetry for plant analysis." Kansas State University, 2013. http://hdl.handle.net/2097/15975.
Department of Computing and Information Sciences
Mitchell L. Neilsen
ImageJ and its recent upgrade, Fiji, are image processing tools that provide extensibility via Java plug-ins and recordable macros [2]. The aim of this project is to develop a plug-in compatible with ImageJ/Fiji that extracts length information from images for scale factor determination used in 3-D photogrammetry for plant analysis [5]. When plant images are processed using Agisoft software, the processed images are merged into a single 3-D model. The coordinate system of the generated 3-D image is a relative coordinate system: distances in it are proportional to, but not numerically the same as, real-world distances. To know the real-world length of any feature represented in the 3-D model, a scale factor is required. This scale factor, when multiplied by a distance in the relative coordinate system, yields the actual length of that feature in the real coordinate system. To determine the scale factor, we process images of unsharpened yellow pencils which are all of the same shape, color and size. The plug-in treats each pencil as a unique region by assigning a unique value and a unique color to all its pixels, and the distance between the midpoints of the two ends of each pencil is calculated. The date and time at which the image file is processed, the name of the image file, the image file's creation and modification dates and times, the total number of valid (complete) pencils processed, the end midpoints of each valid pencil, and the length (distance), i.e. the number of pixels between the two end midpoints, are all written to the output file. The pencil lengths written to the output file are used by the researchers to calculate the scale factor. The plug-in was tested on real images and the results matched the expected results.
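The scale-factor arithmetic described above fits in a few lines; the reference pencil length and the coordinates below are assumed example values, and the snippet is independent of the actual ImageJ/Fiji plug-in.

```python
import math

# Assumed real-world length of the reference object (an unsharpened pencil).
REAL_PENCIL_LENGTH_MM = 190.0

def model_distance(p1, p2):
    """Distance between the two end midpoints in the relative (model) coordinate system."""
    return math.dist(p1, p2)

def scale_factor(end_a, end_b):
    """Scale factor = known real length / length measured in the relative system."""
    return REAL_PENCIL_LENGTH_MM / model_distance(end_a, end_b)

# Example: end midpoints of one pencil as reported in the plug-in's output file
s = scale_factor((102.0, 240.5, 12.3), (512.8, 355.1, 20.9))
feature_length_model_units = 87.4
print(f"scale factor: {s:.4f}")
print(f"feature length: {s * feature_length_model_units:.1f} mm")
```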
Boguslawski, Pawel. "Modelling and analysing 3D building interiors with the dual half-edge data structure." Thesis, University of South Wales, 2011. https://pure.southwales.ac.uk/en/studentthesis/modelling-and-analysing-3d-building-interiors-with-the-dual-halfedge-data-structure(ac1af643-835a-4093-90cd-3d51c696e280).html.
Повний текст джерелаWang, Xiyao. "Augmented reality environments for the interactive exploration of 3D data." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG052.
Exploratory visualization of 3D data is fundamental in many scientific domains. Traditionally, experts use a PC workstation and rely on mouse and keyboard to interactively adjust the view to observe the data. This setup provides immersion through interaction: users can precisely control the view and the parameters, but it does not provide any depth cues, which can limit the comprehension of large and complex 3D data. Virtual or augmented reality (V/AR) setups, in contrast, provide visual immersion with stereoscopic views. Although their benefits have been proven, several limitations restrict their application to existing workflows, including high setup/maintenance needs, difficulties of precise control, and, more importantly, the separation from traditional analysis tools. To benefit from both sides, we thus investigated a hybrid setting combining an AR environment with a traditional PC to provide both interactive and visual immersion for 3D data exploration. We closely collaborated with particle physicists to understand their general working process and visualization requirements to motivate our design. First, building on our observations and discussions with physicists, we built a prototype that supports fundamental tasks for exploring their datasets. This prototype treated the AR space as an extension of the PC screen and allowed users to freely interact with each of them using the mouse. Thus, experts could benefit from the visual immersion while using analysis tools on the PC. An observational study with 7 physicists at CERN validated the feasibility of such a hybrid setting and confirmed the benefits. We also found that the large canvas of AR and walking around to observe the data in AR had great potential for data exploration. However, the design of mouse interaction in AR and the use of PC widgets in AR needed improvements. Second, based on the results of the first study, we decided against intensively using flat widgets in AR. But we wondered if using the mouse for navigating in AR is problematic compared to high degrees of freedom (DOF) input, and then attempted to investigate whether the match or mismatch of dimensionality between input and output devices plays an important role in users' performance. Results of user studies (that compared the performance of using a mouse, a space mouse, and a tangible tablet paired with the screen or the AR space) did not show that the (mis-)match was important. We thus concluded that dimensionality was not a critical point to consider, which suggested that users are free to choose any input that is suitable for a specific task. Moreover, our results suggested that the mouse was still an efficient tool compared to high-DOF input. We can therefore validate our design of keeping the mouse as the primary input for the hybrid setting, while other modalities should only serve as an addition for specific use cases. Next, to support the interaction and to keep the background information while users are walking around to observe the data in AR, we proposed adding a mobile device. We introduced a novel approach that augments tactile interaction with pressure sensing for 3D object manipulation/view navigation. Results showed that this method could efficiently improve the accuracy, with limited influence on completion time. We thus believe that it is useful for visualization purposes where high accuracy is usually demanded.
Finally, we summarize all our findings in this thesis and propose an envisioned setup for a realistic data exploration scenario that makes use of a PC workstation, an AR headset, and a mobile device. The work presented in this thesis shows the potential of combining a PC workstation with AR environments to improve the process of 3D data exploration and confirms its feasibility, all of which will hopefully inspire future designs that seamlessly bring immersive visualization to existing scientific workflows.
Afsar, Fatima. "ANALYSIS AND INTERPRETATION OF 2D/3D SEISMIC DATA OVER DHURNAL OIL FIELD, NORTHERN PAKISTAN." Thesis, Uppsala universitet, Geofysik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-202565.
Повний текст джерелаFangerau, Jens [Verfasser], and Heike [Akademischer Betreuer] Leitte. "Interactive Similarity Analysis for 3D+t Cell Trajectory Data / Jens Fangerau ; Betreuer: Heike Leitte." Heidelberg : Universitätsbibliothek Heidelberg, 2015. http://d-nb.info/1180301900/34.
Повний текст джерелаMillán, Vaquero Ricardo Manuel [Verfasser]. "Visualization methods for analysis of 3D multi-scale medical data / Ricardo Manuel Millán Vaquero." Hannover : Technische Informationsbibliothek (TIB), 2016. http://d-nb.info/111916088X/34.
Повний текст джерелаNellist, Clara. "Characterisation and beam test data analysis of 3D silicon pixel detectors for the ATLAS upgrade." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/characterisation-and-beam-test-data-analysis-of-3d-silicon-pixel-detectors-for-the-atlas-upgrade(22a82583-5588-4675-af5c-c3595b4ceb38).html.
Повний текст джерелаRamírez, Jiménez Guillermo. "Electric sustainability analysis for concrete 3D printing machine." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-258928.
Nowadays, manufacturing technology is becoming increasingly aware of efficiency and sustainability. One such technology is so-called 3D printing. While 3D printing is often associated with plastics, in reality many other materials are being tested that may offer several improvements over plastic. One of these alternatives is stone or concrete, which is more suitable for architectural and artistic applications. Owing to its nature, this new technology involves the use of different techniques compared with the more common 3D printers. It is therefore interesting to know how much more energy-efficient these techniques are and how they could be improved in future revisions. This thesis is an attempt to study and analyse the various devices that make up one of these printers and, with this information, build a model that accurately describes its behaviour. To this end, the power is measured at many points and later analysed and fitted to a predefined function. After the fitting is done, the error is calculated to show how closely the model matches the original data. It turned out that many of these devices produce voltage spikes because of their non-linear behaviour. This behaviour is usually related to switching and can be avoided with different devices. Finally, some advice is given on future research and revisions, which may be helpful for safety, efficiency and quality.
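The measure-fit-evaluate workflow summarized above (measure power at many points, fit a predefined function, compute the error of the fit) can be sketched as follows; the model function, synthetic measurements and noise level are assumptions made for illustration, not the thesis' actual data or model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed predefined function: standby power plus a term linear in the extrusion rate.
def power_model(rate, p_standby, k):
    return p_standby + k * rate

rng = np.random.default_rng(2)
rate = np.linspace(0.0, 10.0, 50)                              # hypothetical operating points
measured = power_model(rate, 350.0, 42.0) + rng.normal(scale=15.0, size=rate.size)

params, _ = curve_fit(power_model, rate, measured)             # fit the predefined function
predicted = power_model(rate, *params)
rmse = np.sqrt(np.mean((measured - predicted) ** 2))           # error of the model
print(f"fitted standby power: {params[0]:.1f} W, slope: {params[1]:.1f} W per rate unit")
print(f"RMSE of the fit: {rmse:.1f} W")
```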
Bagesteiro, Leia Bernardi. "Development of a ground reaction force-measuring treadmill for the analysis of prosthetic limbs during amputee running." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/676/.
Повний текст джерелаMorlot, Jean-Baptiste. "Annotation of the human genome through the unsupervised analysis of high-dimensional genomic data." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066641/document.
The human body has more than 200 different cell types, each containing an identical copy of the genome but expressing a different set of genes. The control of gene expression is ensured by a set of regulatory mechanisms acting at different scales of time and space. Several diseases are caused by a disturbance of this system, notably some cancers, and many therapeutic applications, such as regenerative medicine, rely on understanding the mechanisms of gene regulation. This thesis proposes, in a first part, an annotation algorithm (GABI) to identify recurrent patterns in high-throughput sequencing data. The particularity of this algorithm is that it takes into account the variability observed in experimental replicates by optimizing the rates of false positives and false negatives, significantly increasing the annotation reliability compared to the state of the art. The annotation provides simplified and robust information from a large dataset. Applied to a database of regulator activity in hematopoiesis, we propose original results in agreement with previous studies. The second part of this work focuses on the 3D organization of the genome, intimately linked to gene expression. This structure is now accessible thanks to 3D reconstruction algorithms based on contact data between chromosomes. We offer improvements to the currently most efficient algorithm in the domain, ShRec3D, allowing the reconstruction to be adjusted to the user's needs.
Ragnucci, Beatrice. "Data analysis of collapse mechanisms of a 3D printed groin vault in shaking table testing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22365/.
Повний текст джерелаMorotti, Elena. "Reconstruction of 3D X-ray tomographic images from sparse data with TV-based methods." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3423265.
This thesis proposes the efficient implementation of two iterative methods for the reconstruction of three-dimensional X-ray tomographic images, in the specific case in which the volume must be obtained from undersampled data. When the projections cannot be fully acquired, the resulting sparse computed tomography (SpCT) technique is described by an underdetermined linear system, so the model is reformulated by adding a total variation (TV) term. We therefore define an optimization problem and solve it with a scaled projected gradient algorithm and a fixed-point algorithm. Both methods have been accelerated with effective strategies, tuned specifically for SpCT. In this context it is in fact necessary to reconstruct an image in a very short time while solving a large-scale problem. Simulation tests provide good results that confirm the validity of both the model-based approach and the proposed methods. Accurate reconstructions were obtained from real medical projections in a few iterations, confirming the suitability of the proposed approach for image reconstruction in the SpCT field.
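For reference, the TV-regularized reconstruction model mentioned above is usually written as the following optimization problem (a generic textbook formulation with notation chosen here, not copied from the thesis):

```latex
\min_{x \ge 0} \; \tfrac{1}{2}\,\lVert A x - b \rVert_2^2 \;+\; \lambda\,\mathrm{TV}(x),
\qquad
\mathrm{TV}(x) \;=\; \sum_{i,j,k} \sqrt{(\nabla_1 x)_{i,j,k}^2 + (\nabla_2 x)_{i,j,k}^2 + (\nabla_3 x)_{i,j,k}^2},
```

where x is the reconstructed volume, A the (underdetermined) projection operator, b the measured projections, the three gradient terms are finite differences along the volume axes, and λ > 0 balances data fidelity against the total-variation prior.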
Tabbi, Giuseppe Teodoro Maria [Verfasser]. "Parallelization of a Data-Driven Independent Component Analysis to Analyze Large 3D-Polarized Light Imaging Data Sets / Giuseppe Teodoro Maria Tabbi." Wuppertal : Universitätsbibliothek Wuppertal, 2016. http://d-nb.info/1120027241/34.
Повний текст джерелаMeinhardt, Llopis Enric. "Morphological and statistical techniques for the analysis of 3D images." Doctoral thesis, Universitat Pompeu Fabra, 2011. http://hdl.handle.net/10803/22719.
This thesis proposes a tree data structure to encode the connected components of level sets of 3D images. This data structure is applied as a main tool in several proposed applications: 3D morphological operators, medical image visualization, analysis of color histograms, object tracking in videos and edge detection. Motivated by the problem of edge linking, the thesis also contains a study of anisotropic total variation denoising as a tool for computing anisotropic Cheeger sets. These anisotropic Cheeger sets can be used to find global optima of a class of edge linking functionals. They are also related to some affine invariant descriptors which are used in object recognition, and this relationship is laid out explicitly.
Momcheva, Ivelina G., Gabriel B. Brammer, Dokkum Pieter G. van, Rosalind E. Skelton, Katherine E. Whitaker, Erica J. Nelson, Mattia Fumagalli, et al. "THE 3D-HST SURVEY: HUBBLE SPACE TELESCOPE WFC3/G141 GRISM SPECTRA, REDSHIFTS, AND EMISSION LINE MEASUREMENTS FOR ∼100,000 GALAXIES." IOP PUBLISHING LTD, 2016. http://hdl.handle.net/10150/621407.
Повний текст джерелаComino, Trinidad Marc. "Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/670373.
In recent years, there has been notable growth in the field of digitizing 3D buildings and urban environments. Substantial improvements in both scanning hardware and reconstruction algorithms have led to the development of representations of buildings and cities that can be transmitted and inspected remotely in real time. Among the applications that implement these technologies are several GPS navigators and virtual globes such as Google Earth or the tools provided by the Institut Cartogràfic i Geològic de Catalunya. In particular, in this thesis we conceptualize cities as a collection of individual buildings; therefore, we focus on processing one structure at a time rather than on large-scale processing of urban environments. Nowadays, there is a wide variety of digitization technologies, and selecting the right one is key for each particular application. Roughly, these techniques can be grouped into three main families: time of flight (terrestrial and aerial LiDAR); photogrammetry (street-level, satellite and aerial imagery); and human-edited vector data (cadastre and other map sources). Each of them has its advantages in terms of covered area, data quality, economic cost and processing effort. Plane- and car-mounted LiDAR devices are optimal for sweeping huge areas, but acquiring and calibrating these devices is not a trivial task; moreover, the capture process is performed by scan lines, which need to be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented planar regions. A cheaper option is street-level imagery: a dense set of images captured with a medium-quality camera can yield sufficiently realistic reconstructions using state-of-the-art stereo algorithms. Another advantage of this method is the capture of high-quality colour data; however, the resulting geometric information is usually of low quality. In this thesis, we analyse in depth some of the shortcomings of these data acquisition methods and propose new ways to overcome them. Mainly, we focus on the technologies that allow a high-quality digitization of individual buildings: terrestrial LiDAR for geometric information and street-level imagery for colour information. Our main goal is the processing and enhancement of highly detailed 3D urban representations. For this, we work with several data sources and combine them when possible to produce models that can be inspected in real time. Our research has focused on the following contributions: effective simplification of massive point clouds while preserving high-resolution details; development of normal estimation algorithms designed explicitly for LiDAR data; a low-distortion panoramic representation for point clouds; semantic analysis of street-level imagery to improve the stereo reconstruction of façades; colour enhancement using heuristic techniques and the registration of LiDAR and image data; and efficient and faithful visualization of massive point clouds using image-based techniques.
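To illustrate one of the listed contributions, normal estimation for LiDAR point clouds, here is the standard PCA-on-k-nearest-neighbours baseline (a generic sketch, not the specific algorithms developed in the thesis); the neighbourhood size is an assumed parameter.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    """Estimate one unit normal per point as the eigenvector of the local covariance
    matrix with the smallest eigenvalue (classic PCA baseline for point clouds)."""
    tree = cKDTree(points)
    _, neighbour_idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, idx in enumerate(neighbour_idx):
        patch = points[idx]
        cov = np.cov(patch - patch.mean(axis=0), rowvar=False)
        _, eigenvectors = np.linalg.eigh(cov)          # eigenvalues in ascending order
        normals[i] = eigenvectors[:, 0]
    return normals

# Sanity check: noisy samples of a horizontal plane should give normals close to (0, 0, 1)
rng = np.random.default_rng(3)
pts = np.column_stack([rng.uniform(0, 1, 2000), rng.uniform(0, 1, 2000),
                       rng.normal(scale=0.002, size=2000)])
normals = estimate_normals(pts)
print("mean |n_z|:", round(float(np.abs(normals[:, 2]).mean()), 3))
```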
Tašárová, Zuzana. "Gravity data analysis and interdisciplinary 3D modelling of a convergent plate margin (Chile, 36°-42°S)." [S.l. : s.n.], 2004. http://www.diss.fu-berlin.de/2005/19/index.html.
Повний текст джерелаBorke, Lukas. "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA." Doctoral thesis, Humboldt-Universität zu Berlin, 2017. http://dx.doi.org/10.18452/18307.
With the growing popularity of GitHub, the largest host of source code and collaboration platform in the world, it has evolved to a Big Data resource offering a variety of Open Source repositories (OSR). At present, there are more than one million organizations on GitHub, among them Google, Facebook, Twitter, Yahoo, CRAN, RStudio, D3, Plotly and many more. GitHub provides an extensive REST API, which enables scientists to retrieve valuable information about the software and research development life cycles. Our research pursues two main objectives: (I) provide an automatic OSR categorization system for data science teams and software developers promoting discoverability, technology transfer and coexistence; (II) establish visual data exploration and topic driven navigation of GitHub organizations for collaborative reproducible research and web deployment. To transform Big Data into value, in other words into Smart Data, storing and processing of the data semantics and metadata is essential. Further, the choice of an adequate text mining (TM) model is important. The dynamic calibration of metadata configurations, TM models (VSM, GVSM, LSA), clustering methods and clustering quality indices will be shortened as "smart clusterization". Data-Driven Documents (D3) and Three.js (3D) are JavaScript libraries for producing dynamic, interactive data visualizations, featuring hardware acceleration for rendering complex 2D or 3D computer animations of large data sets. Both techniques enable visual data mining (VDM) in web browsers, and will be abbreviated as D3-3D. Latent Semantic Analysis (LSA) measures semantic information through co-occurrence analysis in the text corpus. Its properties and applicability for Big Data analytics will be demonstrated. "Smart clusterization" combined with the dynamic VDM capabilities of D3-3D will be summarized under the term "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA".
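A compact sketch of the "smart clusterization" pipeline outlined above: a TF-IDF vector space model, truncated SVD as LSA, k-means clustering and one clustering quality index; the toy corpus, number of LSA components and cluster count are placeholder assumptions, not the calibrated configuration from the thesis.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Placeholder documents standing in for GitHub repository descriptions / READMEs.
docs = [
    "time series analysis and forecasting in R",
    "deep learning image classification with convolutional networks",
    "interactive data visualization with D3 and JavaScript",
    "Bayesian statistics and Markov chain Monte Carlo sampling",
    "3D rendering of large point clouds in the browser with Three.js",
    "quantitative finance portfolio optimization toolbox",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)        # VSM
lsa = TruncatedSVD(n_components=3, random_state=0).fit_transform(tfidf)  # LSA topic space
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lsa)
print("cluster labels:", labels.tolist())
print("silhouette index:", round(float(silhouette_score(lsa, labels)), 3))
```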
Khokhlova, Margarita. "Évaluation clinique de la démarche à partir de données 3D." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCK079.
Clinical gait analysis is traditionally subjective, being performed by clinicians observing patients' gait. Common alternatives to such observation are marker-based systems and systems based on ground force platforms. However, this standard gait analysis requires specialized locomotion laboratories, expensive equipment, and lengthy setup and post-processing times. Researchers have made numerous attempts to propose a computer vision based alternative for clinical gait analysis. With the appearance of commercial 3D cameras, the problem of qualitative gait assessment was revisited, as researchers realized the potential of depth-sensing devices for motion analysis applications. However, despite much encouraging progress in 3D sensing technologies, their real use in clinical applications remains scarce. In this dissertation, we develop models and techniques for movement assessment using a Microsoft Kinect sensor. In particular, we study the possibility of using different data provided by an RGB-D camera for motion and posture analysis. The main contributions of this dissertation are the following. First, we carried out a literature study to identify the important gait parameters, the feasibility of different possible technical solutions and existing gait assessment methods. Second, we propose a 3D point cloud based posture descriptor; the designed descriptor can classify static human postures based on 3D data without the use of skeletonization algorithms. Third, we build an acquisition system for gait analysis based on the Kinect v2 sensor. Fourth, we propose an abnormal gait detection approach based on skeleton data. We demonstrate that our gait analysis tool works well on a collection of custom data and existing benchmarks, and we show that our gait assessment approach advances progress in the field, is ready to be used in gait assessment scenarios and requires a minimum of equipment.
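To give a flavour of the skeleton-based features used for this kind of gait assessment, the sketch below computes a knee flexion angle from three Kinect-style 3D joint positions; the joint coordinates are made up, and this is a generic feature rather than the descriptor proposed in the dissertation.

```python
import numpy as np

def joint_angle_deg(a, b, c):
    """Angle at joint b (in degrees) between segments b->a and b->c,
    e.g. hip-knee-ankle for knee flexion."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    u, v = a - b, c - b
    cosine = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))

# Illustrative skeleton frame (metres, camera coordinates)
hip, knee, ankle = (0.10, 0.90, 2.50), (0.12, 0.50, 2.48), (0.11, 0.10, 2.55)
print(f"knee flexion angle: {joint_angle_deg(hip, knee, ankle):.1f} degrees")
```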
Maglo, Adrien Enam. "Progressive and Random Accessible Mesh Compression." Phd thesis, Ecole Centrale Paris, 2013. http://tel.archives-ouvertes.fr/tel-00966180.
Повний текст джерелаRoberts, Ronald Anthony. "A new approach to Road Pavement Management Systems by exploiting Data Analytics, Image Analysis and Deep Learning." Doctoral thesis, Università degli Studi di Palermo, 2021. http://hdl.handle.net/10447/492523.
Повний текст джерелаMotakis, Efthimios. "Multi-scale approaches for the statistical analysis of microarray data (with an application to 3D vesicle tracking)." Thesis, University of Bristol, 2007. http://hdl.handle.net/1983/6a764dc8-c4b8-4034-94cc-e58457825a47.
Повний текст джерелаGafeira, Gonçalves Joana. "Submarine mass movement processes on the North Sea Fan as interpreted from the 3D seismic data." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4714.
Повний текст джерелаDudziak, William James. "PRESENTATION AND ANALYSIS OF A MULTI-DIMENSIONAL INTERPOLATION FUNCTION FOR NON-UNIFORM DATA: MICROSPHERE PROJECTION." University of Akron / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=akron1183403994.
Повний текст джерелаHofmann, Alexandra. "An Approach to 3D Building Model Reconstruction from Airborne Laser Scanner Data Using Parameter Space Analysis and Fusion of Primitives." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2005. http://nbn-resolving.de/urn:nbn:de:swb:14-1121943034550-40151.
In this work, a new method for the automatic reconstruction of 3D building models from airborne laser scanner data is presented. These 3D building models can be used for technical and landscape-planning purposes. For the method to be developed, rules and conditions were defined to ensure fully automatic and robust operation as well as flexible and practical use. The developed method uses point clouds which were extracted from the complete laser scanner data set by a pre-segmentation and which each contain only one building. These point clouds are analysed separately. A 2.5D Delaunay triangulation (TIN) is computed within each point cloud. For every triangle of this triangulation, the orientation parameters in space (azimuth, slope and perpendicular distance of the triangle's plane to the centre of gravity of the point cloud) are determined and entered into a parameter space. In the parameter space, triangles that lie on planar surfaces in object space form clusters. Assuming that a building is composed of planar surfaces, the identification of clusters in the parameter space serves to detect these surfaces. A cluster analysis technique is used to find these groups/clusters. Via the detected clusters, the laser scanner points in object space that form a roof face can be determined, and planes are interpolated into the laser scanner points of the roof faces found in this way. All derived planes enter the developed reconstruction algorithm, which builds a topology between the individual planes. Based on this topology, the planes acquire "knowledge" of their respective neighbours and can be intersected with one another. Walls are added to the finished roof shape and the complete 3D building model is visualized using VRML (Virtual Reality Modeling Language). In addition to developing a scheme for automatic building reconstruction, this study also addresses the derivation of attributes of the 3D building models. The developed method was tested on various airborne laser scanner data sets, and its potentials and limits in processing these different data sets are shown.
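The core of the pipeline described above, mapping TIN triangles into an (azimuth, slope, distance) parameter space and clustering them into roof planes, can be sketched as follows; DBSCAN stands in for the thesis' own cluster analysis technique, and all parameter values are assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay
from sklearn.cluster import DBSCAN

def roof_plane_clusters(points, eps=0.15, min_samples=5):
    """Group TIN triangles into planar roof faces via parameter-space clustering."""
    centroid = points.mean(axis=0)
    tin = Delaunay(points[:, :2])                       # 2.5D triangulation on x, y
    corners = points[tin.simplices]                     # (n_triangles, 3, 3)
    normals = np.cross(corners[:, 1] - corners[:, 0], corners[:, 2] - corners[:, 0])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    normals[normals[:, 2] < 0] *= -1.0                  # orient all normals upwards

    azimuth = np.arctan2(normals[:, 1], normals[:, 0])              # orientation
    slope = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))            # inclination
    distance = np.einsum("ij,ij->i", normals, corners[:, 0] - centroid)

    # Encode azimuth as (cos, sin) to avoid the wrap-around at +/- pi.
    params = np.column_stack([np.cos(azimuth), np.sin(azimuth), slope, distance])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(params)
    return tin.simplices, labels

# Synthetic gabled roof: two planes meeting at a ridge along the y axis
rng = np.random.default_rng(4)
xy = rng.uniform(-5.0, 5.0, size=(400, 2))
z = 10.0 - 0.5 * np.abs(xy[:, 0]) + rng.normal(scale=0.02, size=400)
_, labels = roof_plane_clusters(np.column_stack([xy, z]))
print("detected roof-plane clusters:", sorted(set(labels) - {-1}))
```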
Yalcin, Bayramoglu Neslihan. "Range Data Recognition: Segmentation, Matching, And Similarity Retrieval." Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613586/index.pdf.
However, there is still a gap in 3D semantic analysis between the requirements of the applications and the obtained results. In this thesis we study the 3D semantic analysis of range data. Under this broad title we address segmentation of range scenes, correspondence matching of range images and similarity retrieval of range models. Inputs are considered as single-view depth images. First, possible research topics related to 3D semantic analysis are introduced. Planar structure detection in range scenes is analyzed and some modifications of available methods are proposed. Also, a novel algorithm to segment a 3D point cloud (obtained via a TOF camera) into objects by using spatial information is presented. We propose a novel local range image matching method that combines 3D surface properties with the 2D scale invariant feature transform. Next, our proposal for retrieving similar models, where the query and the database both consist only of range models, is presented. Finally, an analysis of the heat diffusion process on range data is presented, together with challenges and some experimental results.
Aijazi, Ahmad Kamal. "3D urban cartography incorporating recognition and temporal integration." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22528/document.
Over the years, 3D urban cartography has gained widespread interest and importance in the scientific community due to an ever-increasing demand for urban landscape analysis for different popular applications, coupled with advances in 3D data acquisition technology. As a result, in the last few years, work on the 3D modeling and visualization of cities has intensified. Lately, applications have been very successful in delivering effective visualizations of large scale models based on aerial and satellite imagery to a broad audience. This has created a demand for ground based models as the next logical step to offer 3D visualizations of cities. Integrated in several geographical navigators, like Google Street View, Microsoft Visual Earth or Geoportail, several such models are accessible to the general public, who enthusiastically view the realistic representation of the terrain created by mobile terrestrial image acquisition techniques. However, in urban environments, the quality of data acquired by these hybrid terrestrial vehicles is widely hampered by the presence of temporary stationary and dynamic objects (pedestrians, cars, etc.) in the scene. Other associated problems include efficient update of the urban cartography, effective change detection in the urban environment and issues like processing noisy data in the cluttered urban environment, matching/registration of point clouds in successive passages, and wide variations in environmental conditions. Another aspect that has attracted a lot of attention recently is the semantic analysis of the urban environment to semantically enrich 3D maps of urban cities, necessary for various perception tasks and modern applications. In this thesis, we present a scalable framework for automatic 3D urban cartography which incorporates recognition and temporal integration. We present in detail the current practices in the domain along with the different methods, applications, recent data acquisition and mapping technologies, as well as the different problems and challenges associated with them. The work presented addresses many of these challenges, mainly pertaining to classification of the urban environment, automatic change detection, efficient updating of 3D urban cartography and semantic analysis of the urban environment. In the proposed method, we first classify the urban environment into permanent and temporary classes. The objects classified as temporary are then removed from the 3D point cloud, leaving behind a perforated 3D point cloud of the urban environment. These perforations, along with other imperfections, are then analyzed and progressively removed by incremental updating exploiting the concept of multiple passages. We also show that the proposed method of temporal integration helps improve the semantic analysis of the urban environment, especially building façades. The proposed methods ensure that the resulting 3D cartography contains only the exact, accurate and well-updated permanent features of the urban environment. These methods are validated on real data obtained from different sources in different environments. The results not only demonstrate the efficiency, scalability and technical strength of the method but also show that it is ideally suited for applications pertaining to urban landscape modeling and cartography requiring frequent database updating.
Heldreich, Georgina. "A quantitative analysis of the fluvio-deltaic Mungaroo Formation : better-defining architectural elements from 3D seismic and well data." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/a-quantitative-analysis-of-the-fluviodeltaic-mungaroo-formation-betterdefining-architectural-elements-from-3d-seismic-and-well-data(866e245b-ba19-455d-924c-6d20af3dd700).html.
Повний текст джерелаSanchez, Rojas Javier [Verfasser]. "Gravity Data Analysis and 3D Modeling of the Caribe-South America Boundary (76°– 64° W) / Javier Sanchez-Rojas." Kiel : Universitätsbibliothek Kiel, 2012. http://d-nb.info/102256112X/34.
Повний текст джерелаKunde, Felix. "CityGML in PostGIS : Portierung, Anwendung und Performanz-Analyse am Beipiel der 3D City Database von Berlin." Bachelor's thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6365/.
The international standard CityGML has become a key interface for describing 3D city models in a geometric and semantic manner. With the relational database schema 3D City Database and an accompanying Importer/Exporter tool, the Institute for Geodesy and Geoinformation (IGG) of the Technische Universität Berlin plays a leading role in developing concepts and tools that facilitate the understanding and handling of the complex CityGML data model. The software itself is released as open source, yet the only supported database management system (DBMS) so far has been Oracle Spatial (since version 10g), which is proprietary. Within this Master's thesis, the 3D City Database and the Importer/Exporter were ported to the free DBMS PostgreSQL/PostGIS and compared against the performance of the Oracle version. PostGIS is one of the most sophisticated spatial database systems and was recently extended by several features (such as 3D support) for the release of version 2.0. The results of the comparative analysis, as well as a detailed explanation of concepts and implementations (SQL, PL, Java), provide insights into the characteristics of the two DBMSs that go beyond the focus of the project.
Hassan, Raju Chandrashekara. "ANALYSIS OF VERY LARGE SCALE IMAGE DATA USING OUT-OF-CORE TECHNIQUE AND AUTOMATED 3D RECONSTRUCTION USING CALIBRATED IMAGES." Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1189785164.
Повний текст джерелаAhmed-Chaouch, Nabil. "Analyse historique et comparative des deux villes : la vieille ville d'Aix-en-Provence, la médina de Constantine à l'aide des S.I.G. : Comparaison historique et géographique de la croissance de deux villes méditerranéennes." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM3025.
Many fields of application use spatial representations; this is the case for architecture, town planning and geography. In recent years, the acquisition of such spatial data in town planning has progressed significantly with the introduction of new instruments. This acquisition makes it possible to support urban analysis at different levels of detail and for different purposes. This thesis proposes an approach that combines two disciplines: urban typomorphology and geomatics. We explain the central notion of the morphological process and the different operational steps specific to historical analysis for processing map data with GIS; our work primarily explores the contribution of GIS to the processing and analysis of historical data. We focus in particular on complementing the interpretive and descriptive potential of the typomorphological approach. The thesis work was carried out in several stages, including the construction of a formal classification and of concepts related to the historical development and morphology of Constantine and Aix-en-Provence. Starting from this urban history, the comparison of the two cities established a chronology of the evolution of urban forms in order to better understand the challenges of each. In this way, the work contributes to improving the mastery of the urban project. Finally, directions are proposed for continuing this work by exploiting a 3D-representation exploration platform, which proved very useful for historical analysis.