Dissertations / Theses on the topic 'Reconstruction approaches'

Consult the top 50 dissertations / theses for your research on the topic 'Reconstruction approaches.'

1

Chen, Liyuan. "Variational approaches in image recovery and segmentation." HKBU Institutional Repository, 2015. https://repository.hkbu.edu.hk/etd_oa/227.

Abstract:
Image recovery and segmentation are fundamental tasks in image processing because of their many practical applications. Over the past ten years, variational methods have achieved great success on these two problems, and in this thesis we continue this line of work by proposing several new variational approaches for restoring and segmenting an image. The thesis contains two parts: the first addresses image recovery and the second image segmentation. Motivated by the wide use of the magnetic resonance imaging (MRI) technique, we deal in particular with blurry images corrupted by Rician noise. In chapter 1, two new convex variational models for recovering a blurred image corrupted by Rician noise are presented. Both models are motivated by the non-convex maximum-a-posteriori (MAP) model proposed in earlier papers. In the first method, we use an approximation of the zero-order modified Bessel function in the MAP model and add an entropy-like term to obtain a convex model. In the second method, by studying the statistical properties of Rician noise, we derive a strictly convex model by adding an additional data-fidelity term to the MAP model. Primal-dual methods are applied to solve both models. Simulation results show that our models outperform several existing effective models in both recovered image quality and computational time. Cone beam CT (CBCT) is routinely used in image-guided radiation therapy (IGRT) to help with patient setup. Its imaging dose, however, remains a concern that limits its wide application, and developing novel technologies for radiation dose reduction is an active research topic. In chapter 2, we propose an improvement of a practical CBCT dose-control scheme, the temporal non-local means (TNLM) scheme, for IGRT. We denoise the low-dose scanned image by using previous images as prior knowledge, combining deformable image registration with TNLM. Unlike the original TNLM, in the new method the search range for each pixel is not fixed but is based on the motion vector between the prior image and the current image. This makes it easy to find similar pixels in the previous images while reducing computational time, since large search windows are not needed. Phantom and patient studies show that the new method outperforms the original one in both image quality and computational time. In the second part, we present a two-stage method for segmenting an image corrupted by blur and Rician noise. The method is motivated by the two-stage segmentation method developed in 2013 and by restoration methods for images with Rician noise. First, based on the statistical properties of Rician noise, we present a new convex variant of the modified Mumford-Shah model to obtain the smooth cartoon part $u$ of the image. Then, we cluster the cartoon $u$ into different parts to obtain the final contours of the different phases of the image. Moreover, $u$ from the first stage is unique because of the convexity of the new model, and it needs to be computed only once, no matter how the thresholds and the number of phases $K$ in the second stage change. Simulations on synthetic and real images show that our model outperforms several existing segmentation models in both precision and computational time.
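
For readers wanting a concrete starting point, the following is a minimal sketch of the kind of primal-dual iteration the abstract refers to, applied here to the classical TV-L2 (ROF) model rather than the thesis's convexified Rician models; the fidelity term and its proximal step would differ there, and all names are illustrative.

```python
# Minimal Chambolle-Pock primal-dual sketch for the ROF model:
#     min_u  TV(u) + (lam/2) * ||u - f||^2
# This is NOT the thesis's Rician-specific functional, only the generic
# primal-dual machinery applied to the simplest convex fidelity term.
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad.
    d = np.zeros_like(px)
    d[0, :] = px[0, :]; d[1:-1, :] = px[1:-1, :] - px[:-2, :]; d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]; d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]; d[:, -1] -= py[:, -2]
    return d

def tv_denoise(f, lam=10.0, n_iter=200):
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    tau = sigma = 1.0 / np.sqrt(8.0)    # step sizes satisfying tau*sigma*L^2 <= 1
    for _ in range(n_iter):
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))   # project dual onto unit balls
        px, py = px / norm, py / norm
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        u_bar = 2.0 * u - u_old          # extrapolation step
    return u
```
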
2

Krolla, Bernd [Verfasser]. "Heterogeneous Reconstruction Approaches for Object and Scene Representation / Bernd Krolla." München : Verlag Dr. Hut, 2016. http://d-nb.info/1100967923/34.

3

Santos Botelho Oliveira Leite, Ana Paula. "Integrative approaches for systematic reconstruction of regulatory circuits in mammals." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77783.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Computational and Systems Biology Program, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 141-149).
The reconstruction of regulatory networks is one of the most challenging tasks in systems biology. Although some models for inferring regulatory networks can make useful predictions about the wiring and mechanisms of molecular interactions, these approaches are still limited and there is a strong need to develop increasingly universal and accurate approaches for network reconstruction. This problem is particularly challenging in mammals, due to the higher complexity of mammalian regulatory networks and limitations in experimental manipulation. In this thesis, I present three systematic approaches to reconstruct, analyse and refine models of gene regulation. In Chapter 1, I devise a method for deriving an observational model from temporal genomic profiles. I use it to choose targets for perturbation experiments in order to determine a network controlling the responses of mouse primary dendritic cells to stimulation with pathogen components. In Chapter 2, I introduce the algorithm Exigo, for identifying essential interactions in regulatory networks reconstructed from experimental data where regulators have been silenced, using a network reduction strategy. Exigo outperforms previous approaches on simulated data, uncovers the core network structure when applied to real networks derived from perturbation studies in mammals, and improves the performance of network inference methods. Lastly, in Chapter 3 I introduce an approach to learn a module network from multiple high-throughput assays. Analysis of a diffuse large B-cell lymphoma dataset identifies candidate regulator genes, microRNAs and copy number aberrations with biological, and possibly therapeutic, importance.
by Ana Paula Santos Botelho Oliveira Leite.
Ph.D.
4

Urimi, Lakshmi P. "Image reconstruction techniques and measure of quality: classical vs. modern approaches." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2887.

Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Applied Mathematics and Scientific Computation Program. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
5

Khan, Saad. "Multi-view approaches to tracking, 3D reconstruction and object class detection." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4066.

Abstract:
Multi-camera systems are becoming ubiquitous and have found application in a variety of domains including surveillance, immersive visualization, sports entertainment and movie special effects, amongst others. From a computer vision perspective, the challenging task is how to most efficiently fuse information from multiple views in the absence of detailed calibration information and with a minimum of human intervention. This thesis presents a new approach to fuse foreground likelihood information from multiple views onto a reference view without explicit processing in 3D space, thereby circumventing the need for complete calibration. Our approach uses a homographic occupancy constraint (HOC), which states that if a foreground pixel has a piercing point that is occupied by a foreground object, then the pixel warps to foreground regions in every view under homographies induced by the reference plane, in effect using cameras as occupancy detectors. Using the HOC we are able to resolve occlusions and robustly determine ground plane localizations of the people in the scene. To find tracks we obtain ground localizations over a window of frames and stack them, creating a space-time volume. Regions belonging to the same person form contiguous spatio-temporal tracks that are clustered using a graph cuts segmentation approach. Second, we demonstrate that the HOC is equivalent to performing visual hull intersection in the image plane, resulting in a cross-sectional slice of the object. The process is extended to multiple planes parallel to the reference plane in the framework of plane-to-plane homologies. Slices from multiple planes are accumulated and the 3D structure of the object is segmented out. Unlike other visual hull based approaches that use 3D constructs like visual cones, voxels or polygonal meshes requiring calibrated views, ours is purely image-based and uses only 2D constructs, i.e. planar homographies between views. This feature also renders it conducive to graphics hardware acceleration. The current GPU implementation of our approach is capable of fusing 60 views (480x720 pixels) at the rate of 50 slices/second. We then present an extension of this approach to reconstructing non-rigid articulated objects from monocular video sequences. The basic premise is that, due to motion of the object, scene occupancies are blurred out with non-occupancies in a manner analogous to motion-blurred imagery. Using our HOC and a novel construct, the temporal occupancy point (TOP), we are able to fuse multiple views of non-rigid objects obtained from a monocular video sequence. The result is a set of blurred scene occupancy images in the corresponding views, where the values at each pixel correspond to the fraction of the total time duration that the pixel observed an occupied scene location. We then use a motion de-blurring approach to de-blur the occupancy images and obtain the 3D structure of the non-rigid object. In the final part of this thesis, we present an object class detection method employing 3D models of rigid objects constructed using the above 3D reconstruction approach. Instead of using a complicated mechanism for relating multiple 2D training views, our approach establishes spatial connections between these views by mapping them directly to the surface of a 3D model. To generalize the model for object class detection, features from supplemental views (obtained from Google Image search) are also considered. Given a 2D test image, correspondences between the 3D feature model and the testing view are identified by matching the detected features. Based on the 3D locations of the corresponding features, several hypotheses of viewing planes can be made. The one with the highest confidence is then used to detect the object using feature location matching. Performance of the proposed method has been evaluated using the PASCAL VOC challenge dataset, and promising results are demonstrated.
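
The homographic occupancy constraint lends itself to a very small fusion routine. The sketch below, under the assumption that plane-induced homographies from each view to the reference view are already known, warps per-view foreground-likelihood maps into the reference view and multiplies them, which is the consensus idea the abstract describes; it is an illustration, not the thesis's implementation.

```python
# Hedged sketch of HOC-style fusion: warp foreground likelihoods from each
# view into the reference view with plane-induced homographies H_i and take
# their product, so only pixels whose ground-plane piercing point appears
# occupied in every view survive. Estimating the H_i is assumed done elsewhere.
import cv2
import numpy as np

def fuse_foreground(likelihoods, homographies, ref_shape):
    """likelihoods: list of float32 maps in [0, 1], one per view;
    homographies: list of 3x3 arrays mapping each view to the reference view;
    ref_shape: (height, width) of the reference view."""
    fused = np.ones(ref_shape, dtype=np.float32)
    for like, H in zip(likelihoods, homographies):
        warped = cv2.warpPerspective(like, H, (ref_shape[1], ref_shape[0]))
        fused *= warped          # consensus: product of warped likelihoods
    return fused
```

Repeating the fusion for homographies induced by several planes parallel to the reference plane would give the stack of cross-sectional slices the abstract mentions.
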
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science PhD
6

Fang, Yingying. "Investigations on models and algorithms in variational approaches for image restoration." HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/804.

Abstract:
Variational methods, which have proven very useful for solving ill-posed inverse problems, have generated a lot of research interest in image restoration. They transform the restoration problem into the optimization of a well-designed variational model. When the designed model is convex, the recovered image is the global solution found by an appropriate numerical algorithm, and the quality of the restored image depends on the accuracy of the designed model. Thus, much effort has been devoted to proposing more precise models that produce results with more pleasing visual quality. Besides, due to the high dimensionality and nonsmoothness of the imaging model, efficient algorithms for finding the exact solution of the variational model are also of research interest, since they influence the efficiency of restoration techniques in practical applications. In this thesis, we are interested in designing both variational models for image restoration problems and numerical algorithms to solve these models. The first objective of this thesis is to improve two models for image denoising. For multiplicative noise removal, we design a regularizer based on the statistical properties of speckle noise, which transforms the traditional model (known as AA) into a convex one. Therefore, a global solution can be found independent of the initialization of the numerical algorithm. Moreover, the regularization term added to the AA model helps produce a sharper result. The second model improves on the traditional ROF model by adding an edge regularization that incorporates an edge prior obtained from the observed image. Extensive experiments show that the designed edge regularization is superior in recovering the texture of the result while removing staircase artifacts. It is also shown that the designed edge regularization can easily be adapted to other restoration tasks, such as image deblurring. The second objective of this thesis is to study numerical algorithms for a general nonsmooth image restoration model. As imaging models are usually high-dimensional, existing algorithms usually use only first-order information about the image. In contrast, a novel numerical algorithm based on an inexact Lagrangian function is proposed in this thesis, which exploits second-order information to reach a superlinear convergence rate. Experiments show that the proposed algorithm efficiently reaches solutions of higher accuracy compared with state-of-the-art algorithms.
7

Barnes, Karen 1977. "Through a gendered lens? : institutional approaches to gender mainstreaming in post-conflict reconstruction." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33870.

Abstract:
Although civil war affects all civilians, it impacts men and women in different ways, and it influences their gender roles and responsibilities. Comparatively little attention has been given to assessing the gender sensitivity of international organizations that implement post-conflict reconstruction programs. The different social, economic and political dimensions of war-to-peace transitions, and how they impact gender relations, can shed some light on the complicated intersections of needs and interests in war-torn societies. An examination of the policies of the United Nations High Commissioner for Refugees and the World Bank reveals that there is relatively little gender mainstreaming within their post-conflict operations. This research finds that the lack of resources and coordination, the failure to build on local capacities, and a lack of commitment to gender mainstreaming are the main obstacles these organizations face. To improve the situation it is recommended that organizations develop and use a 'gender checklist' at all stages of project planning, implementation and monitoring to ensure increased gender sensitivity in post-conflict programming.
8

Steel, Blair Andrew. "Molecular and palaeontological approaches to the reconstruction of neogene spinose planktic foraminiferal phylogeny." Thesis, Royal Holloway, University of London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.429407.

9

Coban, Sophia. "Practical approaches to reconstruction and analysis for 3D and dynamic 3D computed tomography." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/practical-approaches-to-reconstruction-and-analysis-for-3d-and-dynamic-3d-computed-tomography(f34a2617-09f9-4c4e-9669-f86f6cf2bce5).html.

Abstract:
The problem of reconstructing an image from a set of tomographic data is not new, nor is it lacking attention. However, there is still a distinct gap between the mathematicians and the experimental scientists working in the computed tomography (CT) imaging community. One of the aims of this thesis is to bridge this gap with mathematical reconstruction algorithms and analysis approaches applied to practical CT problems. The thesis begins with an extensive analysis for assessing the suitability of reconstruction algorithms for a given problem. The paper presented examines the idea of extracting physical information from a reconstructed sample and comparing it against the known sample characteristics to determine the accuracy of a reconstructed volume. Various test cases are studied, which are relevant to both mathematicians and experimental scientists. These include the variation in quality of the reconstructed volume as the dose is reduced, and the implementation of the level set evolution method used as part of a simultaneous reconstruction and segmentation technique. The work shows that the assessment of physical attributes results in more accurate conclusions, and this approach allows further analysis into interesting questions in CT; this theme continues throughout the thesis. Recent results in compressive sensing (CS) have gained attention in the CT community, as they indicate the possibility of obtaining an accurate reconstruction of a sparse image from a severely limited or reduced amount of measured data. The literature produced so far has not shown that CS directly guarantees a successful recovery in X-ray CT, and it is still unclear under which conditions a successful sparsity-regularized reconstruction can be achieved. The work presented in the thesis aims to answer this question in a practical setting, and seeks to establish a direct connection between the success of sparsity regularization methods and the sparsity level of the image, which is similar to CS. Using this connection, one can determine the sufficient amount of measurements to collect from just the sparsity of an image. A link was found in a previous study using simulated data, and the work is repeated here with experimental data, where the sparsity level of the scanned object varies. The preliminary work presented here verifies the results from simulated data, showing an 'almost-linear' relationship between the sparsity of the image and the amount of data sufficient for a successful sparsity-regularized reconstruction. Several unexplained artefacts are noted in the literature as the 'partial volume', the 'exponential edge gradient' or the 'penumbra' effect, with no clear explanation of their cause or established techniques to remove them. The work presented here shows that these artefacts are due to a non-linearity in the measured data, which comes from the setup of the system, the scattering of rays, or the dependency of linear attenuation on wavelength in the polychromatic case. However, even in monochromatic CT systems the non-linearity effect can be detected. In some cases the non-linearity effect is too large to ignore, and the reconstruction problem should be adapted to solve a non-linear problem. We derive this non-linear problem and solve it using a numerical optimization technique for both simulated and real gamma-ray data. When compared to reconstructions obtained using the standard linear model, the non-linear reconstructed images show clear improvements in that the non-linear effect is largely eliminated. The thesis finishes with a highlight article in the special issue of Solid Earth named 'Pore-scale tomography & imaging - applications, techniques and recommended practice'. The paper presents a major technical advancement in dynamic 3D CT data acquisition, where the latest hardware and an optimal data acquisition plan made ultra-fast 3D volume acquisition possible. The experiment comprised fast, free-falling water-saline drops travelling through a pack of rock grains with varying porosities. The imaging work was enhanced by the use of iterative methods and the physical quantification analysis performed. The data acquisition and imaging work is the first in the field to capture a free-falling drop, and the imaging clearly shows the fluid interaction with speed and gravity and, more importantly, the inter- and intra-grain fluid transfers.
10

Yang, G. "Numerical approaches for solving the combined reconstruction and registration of digital breast tomosynthesis." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1356652/.

Abstract:
Heavy demands on the development of medical imaging modalities for breast cancer detection have been witnessed in the last three decades in an attempt to reduce the mortality associated with the disease. Recently, Digital Breast Tomosynthesis (DBT) has shown promise for early diagnosis, when lesions are small. In particular, it offers potential benefits over X-ray mammography - the current modality of choice for breast screening - of increased sensitivity and specificity for comparable X-ray dose, speed, and cost. An important feature of DBT is that it provides a pseudo-3D image of the breast. This is of particular relevance for the heterogeneous dense breasts of young women, which can inhibit detection of cancer using conventional mammography. In the same way that it is difficult to see a bird from the edge of the forest, detecting cancer in a conventional 2D mammogram is a challenging task. Three-dimensional DBT, however, enables us to step through the forest, i.e., the breast, reducing the confounding effect of superimposed tissue and so (potentially) increasing the sensitivity and specificity of cancer detection. The workflow in which DBT would be used clinically involves two key tasks: reconstruction, to generate a 3D image of the breast, and registration, to enable images from different visits to be compared, as is routinely done by radiologists working with conventional mammograms. Conventional approaches proposed in the literature separate these steps, solving each task independently. This can be effective if reconstructing from a complete set of data. However, for ill-posed limited-angle problems such as DBT, estimating the deformation is difficult because of the significant artefacts associated with DBT reconstructions, leading to severe inaccuracies in the registration. The aim of my work is to find and evaluate methods capable of allying these two tasks, enhancing the performance of each process as a result. Consequently, I prove that the processes of reconstruction and registration of DBT are not independent but reciprocal. This thesis proposes innovative numerical approaches that combine the reconstruction of a pair of temporal DBT acquisitions with their registration, iteratively and simultaneously. To evaluate the performance of my methods I use synthetic images, breast MRI, and DBT simulations with in-vivo breast compressions. I show that, compared to the conventional sequential method, jointly estimating image intensities and transformation parameters gives superior results with respect to both reconstruction fidelity and registration accuracy.
11

Smale, Kenneth. "Relating Subjective and Objective Knee Function After Anterior Cruciate Ligament Injury Through Biomechanical and Neuromusculoskeletal Modelling Approaches." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37947.

Abstract:
Background: Knee injuries have a considerable impact on both the person's psychological and physical health. We currently have tools to address each of these aspects but they are often considered independent of each other. Little work has been done to consolidate the subjective and objective functional ability of anterior cruciate ligament (ACL) injured individuals, which can be detrimental when implementing a return-to-play decision-making scheme. The lack of understanding concerning the relationship of these two measures may account for the high incidence of re-injury rates and lower quality of life exhibited by so many of these patients. Purpose: The purpose of this doctoral thesis is to investigate the relationship between subjective and objective measures of functional ability in ACL deficient and ACL reconstructed conditions through biomechanical and neuromusculoskeletal modelling approaches. Methods: This thesis comprises five studies based on a single in vivo data collection protocol, medical imaging and in silico data analyses. The in vivo data collection was of test-retest design, where ACL deficient patients participated prior to their operation and approximately ten months post-reconstruction. This experimental group was matched to a healthy, uninjured control group, which was tested a single time. The first study of this thesis involved a descriptive analysis of spatiotemporal, neuromuscular, and biomechanical patterns during hopping and side cut tasks in addition to subjective functional ability questionnaires. Then, two novel measures of dynamic knee joint control were developed and applied along with a third measure to determine if changes in joint control exist between the three groups (Study 2). The relationships of these objective measures of functional ability to subjective measures were then examined through correlation and regression models (Study 3). Following this, a method of including magnetic resonance imaging to construct patient-specific models was developed and implemented to determine realistic kinematic and ligament lengthening profiles (Study 4). These patient-specific models were then applied to quantify knee joint loading in the form of contact and ligament forces, which were correlated to subjective measures of functional ability (Study 5). Results: Even though no major differences in neuromuscular patterns were observed between the three groups, it was found that subjective patient-reported outcome measure scores and biomechanical measures in the form of knee flexion angles and extensor moments were lower in the ACL deficient group compared to healthy controls. These differences continued to exist 10 months post-operation, as the ACL reconstructed group had not fully recovered to patterns observed in the healthy controls. The current findings also suggest a possible hierarchy in the relationships between objective and subjective measures of functional ability. Basic kinematic objective measures such as knee flexion angle show small to moderate correlations, while more comprehensive measures such as stiffness and joint compressive force show moderate to strong correlations to subjective questionnaires. Finally, this thesis developed patient-specific OpenSim models that were used to produce appropriate kinematics and ligament lengthening with the reduction in soft tissue artefact.
Conclusion: This thesis demonstrated that patients who are high-functioning in the ACL deficient state show greater improvements in subjective outcome scores after ACL reconstruction compared to objective measures. Biomechanical and neuromusculoskeletal modelling approaches identified important differences between the healthy and ACL deficient groups that were not resolved post-operatively. Our results also demonstrate that certain subjective and objective measures of functional ability are strongly correlated. The knowledge gained from this test-retest design and novel patient-specific in silico models aids clinicians in managing their expectations regarding the effectiveness of reconstruction and the respective long-term sequelae.
12

Alkindy, Bassam. "Combining approaches for predicting genomic evolution." Thesis, Besançon, 2015. http://www.theses.fr/2015BESA2012/document.

Abstract:
In bioinformatics, understanding how DNA molecules have evolved over time remains an open and complex problem. Algorithms have been proposed to solve it, but they are limited either to the evolution of a given character (for example, a specific nucleotide) or, conversely, focus on large nuclear genomes (several billion base pairs), which have undergone multiple recombination events; since the problem is NP-complete when the set of all possible operations on these sequences is considered, no general solution exists at present. In this thesis, we tackle the problem of reconstructing ancestral DNA sequences by focusing on nucleotide chains of intermediate size that have experienced relatively little recombination over time: chloroplast genomes. We show that at this scale the ancestor reconstruction problem can be solved, even when considering the set of all complete chloroplast genomes currently available. We focus specifically on the ancestral gene order and content, as well as on the technical problems this reconstruction raises in the case of chloroplasts. We show how to obtain predictions of the coding sequences of sufficient quality to allow such reconstruction, and how to obtain a phylogenetic tree in agreement with the largest possible number of genes, on which we can then base our reconstruction back in time, this last step still being finalized. These methods, combining already available tools (whose quality has been assessed) with high-performance computing, artificial intelligence and biostatistics, were applied to a collection of more than 450 chloroplast genomes.
13

Wang, Yongchang. "Novel Approaches in Structured Light Illumination." UKnowledge, 2010. http://uknowledge.uky.edu/gradschool_diss/116.

Abstract:
Among the various approaches to 3-D imaging, structured light illumination (SLI) is widely used. SLI employs a digital projector and a digital camera, so that correspondences can be found based on projecting and capturing a group of designed light patterns. As an active sensing method, SLI is known for its robustness and high accuracy. In this dissertation, I study the phase shifting method (PSM), one of the most widely employed strategies in SLI, and propose three novel approaches. First, by regarding pattern design as placing points in an N-dimensional space, I take phase measuring profilometry (PMP) as an example and propose an edge-pattern strategy that achieves maximum signal-to-noise ratio (SNR) for the projected patterns. Second, I develop a novel period-information-embedded pattern strategy for fast, reliable 3-D data acquisition and reconstruction. The proposed period-coded phase shifting strategy removes the depth ambiguity associated with traditional phase shifting patterns without reducing phase accuracy or increasing the number of projected patterns, so it can be employed in high-accuracy real-time 3-D systems. Third, I propose a hybrid approach for high-quality 3-D reconstruction with only a small number of illumination patterns, by maximizing the use of correspondence information from the phase, texture, and modulation data derived from multi-view, PMP-based SLI images, without rigorously synchronizing the cameras and projectors or calibrating the device gammas. Experimental results demonstrate the advantages of the proposed novel strategies for 3-D SLI systems.
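
As context for the PMP baseline that the abstract builds on, here is a minimal sketch of classical N-step phase shifting; the edge-pattern and period-coded strategies proposed in the dissertation modify the projected patterns themselves, not this closed-form recovery step.

```python
# Textbook N-step phase-shifting recovery: the projector shows N sinusoids
# offset by 2*pi/N, and the wrapped phase at each camera pixel follows in
# closed form. This is the standard PMP step, not the dissertation's
# edge-pattern or period-coded variants.
import numpy as np

def wrapped_phase(images):
    """images: sequence of N captures, I_n = A + B*cos(phi - 2*pi*n/N)."""
    imgs = np.asarray(images, dtype=np.float64)   # shape (N, H, W)
    n = np.arange(len(imgs))
    theta = 2.0 * np.pi * n / len(imgs)
    num = np.tensordot(np.sin(theta), imgs, axes=1)   # ~ (N/2) * B * sin(phi)
    den = np.tensordot(np.cos(theta), imgs, axes=1)   # ~ (N/2) * B * cos(phi)
    return np.arctan2(num, den)    # wrapped to (-pi, pi]; unwrapping is separate
```

The depth ambiguity mentioned in the abstract arises exactly because this phase is wrapped; the period-coded strategy embeds the information needed to unwrap it without extra patterns.
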
14

Farahibozorg, Seyedehrezvan. "Uncovering dynamic semantic networks in the brain using novel approaches for EEG/MEG connectome reconstruction." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278024.

Abstract:
The current thesis addresses some of the unresolved predictions of recent models of the semantic brain system, such as the hub-and-spokes model. In particular, we tackle different aspects of the hypothesis that a widespread network of interacting heteromodal (hub(s)) and unimodal (spokes) cortices underlie semantic cognition. For this purpose, we use connectivity analyses, measures of graph theory and permutation-based statistics with source reconstructed Electro-/MagnetoEncephaloGraphy (EEG/MEG) data in order to track dynamic modulations of activity and connectivity within the semantic networks while a concept unfolds in the brain. Moreover, in order to obtain more accurate connectivity estimates of the semantic networks, we propose novel methods for some of the challenges associated with EEG/MEG connectivity analysis in source space. We utilised data-driven analyses of EEG/MEG recordings of visual word recognition paradigms and found that: 1) Bilateral Anterior Temporal Lobes (ATLs) acted as potential processor hubs for higher-level abstract representation of concepts. This was reflected in modulations of activity by multiple contrasts of semantic variables; 2) ATL and Angular Gyrus (AG) acted as potential integrator hubs for integration of information produced in distributed semantic areas. This was observed using Dynamic Causal Modelling of connectivity among the main left-hemispheric candidate hubs and modulations of functional connectivity of ATL and AG to semantic spokes by word concreteness. Furthermore, examining whole-brain connectomes using measures of graph theory revealed modules in the right ATL and parietal cortex as global hubs; 3) Brain oscillations associated with perception and action in low-level cortices, in particular Alpha and Gamma rhythms, were modulated in response to words with those sensory-motor attributes in the corresponding spokes, shedding light on the mechanism of semantic representations in spokes; 4) Three types of hub-hub, hub-spoke and spoke-spoke connectivity were found to underlie dynamic semantic graphs. Importantly, these results were obtained using novel approaches proposed to address two challenges associated with EEG/MEG connectivity. Firstly, in order to find the most suitable of several connectivity metrics, we utilised principal component analysis (PCA) to find commonalities and differences of those methods when applied to a dataset and identified the most suitable metric based on the maximum explained variance. Secondly, reconstruction of EEG/MEG connectomes using anatomical or fMRI-based parcellations can be significantly contaminated by spurious leakage-induced connections in source space. We, therefore, utilised cross-talk functions in order to optimise the number, size and locations of cortical parcels, obtaining EEG/MEG-adaptive parcellations. In summary, this thesis proposes approaches for optimising EEG/MEG connectivity analyses and applies them to provide the first empirical evidence regarding some of the core predictions of the hub-and-spokes model. The key findings support the general framework of the hub(s)-and-spokes, but also suggest modifications to the model, particularly regarding the definition of semantic hub(s).
15

Ivan, Mihai. "Rethinking the axis: approaches in the development of communist initiated/uncompleted architecture in Bucharest after 1989." University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1155584865.

16

Hasse, Christoph [Verfasser], Johannes [Akademischer Betreuer] Albrecht, and Kevin [Gutachter] Kröninger. "Alternative approaches in the event reconstruction of LHCb / Christoph Hasse ; Gutachter: Kevin Kröninger ; Betreuer: Johannes Albrecht." Dortmund : Universitätsbibliothek Dortmund, 2019. http://d-nb.info/1201885574/34.

17

Belhi, Abdelhak. "Digital Cultural Heritage Preservation: Enrichment and Reconstruction based on Hierarchical Multimodal CNNs and Image Inpainting Approaches." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSE2019.

Abstract:
Cultural heritage plays an important role in defining the identity of a society. Long-term physical preservation of cultural heritage remains fragile and exposed to risks of destruction and accidental damage. Digital technologies such as photography and 3D scanning provide new alternatives for digital preservation, but adapting them to the cultural heritage context is a challenging task. In fact, fully digitizing cultural assets (visually with a digital copy and historically with metadata) is only straightforward for assets that are in good physical shape and fully annotated. In the real world, however, many assets suffer from physical degradation and information loss. Usually, to annotate and curate these assets, heritage institutions need the help of art specialists, historians and other institutions. This process is tedious, involves considerable time and financial resources, and can often be inaccurate. Our work focuses on the cost-effective preservation of cultural heritage through advanced machine learning methods. The aim is to provide a technical framework for the enrichment phase of the cultural heritage digital preservation and curation process. In this thesis, we propose new methods to improve the preservation of cultural objects. Our challenges are mainly related to the annotation and enrichment of objects suffering from missing and incomplete data (annotations and visual data), a process that is often ineffective when performed manually. We introduce approaches based on machine learning and deep learning to automatically complete missing cultural data, focusing on two essential types of missing data: textual data (metadata) and visual data. The first stage concerns the annotation and labelling of cultural objects using deep learning. We propose approaches that exploit the available visual and textual features of cultural objects to classify them effectively: (i) a Hierarchical Classification of objects, to better meet the metadata requirements of each cultural object type and increase classification performance; and (ii) a Multimodal Classification of cultural objects, where an object can be represented during classification by its available metadata in addition to its visual capture. The second stage addresses the lack of visual information when dealing with incomplete and damaged cultural objects. In this case, we propose a deep learning approach based on generative models and image clustering to perform the visual reconstruction of cultural objects. For our experiments, we collected a large cultural database and selected fine-art paintings for testing and validation, as they have the best annotation quality and are therefore best suited for measuring the performance of our algorithms.
18

Chaari, Lotfi. "Parallel magnetic resonance imaging reconstruction problems using wavelet representations." PhD thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00587410.

Abstract:
To reduce acquisition time or improve spatio-temporal resolution in some MRI applications, powerful parallel techniques using multiple receiver coils have emerged since the 1990s. In this context, MRI images must be reconstructed from undersampled data acquired in k-space. Several reconstruction approaches have therefore been proposed, including the SENSitivity Encoding (SENSE) method. However, the reconstructed images are often impaired by artefacts caused by noise in the observed data or by errors in estimating the coil sensitivity profiles. In this work, we present new reconstruction methods based on the SENSE algorithm that introduce regularization in the wavelet domain to promote the sparsity of the solution. Under degraded experimental conditions, these methods achieve good reconstruction quality, in contrast to basic SENSE and other classical regularization techniques (e.g. Tikhonov). The proposed methods rely on parallel optimization algorithms able to handle convex but not necessarily differentiable criteria containing sparsity-promoting priors. Unlike most reconstruction methods, which operate slice by slice, one of the proposed methods performs a 4D (3D + time) reconstruction that exploits spatial and temporal correlations. The hyperparameter estimation problem underlying the regularization process is also addressed in a Bayesian framework using MCMC techniques. Validation on real anatomical and functional data shows that the proposed methods reduce reconstruction artefacts and improve statistical sensitivity/specificity in functional MRI.
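
As a rough illustration of the regularization idea, the sketch below solves a generic l1-in-wavelet-domain model with plain ISTA; the thesis's methods are parallel proximal algorithms for possibly non-differentiable criteria, so this is only their simplest relative. The operator A (standing in for the undersampled, coil-weighted SENSE model) and its adjoint are assumed to be given as callables, the data are taken real-valued for simplicity, and the step size assumes ||A|| <= 1.

```python
# Hedged ISTA sketch for  min_x (1/2)||A x - y||^2 + mu * ||W x||_1,
# with W an orthogonal wavelet transform. A / At stand in for the SENSE
# forward operator and its adjoint; both are assumptions of this sketch.
import numpy as np
import pywt

def soft(c, t):
    # Soft-thresholding, the proximal operator of the l1 norm.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def ista(A, At, y, shape, mu=0.01, step=1.0, n_iter=100, wav="db4"):
    x = np.zeros(shape)
    for _ in range(n_iter):
        x = x - step * At(A(x) - y)                   # gradient step on fidelity
        coeffs = pywt.wavedec2(x, wav, level=3)       # wavelet analysis
        coeffs = [coeffs[0]] + [                      # threshold detail bands only
            tuple(soft(d, step * mu) for d in detail) for detail in coeffs[1:]
        ]
        x = pywt.waverec2(coeffs, wav)[:shape[0], :shape[1]]  # synthesis + crop
    return x
```
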
19

Jack, Dominic. "Deep learning approaches for 3D inference from monocular vision." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/204267/1/Dominic_Jack_Thesis.pdf.

Abstract:
This thesis looks at deep learning approaches to 3D computer vision problems, using representations including occupancy grids, deformable meshes, key points, point clouds, and event streams. We focus on methods targeted towards medium-sized mobile robotics platforms with modest computational power on board. Key results include state-of-the-art accuracies on single-view high-resolution voxel reconstruction and event camera classification tasks, point cloud convolution networks capable of performing inference an order of magnitude faster than similar methods, and a 3D human pose lifting model with significantly fewer floating point operations and learnable weights than baseline deep learning methods.
20

Tran, Dai viet. "Patch-based Bayesian approaches for image restoration." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD049.

Abstract:
In this thesis, we investigate patch-based image denoising and super-resolution under the Bayesian Maximum A Posteriori framework, with the help of a set of high-quality images known as standard images. Our contributions address the construction of the dictionary used to represent image patches and the prior distribution in dictionary space. We demonstrate that careful selection of the dictionary representing the local information of an image can improve image reconstruction. Having built an exhaustive dictionary from the standard images, our main contribution is to locally select a sub-dictionary of matched patches to recover each patch in the degraded image. Besides the conventional Euclidean measure, we propose an effective similarity metric based on the Earth Mover's Distance (EMD) for patch selection, by considering each patch as a distribution of image intensities. Our EMD-based super-resolution algorithm outperforms several state-of-the-art super-resolution methods. To enhance the quality of image denoising, we exploit the distribution of patches in dictionary space as an image prior to regularize the optimization problem. We develop a computationally efficient procedure based on piecewise-constant function estimation for low-dimensional dictionaries, and then propose a Gaussian Mixture Model (GMM) for higher-complexity dictionary spaces. Finally, we justify the practical number of Gaussian components required for recovering patches, shedding new light on the use of GMMs. Our experiments on multiple datasets combine different dictionaries and GMM models.
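
The EMD-based patch selection described above can be sketched in a few lines: each patch is treated as a sample of intensities, and candidate dictionary atoms are ranked by the 1-D Wasserstein distance, which SciPy provides. The dictionary itself, built from the standard images, is assumed given; names and the choice of k are illustrative.

```python
# Hedged sketch of EMD-based sub-dictionary selection: rank dictionary
# patches by the 1-D Wasserstein (Earth Mover's) distance between intensity
# distributions and keep the k best matches for reconstructing one patch.
import numpy as np
from scipy.stats import wasserstein_distance

def select_sub_dictionary(patch, dictionary, k=32):
    """patch: flattened (p*p,) array; dictionary: (M, p*p) array of atoms."""
    dists = np.array([wasserstein_distance(patch, atom) for atom in dictionary])
    nearest = np.argsort(dists)[:k]      # indices of the k closest atoms
    return dictionary[nearest]
```
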
21

Horesh, Lior. "Some novel approaches in modelling and image reconstruction for multi-frequency Electrical Impedance Tomography of the human brain." Thesis, University College London (University of London), 2006. http://discovery.ucl.ac.uk/1444744/.

Abstract:
Electrical Impedance Tomography (EIT) is a recently developed imaging technique. Small insensible currents are injected into the body using electrodes, and the measured voltages are used to reconstruct images of the internal dielectric properties of the body. This imaging technique is portable, safe, rapid and inexpensive, and has the potential to provide a new method for imaging in remote or acute situations where other large scanners, such as MRI, are either impractical or unavailable. It has been in use in clinical research for about two decades but has not yet been adopted into routine clinical practice. One potentially powerful clinical application lies in its use for imaging acute stroke, where it could be used to distinguish haemorrhage from infarction. Hitherto, image reconstruction has mainly been for the more tractable case of changes in impedance over time. For acute stroke, EIT is best operated in multiple-frequency mode, where data is collected at multiple frequencies and images can be recovered with higher fidelity. While the idea appears to be a good one, several important issues affect the likelihood of success in producing clinically reliable images. These include limitations in the accuracy of finite element modelling and image reconstruction, and in the accuracy of recorded voltage data due to noise and confounding factors. The purpose of this work was to address these issues in the hope that, at the end, a clinical study of EIT in acute stroke would have a much greater chance of success. In order to address the feasibility of this application, a comprehensive literature review of the dielectric properties of human head tissues in normal and pathological states was conducted in this thesis. Novel generic tools were developed to enable modelling and non-linear image reconstruction of large-scale problems, such as those arising from the head EIT problem.
22

Morris, Jennifer Louise. "Integrated approaches to the reconstruction of early land vegetation and environments from lower Devonian Strata, Central-South Wales." Thesis, Cardiff University, 2009. http://orca.cf.ac.uk/15352/.

Abstract:
Integrated approaches to the reconstruction of Lower Devonian vegetation and environments are presented, combining palaeobotanical, palynological and sedimentological evidence from Old Red Sandstone strata of the Anglo-Welsh Basin. A new lower Lochkovian plant assemblage from central-south Wales is similar in diversity to contemporaneous assemblages along the southern margins of Laurussia. Coalified megafossils of rhyniophytes and rhyniophytoids e.g. Cooksonia hemisphaerica, represent basal embryophytes. Geometric morphometric analysis of sporangial morphology revealed a strong taphonomic control on shape. Newly discovered highly-branched mesofossils are synonymous with published charcoalified specimens from lower Prídolí and middle Lochkovian localities, and represent stem-group embryophytes with bryophytic characters. The non-embryophytes, with the largest biomass, include the fungal-like Prototaxites and associated mycelia, Pachytheca, and evidence for microbial biofilms. Several new dispersed palynomorph taxa are described, assemblages dominated by cryptospores. With additional published palynomorph and sedimentological data, broad palynofacies are constructed to reveal some information regarding lower Lochkovian habitats. Using core data from this locality, lower strata are correlated to the Raglan Mudstone Formation, and a two-stage, ephemeral, mud-dominated, dryland river system is envisaged. The appearance of sandier, meandering channel deposits in upper strata are correlated to the St. Maughans Formation, which suggests either a change in fluvial morphology or the switching-on of trunk channels, the causes for which are discussed. By combining palaeobotanical and sedimentological data, several plant taphofacies are recognised and a taphofacies model envisaged, the most significant taphonomic constraint on palaeoecological studies being the stratinomic partitioning of vegetation prior to burial by fluvial hydraulic sorting. Plant material is restricted to channel elements with low preservational potential, therefore the extent of phytoterrestrialisation and soil productivity may have previously been underestimated. Indirect evidence for significant soil productivity, which may have increased chemical weathering, potentially altering atmospheric CO2 levels, is calculated from the stable carbon isotopic values of pedogenic carbonate nodules.
APA, Harvard, Vancouver, ISO, and other styles
23

Ni, Fenbiao. "Analysis and reconstruction of the relationship between a circulation anomaly feature and tree rings: Linear and nonlinear approaches." Diss., The University of Arizona, 2000. http://hdl.handle.net/10150/284104.

Full text
Abstract:
Tree rings can be reliable recorders of past weather and climate variations. Tree rings from mountain regions can be linked to upper-air atmospheric sounding observations and large-scale atmospheric circulation patterns. A "synoptic dendroclimatology" approach is used to define the relationship between tree rings and a specific upper-air anomaly feature that affects climate in the western US. I have also reconstructed this anomaly feature using both regression and fuzzy logic approaches. Correlation analysis between 500 mb geopotential heights and tree rings at a site near Eagle, Colorado reveals an important anomaly centered over the western US. This center can be viewed as a circulation anomaly center index (CACI) that quantitatively represents the relationship between atmospheric circulation and tree-growth variations. To reconstruct this index from tree rings, I used both a multiple linear regression (MLR) model and a fuzzy-rule-based (FRB) model. The fuzzy-rule-based model provides a simple structural approach to capturing nonlinear relationships between tree rings and circulation. The reconstruction capability of both models is validated directly on an independent data set. Results show that the fuzzy-rule-based model performs better, in terms of calibration and verification statistics, than the multiple linear regression model. The reconstructed anomaly index can provide a long-term temporal context for evaluating circulation variability and how it is linked to both climate and tree rings.
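The regression half of such a transfer-function reconstruction can be illustrated in a few lines. This is a generic sketch, not the thesis's exact model: the matrix X of ring-width chronologies over the calibration years and the instrumental index y are assumed inputs, and the function names are illustrative.

```python
import numpy as np

def calibrate_mlr(X, y):
    """Fit index = intercept + X @ beta by ordinary least squares.

    X : (years, sites) tree-ring chronologies over the calibration period
    y : (years,) circulation anomaly center index (CACI) to reconstruct
    """
    A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def reconstruct(X_past, coef):
    """Apply the calibrated transfer function to pre-instrumental rings."""
    A = np.column_stack([np.ones(len(X_past)), X_past])
    return A @ coef
```

Verification against a withheld period, as in the thesis, would compare these reconstructions with independent instrumental values.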
APA, Harvard, Vancouver, ISO, and other styles
24

Ackermann, Jens [Verfasser], Michael [Akademischer Betreuer] Goesele, and Reinhard [Akademischer Betreuer] Klein. "Photometric Reconstruction from Images: New Scenarios and Approaches for Uncontrolled Input Data / Jens Ackermann. Betreuer: Michael Goesele ; Reinhard Klein." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2014. http://d-nb.info/1110978979/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Ye, Hongwei. "Development and implementation of fully three-dimensional iterative reconstruction approaches in SPECT with parallel, fan- and cone-beam collimators." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2008. http://wwwlib.umi.com/cr/syr/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Baker, Jennifer. "From Comparative Genomics to Synthetic Biology: Using Ancestral Gene Reconstruction Approaches to Test Hypotheses Regarding Proximate Mechanisms in our Evolutionary History." Thesis, The George Washington University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3688029.

Full text
Abstract:

At its core, human evolutionary biology seeks to answer the question of how the defining characteristics of modern humans evolved, such as large brains, obligate bipedal gait, an extended juvenile period, and increased longevity. Traditional fossil-based research uses morphology to infer behavior and life history, and only recently have researchers been able to make predictions regarding the effects of modifications to the DNA and proteins of our forebears. Using these innovative methods, we investigated the molecular evolution of a superfamily of transcription factors called the nuclear receptors. The patterns of sequence evolution observed in our bioinformatic analyses suggest that a shift in the intensity of selection pressure occurred on NR2C1, a gene that plays a role early in embryonic stem cell proliferation and neuronal differentiation. Methods are now available to reconstruct ancestral DNA and its corresponding protein sequences and thus generate testable hypotheses about the functional evolution of genes on specific lineages. These methods allowed us to analyze how modifications to the modern human version of NR2C1 affected the ability of an embryonic stem cell to remain in its proliferative state. We began by creating three different copies of our gene of interest: the human copy, the chimpanzee copy, and the ancestral copy of NR2C1 inferred for the last common ancestor of chimpanzees and modern humans. Inserting these three gene variants into mouse embryonic stem cells in which NR2C1 had been knocked down allowed us to quantitatively analyze the transcriptional and regulatory functions of NR2C1.

APA, Harvard, Vancouver, ISO, and other styles
27

Lopez, Radcenco Manuel. "Data-driven approaches for ocean remote sensing : from the non-negative decomposition of operators to the reconstruction of satellite-derived sea surface dynamics." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0107/document.

Full text
Abstract:
In the last few decades, the ever-growing availability of multi-source ocean remote sensing data has been a key factor in improving our understanding of upper ocean dynamics. In this regard, developing efficient approaches to exploit these datasets is of major importance. In particular, the decomposition of geophysical processes into relevant modes is a key issue for characterization, forecasting and reconstruction problems. Inspired by recent advances in blind source separation, we aim, in the first part of this thesis, to extend non-negative blind source separation models to the observation-based characterization and decomposition of linear operators or transfer functions between variables of interest. We develop mathematically sound and computationally efficient schemes, and illustrate the relevance of the proposed decomposition models in different applications involving the analysis and forecasting of geophysical dynamics. Subsequently, given that the ever-increasing availability of multi-source datasets supports the exploration of data-driven alternatives to classical model-driven formulations, we explore recently introduced data-driven models for the interpolation of geophysical fields from irregularly sampled satellite-derived observations. Importantly, with a view towards the future SWOT mission, the first satellite mission to produce complete two-dimensional wide-swath satellite altimetry observations, we focus on assessing the extent to which SWOT data may lead to an improved reconstruction of altimetry fields.
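The classical non-negative factorization that such blind source separation models build on can be sketched as follows. This shows only the standard Lee-Seung multiplicative updates for a non-negative data matrix, not the thesis's extension to operator decomposition; all names are illustrative.

```python
import numpy as np

def nmf_multiplicative(V, k, n_iter=200, eps=1e-9, seed=0):
    """Factor V ~= W @ H with W, H >= 0 (Frobenius-loss multiplicative updates).

    V : (m, n) non-negative data matrix
    k : number of modes to extract
    The update rules preserve non-negativity and monotonically decrease
    the reconstruction error ||V - W H||_F^2.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```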
APA, Harvard, Vancouver, ISO, and other styles
28

Tulipani, Svenja. "Novel biomarker and stable isotopic approaches for palaeoenvironmental reconstruction of saline and stratified ecosystems : the modern Coorong Lagoon and Devonian reefs of the Canning Basin." Thesis, Curtin University, 2013. http://hdl.handle.net/20.500.11937/147.

Full text
Abstract:
An integrated elemental, biomarker and stable isotope approach was used to explore environmental and ecological changes, particularly of salinity and water-column stratification, in (i) a modern estuarine ecosystem recently impacted by human water management practices and drought; and (ii) a marine palaeoenvironment associated with the Late Devonian extinctions. A pyrolysis method was developed to investigate methyltrimethyltridecylchroman (MTTC) sources and a proxy for reconstruction of freshwater incursion into marine palaeoenvironments based on these biomarkers was introduced.
APA, Harvard, Vancouver, ISO, and other styles
29

Baumann, Andrea Barbara. "Clash of organisational cultures? : a comparative analysis of American and British approaches to the coordination of defence, diplomacy and development in stability operations, 2001-2010." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:80c8f9c6-fb4f-4c03-9f8f-26d89fcb8339.

Full text
Abstract:
This thesis examines the challenge of coordinating civilian and military efforts within a so-called ‘whole-of-government’ approach to stability operations. The empirical analysis focuses on British and American attempts to implement an integrated civilian-military strategy in Afghanistan and Iraq between 2001 and 2010. Unlike many existing analyses, the thesis consciously avoids jumping to the search for solutions to fix the problem of coordination and instead offers a nuanced explanation of why it arises in the first instance. Empirical data was gathered through personal interviews with a wide range of civilian and military practitioners between 2007 and 2011. Together with the in-depth study of official documents released by, and on, the defence, diplomatic and development components of the British and American governments, they provide the basis for a fine-grained analysis of obstacles to interagency coordination. The thesis offers a framework for analysis that is grounded in organisation theory and distinguishes between material, bureaucratic and cultural dimensions of obstacles to interagency coordination. It identifies organisational cultures as a crucial force behind government agencies’ reluctance to participate and invest in an integrated approach. The empirical chapters cover interagency dynamics within the government bureaucracy and in operations on the ground, including the role of specialised coordination units and Provincial Reconstruction Teams in the pursuit of coordination. The thesis concludes that stabilisation remains an inherently contested endeavour for all organisations involved and that the roles and expectations implied by contemporary templates for coordination clash with prevailing organisational identities and self-perceptions. These findings caution against the procedural and technocratic approach to interagency coordination that permeates the existing literature on the subject and many proposals for reform. While the thesis examines a specific empirical context, its conclusions have broader implications for civilian-military coordination and the quest for an integrated approach to security in the twenty-first century.
APA, Harvard, Vancouver, ISO, and other styles
30

Fuchs, Mirco [Verfasser], Jens [Akademischer Betreuer] Haueisen, Thomas R. [Gutachter] Knösche, and Christoph [Gutachter] Braun. "The smoothness constraint in spatially informed minimum norm approaches for the reconstruction of neuroelectromagnetic sources / Mirco Fuchs ; Gutachter: Thomas R. Knösche, Christoph Braun ; Betreuer: Jens Haueisen." Ilmenau : TU Ilmenau, 2017. http://d-nb.info/1178141594/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Eickel, Klaus [Verfasser], Matthias [Akademischer Betreuer] Günther, Matthias [Gutachter] Günther, and Tony [Gutachter] Stöcker. "New Approaches to Simultaneous Multislice Magnetic Resonance Imaging : Sequence Optimization and Deep Learning based Image Reconstruction / Klaus Eickel ; Gutachter: Matthias Günther, Tony Stöcker ; Betreuer: Matthias Günther." Bremen : Staats- und Universitätsbibliothek Bremen, 2019. http://d-nb.info/1186248920/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Lutzweiler, Christian [Verfasser], Vasilis [Akademischer Betreuer] [Gutachter] Ntziachristos, Hans-Joachim [Gutachter] Bungartz, and Oliver [Gutachter] Hayden. "Towards Real-Time Clinical Imaging with Multi-Spectral Optoacoustic Tomography: Reconstruction Approaches and Initial Experimental Studies / Christian Lutzweiler ; Gutachter: Hans-Joachim Bungartz, Vasilis Ntziachristos, Oliver Hayden ; Betreuer: Vasilis Ntziachristos." München : Universitätsbibliothek der TU München, 2017. http://d-nb.info/1140165836/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Robin, Vincent. "Reconstruction of fire and forest history on several investigation sites in Germany, based on long and short-term investigations - Multiproxy approaches contributing to naturalness assessment on a local scale." Thesis, Aix-Marseille 3, 2011. http://www.theses.fr/2011AIX30057.

Full text
Abstract:
Considering two broad observations for Central Europe, namely the need for, and development of, sustainable management and conservation practices for forest and woodland areas, and the lack of long-term fire history investigations, an attempt has been made to reconstruct the fire and forest history of several investigation sites in Germany. The overall data set gathered and analyzed has been used for on-site naturalness assessment, a notion that is crucial for forest system conservation and restoration planning given the past human impact on forest dynamics. In view of this past human impact, well documented for Central Europe as occurring on a multi-millennial scale, a historical perspective combining long and short temporal scales of investigation was used. Nine investigation sites were selected so as to include various representative types of Central European forest. The sites are located in two main investigation areas: one in northern Germany (Schleswig-Holstein), with four investigation sites, and one in central Germany (Harz Mountains), with five. Four main approaches were used. To assess the current state of each investigated site, forest stands were characterized using various attributes of stand structure and composition. Tree-ring series were analyzed to provide insights into short-term forest tree population dynamics. Charcoal records from soil (combined with soil analysis) and from peat sequences were then analyzed qualitatively and quantitatively; these last two approaches also provide information about past fire history. Current and short-term forest dynamics illustrated various levels of stand complexity, often corresponding to the various levels of human impact that had been postulated. Eight mean site tree-ring chronologies, standardized in high- and mid-frequency signal, spanning back at most to AD 1744 and at least to AD 1923, were obtained. From these chronologies, events of change in growing conditions and their temporal and, where possible, spatial patterns were identified and discussed. Charcoal analysis provided long-term insight into fire history. Based on 71 radiocarbon-dated charcoal samples, two phases with a greater frequency of fire were identified at both local and macro scales: one during the transition from the late Pleistocene to the early Holocene, and one during the mid- and late Holocene. Strong human control during the most recent fire phase has been postulated; this is supported by the on-site soil and peat charcoal records, which allow events of environmental change (disturbance) to be identified at local scales. In the end, the on-site data from the various indicators were combined to assess the fire and forest history and the naturalness level of the investigated sites, thereby contributing to a better understanding of the present and helping to anticipate the future.
APA, Harvard, Vancouver, ISO, and other styles
34

Nifuku, Ko. "Oceanic redox conditions during the Early Aptian Oceanic Anoxic Event (OAE1a) in the Vocontian Basin, SE France: A high-resolution reconstruction from a combination of ichnological and geochemical approaches." 京都大学 (Kyoto University), 2010. http://hdl.handle.net/2433/120669.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Robin, Vincent [Verfasser]. "Reconstruction of fire and forest history on several investigation sites in Germany, based on long and short-term investigations - Multiproxy approaches contributing to naturalness assessment on a local scale / Vincent Robin." Kiel : Universitätsbibliothek Kiel, 2011. http://d-nb.info/1045968153/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Kucharczak, Florentin. "Quantification en tomographie par émission de positons au moyen d'un algorithme itératif par intervalles. Contributions au diagnostic des démences neurodégénératives." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS048.

Full text
Abstract:
Positron emission tomography (PET) is a nuclear imaging modality with a prominent place in the diagnosis of neurodegenerative dementias. After a reconstruction step, the most widely used radiopharmaceutical tracer, 18F-FDG, provides a volume mapping of brain metabolism. The scintigraphic argument for dementia is based on finding a relative hypo-metabolism of one region of interest (ROI) compared with another, usually its contralateral counterpart. However, some cases are very difficult to interpret with the naked eye, mainly at an early stage of the disease. Until now, the development of (semi-)automatic tools for the direct comparison of ROIs has been limited by the lack of knowledge of the statistics followed by the reconstructed data; the main existing methods instead use large databases to evaluate the reconstruction through a dissimilarity score relative to a group of control patients. In this thesis, we propose a new, fully integrated methodology, from the reconstruction of activity images to computer-aided diagnosis of dementia. Based on the reconstruction of confidence intervals, the proposed approach 1/ gives direct access to information on the statistical variability of the data, 2/ reconstructs qualitatively and quantitatively convincing images that facilitate the reading of the examination by the nuclear physician, and 3/ provides a risk score for the patient being affected by a neurodegenerative dementia. The results obtained are comparable with tools validated in clinical routine, while requiring no information other than the PET acquisition data itself.
APA, Harvard, Vancouver, ISO, and other styles
37

Hammad, Mira. "Reconstruction of auricular cartilage using natural-derived scaffolds with an in vivo application in rabbit model Effects of hypoxia on chondrogenic differentiation of progenitor cells from different origins Cell sheets as tools for ear cartilage reconstruction in vivo Cartilage tissue engineering using apple cellulosic scaffolds Cell-secreted matrices as cell supports: Novel approaches for cell culture applications." Thesis, Normandie, 2021. http://www.theses.fr/2021NORMC404.

Full text
Abstract:
Successful reconstruction of auricular cartilage defects requires appropriate restoration of the cartilaginous deformities by suitable cell sources as well as the provision of suitable tissue supports. This work aimed to investigate different scaffolds and biomaterials for in vitro auricular cartilage engineering as well as in vivo auricular cartilage repair in rabbit models. We first showed that auricular perichondrocytes are the best candidates for auricular cartilage regeneration and that hypoxia is not necessary for their chondrogenic differentiation. These cells successfully formed cartilaginous cell sheets, which were used to regenerate cartilage tissue in vitro and to fill and reconstruct cartilage defects in vivo in allogeneic rabbit models. Furthermore, we tested cellulose-derived scaffolds obtained by decellularizing apple tissue. Repopulated with cells, these scaffolds surpassed alginate hydrogels by enhancing colonization and upregulating cartilaginous gene expression in different mammalian cells. In the final part of the thesis, we examined cell-secreted matrices and used them as coatings for different cell culture applications. Interestingly, these coatings promoted both allo- and xenogeneic cell culture, increased proliferation, and boosted chondrogenesis. We also highlighted phenotype preservation during chondrocyte expansion on these cell-secreted matrices. Our study provides novel tools and approaches for multiple cell culture applications.
APA, Harvard, Vancouver, ISO, and other styles
38

Rambour, Clément. "Approches tomographiques structurelles pour l'analyse du milieu urbain par tomographie SAR THR : TomoSAR." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT007/document.

Full text
Abstract:
SAR tomography exploits multiple images of the same area, acquired from slightly different viewpoints, to retrieve the 3-D distribution of the complex reflectivity on the ground. As the transmitted waves are coherent, the missing spatial information (along the vertical axis) is coded in the phase of the pixels. Many methods have been proposed in recent years to retrieve this information. However, the natural redundancies of the scene are generally not exploited to improve the tomographic estimation step. This thesis presents new approaches to regularize the estimated reflectivity density obtained through SAR tomography by exploiting the geometrical structures characteristic of urban areas.
APA, Harvard, Vancouver, ISO, and other styles
39

Conjeevaram, Krishnakumar Naveen Kartik. "A Bayesian approach to feed reconstruction." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82414.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 83-86).
In this thesis, we developed a Bayesian approach to estimate the detailed composition of an unknown feedstock in a chemical plant by combining information from a few bulk measurements of the feedstock in the plant with detailed composition information of a similar feedstock measured in a laboratory. The complexity of the Bayesian model, combined with the simplex-type constraints on the weight fractions, makes it difficult to sample from the resulting high-dimensional posterior distribution. We reviewed and implemented different algorithms to generate samples from this posterior that satisfy the given constraints, and tested our approach on a data set from a plant.
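One standard way to sample a posterior under such simplex constraints is a Metropolis sampler with Dirichlet proposals centered at the current point, so every draw automatically has non-negative fractions summing to one. The sketch below is a generic illustration, not the thesis's algorithm; log_post is a placeholder for the actual model.

```python
import numpy as np
from scipy.special import gammaln

def dirichlet_logpdf(x, alpha):
    """Log density of Dirichlet(alpha) at a point x on the simplex."""
    return (gammaln(alpha.sum()) - gammaln(alpha).sum()
            + ((alpha - 1.0) * np.log(x + 1e-12)).sum())

def metropolis_on_simplex(log_post, x0, n_samples=5000, conc=500.0, seed=0):
    """Random-walk Metropolis on the probability simplex.

    log_post : callable giving the unnormalized log posterior (placeholder)
    conc     : proposal concentration; larger values give smaller steps
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_samples):
        y = rng.dirichlet(conc * x + 1e-8)
        lp_y = log_post(y)
        # Hastings correction for the asymmetric Dirichlet proposal
        log_ratio = (lp_y - lp
                     + dirichlet_logpdf(x, conc * y + 1e-8)
                     - dirichlet_logpdf(y, conc * x + 1e-8))
        if np.log(rng.uniform()) < log_ratio:
            x, lp = y, lp_y
        samples.append(x)
    return np.array(samples)
```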
by Naveen Kartik Conjeevaram Krishnakumar.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
40

Saribekyan, Hayk. "An amalgamating approach to connectomic reconstruction." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113173.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 57-60).
The reconstruction of neurons from connectomics image stacks presents a challenging problem in computer vision. Neuronal objects, unlike objects in natural images, vary greatly in shape and size. Recent methods for the reconstruction of individual objects, like flood-filling networks and mask-extend, showed the possibility of a new direction in the field. By using a CNN to track a single continuously changing object through the stack, much like a human tracer would, they achieve better accuracy than previous agglomeration algorithms. Unfortunately, these methods are costly for dense reconstruction of neurons in a volume, as the number of CNN computations increases linearly with the number of objects. The cross classification clustering algorithm generalizes these accurate methods and tracks all objects in the volume at the same time. It uses only a logarithmic number of fully convolutional CNN passes by reformulating the complex clustering problem with an unknown number of objects into a series of independent classifications of image pixels. Together, these classifications uniquely define the labels in each slice of the volume. We present a pipeline based on cross classification clustering that delivers improved reconstruction accuracy. A significant contribution of our pipeline is its streaming nature, which allows very large datasets to be segmented without storing them.
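The logarithmic pass count rests on a simple counting argument: k objects can be told apart by ceil(log2 k) independent binary classifications per pixel. The sketch below illustrates only this recombination step; the random binary maps stand in for CNN outputs and are purely illustrative.

```python
import numpy as np

def labels_from_binary_passes(binary_maps):
    """Recombine b binary classification maps into per-pixel object labels.

    binary_maps : (b, H, W) array of {0, 1} outputs, one map per pass.
    Each pixel's label is the integer whose binary digits are the b
    outputs, so k objects need only b = ceil(log2(k)) passes, not k.
    """
    b = binary_maps.shape[0]
    weights = (2 ** np.arange(b)).reshape(b, 1, 1)
    return (binary_maps * weights).sum(axis=0)

# Example: up to 8 labels recovered from 3 binary maps
maps = np.random.default_rng(0).integers(0, 2, size=(3, 4, 4))
print(labels_from_binary_passes(maps))
```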
by Hayk Saribekyan.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
41

Scholze, Stephan. "A probabilistic approach to building roof reconstruction /." [S.l.] : [s.n.], 2002. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=14932.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Moothandassery, Ramdevan Krishnadev. "Reconstruction Approach for Partially Truncated CT Data." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231159.

Full text
Abstract:
For various reasons, it might be required to scan an object that partially lies outside the field of view (FOV) of a CT scanner. The parts of the object that lie outside the FOV will not contribute to the line integrals measured by the detector, which causes image artifacts that degrade the final image quality. In this thesis, I suggest a novel reconstruction approach that estimates the attenuation of the object outside the FOV using a priori knowledge about the object's outline. It is shown that, knowing the object's outline, it is possible to determine whether the attenuation along a given line is truncated. The total attenuation for a truncated projection is then estimated by interpolating between the consistent projections; the method therefore requires some of the projections to be consistent. This estimate, along with knowledge of the distance traversed by the X-ray inside the object, is then used to determine the average attenuation. The method was tested on both numerical and physical phantoms. The results are satisfactory even when up to 80% of the projections are truncated. The Structural Similarity Index (SSIM) was compared for the complete reconstructed images, and for the regions of truncation, before and after the algorithm was applied. Reconstructed images from completely consistent projections served as ground truth. The results indicate that the algorithm can be used to reconstruct partially truncated CT data, as tested on numerical and physical phantoms (of semicircular cross-section). There is scope for further testing of the algorithm on irregularly shaped objects.
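The interpolation step described above can be sketched directly on the per-view totals of the sinogram. This is a minimal illustration under stated assumptions: the detection of truncation and the reconstruction itself are omitted, and the truncation mask is assumed given.

```python
import numpy as np

def fill_truncated_totals(totals, truncated):
    """Estimate total attenuation for truncated views by interpolation.

    totals    : (n_views,) sum of each projection (total attenuation per view)
    truncated : (n_views,) boolean mask marking views flagged as truncated
    In ideal parallel-beam data the per-view totals are consistent across
    angles; here we linearly interpolate across view index between the
    consistent views to replace the totals of truncated ones.
    """
    views = np.arange(len(totals))
    good = ~truncated
    if not good.any():
        raise ValueError("method requires some consistent projections")
    est = totals.astype(float).copy()
    est[truncated] = np.interp(views[truncated], views[good], totals[good])
    return est
```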
APA, Harvard, Vancouver, ISO, and other styles
43

Liu, Peng. "Nonlinear statistical approach for 2.5D human face reconstruction." Thesis, University of Newcastle Upon Tyne, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.607166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Chou, George Tao-Shun. "Large-scale 3D reconstruction : a triangulation-based approach." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86296.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. [153]-157).
by George Tao-Shun Chou.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
45

Broadgate, Marianne L. "An integrated approach to palaeoenvironmental reconstruction using GIS." Thesis, University of Edinburgh, 1997. http://hdl.handle.net/1842/16947.

Full text
Abstract:
The data, methods and research organisation involved in palaeoenvironmental reconstruction and palaeoclimatic investigation are analysed to establish the research requirements that information systems approaches must support. The issues of using GIS technology in general, and for palaeoenvironmental research in particular, are then explored to ascertain how computing technology is currently used in palaeoenvironmental work, how this could be enhanced, and what further work is required. A conceptual model (PERIS, PalaeoEnvironmental Research and Information System) and organisational framework is then proposed which would support international palaeoenvironmental research and allow coherent development by maximising the use of current resources and capitalising on existing data, techniques and knowledge. A role for GIS is thus established in the context of international collaboration and individual scientific endeavour, and a clear path of development is provided for the production of a system flexible enough to accommodate changes in ideas and theories. Two case studies are used to exemplify the issues involved and illustrate the conceptual and methodological approaches generated. These focus on the creation of a system to handle data and explore theories associated with sea-level change and glacial geomorphology for the Scandinavian area of north-west Europe during the last glacial-interglacial cycle. These data sets play an important role in the reconstruction of ice sheet evolution and related environmental parameters, from which knowledge about the controls on, and consequences of, climate change is derived. They are felt to be representative of the variety of data available and methods used, and serve as a basis for identifying the issues. The adoption of GIS technology for research makes the inherent issues in this study, which it has been possible to avoid addressing until now, more immediate; implementing GIS-supported research must therefore revolutionise the way in which scientific work is conducted. Conventional methods of research and collaboration will have to adapt and become more rigorous in order to exploit GIS technology. In addition, there are important areas of GIS technology that need further development to allow flexible handling of palaeoenvironmental data for reconstruction purposes. These issues are examined, and the utility of the conceptual PERIS model is explored in some detail, using the case studies.
APA, Harvard, Vancouver, ISO, and other styles
46

Ali, Zehra (Zehra Hyder). "Sustainable shelters for post disaster reconstruction : an integrated approach for reconstruction after the South Asia earthquake." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40402.

Full text
Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2007.
Includes bibliographical references (p. 67-69).
A year after the South Asia earthquake, over 60% of the survivors are still vulnerable due to the lack of adequate shelter, the absence of basic facilities for water and sanitation, and the need for livelihood restoration. The harsh topography, the limited financial and human resources of the displaced, and the environmental impact have resulted in the construction of shelters that do not directly address the improvement of living conditions and remain vulnerable to future disasters. This thesis presents an overview of an integrated approach towards making reconstruction in the earthquake-affected areas of Northern Pakistan more sustainable. The review of shelter solutions and practical recommendations aims to show that there is no single best solution in terms of shelter design. Rather, a synthesis of low-tech solutions for improving the sustainability and safety of existing shelters is provided, along with an understanding of the social mechanisms necessary to address local needs and priorities. Three main components provide the primary context and discuss the role and design of sustainable shelters in the earthquake-affected areas of Northern Pakistan: the 'Review of Housing', the 'Design' and the 'Structural Test'. The review of housing focuses on understanding the current role of stakeholder participation in the construction of homes, the feasibility of constructing homes using indigenous building technology, criteria for assessing the sustainability of designs, and in-depth case studies on the different housing mechanisms (owner-driven reconstruction, participatory housing and contractor-driven reconstruction). The best practices for shelter design and construction are rearticulated in the 'Design' section, which provides an overview of construction practices that exist and are being implemented in the field for reasons of efficiency, affordability and resourcefulness. The 'Structural Test' corroborates suggestions for improving the layout and floor plan of unreinforced masonry construction. Apart from the design of the main structural components, innovations for improved seismic resistance, thermal efficiency, ventilation and roof rainwater harvesting are presented to improve the functionality of the shelter. Thus, by integrating the use of suitable shelter materials, designs and construction techniques, while also considering the implications of indoor lighting, heating and cooking and the opportunities for livelihood generation, the construction of sustainable and safer shelters is encouraged.
by Zehra Ali.
S.B.
APA, Harvard, Vancouver, ISO, and other styles
47

Baumgartner, Kyrie A. "Neogene Climate Change in Eastern North America: A Quantitative Reconstruction." Digital Commons @ East Tennessee State University, 2014. https://dc.etsu.edu/etd/2348.

Full text
Abstract:
Though much is known of the global paleoclimate during the Neogene, little is understood about eastern North America at that time. During the Neogene the global paleoclimate was transitioning from the warm temperatures and higher levels of precipitation of the Paleogene to the cooler temperatures and lower levels of precipitation during the Pleistocene. Eleven fossil sites from Neogene eastern North America were analyzed using the Coexistence Approach: Pollack Farm, Brandon Lignite, Legler Lignite, Alum Bluff, Bryn Mawr, Big Creek on Sicily Island, Brandywine, Gray Fossil Site, Citronelle, Peace Creek, and Ohoopee River Dune Field. Analyses showed a general trend that early and middle Miocene sites were warmer than the area today, while middle and late Miocene sites were comparable to the area today, and Pliocene sites were comparable to or cooler than the area today. However, there is no clear trend of increased precipitation during the Neogene.
APA, Harvard, Vancouver, ISO, and other styles
48

Yin, Jianfeng. "Toward an alternative approach to multi-camera scene reconstruction." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=21970.

Full text
Abstract:
This dissertation addresses several issues related to 3D object reconstruction in a video-projected immersive environment, based on views obtained from multiple cameras. One such issue is color correction to account for differences between cameras and projectors. Various methods are investigated, and a neural network approach is proposed as an effective solution. The problem of textureless or occluded regions in the construction of depth maps is also considered. As an improvement, depth information is propagated by nonlinear diffusion processing based on image gradient constraints. Unlike traditional methods such as space carving and shape from silhouettes, this dissertation treats 3D reconstruction as a classification problem. The challenge is to find a suitable feature to distinguish surface points from non-surface ones. Two such features are proposed: one based on the color histogram of the projections of each voxel onto every camera, and the other on the Frobenius norm of the camera agreement matrix. Tensor voting is used to refine the reconstruction, and the results are evaluated experimentally on synthetic and physical data.
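A photo-consistency feature in this spirit can be sketched as follows. The dissertation's exact camera agreement matrix is not reproduced here; the pairwise color-similarity matrix below is an illustrative stand-in to show the shape of the computation.

```python
import numpy as np

def voxel_agreement(colors):
    """Photo-consistency score for one voxel.

    colors : (n_cams, 3) RGB samples of the voxel's projection in each camera.
    Builds a pairwise agreement matrix of color similarities and returns its
    Frobenius norm; a surface voxel, seen consistently by the cameras that
    observe it, scores higher than a non-surface one.
    """
    d = np.linalg.norm(colors[:, None, :] - colors[None, :, :], axis=-1)
    agreement = np.exp(-d / (d.mean() + 1e-9))   # similarity in (0, 1]
    return np.linalg.norm(agreement, ord='fro')
```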
APA, Harvard, Vancouver, ISO, and other styles
49

Robinson, Bruce T. "A multilevel approach to the algebraic image reconstruction problem." Thesis, Monterey, California. Naval Postgraduate School, 1994. http://hdl.handle.net/10945/28382.

Full text
Abstract:
Approved for public release; distribution is unlimited
The problem of reconstructing an image from its Radon transform profiles is outlined. This problem has medical, industrial and military applications. Using the computer assisted tomography (CAT) scan as an example, a discretization of the problem based on natural pixels is described, leading to a symmetric linear system that is in general smaller than that resulting from the conventional discretization. The linear algebraic properties of the system matrix are examined, and the convergence of the Gauss-Seidel iteration applied to the linear system is established. Next, multilevel technology is successfully incorporated through a multilevel projection method (PML) formulation of the problem. This results in a V-cycle algorithm, the convergence of which is established. Finally, the problem of spotlight computed tomography, where high-quality reconstructions of only a portion of the image are required, is outlined. We establish the formalism necessary to apply fast adaptive composite (FAC) grids in this setting, and formulate the problem in block Gauss-Seidel form. Numerical results and reconstructed images are presented which demonstrate the usefulness of these two multilevel approaches.
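The Gauss-Seidel iteration whose convergence the thesis establishes is the textbook sweep below. It is shown generically for a system A x = b, not with the natural-pixel matrices themselves; for symmetric positive definite A the sweep converges.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, n_iter=100):
    """Gauss-Seidel iteration for the linear system A x = b.

    Sweeps through the unknowns, updating each one in place using the
    most recent values of the others. Convergence holds for symmetric
    positive definite A.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(n_iter):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
    return x
```

A multilevel V-cycle, as in the thesis, would use such sweeps as the smoother on each grid level.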
APA, Harvard, Vancouver, ISO, and other styles
50

Singh, Inderjeet. "Curve based approach for shape reconstruction of continuum manipulators." Thesis, Lille 1, 2018. http://www.theses.fr/2018LIL1I042/document.

Full text
Abstract:
This work provides a new methodology to reconstruct the shape of continuum manipulators using a curve-based approach. Pythagorean Hodograph (PH) curves are used to reconstruct the optimal shape of continuum manipulators under a minimum potential energy (bending and twisting energy) criterion. This methodology allows us to obtain the optimal kinematics of continuum manipulators. The models are applied to a continuum manipulator, namely the Compact Bionic Handling Assistant (CBHA), for experimental validation under free-load manipulation. Calibration of the PH-based shape reconstruction methodology is performed to improve its accuracy and to accommodate uncertainties due to the structure of the manipulator. The proposed method is also tested under loaded manipulation after combining it with a qualitative neural network approach. Furthermore, the PH-based methodology is extended to model multi-section heterogeneous bodies. This model is experimentally validated on a closed-loop kinematic chain formed by two CBHAs jointly manipulating a rope.
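The defining property of a planar PH curve is that its hodograph is the square of a complex preimage polynomial, which makes the speed, and hence arc length and energy integrals, polynomial. A minimal sketch of that property, with an illustrative linear preimage (not the thesis's manipulator model):

```python
import numpy as np

def planar_ph_hodograph(u_coef, v_coef, t):
    """Evaluate the hodograph of a planar Pythagorean-hodograph curve.

    The curve is defined by a complex preimage polynomial
    w(t) = u(t) + i v(t); its hodograph is r'(t) = w(t)^2, i.e.
    x'(t) = u^2 - v^2 and y'(t) = 2 u v, so the speed
    |r'(t)| = u^2 + v^2 is itself a polynomial: the property that
    gives closed-form arc lengths and bending-energy integrals.
    Coefficients are given highest degree first, as for np.polyval.
    """
    u = np.polyval(u_coef, t)
    v = np.polyval(v_coef, t)
    return u**2 - v**2, 2 * u * v, u**2 + v**2  # x'(t), y'(t), speed

# Example: a linear preimage yields a PH cubic
t = np.linspace(0.0, 1.0, 5)
print(planar_ph_hodograph([1.0, 0.5], [0.5, 1.0], t))
```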
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography