Academic literature on the topic 'Document reconstruction'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Document reconstruction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Document reconstruction"

1

Zhang, Ce, and Hady W. Lauw. "Topic Modeling on Document Networks with Adjacent-Encoder." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6737–45. http://dx.doi.org/10.1609/aaai.v34i04.6152.

Full text
Abstract:
Oftentimes documents are linked to one another in a network structure, e.g., academic papers cite other papers, Web pages link to other pages. In this paper we propose a holistic topic model to learn meaningful and unified low-dimensional representations for networked documents that seek to preserve both textual content and network structure. On the basis of reconstructing not only the input document but also its adjacent neighbors, we develop two neural encoder architectures. Adjacent-Encoder, or AdjEnc, induces competition among documents for topic propagation, and reconstruction among neighbors for semantic capture. Adjacent-Encoder-X, or AdjEnc-X, extends this to also encode the network structure in addition to document content. We evaluate our models on real-world document networks quantitatively and qualitatively, outperforming comparable baselines comprehensively.
APA, Harvard, Vancouver, ISO, and other styles
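To make the reconstruction objective of the Zhang and Lauw entry above concrete, here is a minimal Python sketch, assuming a plain autoencoder whose loss covers not only each document's bag-of-words vector but also those of its graph neighbors; the toy data and all names (`X`, `neighbors`, `W_enc`, `W_dec`) are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of the Adjacent-Encoder idea: one low-dimensional code per
# document must reconstruct the document itself AND its adjacent neighbors.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Four toy documents (bag-of-words over 6 terms) linked in a chain.
X = rng.random((4, 6))
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

d, k = X.shape[1], 3                       # vocabulary size, topic dimension
W_enc = rng.normal(scale=0.1, size=(d, k))
W_dec = rng.normal(scale=0.1, size=(k, d))

def loss(W_enc, W_dec):
    """Squared error of reconstructing each document and its neighbors."""
    H = sigmoid(X @ W_enc)                 # document codes ("topics")
    err = np.sum((H @ W_dec - X) ** 2)     # reconstruct the input document
    for i, nbrs in neighbors.items():
        for j in nbrs:                     # also reconstruct each neighbor
            err += np.sum((H[i] @ W_dec - X[j]) ** 2)
    return err

print(loss(W_enc, W_dec))                  # the value a trainer would minimize
```

Minimizing this loss (e.g., by gradient descent) would force a document's code to carry information shared with its neighbors, which is the intuition behind the neighbor-reconstruction objective.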
2

Xu, Hedong, Jing Zheng, Ziwei Zhuang, and Suohai Fan. "A Solution to Reconstruct Cross-Cut Shredded Text Documents Based on Character Recognition and Genetic Algorithm." Abstract and Applied Analysis 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/829602.

Full text
Abstract:
The reconstruction of destroyed paper documents has attracted growing interest in recent years. This topic is relevant to the fields of forensics, investigative sciences, and archeology. Previous research and analysis on the reconstruction of cross-cut shredded text documents (RCCSTD) are mainly based on likelihood and traditional heuristic algorithms. In this paper, a feature-matching algorithm based on character recognition via establishing a database of letters is presented, reconstructing the shredded document by row clustering, intrarow splicing, and interrow splicing. Row clustering is executed through the clustering algorithm according to the clustering vectors of the fragments. Intrarow splicing, regarded as the travelling salesman problem, is solved by the improved genetic algorithm. Finally, the document is reconstructed by interrow splicing according to the line spacing and the proximity of the fragments. Computational experiments suggest that the presented algorithm is of high precision and efficiency, and that it may be useful for cross-cut shredded text documents of different sizes.
APA, Harvard, Vancouver, ISO, and other styles
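Since the Xu et al. abstract above frames intrarow splicing as a travelling salesman problem solved by a genetic algorithm, the following Python sketch illustrates that framing under stated assumptions: the binary toy strips, the edge cost, and the crossover and mutation operators are generic choices, not the paper's improved genetic algorithm.

```python
# Hedged sketch: order shredded strips within a row (TSP) with a simple GA.
import random
import numpy as np

random.seed(0)
rng = np.random.default_rng(0)

# Toy fragments: binary image strips; the cost compares touching edges.
frags = [rng.integers(0, 2, size=(20, 5)) for _ in range(6)]

def edge_cost(a, b):
    """Dissimilarity between the right edge of strip a and left edge of b."""
    return int(np.sum(frags[a][:, -1] != frags[b][:, 0]))

def tour_cost(order):
    return sum(edge_cost(order[i], order[i + 1]) for i in range(len(order) - 1))

def crossover(p1, p2):
    """Order crossover: keep a slice of p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    middle = p1[i:j]
    rest = [g for g in p2 if g not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(order, rate=0.2):
    if random.random() < rate:             # occasionally swap two strips
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

pop = [random.sample(range(len(frags)), len(frags)) for _ in range(30)]
for _ in range(200):                       # evolve orderings toward low cost
    pop.sort(key=tour_cost)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]

best = min(pop, key=tour_cost)
print("best splicing order:", best, "cost:", tour_cost(best))
```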
3

He, Zhanying, Chun Chen, Jiajun Bu, Can Wang, Lijun Zhang, Deng Cai, and Xiaofei He. "Document Summarization Based on Data Reconstruction." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 620–26. http://dx.doi.org/10.1609/aaai.v26i1.8202.

Full text
Abstract:
Document summarization is of great value to many real world applications, such as snippets generation for search results and news headlines generation. Traditionally, document summarization is implemented by extracting sentences that cover the main topics of a document with a minimum redundancy. In this paper, we take a different perspective from data reconstruction and propose a novel framework named Document Summarization based on Data Reconstruction (DSDR). Specifically, our approach generates a summary which consists of those sentences that can best reconstruct the original document. To model the relationship among sentences, we introduce two objective functions: (1) linear reconstruction, which approximates the document by linear combinations of the selected sentences; (2) nonnegative linear reconstruction, which allows only additive, not subtractive, linear combinations. In this framework, the reconstruction error becomes a natural criterion for measuring the quality of the summary. For each objective function, we develop an efficient algorithm to solve the corresponding optimization problem. Extensive experiments on summarization benchmark data sets DUC 2006 and DUC 2007 demonstrate the effectiveness of our proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
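As a hedged illustration of the linear-reconstruction objective described in the He et al. abstract above, this Python sketch greedily selects the sentences whose span best reconstructs all sentence vectors in the least-squares sense; the greedy selection strategy and all names are assumptions for illustration, not the paper's exact optimization algorithm.

```python
# Hedged sketch of summarization by linear reconstruction: pick sentences
# whose linear combinations best approximate every sentence of the document.
import numpy as np

def summarize(S, k):
    """S: (n_sentences, n_terms) matrix; return indices of k chosen sentences."""
    chosen = []
    for _ in range(k):
        best_i, best_err = None, np.inf
        for i in range(S.shape[0]):
            if i in chosen:
                continue
            B = S[chosen + [i]]                       # candidate sentence basis
            # coefficients minimizing ||S - A @ B||_F^2 (linear reconstruction)
            A_t, *_ = np.linalg.lstsq(B.T, S.T, rcond=None)
            err = np.linalg.norm(S - A_t.T @ B) ** 2  # reconstruction error
            if err < best_err:
                best_i, best_err = i, err
        chosen.append(best_i)
    return chosen

S = np.random.default_rng(1).random((8, 12))          # 8 sentences, 12 terms
print(summarize(S, 3))                                # indices of the summary
```

The reconstruction error computed here mirrors the selection criterion the abstract names; a nonnegative variant would additionally constrain the coefficients to be additive.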
4

Marques, M. A. O., and C. O. A. Freitas. "Document Decipherment-restoration: Strip-shredded Document Reconstruction based on Color." IEEE Latin America Transactions 11, no. 6 (December 2013): 1359–65. http://dx.doi.org/10.1109/tla.2013.6710384.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Apollonio, Fabrizio Ivan, Federico Fallavollita, Elisabetta Caterina Giovannini, Riccardo Foschi, and Salvatore Corso. "The reconstruction of drawn architecture." Studies in Digital Heritage 1, no. 2 (December 14, 2017): 380–95. http://dx.doi.org/10.14434/sdh.v1i2.23243.

Full text
Abstract:
Among the many cases concerning the process of digital hypothetical 3D reconstruction, a particular case is constituted by never-realized projects and plans. These are projects that were designed but remained on paper and that, albeit documented by technical drawings, pose the typical problems common to all other cases, from 3D reconstructions of transformed architectures to destroyed or lost buildings and parts of towns. These case studies start from original old drawings, which have to be complemented by different kinds of documentary sources able to provide, by means of evidence, induction, deduction, and analogy, information characterized by different levels of uncertainty and related to different levels of accuracy. All methods adopted in a digital hypothetical 3D reconstruction process show that the goal of all researchers is to make explicit, or at least intelligible, through a graphical system, the value of the reconstructive process behind a particular result. The result of a reconstructive process acts in the definition of three areas, intimately related to one another, which concur to define the digital consistency of the artifact under study: shape (geometry, size, spatial position); appearance (surface features); and constitutive elements (physical form, stratification of building/manufacturing systems). The paper proposes a general framework aimed at using 3D models as a means to document and communicate the shape and appearance of never-built architecture, as well as to depict temporal correspondence and allow the traceability of the uncertainty and accuracy that characterize each reconstructed element.
APA, Harvard, Vancouver, ISO, and other styles
6

Xu, Wang, Kehai Chen, and Tiejun Zhao. "Document-Level Relation Extraction with Reconstruction." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 16 (May 18, 2021): 14167–75. http://dx.doi.org/10.1609/aaai.v35i16.17667.

Full text
Abstract:
In document-level relation extraction (DocRE), graph structure is generally used to encode relation information in the input document to classify the relation category between each entity pair, and has greatly advanced the DocRE task over the past several years. However, the learned graph representation universally models relation information between all entity pairs regardless of whether there are relationships between these entity pairs. Thus, those entity pairs without relationships disperse the attention of the encoder-classifier DocRE for ones with relationships, which may further hinder the improvement of DocRE. To alleviate this issue, we propose a novel encoder-classifier-reconstructor model for DocRE. The reconstructor manages to reconstruct the ground-truth path dependencies from the graph representation, to ensure that the proposed DocRE model pays more attention to encoding entity pairs with relationships during training. Furthermore, the reconstructor is regarded as a relationship indicator to assist relation classification during inference, which can further improve the performance of the DocRE model. Experimental results on a large-scale DocRE dataset show that the proposed model can significantly improve the accuracy of relation extraction on a strong heterogeneous graph-based baseline. The code is publicly available at https://github.com/xwjim/DocRE-Rec.
APA, Harvard, Vancouver, ISO, and other styles
7

Rane, Kantilal P., and S. G. Bhirud. "Text Reconstruction using Torn Document Mosaicing." International Journal of Computer Applications 30, no. 10 (September 29, 2011): 21–27. http://dx.doi.org/10.5120/3669-5170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gomez, Juan Carlos, and Marie-Francine Moens. "PCA document reconstruction for email classification." Computational Statistics & Data Analysis 56, no. 3 (March 2012): 741–51. http://dx.doi.org/10.1016/j.csda.2011.09.023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
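To illustrate how document reconstruction can drive classification, as in the Gomez and Moens entry above, here is a minimal Python sketch assuming a per-class PCA scheme: fit one principal-component basis per class and assign a new email to the class whose basis reconstructs it with the smallest error. The helper names and the toy data are assumptions, not details taken from the paper.

```python
# Hedged sketch: classification by smallest PCA reconstruction error.
import numpy as np

def fit_basis(X, k):
    """Mean and top-k principal directions of the rows of X."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def recon_error(x, basis):
    mu, V = basis
    z = (x - mu) @ V.T                       # project onto the class subspace
    return np.linalg.norm(x - (mu + z @ V))  # distance to the reconstruction

def classify(x, bases):
    """bases: {label: (mu, V)} -> label with the lowest reconstruction error."""
    return min(bases, key=lambda c: recon_error(x, bases[c]))

rng = np.random.default_rng(2)
bases = {"spam": fit_basis(rng.random((30, 10)), 3),
         "ham": fit_basis(rng.random((30, 10)) + 1.0, 3)}
print(classify(rng.random(10) + 1.0, bases))  # likely "ham" on this toy data
```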
9

Tucker, James M., and Peter Porzig. "Between Artefacts, Fragments, and Texts: An Analysis of 4Q266 Column I." Dead Sea Discoveries 25, no. 3 (November 20, 2018): 335–58. http://dx.doi.org/10.1163/15685179-12341484.

Full text
Abstract:
In this article, we propose a new reconstruction of column I of 4Q266 (4QDa), which is part of our new edition of the Damascus Document. Our proposed reconstruction results from a careful assessment of previous reconstructions of this column, as it pertains to fragment 1b and its relationship to frag. 1a. Specifically, we argue that the DJD line 1 reading of ב]נ֯י̇ אור לה̇נז֯ר֯ מדר֯[כי is better understood as a scribal gloss and not as the first line of the column. We conclude the article by discussing the compositional history of the Damascus Document, especially in terms of how our new reconstruction relates to the Cairo Genizah Codex CD A.
APA, Harvard, Vancouver, ISO, and other styles
10

Rey, Jean-Sébastien. "Codicological Reconstruction of the Cairo Damascus Document (CD A) and 4QDa." Dead Sea Discoveries 25, no. 3 (November 20, 2018): 319–34. http://dx.doi.org/10.1163/15685179-12341483.

Full text
Abstract:
Despite the fact that scholars often rely on the medieval Cairo Damascus Document manuscripts (CD) when reconstructing the Qumran Damascus Document scrolls (4QD), there has yet to be an attempt to reconstruct the medieval codex on the basis of the Qumran scrolls. The purpose of this contribution, then, is to offer a reconstruction of CD A that is both informed by the Qumran scrolls and informative for the reconstruction of 4QD. This article will try to answer three questions: 1) the number of quires that comprised CD A; 2) the width of the first column of 4QDa; and 3) the length of the missing part of the CD A codex.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Document reconstruction"

1

Chhatkuli, Ajad. "Local analytic and global convex methods for the 3D reconstruction of isometric deformable surfaces." Thesis, Clermont-Ferrand 1, 2016. http://www.theses.fr/2016CLF1MM27/document.

Full text
Abstract:
This thesis contributes to the problem of 3D reconstruction for deformable surfaces using a single camera. In order to model surface deformation, we use the isometric prior because many real object deformations are near-isometric. Isometry implies that the surface cannot stretch or compress. We tackle two different problems. The first is called Shape-from-Template, where the object’s deformed shape is computed from a single image and a texture-mapped 3D template of the object surface. Previous methods propose a differential model of the problem and compute the local analytic solutions. In these methods the solution related to the depth gradient is discarded and only the depth solution is used. We demonstrate that the depth solution lacks stability as the projection geometry tends to affine. We provide alternative methods based on the local analytic solutions of first-order quantities, such as the depth gradient or surface normals. Our methods are stable in all projection geometries. The second type of problem, called Non-Rigid Shape-from-Motion, is the more general template-free reconstruction scenario. In this case one obtains the object’s shapes from a set of images where it appears deformed. We contribute to this problem for both local and global solutions using the perspective camera. In the local or point-wise method, we solve for the surface normal at each point assuming infinitesimal planarity of the surface. We then compute the surface by integration. In the global method we find a convex relaxation of the problem. This is based on relaxing isometry to inextensibility and maximizing the surface’s average depth. This solution combines all constraints into a single convex optimization program to compute depth and works for a sparse point representation of the surface. We detail the extensive experiments that were used to demonstrate the effectiveness of each of the proposed methods. The experiments show that our local template-free solution performs better than most of the previous methods. Our local template-based method and our global template-free method perform better than the state-of-the-art methods, with robustness to correspondence noise. In particular, we are able to reconstruct difficult, non-smooth and articulating deformations with the latter, while with the former we can accurately reconstruct large deformations from images taken at very long focal lengths.
APA, Harvard, Vancouver, ISO, and other styles
2

Al, Moussawi Ali. "Reconstruction 3D de vaisseaux sanguins." Thesis, Toulon, 2014. http://www.theses.fr/2014TOUL0014/document.

Full text
Abstract:
This work concerns the 3D reconstruction of blood vessels from a limited number of 2D transversal cuts obtained from scanners. If data are missing, a reconstruction coherent with a vessel network is obtained. This approach also limits human intervention in processing the images of the 2D transversal cuts. Knowing that the images used are obtained by scanner, the difficulty is to connect the blood vessels between some widely spaced cuts in order to produce the graph corresponding to the network of vessels. Identifying the vessels on each transversal cut as masses to be transported, we construct a graph as the solution of a branched transport problem. At this stage, we are able to reconstruct the 3D geometry by using the 2D Level Set functions given by the transversal cuts and the graph information. The 3D geometry of blood vessels is represented by a Level Set function defined at any point of space whose zero level corresponds to the vessel walls. The resulting geometry is then integrated in a fluid mechanics code solving the incompressible Navier-Stokes equations on a Cartesian grid strictly included in the reconstructed geometry. This choice is motivated by the speed of assembling the mesh and the discrete derivative operators, in view of possible deformations of the vessels. The mismatch between the mesh and the geometry’s interface is overcome thanks to a modified boundary condition allowing an accurate computation of the stresses at the walls.
APA, Harvard, Vancouver, ISO, and other styles
3

Yang, Xiaoyi. "Background reconstruction from multiple images." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT020/document.

Full text
Abstract:
The general topic of this thesis is to reconstruct the background scene from a burst of images in the presence of foreground masks. We focus on background detection methods as well as on solutions to the geometric and chromatic distortions introduced during photography. A series of processes is proposed, consisting of geometric alignment, chromatic adjustment, image fusion, and defect correction. We consider the case where the background scene is a flat surface. The geometric alignment between a reference image and any other image in the sequence relies on the computation of a homography followed by a bilinear interpolation. The chromatic adjustment aims to attach a similar contrast to the scene in different images. We propose to model the chromatic mapping between images with linear approximations whose parameters are determined from SIFT-matched pixels. These two steps are followed by image fusion, for which several methods are compared. The first proposition is an extension of the classical median filter to vector-valued data. It is robust when more than half of the images convey the background information. Besides, we design an original algorithm based on the notion of clique. It serves to distinguish the biggest cloud of pixels in RGB space. This approach is highly reliable even when the background pixels are in the minority. During the implementation, we notice that some fusion results bear blur-like defects due to geometric alignment errors. We therefore provide a combination method as a complementary step to improve the fusion results. It is based on a comparison between the fusion image and the other aligned images after applying a Gaussian filter. The output is a mosaic of detailed patches drawn from the aligned images that are most similar to their related fusion patches. The performance of our methods is evaluated on a data set containing numerous images of different qualities. Experiments confirm the reliability and robustness of our design under a variety of photography conditions.
APA, Harvard, Vancouver, ISO, and other styles
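The vector-valued median mentioned in the Yang abstract above can be sketched as follows, assuming the classical vector median filter definition: at each pixel of a stack of aligned images, keep the RGB sample whose summed distance to the other samples is smallest. This is an illustration of that one step, not the thesis implementation, and the function name is hypothetical.

```python
# Hedged sketch: per-pixel vector median across a stack of aligned images.
import numpy as np

def vector_median_stack(stack):
    """stack: (n_images, H, W, 3) aligned RGB images -> (H, W, 3) background."""
    n, H, W, _ = stack.shape
    out = np.empty((H, W, 3), dtype=stack.dtype)
    for y in range(H):
        for x in range(W):
            samples = stack[:, y, x, :].astype(float)   # n RGB vectors
            # pairwise distances; the vector median is the sample with the
            # smallest summed distance to all other samples at this pixel
            d = np.linalg.norm(samples[:, None] - samples[None, :], axis=-1)
            out[y, x] = stack[d.sum(axis=1).argmin(), y, x]
    return out
```

Because the median picks an actual observed sample, transient foreground objects covering a pixel in fewer than half of the images are voted out, which matches the robustness condition stated in the abstract.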
4

Slysz, Rémi. "Reconstruction de surface 3D d'objets vivants." Thesis, Troyes, 2014. http://www.theses.fr/2014TROY0022/document.

Full text
Abstract:
This thesis is part of the CPER BRAMMS project, one of whose objectives was to develop a method for acquiring the surface of the female bust. The work therefore aimed at the design, development and implementation of a three-dimensional measuring machine adapted to living objects. Among the large number of existing three-dimensional measurement methods, attention was paid to stereo matching as well as to the use of structured light. Matching in stereovision consists in finding homologous pixels in two images of the same scene taken from two different points of view. One way to achieve the matching is to use correlation measures. The algorithms used then face certain difficulties: changes in lighting, noise, distortions, occlusions, weakly textured areas and large homogeneous areas. In this work, structured light is used essentially to add information in homogeneous areas. Developing this approach, an original reconstruction method based on the exploitation of a particular pattern projected onto the surface was designed. A matching based on comparing the signatures of specific points of the pattern was implemented. This method allows sparse reconstruction from a single acquisition and simplifies the handling of the point cloud when transforming it into a surface mesh.
APA, Harvard, Vancouver, ISO, and other styles
5

Weber, Loriane. "Iterative tomographic X-Ray phase reconstruction." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI085/document.

Full text
Abstract:
Phase contrast imaging has been of growing interest in the biomedical field, since it provides an enhanced contrast compared to attenuation-based imaging. Actually, the phase shift of the incoming X-ray beam induced by an object can be up to three orders of magnitude higher than its attenuation, particularly for soft tissues in the imaging energy range. Phase contrast can be achieved, among other existing techniques, by letting a coherent X-ray beam freely propagate after the sample. In this case, the obtained and recorded signals can be modeled as Fresnel diffraction patterns. The challenge of quantitative phase imaging is to retrieve, from these diffraction patterns, both the attenuation and the phase information of the imaged object, quantities that are non-linearly entangled in the recorded signal. In this work we consider developments and applications of X-ray phase micro- and nano-CT. First, we investigated the reconstruction of seeded bone scaffolds using multiple-distance phase acquisitions. Phase retrieval is performed using the mixed approach, based on a linearization of the contrast model, and followed by filtered back-projection. We implemented an automatic version of the phase reconstruction process to allow for the reconstruction of large sets of samples. The method was applied to bone scaffold data in order to study the influence of different bone cell cultures on bone formation. Then, human bone samples were imaged using phase nano-CT, and the potential of phase nano-imaging to analyze the morphology of the lacuno-canalicular network is shown. We applied existing tools to further characterize the mineralization and the collagen orientation of these samples. Phase retrieval, however, is an ill-posed inverse problem. A general reconstruction method does not exist. Existing methods are either sensitive to low-frequency noise, or put stringent requirements on the imaged object. Therefore, we considered the joint inverse problem of combining both phase retrieval and tomographic reconstruction. We proposed an innovative algorithm for this problem, which combines phase retrieval and tomographic reconstruction into a single iterative regularized loop, where a linear phase contrast model is coupled with an algebraic tomographic reconstruction algorithm. This algorithm is applied to numerically simulated data.
APA, Harvard, Vancouver, ISO, and other styles
6

Eijk, Rutger Mark van der. "Track reconstruction in the LHCb experiment." [S.l. : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2002. http://dare.uva.nl/document/66446.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Boulch, Alexandre. "Reconstruction automatique de maquettes numériques 3D." Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1099/document.

Full text
Abstract:
The interest in digital models in the building industry is growing rapidly. These models centralize all the information concerning the building and facilitate communication between the players of construction: cost evaluation, physical simulations, virtual presentations, building lifecycle management, site supervision, etc. Although building models now tend to be used for large projects of new constructions, there are no such models for existing buildings. In particular, old buildings do not have digital 3D models and information, whereas they would benefit the most from them, e.g., to plan cost-effective renovation that achieves good thermal performance. Such 3D models are reconstructed from the real building. Lately a number of automatic reconstruction methods have been developed, either from laser or photogrammetric data. Lasers are precise and produce dense point clouds. Their price has dropped considerably in the past few years, making them affordable for industry. Photogrammetry, often less precise and failing in uniform regions (e.g. bare walls), is a lot cheaper than lasers. However, most approaches only reconstruct a surface from point clouds, not a semantically rich building model. A building information model is the alliance of a geometry and a semantics for the scene elements. The main objective of this thesis is to define a framework for digital model production regarding both geometry and semantics, using point clouds as input. The reconstruction process is divided into four parts, gradually enriching information, from the points to the final digital mockup. First, we define a normal estimator for unstructured point clouds based on a robust Hough transform. It estimates accurate normals, even near sharp edges and corners, and deals with the anisotropy inherent to laser scans. Then, primitives such as planes are extracted from the point cloud. To avoid over-segmentation issues, we develop a general and robust statistical criterion for shape merging. It only requires a distance function from points to shapes. A piecewise-planar surface is then reconstructed. Plane hypotheses for visible and hidden parts of the scene are inserted into a 3D plane arrangement. Cells of the arrangement are labelled full or empty using a new regularization on corner count and edge length. A linear formulation allows us to efficiently solve this labelling problem with a continuous relaxation. Finally, we propose an approach based on constrained attribute grammars for 3D model semantization. This method is entirely bottom-up. We prevent the possible combinatorial explosion by introducing maximal operators and an order on variable instantiation.
APA, Harvard, Vancouver, ISO, and other styles
8

Viswanathan, Kartik. "Représentation reconstruction adaptative des hologrammes numériques." Thesis, Rennes, INSA, 2016. http://www.theses.fr/2016ISAR0012/document.

Full text
Abstract:
With the increased interest in 3D video technologies for commercial purposes, there is renewed interest in holography for providing true, life-like images, mainly for the hologram's capability to reconstruct all the parallaxes needed for truly immersive views that can be observed by anyone (human, machine or animal). But the large amount of information contained in a hologram makes it quite unsuitable for transmission over existing networks in real time. In this thesis we present techniques to effectively reduce the size of the hologram by pruning portions of it based on the position of the observer. A large amount of the information contained in the hologram is not used if the number of observers of an immersive scene is limited. Under this assumption, parts of the hologram can be pruned out and only the requisite parts that can cause diffraction at an observer point are retained. For reconstruction, these pruned holograms can be propagated numerically or optically. Wavelet transforms are employed to capture the localized frequency information from the hologram. The selection of the wavelets is based on their localization capabilities in the space and frequency domains. Gabor and Morlet wavelets possess good localization in space and frequency and are good candidates for the view-based reconstruction system. Shannon wavelets are also employed, and a frequency-domain application of the Shannon wavelet is shown to provide fast calculations for real-time pruning and reconstruction.
APA, Harvard, Vancouver, ISO, and other styles
9

Vuiets, Anatoliy. "Reconstruction empirique du spectre ultraviolet solaire." Thesis, Orléans, 2015. http://www.theses.fr/2015ORLE2015/document.

Full text
Abstract:
The spectrally-resolved radiative output of the Sun (SSI) in the UV band, i.e. at wavelengths below 300 nm, is a key quantity for specifying the state of the middle and upper terrestrial atmosphere. This quantity is required in numerous space weather applications, and also for climate studies. Unfortunately, SSI observations suffer from several problems: they have numerous spectral and temporal gaps, instruments are prone to degradation and often disagree, etc. This has stimulated the development of various types of SSI models. Proxy-based models suffer from a lack of physical interpretation and are only as good as the proxies they rely on. Semi-empirical models do not perform well below 300 nm, where the local thermodynamic equilibrium approximation no longer holds. We have developed an empirical model which assumes that variations in the SSI are driven by the solar surface magnetic flux. This model proceeds by segmenting solar magnetograms into different structures. In contrast to existing models, these features are classified by their area (and not their intensity), and their spectral signatures are derived from the observations (and not from models). The quality of the reconstruction is comparable to that of other models. More importantly, we find that only two classes of solar features are required to properly reproduce the spectral variability. Furthermore, we find that a coarse radial resolution suffices to account for geometrical line-of-sight effects. Finally, we show how the performance of the model on different time-scales is related to the optical thickness of the emission lines.
APA, Harvard, Vancouver, ISO, and other styles
10

Lejeune, Joseph. "Surface recovery and reconstruction after deformation." Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAE031/document.

Full text
Abstract:
Polymers' low weight, deformability and easy manufacturing make them attractive materials for tires, organic glasses, sealing applications, and more. Their mechanical properties are nonetheless poorly understood. In particular, this thesis investigates two aspects, time dependency and contact behavior, for two transparent polymers: PMMA and CR39. The time dependency of the mechanical behavior is characterized through the construction of stress-relaxation and contact master curves. The mechanical contact behavior is analyzed through indentation creep and recovery experiments. Moreover, immediate scratch recovery is measured for the first time in this thesis. Finally, the uniaxial data are used to build constitutive laws, whose accuracy with respect to contact tests is assessed by finite element modeling.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Document reconstruction"

1

Q: A reconstruction and commentary. Leuven: Peeters, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

The new Damascus Document: The midrash on the eschatological Torah : reconstruction, translation, and commentary. Leiden: Brill, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Vincent, Luc M., Jonathan J. Hull, IS&T--The Society for Imaging Science and Technology, and Society of Photo-Optical Instrumentation Engineers, eds. Document recognition III: 29-30 January, 1996, San Jose, California. Bellingham, Wash.: SPIE, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lew, H. S. Supporting document for rehabilitation cost estimates of FEMA buildings. Gaithersburg, MD: U.S. Dept. of Commerce, Technology Administration, National Institute of Standards and Technology, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Friedheim, William. Freedom's unfinished revolution: An inquiry into the civil war and reconstruction: a primary source text and document reader. New York: New Press, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Plus fort que le destin: Comment ils se sont reconstruits. Paris: Ed. Anne Carrière, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Husraw Ier, reconstructions d'un règne: Sources et documents. Paris: Association pour l'Avancement des Études Iraniennes, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Graham, Malbone W. The Lithuanian renaissance and reconstruction 1920-1925: With select documents. Chicago, Ill.: Lithuanian Research and Studies Center, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Document reconstruction"

1

Gribov, Alexander, and Eugene Bodansky. "Reconstruction of Orthogonal Polygonal Lines." In Document Analysis Systems VII, 462–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11669487_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ranca, Razvan, and Iain Murray. "A Composable Strategy for Shredded Document Reconstruction." In Computer Analysis of Images and Patterns, 324–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40246-3_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lin, Huei-Yung, and Wen-Cheng Fan-Chiang. "Image-Based Techniques for Shredded Document Reconstruction." In Advances in Image and Video Technology, 155–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-92957-4_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Badawy, H. A., E. Emary, Mohamed Yassien, and Mahmoud Fathi. "Discrete Grey Wolf Optimization for Shredded Document Reconstruction." In Advances in Intelligent Systems and Computing, 284–93. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99010-1_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gomez, Juan Carlos, and Marie-Francine Moens. "Document Categorization Based on Minimum Loss of Reconstruction Information." In Advances in Computational Intelligence, 91–103. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37798-3_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ngo, Phuc. "Digital Line Segment Detection for Table Reconstruction in Document Images." In Image Analysis and Processing – ICIAP 2022, 211–24. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06430-2_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Zhuoying, Qingkai Fang, and Yongtao Wang. "Geometric Object 3D Reconstruction from Single Line Drawing Image Based on a Network for Classification and Sketch Extraction." In Document Analysis and Recognition – ICDAR 2021, 598–613. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86549-8_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Li, Haibin Liao, and Youbin Chen. "Document Image Super-Resolution Reconstruction Based on Clustering Learning and Kernel Regression." In Communications in Computer and Information Science, 65–77. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-3005-5_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Alhéritière, Héloïse, Walid Amaïeur, Florence Cloppet, Camille Kurtz, Jean-Marc Ogier, and Nicole Vincent. "Straight Line Reconstruction for Fully Materialized Table Extraction in Degraded Document Images." In Discrete Geometry for Computer Imagery, 317–29. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14085-4_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhao, Lin, Ning Li, Xin Peng, and Qi Liang. "An Improved Algorithm of Logical Structure Reconstruction for Re-flowable Document Understanding." In Natural Language Processing and Chinese Computing, 339–46. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25207-0_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Document reconstruction"

1

Pimenta, Andre, Edson Justino, Luiz S. Oliveira, and Robert Sabourin. "Document reconstruction using dynamic programming." In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4959853.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ukovich, Anna, Alessandra Zacchigna, Giovanni Ramponi, and Gabriella Schoier. "Using clustering for document reconstruction." In Electronic Imaging 2006, edited by Edward R. Dougherty, Jaakko T. Astola, Karen O. Egiazarian, Nasser M. Nasrabadi, and Syed A. Rizvi. SPIE, 2006. http://dx.doi.org/10.1117/12.643761.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Solana, C. "Document Reconstruction Based on Feature Matching." In XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05). IEEE, 2005. http://dx.doi.org/10.1109/sibgrapi.2005.26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shao, Yuanlong, Xinguo Liu, Xueying Qin, Yi Xu, and Hujun Bao. "Locally Developable Constraint for Document Surface Reconstruction." In 2009 10th International Conference on Document Analysis and Recognition. IEEE, 2009. http://dx.doi.org/10.1109/icdar.2009.57.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

SantoshKumar, S. A., and B. K. ShreyamshaKumar. "Edge envelope based reconstruction of torn document." In the Seventh Indian Conference. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1924559.1924611.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Yehong, Honghao Qiu, Jiaqi Lu, and Yong Fang. "Shredded Document Reconstruction Based on Intelligent Algorithms." In 2014 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE, 2014. http://dx.doi.org/10.1109/csci.2014.25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Qin, and John M. Danskin. "Bit map reconstruction for document image compression." In Photonics East '96, edited by C. C. Jay Kuo. SPIE, 1996. http://dx.doi.org/10.1117/12.257288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Deshmukh, Pooja, and Prashant Paikrao. "Reconstruction of Torn Document by Moore Algorithm." In 2019 International Conference on Intelligent Computing and Remote Sensing (ICICRS). IEEE, 2019. http://dx.doi.org/10.1109/icicrs46726.2019.9555867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kleber, Florian, Markus Diem, and Robert Sablatnig. "Document reconstruction by layout analysis of snippets." In IS&T/SPIE Electronic Imaging, edited by David G. Stork, Jim Coddington, and Anna Bentkowska-Kafel. SPIE, 2010. http://dx.doi.org/10.1117/12.843687.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Flagg, Cristopher, and Ophir Frieder. "Searching Document Repositories using 3D Model Reconstruction." In DocEng '19: ACM Symposium on Document Engineering 2019. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3342558.3345389.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Document reconstruction"

1

Hendricks, Kasey. Data for Alabama Taxation and Changing Discourse from Reconstruction to Redemption. University of Tennessee, Knoxville Libraries, 2021. http://dx.doi.org/10.7290/wdyvftwo4u.

Full text
Abstract:
At their most basic level, taxes carry, in the words of Schumpeter ([1918] 1991), “the thunder of history” (p. 101). They say something about the ever-changing structures of social, economic, and political life. Taxes offer a blueprint, in both symbolic and concrete terms, for uncovering the most fundamental arrangements in society – stratification included. The historical retellings captured within these data highlight the politics of taxation in Alabama from 1856 to 1901, including conflicts over whom money is expended upon as well as struggles over who carries their fair share of the tax burden. The selected timeline overlaps with the formation of five of the six constitutions adopted in the State of Alabama, in 1861, 1865, 1868, 1875, and 1901. Having these years as the focal point makes for an especially meaningful case study, given how these constitutional formations made the state a site of considerable political debate. These data contain 5,121 pages of periodicals from newspapers throughout the state, including: Alabama Sentinel, Alabama State Intelligencer, Alabama State Journal, Athens Herald, Daily Alabama Journal, Daily Confederation, Elyton Herald, Mobile Daily Tribune, Mobile Tribune, Mobile Weekly Tribune, Morning Herald, Nationalist, New Era, Observer, Tuscaloosa Observer, Tuskegee News, Universalist Herald, and Wilcox News and Pacificator. The contemporary relevance of these historical debates manifests in Alabama’s current constitution, which was adopted in 1901. This constitution departs from well-established conventions of treating the document as a legal framework that specifies a general role of governance but is firm enough to protect the civil rights and liberties of the population. Instead, it stands more as a legislative document, or procedural straitjacket, that preempts through statutory material what regulatory action is possible by the state. These barriers included a refusal to establish a state board of education and enact a tax structure for local education, in addition to debt and tax limitations that constrained government capacity more broadly. Prohibitive features like these are among the reasons that, by 2020, the 1901 Constitution had been amended nearly 1,000 times since its adoption. However, similar procedural barriers have been duplicated across the U.S. since (e.g., California’s Proposition 13 of 1978). Reference: Schumpeter, Joseph. [1918] 1991. “The Crisis of the Tax State.” Pp. 99-140 in The Economics and Sociology of Capitalism, edited by Richard Swedberg. Princeton University Press.
APA, Harvard, Vancouver, ISO, and other styles
2

Carrera-Marquis, Daniela, Marisela Canache, and Franklin Espiga. Hurricane Dorian “AT-A-GLANCE” Assessment of the Effects and Impacts DALA Visualization. Inter-American Development Bank, March 2022. http://dx.doi.org/10.18235/0004056.

Full text
Abstract:
After Hurricane Dorian and the provision of initial emergency services, the government of The Bahamas asked the Inter-American Development Bank (IDB) to assess the resulting damage, losses, and additional costs. The IDB requested technical assistance with the assessment from the United Nations Economic Commission for Latin America and the Caribbean (ECLAC). The report, Assessment of the Effects and Impacts of HURRICANE DORIAN in THE BAHAMAS, published in August 2020, presents the results in detail (1). It also offers recommendations to guide a resilient reconstruction process that can reduce vulnerabilities and risks for the population and for every sector of the economy. It is the fourth assessment of this kind conducted by the IDB and ECLAC in The Bahamas since 2015. The Bahamas Country Office Preparedness Recovery and Reconstruction Team (P2RCT) has prepared a visual summary of the Assessment of the Effects and Impacts of HURRICANE DORIAN in THE BAHAMAS. This brief will facilitate the dissemination and awareness of key information related to The Bahamas’ vulnerability to the effects of natural disasters, as well as emphasize the need to strengthen efforts in policy management and disaster risk management (DRM) to achieve greater levels of resilience and risk mitigation. The HURRICANE DORIAN “AT-A-GLANCE” Assessment of the Effects and Impacts DALA Visualization document collects economic data and the most relevant aspects of the work carried out during the field sessions, together with analysis and recommendations from IDB and ECLAC experts.
APA, Harvard, Vancouver, ISO, and other styles
3

Gydesen, S. P. Documents containing operating data for Hanford separations processes, 1944–1972. Hanford Environmental Dose Reconstruction Project. Office of Scientific and Technical Information (OSTI), September 1992. http://dx.doi.org/10.2172/10184454.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gydesen, S. P. Declassifications requested by the Technical Steering Panel of Hanford documents produced 1944–1960. Hanford Environmental Dose Reconstruction Project. Office of Scientific and Technical Information (OSTI), September 1992. http://dx.doi.org/10.2172/10185833.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Saville, Alan, and Caroline Wickham-Jones, eds. Palaeolithic and Mesolithic Scotland : Scottish Archaeological Research Framework Panel Report. Society for Antiquaries of Scotland, June 2012. http://dx.doi.org/10.9750/scarf.06.2012.163.

Full text
Abstract:
Why research Palaeolithic and Mesolithic Scotland? Palaeolithic and Mesolithic archaeology sheds light on the first colonisation and subsequent early inhabitation of Scotland. It is a growing and exciting field in which increasing Scottish evidence has been given wider significance in the context of European prehistory. It extends over a long period, which saw great changes, including substantial environmental transformations and the impact of, and societal response to, climate change. The period as a whole provides the foundation for the human occupation of Scotland and is crucial for understanding prehistoric society, both for Scotland and across North-West Europe. Within the Palaeolithic and Mesolithic periods there are considerable opportunities for pioneering research. Individual projects can still have a substantial impact, and there remain opportunities for pioneering discoveries, including cemeteries, domestic and other structures, and stratified sites, and for exploring the huge evidential potential of water-logged and underwater sites. Palaeolithic and Mesolithic archaeology also stimulates and draws upon exciting multi-disciplinary collaborations.

Panel Task and Remit: The panel remit was to review critically the current state of knowledge and consider promising areas of future research into the earliest prehistory of Scotland. This was undertaken with a view to improved understanding of all aspects of the colonisation and inhabitation of the country by peoples practising a wholly hunter-fisher-gatherer way of life prior to the advent of farming. In so doing, it was recognised as particularly important that both environmental data (including vegetation, fauna, sea level, and landscape work) and cultural change during this period be evaluated. The resultant report outlines the different areas of research in which archaeologists interested in early prehistory work, and highlights the research topics to which they aspire. The report is structured by theme: history of investigation; reconstruction of the environment; the nature of the archaeological record; methodologies for recreating the past; and finally, the lifestyles of past people – the latter representing both a statement of current knowledge and the ultimate aim for archaeologists, the goal of all the former sections. The document is reinforced by material online which provides further detail and resources. The Palaeolithic and Mesolithic panel report of ScARF is intended as a resource to be utilised, built upon, and kept updated, hopefully by those it has helped inspire and inform as well as those who follow in their footsteps.

Future Research: The main recommendations of the panel report can be summarized under four key headings:

Visibility: Due to the considerable length of time over which sites were formed, and the predominant mobility of the population, early prehistoric remains are to be found right across the landscape, although they often survive as ephemeral traces and in low densities. Therefore, all archaeological work should take into account the expectation of encountering early prehistoric remains. This applies equally to commercial and research archaeology, and to amateur activity, which often makes the initial discovery. This should not be seen as an obstacle but as a benefit, and not finding such remains should be cause for question. There is no doubt that important evidence of these periods remains unrecognised in private, public, and commercial collections, and there is a strong need for backlog evaluation, proper curation, and analysis. The inadequate representation of Palaeolithic and Mesolithic information in existing national and local databases must be addressed.

Collaboration: Multi-disciplinary, collaborative, and cross-sector approaches must be encouraged – site prospection, prediction, recognition, and contextualisation are key areas to this end. Reconstructing past environments and their chronological frameworks, and exploring submerged and buried landscapes, offer existing examples of fruitful cross-disciplinary work. Palaeolithic and Mesolithic archaeology has an important place within Quaternary science, and the potential for deeply buried remains means that geoarchaeology should have a prominent role.

Innovation: Research-led projects are currently making a substantial impact across all aspects of Palaeolithic and Mesolithic archaeology; a funding policy that acknowledges risk and promotes the innovation that these periods demand should be encouraged. The exploration of lesser-known areas, work on different types of site, new approaches to artefacts, and the application of novel methodologies should all be promoted when engaging with the challenges of early prehistory.

Tackling the ‘big questions’: Archaeologists should engage with the big questions of earliest prehistory in Scotland, including the colonisation of new land, how lifestyles in past societies were organised, the effects of and responses to environmental change, and the transitions to new modes of life. This should be done through a holistic view of the available data, encompassing all the complexities of interpretation and developing competing and testable models. Scottish data can be used to address many currently topical research questions in archaeology, and will provide a springboard to a better understanding of early prehistoric life in Scotland and beyond.
APA, Harvard, Vancouver, ISO, and other styles
6

Technical documents used in dose reconstruction. U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, December 2005. http://dx.doi.org/10.26616/nioshpub2005140.

Full text
APA, Harvard, Vancouver, ISO, and other styles