Dissertations / Theses on the topic 'Projective code'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Projective code.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Qian, Liqin. "Contributions to the theory of algebraic coding on finite fields and rings and their applications." Electronic Thesis or Diss., Paris 8, 2022. http://www.theses.fr/2022PA080064.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
La théorie du codage algébrique sur les corps et les anneaux finis a une grande importance dans la théorie de l'information en raison de leurs diverses applications dans les schémas de partage de secrets, les graphes fortement réguliers, les codes d'authentification et de communication. Cette thèse aborde plusieurs sujets de recherche selon les orientations dans ce contexte, dont les méthodes de construction sont au cœur de nos préoccupations. Plus précisément, nous nous intéressons aux constructions de codes optimaux (ou codes asymptotiquement optimaux), aux constructions de codes linéaires à "hull" unidimensionnelle, aux constructions de codes minimaux et aux constructions de codes linéaires projectifs. Les principales contributions sont résumées comme suit. Cette thèse fournit une description explicite des caractères additifs et multiplicatifs sur les anneaux finis (précisément $\mathbb{F}_q+u\mathbb{F}_q~(u^2=0)$ et $\mathbb{F}_q+u\mathbb{F}_q~(u^2=u)$), utilise des sommes de Gauss, hyper-Eisenstein et de Jacobi et fournit plusieurs classes de nouveaux codes optimaux (ou asymptotiquement optimaux) avec des paramètres flexibles, propose des codes linéaires (optimaux ou quasi-optimaux) avec une "hull" unidimensionnelle sur des corps finis en utilisant des outils de la théorie des sommes de Gauss. De plus, cette thèse explore plusieurs classes de codes linéaires binaires (optimaux pour la borne de Griesmer bien connue) sur des corps finis basés sur deux constructions génériques utilisant des fonctions. Aussi, elle détermine leurs paramètres et leurs distributions de poids et en déduit plusieurs familles infinies de codes linéaires minimaux. Enfin, elle étudie des constructions optimales de plusieurs classes de codes linéaires binaires projectifs avec peu de poids et leurs codes duaux correspondants.
Algebraic coding theory over finite fields and rings has always been an important research topic in information theory thanks to its various applications in secret sharing schemes, strongly regular graphs, authentication and communication codes. This thesis addresses several research topics according to the orientations in this context, whose construction methods are at the heart of our concerns. Specifically, we are interested in the constructions of optimal codebooks (or asymptotically optimal codebooks), the constructions of linear codes with a one-dimensional hull, the constructions of minimal codes, and the constructions of projective linear codes. The main contributions are summarized as follows. This thesis gives an explicit description of additive and multiplicative characters on finite rings (precisely $\mathbb{F}_q+u\mathbb{F}_q~(u^2=0)$ and $\mathbb{F}_q+u\mathbb{F}_q~(u^2=u)$), employs Gaussian, hyper Eisenstein and Jacobi sums, and proposes several classes of new optimal (or asymptotically optimal) codebooks with flexible parameters. Next, it proposes (optimal or nearly optimal) linear codes with a one-dimensional hull over finite fields by employing tools from the theory of Gaussian sums. It develops an original method to construct these codes, presents sufficient conditions for one-dimensional hull codes and gives a lower bound on their minimum distance. Besides, this thesis explores several classes of binary linear codes (optimal for the well-known Griesmer bound) over finite fields based on two generic constructions using functions. It determines their parameters and weight distributions and derives several infinite families of minimal linear codes. Finally, it studies constructions of several classes of projective binary linear codes with few weights (optimal for the sphere-packing bound) and their corresponding dual codes.
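For reference, the two classical optimality criteria invoked in this abstract are, for an [n, k, d]_q linear code, the Griesmer bound and the sphere-packing (Hamming) bound; a code is typically called optimal with respect to a bound when it attains it:

\[ n \;\ge\; \sum_{i=0}^{k-1} \left\lceil \frac{d}{q^{\,i}} \right\rceil \quad \text{(Griesmer)}, \qquad\qquad \sum_{i=0}^{\lfloor (d-1)/2 \rfloor} \binom{n}{i}(q-1)^{i} \;\le\; q^{\,n-k} \quad \text{(sphere packing)}. \]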
2

Patraucean, Viorica. "Detection and identification of elliptical structure arrangements in images : theory and algorithms." Thesis, Toulouse, INPT, 2012. http://www.theses.fr/2012INPT0020/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cette thèse porte sur différentes problématiques liées à la détection, l'ajustement et l'identification de structures elliptiques en images. Nous plaçons la détection de primitives géométriques dans le cadre statistique des méthodes a contrario afin d'obtenir un détecteur de segments de droites et d'arcs circulaires/elliptiques sans paramètres et capable de contrôler le nombre de fausses détections. Pour améliorer la précision des primitives détectées, une technique analytique simple d'ajustement de coniques est proposée ; elle combine la distance algébrique et l'orientation du gradient. L'identification d'une configuration de cercles coplanaires en images par une signature discriminante demande normalement la rectification euclidienne du plan contenant les cercles. Nous proposons une technique efficace de calcul de la signature qui s'affranchit de l'étape de rectification ; elle est fondée exclusivement sur des propriétés invariantes du plan projectif, devenant elle-même projectivement invariante.
This thesis deals with different aspects concerning the detection, fitting, and identification of elliptical features in digital images. We put the geometric feature detection in the a contrario statistical framework in order to obtain a combined parameter-free line segment, circular/elliptical arc detector, which controls the number of false detections. To improve the accuracy of the detected features, especially in cases of occluded circles/ellipses, a simple closed-form technique for conic fitting is introduced, which merges efficiently the algebraic distance with the gradient orientation. Identifying a configuration of coplanar circles in images through a discriminant signature usually requires the Euclidean reconstruction of the plane containing the circles. We propose an efficient signature computation method that bypasses the Euclidean reconstruction; it relies exclusively on invariant properties of the projective plane, thus being itself invariant under perspective.
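As a purely illustrative formulation (assumed notation, not necessarily the exact cost used in the thesis), a conic F_θ(x, y) = a x² + b x y + c y² + d x + e y + f = 0 can be fitted by combining the algebraic distance with a gradient-orientation term, both quadratic in the parameter vector θ = (a, b, c, d, e, f):

\[ \min_{\theta}\; \sum_i F_\theta(x_i, y_i)^2 \;+\; \lambda \sum_i \big( \nabla F_\theta(x_i, y_i) \times \mathbf{g}_i \big)^2 , \]

where g_i is the measured image gradient at point i and × denotes the scalar 2D cross product, so the second term penalizes misalignment between the conic normal and the observed gradient; with a suitable normalization constraint on θ, the minimizer can be obtained in closed form from a generalized eigenvalue problem.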
3

Caullery, Florian. "Polynomes sur les corps finis pour la cryptographie." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4013/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Les fonctions de F_q dans lui-même sont des objets étudiés dans divers domaines tels que la cryptographie, la théorie des codes correcteurs d'erreurs, la géométrie finie ainsi que la géométrie algébrique. Il est bien connu que ces fonctions sont en correspondance exacte avec les polynômes en une variable à coefficients dans F_q. Nous étudierons trois classes de polynômes particulières : les polynômes Presque Parfaitement Non linéaires (Almost Perfect Nonlinear (APN)), les polynômes planaires ou parfaitement non linéaires (PN) et les o-polynômes. Les fonctions APN sont principalement étudiées pour leurs applications en cryptographie. En effet, ces fonctions sont celles qui offrent la meilleure résistance contre la cryptanalyse différentielle. Les polynômes PN et les o-polynômes sont, eux, liés à des problèmes célèbres de géométrie finie. Les premiers décrivent des plans projectifs et les seconds sont en correspondance directe avec les ovales et hyperovales de P^2(F_q). Néanmoins, leur champ d'application a été récemment étendu à la cryptographie symétrique et à la théorie des codes correcteurs d'erreurs. L'un des moyens utilisés pour compléter la classification est de considérer les polynômes présentant l'une des propriétés recherchées sur une infinité d'extensions de F_q. Ces fonctions sont appelées fonctions APN (respectivement PN ou o-polynômes) exceptionnelles. Nous étendrons la classification des polynômes APN et PN exceptionnels et nous donnerons une description complète des o-polynômes exceptionnels. Les techniques employées sont basées principalement sur la borne de Lang-Weil et sur des méthodes élémentaires.
Functions from F_q to itself are interesting objects arising in various domains such as cryptography, coding theory, finite geometry or algebraic geometry. It is well known that these functions admit a univariate polynomial representation. There exist many interesting classes of such polynomials with plenty of applications in pure or applied mathematics. We are interested in three of them: Almost Perfect Nonlinear (APN) polynomials, planar (PN) polynomials and o-polynomials. APN polynomials are mostly used in cryptography to provide S-boxes with the best resistance to differential cryptanalysis, and in coding theory to construct double error-correcting codes. PN polynomials and o-polynomials first appeared in finite geometry. They give rise respectively to projective planes and to ovals and hyperovals in P^2(F_q). Their field of applications was also recently extended to symmetric cryptography and error-correcting codes. A complete classification of APN, PN and o-polynomials is an interesting open problem that has been widely studied by many authors. A first approach toward the classification was to consider only power functions, and the studies were recently extended to polynomial functions. One way to approach the classification problem is to consider the polynomials that are APN, PN or o-polynomials over infinitely many extensions of F_q, namely the exceptional APN, PN or o-polynomials. We improve the partial classification of exceptional APN and PN polynomials and give a full classification of exceptional o-polynomials. The proof technique is based on the Lang-Weil bound for the number of rational points of algebraic varieties, together with elementary methods.
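For context, the standard definitions behind these classes (not specific to this thesis) are: a function f : F_{2^n} → F_{2^n} is almost perfect nonlinear (APN) when every nontrivial derivative is at most 2-to-1, and, over a field of odd characteristic, f is planar (PN) when every nontrivial derivative is a bijection:

\[ f \ \text{is APN} \iff \#\{\, x \in \mathbb{F}_{2^n} : f(x+a) + f(x) = b \,\} \le 2 \quad \forall\, a \ne 0,\ \forall\, b , \]
\[ f \ \text{is PN} \iff x \mapsto f(x+a) - f(x) \ \text{is a bijection of } \mathbb{F}_{q} \quad \forall\, a \ne 0 \ (q \text{ odd}). \]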
4

Wong, Chee Heng. "Aerodynamic analysis of M33 projectile using the CFX code." Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/10711.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The M33 projectile has been analyzed using the ANSYS CFX code that is based on the numerical solution of the full Navier-Stokes equations. Simulation data were obtained for various Mach numbers ranging from M = 0.5 to M = 2.6 at 0° and 2° angles of attack. Simulation data were also obtained for angles of attack from 0° to 85° at M = 0.5. For Mach numbers between M = 0.5 and 2.6, the results obtained using the combined k-epsilon and Shear Stress Transport model show good agreement with the experimental range data for the normal force and pitching moment coefficients. The drag coefficient at zero angle of attack tended to be overpredicted by an average error of 11.6%, with the highest error occurring at M = 1.5. For varying angles of attack up to 85° at M = 0.5, the results obtained from the CFX code were compared with simulation results obtained from AP09. The data showed good agreement only up to a 20° angle of attack.
5

St George, Julia. "Visual codes of secrecy: photography of death and projective identification." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060608.143049/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wu, Junhua. "Geometric structures and linear codes related to conics in classical projective planes of odd orders." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 105 p, 2009. http://proquest.umi.com/pqdweb?did=1654490971&sid=2&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Huang, Jen-Fa. "On finding generator polynomials and parity-check sums of binary projective geometry codes." Thesis, University of Ottawa (Canada), 1985. http://hdl.handle.net/10393/4800.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chu, Lei. "Colouring Cayley Graphs." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We will discuss three ways to bound the chromatic number of a Cayley graph. 1. If the connection set contains information about a smaller graph, then these two graphs are related; using this information, we will show that such Cayley graphs cannot have chromatic number three. 2. We will prove a general statement that all vertex-transitive maximal triangle-free graphs on n vertices with valency greater than n/3 are 3-colourable. Since Cayley graphs are vertex-transitive, this bound for general graphs also applies to Cayley graphs. 3. Since Cayley graphs of abelian groups arise from vector spaces, we can view the connection set as a set of points in a projective geometry. We will give a characterization of all large complete caps, from which we derive that all maximal triangle-free cubelike graphs on 2^n vertices with valency greater than 2^n/4 are either bipartite or 4-colourable.
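To make the objects concrete, here is a minimal sketch (illustrative only, not from the thesis): a cubelike graph is the Cayley graph of Z_2^n with a connection set of nonzero binary vectors, and a greedy colouring gives a quick upper bound on its chromatic number.

# Minimal sketch: build a cubelike Cayley graph on Z_2^n from a connection set
# of nonzero vectors, then greedy-colour it to upper-bound the chromatic number.
from itertools import product

def cayley_graph_z2n(n, connection_set):
    """Vertices are n-bit tuples; x ~ y iff x XOR y is in the connection set."""
    vertices = [tuple(v) for v in product((0, 1), repeat=n)]
    conn = {tuple(c) for c in connection_set}
    adj = {v: set() for v in vertices}
    for v in vertices:
        for c in conn:
            w = tuple(a ^ b for a, b in zip(v, c))
            adj[v].add(w)
    return adj

def greedy_colouring(adj):
    """Colour vertices in arbitrary order with the smallest available colour."""
    colour = {}
    for v in adj:
        used = {colour[u] for u in adj[v] if u in colour}
        colour[v] = next(k for k in range(len(adj)) if k not in used)
    return colour

# Example: the 3-cube Q3 is the cubelike graph whose connection set is the standard basis.
adj = cayley_graph_z2n(3, [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
colours = greedy_colouring(adj)
print(max(colours.values()) + 1)  # 2: Q3 is bipartite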
9

Passuello, Alberto. "Semidefinite programming in combinatorial optimization with applications to coding theory and geometry." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00948055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We apply the semidefinite programming method to obtain a new upper bound on the cardinality of codes made of subspaces of a linear vector space over a finite field. Such codes are of interest in network coding. Next, with the same method, we prove an upper bound on the cardinality of sets avoiding one distance in the Johnson space, which is essentially Schrijver's semidefinite program. This bound is used to improve existing results on the measurable chromatic number of the Euclidean space. We build a new hierarchy of semidefinite programs whose optimal values give upper bounds on the independence number of a graph. This hierarchy is based on matrices arising from simplicial complexes. We show some properties that our hierarchy shares with other classical ones. As an example, we show its application to the problem of determining the independence number of Paley graphs.
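For orientation (a standard fact, not a result of the thesis), the classical first-level semidefinite relaxation that such hierarchies strengthen is the Lovász theta number, which already upper-bounds the independence number α(G):

\[ \alpha(G) \;\le\; \vartheta(G) \;=\; \max\big\{\, \langle J, X\rangle \;:\; \operatorname{Tr}(X) = 1,\ X_{ij} = 0 \ \text{for all edges } ij,\ X \succeq 0 \,\big\}, \]

where J is the all-ones matrix; higher levels add further linear constraints or enlarge the matrix variable.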
10

Alshawish, H. M. M. "3-D object classification using space-time coded light projection." Thesis, University of Newcastle Upon Tyne, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336757.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Bauer, Karl Gregory. "Projection based image restoration, super-resolution and error correction codes." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/284043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Super-resolution is the ability of a restoration algorithm to restore meaningful spatial frequency content beyond the diffraction limit of the imaging system. The Gerchberg-Papoulis (GP) algorithm is one of the most celebrated algorithms for super-resolution. The GP algorithm is conceptually simple and demonstrates the importance of using a priori information in the formation of the object estimate. In the first part of this dissertation the continuous GP algorithm is discussed in detail and shown to be a projection on convex sets algorithm. The discrete GP algorithm is shown to converge in the exactly-, over- and under-determined cases. A direct formula for the computation of the estimate at the kth iteration and at convergence is given. This analysis of the discrete GP algorithm sets the stage to connect super-resolution to error-correction codes. Reed-Solomon codes are used for error-correction in magnetic recording devices, compact disk players and by NASA for space communications. Reed-Solomon codes have a very simple description when analyzed with the Fourier transform. This signal processing approach to error-correction codes allows the error-correction problem to be compared with the super-resolution problem. The GP algorithm for super-resolution is shown to be equivalent to the correction of errors with a Reed-Solomon code over an erasure channel. The Restoration from Magnitude (RFM) problem seeks to recover a signal from the magnitude of the spectrum. This problem has applications to imaging through a turbulent atmosphere. The turbulent atmosphere causes localized changes in the index of refraction and introduces different phase delays in the data collected. Synthetic aperture radar (SAR) and hyperspectral imaging systems are capable of simultaneously recording multiple images of different polarizations or wavelengths. Each of these images will experience the same turbulent atmosphere and have a common phase distortion. A projection based restoration algorithm for the simultaneous restoration of pairs of images experiencing a common phase distortion is presented.
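As an illustration of the iteration discussed above (a minimal sketch under assumed notation, not code from the dissertation), the discrete Gerchberg-Papoulis algorithm alternates projections onto two convex sets: signals consistent with the known samples, and signals with a prescribed frequency support.

# Minimal sketch of a discrete Gerchberg-Papoulis iteration (assumed setup).
import numpy as np

def gerchberg_papoulis(known, known_idx, band, n, iters=200):
    """known: observed samples at positions known_idx of a length-n signal;
    band: boolean mask of allowed DFT bins. Returns the extrapolated estimate."""
    x = np.zeros(n)
    x[known_idx] = known
    for _ in range(iters):
        X = np.fft.fft(x)
        X[~band] = 0.0                     # project onto the band-limited set
        x = np.real(np.fft.ifft(X))
        x[known_idx] = known               # project onto the data-consistency set
    return x

# Example: extrapolate a band-limited signal from every other sample.
n = 64
band = np.zeros(n, dtype=bool)
band[:4] = band[-3:] = True                # low-pass support, conjugate-symmetric
truth = np.real(np.fft.ifft(np.fft.fft(np.random.randn(n)) * band))
idx = np.arange(0, n, 2)
estimate = gerchberg_papoulis(truth[idx], idx, band, n)
print(np.max(np.abs(estimate - truth)))    # residual shrinks with the iterations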
12

Ograhn, Fredrik, and August Wande. "Automatiserade regressionstester avseende arbetsflöden och behörigheter i ProjectWise. : En fallstudie om ProjectWise på Trafikverket." Thesis, Högskolan Dalarna, Informatik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:du-23290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Test av mjukvara görs i syfte att se ifall systemet uppfyller specificerade krav samt för att hitta fel. Det är en viktig del i systemutveckling och involverar bland annat regressionstestning. Regressionstester utförs för att säkerställa att en ändring i systemet inte medför att andra delar i systemet påverkas negativt. Dokumenthanteringssystem hanterar ofta känslig data hos organisationer vilket ställer höga krav på säkerheten. Behörigheter i system måste därför testas noggrant för att säkerställa att data inte hamnar i fel händer. Dokumenthanteringssystem gör det möjligt för flera organisationer att samla sina resurser och kunskaper för att nå gemensamma mål. Gemensamma arbetsprocesser stöds med hjälp av arbetsflöden som innehåller ett antal olika tillstånd. Vid dessa olika tillstånd gäller olika behörigheter. När en behörighet ändras krävs regressionstester för att försäkra att ändringen inte har gjort inverkan på andra behörigheter. Denna studie har utförts som en kvalitativ fallstudie vars syfte var att beskriva utmaningar med regressionstestning av roller och behörigheter i arbetsflöden för dokument i dokumenthanteringssystem. Genom intervjuer och en observation så framkom det att stora utmaningar med dessa tester är att arbetsflödens tillstånd följer en förutbestämd sekvens. För att fullfölja denna sekvens så involveras en enorm mängd behörigheter som måste testas. Det ger ett mycket omfattande testarbete avseende bland annat tid och kostnad. Studien har riktat sig mot dokumenthanteringssystemet ProjectWise som förvaltas av Trafikverket. Beslutsunderlag togs fram för en teknisk lösning för automatiserad regressionstestning av roller och behörigheter i arbetsflöden åt ProjectWise. Utifrån en kravinsamling tillhandahölls beslutsunderlag som involverade Team Foundation Server (TFS), Coded UI och en nyckelordsdriven testmetod som en teknisk lösning. Slutligen jämfördes vilka skillnader den tekniska lösningen kan utgöra mot manuell testning. Utifrån litteratur, dokumentstudie och förstahandserfarenheter visade sig testautomatisering kunna utgöra skillnader inom ett antal identifierade problemområden, bland annat tid och kostnad.
Software testing is done in order to see whether the system meets specified requirements and to find bugs. It is an important part of system development and involves, among other things, regression testing. Regression tests are performed to ensure that a change in the system does not adversely affect other parts of the system. Document management systems often handle sensitive data for organizations, which places high demands on security. Permissions in the system therefore have to be tested thoroughly to ensure that data does not fall into the wrong hands. Document management systems make it possible for organizations to pool their resources and knowledge to achieve common goals. Common work processes are supported through workflows that contain a number of states, and different permissions apply in these different states. When a permission changes, regression tests are required to ensure that the change has not had an impact on other permissions. This study was conducted as a qualitative case study whose purpose was to describe the challenges of regression testing of roles and permissions in document workflows in a document management system. Through interviews and an observation it emerged that a major challenge of these tests is that workflow states follow a predetermined sequence. To complete this sequence, a huge number of permissions must be tested, which makes the testing effort very extensive in terms of both time and cost. The study was directed toward the document management system ProjectWise, managed by Trafikverket. A decision basis was produced for a technical solution for automated regression testing of roles and permissions in workflows for ProjectWise. Based on requirements gathering, the proposed technical solution involved Team Foundation Server (TFS), Coded UI and a keyword-driven test method. Finally, the differences that the technical solution could make compared with today's manual testing were examined. Based on literature, document studies and first-hand experience, test automation can make a difference in a number of identified problem areas, including time and cost.
13

Bevilacqua, Francesca. "Projection and reconstruction-based noise filtering methods in cone beam CT." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis was motivated by and developed within a collaboration project between the Department of Mathematics of the University of Bologna and the company Skanray Europe Srl. In detail, this work focuses on cone-beam CT reconstruction of the human head, with particular interest in detecting small soft-tissue elements within white and grey brain matter. This problem is especially difficult due to the small contrast in the X-ray attenuation coefficient between the two substances to be discriminated. After accurately modelling the data acquisition process (also according to the requirements of the company), we propose a novel reconstruction algorithm based on two subsequent steps. First, the measured and degraded projections are restored, namely the blur and the noise are removed. Second, the restored projections are fed as input to a variational reconstruction method named TV3D-L2, so as to get an accurate final reconstruction of the head. Several experimental tests are presented which provide evidence that the proposed approach outperforms classical ones, especially for low-dose CT.
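For context, a TV-L2 method of the kind named above solves a total-variation-regularized least-squares problem; the generic form (notation assumed here, not taken from the thesis) is

\[ \min_{u}\; \tfrac{1}{2}\,\lVert A u - b \rVert_2^2 \;+\; \lambda\, \mathrm{TV}(u), \qquad \mathrm{TV}(u) = \sum_{i} \lVert (\nabla u)_i \rVert_2 , \]

where u is the 3D attenuation volume, A the cone-beam forward projector, b the (restored) projection data and λ balances data fidelity against the edge-preserving total-variation prior.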
14

Baratov, Rishat. "Efficient conic decomposition and projection onto a cone in a Banach ordered space." Thesis, University of Ballarat, 2005. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/61401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Chen, Mingqing. "Development of a diaphragm tracking algorithm for megavoltage cone beam CT projection data." Thesis, University of Iowa, 2009. https://ir.uiowa.edu/etd/228.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this work several algorithms for diaphragm detection in 2D views of cone-beam computed tomography (CBCT) raw data are developed. These algorithms are tested on 21 Siemens megavoltage CBCT scans of lungs and the results are compared against the diaphragm apex identified by human experts. Among these algorithms, the dynamic Hough transform is sufficiently quick and accurate for motion determination prior to radiation therapy. The diaphragm was successfully detected in all 21 data sets, even for views with poor image quality and confounding objects. Each CBCT scan analysis (200 frames) took about 38 seconds on a 2.66 GHz Intel Core 2 Quad CPU. The average cranio-caudal position error was 1.707 ± 1.117 mm. Other directions were not assessed due to uncertainties in expert identification.
16

Mueller, Klaus. "Fast and accurate three-dimensional reconstruction from cone-beam projection data using algebraic methods." The Ohio State University, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487950658545496.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Chavez, Daniel. "Parallelizing Map Projection of Raster Data on Multi-core CPU and GPU Parallel Programming Frameworks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-190883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Map projections lie at the core of geographic information systems and numerous projections are used today. The reprojection between different map projections is recurring in a geographic information system and it can be parallelized with multi-core CPUs and GPUs. This thesis implements a parallel analytic reprojection algorithm of raster data in C/C++ with the parallel programming frameworks Pthreads, C++11 STL threads, OpenMP, Intel TBB, CUDA and OpenCL. The thesis compares the execution times from the different implementations on small, medium and large raster data sets, where OpenMP had the best speedup of 6, 6.2 and 5.5, respectively. Meanwhile, the GPU implementations were 293% faster than the fastest CPU implementations, where profiling shows that the CPU implementations spend most time on trigonometry functions. The results show that the reprojection algorithm is well suited for the GPU, while OpenMP and Intel TBB are the fastest of the CPU frameworks.
Kartprojektioner är en central del av geografiska informationssystem och en otalig mängd av kartprojektioner används idag. Omprojiceringen mellan olika kartprojektioner sker regelbundet i ett geografiskt informationssystem och den kan parallelliseras med flerkärniga CPU:er och GPU:er. Denna masteruppsats implementerar en parallel och analytisk omprojicering av rasterdata i C/C++ med ramverken Pthreads, C++11 STL threads, OpenMP, Intel TBB, CUDA och OpenCL. Uppsatsen jämför de olika implementationernas exekveringstider på tre rasterdata av varierande storlek, där OpenMP hade bäst speedup på 6, 6.2 och 5.5. GPU-implementationerna var 293 % snabbare än de snabbaste CPU-implementationerna, där profileringen visar att de senare spenderade mest tid på trigonometriska funktioner. Resultaten visar att GPU:n är bäst lämpad för omprojicering av rasterdata, medan OpenMP är den snabbaste inom CPU ramverken.
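A minimal sketch of the idea (illustrative only; the thesis works in C/C++ with OpenMP, TBB, CUDA and OpenCL, and the equations below are the standard Mercator forward formulas rather than the thesis's exact pipeline): raster rows are independent, so they can be reprojected in parallel, here with a Python process pool.

# Illustrative sketch: reproject raster rows in parallel, from lon/lat to a
# Mercator-style projection, using a process pool in place of OpenMP threads.
import math
from concurrent.futures import ProcessPoolExecutor

def reproject_row(args):
    row_index, lats, lons = args
    # Analytic Mercator forward equations; trigonometry dominates the cost, as profiled.
    xs = [math.radians(lon) for lon in lons]
    ys = [math.log(math.tan(math.pi / 4 + math.radians(lat) / 2)) for lat in lats]
    return row_index, xs, ys

def reproject_raster(rows):
    with ProcessPoolExecutor() as pool:
        return sorted(pool.map(reproject_row, rows))

if __name__ == "__main__":
    rows = [(r, [r * 0.1] * 4, [-180.0, -60.0, 60.0, 180.0]) for r in range(100)]
    result = reproject_raster(rows)
    print(len(result))  # 100 reprojected rows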
18

Belloni, Sofia. "Implementazione in linguaggio C++ in versione ottimizzata del tool Reconstruction and Visualization from a Single Projection (ReViSP)." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24301/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Volume is one of the most important features for characterizing a tumour at the macroscopic scale. It is often used to assess the efficacy of care treatments, making its correct estimation a crucial issue for patient care. Volume is likewise a fundamental feature at the microscopic scale, yet very few methods have been proposed in the literature to estimate the volume of tumour spheroids, 3D tumour models of about 1 mm in diameter, created in vitro and widely used in pre-clinical studies to test the effects of drugs and radiotherapy treatments. Among the methods proposed to estimate the volume of tumour spheroids, the most widely used is known as Reconstruction and Visualization from a Single Projection (ReViSP). It is an automatic method conceived to reconstruct the 3D surface and estimate the volume of multicellular spheroids from a single 2D projection, for instance an image acquired with a simple wide-field optical microscope. The ReViSP software is distributed as an open-source tool with code developed in MATLAB. However, C++ code is more versatile and, not being tied to usage licences, is better suited to an open-source distribution policy. In this work, using MATLAB Coder, the OpenCV library and several other toolboxes and libraries, we created an optimized C++ version of ReViSP. The code is distributed as an open-source tool and can be integrated into various C++ frameworks used in oncology for estimating radiomic features of tumours.
19

Staub, David. "Time dependent cone-beam CT reconstruction via a motion model optimized with forward iterative projection matching." VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The purpose of this work is to present the development and validation of a novel method for reconstructing time-dependent, or 4D, cone-beam CT (4DCBCT) images. 4DCBCT can have a variety of applications in the radiotherapy of moving targets, such as lung tumors, including treatment planning, dose verification, and real time treatment adaptation. However, in its current incarnation it suffers from poor reconstruction quality and limited temporal resolution that may restrict its efficacy. Our algorithm remedies these issues by deforming a previously acquired high quality reference fan-beam CT (FBCT) to match the projection data in the 4DCBCT data-set, essentially creating a 3D animation of the moving patient anatomy. This approach combines the high image quality of the FBCT with the fine temporal resolution of the raw 4DCBCT projection data-set. Deformation of the reference CT is accomplished via a patient specific motion model. The motion model is constrained spatially using eigenvectors generated by a principal component analysis (PCA) of patient motion data, and is regularized in time using parametric functions of a patient breathing surrogate recorded simultaneously with 4DCBCT acquisition. The parametric motion model is constrained using forward iterative projection matching (FIPM), a scheme which iteratively alters model parameters until digitally reconstructed radiographs (DRRs) cast through the deforming CT optimally match the projections in the raw 4DCBCT data-set. We term our method FIPM-PCA 4DCBCT. In developing our algorithm we proceed through three stages of development. In the first, we establish the mathematical groundwork for the algorithm and perform proof of concept testing on simulated data. In the second, we tune the algorithm for real world use; specifically we improve our DRR algorithm to achieve maximal realism by incorporating physical principles of image formation combined with empirical measurements of system properties. In the third stage we test our algorithm on actual patient data and evaluate its performance against gold standard and ground truth data-sets. In this phase we use our method to track the motion of an implanted fiducial marker and observe agreement with our gold standard data that is typically within a millimeter.
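In symbols, the motion model described above can be sketched as follows (assumed notation, not the dissertation's exact parametrization): the deformation applied to the reference CT is spanned by PCA eigenvectors and driven by the breathing surrogate, and FIPM fits its coefficients by matching DRRs to the measured projections:

\[ D(\mathbf{x}, t) \;=\; \bar{D}(\mathbf{x}) + \sum_{k=1}^{K} c_k\big(s(t)\big)\, \mathbf{u}_k(\mathbf{x}), \qquad \min_{c}\ \sum_{j} \Big\lVert \mathrm{DRR}_{\phi_j}\big(\mathrm{CT}\circ D(\cdot, t_j)\big) - p_j \Big\rVert^2 , \]

where u_k are the PCA eigenvectors of the motion data, s(t) the breathing surrogate, c_k the parametric temporal functions, and p_j the cone-beam projection acquired at gantry angle φ_j and time t_j.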
20

Eichert, Pascale. "Etude de l'écoulement gazeux, au sein et à l'extérieur d'une torche de projection à plasma d'arc soufflé, à l'aide du code PHOENICS™." Besançon, 1996. http://www.theses.fr/1996BESA2063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Les travaux développés dans cette étude ont visé à modéliser les écoulements de jets de plasmas thermiques utilisés habituellement en projection à la torche à plasma d'arc soufflé, à l'aide du code de calcul PHOENICS™. La détermination des différents paramètres de l'étude a été réalisée à travers une recherche bibliographique, à savoir, par exemple, les hypothèses simplificatrices, le domaine d'étude, les conditions aux limites et les propriétés du mélange de gaz étudié. La modélisation de l'écoulement, depuis l'injection des gaz froids au sein de la buse jusqu'à l'éjection du plasma, ne nécessite pas l'utilisation d'équations de conservation des espèces chimiques car l'hypothèse d'un écoulement à l'équilibre chimique permet d'employer des données de la littérature où les grandeurs thermodynamiques et les coefficients de transport du mélange étudié (argon-hydrogène) ont été calculés à partir de la composition à l'équilibre du mélange et donnés en fonction de la température et pour la pression atmosphérique. Les équations de bilan sont discrétisées par une approche de type volumes finis et le système d'équations algébriques non linéaire et couplé est résolu à l'aide de l'algorithme SIMPLEST implanté dans le code. La confrontation des résultats avec des données de la littérature montre un bon accord dans la prédiction des écoulements. L'application du modèle est réalisée par l'étude de l'influence de la valeur de la puissance électrique sur l'écoulement. Les caractéristiques des jets libres thermiques subsoniques ainsi que les évolutions des grandeurs (température et vitesse) dans les directions axiale et radiale sont comparées aux observations puisées dans la littérature. L'estimation de l'entraînement de fluide ambiant dans les jets apporte un complément d'information. La difficulté d'approximation de la forme des profils radiaux en sortie de buse par des expressions mathématiques classiques est mise en évidence.
The present work was devoted to the modelling of the plasma jet flows commonly used in D.C. plasma spraying, by means of the PHOENICS™ CFD code. The different parameters of the study were defined through a literature search, for example the simplifying assumptions, the computational domain, the boundary conditions and the gas mixture properties. The flow modelling, from the cold gas injection into the torch to the plasma ejection, does not require equations for the conservation of chemical species, because the chemical equilibrium assumption makes it possible to use data from the literature in which the thermodynamic quantities and transport coefficients of the argon-hydrogen mixture have been calculated from its equilibrium composition and are given as functions of temperature at atmospheric pressure. The conservation equations are discretized with a finite-volume approach and the coupled, non-linear set of algebraic equations is solved using the SIMPLEST algorithm implemented in the code. The comparison of the results with data from the literature shows good agreement in the prediction of plasma jet flows. The model is applied to the study of the influence of the electric power value on the flow. The characteristics of the free subsonic thermal jets (temperature and velocity) are compared with observations taken from the literature. The study of ambient gas entrainment into the jets provides complementary information. The difficulty of approximating the radial profiles at the torch exit with simple relationships is also highlighted.
21

Gillies, Peter. "'Poems to the Sea', and, Painterly poetics : Charles Olson, Robert Creeley, Cole Swensen." Thesis, University of Plymouth, 2016. http://hdl.handle.net/10026.1/5225.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Poems to the Sea: Rather than narrating or describing a work of visual art, the poems that form this collection show an accumulation, juxtaposition and realignment of material ranging from art historical detail and critique to a more personal, location specific response to works viewed in galleries and museums. Many of the poems engage with non-representational artworks and question how best to reflect, translate or expand upon their transformative effects. The first section, ‘Museum Notes’, explores Charles Olson’s open field poetics by giving artists and writers a conversational voice. ‘Sound Fields’, the second section, responds to individual works of art and reflects a systems-based approach. The authorial voice within ‘Poems to the Sea’, the third section, is that of an artist involved in making a series of palimpsest drawings to capture a sense of place as drawing and writing overlaps and intertwines. Painterly Poetics: Charles Olson, Robert Creeley, Cole Swensen: This thesis explores three American poets from successive generations to examine three related types of engagement with visual art. As literary models that have informed my own poetic practice, Charles Olson, Robert Creeley and Cole Swensen have theorized their own writing process to consider ways of using language to enhance the transmission and transcription of their visual stimuli and ideas. All three are interested in visual art as a model for the writing process: as a means of seeing, thinking and perceiving. After an introduction that surveys relations between verbal and visual art, a chapter is devoted to each of the three poets. In the opening and longest chapter, examples of Olson’s writing are compared to the approach of several Abstract Expressionist painters who contributed to the culture of experimentation and spontaneity that emerged under Olson’s leadership at Black Mountain College in the early 1950s. Following a discussion of Olson as a uniquely influential figure, the chapter on Creeley considers the role of visual art in his poetics. Swensen’s writing is subsequently explored for its extension of the Black Mountain legacy: how she builds upon established critical methods to achieve what she calls ‘a side-by-side, walking-along-with’ relationship between the poem and the artwork.
22

Sunnegårdh, Johan. "Combining analytical and iterative reconstruction in helical cone-beam CT." Licentiate thesis, Computer Vision, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

Contemporary algorithms employed for reconstruction of 3D volumes from helical cone-beam projections are so-called non-exact algorithms. This means that the reconstructed volumes contain artifacts irrespective of the detector resolution and number of projection angles employed in the process. In this thesis, three iterative schemes for suppression of these so-called cone artifacts are investigated.

The first scheme, iterative weighted filtered backprojection (IWFBP), is based on iterative application of a non-exact algorithm. For this method, artifact reduction, as well as spatial resolution and noise properties are measured. During the first five iterations, cone artifacts are clearly reduced. As a side effect, spatial resolution and noise are increased. To avoid this side effect and improve the convergence properties, a regularization procedure is proposed and evaluated.

In order to reduce the cost of the IWFBP scheme, a second scheme is created by combining IWFBP with the so-called ordered subsets technique, which we call OSIWFBP. This method divides the projection data set into subsets, and operates sequentially on each of these in a certain order, hence the name “ordered subsets”. We investigate two different ordering schemes and numbers of subsets, as well as the possibility to accelerate cone artifact suppression. The main conclusion is that the ordered subsets technique indeed reduces the number of iterations needed, but that it suffers from the drawback of noise amplification.

The third scheme starts by dividing input data into high- and low-frequency data, followed by non-iterative reconstruction of the high-frequency part and IWFBP reconstruction of the low-frequency part. This could open for acceleration by reduction of data in the iterative part. The results show that a suppression of artifacts similar to that of the IWFBP method can be obtained, even if a significant part of high-frequency data is non-iteratively reconstructed.
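The basic update behind such iterative filtered-backprojection schemes can be written compactly (a generic sketch; the thesis adds weighting, regularization and the ordered-subsets variant on top of it):

\[ f^{(k+1)} \;=\; f^{(k)} + \mathrm{WFBP}\big( p - P f^{(k)} \big), \]

where p is the measured helical cone-beam data, P the forward projection operator, WFBP the non-exact weighted filtered backprojection used as the reconstruction step, and f^{(k)} the volume after k iterations.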

23

Sunnegårdh, Johan. "Iterative Enhancement of Non-Exact Reconstruction in Cone Beam CT." Thesis, Computer Vision, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2577.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

Contemporary algorithms employed for reconstruction of 3D volumes from helical cone-beam projections are so-called non-exact algorithms. This means that the reconstructed volumes will contain artifacts irrespective of the detector resolution and number of projection angles employed in the process.

It has been proposed that these artifacts can be suppressed using an iterative scheme which comprises computation of projections from the already reconstructed volume as well as the non-exact reconstruction itself.

The purpose of the present work is to examine if the iterative scheme can be applied to the non-exact reconstruction method PI-original in order to improve the reconstruction result. An important part of this implementation is a careful design of the projection operator, as a poorly designed projection operator may result in aliasing and/or other artifacts in the reconstruction result. Since the projection data is truncated, special care must be taken along the boundaries of the detector. Three different ways of handling this interpolation problem are proposed and examined.

The results show that artifacts caused by the PI-original method can indeed be reduced by the iterative scheme. However, each iteration requires at least three times more processing time than the initial reconstruction, which may call for certain compromises, smart implementation and/or parallelization in the innermost loops. Furthermore, at higher cone angles certain types of artifacts seem to grow with each iteration instead of being suppressed.

24

Pokhrel, Damodar. "Brachytherapy Seed and Applicator Localization via Iterative Forward Projection Matching Algorithm using Digital X-ray Projections." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/2283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Interstitial and intracavitary brachytherapy plays an essential role in the management of several malignancies. However, the achievable accuracy of brachytherapy treatment for prostate and cervical cancer is limited due to the lack of intraoperative planning and adaptive replanning. A major problem in implementing TRUS-based intraoperative planning is the inability of TRUS to accurately localize individual seed poses (positions and orientations) relative to the prostate volume during or after the implantation. For locally advanced cervical cancer patients, manual drawing of the source positions on orthogonal films cannot localize the full 3D intracavitary brachytherapy (ICB) applicator geometry. A new iterative forward projection matching (IFPM) algorithm can explicitly localize each individual seed/applicator by iteratively matching computed projections of the post-implant patient with the measured projections. This thesis describes the adaptation and implementation of a novel IFPM algorithm that addresses hitherto unsolved problems in the localization of brachytherapy seeds and applicators. The prototype implementation of the 3-parameter point-seed IFPM algorithm was experimentally validated using a set of a few cone-beam CT (CBCT) projections of both phantom and post-implant patient datasets. Geometric uncertainty due to gantry angle inaccuracy was incorporated. After this, the IFPM algorithm was extended to a 5-parameter elongated line-seed model which automatically reconstructs individual seed orientation as well as position. The accuracy of this algorithm was tested using both synthetic measured projections of clinically realistic Model-6711 125I seed arrangements and measured projections of an in-house precision-machined prostate implant phantom that allows the orientations and locations of up to 100 seeds to be set to known values. The seed reconstruction error for the simulation was less than 0.6 mm/3°. For the physical phantom experiments, the IFPM absolute accuracy for position, polar angle, and azimuthal angle was (0.78 ± 0.57) mm, (5.8 ± 4.8)°, and (6.8 ± 4.0)°, respectively. It avoids the need to match corresponding seeds in each projection and accommodates incomplete data, overlapping seed clusters, and highly migrated seeds. IFPM was further generalized from the 5-parameter to a 6-parameter model, which was needed to reconstruct the 3D pose of arbitrary-shape applicators. The voxelized 3D model of the applicator was obtained from external complex combinatorial geometric modelling. It is then integrated into the forward projection matching method for iteratively computing the 2D projections of the 3D ICB applicators. The applicator reconstruction error for the simulation was about 0.5 mm/2°. The residual 2D registration error (positional difference) between the computed and the actual measured applicator images was less than 1 mm for the intrauterine tandem and about 1.5 mm for the bilateral colpostats in each detector plane. By localizing the applicator's internal structure and the sources, the effect of intra- and inter-applicator attenuation can be included in the resultant dose distribution, and CBCT metal streaking artifacts can be mitigated. A localization accuracy better than 1 mm and 6° has the potential to support more accurate Monte Carlo-based or 2D TG-43 dose calculations in clinical practice.
It is hoped the clinical implementation of IFPM approach to localize elongated line-seed/applicator for intraoperative brachytherapy planning may have a positive impact on the treatment of prostate and cervical cancers.
25

Benremdane, Ahmed. "La projection de la culture marocaine dans "Makbara" et "Reivindicacion del conde Don Julian" de Juan Goytisolo : un dialogue avec l'Espagne." Montpellier 3, 1986. http://www.theses.fr/1986MON30027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
L'objectif de la thèse consiste à montrer jusqu'à quel point Juan Goytisolo, romancier espagnol contemporain, a pu se servir dans ses derniers romans, en particulier dans la "Reivindicación..." et "Makbara", de la culture marocaine pour détruire, d'une part, le mythe de "l'Espagne sacrée" et, d'autre part, pour dénoncer les inconvénients du capitalisme européen. La thèse contient cinq grandes parties : la première partie traite le rapprochement de l'auteur de la culture marocaine, ses réflexions sur l'arabisme espagnol, ainsi que certains problèmes que connaît la culture marocaine. L'intérêt de l'auteur pour le Maroc et la culture marocaine est l'objet de la deuxième partie. La troisième partie est relative à l'image de l'Arabe dans la société occidentale, étant donné que l'auteur a choisi des personnages marocains dans l'intention d'effacer la mauvaise image que donne l'Occident de l'Arabe. La quatrième partie est relative au lexique d'origine arabe vu l'insertion d'un nombre important d'arabismes…
The aim of the thesis is to show to what extent the contemporary Spanish novelist Juan Goytisolo has been able to use Moroccan culture in his recent novels, especially in "Reivindicación del conde Don Julián" and "Makbara", in order, on the one hand, to destroy the myth of "sacred Spain" and, on the other, to expose the disadvantages of European capitalism. The thesis contains five principal parts. The first part deals with the author's closeness to Moroccan culture, his reflections upon Spanish Arabism and the problems that Moroccan culture faces. The author's interest in Morocco and in Moroccan culture is the main theme of the second part. The third part concerns the image of the Arab in Western society; the author has indeed chosen Moroccan characters with the intention of erasing the negative picture that the West gives of the Arab. The fourth part is devoted to the lexicon of Arabic origin, given the insertion of a great number of Arabisms, and to the use of the Moroccan dialect in the two novels.
26

Couvreur, Alain. "Résidus de 2-formes différentielles sur les surfaces algébriques et applications aux codes correcteurs d'erreurs." Phd thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00376546.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The theory of geometric codes developed in the early 1980s following a paper by V. D. Goppa published in 1981. Given a smooth projective algebraic curve X over a finite field, one has two constructions of error-correcting codes: a so-called functional construction, which involves certain rational functions on X, and a differential construction, which uses certain rational differential 1-forms on X. The study of these codes constructed on curves has given rise to several hundred published papers. In parallel with this work, a generalization of the functional construction to algebraic varieties of arbitrary dimension was proposed by Y. Manin in a paper published in 1984, and a few dozen published works study such codes. However, no development had been made towards a generalization of the differential construction. In this thesis we propose a differential construction of codes on algebraic surfaces. We then study the properties of these codes and, in particular, their relations with the functional codes. Somewhat surprisingly, a major difference with the curve case appears: whereas on a curve the orthogonal of a functional code is differential, this is in general false on a surface. This result motivates the study of the orthogonals of functional codes. Formulas for estimating the minimum distance of such codes are given using properties of linear systems on a variety. We also show that, under certain conditions on the surface, these codes are sums of differential codes, and that answers to certain open problems in algebraic geometry "à la Bertini" would provide additional information on the parameters of these codes.
27

Almazmomi, Afnan. "Likelihood Inference for Order Restricted Models." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/355.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
As is well known, the most popular inference methods for order-restricted models are likelihood-based. In such models, maximum likelihood estimation (MLE) and likelihood ratio testing (LRT) can exhibit suspect and unsatisfactory behaviour. In this thesis, I review the articles that focus on the behaviour of likelihood methods for order-restricted models. For those situations an alternative method is preferred. Likelihood inference is satisfactory for the simple order cone restriction, but it is unsatisfactory when the restrictions are of the tree order, umbrella order, star-shaped and stochastic order types.
28

Dudin, Bashar. "Compactification ELSV des champs de Hurwitz." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM059.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
On s'intéresse à une compactification, due à Ekedahl, Lando, Shapiro et Vainshtein, du champ des courbes lisses munies de fonctions méromorphes d'ordres fixés. Celle-ci est obtenue comme une adhérence du champ de départ dans un champ propre. On commence par en donner deux constructions alternatives et on étudie les déformations de ses points. On la relie par la suite à la compactification à la Harris-Mumford par les revêtements admissibles et on donne une interprétation modulaire des points du bord.
We study a compactification, due to Ekedahl, Lando, Shapiro and Vainshtein, of the stack of smooth curves endowed with meromorphic functions having fixed orders. The original compactification is obtained as the closure of the initial stack in a proper substack. We start by giving two alternative constructions of the ELSV compactification and by studying the deformation theory of its points. We finally link it to the Harris-Mumford compactification by admissible covers and give a modular interpretation of the boundary points.
29

Couderc, Frédéric. "Développement d'un code de calcul pour la simulation d'écoulements de fluides non miscibles : application à la désintégration assistée d'un jet liquide par un courant gazeux." Phd thesis, Ecole nationale superieure de l'aeronautique et de l'espace, 2007. http://tel.archives-ouvertes.fr/tel-00143709.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The aim of this thesis was to develop a simulation code for two-phase flows of immiscible, incompressible and isothermal fluids, in order to apply it to the phenomenon of liquid jet fragmentation, and more particularly to the air-assisted disintegration of a liquid sheet by two high-speed air streams.

The physical assumptions and the associated numerical schemes were chosen so as to respect as closely as possible the complex physics of liquid jet breakup. The incompressible Navier-Stokes equations are solved directly by means of a projection method. The emerging and promising Level-Set method provides the numerical tracking of the interface separating the two immiscible fluids. Finally, the Ghost Fluid method allows a correct treatment of the discontinuities across the interface, preserving at the discrete level the jump conditions balancing capillary, pressure and viscous forces. The good behaviour of these numerical schemes is demonstrated on a battery of academic test cases.

The physical mechanisms at play during the primary atomization of a liquid jet have been the subject of numerous theoretical and experimental studies. The instabilities developing at the liquid surface are multiple and clearly three-dimensional. At present, and despite considerable research effort, no theory or model accounts rigorously for this phenomenon. Yet it is shown that numerical simulation can shed new light on it. For example, the impact of the gaseous boundary layer on the oscillation frequency is studied. We were also able to recover, through simulation, the ligament dynamics intrinsic to the air-assisted disintegration of a liquid jet.
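The projection method mentioned above advances the incompressible Navier-Stokes equations in two sub-steps; a classical (Chorin-type) sketch, with notation assumed here rather than taken from the thesis, is

\[ \frac{\mathbf{u}^{*} - \mathbf{u}^{n}}{\Delta t} = -(\mathbf{u}^{n}\!\cdot\!\nabla)\,\mathbf{u}^{n} + \nu\, \nabla^{2}\mathbf{u}^{n}, \qquad \nabla^{2} p^{\,n+1} = \frac{\rho}{\Delta t}\, \nabla\!\cdot\!\mathbf{u}^{*}, \qquad \mathbf{u}^{n+1} = \mathbf{u}^{*} - \frac{\Delta t}{\rho}\, \nabla p^{\,n+1}, \]

i.e. a provisional velocity computed without the pressure is projected onto the divergence-free space by solving a pressure Poisson equation; in the thesis this step is coupled with the Level-Set interface description and the Ghost Fluid treatment of the jump conditions.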
30

Deltour, Guillaume. "Propriétés symplectiques et hamiltoniennes des orbites coadjointes holomorphes." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2010. http://tel.archives-ouvertes.fr/tel-00552150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The purpose of this thesis is the study of the symplectic structure of holomorphic coadjoint orbits and of their projections. A holomorphic coadjoint orbit O is an elliptic coadjoint orbit of a real semisimple, connected, non-compact Lie group with finite centre arising from a Hermitian symmetric space G/K, such that O can be naturally endowed with a G-invariant Kähler structure. These orbits are a generalization of the Hermitian symmetric space G/K. In this thesis we prove that McDuff's symplectomorphism generalizes to holomorphic coadjoint orbits, describing the symplectic structure of the orbit O as the direct product of a compact coadjoint orbit and a symplectic vector space. This symplectomorphism is then used to determine the equations of the projection of the orbit O relative to the maximal compact subgroup K of G, using recent results of Ressayre in Geometric Invariant Theory.
31

Beaudry, Joel. "4D cone-beam CT image reconstruction of Varian TrueBeam v1.6 projection images for clinical quality assurance of stereotactic ablative radiotherapy to the lung." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/52814.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
On-board cone-beam computed tomography (CBCT) imaging integrated with medical linear accelerators offers a viable tool for tumor localization just prior to radiation treatment delivery. However, the exact tumor location during treatment is not well-defined due to respiratory motion. This is taken into account during treatment planning by adding margins to the visible tumor volume defining the high dose region. The respiratory motion used to optimize the treatment plan is not guaranteed to be reproducible on the day of treatment, suggesting that the high dose region may not fully contain the tumor at all points of its trajectory during treatment. In this thesis, to image the tumor at the different portions of the breathing cycle, CBCT projections were binned by the respiratory signal at their time of acquisition. Reconstructing each bin created a 3D image depicting the tumor at one point of its trajectory. Combining the binned reconstructions added in a temporal component, defining a 4D-CBCT. 4D-CBCT reconstructions were performed on 6 stereotactic ablative radiotherapy (SABR) lung cancer patients. Imaging was performed using the Varian TrueBeam (v1.6) and respiratory information was captured with the infra-red camera-based Varian real-time position management (RPM) system. Both analytical and iterative reconstruction algorithms, and image quality metrics were used for a comparative study. Tumor motion was measured by tracking the visible tumor volume centroid from each 4D-CBCT image. The high dose regions defined during treatment planning were compared to the 4D-CBCT tumor volume during its trajectory using an overlap metric to determine if the tumor remained confined to the treatment volume, or not. 4D-CBCTs were found to be well reconstructed using iterative methods. When viewed sequentially the 4D-CBCT images visibly show tumor motion following a sinusoidal-like behavior. Examination of the tumor motion and overlap metric verify that the margins currently used to define the high dose region fully encompass the tumor during all times of its trajectory, i.e 100% overlap within error. The results indicate the current margins used for SABR patients at the British Columbia Cancer Agency are sufficient in providing adequate tumor coverage when accounting for tumor motion and setup uncertainties.
Faculty of Science
Department of Physics and Astronomy
Graduate
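As a rough illustration of the respiratory phase-binning step described in the abstract above, the following Python sketch sorts projections into phase bins before each bin is reconstructed separately. It is not the author's implementation: the projection stack, the breathing trace, the peak-based phase definition and the number of bins are invented for the example.

import numpy as np

def bin_projections_by_phase(projections, resp_signal, n_bins=10):
    """Group CBCT projections by respiratory phase.

    projections : array (n_proj, rows, cols), one X-ray projection per entry
    resp_signal : array (n_proj,), breathing amplitude (e.g. an RPM trace)
                  sampled at each projection's acquisition time
    n_bins      : number of respiratory phase bins of the 4D reconstruction
    """
    # Locate end-inhale peaks and assign each projection a phase in [0, 1)
    # by linear interpolation between consecutive peaks.
    peaks = np.where((resp_signal[1:-1] > resp_signal[:-2]) &
                     (resp_signal[1:-1] >= resp_signal[2:]))[0] + 1
    phase = np.zeros(len(resp_signal))
    for a, b in zip(peaks[:-1], peaks[1:]):
        phase[a:b] = np.linspace(0.0, 1.0, b - a, endpoint=False)
    bins = [[] for _ in range(n_bins)]
    for idx, p in enumerate(phase):
        bins[int(p * n_bins) % n_bins].append(idx)
    # Each bin would then be reconstructed on its own (analytically or
    # iteratively); the stack of bin reconstructions forms the 4D-CBCT.
    return [projections[b] for b in bins]

# Hypothetical usage with synthetic data
proj = np.random.rand(600, 64, 64)                   # 600 projections of 64x64 pixels
breathing = np.sin(np.linspace(0, 40 * np.pi, 600))  # invented breathing trace
phase_bins = bin_projections_by_phase(proj, breathing, n_bins=10)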
32

Hrouza, Ondřej. "LDPC kódy." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219786.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis deals with LDPC codes. It describes methods for constructing the parity-check matrix, with emphasis on structured methods based on finite geometries: Euclidean geometry and projective geometry. The next area covered is the decoding of LDPC codes. Four methods are presented: the hard-decision algorithm, the bit-flipping algorithm, the sum-product algorithm and the log-likelihood algorithm, with the main focus on iterative decoding methods. The practical output of this work is a program, LDPC codes, created in the Matlab environment. The program is divided into two parts -- Practise LDPC codes and Simulation LDPC codes. The results obtained with the Simulation LDPC codes part are used to compare methods for constructing and decoding LDPC codes. The decoding methods were compared using BER characteristics and the time each method requires as a function of various LDPC code parameters (number of iterations or size of the parity-check matrix).
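To make the bit-flipping decoder mentioned above concrete, here is a minimal hard-decision bit-flipping decoder in Python. The thesis program itself was written in Matlab, and the parity-check matrix below is a small toy example rather than an LDPC matrix constructed from Euclidean or projective geometry.

import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Hard-decision bit-flipping decoding of a binary code with parity-check matrix H."""
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2            # which parity checks fail
        if not syndrome.any():
            return x                        # all checks satisfied: valid codeword
        # Count, for every bit, the number of failed checks it takes part in,
        # and flip the bits involved in the largest number of failures.
        failures = H.T.dot(syndrome)
        x[failures == failures.max()] ^= 1
    return x                                # give up after max_iter iterations

# Toy example: [7,4] Hamming code; the codeword 1011010 was sent and bit 4 flipped.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.array([1, 0, 1, 0, 0, 1, 0])
print(bit_flip_decode(H, received))         # recovers [1 0 1 1 0 1 0]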
33

Dias, Guilherme dos Santos Martins. "Códigos projetivos parametrizados." Universidade Federal de Uberlândia, 2017. https://repositorio.ufu.br/handle/123456789/18286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Este trabalho tem como objetivo estudar os parâmetros de um código projetivo gerado por um conjunto algébrico tórico X que é parametrizado por uma quantidade finita de monômios em várias variáveis. Também podemos obter conjuntos algébricos tóricos associados a matrizes de incidência de grafos e clutters, e nestes casos obtemos resultados mais precisos, já que os conjuntos algébricos tóricos obtidos são parametrizados por monômios com mesmo grau. Nos capítulos iniciais são apresentados os conceitos básicos que servirão de ferramentas para atingir estes objetivos.
This work aims at studying the parameters of a projective code generated by an algebraic toric set X which is parameterized by a finite number of monomials in several variables. We can also obtain algebraic toric sets associated to graph or clutter incidence matrices, and in these cases we obtain more precise results, since the algebraic toric sets which are obtained are parameterized by monomials of the same degree. In the first chapters we introduce basic concepts which will serve as tools to reach our aim.
Dissertation (Master's)
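As a small illustration of the kind of construction studied in this dissertation, the Python sketch below builds, over F_3 and for an invented set of parameterizing monomials, the points of an algebraic toric set and the generator matrix of the associated degree-one projective parameterized code. The field size and the exponent vectors are assumptions made only for the example.

import itertools
import numpy as np

q = 3                      # field size (an assumption made for this example)
nonzero = [1, 2]           # the nonzero elements of F_3

# Toric set in the projective plane parameterized by the monomials
# (t1, t2, t1*t2); these exponent vectors are illustrative only.
monomials = [(1, 0), (0, 1), (1, 1)]

def evaluate(t, exps):
    """Evaluate one monomial t1^e1 * t2^e2 at the point t, modulo q."""
    out = 1
    for ti, e in zip(t, exps):
        out = (out * pow(ti, e, q)) % q
    return out

# Points of the algebraic toric set X: one normalized projective
# representative for each parameter value t in (F_3^*)^2.
points = set()
for t in itertools.product(nonzero, repeat=2):
    p = tuple(evaluate(t, m) for m in monomials)
    lead = next(c for c in p if c != 0)
    inv = pow(lead, q - 2, q)              # inverse of the leading coordinate in F_q
    points.add(tuple((inv * c) % q for c in p))
points = sorted(points)

# Generator matrix of the degree-1 parameterized code: row i holds the
# values of the coordinate form x_i at every point of X.
G = np.array([[pt[i] for pt in points] for i in range(len(monomials))]) % q
print("points of X:", points)
print("generator matrix:\n", G)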
34

Laviole, Jérémy. "Interaction en réalité augmentée spatiale pour le dessin physique." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00935602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This dissertation describes the design, implementation and evaluation of new Spatial Augmented Reality (SAR) applications. These applications focus on enhancing physical drawing, such as pencil drawing or painting, by projecting digital tools. First, we describe our SAR system and its possibilities. It takes into account the intrinsic and extrinsic parameters of a camera/projector pair to enable precise projection onto sheets of paper. In addition, it detects touch on the sheets and the position of the hand above them using a depth camera; consequently, it enables the creation of interactive touch screens on sheets of paper placed on a table. Next, we turn to the creation of visual art, more precisely to the first steps of creation, when the artist builds the structure of the drawing. We offer the possibility of creating and editing digital construction lines (DCL) projected onto the paper. These tools are Augmented Reality (AR) tools, that is, they complement existing tools: the whole user interface is on the table, and the user never uses a mouse, a keyboard or a screen. Beyond simple DCL (lines and curves), we propose specializations for specific drawings such as perspective drawings, character drawings, and drawings made from a 3D model. We propose new methods to display and interact with 3D objects on paper. We also propose the creation of mixed drawings: interactive visual art that takes advantage of both physical and digital possibilities. Finally, we describe new uses of our SAR system in many different contexts through public demonstrations. The acceptability of this kind of system was very good, and it was described as "magical" by most users. They simply saw and interacted with sheets of paper, without noticing the projection and tracking system.
35

Mazade, Marc. "Ensembles localement prox-réguliers et inéquations variationnelles." Thesis, Montpellier 2, 2011. http://www.theses.fr/2011MON20141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Les propriétés des ensembles localement prox-réguliers ont été étudiées par R.A. Poliquin, R.T. Rockafellar et L. Thibault. Le concept de fonction ''primal lower nice'' a été introduit en dimension finie par R.A. Poliquin et étendu au cadre Hilbertien par A.B. Levy, R.A. Poliquin et L. Thibault. Dans cette thèse, la première partie est consacrée à une étude des outils et des objets géométriques de l'Analyse non lisse tels que les fonctions primal lower nice et les ensembles localement prox-réguliers. On donnera une définition quantifiée de la prox-régularité locale. La deuxième partie établit des résultats d'existence et d'unicité de solutions d'inéquations variationnelles se présentant sous forme d'inclusions différentielles associées au cône normal d'un ensemble localement prox-régulier
The properties of locally prox-regular sets have been studied by R.A. Poliquin, R.T. Rockafellar and L. Thibault. R.A. Poliquin also introduced the concept of "primal lower nice" function. This dissertation is devoted, on one hand, to the study of primal lower nice functions and locally prox-regular sets and, on the other hand, to showing existence and uniqueness of solutions of differential variational inequalities involving such sets. Concerning the first part, we introduce a quantified viewpoint of local prox-regularity and establish a series of characterizations for sets satisfying this property. In the second part, we study differential variational inequalities with locally prox-regular sets and we show the relevance of our quantified viewpoint to prove existence results of solutions
36

Zamora, i. Mestre Joan-Lluís. "Proposta de codi normalitzat per a la representació gràfica de la tecnologia de la construcció." Doctoral thesis, Universitat Politècnica de Catalunya, 1995. http://hdl.handle.net/10803/6120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Esta tesis partió de la constatación por parte del autor que todo el contenido técnico de un proyecto de arquitectura se vierte principalmente en su documentación gráfica. Sin embargo esta documentación adolece de muchos defectos tanto desde el punto de vista de su elaboración (qué se dibuja, como se dibuja en equipo, donde y cómo se dibuja) como desde el punto de vista de su resultado comunicativo (errores de comprensión, incorporación de procesos dinámicos, etc.).
Además la progresiva incorporación de la tecnología informática abre nuevas posibilidades de tratamiento de la información. Resulta pertinente reflexionar ahora sobre las bases de la representación gráfica de la tecnología de la construcción arquitectónica con el fin que las nuevas tecnologías no se conviertan en puros instrumentos de edición sino que se concentren en dar servicio a las necesidades de la elaboración y comunicación de la información.
El concepto de código normalizado intenta ir más allá y se refiere a un procedimiento lingüístico normalizado: cuales son las letras, las palabras y las frases del discurso gráfico del proyecto atendiendo a los contenidos tecnológicos básicos de la edificación que hay que comunicar:
1 Situación y disposición de cada elemento constructivo
2 Forma y dimensiones de cada elemento constructivo
3 Situación y disposición de los materiales y productos dentro del elemento constructivo
4 Forma y dimensiones de cada material o producto constructivo
5 Condiciones de borde interior
6 Condiciones de borde exterior
Sin embargo también es cierto que nunca se dibuja todo, sino solo aquello que no es habitual, que está sometido a control externo o que forma parte de las propias competencias.
La representación gráfica del proyecto ejecutivo tiene dos claras vertientes: una vertiente introspectiva puesto que el proyectista realiza a través del dibujo técnico un proceso interior, consigo mismo y sus compañeros de equipo, de toma de decisiones siguiendo un camino de reducción de incertidumbres. Una vertiente externa, puesto que al final el proyecto se convierte simultáneamente en un gran libro de instrucciones que contiene dos relatos: uno referente al proceso de ejecución y otro referente al propio comportamiento fisiológico y funcional del futuro edificio.

INDICIOS DE CALIDAD

La normativa europea no ha desarrollado profundamente estos temas y solo existen ciertas convenciones gráficas y de formato impulsadas por algunos organismos de control. Existe poco debate entorno al contenido informativo del proyecto y su forma de procesamiento, incluso entre los organismo de visado y ayuntamientos que cada día verifican, en parte, multitud de proyectos.
Además esta tesis cubre un vacío docente puesto que las asignaturas de expresión gráfica no tienen oportunidad de entrar a tratar estos temas que están en el núcleo del ejercicio profesional.
Aún cuando el contenido reflexivo y analítico de esta tesis era alto tampoco se pretendía que finalizara en un manual de procedimiento para los estudiantes de arquitectura. Sin embargo el director de la tesis, Don Jaime Avellaneda Diaz-Grande insistió en que se verificaran las hipótesis y soluciones apuntadas por lo cual se decidió "redibujar" todo un proyecto ejecutivo ya existente, de acuerdo con las directrices emanadas de la tesis.
Se trata pues de una tesis doctoral sólida y una aportación original al conocimiento sobre la cual se pueden elaborar en el futuro otras tesis sobre aspectos que la tecnología informática ya está ofreciendo : representaciones 3D, dibujos dinámicos, uso del color, adjunción de archivos de sonido, etc.
This thesis started from the author's observation that all the technical content of an architecture project is conveyed mainly through its graphical documentation. Nevertheless, this documentation suffers from many defects, both from the point of view of its production (what is drawn, how it is drawn within an architectural team, where and how it is drawn) and from the point of view of its communicative result (errors of understanding, incorporation of dynamic processes, etc.).
In addition, the progressive incorporation of computer technology opens new possibilities for information processing. It is therefore pertinent to reflect on the foundations of the graphical representation of architectural construction technology, so that the new technologies do not become mere editing instruments but concentrate on serving the needs of producing and communicating the information.
The concept of a standardized code tries to go further and refers to a standardized linguistic procedure: what are the letters, the words and the phrases of the graphical discourse of the project, attending to the basic technological contents of the building that have to be communicated:
1 Situation and disposition of each constructive element
2 Form and dimensions of each constructive element
3 Situation and disposition of the materials and products within the constructive element
4 Form and dimensions of each material or constructive product
5 Conditions of inner edge
6 Conditions of outer edge
Nevertheless, it is also true that everything is never drawn, but only what is unusual, what is subject to external control or what falls within one's own responsibilities.
The graphical representation of the executive project has two clear facets: an introspective facet, since through technical drawing the designer carries out an inner process of decision making, with himself and his teammates, along a path of uncertainty reduction; and an external facet, since in the end the project simultaneously becomes a great instruction book containing two narratives, one concerning the execution process and the other the physiological and functional behaviour of the future building.

QUALITY INDICATIONS

European standards have not developed these subjects in depth, and only certain graphical and format conventions exist, promoted by some control bodies. There is little debate around the informative content of the project and the way it is processed, even among the approval bodies and city councils that partially verify a multitude of projects every day.
In addition, this thesis fills an educational gap, since the graphical expression courses have no opportunity to address these subjects, which lie at the core of professional practice.
Even though the reflective and analytical content of this thesis was high, it was not intended to end up as a procedures manual for architecture students. Nevertheless, the thesis supervisor, Professor Jaime Avellaneda Diaz-Grande, insisted that the hypotheses and proposed solutions be verified, and so it was decided to redraw an entire existing executive project in accordance with the guidelines derived from the thesis.
The result is thus a solid doctoral thesis and an original contribution to knowledge, on which future theses can be built concerning aspects that computer technology is already offering: 3D representations, dynamic drawings, use of colour, attachment of sound files, etc.
37

Chen, Brenden Chong. "Robust image hash functions using higher order spectra." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/61087/1/Brenden_Chen_Thesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve robustness of the extracted features, most randomization methods are linear and this is detrimental to the security aspects required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of the quantizer training on both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear and this is an essential requirement for non-invertibility. The method is also designed to produce features more suited for quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
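The randomization-quantization-encoding pipeline described above can be illustrated with a toy Python sketch. The random projection, the median threshold and the sizes below are placeholders; they do not correspond to the HOS/Radon construction or the trained quantizers studied in the thesis.

import numpy as np

def robust_hash(features, key=0, n_bits=64):
    """Toy robust-hash back end: random projection + median-threshold binarization.

    features : 1-D real-valued feature vector extracted from an image
    key      : seed standing in for the secret randomization key
    n_bits   : length of the binary hash
    """
    rng = np.random.default_rng(key)
    # Randomization stage: project the features onto key-dependent directions.
    # (A linear projection is used here only to keep the sketch short;
    # the thesis argues for non-linear randomization.)
    P = rng.standard_normal((n_bits, features.size))
    projected = P @ features
    # Quantization/encoding stage: one bit per projection, thresholded at the
    # median; the choice of threshold is exactly the quantizer-training issue
    # discussed in the abstract.
    return (projected > np.median(projected)).astype(np.uint8)

# Hypothetical usage: two slightly different versions of the same feature vector
f1 = np.random.rand(256)
f2 = f1 + 0.01 * np.random.randn(256)      # minor, non-malicious perturbation
h1, h2 = robust_hash(f1, key=42), robust_hash(f2, key=42)
print("Hamming distance:", int(np.sum(h1 != h2)), "of", h1.size, "bits")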
38

Carcolé, Carrubé Eduard. "Three-dimensional spatial distribution of scatterers in the crust by inversion analysis of S-wave coda envelopes. A case study of Gauribidanur seismic array site (Southern India) and Galeras volcano (South-western Colombia)." Doctoral thesis, Universitat Ramon Llull, 2006. http://hdl.handle.net/10803/9321.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, coda waves recorded by local seismographic networks will be analyzed to estimate the three-dimensional spatial distribution of scatterers (SDS). This will be done by using the single scattering approximation. This approach leads to a huge system of equations that cannot be solved by traditional methods. For the first time, we will use the Simultaneous Iterative Reconstruction Technique (SIRT) to solve this kind of system in seismological applications. SIRT is slow but provides a means to carry out the inversion with greater accuracy. There is also a very fast non-iterative method that allows the inversion to be carried out 10² times faster, with a higher resolution and reasonable accuracy: the Filtered Back-Projection (FBP). If one wishes to use this technique it is necessary to adapt it to the geometry of our problem. This will be done for the first time in this thesis. The theory necessary to carry out the adaptation will be developed and a simple expression will be derived to carry out the inversion.

FBP and SIRT are then used to determine the SDS in southern India. Results are almost independent of the inversion method used and they are frequency dependent. They show a remarkably uniform distribution of the scattering strength in the crust around GBA. However, a shallow (0-24 km) strong scattering structure, which is only visible at low frequencies, seems to coincide with the Closepet granitic batholith, which is the boundary between the eastern and western parts of the Dharwar craton.

Also, the SDS is estimated for the Galeras volcano, Colombia. Results reveal a highly non-uniform SDS. Strong scatterers show frequency dependence, which is interpreted in terms of the scale of the heterogeneities producing scattering. Two zones of strong scattering are detected: the shallower one is located at a depth from 4 km to 8 km under the summit, whereas the deeper one is imaged at a depth of ~37 km from the Earth's surface. Both zones may be correlated with the magmatic plumbing system beneath Galeras volcano. The second strong scattering zone is probably related to the deeper magma reservoir that feeds the system.
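For readers unfamiliar with SIRT, the Python sketch below shows the basic weighted iteration on a generic linear system A x = b. The matrix, the data and the number of iterations are synthetic placeholders, not the coda-envelope sensitivity kernels inverted in the thesis.

import numpy as np

def sirt(A, b, n_iter=200, relax=1.0):
    """Simultaneous Iterative Reconstruction Technique for A x = b.

    Each iteration back-projects the current residual, with rows and columns
    weighted by the row and column sums of A (the classical SIRT weighting).
    """
    A = np.asarray(A, dtype=float)
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    row_sums[row_sums == 0] = 1.0
    col_sums[col_sums == 0] = 1.0
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sums
        x += relax * (A.T @ residual) / col_sums
    return x

# Tiny synthetic test: the estimate should approach the known model x_true.
A = np.abs(np.random.rand(40, 10))
x_true = np.random.rand(10)
print(np.max(np.abs(sirt(A, A @ x_true, n_iter=2000) - x_true)))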
39

Reshef, Aymeric. "Dual-rotation C-arm cone-beam tomographic acquisition and reconstruction frameworks for low-contrast detection in brain soft-tissue imaging." Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0044.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
L’arceau interventionnel est un système d’imagerie rayons X temps réel. Il dispose d’une option tomographique qui, grâce à une rotation de l’arceau autour du patient, permet d’acquérir des images en coupes dont la résolution en contraste est plus faible que celle des tomodensitomètres diagnostiques, rendant l’information clinique des tissus mous du cerveau inexploitable. Nous proposons un nouveau mode d’acquisition et de reconstruction tomographiques sur arceau interventionnel pour l’amélioration de la détection des faibles contrastes en imagerie interventionnelle des tissus mous de la tête. Afin d’émuler un filtre « bow-tie » (en nœud papillon), une double acquisition est envisagée. Les spécificités de la double acquisition imposent la conception d’un algorithme de reconstruction itérative dédié, incluant le filtre rampe dans l’énergie de minimisation. En bifurquant des approches par rétro-projection filtrée vers celles par filtration des rétro-projections, une méthode de reconstruction directe, alternative à la précédente, est proposée pour les acquisitions doubles. Pour une acquisition simple, la méthode est assurée de faire aussi bien que l’algorithme de rétro-projection filtrée quel que soit l’échantillonnage angulaire en géométrie planaire, et offre une approximation alternative à l’algorithme de Feldkamp-Davis-Kress en géométrie conique. Nous montrons qu’avec peu ou pas de modifications aux schémas précédents, les deux méthodes de reconstruction (itérative et directe) s’adaptent bien à la reconstruction de régions d’intérêt, à laquelle l’acquisition double reste étroitement liée à travers son acquisition tronquée
Interventional C-arm systems are real-time X-ray imaging systems that can perform tomographic acquisitions by rotating the C-arm around the patient; however, C-arm cone-beam computed tomography (CBCT) achieves a lower contrast resolution than diagnostic CT, which is necessary in order to benefit from the clinical information of soft tissues in the brain. We propose a new C-arm CBCT acquisition and reconstruction framework to increase low-contrast detection in brain soft-tissue imaging. In order to emulate a bow-tie filter, a dual-rotation acquisition is proposed. To account for all the specificities of the dual-rotation acquisition, a dedicated iterative reconstruction algorithm is designed, which includes the ramp filter in the cost function. By switching from filtered backprojection (FBP) to backprojection-filtration (BPF) reconstruction methods, we propose an alternative, direct reconstruction method for dual-rotation acquisitions. For single-rotation acquisitions, the method is guaranteed to perform as well as FBP with arbitrarily coarse angular sampling in planar geometries, and provides a different approximation from the Feldkamp-Davis-Kress (FDK) algorithm in the cone-beam geometry. Although we used it to emulate a virtual bow-tie, our dual-rotation acquisition framework is intrinsically related to region-of-interest (ROI) imaging through the truncated acquisition. With few or no modifications of the proposed reconstruction methods, we successfully addressed the problem of ROI imaging in the context of dual-rotation acquisitions
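As a reminder of the ramp-filtering step that both the FBP baseline and the proposed cost function involve, here is a minimal FFT-based ramp filter for one row of a parallel-beam sinogram in Python. The sinogram is synthetic, and this sketch is neither the dual-rotation algorithm nor the BPF method of the thesis.

import numpy as np

def ramp_filter(sinogram_row, detector_spacing=1.0):
    """Apply the ramp filter |f| to one sinogram row in the Fourier domain."""
    freqs = np.fft.fftfreq(sinogram_row.size, d=detector_spacing)
    return np.real(np.fft.ifft(np.fft.fft(sinogram_row) * np.abs(freqs)))

# Hypothetical usage: filter every projection angle of a toy sinogram.
sinogram = np.random.rand(180, 256)            # 180 angles x 256 detector cells
filtered = np.array([ramp_filter(row) for row in sinogram])
# In the FBP ordering, the filtered rows are then back-projected onto the image
# grid; BPF methods back-project first and apply the filtering afterwards.
print(filtered.shape)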
40

Garla, Venkatakrishnaiah Sharath Chandra, and Harivinay Varadaraju. "Validation of Black-and-White Topology Optimization Designs." Thesis, Linköpings universitet, Mekanik och hållfasthetslära, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-174807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Topology optimization has seen rapid developments, with algorithms getting better and faster all the time. These new algorithms help reduce the lead time from concept development to a finished product. Simulation and post-processing of geometry are among the major development costs. Post-processing of this geometry also takes up a lot of time and is dependent on the quality of the geometry output from the solver to make the product ready for rapid prototyping or final production. The work done in this thesis deals with the post-processing of the results obtained from topology optimization algorithms which output the result as a 2D image. A suitable methodology is discussed in which this image is processed and converted into a CAD geometry, all while minimizing deviation in geometry, compliance and volume fraction. Further on, a validation of the designs is performed to measure the extracted geometry's deviation from the post-processed result. The workflow is coded using MATLAB and uses an image-based post-processing approach. The proposed workflow is tested on several numerical examples to assess its performance, limitations and numerical instabilities. The code written for the entire workflow is included as an appendix and can be downloaded from the website: https://github.com/M87K452b/postprocessing-topopt.
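A typical first step in such image-based post-processing is to binarize the grayscale density image while preserving the optimized volume fraction. The Python sketch below illustrates only this step; it is not taken from the authors' MATLAB code (available at the repository linked above), and the bisection threshold search and array sizes are assumptions made for the example.

import numpy as np

def threshold_to_volume_fraction(density, target_vf, tol=1e-3):
    """Binarize a grayscale density field so that the black-and-white design
    keeps (approximately) the target volume fraction.

    density   : 2-D array with values in [0, 1] (output image of the optimizer)
    target_vf : volume fraction of the original optimized design
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        t = 0.5 * (lo + hi)
        vf = np.mean(density >= t)
        if vf > target_vf:      # too much material kept -> raise the threshold
            lo = t
        else:                   # too little material kept -> lower the threshold
            hi = t
    t = 0.5 * (lo + hi)
    return (density >= t).astype(np.uint8), t

# Hypothetical usage on a synthetic density field
rho = np.clip(np.random.rand(120, 360), 0, 1)
bw, threshold = threshold_to_volume_fraction(rho, target_vf=0.4)
print("threshold:", round(threshold, 3), "achieved VF:", round(bw.mean(), 3))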
41

Долгіх, Володимир Миколайович, Владимир Николаевич Долгих, Volodymyr Mykolaiovych Dolhikh, and П. І. Стецюк. "О применении методов негладкой оптимизации для исследования эффективности сложных экономических систем." Thesis, Таврійський національний університет, 2013. http://essuir.sumdu.edu.ua/handle/123456789/58879.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Hitchcock, Yvonne Roslyn. "Elliptic curve cryptography for lightweight applications." Thesis, Queensland University of Technology, 2003. https://eprints.qut.edu.au/15838/1/Yvonne_Hitchcock_Thesis.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Elliptic curves were first proposed as a basis for public key cryptography in the mid-1980s. They provide public key cryptosystems based on the difficulty of the elliptic curve discrete logarithm problem (ECDLP), which is so called because of its similarity to the discrete logarithm problem (DLP) over the integers modulo a large prime. One benefit of elliptic curve cryptosystems (ECCs) is that they can use a much shorter key length than other public key cryptosystems to provide an equivalent level of security. For example, 160 bit ECCs are believed to provide about the same level of security as 1024 bit RSA. Also, the level of security provided by an ECC increases faster with key size than for integer based discrete logarithm (DL) or RSA cryptosystems. ECCs can also provide a faster implementation than RSA or DL systems, and use less bandwidth and power. These issues can be crucial in lightweight applications such as smart cards. In the last few years, ECCs have been included or proposed for inclusion in internationally recognized standards. Thus elliptic curve cryptography is set to become an integral part of lightweight applications in the immediate future. This thesis presents an analysis of several important issues for ECCs on lightweight devices. It begins with an introduction to elliptic curves and the algorithms required to implement an ECC. It then gives an analysis of the speed, code size and memory usage of various possible implementation options. Enough details are presented to enable an implementer to choose for implementation those algorithms which give the greatest speed whilst conforming to the code size and RAM restrictions of a particular lightweight device. Recommendations are made for new functions to be included on coprocessors for lightweight devices to support ECC implementations. Another issue of concern for implementers is the side-channel attacks that have recently been proposed. They obtain information about the cryptosystem by measuring side-channel information such as power consumption and processing time, and the information is then used to break implementations that have not incorporated appropriate defences. A new method of defence to protect an implementation from the simple power analysis (SPA) method of attack is presented in this thesis. It requires 44% fewer additions and 11% more doublings than the commonly recommended defence of performing a point addition in every loop of the binary scalar multiplication algorithm. The algorithm forms a contribution to the current range of possible SPA defences which has a good speed but low memory usage. Another topic of paramount importance to ECCs for lightweight applications is whether the security of fixed curves is equivalent to that of random curves. Because of the inability of lightweight devices to generate secure random curves, fixed curves are used in such devices. These curves provide the additional advantage of requiring less bandwidth, code size and processing time. However, it is intuitively obvious that a large precomputation to aid in the breaking of the elliptic curve discrete logarithm problem (ECDLP) can be made for a fixed curve which would be unavailable for a random curve. Therefore, it would appear that fixed curves are less secure than random curves, but quantifying the loss of security is much more difficult.
The thesis performs an examination of fixed curve security taking this observation into account, and includes a definition of equivalent security and an analysis of a variation of Pollard's rho method where computations from solutions of previous ECDLPs can be used to solve subsequent ECDLPs on the same curve. A lower bound on the expected time to solve such ECDLPs using this method is presented, as well as an approximation of the expected time remaining to solve an ECDLP when a given size of precomputation is available. It is concluded that adding a total of 11 bits to the size of a fixed curve provides an equivalent level of security compared to random curves. The final part of the thesis deals with proofs of security of key exchange protocols in the Canetti-Krawczyk proof model. This model has been used since it offers the advantage of a modular proof with reusable components. Firstly a password-based authentication mechanism and its security proof are discussed, followed by an analysis of the use of the authentication mechanism in key exchange protocols. The Canetti-Krawczyk model is then used to examine secure tripartite (three party) key exchange protocols. Tripartite key exchange protocols are particularly suited to ECCs because of the availability of bilinear mappings on elliptic curves, which allow more efficient tripartite key exchange protocols.
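For context, the Python sketch below shows textbook double-and-add scalar multiplication on a toy short-Weierstrass curve, together with the naive dummy-addition SPA countermeasure that the thesis's faster defence is compared against. The curve parameters are illustrative and far too small for any real use, and this is not the improved defence proposed in the thesis.

# Toy curve y^2 = x^3 + a*x + b over F_p; None represents the point at infinity.
p, a, b = 97, 2, 3
G = (3, 6)          # on the curve: 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97)

def inv_mod(x):
    return pow(x, p - 2, p)            # modular inverse via Fermat's little theorem

def point_add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                     # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1) % p   # tangent slope (doubling)
    else:
        lam = (y2 - y1) * inv_mod((x2 - x1) % p) % p    # chord slope (addition)
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P, spa_protected=False):
    """Left-to-right binary double-and-add. With spa_protected=True a dummy
    addition is performed on 0-bits, so the add/double pattern seen in a power
    trace no longer reveals the key bits (the simple countermeasure that the
    thesis's defence improves upon)."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R)             # one doubling per key bit
        if bit == "1":
            R = point_add(R, P)         # real addition on 1-bits
        elif spa_protected:
            _ = point_add(R, P)         # dummy addition on 0-bits, result discarded
    return R

print(scalar_mult(29, G), scalar_mult(29, G, spa_protected=True))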
43

Hitchcock, Yvonne Roslyn. "Elliptic Curve Cryptography for Lightweight Applications." Queensland University of Technology, 2003. http://eprints.qut.edu.au/15838/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Elliptic curves were first proposed as a basis for public key cryptography in the mid-1980s. They provide public key cryptosystems based on the difficulty of the elliptic curve discrete logarithm problem (ECDLP), which is so called because of its similarity to the discrete logarithm problem (DLP) over the integers modulo a large prime. One benefit of elliptic curve cryptosystems (ECCs) is that they can use a much shorter key length than other public key cryptosystems to provide an equivalent level of security. For example, 160 bit ECCs are believed to provide about the same level of security as 1024 bit RSA. Also, the level of security provided by an ECC increases faster with key size than for integer based discrete logarithm (DL) or RSA cryptosystems. ECCs can also provide a faster implementation than RSA or DL systems, and use less bandwidth and power. These issues can be crucial in lightweight applications such as smart cards. In the last few years, ECCs have been included or proposed for inclusion in internationally recognized standards. Thus elliptic curve cryptography is set to become an integral part of lightweight applications in the immediate future. This thesis presents an analysis of several important issues for ECCs on lightweight devices. It begins with an introduction to elliptic curves and the algorithms required to implement an ECC. It then gives an analysis of the speed, code size and memory usage of various possible implementation options. Enough details are presented to enable an implementer to choose for implementation those algorithms which give the greatest speed whilst conforming to the code size and RAM restrictions of a particular lightweight device. Recommendations are made for new functions to be included on coprocessors for lightweight devices to support ECC implementations. Another issue of concern for implementers is the side-channel attacks that have recently been proposed. They obtain information about the cryptosystem by measuring side-channel information such as power consumption and processing time, and the information is then used to break implementations that have not incorporated appropriate defences. A new method of defence to protect an implementation from the simple power analysis (SPA) method of attack is presented in this thesis. It requires 44% fewer additions and 11% more doublings than the commonly recommended defence of performing a point addition in every loop of the binary scalar multiplication algorithm. The algorithm forms a contribution to the current range of possible SPA defences which has a good speed but low memory usage. Another topic of paramount importance to ECCs for lightweight applications is whether the security of fixed curves is equivalent to that of random curves. Because of the inability of lightweight devices to generate secure random curves, fixed curves are used in such devices. These curves provide the additional advantage of requiring less bandwidth, code size and processing time. However, it is intuitively obvious that a large precomputation to aid in the breaking of the elliptic curve discrete logarithm problem (ECDLP) can be made for a fixed curve which would be unavailable for a random curve. Therefore, it would appear that fixed curves are less secure than random curves, but quantifying the loss of security is much more difficult.
The thesis performs an examination of fixed curve security taking this observation into account, and includes a definition of equivalent security and an analysis of a variation of Pollard's rho method where computations from solutions of previous ECDLPs can be used to solve subsequent ECDLPs on the same curve. A lower bound on the expected time to solve such ECDLPs using this method is presented, as well as an approximation of the expected time remaining to solve an ECDLP when a given size of precomputation is available. It is concluded that adding a total of 11 bits to the size of a fixed curve provides an equivalent level of security compared to random curves. The final part of the thesis deals with proofs of security of key exchange protocols in the Canetti-Krawczyk proof model. This model has been used since it offers the advantage of a modular proof with reusable components. Firstly a password-based authentication mechanism and its security proof are discussed, followed by an analysis of the use of the authentication mechanism in key exchange protocols. The Canetti-Krawczyk model is then used to examine secure tripartite (three party) key exchange protocols. Tripartite key exchange protocols are particularly suited to ECCs because of the availability of bilinear mappings on elliptic curves, which allow more efficient tripartite key exchange protocols.
44

Zakaryan, Taron. "Contribution à l'analyse variationnelle : stabilité des cônes tangents et normaux et convexité des ensembles de Chebyshev." Thesis, Dijon, 2014. http://www.theses.fr/2014DIJOS073/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Le but de cette thèse est d'étudier les trois problèmes suivants : 1) On s'intéresse à la stabilité des cônes normaux et des sous-différentiels via deux types de convergence d'ensembles et de fonctions : La convergence au sens de Mosco et celle d'Attouch-Wets. Les résultats obtenus peuvent être vus comme une extension du théorème d'Attouch aux fonctions non nécessairement convexes sur des espaces de Banach localement uniformément convexes. 2) Pour une bornologie β donnée sur un espace de Banach X, on étudie la validité de la formule suivante (…). Ici Tβ(C; x) et Tc(C; x) désignent le β-cône tangent et le cône tangent de Clarke à C en x. On montre que si X × X est ∂β-« trusted », alors cette formule est valable pour tout ensemble fermé non vide C ⊂ X et x ∈ C. Cette classe d'espaces contient les espaces ayant une norme équivalente β-différentiable, et plus généralement les espaces possédant une fonction "bosse" lipschitzienne et β-différentiable. Comme conséquence, on obtient que pour la bornologie de Fréchet, cette formule caractérise les espaces d'Asplund. 3) On examine la convexité des ensembles de Chebyshev. Il est bien connu que, dans un espace normé réflexif ayant la propriété Kadec-Klee, tout ensemble de Chebyshev faiblement fermé est convexe. On démontre que la condition de faible fermeture peut être remplacée par la fermeture faible locale, c'est-à-dire pour tout x ∈ C il existe ε > 0 tel que C ∩ B(x, ε) est faiblement fermé. On montre aussi que la propriété Kadec-Klee n'est plus exigée lorsque l'ensemble de Chebyshev est représenté comme une union d'ensembles convexes fermés
The aim of this thesis is to study the following three problems: 1) We are concerned with the behavior of normal cones and subdifferentials with respect to two types of convergence of sets and functions: Mosco and Attouch-Wets convergences. Our analysis is devoted to proximal, Fréchet, and Mordukhovich limiting normal cones and subdifferentials. The results obtained can be seen as extensions of Attouch theorem to the context of non-convex functions on locally uniformly convex Banach space. 2) For a given bornology β on a Banach space X we are interested in the validity of the following "lim inf" formula (…). Here Tβ(C; x) and Tc(C; x) denote the β-tangent cone and the Clarke tangent cone to C at x. We proved that it holds true for every closed set C ⊂ X and any x ∈ C, provided that the space X × X is ∂β-trusted. The trustworthiness includes spaces with an equivalent β-differentiable norm or more generally with a Lipschitz β-differentiable bump function. As a consequence, we show that for the Fréchet bornology, this "lim inf" formula characterizes in fact the Asplund property of X. 3) We investigate the convexity of Chebyshev sets. It is well known that in a smooth reflexive Banach space with the Kadec-Klee property every weakly closed Chebyshev subset is convex. We prove that the condition of the weak closedness can be replaced by the local weak closedness, that is, for any x ∈ C there is ε > 0 such that C ∩ B(x, ε) is weakly closed. We also prove that the Kadec-Klee property is not required when the Chebyshev set is represented by a finite union of closed convex sets
45

Waidacher, Christoph. "Charge properties of cuprates: ground state and excitations." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2000. http://nbn-resolving.de/urn:nbn:de:swb:14-998985918593-73513.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis analyzes charge properties of (undoped) cuprate compounds from a theoretical point of view. The central question considered here is: How does the dimensionality of the Cu-O sub-structure influence its charge degrees of freedom? The model used to describe the Cu-O sub-structure is the three- (or multi-) band Hubbard model. Analytical approaches are employed (ground-state formalism for strongly correlated systems, Mori-Zwanzig projection technique) as well as numerical simulations (Projector Quantum Monte Carlo, exact diagonalization). Several results are compared to experimental data. The following materials have been chosen as candidates to represent different Cu-O sub-structures: Bi2CuO4 (isolated CuO4 plaquettes), Li2CuO2 (chains of edge-sharing plaquettes), Sr2CuO3 (chains of corner-sharing plaquettes), and Sr2CuO2Cl2 (planes of plaquettes). Several results presented in this thesis are valid for other cuprates as well. Two different aspects of charge properties are analyzed: 1) Charge properties of the ground state 2) Charge excitations. (abridged version)
46

Waidacher, Christoph. "Charge properties of cuprates: ground state and excitations." Doctoral thesis, Technische Universität Dresden, 1999. https://tud.qucosa.de/id/qucosa%3A24786.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis analyzes charge properties of (undoped) cuprate compounds from a theoretical point of view. The central question considered here is: How does the dimensionality of the Cu-O sub-structure influence its charge degrees of freedom? The model used to describe the Cu-O sub-structure is the three- (or multi-) band Hubbard model. Analytical approaches are employed (ground-state formalism for strongly correlated systems, Mori-Zwanzig projection technique) as well as numerical simulations (Projector Quantum Monte Carlo, exact diagonalization). Several results are compared to experimental data. The following materials have been chosen as candidates to represent different Cu-O sub-structures: Bi2CuO4 (isolated CuO4 plaquettes), Li2CuO2 (chains of edge-sharing plaquettes), Sr2CuO3 (chains of corner-sharing plaquettes), and Sr2CuO2Cl2 (planes of plaquettes). Several results presented in this thesis are valid for other cuprates as well. Two different aspects of charge properties are analyzed: 1) Charge properties of the ground state 2) Charge excitations. (abridged version)
47

Liška, Ondřej. "Lineární kódy a projektivní rovina řádu 10." Master's thesis, 2013. http://www.nusl.cz/ntk/nusl-320996.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The projective plane of order 10 does not exist. The proof of this assertion was completed in 1989 and is based on the nonexistence of a binary code C generated by the incidence vectors of the plane's lines. As part of the proof of the nonexistence of the code C, the coefficients of its weight enumerator were studied. It was shown that the coefficients A12, A15, A16 and A19 have to be equal to zero, which contradicted other findings about the relationships among the coefficients. This diploma thesis analyses the phases of the proof in detail and, in several places, enhances them with new observations and simplifications. Part of the proof is generalized to projective planes of order 8m + 2.
48

Wu, Min Tzu, and 吳敏子. "Fairy Tale Code and Self Projection." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/5qp9pq.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's
Taipei National University of the Arts
Department of Fine Arts, In-Service Master's Program
98
Through Fairy Tale Code and Self Projection, I attempt to explore my inner self, recognizing how fairy tales and their images interfered with my body and my values during childhood. While pondering the concept of "Love", one of the main issues of this thesis, I found out how "Love" becomes a formulaic visual icon, elaborately decorated within consumption culture. "Love" is commercially packaged, circulated and utilized over and over in the media market. Based on the texts of selected fairy tales, my purpose is to explore how I accepted the codes of objects that were transformed from fairy-tale characters. I propose to use "codes" from fairy-tale characters to reflect another, more meaningful intention and to reassemble an implied narrative in painting.
49

Basu, Pranab. "On Linear Codes in Projective Spaces." Thesis, 2019. https://etd.iisc.ac.in/handle/2005/4418.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The projective space $\mathbb{P}_q(n)$ of order $n$ over a finite field $\mathbb{F}_q$ is defined as the collection of all subspaces of the ambient space $\mathbb{F}_q^n$. The Grassmannian $\mathcal{G}_q(n, k)$ is the set of all members of $\mathbb{P}_q(n)$ with fixed dimension $k$. The subspace distance function, defined as $d_S(X, Y) = \dim(X+Y) - \dim(X \cap Y)$, serves as a suitable metric to turn the projective space $\mathbb{P}_q(n)$ into a metric space. A code in the projective space $\mathbb{P}_q(n)$ is a subset of $\mathbb{P}_q(n)$. Projective space has been shown previously by Koetter and Kschischang to be the ideal coding space for error and erasure correction in random network coding. Linear codes find huge applications in classical error-correction. The notion of linearity was introduced in codes in projective space recently. A subspace code $\mathcal{U}$ in $\mathbb{P}_q(n)$ that contains $\left\{ 0\right\}$ is linear if there exists a function $\boxplus : \mathcal{U} \times \mathcal{U} \rightarrow \mathcal{U}$ such that $(\mathcal{U}, \boxplus)$ is an abelian group with identity element as $\left\{ 0\right\}$, all elements of $\mathcal{U}$ are idempotent with respect to $\boxplus$, and the operation $\boxplus$ is isometric. It was conjectured that the size of any linear subspace code in $\mathbb{P}_q(n)$ can be at most $2^n$. In this work, we focus on different classes of linear subspace codes with a view to proving the conjectured upper bound for them as well as characterizing the maximal cases. We study connections of linear codes with lattices and a few combinatorial objects. Binary linear block codes and linear subspace codes are subspaces of a finite vector space over $\mathbb{F}_2$. We identify common features in their structures and prove analogous results for subspace codes including the Union-Intersection theorem. We investigate a class of linear subspace codes which are closed under intersection and show that these codes are equivalent to codes derived from a partition of a linearly independent set. The set of indecomposable codewords in a linear code closed under intersection is proved to generate the code. We verify the conjectured upper bound of $2^n$ for this class of linear codes and show that the maximal codes are essentially codes derived from a fixed basis. We prove that linear codes that are sublattices of the projective lattice are precisely those closed under intersection. The sublattice is geometric distributive. We also give an alternate definition of codes derived from a fixed basis and prove that it is equivalent to the one presented in the existing literature. A code in a projective space is equidistant if the distance between each pair of distinct codewords is equal. A similarity in structure is established between equidistant linear subspace codes and $\lambda$-intersecting families, which are studied in the combinatorics of finite sets. We prove the conjectured bound for equidistant linear codes in $\mathbb{P}_2(n)$ and also determine the extremal case which is shown to be closely related to the Fano plane. Equidistant linear codes attaining maximum cardinality for all values of $n \ge 4$ are constructed. Such constructions are shown as $q$-analogs of a particular class of intersecting families called sunflowers. For positive integer values of $r$, construction of equidistant linear codes in $\mathbb{P}_q(n)$ is shown to be possible from any $(2^r - 1)$-subset of a Grassmannian $\mathcal{G}_q(n, 2k)$ with a certain intersecting property.
This proves that constant distance decouples the translation invariance on the subspace distance metric for linear codes. We generalize linear subspace codes as $L$-intersecting families and give a construction for $|L| = 2$ that attains size $2^n$ with larger minimum distance than codes derived from a fixed basis. The conjectured upper bound is proved to hold for $L$-intersecting codes when $|L| = 2, q = 2$.
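The subspace distance used throughout can be computed directly from spanning sets by rank computations over GF(2). The short Python sketch below does this for two invented subspaces of F_2^4; it illustrates only the metric, not the codes constructed in the thesis.

import numpy as np

def rank_gf2(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = (np.array(M, dtype=np.uint8) % 2).copy()
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]       # move the pivot row up
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                   # eliminate column c elsewhere
        rank += 1
        if rank == rows:
            break
    return rank

def subspace_distance(X, Y):
    """d_S(X, Y) = dim(X + Y) - dim(X ∩ Y) = 2*dim(X + Y) - dim X - dim Y,
    with X and Y given by spanning sets (one generator per row) over GF(2)."""
    dX, dY = rank_gf2(X), rank_gf2(Y)
    dSum = rank_gf2(np.vstack([X, Y]))
    return 2 * dSum - dX - dY

# Two subspaces of F_2^4 given by generator rows (illustrative example)
X = [[1, 0, 0, 0], [0, 1, 0, 0]]            # span{e1, e2}
Y = [[0, 1, 0, 0], [0, 0, 1, 0]]            # span{e2, e3}
print(subspace_distance(X, Y))              # dim sum = 3, dim intersection = 1 -> 2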
50

Khaleghi, Azadeh. "Projective Space Codes for the Injection Metric." Thesis, 2009. http://hdl.handle.net/1807/18790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In the context of error control in random linear network coding, it is useful to construct codes that comprise well-separated collections of subspaces of a vector space over a finite field. This thesis concerns the construction of non-constant-dimension projective space codes for adversarial error-correction in random linear network coding. The metric used is the so-called injection distance introduced by Silva and Kschischang, which perfectly reflects the adversarial nature of the channel. A Gilbert-Varshamov-type bound for such codes is derived and its asymptotic behaviour is analysed. It is shown that in the limit as the ambient space dimension approaches infinity, the Gilbert-Varshamov bound on the size of non-constant-dimension codes behaves similarly to the Gilbert-Varshamov bound on the size of constant-dimension codes contained within the largest Grassmannians in the projective space. Using the code-construction framework of Etzion and Silberstein, new non-constant-dimension codes are constructed; these codes contain more codewords than comparable codes designed for the subspace metric. To our knowledge this work is the first to address the construction of non-constant-dimension codes designed for the injection metric.
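For comparison with the subspace metric, the injection distance of two subspaces can likewise be computed from spanning sets using the identity d_I(X, Y) = max(dim X, dim Y) - dim(X ∩ Y) = dim(X + Y) - min(dim X, dim Y). In the Python sketch below the subspaces of F_2^4 are invented for illustration only.

def rank_gf2(vectors):
    """GF(2) rank of integer-encoded vectors (each int is one row's bitmask)."""
    pivots = {}                          # highest set bit -> reduced basis vector
    rank = 0
    for v in vectors:
        while v:
            top = v.bit_length() - 1
            if top not in pivots:
                pivots[top] = v
                rank += 1
                break
            v ^= pivots[top]             # reduce by the pivot sharing the top bit
    return rank

def injection_distance(X, Y):
    """d_I(X, Y) = dim(X + Y) - min(dim X, dim Y), with X and Y as bitmask lists."""
    dX, dY = rank_gf2(X), rank_gf2(Y)
    return rank_gf2(X + Y) - min(dX, dY)

# Subspaces of F_2^4 encoded as bitmask rows: e1=0b1000, e2=0b0100, e3=0b0010, e4=0b0001
X = [0b1000, 0b0100]                     # span{e1, e2}, dimension 2
Y = [0b0100, 0b0010, 0b0001]             # span{e2, e3, e4}, dimension 3
print(injection_distance(X, Y))          # dim(X+Y)=4, min dim = 2 -> injection distance 2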
