Dissertations / Theses on the topic 'Regularisation'

Consult the top 50 dissertations / theses for your research on the topic 'Regularisation.'


1

Schulze, Bert-Wolfgang, Alexander Shlapunov, and Nikolai Tarkhanov. "Regularisation of mixed boundary problems." Universität Potsdam, 1999. http://opus.kobv.de/ubp/volltexte/2008/2545/.

2

Shaban, Neil Tamim. "Dimensional regularisation and gauge theories." Thesis, Durham University, 1994. http://etheses.dur.ac.uk/5103/.

Abstract:
Dimensional regularisation is formulated without using the assumption that ∫ d^D k (k^2)^n = 0. Alternative definitions of ε_{κλμν} and γ^5 are also considered. In the reformulated scheme, quadratic divergences are present, in general, in the scalar and gauge boson self-energies, and remain unregularised. The possible cancellation of such divergences is investigated. Phenomenological aspects of unified gauge theories are studied.
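For context (a standard convention, not quoted from the thesis), the assumption being dropped here is the usual dimensional-regularisation rule that scaleless integrals vanish:

```latex
% Usual dimensional-regularisation prescription, relaxed in the reformulation
% summarised above: scaleless integrals are set to zero, which silently discards
% the quadratic divergences that the thesis keeps track of.
\int \frac{d^{D}k}{(2\pi)^{D}}\,\big(k^{2}\big)^{n} \;=\; 0 \qquad \text{for all } n .
```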
3

Marco, Jean-Pierre. "Prolongement et régularisation des systèmes différentiables." Paris 6, 1991. http://www.theses.fr/1991PA066570.

Abstract:
The first chapter presents a universal method for regularising differentiable systems on non-compact manifolds, generalising an idea of Souriau for the Kepler problem. This construction applies to the classification of the usual regularisations (Levi-Civita, Moser, McGehee), called prolongations here. Prolongation methods fall into two classes, transverse prolongations (Moser) and tangent prolongations (McGehee), and an intrinsic topological study is given in both cases. In the second chapter, some non-integrability results obtained by Kozlov, Taimanov and Bolotin in the analytic case are transposed to Bott integrals. A transverse prolongation method is then used to prove the non-integrability of several problems of n fixed centres, generalising Bolotin's approach. In the third chapter, an error is corrected in Henard's proof of the existence of second-species solutions in the restricted three-body problem, from which some non-existence results for punctured invariant tori in the problem are deduced.
4

Saadi, Kamel. "Efficient regularisation of least-squares kernel machines." Thesis, University of East Anglia, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.522281.

5

El Anbari, Mohammed. "Regularisation and variable selection using penalized likelihood." PhD thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00661689.

Abstract:
We are interested in variable selection in linear regression models. This research is motivated by recent developments in microarrays, proteomics and brain imaging, among others. We study this problem from both frequentist and Bayesian viewpoints. In a frequentist framework, we propose methods to deal with the problem of variable selection when the number of variables is much larger than the sample size, possibly in the presence of additional structure in the predictor variables, such as high correlations or an ordering between successive variables. The performance of the proposed methods is theoretically investigated; we prove that, under regularity conditions, the proposed estimators possess good statistical properties, such as sparsity oracle inequalities, variable selection consistency and asymptotic normality. In a Bayesian framework, we propose a global noninformative approach for Bayesian variable selection. In this thesis, we pay special attention to two calibration-free hierarchical Zellner g-priors. The first is the Jeffreys prior, which is not location invariant. The second avoids this problem by only considering models with at least one variable. The practical performance of the proposed methods is illustrated through numerical experiments on simulated and real-world datasets, with a comparison between Bayesian and frequentist approaches under a low-information constraint when the number of variables is almost equal to the number of observations.
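For orientation (illustrative notation, not the thesis' own), penalised-likelihood estimators of the kind studied in such work take the generic form below, with the Lasso as the simplest sparsity-inducing case:

```latex
% Generic penalised-likelihood variable selection with p predictors and n samples, p >> n:
% ell_n is the log-likelihood (Gaussian for linear regression), lambda > 0 tunes sparsity.
\hat{\beta} \;=\; \arg\min_{\beta \in \mathbb{R}^{p}}
\Big\{ -\tfrac{1}{n}\,\ell_{n}(\beta) \;+\; \lambda \sum_{j=1}^{p} |\beta_{j}| \Big\},
\qquad p \gg n .
```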
6

Borsic, Andrea. "Regularisation methods for imaging from electrical measurements." Thesis, Oxford Brookes University, 2002. https://radar.brookes.ac.uk/radar/items/4837c97a-d3ff-4521-a1df-f32a378534d2/1/.

Abstract:
In Electrical Impedance Tomography the conductivity of an object is estimated from boundary measurements. An array of electrodes is attached to the surface of the object and current stimuli are applied via these electrodes. The resulting voltages are measured. The process of estimating the conductivity as a function of space inside the object from voltage measurements at the surface is called reconstruction. Mathematically the EIT reconstruction is a non-linear inverse problem, the stable solution of which requires regularisation methods. Most common regularisation methods impose that the reconstructed image should be smooth. Such methods confer stability to the reconstruction process, but limit the capability of describing sharp variations in the sought parameter. In this thesis two new methods of regularisation are proposed. The first method, Gaussian anisotropic regularisation, enhances the reconstruction of sharp conductivity changes occurring at the interface between a contrasting object and the background. As such changes are step changes, reconstruction with traditional smoothing regularisation techniques is unsatisfactory. The Gaussian anisotropic filtering works by incorporating prior structural information. The approximate knowledge of the shapes of contrasts allows us to relax the smoothness in the direction normal to the expected boundary. The construction of Gaussian regularisation filters that express such directional properties on the basis of the structural information is discussed, and the results of numerical experiments are analysed. The method gives good results when the actual conductivity distribution is in accordance with the prior information. When the conductivity distribution violates the prior information the method is still capable of properly locating the regions of contrast. The second part of the thesis is concerned with regularisation via the total variation functional. This functional allows the reconstruction of discontinuous parameters. The properties of the functional are briefly introduced, and an application in inverse problems in image denoising is shown. As the functional is non-differentiable, numerical difficulties are encountered in its use. The aim is therefore to propose an efficient numerical implementation for application in EIT. Several well known optimisation methods are analysed, as possible candidates, by theoretical considerations and by numerical experiments. Such methods are shown to be inefficient. The application of recent optimisation methods called primal-dual interior point methods is analysed by theoretical considerations and by numerical experiments, and an efficient and stable algorithm is developed. Numerical experiments demonstrate the capability of the algorithm in reconstructing sharp conductivity profiles.
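As a point of reference (generic notation, not taken from the thesis), the regularised EIT reconstruction problem discussed here can be written as a penalised least-squares fit, with the total variation prior replacing the usual smoothness prior:

```latex
% Illustrative regularised EIT reconstruction: sigma is the conductivity,
% V(sigma) the forward-modelled electrode voltages, V_meas the measurements.
\hat{\sigma} \;=\; \arg\min_{\sigma}\;
\tfrac12\,\|V(\sigma)-V_{\mathrm{meas}}\|_{2}^{2} \;+\; \alpha\,R(\sigma),
\qquad
R_{\mathrm{TV}}(\sigma) \;=\; \int_{\Omega} |\nabla\sigma|\,dx .
% A quadratic prior R(sigma) = ||L sigma||_2^2 stabilises the problem but smooths
% sharp conductivity jumps; the total variation prior admits discontinuities at the
% cost of non-differentiability where nabla sigma = 0, hence the primal-dual methods above.
```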
7

Goutte, Cyril. "Apprentissage statistique et régularisation pour la régression." Paris 6, 1997. http://www.theses.fr/1997PA066663.

Abstract:
The subject of this thesis is the study and use of statistical learning and regularisation for regression problems. Particular attention is paid to system identification and time-series modelling, using linear models on the one hand and non-linear connectionist models on the other. Linear and non-linear parametric regression are briefly presented, and the limits of plain regression are illustrated using the concept of generalisation error. Defined in this way, these problems are ill-posed and therefore require regularisation in order to obtain proper solutions. This introduces one or more hyper-parameters controlling the level of regularisation, which are optimised by estimating the generalisation error; several methods are presented for this purpose. These developments are used to tackle two particular problems. The first is the determination of the inputs needed to model a time series, via an iterative method based on estimates of generalisation. The second is the study of a particular regularisation functional which has the advantage of pruning the unnecessary parameters of the model in conjunction with its regularising effect. This last part uses Bayesian estimators, which are also presented in general terms in the thesis.
8

Duignan, Nathan. "On the Regularisation of Simultaneous Binary Collisions." Thesis, The University of Sydney, 2019. https://hdl.handle.net/2123/21315.

Abstract:
This dissertation contains work on the simultaneous binary collision in the n-body problem. Martínez and Simó have conjectured that removal of the singularity at this collision via block regularisation results in a regularised flow that is no more than C^(8/3) differentiable with respect to initial conditions. Remarkably, the same authors proved the conjecture for the collinear 4-body problem. The conjecture remains open for the planar case or for n > 4. This thesis explores the loss of differentiability in the collinear and planar 4-body problem. In the collinear problem, a new proof is provided of the C^(8/3)-regularisation. In the planar problem, a proof that the simultaneous binary collisions are at least C^2-regularisable is given. In both cases a remarkable link between the finite differentiability and the inability to construct a set of integrals local to the singularities is established. The theoretical framework for improving the C^2 result in the plane is established. The method of proof in both cases brings together the theory of blow-up, normal forms, hyperbolic transitions, and computation of regular transition maps to explicitly compute an asymptotic expansion of the transition past the singularities. These tools are first explored in novel work on the regularisation of a generic class of degenerate singularities in planar vector fields. In particular, a relatively simple perturbation of an example derived from the 4-body problem is shown to be C^(4/3). However, the study of simultaneous binary collisions requires that each of these tools be extended to higher dimensions, in particular to manifolds of normally hyperbolic fixed points. General theory on normal forms and asymptotic properties of nearby transitions of such manifolds are detailed. The normal forms are studied in the formal and C^k categories. The hyperbolic transitions are shown to have similar properties to the well studied Dulac maps of planar saddles.
9

Papafitsoros, Konstantinos. "Novel higher order regularisation methods for image reconstruction." Thesis, University of Cambridge, 2015. https://www.repository.cam.ac.uk/handle/1810/246692.

Abstract:
In this thesis we study novel higher order total variation-based variational methods for digital image reconstruction. These methods are formulated in the context of Tikhonov regularisation. We focus on regularisation techniques in which the regulariser incorporates second order derivatives or a sophisticated combination of first and second order derivatives. The introduction of higher order derivatives in the regularisation process has been shown to be an advantage over the classical first order case, i.e., total variation regularisation, as classical artifacts such as the staircasing effect are significantly reduced or totally eliminated. Also in image inpainting the introduction of higher order derivatives in the regulariser turns out to be crucial to achieve interpolation across large gaps. First, we introduce, analyse and implement a combined first and second order regularisation method with applications in image denoising, deblurring and inpainting. The method, numerically realised by the split Bregman algorithm, is computationally efficient and capable of giving comparable results with total generalised variation (TGV), a state of the art higher order method. An additional experimental analysis is performed for image inpainting and an online demo is provided on the IPOL website (Image Processing Online). We also compute and study properties of exact solutions of the one dimensional total generalised variation problem with L^{2} data fitting term, for simple piecewise affine data functions, with or without jumps. This gives insight into how this type of regularisation behaves and unravels the role of the TGV parameters. Finally, we introduce, study and analyse a novel non-local Hessian functional. We prove localisations of the non-local Hessian to the local analogue in several topologies and our analysis results in derivative-free characterisations of higher order Sobolev and BV spaces. An alternative formulation of a non-local Hessian functional is also introduced which is able to produce piecewise affine reconstructions in image denoising, outperforming TGV.
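For orientation (notation assumed, not quoted from the thesis), the combined first- and second-order models and the TGV regulariser referred to above have the following generic forms:

```latex
% Combined first/second-order variational model: u is the reconstructed image,
% f the data, A the forward operator (identity for denoising).
\min_{u}\; \tfrac12\|Au-f\|_{2}^{2}
\;+\; \alpha\!\int_{\Omega}\!|\nabla u|\,dx
\;+\; \beta\!\int_{\Omega}\!|\nabla^{2}u|\,dx .
% Second-order total generalised variation balances the two orders through an
% auxiliary field w (E w denotes the symmetrised gradient of w):
\mathrm{TGV}^{2}_{\alpha,\beta}(u) \;=\;
\min_{w}\; \alpha\!\int_{\Omega}\!|\nabla u - w|\,dx \;+\; \beta\!\int_{\Omega}\!|\mathcal{E}w|\,dx .
```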
10

Battle, David John. "Maximum Entropy Regularisation Applied to Ultrasonic Image Reconstruction." University of Sydney. Electrical Engineering, 1999. http://hdl.handle.net/2123/842.

Abstract:
Image reconstruction, in common with many other inverse problems, is often mathematically ill-posed in the sense that solutions are neither stable nor unique. Ultrasonic image reconstruction is particularly notorious in this regard, with narrow transducer bandwidths and limited - sometimes sparsely sampled apertures posing formidable difficulties for conventional signal processing. To overcome these difficulties, some form of regularisation is mandatory, whereby the ill-posed problem is restated as a closely related, well-posed problem, and then solved uniquely. This thesis explores the application of maximum entropy (MaxEnt) regularisation to the problem of reconstructing complex-valued imagery from sparsely sampled coherent ultrasonic field data, with particular emphasis on three-dimensional problems in the non-destructive evaluation (NDE) of materials. MaxEnt has not previously been applied to this class of problem, and yet in comparison with many other approaches to image reconstruction, it emerges as the clear leader in terms of resolution and overall image quality. To account for this performance, it is argued that the default image model used with MaxEnt is particularly meaningful in cases of ultrasonic scattering by objects embedded in homogeneous media. To establish physical and mathematical insights into the forward problem, linear equations describing scattering from both penetrable and impenetrable objects are first derived using the Born and physical optics approximations respectively. These equations are then expressed as a shift-invariant computational model that explicitly incorporates sparse sampling. To validate this model, time-domain scattering responses are computed and compared with analytical solutions for a simple canonical test case drawn from the field of NDE. The responses computed via the numerical model are shown to accurately reproduce the analytical responses. To solve inverse scattering problems via MaxEnt, the robust Cambridge algorithm is generalised to the complex domain and extended to handle broadband (multiple-frequency) data. Two versions of the augmented algorithm are then compared with a range of other algorithms, including several linearly regularised algorithms and lastly, due to its acknowledged status as a competitor with MaxEnt in radio-astronomy, the non-linear CLEAN algorithm. These comparisons are made through simulated 3-D imaging experiments under conditions of both complete and sparse aperture sampling with low and high levels of additive Gaussian noise. As required in any investigation of inverse problems, the experimental confirmation of algorithmic performance is emphasised, and two common imaging geometries relevant to NDE are selected for this purpose. In monostatic synthetic aperture imaging experiments involving side-drilled holes in an aluminium plate and test objects immersed in H2O, MaxEnt image reconstruction is demonstrated to be robust against grating-lobe and side-lobe formation, in addition to temporal bandwidth restriction. This enables efficient reconstruction of 2-D and 3-D images from small numbers of discrete samples in the spatial and frequency domains. The thesis concludes with a description of the design and testing of a novel polyvinylidene fluoride (PVDF) bistatic array transducer that offers advantages over conventional point-sampled arrays in terms of construction simplicity and signal-to-noise ratio. 
This ultra-sparse orthogonal array is the only one of its kind yet demonstrated, and was made possible by MaxEnt signal processing.
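For context (a textbook form of the method, not the thesis' exact complex-valued formulation), maximum-entropy reconstruction selects the image that maximises an entropy relative to a default model subject to a data-fit constraint:

```latex
% Classical MaxEnt image reconstruction (Skilling-Bryan style): f is the image,
% m the default image model mentioned above, A the forward (scattering) operator,
% d the data and sigma the noise level.
\hat{f} \;=\; \arg\max_{f \ge 0}\; S(f)
\quad\text{subject to}\quad
\chi^{2}(f) \;=\; \frac{\|Af-d\|_{2}^{2}}{\sigma^{2}} \;\le\; \chi^{2}_{\mathrm{aim}},
\qquad
S(f) \;=\; \sum_{j}\Big( f_{j}-m_{j}-f_{j}\ln\frac{f_{j}}{m_{j}} \Big).
```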
11

Bensmail, Halima. "Modèles de régularisation en discrimination et classification bayésienne." Paris 6, 1995. http://www.theses.fr/1995PA066528.

Abstract:
The theme of this work is the extensive exploitation of the parametrisation of Banfield and Raftery (1993), on the one hand for regularisation in discriminant analysis, and on the other hand for Bayesian clustering. Friedman (1989) proposed a regularisation technique (RDA) for discrimination under normality assumptions which is effective but has the drawback of producing opaque decision rules that are difficult to interpret. Using the spectral decomposition of the covariance matrices, we propose 14 discrimination models. In this way a complete modelling of the regularisation schemes is obtained, and the best model is selected as the one minimising the misclassification rate evaluated by cross-validation. In clustering, our contribution is to incorporate the spectral decomposition of the covariance matrices of the components of the Gaussian mixtures under consideration. As for discrimination, different models are proposed. Following the path traced by Diebolt and Robert (1994), we use Gibbs sampling to estimate the parameters and combine it with Bayes factors to find the best model and assess the number of components.
12

Van, der Rest John C. "Minimum description length, regularisation and multi-modal data." Thesis, Aston University, 1995. http://publications.aston.ac.uk/7986/.

Abstract:
Conventional feed forward Neural Networks have used the sum-of-squares cost function for training. A new cost function is presented here with a description length interpretation based on Rissanen's Minimum Description Length principle. It is a heuristic that has a rough interpretation as the number of data points fit by the model. Not concerned with finding optimal descriptions, the cost function prefers to form minimum descriptions in a naive way for computational convenience. The cost function is called the Naive Description Length cost function. Finding minimum description models will be shown to be closely related to the identification of clusters in the data. As a consequence the minimum of this cost function approximates the most probable mode of the data rather than the sum-of-squares cost function that approximates the mean. The new cost function is shown to provide information about the structure of the data. This is done by inspecting the dependence of the error to the amount of regularisation. This structure provides a method of selecting regularisation parameters as an alternative or supplement to Bayesian methods. The new cost function is tested on a number of multi-valued problems such as a simple inverse kinematics problem. It is also tested on a number of classification and regression problems. The mode-seeking property of this cost function is shown to improve prediction in time series problems. Description length principles are used in a similar fashion to derive a regulariser to control network complexity.
13

Kalivas, N. G. "Heat kernel regularisation and the stochastic quantisation of superfields." Thesis, University of Oxford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.376934.

14

Dunmur, Alan P. "Statistical mechanics, generalisation and regularisation of neural network models." Thesis, University of Edinburgh, 1994. http://hdl.handle.net/1842/13743.

Abstract:
There has been much recent interest in obtaining analytic results for rule learning using a neural network. In this thesis the performance of a simple neural network model learning a rule from noisy examples is calculated using methods of statistical mechanics. The free energy for the model is defined and order parameters that capture the statistical behaviour of the system are evaluated analytically. A weight decay term is used to regularise the effect of the noise added to the examples. The network's performance is estimated in terms of its ability to generalise to examples from outside the data set. The performance is studied for a linear network learning both linear and nonlinear rules. The analysis shows that a linear network learning a nonlinear rule is equivalent to a linear network learning a linear rule, with effective noise added to the training data and an effective gain on the linear rule. Examining the dependence of the performance measures on the number of examples, the noise added to the data and the weight decay parameter, it is possible to optimise the generalisation error by setting the weight decay parameter to be proportional to the noise level on the data. Hence, a weight decay is not only useful for reducing the effect of noisy data, but can also be used to improve the performance of a linear network learning a nonlinear rule. A generalisation of the standard weight decay term in the form of a general quadratic penalty term or regulariser, which is equivalent to a general Gaussian prior on the network's weight vector, is considered. In this case an average over a distribution of rule weight vectors is included in the calculation to remove any dependence on the exact realisation of the rule.
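A minimal sketch of the effect described above (my own illustration, not the thesis' statistical-mechanics calculation): a linear model trained with weight decay on noisy examples of a linear rule, where the best decay parameter grows with the noise on the data.

```python
import numpy as np

# Hedged sketch: linear "student" trained with weight decay (ridge penalty) on noisy
# examples of a linear "teacher" rule, illustrating that the optimal decay parameter
# tracks the noise level, as discussed in the abstract above.
rng = np.random.default_rng(0)
n_inputs, n_train, n_test = 50, 100, 5000
noise_var = 0.25

teacher = rng.standard_normal(n_inputs) / np.sqrt(n_inputs)        # the rule to learn
X = rng.standard_normal((n_train, n_inputs))
y = X @ teacher + np.sqrt(noise_var) * rng.standard_normal(n_train)

def train_with_weight_decay(X, y, lam):
    """Weight-decay (ridge) solution w = (X^T X + lam I)^(-1) X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

X_test = rng.standard_normal((n_test, n_inputs))
y_test = X_test @ teacher                                           # noiseless targets

for lam in [0.0, 0.1, 1.0, 10.0, 100.0]:
    w = train_with_weight_decay(X, y, lam)
    gen_error = np.mean((X_test @ w - y_test) ** 2)
    print(f"lambda = {lam:7.1f}   generalisation error = {gen_error:.4f}")
```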
15

Sanseigne, Laetitia. "Régularisation en identification de structures et fiabilité des modèles." Besançon, 1997. http://www.theses.fr/1997BESA2058.

Abstract:
The work developed in this thesis contributes to the construction of reliable mathematical models that describe the behaviour of mechanical structures with sufficient accuracy. The first part is devoted to the problems linked to the high dimensionality of the parametric identification problem. The large number of parameters to be corrected and the shortage of experimental data exploited for updating, which result in ill-conditioned estimation equations, do not guarantee the uniqueness and stability of the solution. Two so-called regularisation methods are proposed. The first enriches the knowledge space by introducing new experimental data into the updating procedure based on the sensitivity of the eigensolutions; these quantities are obtained from base-excitation tests on the structure, tests commonly performed in aeronautics. The aim of the second method is to localise the regions of the model containing the dominant modelling errors. This approach, formulated in the frequency domain, relies on the existence of excitation forces yielding responses that are either sensitising or non-sensitising for the erroneous zones of the model. The second part of the thesis concerns the assessment of the quality of the analytical models built. This analysis is justified insofar as decisions about the design, optimisation or safety of the system are made from simulations on a mathematical model, and these decisions can be affected by uncertainties in the model parameters and in the dynamic environment of the system. The reliability of the models is formulated in terms of the robustness of decisions based on the model predictions with respect to the probable uncertainties. The proposed procedure exploits convex models to characterise the uncertainties and determines the limits of validity of the model as a function of the intended use.
16

Casanove, Marie-José. "Déconvolution partielle et reconstruction d'image : un nouveau principe de régularisation." Toulouse 3, 1987. http://www.theses.fr/1987TOU30022.

Abstract:
It is shown that good conditioning of the problem can be ensured by explicitly limiting the resolution sought for the object to be reconstructed. The interactive procedure developed on the basis of this regularisation principle allows all the 'key' parameters of the problem to be estimated. The influence of the various parameters of the problem on the quality of the reconstruction is highlighted by results obtained from the partial deconvolution of simulated images. The process is then applied to experimental energy-loss spectra obtained by electron microscopy.
17

Renaud, Arnaud. "Algorithmes de régularisation et décomposition pour les problèmes variationnels monotones." Paris, ENMP, 1993. http://www.theses.fr/1993ENMP0444.

Abstract:
The regularisation operation, which appears in particular in the proximal algorithms introduced by Martinet, can be as complex as the original problem. Seeking to overcome this drawback, this thesis follows two main directions. The first introduces a notion of regularisation with respect to an operator. Like classical regularisation, this transformation yields the Dunn property and ensures the convergence of an explicit gradient algorithm. Thanks to this extension of the idea of regularisation, it is shown, in the case of a linear operator and in the case where the operator derives from a Lagrangian, that the regularisation operation can be decomposed. A measure of the conditioning of an operator, depending on the strong-monotonicity constant and the Dunn constant, is introduced; on this basis, the influence of regularisation on the convergence rate is quantified. The second direction generalises the auxiliary problem principle introduced by Guy Cohen to non-symmetric auxiliary operators. For finding the zeros of a maximal monotone operator, the convergence of a continuum of algorithms between explicit and implicit gradient methods is thereby established. The convergence conditions obtained rest on a relation between the geometry of the symmetric part of the auxiliary operator and the non-symmetric part of the operator under study. As for the classical proximal algorithm, if the inverse is Lipschitz the convergence rate is linear.
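For reference (standard notation, not quoted from the thesis), the proximal iteration of Martinet mentioned above reads:

```latex
% Proximal point iteration for finding a zero of a maximal monotone operator T:
% each step applies the resolvent (I + lambda_k T)^{-1}, whose evaluation can be
% as hard as the original problem; this drawback motivates the operator-dependent
% regularisation and decomposition developed in the thesis.
x_{k+1} \;=\; \big(I + \lambda_k T\big)^{-1} x_k , \qquad \lambda_k > 0 .
```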
18

Schenke, Andre. "Regularisation and Long-Time Behaviour of Random Systems." Bielefeld: Universitätsbibliothek Bielefeld, 2020. http://d-nb.info/1206592176/34.

19

Summersgill, Freya. "Numerical modelling of stiff clay cut slopes with nonlocal strain regularisation." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/34567.

Abstract:
The aim of this project is to investigate the stability of cut slopes in stiff clay. The findings are subsequently applied to model stabilisation with piles, used to remediate failure of existing slopes and stabilise potentially unstable slopes created by widening transport corridors. Stiff clay is a strain softening material, meaning that soil strength reduces as the material is strained, for example in the formation of a slip surface. In an excavated slope this can lead to a progressive, brittle slope failure. Simulation of strain softening behaviour is therefore an important aspect to model. The interaction of piles and stiff clay cut slopes is investigated using the Imperial College Geotechnics section's finite element program ICFEP. In designing a suitable layout of the finite element mesh, preliminary analyses found the two existing local strain softening models to be very dependent on the size and arrangement of elements. To mitigate this shortcoming, a nonlocal strain softening model was implemented in ICFEP. This model controls the development of strain by relating the surrounding strains to the calculation of strain at that point, using a weighting function. Three variations of the nonlocal formulation are evaluated in terms of their mesh dependence. A parametric study with simple shear and biaxial compression analyses evaluated the new parameters required by the nonlocal strain softening model. The nonlocal results demonstrated very low mesh dependence and a clear improvement on the local strain softening models. In order to examine the mesh dependence of the new model in a boundary value problem compared to the local strain softening approach, excavated slope analyses without piles were first performed. The slope was modelled in plane strain with coupled consolidation. These analyses also investigated other factors such as the impact of adopting a small strain stiffness material model on the development of the failure mechanism and the impact of the spatial variation of permeability on the time to failure. The final set of analyses constructed vertical stabilisation piles in the excavated slope, represented as either solid elements or one dimensional beam elements. The development of various failure mechanisms for stiff clay cuttings was found to be dependent on pile location, pile diameter and pile length. This project provides an insight into the constitutive model and boundary conditions required to study stabilisation piles in a stiff clay cutting. The nonlocal model performed very well to reduce mesh dependence, confirming the biaxial compression results. However, the use of coupled consolidation was found to cause further mesh dependence of the results.
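For orientation (a generic form of the idea, under the assumption that the thesis' variants are of this integral type), nonlocal regularisation replaces the local strain driving softening by a weighted average over a neighbourhood:

```latex
% Generic nonlocal strain averaging: the softening variable at a point x is driven
% by a weighted average of local strains eps over a neighbourhood whose size is set
% by an internal length l; w is the weighting function referred to above.
\bar{\varepsilon}(x) \;=\;
\frac{\displaystyle\int_{V} w\big(\|x-\xi\|\big)\,\varepsilon(\xi)\,dV(\xi)}
     {\displaystyle\int_{V} w\big(\|x-\xi\|\big)\,dV(\xi)},
\qquad
w(r) \;=\; \exp\!\Big(-\frac{r^{2}}{2\,l^{2}}\Big).
```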
20

Roques, Sylvie. "Problèmes inverses en traitement d'image : régularisation et résolution en imagerie bidimensionnelle." Toulouse 3, 1987. http://www.theses.fr/1987TOU30158.

Abstract:
The aim is to ensure the stability of an image reconstruction process. The inverse problem is reformulated by imposing a field constraint and subsequently performing a weighted spectral interpolation. A good estimate of the reconstruction error on the object is obtained. Several astronomical images are studied.
21

Wang, Xinfang. "Résolution inverse du problème d'ablation par utilisation d'une méthode de régularisation." Paris 6, 1992. http://www.theses.fr/1992PA066633.

Abstract:
This thesis is devoted to identifying the evolution of the position of the liquid-solid interface during a melting or solidification process, from measurements taken on the accessible faces of the solid material. No measurements are available in the liquid part, so it is impossible to access the evolution of this interface directly by solving the direct Stefan problem. Identifying this interface is an inverse problem requiring specific solution techniques. A method combining a sliding-horizon technique with a prediction of the function to be identified and a regularisation method, derived from the one proposed by Tikhonov, is used to solve the problem. The algorithm is developed for the 1D and 2D cases in space, and numerical results are obtained over a wide range of situations.
22

Brown, David Frederick. "Residential management strategies in formal and informal settlements, a case study in Trinidad and Tobago." Thesis, University of Sheffield, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287347.

23

Guyomarc'h, Frédéric. "Méthodes de Krylov : régularisation de la solution et accélération de la convergence." Rennes 1, 2000. http://www.theses.fr/2000REN10096.

Abstract:
Many scientific computing problems require the solution of linear systems. Recent, efficient algorithms for solving these systems are based on Krylov methods: the solution space is a Krylov space and the solution is defined by a Galerkin orthogonality condition. In the first part, the definition of the solution is modified for solving ill-conditioned systems, introducing a new regularisation technique based on polynomial filters. The strong point of this method is that the shape of the filters is not fixed by the method; it can be arbitrary and thus dictated by the specifics of the problem. In the second part, the solution space is modified to accelerate convergence. Two techniques are explored. The first allows a Krylov space used to solve a first equation to be recycled. The second, based on deflation techniques, seeks to attenuate the harmful effect of the smallest eigenvalues; moreover, it can be refined while solving several systems, until the impact of these small eigenvalues is completely eliminated. All these algorithms are implemented and tested on problems arising from image analysis and mechanics. This numerical validation confirms the theoretical results.
24

Le, Van Anh. "Étude des équations intégrales singulières pour fissures tridimensionnelles : contribution à la régularisation." Nantes, 1988. http://www.theses.fr/1988NANT2021.

Abstract:
Starting from the Kupradze potentials, the problem of a three-dimensional crack of complex geometry, in particular a surface-breaking crack, in a finite or infinite elastic solid is formulated as integral equations. The study is carried out in two stages: first without regularisation, in which case a singular integral in the sense of the principal value appears; then with regularisation, in which case the singular integral is shown to transform into an ordinary improper integral. In both cases the system of equations contains both surface and line integrals, with the particular feature that their kernels involve only the density, without its extension. It is shown that in the surface integrals the kernel remains invariant with respect to the choice of parametrisation of the crack. For surface-breaking cracks it is shown that a set of line integrals whose sum vanishes always appears. Finally, two numerical applications are presented: one concerning a circular crack in an infinite medium, the other a cylindrical bar with a semi-elliptical surface crack in a cross-section.
25

Freville, Elsa. "Sélection de variables, régularisation statistique. Application à la prévision du trafic routier." Paris 6, 2001. http://www.theses.fr/2001PA066097.

Abstract:
This work proposes a new approach to linear regression and analysis of variance when the main objective of these methods is to obtain forecasts. Analysing the quality of estimators of the forecasts, rather than of estimators of the historical observations, makes it possible to introduce the nature of the data to be predicted into the procedures commonly used to build the model or to compute the estimates. The results developed also provide a validation of the latest daily road-traffic forecasting model used by the Bison Futé software. In the first part, the notion of ill-posed problem used in numerical analysis is set against the notions of estimability and quadratic risk from statistics in order to analyse the least-squares estimator of a given set of forecasts. This leads us to retain the criterion of minimum quadratic risk, under which variable selection methods are examined in the second part and biased estimation methods in the third. The empirical variable selection criterion proposed is a combination of those of Mallows and Shibata, and it is validated when the regression model depends on infinitely many parameters. Regarding biased estimation methods, it is shown that the objective of obtaining forecasts with minimum quadratic risk can be used, under certain assumptions, to estimate the weighting coefficient of a regularisation method. Principal-component regression and partial least squares, which amount to a variable selection problem, are also studied. The selection criterion adapted to a forecasting problem can then be applied to obtain forecasts with quadratic risk lower than that of the least-squares estimator.
26

Grah, Joana Sarah. "Mathematical imaging tools in cancer research : from mitosis analysis to sparse regularisation." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/273243.

Abstract:
This dissertation deals with customised image analysis tools in cancer research. In the field of biomedical sciences, mathematical imaging has become crucial in order to account for advancements in technical equipment and data storage by sound mathematical methods that can process and analyse imaging data in an automated way. This thesis contributes to the development of such mathematically sound imaging models in four ways: (i) automated cell segmentation and tracking. In cancer drug development, time-lapse light microscopy experiments are conducted for performance validation. The aim is to monitor behaviour of cells in cultures that have previously been treated with chemotherapy drugs, since atypical duration and outcome of mitosis, the process of cell division, can be an indicator of successfully working drugs. As an imaging modality we focus on phase contrast microscopy, hence avoiding phototoxicity and influence on cell behaviour. As a drawback, the common halo- and shade-off effect impede image analysis. We present a novel workflow uniting both automated mitotic cell detection with the Hough transform and subsequent cell tracking by a tailor-made level-set method in order to obtain statistics on length of mitosis and cell fates. The proposed image analysis pipeline is deployed in a MATLAB software package called MitosisAnalyser. For the detection of mitotic cells we use the circular Hough transform. This concept is investigated further in the framework of image regularisation in the general context of imaging inverse problems, in which circular objects should be enhanced, (ii) exploiting sparsity of first-order derivatives in combination with the linear circular Hough transform operation. Furthermore, (iii) we present a new unified higher-order derivative-type regularisation functional enforcing sparsity of a vector field related to an image to be reconstructed using curl, divergence and shear operators. The model is able to interpolate between well-known regularisers such as total generalised variation and infimal convolution total variation. Finally, (iv) we demonstrate how we can learn sparsity promoting parametrised regularisers via quotient minimisation, which can be motivated by generalised Eigenproblems. Learning approaches have recently become very popular in the field of inverse problems. However, the majority aims at fitting models to favourable training data, whereas we incorporate knowledge about both fit and misfit data. We present results resembling behaviour of well-established derivative-based sparse regularisers, introduce novel families of non-derivative-based regularisers and extend this framework to classification problems.
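A minimal sketch of the detection step described above (an illustration in Python with scikit-image rather than the MATLAB MitosisAnalyser package; the function and parameter values are my own):

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def detect_round_cells(frame, radii=np.arange(8, 20), n_cells=10):
    """Detect roughly circular (mitotic) cells in a 2-D grayscale frame in [0, 1]
    with the circular Hough transform, returning (x, y, radius) candidates."""
    edges = canny(frame, sigma=2.0)                   # edge map of the phase-contrast frame
    accumulator = hough_circle(edges, radii)          # one Hough space per candidate radius
    _, cx, cy, found_radii = hough_circle_peaks(
        accumulator, radii, total_num_peaks=n_cells)  # strongest circle candidates
    return list(zip(cx, cy, found_radii))
```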
27

Holt, William Travis. "Mis-specification tests for neural regression models : applications in business and finance." Thesis, London Business School (University of London), 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322870.

28

Shahsavand, Akbar. "Optimal and adaptive radial basis function neural networks." Thesis, University of Surrey, 2000. http://epubs.surrey.ac.uk/844452/.

Abstract:
The optimisation and adaptation of single hidden layer feed-forward neural networks employing radial basis activation functions (RBFNs) was investigated. Previous work on RBFNs has mainly focused on problems with large data sets. The training algorithms developed with large data sets prove unreliable for problems with a small number of observations, a situation frequently encountered in process engineering. The primary objective of this study was the development of efficient and reliable learning algorithms for the training of RBFNs with small and noisy data sets. It was demonstrated that regularisation is essential in order to filter out the noise and prevent over-fitting. The selection of the appropriate level of regularisation, lambda*, with small data sets presents a major challenge. The leave-one-out cross validation technique was considered as a potential means for automatic selection of lambda*. The computational burden of selecting lambda* was significantly reduced by a novel application of the generalised singular value decomposition. The exact solution of the multivariate linear regularisation problem can be represented as a single hidden layer neural network, the Regularisation Network, with one neurone for each distinct exemplar. A new formula was developed for automatic selection of the regularisation level for a Regularisation Network with given non-linearities. It was shown that the performance of a Regularisation Network is critically dependent on the non-linear parameters of the activation function employed, a point which has received surprisingly little attention. It was demonstrated that a measure of the effective degrees of freedom df(lambda*, alpha) of a Regularisation Network can be used to select the appropriate width of the local radial basis functions, alpha, based on the data alone. The one-to-one correspondence between the number of exemplars and the number of hidden neurones of a Regularisation Network may prove computationally prohibitive. The remedy is to use a network with a smaller number of neurones, the Generalised Radial Basis Function Network (GRBFN). The training of a GRBFN ultimately settles down to a large-scale non-linear optimisation problem. A novel sequential back-fit algorithm was developed for training the GRBFNs, which enabled the optimisation to proceed one neurone at a time. The new algorithm was tested with very promising results and its application to a simple chemical engineering process was demonstrated. In some applications the overall response is composed of sharp localised features superimposed on a gently varying global background. Existing multivariate regression techniques as well as conventional neural networks are aimed at filtering the noise and recovering the overall response. An initial attempt was made at developing an Adaptive GRBFN to separate the local and global features. An efficient algorithm was developed simply by insisting that all the activation functions which are responsible for capturing the global trend should lie in the null space of the differential operator generating the activation function of the kernel-based neurones. It was demonstrated that the proposed algorithm performs extremely well in the absence of strong global input interactions.
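A minimal sketch of the model-selection idea discussed above (my own illustration; the thesis uses a GSVD-based implementation rather than this direct formula): for linear smoothers such as a regularisation network, the leave-one-out error has a closed form, so lambda* can be chosen without refitting the model n times.

```python
import numpy as np

def loo_score(K, y, lam):
    """Closed-form leave-one-out error for a regularisation network / kernel ridge model.
    K: (n, n) Gram matrix of the radial basis functions, y: targets, lam: regularisation level.
    Uses e_i^loo = (y_i - yhat_i) / (1 - H_ii) with hat matrix H = K (K + lam I)^(-1)."""
    n = len(y)
    H = K @ np.linalg.inv(K + lam * np.eye(n))
    residuals = (y - H @ y) / (1.0 - np.diag(H))
    return np.mean(residuals ** 2)

def select_lambda(K, y, grid=np.logspace(-6, 2, 40)):
    """Pick the regularisation level lambda* minimising the leave-one-out error."""
    return min(grid, key=lambda lam: loo_score(K, y, lam))
```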
29

Bell, Simon J. G. "Numerical techniques for smooth transformation and regularisation of time-varying linear descriptor systems." Thesis, University of Reading, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284311.

30

Dabare, Rukshima. "Classification of overlapped data with improved regularisation techniques using Fuzzy Deep Neural Networks." PhD thesis, Murdoch University, 2020. https://researchrepository.murdoch.edu.au/id/eprint/59203/.

Abstract:
This thesis investigates methods to enhance the performance of a Deep Neural Network (DNN) classifier on numerical data through the introduction of improved regularisation techniques. Three significant factors are considered for enhancement: overlapped data in balanced and imbalanced data environments, and the issue of limited variation (invariance) in the training data and overfitting. Many classification algorithms, such as DNNs, classify a data item as belonging to either the true or the false class in binary classification. However, in some real-world applications, data items may belong to both classes to a certain degree. Data with similar characteristics that appear in the feature space with different degrees of belonging are known as overlapped data. The overlapping class issue is one of the significant factors that lead to poor classification performance. In practice, there are two ways of handling overlapped data: removing the overlapped instances, or separating the overlapped regions and classifying them separately. Both practices have drawbacks. Removing overlapped instances is not the best option, as it may remove essential data items that describe the dataset, especially in an imbalanced dataset; classifying the overlapped and non-overlapped regions separately is time-consuming. Hence, other techniques for handling overlapped data are needed. Furthermore, a traditional classifier does not consider the underlying overlapping behaviour of the data attributes. This behaviour can be addressed with fuzzy concepts: when a data item belongs to different classes to different degrees, that belonging can be modelled using fuzzy concepts to classify the classes. Therefore, in this research, an overlapped data handling technique named FuzzyDNN, using Fuzzy C-Means, fuzzy membership grades and cluster centre values, is proposed. The results indicate that FuzzyDNN is capable of addressing the underlying behaviour of the overlapped data when performing classification. FuzzyDNN improves the classification accuracy by 8.89%, 0.88% and 1.24% over the next highest performing technique for the three datasets tested in this thesis. On the other hand, DNNs tend to overfit due to their ability to extract many features from a given set of data. One of the main problems with the generalisation capability of a DNN classifier arises when a small amount of training data with limited variation is used. It is therefore vital to present training data covering different variations of the domain to ensure that the classifier generalises well: if the pattern variations in the training dataset are small, good generalisation cannot be expected. Hence, in this research, a technique to improve the generalisation capability of DNNs is proposed to address this issue. The techniques used to improve generalisation are known as regularisation techniques, and various regularisation techniques are in use to handle the different issues that can affect the generalisation capability of a DNN. The proposed technique augments the numerical dataset, enhancing the training data by introducing variations into the training of the classifier.
In this thesis, the proposed data augmentation technique, FDA, uses fuzzy concepts. The experimental results indicate that the FDA can enhance the training dataset to help the DNN classifier generalise well to unseen data, and that it acts as a proper regularisation technique when compared with some commonly used regularisation techniques. Finally, the classification of overlapped data in an imbalanced dataset and the generalisation capability of the classifier are considered concurrently. An imbalanced binary dataset is a dataset in which instances of one class greatly outnumber those of the other class. In such scenarios, traditional classifiers are biased towards the majority class, and the performance of a classifier degrades heavily when overlapped data also appear in the imbalanced dataset. Given that limited variation in the training data for the DNN can occur at the same time, since the available data may be scarce, a suite of techniques working together is needed to address the three issues concurrently. Further, only a limited amount of work concentrates on numerical data classification with DNNs for imbalanced overlapped data. Therefore, in this research, a model is proposed to handle overlapped data in an imbalanced dataset using the proposed data augmentation technique to improve the generalisation ability of the DNN classification model. All the algorithms proposed in this thesis were implemented using MATLAB and Python (in an Anaconda environment).
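A minimal sketch of the fuzzy ingredient described above (my own compact implementation, not the thesis' FuzzyDNN pipeline): Fuzzy C-Means yields membership grades and cluster centres of the kind appended to the raw features before classification.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Return (cluster centres, membership grades) for data X of shape (n_samples, n_features).
    Membership grades quantify how strongly each sample belongs to each (overlapping) cluster."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                  # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))      # inverse-distance memberships
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U
```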
31

Mesmar, Sultan. "On the use of viscosity as a regularisation technique for hardening/softening constitutive models." Thesis, University of Sheffield, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341798.

32

Rajack, Robin Michael. "Tenurial security, property freedoms, dwelling improvements and squatter regularisation - a case study of Trinidad." Thesis, University of Cambridge, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.627275.

33

Cibas, Tautvydas. "Contrôle de la complexité dans les réseaux de neurones : régularisation et sélection de caractéristiques." Paris 11, 1996. http://www.theses.fr/1996PA112435.

Abstract:
One of the fundamental problems of supervised learning from examples is to adapt the complexity of the learned functions to the training data. In this thesis, connectionist models are studied, with a focus on simple perceptrons and multilayer perceptrons. The results presented show that (i) complexity must be controlled in order to achieve good generalisation performance, particularly for small samples; (ii) introducing prior information on the weight distribution allows the complexity of the network to be controlled by weight pruning, and also enables feature selection; (iii) different regularisation techniques such as weight penalisation and early stopping allow complexity control, although structural methods may prove insufficient whereas penalisation-type methods adapt the complexity of a network very effectively; and (iv) the complexity of the classification rule realised by the neural network evolves during training.
34

Zucknick, Manuela. "Multivariate analysis of tumour gene expression profiles applying regularisation and Bayesian variable selection techniques." Thesis, Imperial College London, 2008. http://hdl.handle.net/10044/1/4397.

Abstract:
High-throughput microarray technology is here to stay, e.g. in oncology for tumour classification and gene expression profiling to predict cancer pathology and clinical outcome. The global objective of this thesis is to investigate multivariate methods that are suitable for this task. After introducing the problem and the biological background, an overview of multivariate regularisation methods is given in Chapter 3 and the binary classification problem is outlined (Chapter 4). The focus of applications presented in Chapters 5 to 7 is on sparse binary classifiers that are both parsimonious and interpretable. Particular emphasis is on sparse penalised likelihood and Bayesian variable selection models, all in the context of logistic regression. The thesis concludes with a final discussion chapter. The variable selection problem is particularly challenging here, since the number of variables is much larger than the sample size, which results in an ill-conditioned problem with many equally good solutions. Thus, one open problem is the stability of gene expression profiles. In a resampling study, various characteristics including stability are compared between a variety of classifiers applied to five gene expression data sets and validated on two independent data sets. Bayesian variable selection provides an alternative to resampling for estimating the uncertainty in the selection of genes. MCMC methods are used for model space exploration, but because of the high dimensionality standard algorithms are computationally expensive and/or result in poor Markov chain mixing. A novel MCMC algorithm is presented that uses the dependence structure between input variables for finding blocks of variables to be updated together. This drastically improves mixing while keeping the computational burden acceptable. Several algorithms are compared in a simulation study. In an ovarian cancer application in Chapter 7, the best-performing MCMC algorithms are combined with parallel tempering and compared with an alternative method.
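For orientation (a standard formulation, not necessarily the exact priors used in the thesis), Bayesian variable selection for such p >> n classification problems introduces binary inclusion indicators and explores their posterior by MCMC:

```latex
% Illustrative spike-and-slab setup for logistic regression: gamma_j in {0,1} flags
% inclusion of gene j, beta_gamma are the coefficients of the included genes, and
% pi is the prior inclusion probability; MCMC samples the model posterior p(gamma | y).
y_i \mid \beta,\gamma \;\sim\; \mathrm{Bernoulli}\!\big(\mathrm{logit}^{-1}(x_{i,\gamma}^{\top}\beta_{\gamma})\big),
\qquad
p(\gamma \mid y) \;\propto\; p(y \mid \gamma)\,\prod_{j=1}^{p}\pi^{\gamma_j}(1-\pi)^{1-\gamma_j}.
```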
35

Torres Bobadilla, William Javier. "Generalised Unitarity, Integrand Decomposition, and Hidden properties of QCD Scattering Amplitudes in Dimensional Regularisation." Doctoral thesis, Università degli studi di Padova, 2017. http://hdl.handle.net/11577/3423251.

Abstract:
In this thesis, we present new developments for the analytic calculation of tree- and multi-loop level amplitudes. Similarly, we study and extend their analytic properties. We propose a Four-dimensional formulation (FDF) equivalent to the four-dimensional helicity scheme (FDH). In our formulation, particles propagating inside the loop are represented by four dimensional massive internal states regulating the divergences. We provide explicit four-dimensional representations of the polarisation and helicity states of the particles propagating in the loop. Within FDF, we use integrand reduction and four dimensional unitarity to perform analytic computations of one-loop scattering amplitudes. The calculation of tree level scattering amplitude, in this framework, allows for a simultaneous computation of cut-constructible and rational parts of one-loop scattering amplitudes. We present a set of non-trivial examples, showing that FDF scheme is suitable for computing important $2\to2,3,4$ partonic amplitudes at one-loop level. We start by considering two gluons production by quark anti-quark annihilation. Then, the (up to four) gluon production, $gg\to ng$ with $n=2,3,4$. And finally, the Higgs and (up to three) gluons production via gluon fusion, $gg\to ng\,H$ with $n=1,2,3$, in the heavy top mass limit. We also investigate, by following a diagrammatic approach, the role of colour-kinematics (C/K) duality of off-shell diagrams in gauge theories coupled to matter. We study the behaviour of C/K-duality for theories in four- and in $d$-dimensions. The latter follows the prescriptions given by FDF. We show that the Jacobi relations for the kinematic numerators of off-shell diagrams, built with Feynman rules in axial gauge, reduce to a C/K-violating term due to the contributions of sub-graphs only. We discuss the role of the off-shell decomposition in the direct construction of higher-multiplicity numerators satisfying C/K-duality. We present the QCD process $gg\to q\bar{q}g$. An analogous study, within FDF, is carried out for $d$-dimensionally regulated amplitudes. The computation of dual numerators generates, as byproduct, relations between tree-level amplitudes with different orderings. These relations turn to be the Bern-Carrasco-Johansson (BCJ) identities for four- and $d$-dimensionally regulated amplitudes. We combine BCJ identities and integrand reduction methods to establish relations between one-loop integral coefficients for dimensionally regulated QCD amplitudes. We also elaborate on the radiative behaviour of tree-level scattering amplitudes in the soft regime. We show that the subleading soft term in single-gluon emission of quark-gluon amplitudes in QCD is controlled by differential operators, whose universal form can be derived from both Britto-Cachazo-Feng-Witten recursive relations and gauge invariance, as it was shown to hold for graviton and gluon scattering. In the last part of the thesis, we describe the main features of the multi-loop calculations. We briefly describe the adaptive integrand decomposition (AID), a variant of the standard integrand reduction algorithm. AID exploits the decomposition of the space-time dimension in parallel and orthogonal subspaces. We focus, in particular, on the calculation of $2\to2,3$ partonic amplitudes at two loop-level
APA, Harvard, Vancouver, ISO, and other styles
36

Besnard, Valérie. "Cesarienne chez la femme diabetique : choix d'une technique d'anesthesie, prise en charge de la regularisation glycemique." Lille 2, 1989. http://www.theses.fr/1989LIL2M250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Vivero, Oskar. "Estimation of long-range dependence." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/estimation-of-longrange-dependence(65565876-4ec6-44b3-8181-51b13dca309c).html.

Full text
Abstract:
A set of observations from a random process whose correlations decay more slowly than an exponential rate is regarded as long-range dependent. This phenomenon has stimulated great interest in the scientific community as it appears in a wide range of areas of knowledge; for example, it has been observed in data pertaining to electronics, econometrics, hydrology and biomedical signals. There exist several estimation methods for finding model parameters that help explain a set of observations exhibiting long-range dependence. Among these methods, maximum likelihood is attractive, given its desirable statistical properties such as asymptotic consistency and efficiency. However, its computational complexity makes the implementation of maximum likelihood prohibitive. This thesis presents a group of computationally efficient estimators based on the maximum likelihood framework. The thesis consists of two main parts. The first part is devoted to developing a computationally efficient alternative to the maximum likelihood estimate. This alternative is based on the circulant embedding concept and is shown to maintain the desirable statistical properties of maximum likelihood. Interesting results are obtained by analysing the circulant embedding estimate; in particular, this thesis shows that the maximum likelihood based methods are ill-conditioned: the estimators' performance deteriorates significantly when the set of observations is corrupted by errors. The second part of this thesis focuses on developing computationally efficient estimators with improved performance in the presence of errors in the observations.
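As a purely illustrative aside (not reproduced from the thesis), the sketch below shows the circulant-embedding trick referred to above in its most elementary use: the Toeplitz autocovariance matrix of a long-range-dependent process, here fractional Gaussian noise, is embedded in a circulant matrix whose eigenvalues are obtained with a single FFT, giving an exact O(n log n) sampler. The Hurst index, sample length and helper names are assumptions made for illustration only.

```python
import numpy as np

def fgn_autocov(n, H):
    """Autocovariance of unit-variance fractional Gaussian noise at lags 0..n-1."""
    k = np.arange(n, dtype=float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def fgn_sample(n, H, rng=None):
    """Draw one exact fGn sample of length n via circulant embedding.

    The n x n Toeplitz covariance matrix is embedded in a circulant matrix of
    size m = 2(n-1); its eigenvalues are a single FFT of the first row, so the
    cost is O(n log n) rather than the O(n^3) of a dense Cholesky factorisation.
    """
    rng = np.random.default_rng() if rng is None else rng
    r = fgn_autocov(n, H)
    first_row = np.concatenate([r, r[-2:0:-1]])    # [r0,...,r_{n-1},r_{n-2},...,r1]
    lam = np.fft.fft(first_row).real               # eigenvalues of the circulant matrix
    if np.any(lam < -1e-10):
        raise ValueError("embedding is not non-negative definite")
    lam = np.clip(lam, 0.0, None)
    m = len(first_row)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    w = np.fft.fft(np.sqrt(lam) * z) / np.sqrt(m)
    return w.real[:n]                              # exactly N(0, Toeplitz(r))

# e.g. a strongly long-range-dependent sample with Hurst index 0.85
x = fgn_sample(4096, H=0.85)
```

The same embedding underlies fast approximations of the Gaussian likelihood, since products with the covariance matrix and approximations of its determinant can be handled through the circulant eigenvalues instead of dense linear algebra.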
APA, Harvard, Vancouver, ISO, and other styles
38

Spinelli, Patricia. "Un coup d'oeil non standard sur un probleme de zermelo : regularisation de systemes singuliers et illustration graphique." Nice, 1986. http://www.theses.fr/1986NICE4079.

Full text
Abstract:
We are interested in the minimum-time control problem for a non-linear controlled system in which the control parameters appear linearly and the vector fields are of dimension n. An obvious way to regularise this problem, when the number of controls p is strictly smaller than n, is to add (n-p) vector fields independent of the previous ones, to perturb the system with a parameter h, and then to study the limit as h tends to zero. This technique runs into a serious difficulty: the Hamiltonian system associated with the problem when h is zero is no longer a continuous vector field. Non-standard analysis makes it possible to keep h infinitely small and strictly positive, that is, to replace a standard system with singularities by a non-standard system without singularities, for which all the basic theorems work well. The shadows of the solutions of this system are the "solutions" of the Hamiltonian system corresponding to h=0. A second part is devoted to the use of microcomputers: for a two-dimensional problem, determining the "synthesis" amounts to the qualitative analysis of the four-dimensional Hamiltonian field. We try to show that a good "general" knowledge of the geometry of these fields makes it possible, interactively, to solve most problems fairly quickly.
APA, Harvard, Vancouver, ISO, and other styles
39

LOBEL, PIERRE, and Michel Barlaud. "Problemes de diffraction inverse : reconstruction d'image et optimisation avec regularisation par preservation des discontinuites - application a l'imagerie microonde." Nice, 1996. http://www.theses.fr/1996NICE4989.

Full text
Abstract:
This thesis is devoted to image reconstruction in microwave tomography and belongs to the more general framework of inverse scattering problems. Because of its non-linear nature and its ill-posed character, the inverse scattering problem is particularly complex. It leads to the minimisation of a non-linear system for which, over the past fifteen years, various quantitative iterative solution methods have been proposed. We present a solution method based on a conjugate-gradient (CG) descent algorithm. The method relies on the minimisation of a single non-linear functional obtained by applying the method of moments to an integral representation of the electric field. Convincing tests carried out on synthetic and experimental data have validated this algorithm. A study of the influence on the solution of an initial estimate computed by back-projection has also been carried out. In order to reconstruct images from strongly noisy data or with high contrast values, regularisation techniques become necessary. We have therefore developed a non-linear regularisation method based on the theory of Markov random fields. The constraint employed consists in smoothing the homogeneous regions of the image while preserving its discontinuities. This technique has been applied to our CG algorithm as well as to a Newton-Kantorovich-type algorithm. We have thus observed a notable improvement of the images obtained from strongly noisy data, whether synthetic or experimental.
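The inverse-scattering solver itself is not reproduced here; as a rough illustration of the kind of conjugate-gradient descent the abstract refers to, the sketch below implements a generic Polak-Ribiere nonlinear conjugate-gradient loop with a backtracking line search and applies it to a toy least-squares functional. The functional, parameters and helper names are assumptions for illustration only.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, iters=200, tol=1e-8):
    """Minimise a smooth functional f by Polak-Ribiere nonlinear conjugate gradient
    with a simple backtracking (Armijo) line search."""
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        step, fx = 1.0, f(x)
        while f(x + step * d) > fx + 1e-4 * step * (g @ d):   # backtracking line search
            step *= 0.5
            if step < 1e-14:
                return x
        x_new = x + step * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))         # Polak-Ribiere+ update
        d = -g_new + beta * d
        if d @ g_new >= 0:                                     # safeguard: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

# toy usage: a small ill-conditioned least-squares problem ||A x - b||^2
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10)) @ np.diag(np.logspace(0, -3, 10))
b = rng.standard_normal(30)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)
x_hat = nonlinear_cg(f, grad, np.zeros(10))
```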
APA, Harvard, Vancouver, ISO, and other styles
40

DELAMARRE, LEGRAND BRIGITTE. "Regularisation medicamenteuse de la fibrillation auriculaire permanente, efficacite respective de la lidoflazine per os et du flecainide intra-veineux." Angers, 1990. http://www.theses.fr/1990ANGE1017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Papoutsellis, Evangelos. "First-order gradient regularisation methods for image restoration : reconstruction of tomographic images with thin structures and denoising piecewise affine images." Thesis, University of Cambridge, 2016. https://www.repository.cam.ac.uk/handle/1810/256216.

Full text
Abstract:
The focus of this thesis is variational image restoration techniques that involve novel non-smooth first-order gradient regularisers: Total Variation (TV) regularisation in image and data space for reconstruction of thin structures from PET data and regularisers given by an infimal-convolution of TV and $L^p$ seminorms for denoising images with piecewise affine structures. In the first part of this thesis, we present a novel variational model for PET reconstruction. During a PET scan, we encounter two different spaces: the sinogram space that consists of all the PET data collected from the detectors and the image space where the reconstruction of the unknown density is finally obtained. Unlike most of the state of the art reconstruction methods in which an appropriate regulariser is designed in the image space only, we introduce a new variational method incorporating regularisation in image and sinogram space. In particular, the corresponding minimisation problem is formed by a total variational regularisation on both the sinogram and the image and with a suitable weighted $L^2$ fidelity term, which serves as an approximation to the Poisson noise model for PET. We establish the well-posedness of this new model for functions of Bounded Variation (BV) and perform an error analysis through the notion of the Bregman distance. We examine analytically how TV regularisation on the sinogram affects the reconstructed image especially the boundaries of objects in the image. This analysis motivates the use of a combined regularisation principally for reconstructing images with thin structures. In the second part of this thesis we propose a first-order regulariser that is a combination of the total variation and $L^p$ seminorms with $1 < p \le \infty$. A well-posedness analysis is presented and a detailed study of the one dimensional model is performed by computing exact solutions for simple functions such as the step function and a piecewise affine function, for the regulariser with $p = 2$ and $p = 1$. We derive necessary and sufficient conditions for a pair in $BV \times L^p$ to be a solution for our proposed model and determine the structure of solutions dependent on the value of $p$. In the case $p = 2$, we show that the regulariser is equivalent to the Huber-type variant of total variation regularisation. Moreover, there is a certain class of one dimensional data functions for which the regularised solutions are equivalent to high-order regularisers such as the state of the art total generalised variation (TGV) model. The key assets of our regulariser are the elimination of the staircasing effect - a well-known disadvantage of total variation regularisation - the capability of obtaining piecewise affine structures for $p = 1$ and qualitatively comparable results to TGV. In addition, our first-order $TVL^p$ regulariser is capable of preserving spike-like structures that TGV is forced to smooth. The numerical solution of the proposed first-order model is in general computationally more efficient compared to high-order approaches.
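Since the abstract notes that for p = 2 the proposed regulariser is equivalent to the Huber-type variant of total variation, a small numerical illustration is possible. The sketch below is not the thesis's algorithm; it is a minimal one-dimensional Huber-TV denoiser solved by plain gradient descent (the Huber functional is differentiable, so no proximal machinery is needed), with illustrative values for the regularisation weight alpha and the Huber threshold gamma.

```python
import numpy as np

def huber_tv_denoise(f, alpha=1.0, gamma=0.05, iters=500):
    """Denoise a 1D signal by gradient descent on the smooth (Huber) variant of TV:
    min_u 0.5*||u - f||^2 + alpha * sum_i h_gamma(u_{i+1} - u_i),
    where h_gamma is quadratic near zero and linear in the tails."""
    u = f.astype(float).copy()
    step = 1.0 / (1.0 + 4.0 * alpha / gamma)     # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        d = np.diff(u)                            # forward differences u_{i+1} - u_i
        hprime = np.clip(d / gamma, -1.0, 1.0)    # derivative of the Huber function
        grad = u - f
        grad[:-1] -= alpha * hprime               # -D^T h'(Du): contribution at index i
        grad[1:] += alpha * hprime                # contribution at index i+1
        u -= step * grad
    return u

# e.g. a noisy step: a small gamma keeps the jump sharp while smoothing the flat regions
f = np.concatenate([np.zeros(100), np.ones(100)]) \
    + 0.1 * np.random.default_rng(0).standard_normal(200)
u = huber_tv_denoise(f, alpha=2.0, gamma=0.05)
```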
APA, Harvard, Vancouver, ISO, and other styles
42

Bimha, Primrose Zvikomborero Joylyn. "Legalising the illegal: an assessment of the Dispensation of Zimbabweans Project (DZP) and Zimbabwe Special Dispensation Permit (ZSP) regularisation projects." Master's thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/25184.

Full text
Abstract:
Since the late 1990s economic insecurity and political uncertainty have continued to worsen in Zimbabwe. Zimbabwe's economy plunged into deep crisis in the early 2000s owing to failed fiscal policies and the highly criticised 'Fast-track' land reform program. Election-related violence between 2002 and 2013 resulted in a state of insecurity, leading to an exodus of Zimbabwean migrants. An unprecedented influx of Zimbabwean migrants to South Africa (SA) led to high levels of illegal migration and the clogging up of the asylum seeker management system in the early 2000s. In 2009, SA launched the Dispensation of Zimbabweans Project (DZP) in order to achieve four main objectives: to reduce pressure on the asylum management system, to curb the deportation of illegal Zimbabwean migrants, to regularise Zimbabweans who were residing in SA illegally, and to provide amnesty to Zimbabweans who had obtained South African documents fraudulently. The DZP was considered a success, and a successor permit, the Zimbabwe Special Dispensation Permit (ZSP), was launched in 2014 to allow former DZP applicants to extend their stay in South Africa. Using government publications, parliamentary debates, non-governmental organization (NGO) and media reports, it was found that the DZP reduced pressure on the asylum seeker management system while deportation figures dropped significantly. It was also found that less than 6% (250 000) of an estimated 1.5 million undocumented migrants were documented during the regularisation processes. The DZP and ZSP projects complemented South Africa's highly restrictive approach to migration management and jealous safeguarding of access to permanent residence and citizenship. The regularisation projects also enabled the South African government to show sympathy towards Zimbabweans who were forced to migrate to South Africa, by recognising that they could not return home as long as the situation back home remained unchanged.
APA, Harvard, Vancouver, ISO, and other styles
43

THEOLADE, RODOLPHE. "Critere de choix d'une procedure de regularisation de la fibrillation auriculaire chronique et recours au choc electrique endocavitaire a haute energie." Reims, 1991. http://www.theses.fr/1991REIMM089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

ROCHA, E. J. DA. "SURVIVAL STRATEGIES, INTEGRATION, REGULARISATION AND TRANSNATIONAL ACTIVITIES OF DOCUMENTED AND UNDOCUMENTED BRAZILIAN IMMIGRANTS IN EUROPE: A COMPARATIVE STUDY IN LONDON MILAN." Doctoral thesis, Università degli Studi di Milano, 2012. http://hdl.handle.net/2434/174186.

Full text
Abstract:
This research has two objectives. The first is to explore the ways in which documented and undocumented Brazilian migrants live and work in London and Milan, analysing, from the migrants' own accounts, the full range of their entrance, survival, integration and legalisation strategies. The second is to analyse the role of transnational activities among documented and undocumented Brazilians living in two different European cities, Milan and London. Few comparative studies in Europe have focused on groups coming from the same place of origin and residing in different cities; it is this international comparison that makes this research original. Forty in-depth interviews were collected with documented and undocumented Brazilian migrants, twenty in Milan and twenty in London. Starting from the point of view of the migrant, this study provides information on the everyday experience of documented and undocumented Brazilian immigrants in London and Milan and analyses their transnational political, economic and socio-cultural activities, comparing how documented and undocumented migrants live, work and survive. The dissertation considers the various social, cultural, political and economic activities of documented and undocumented Brazilians in the receiving context and analyses this experience as part of the process of surviving, regularising and integrating. Results from this thesis show that there are differences between documented and undocumented Brazilian migrants in both London and Milan, and that undocumented migrants in both countries make use of fake documents as a strategy to obtain work and health care.
APA, Harvard, Vancouver, ISO, and other styles
45

Pilo, Francesca. "La régularisation des favelas par l’électricité : un service entre Etat, marché et citoyenneté." Thesis, Paris Est, 2015. http://www.theses.fr/2015PEST1089.

Full text
Abstract:
With the country's hosting of a number of major international events having refocused attention on security issues, the government of the state of Rio de Janeiro introduced a new public security policy at the end of 2008 to regain territorial control over many of the city's favelas through the use of Pacifying Police Units (UPP). This programme has led to a partial revamp of the public authorities' favela integration project. Since the 1990s, integration had mainly been envisaged in terms of development, through improving infrastructure and access roads and, to a lesser extent, land and urban regularisation. Now, however, the authorities plan to promote 'integration through the regularisation' of market and administrative relationships, involving various stakeholders from both the public and private spheres. This thesis examines the integration of these favelas from a relatively unexplored perspective: that of regularisation through the electricity network, the aim of which is to transform 'illegal users' into new 'registered customers' connected to the distribution company by a meter. In particular, we highlight the link between the public and private approaches at work in projects to regularise the electricity service in two favelas, Santa Marta and Cantagalo. To this end, our analysis studies the regularisation of the electricity service through its tools, whether socio-technical (installing meters and rehabilitating the network), commercial (billing collection methods) or aimed at controlling electricity consumption, and through the ways in which customers have taken ownership of them. The research shows that regularising the electricity service reshapes the favelados' relationship with the state and the market, but this reshaping has a number of limitations: it is difficult to anchor the contractual customer relationship in trust; activities to control consumption advocate bringing behaviours 'up to standard' rather than supporting actual use; and service regularisation tends to reproduce socio-economic inequalities rather than rise above them, while these inequalities also gradually become less political. The aim of this thesis is thus to help improve understanding of the methods being used to integrate the favelas in the context of the growing neo-liberalisation of urban policy.
APA, Harvard, Vancouver, ISO, and other styles
46

Keith, Tūreiti. "A General-Purpose GPU Reservoir Computer." Thesis, University of Canterbury. Department of Electrical & Computer Engineering, 2013. http://hdl.handle.net/10092/7617.

Full text
Abstract:
The reservoir computer comprises a reservoir of possibly non-linear, possibly chaotic dynamics. By perturbing and taking outputs from this reservoir, its dynamics may be harnessed to compute complex problems at "the edge of chaos". One of the first forms of reservoir computer, the Echo State Network (ESN), is a form of artificial neural network that builds its reservoir from a large and sparsely connected recurrent neural network (RNN). The ESN was initially introduced as an innovative solution for training RNNs, which had, up until that point, been a notoriously difficult task. The innovation of the ESN is that, rather than training the RNN weights, only the output is trained. If this output is assumed to be linear, then linear regression may be used. This work presents an effort to implement the Echo State Network and an offline linear regression training method based on Tikhonov regularisation. The implementation targeted the general-purpose graphics processing unit (GPU or GPGPU). The behaviour of the implementation was examined by comparing it with a central processing unit (CPU) implementation, and by assessing its performance against several studied learning problems. These assessments were performed using all 4 cores of the Intel i7-980 CPU and an Nvidia GTX480. When compared with the CPU implementation, the GPU ESN implementation demonstrated a speed-up starting from a reservoir size of between 512 and 1,024. A maximum speed-up of approximately 6 was observed at the largest reservoir size tested (2,048). The Tikhonov regularisation (TR) implementation was also compared with a CPU implementation. Unlike the ESN execution, the GPU TR implementation was largely slower than the CPU implementation. Speed-ups were observed at the largest reservoir and state-history sizes, the largest of which was 2.6813. The learning behaviour of the GPU ESN was tested on three problems: a sinusoid, a Mackey-Glass time-series, and a multiple superimposed oscillator (MSO). The normalised root-mean-squared errors of the predictors were compared. The best observed sinusoid predictor outperformed the best MSO predictor by 4 orders of magnitude. In turn, the best observed MSO predictor outperformed the best Mackey-Glass predictor by 2 orders of magnitude.
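For readers unfamiliar with the training step described above, the sketch below is a minimal CPU-side illustration (NumPy, not the GPU implementation developed in the thesis) of an echo state network whose linear readout is fitted offline by Tikhonov-regularised (ridge) least squares. The reservoir size, spectral radius, regularisation weight and the toy task are illustrative assumptions.

```python
import numpy as np

def train_esn_readout(u, y, n_res=300, rho=0.9, ridge=1e-6, seed=0):
    """Drive a random reservoir with input u (T x n_in) and fit the readout for
    targets y (T x n_out) by Tikhonov-regularised least squares:
    W_out = Y X^T (X X^T + ridge * I)^(-1)."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, u.shape[1]))
    W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius rho
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):                        # collect reservoir states (no washout, for brevity)
        x = np.tanh(W_in @ u_t + W @ x)
        states[t] = x
    X, Y = states.T, y.T                               # n_res x T design matrix, n_out x T targets
    W_out = np.linalg.solve(X @ X.T + ridge * np.eye(n_res), X @ Y.T).T
    return W_in, W, W_out

# toy usage: one-step-ahead prediction of a sinusoid
t = np.arange(2000)
u = np.sin(0.2 * t)[:, None]
y = np.sin(0.2 * (t + 1))[:, None]
W_in, W, W_out = train_esn_readout(u, y)
```

Solving the regularised normal equations directly, as above, is the usual offline choice when the state-history matrix fits in memory; the thesis's comparison of GPU and CPU timings concerns exactly this kind of dense linear-algebra step at much larger reservoir sizes.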
APA, Harvard, Vancouver, ISO, and other styles
47

Alsaedy, Ammar, and Nikolai Tarkhanov. "The method of Fischer-Riesz equations for elliptic boundary value problems." Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2012/6179/.

Full text
Abstract:
We develop the method of Fischer-Riesz equations for general boundary value problems elliptic in the sense of Douglis-Nirenberg. To this end we reduce them to a boundary problem for a (possibly overdetermined) first order system whose classical symbol has a left inverse. For such a problem there is a uniquely determined boundary value problem which is adjoint to the given one with respect to the Green formula. On using a well elaborated theory of approximation by solutions of the adjoint problem, we find the Cauchy data of solutions of our problem.
APA, Harvard, Vancouver, ISO, and other styles
48

Delame, Thomas. "Les squelettes : structures d'interaction directe et intuitive avec des formes 3D." Thesis, Dijon, 2014. http://www.theses.fr/2014DIJOS013/document.

Full text
Abstract:
The interactions in shape-creation graphic applications are far from natural. Users tend to avoid such applications as much as possible, preferring to sketch or model their shapes. To bridge this widening gap between computers and the general public, we focus on skeletons. They are intuitive shape representation models that we propose to use as direct and intuitive interaction structures. All skeletons suffer from very low quality as shape representation models, whether in the geometry of the shape they capture, the quantity of skeletal noise they contain, or the lack of useful organization of their elements. Moreover, some functionalities that must be granted to skeletons are only partially solved, and those solutions make use of additional data computed from the shape during skeletonization. Thus, when the skeleton is modified by an interaction, we cannot update those data to make use of such functionalities. Through a practical study of skeletons, we have built a set of algorithmic solutions to those problems. We make optimal use of skeleton data to visualize the shape described by a skeleton, to remove skeletal noise and to structure skeleton elements. With our methods, we build the meso-skeleton, a hierarchical structure that captures and controls all characteristic parts of a shape. The meso-skeleton is adapted to be used as a direct and intuitive interaction structure, which allows us to bridge the aforementioned gap. Our work can also lead to further research to enhance skeletonization techniques and thus produce skeletons that are good-quality shape representation models.
APA, Harvard, Vancouver, ISO, and other styles
49

VERLYNDE, BERNARD. "C. R. R. A. M. 62 (centre de reception et de regularisation des appels medicaux du pas-de-calais) : bilan de deux ans et demi de fonctionnement." Lille 2, 1991. http://www.theses.fr/1991LIL2M216.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Nielsen, Amanda. "Challenging Rightlessness : On Irregular Migrants and the Contestation of Welfare State Demarcation in Sweden." Doctoral thesis, Linnéuniversitetet, Institutionen för statsvetenskap (ST), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-49015.

Full text
Abstract:
This thesis explores the political struggles that followed after the appearance of irregular migrants in Sweden. The analysis starts from the assumption that the group’s precarious circumstances of living disrupted the understanding of Sweden as an inclusive society and shed light on the limits of the welfare state’s inclusionary ambitions. The overarching analytical point of entry is accordingly that the appearance of irregular migrants constitutes an opening for contestation of the demarcation of the welfare state. The analysis draws on two strands of theory to explore this opening. Citizenship theory, first, provides insights about the contradictory logics of the welfare state, i.e. the fact that it rests on norms of equality and inclusion at the same time as it is premised on a fundamental exclusion of non-members. Discourse theory, furthermore, is brought in to make sense of the potential for contestation. The study approaches these struggles over demarcation through an analysis of the debates and claims-making that took place in the Swedish parliament between 1999 and 2014. The focal point of the analysis is the efforts to make sense of and respond to the predicament of the group. The study shows that efforts to secure rights and inclusion for the group revolved around two demands. The first demand, regularisation, aimed to secure rights for irregular migrants through status, i.e. through the granting of residence permits, whereas the second demand, access to social rights, aimed to secure rights through turning the group into right-bearers in the welfare state. The thesis concludes that the debates and claims-making during the 2000s resulted in a small, but significant, shift in policy. In 2013, new legislation was adopted that granted irregular migrants access to schooling and health- and medical care. I argue that this was an effect of successful campaigning that managed to establish these particular rights as human rights, and as such, rights that should be provided to all residents regardless of legal status. Overall, however, I conclude that there has been an absence of more radical contestation of the citizenship order, and of accompanying notions of rights and entitlement, in the debates studied.
APA, Harvard, Vancouver, ISO, and other styles
