Dissertations / Theses on the topic 'L1 Norms'

Consult the top 45 dissertations / theses for your research on the topic 'L1 Norms.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Chakraborti, Nisith Ranjan. "Solution of certain locational problems arising in L1 Norms." Thesis, University of North Bengal, 1994. http://hdl.handle.net/123456789/598.

2

Flint, Alexander. "The effects of interlocutor backchannels and L1 backchannel norms on the speech of L2 English learners." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:76ca16c9-20a8-40d3-bc53-e8a05c4cbe00.

Abstract:
Verbal backchannels - short responses such as 'uh-huh' and 'mhm' given by an interlocutor to the main speaker - have been studied extensively for several decades. The great majority of the research has been descriptive or based on backchannel uses. In contrast, little has been reported of their effects on spoken interaction, and almost no research has examined their effects on second language (L2) speech. Given that first language (L1) backchannel norms vary, L2 speakers unaccustomed to different norms could be affected when exposed to such variation. This thesis investigated such effects through the use of a quasi-experimental repeated-measures design that compared the effects of two backchannel frequencies - one approximately a third of the other - on L2 English speech. The 37 L1 Japanese and 34 L1 Mandarin Chinese participants spoke in English to an interlocutor who varied the frequency of backchannels given in different dyadic interactions. The resultant audio recordings were transcribed and analysed using common measures of speech complexity, accuracy and fluency. Multivariate analyses of variance and t-tests showed that the fluency of each group increased when the higher of the two frequencies was given and that, while the accuracy of the Japanese group did not alter, the Chinese group was less accurate in one set of interactions when receiving the higher frequency of backchannels. Effect sizes for these changes (d = 0.19-0.87) were comparable with other studies that used the same measures of fluency and accuracy. There were no statistically significant differences for measures of complexity. The findings show that the contribution of L1 norms to the effects of backchannels on L2 interactions is not as clear-cut as assumed by previous research. The implications of the findings extend into language testing, teaching, theory and research methods.
3

Ugolini, Elisa. "Ricostruzione di immagini in norma L1." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/6459/.

Abstract:
This thesis addresses the problem of computed tomography image reconstruction, considering a model that uses total variation as the regularization term and the L1 norm as the fidelity term (the TV/L1 model). The problem is solved by modifying an alternating minimization method originally used for the deblurring and denoising of images affected by impulsive noise. The method is tested in the case of Gaussian noise, with fan-beam and parallel-beam geometries. Finally, the results obtained from the experiments are reported.
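For orientation, the TV/L1 model referred to above is commonly written as follows; the notation (A for the fan-beam or parallel-beam projection operator, b for the measured data, lambda for the regularization weight) is assumed here for illustration rather than taken from the thesis:

$$\min_{x}\; \mathrm{TV}(x) + \lambda\,\|Ax - b\|_{1}, \qquad \mathrm{TV}(x) = \sum_{i}\|(\nabla x)_{i}\|_{2}.$$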
4

Lima, Jose Paulo Rodrigues de. "Representação compressiva de malhas." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-17042014-151933/.

Abstract:
Data compression is an area of major interest in computational terms due to the issues of storage and transmission. In particular, mesh compression has wide usage due to the increase of its application in games and three-dimensional modeling. In recent years, a new theory of signal acquisition and reconstruction was developed, based on the concepts of sparsity, minimization of the L1 norm and incoherence of the signal, called Compressive Sensing (CS). This theory has some remarkable features, such as random sampling and reconstruction by minimization, in such a way that the signal acquisition itself considers only the significant coefficients. Any object that can be interpreted as a sparse signal allows its use. Thus, by representing an object (sounds, images) sparsely, one can apply the CS technique. This work explores the viability of CS theory for mesh compression, so that compressive sensing and representation of the mesh geometry become possible. In the experiments performed, different input parameters and L1-norm minimization strategies were used. The results show that CS can be used as a mesh geometry compression strategy.
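As a concrete illustration of the L1-minimization step at the heart of CS, the following Python sketch recovers a sparse signal by basis pursuit recast as a linear program. It is a generic example under standard CS assumptions (random Gaussian measurements, exact sparsity), not code from the thesis:

```python
# Basis pursuit: min ||x||_1  s.t.  A x = y, solved as a linear program
# by splitting x = u - v with u, v >= 0, so that ||x||_1 = sum(u + v).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 60, 30, 4                         # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))             # random measurement matrix
y = A @ x_true

c = np.ones(2 * n)                          # objective: sum(u) + sum(v)
A_eq = np.hstack([A, -A])                   # A u - A v = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With m = 30 random measurements of a 4-sparse signal of length 60, the recovery error is typically near machine precision, which is the phenomenon the thesis exploits for mesh geometry.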
5

Zayouna, Ammar. "Optical flow estimation using steered-L1 norm." Thesis, Middlesex University, 2016. http://eprints.mdx.ac.uk/21273/.

Abstract:
Motion is a very important part of understanding the visual picture of the surrounding environment. In image processing it involves the estimation of displacements for image points in an image sequence. In this context, dense optical flow estimation is concerned with the computation of pixel displacements in a sequence of images; it has therefore been used widely in the fields of image processing and computer vision. A great deal of research has been dedicated to enabling accurate and fast motion computation in image sequences. Despite recent advances in the computation of optical flow, there is still room for improvement, and optical flow algorithms still suffer from several issues, such as motion discontinuities, occlusion handling, and robustness to illumination changes. This thesis investigates the topic of optical flow and its applications; it addresses several issues in the computation of dense optical flow and proposes solutions. Specifically, the thesis is divided into two main parts dedicated to two main areas of interest in optical flow. In the first part, image registration using optical flow is investigated, using both local and global optical flow methods. An image registration approach based on an improved version of the combined local-global method of optical flow computation is proposed. A bilateral filter is used in this optical flow method to improve its edge-preserving performance. It is shown that image registration via this method gives more robust results than the local and global optical flow methods previously investigated. The second part of the thesis encompasses the main contribution of this research, which is an improved total variation L1 norm. A smoothness term is used in the optical flow energy function to regularise it. The L1 norm is a plausible choice for such a term because of its performance in preserving edges; however, this term is known to be isotropic and hence decreases the penalisation near motion boundaries in all directions. The proposed improved L1 smoothness term (termed here the steered-L1 norm) demonstrates similar performance across motion boundaries but improves the penalisation performance along such boundaries.
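For context, the classical TV-L1 optical flow energy that such smoothness terms modify is usually written as below; the notation (u = (u1, u2) for the flow field, I0 and I1 for consecutive frames) is assumed here. The steered-L1 norm described in the abstract replaces the isotropic first term with one oriented along motion boundaries:

$$E(\mathbf{u}) = \int_{\Omega}\big(|\nabla u_1| + |\nabla u_2|\big)\,d\mathbf{x} \;+\; \lambda \int_{\Omega}\big|I_1(\mathbf{x}+\mathbf{u}(\mathbf{x})) - I_0(\mathbf{x})\big|\,d\mathbf{x}.$$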
6

Hess, Eric. "Ramp Loss SVM with L1-Norm Regularization." VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3538.

Abstract:
The Support Vector Machine (SVM) classification method has recently gained popularity due to the ease of implementing non-linear separating surfaces. SVM is an optimization problem with two competing goals: minimizing misclassification on training data and maximizing the margin defined by the normal vector of the learned separating surface. We develop and implement new SVM models based on previously conceived SVMs with L1-norm regularization and ramp loss error terms. The goal is a new SVM model that is robust to outliers thanks to the ramp loss, easy to implement in open-source and off-the-shelf mathematical programming solvers, and relatively efficient to solve thanks to the mixed-integer linear form of the model. To show the effectiveness of the models, we compare results of ramp loss SVM with L1-norm and L2-norm regularization on human organ microbial data and on simulated data sets with outliers.
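One standard way to cast a ramp-loss SVM with L1-norm regularization as a mixed-integer linear program is sketched below; this is a textbook-style formulation consistent with the abstract (w split as w+ - w-, big-M constant M), not necessarily the thesis's exact model:

$$
\begin{aligned}
\min_{w^{\pm},\,b,\,\xi,\,z}\quad & \sum_{j}\big(w_j^{+}+w_j^{-}\big) \;+\; C\sum_{i}\big(\xi_i + 2z_i\big)\\
\text{s.t.}\quad & y_i\big((w^{+}-w^{-})^{\top}x_i + b\big) \;\ge\; 1-\xi_i - M z_i \quad \forall i,\\
& 0 \le \xi_i \le 2,\qquad w^{+},w^{-}\ge 0,\qquad z_i\in\{0,1\}.
\end{aligned}
$$

Points that fall on the flat part of the ramp take z_i = 1 and contribute a fixed cost of 2, which is what bounds the influence of outliers.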
7

Azaoui, Brahim. "Coconstruction de normes scolaires et contextes d’enseignement : une étude multimodale de l’agir professoral." Thesis, Montpellier 3, 2014. http://www.theses.fr/2014MON30031/document.

Abstract:
A considerable body of research has shown interest in teacher action. Though the nonverbal dimension of these actions is acknowledged, few studies have considered it thoroughly in their analysis. Hence, following an ethnographic approach, our work analyzes the verbal and nonverbal actions of two secondary school teachers. Each one teaches both French as an L1 to native speakers and French as a schooling language to non-native speakers. This work attempts to assess the effect of the teaching context on the teachers' actions, and more specifically on the way they co-construct school norms (language and interaction norms). It also aims at highlighting the invariants of the normalizing process from one teaching context to the next. This work relies on the observation and analysis of two types of corpora: video-recorded class interactions, transcribed with ELAN, and three different types of videoed confrontations: the teacher's self-confrontation, students' observation of and comments on videoed interactions of their class, and the teacher's confrontation with her students' videoed reflections. We analyzed the norm-construction strategies using both a quantitative and a qualitative approach to the verbal and nonverbal productions, borrowing tools from various fields: enunciative linguistics, discourse analysis, conversation analysis, and micro-sociology.
8

Shen, Chenyang. "L1-norm local preserving projection and its application." HKBU Institutional Repository, 2012. https://repository.hkbu.edu.hk/etd_ra/1388.

9

Guidi, Anna Beatrice. "Regolarizzazione in norma L1 per l'inversione di dati NMR." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14119/.

Abstract:
This thesis deals with the 2D inversion of Nuclear Magnetic Resonance data. NMR relaxometry is a technique for investigating the structure and dynamics of molecules, based on the correlation between the longitudinal and transverse relaxation time constants. The equation relating the 2D array of acquired data (S) to the distribution of relaxation times (G) is S = Kc G Kr' + E, where the kernels Kr and Kc carry information on the NMR decay signal, while E is the noise due to the measurement process. This is an inverse problem, since G must be determined from the measured data and the kernels. The problem is notoriously ill-posed, and one objective of the thesis was to analyze its ill-posedness by means of the discrete Picard condition. Another objective was to analyze the use of the L1-norm regularization recently proposed in the literature to solve this NMR problem. A critical aspect of regularization methods lies in the choice of the regularization parameter lambda, which in this thesis was addressed in two ways: on the one hand, by testing the update rule for lambda suggested in a research article and based on knowledge of the noise norm; on the other, by proposing a new update rule for lambda for the case in which the noise norm is not known. The experiments showed that the problem must be solved accurately and that, when the noise norm is known, the update rule for lambda suggested in the article works well but improves if a slight underestimate of the norm is used. When the noise norm is not known, the proposed update criterion gives positive results.
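In the notation of the abstract, the L1-regularized inversion it refers to is typically posed as the following problem; the non-negativity constraint on G is a standard assumption for relaxation-time distributions, and the thesis's exact formulation may differ:

$$\min_{G \ge 0}\; \|K_c\, G\, K_r^{\top} - S\|_F^{2} + \lambda\,\|G\|_{1}.$$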
10

Bolognesi, Matteo. "Metodi di Ricostruzione di Immagini mediante regolarizzazione in Norma L1." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15770/.

Abstract:
The image reconstruction problem can be modeled discretely as a linear system A x = b, with A an ill-conditioned matrix, and consists in estimating an image x that is a good approximation of the exact image. Since the problem is ill-posed, regularization methods must be used; these reformulate the linear problem as the minimization of a function F(x) = f(x) + g(x), where f(x) is the data-fidelity term, g(x) = lambda*phi(x) is the regularization term, and lambda is the regularization parameter. The choice of lambda is of fundamental importance, since it determines the quality of the reconstruction. This thesis analyzes three different regularization functionals, all based on the L1 norm, and examines some automatic criteria for choosing the regularization parameter, other than the best-known ones (the discrepancy principle, the L-curve and GCV), both when the noise norm is known and when it is not. The contribution of the thesis is the analysis of three sample images with very different characteristics in terms of contrast, continuity and color. Two update rules are then proposed for each of the regularization functionals and validated through experiments on the three images. The experiments showed the robustness of the proposed update rules with respect to the characteristics of the images, leading to good results in most cases.
11

Ignatti, Laura. "Ricostruzione di immagini mediante regolarizzazione adattiva in norma L2+L1." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16409/.

Abstract:
The image reconstruction problem consists in determining an object x such that Ax = b, where A is an ill-conditioned matrix describing the blur and b is a vector representing the degraded image at our disposal. This is an ill-posed linear problem. To stabilize it, one must resort to regularization methods, which reformulate the original problem as a minimization problem whose objective function consists of two terms: the first measures the fidelity of the solution to the data, and the second adds a priori information about the desired solution. The latter term depends on a parameter, called the regularization parameter. For regularization to be effective, two crucial issues must be addressed: the choice of the regularization term, which significantly affects the quality of the reconstruction, and the choice of the regularization parameter, which balances fidelity to the data against the prescribed a priori conditions. This work deals with both aspects. In order to overcome the known limits of standard L1- and L2-norm regularization techniques, we used a multiple regularization combining TV regularization, in the L1 norm, with multi-parameter Tikhonov regularization, in the L2 norm. Automatic update rules were used to choose the regularization parameters. The contribution of the thesis is twofold: on the one hand, the effects of the multiple regularization technique are analyzed; on the other, an algorithm, UpenTv, is proposed that solves the regularized problem introduced here. We experimentally compared UpenTv with the standard TV and Tikhonov regularization techniques. The results seem promising: in all cases the multiple regularization provided the best reconstructions in the shortest time.
12

Galiotto, Valentina. "Inversione della trasformata di Laplace mediante regolarizzazione con norma L1." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/7075/.

Abstract:
Data from NMR spectroscopy are the effect of phenomena described through the Laplace transform of the source that produced them. This is an inverse problem with discrete data, and it calls for numerical methods for the inversion of the Laplace transform from discrete data, which is notoriously an ill-posed problem and therefore requires regularization methods. In this context, a variant of the models present in the literature, which use the L2 norm, is proposed, introducing the L1 norm instead.
13

Shi, Mingren. "Constrained regularization with the L1 norm for ill-posed problems." PhD thesis, Murdoch University, 1997. https://researchrepository.murdoch.edu.au/id/eprint/51534/.

Abstract:
Many advances in modern science and technology have resulted in linear ill-posed problems, whose operator form is Kf = g, with possibly some constraints on the solution (e.g. non-negativity). The prototype example is a Fredholm integral equation of the first kind. An effective and prominent approach to cope with the instability of these problems is known as regularization. Most existing regularization methods are based on the L2 norm, and this approach is now well developed. This thesis aims to develop a practical, general regularization method using the L1 norm in both unconstrained and linearly constrained cases...
14

De, Santis Ruggero. "L1-norm based regularization for a non linear imaging model tomography." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16163/.

Abstract:
Digital tomosynthesis is a technique that allows the reconstruction of any slice of a 3D object from a certain number of 2D projections. Until 2010 the mathematical model for tomosynthesis was usually simplified: the X-ray beam was considered monoenergetic and the object was assumed to be made of a single material. In this work we consider the multimaterial polyenergetic model. The polyenergetic model requires solving a large-scale, nonlinear inverse problem, which is more expensive than the typically used simplified, linear monoenergetic model. Inverse problems require regularization in order to be stabilized and solved. We consider two types of regularization: the first based on the L1 norm of the solution and the second based on the L1 norm of the gradient of the solution (Total Variation). We then solve the regularized problem using two iterative methods: the gradient method and a nonlinear hybrid conjugate gradient method.
15

Kim, Buyong. "Lp norm estimation procedures and an L1 norm algorithm for unconstrained and constrained estimation for linear models." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/53627.

Abstract:
When the distribution of the errors in a linear regression model departs from normality, the method of least squares seems to yield relatively poor estimates of the coefficients. One alternative approach to least squares which has received a great deal of attention of late is minimum Lp norm estimation. However, the statistical efficiency of an Lp estimator depends greatly on the underlying distribution of errors and on the value of p. Thus, the choice of an appropriate value of p is crucial to the effectiveness of Lp estimation. Previous work has shown that L1 estimation is a robust procedure in the sense that it leads to an estimator which has greater statistical efficiency than the least squares estimator in the presence of outliers, and that L1 estimators have some desirable statistical properties asymptotically. This dissertation is mainly concerned with the development of a new algorithm for L1 estimation and constrained L1 estimation. The mainstream of computational procedures for L1 estimation has been the simplex-type algorithms via the linear programming formulation. Other procedures are the reweighted least squares method, and nonlinear programming techniques using the penalty function approach or descent methods. A new computational algorithm is proposed which combines the reweighted least squares method and the linear programming approach. We employ a modified Karmarkar algorithm to solve the linear programming problem instead of the simplex method. We prove that the proposed algorithm converges in a finite number of iterations. Our simulation study demonstrates that the proposed algorithm requires fewer iterations to solve standard problems than the simplex-type methods, although the amount of computation per iteration is greater. The proposed algorithm for unconstrained L1 estimation is extended to the case where the L1 estimates of the parameters of a linear model satisfy certain linear equality and/or inequality constraints. These two procedures are computationally simple to implement, since a weighted least squares scheme is adopted at each iteration. Our results indicate that the proposed L1 estimation procedure yields very accurate and stable estimates and is efficient even when the problem size is large.
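Among the computational procedures mentioned in this abstract, reweighted least squares is the simplest to sketch. The Python fragment below is a generic illustration of iteratively reweighted least squares (IRLS) for unconstrained L1 regression; it is not the author's hybrid Karmarkar-based algorithm:

```python
# IRLS for L1 regression: minimize sum |y - X beta| by solving a sequence
# of weighted least squares problems with weights ~ 1 / |residual|.
import numpy as np

def l1_regression_irls(X, y, n_iter=50, eps=1e-8):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # least squares start
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)           # eps guards zero residuals
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.standard_normal(100)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(100)
y[::10] += 20.0                                        # gross outliers
print(l1_regression_irls(X, y))                        # close to [1, 2]
```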
16

Jot, Sapan. "pcaL1: An R Package of Principal Component Analysis using the L1 Norm." VCU Scholars Compass, 2011. http://scholarscompass.vcu.edu/etd/2488.

Abstract:
Principal component analysis (PCA) is a dimensionality reduction tool which captures the features of a data set in a low-dimensional subspace. Traditional PCA uses the L2 norm and has much-desired orthogonality properties, but is sensitive to outliers. PCA using the L1 norm has been proposed as an alternative to counter the effect of outliers. The R environment for statistical computing already provides the L2-PCA function prcomp(), but there are not many options for L1-norm PCA methods. The goal of this research was to create a single R package offering different PCA methods based on the L1 norm. We therefore chose three different L1-PCA algorithms: PCA-L1 proposed by Kwak [10], L1-PCA* by Brooks et al. [1], and L1-PCA by Ke and Kanade [9], and created the package pcaL1 in R, interfacing with C implementations of these algorithms. CLP, an open-source solver for linear problems, is used to solve the optimization problems in L1-PCA* and L1-PCA. We use this package on human microbiome data to investigate the relationships between people based on their colonizing bacteria.
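Of the three algorithms wrapped by pcaL1, Kwak's PCA-L1 is the easiest to sketch: it finds a direction w maximizing ||Xw||_1 by iterating a sign-flipping fixed point. The Python fragment below is a hedged re-implementation of that idea for a single component, not the package's C code:

```python
# Kwak's PCA-L1 for one component: iterate s = sign(X w), w = X^T s / ||X^T s||
# until the direction stops changing; converges in finitely many steps.
import numpy as np

def pca_l1_component(X, n_iter=100):
    """X: (n_samples, n_features), assumed centered."""
    w = X[np.argmax(np.linalg.norm(X, axis=1))].copy()  # start at longest sample
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)
        s[s == 0] = 1.0                                 # avoid zero signs
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5)) * np.array([5.0, 2.0, 1.0, 1.0, 1.0])
X -= X.mean(axis=0)
print(pca_l1_component(X))   # roughly the first coordinate axis
```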
17

Strazzari, Dario. "Un metodo di tipo Newton per la ricostruzione di immagini con norma L1." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/7228/.

Abstract:
The aim of this work is to implement, in an efficient and reliable way, a Newton-type method for image reconstruction with an L1-norm regularization term. In particular, two methods, named "OWL-QN for inversion" and "preconditioned OWL-QN", are presented and tested in numerous experiments. The methods are derived by considering the peculiarities of the problem and the properties of the discrete Fourier transform. The results of the numerical experiments show the merit of the proposed contribution, demonstrating the superiority of the new methods over the OWL-QN method found in the literature, even when the latter is adapted to images.
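For readers unfamiliar with OWL-QN, its distinctive ingredient is the pseudo-gradient of F(x) = L(x) + lambda*||x||_1, which selects the steepest one-sided descent direction where the L1 term is not differentiable. The following Python sketch of that computation is generic, not the preconditioned variant developed in the thesis:

```python
import numpy as np

def pseudo_gradient(x, grad_l, lam):
    """Pseudo-gradient of F(x) = L(x) + lam * ||x||_1,
    where grad_l is the gradient of the smooth part L at x."""
    g = np.zeros_like(x)
    nz = x != 0
    g[nz] = grad_l[nz] + lam * np.sign(x[nz])  # differentiable coordinates
    z = ~nz
    right = grad_l + lam                       # one-sided derivative, moving up
    left = grad_l - lam                        # one-sided derivative, moving down
    g[z & (right < 0)] = right[z & (right < 0)]
    g[z & (left > 0)] = left[z & (left > 0)]
    return g                                   # zero where |grad_l| <= lam at x = 0

# At x = 0 the pseudo-gradient vanishes exactly when |grad_l| <= lam,
# which is the L1 optimality condition at zero:
print(pseudo_gradient(np.zeros(3), np.array([0.5, -2.0, 2.0]), 1.0))  # [ 0. -1.  1.]
```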
18

Wang, Han. "Méthodes de reconstruction d'images à partir d'un faible nombre de projections en tomographie par rayons x." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00680100.

Abstract:
In order to improve the safety (lower dose) and the productivity (faster acquisition) of X-ray computed tomography (CT) systems, we seek to reconstruct a high-quality image from a small number of projections. Classical algorithms are not suited to this situation: the reconstruction is unstable and perturbed by artifacts. The Compressed Sensing (CS) approach assumes that the unknown image is "sparse" or "compressible", and reconstructs it via an optimization problem (minimization of the TV/L1 norm) that promotes sparsity. To apply CS in CT with the pixel/voxel as the representation basis, we need a sparsifying transform and must combine it with the "X-ray projector" applied to a pixelized image. In this thesis, we adapted a radial basis of the Gaussian family, called "blobs", to CS-based CT reconstruction. It has better space-frequency localization than the pixel, and operations such as the X-ray transform can be evaluated analytically and are easily parallelizable (on a GPU platform, for example). Compared with the classical Kaiser-Bessel blob, the new basis has a multiscale structure: an image is a sum of translated and dilated radial Mexican hat functions. Typical medical images are compressible in this basis, so the sparse-representation stage of ordinary CS algorithms is no longer necessary. 2D simulations showed that the existing TV/L1 algorithms are more efficient and the reconstructions have better visual quality than with the equivalent approach based on a pixel-wavelet basis. This new approach was also validated on experimental 2D data, where we observed that the number of projections can generally be reduced by up to 50% without compromising image quality.
19

Silva, Diego Wesllen da. "Diagnóstico de influência bayesiano em modelos de regressão da família t-assimétrica." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-10082017-005536/.

Abstract:
The linear regression model with errors in the skew-t family, which includes the normal, Student-t and skew-normal distributions as particular cases, has been considered a robust alternative to the normal model. To conclude which model is in fact more robust, it is important to have a method both for identifying an observation as an outlier and for assessing the influence of this observation on the estimates. In Bayesian regression models, one of the best-known measures for identifying an outlier is the conditional predictive ordinate (CPO). We analyze the influence of these observations on the estimates both globally, that is, on the complete parameter vector of the model, and marginally, only on the regression parameters. We consider the L1 norm and the Kullback-Leibler divergence as measures of the influence of the observations on the parameter estimates. Using the Bayesian approach, we find the full conditional distributions of all the models for use in the Gibbs sampler, thus obtaining samples from the posterior distribution of the parameters. These samples are used in the calculation of the CPO and of the divergence measures studied. The major contribution of this work is to present the global and marginal influence measures calculated for the Student-t, skew-normal and skew-t models. In the application to original and contaminated real data, we observed that in general the Student-t model is a robust alternative to the normal model. The skew-t model, however, is not in general a robust alternative to the normal model. The robustification capability of the skew-t model is directly linked to the position of the outlier's residual relative to the distribution of the residuals.
20

Mancini, Chiara. "Metodo dell'ortante per la ricostruzione di immagini con rumore di Poisson mediante regolarizzazione in norma L1." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18231/.

Abstract:
The image reconstruction problem usually considers images corrupted by white Gaussian noise. For some types of images, however, the prevailing noise is Poisson noise, generated by the photon-counting process during image acquisition. This thesis considers the problem of reconstructing images affected by Poisson noise, with regularization in the L1 norm. Starting from the OWL-QN method found in the literature for the minimization of an L1-regularized function, some modifications are proposed that take into account the characteristics of the problem at hand. The new methods were tested, demonstrating their superiority over the OWL-QN method.
21

Gajny, Laurent. "Approximation de fonctions et de données discrètes au sens de la norme L1 par splines polynomiales." Thesis, Paris, ENSAM, 2015. http://www.theses.fr/2015ENAM0006/document.

Abstract:
Data and function approximation is fundamental in application domains like path planning or signal processing (sensor data). In such domains, it is important to obtain curves that preserve the shape of the data. Considering the results obtained for the problem of data interpolation, L1 splines appear to be a good solution. Contrary to classical L2 splines, these splines preserve linearities in the data and do not introduce extraneous oscillations when applied to data sets with abrupt changes. We propose in this dissertation a study of the problem of best L1 approximation. This study includes developments on the best L1 approximation of functions with a jump discontinuity in general spaces called Chebyshev and weak-Chebyshev spaces; polynomial splines fit in this framework. Approximation algorithms by smoothing splines and spline fits based on a sliding-window process are introduced. The methods previously proposed in the literature can be relatively time-consuming when applied to large data sets, whereas the sliding-window algorithms have linear complexity in the number of data points and, moreover, can be parallelized. Finally, a new approximation approach with prescribed error is introduced, together with a purely algebraic algorithm of linear complexity that can be used in real-time applications.
22

Deprez, Romain. "Optimisation perceptive de la restitution sonore multicanale par une analyse spatio-temporelle des premières réflexions." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4746/document.

Abstract:
The goal of this Ph.D. thesis is to optimize the perceived quality of multichannel sound reproduction systems in the context of a domestic listening room. The research presented has been pursued in two directions. The first deals with room effect, and more particularly with the physical and perceptual aspects of the first reflections within a room. These reflections are specifically described, and a psychoacoustical experiment was carried out in order to extend the available data on their perceptibility, i.e. their potency in altering the perception of the direct sound, whether in its timbral or spatial features. The results exhibit the variation of the threshold depending on the type of stimulus, as well as on the spatial configuration of the direct sound and the reflection. For a given condition, the perceptibility threshold is given as a directivity function depending on the direction of incidence of the reflection. The second topic deals with room correction methods. First, state-of-the-art digital methods are investigated. Their main drawback is that they do not consider the specific impact of the temporal and spatial attributes of first reflections. A new correction method is therefore proposed. It uses an iterative algorithm, derived from the FISTA method and modified in order to take the perceptibility of the reflections into account. All the processing is carried out in a spatial sound representation, where the spatial properties of the sound are analysed on the basis of spherical harmonics.
23

Saponi, Matteo. "Il Compressed Sensing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/6924/.

24

Govindaraj, Santhosh. "Calculation of sensor redundancy degree for linear sensor systems." Thesis, University of Iowa, 2010. https://ir.uiowa.edu/etd/503.

Abstract:
Rapid developments in sensors and related technology have made automation possible in many processes in diverse fields, and have enabled sensor-based fault diagnosis and quality improvement. These tasks depend heavily on the sensor network for accurate measurements. The two major problems that affect the reliability of a sensor system/network are sensor failures and sensor anomalies. The use of redundant sensors offers some tolerance against these two problems, so a redundancy analysis of the sensor system is essential in order to know clearly how robust the system is against them. The degree of sensor redundancy defined in this thesis is closely tied to the fault tolerance of the sensor network and can be viewed as a parameter related to the effectiveness of the sensor system design. In this thesis, an efficient algorithm to determine the degree of sensor redundancy for linear sensor systems is developed. First, the redundancy structure is linked with the matroid structure developed from the design matrix, using matroid theory. The matroid problem equivalent to the degree of sensor redundancy is then formulated mathematically. The solution is obtained by solving a series of l1-norm minimization problems. For many of the problems tested, the proposed algorithm is more efficient than known alternatives such as basic exhaustive search and the bound-and-decomposition method. The proposed algorithm is tested on problem instances from the literature and on a wide range of simulated problems. The results show that the algorithm determines the degree of redundancy more accurately when the design matrix is dense than when it is sparse. The algorithm provided accurate results for most problems in relatively short computation times.
25

Favre, Cécile. "Analyse en normes l1 et l0 des distances et des preferences : plannification en analyse sensorielle : application au confort d'accueil de sieges automobiles." Rennes 2, 1999. http://www.theses.fr/1999REN20028.

Abstract:
Data analysis can be described as the set of tools and techniques allowing the representation of data. In fact, it is not the data themselves that are represented, but observed or calculated proximity relations. Two main types of methods can be distinguished, which complement each other, both based on the principle of projection. After showing that neither the first, which analyses notions of likeness, nor the second, which expresses notions of remoteness, can lead to the best representation of a dissimilarity matrix, we set out a new multidimensional scaling approach under the L1 norm. We propose an algorithmic solution to the MDS-L1 problem, called dist, which allows the computation of exact, optimal or heuristic solutions. The adaptation of this model to preference data gives rise to pref1 and pref0, based on the L1 and L0 norms respectively. They allow the analysis of the preferences of a non-homogeneous population and the recovery of several underlying orders. Lastly, a paired-comparisons planning tool has been developed (p. e. paires). The link between the result of the preference analysis and the recommendations on instrument-measured variables is established by means of a tolerance calculation. The study presented here deals with the static comfort of car seats. It includes all the steps from the selection of a target population based on anthropometric criteria, through the creation of a representative collection of products and the collection of preference and instrument measurement data, up to the data analysis allowing the calculation of tolerances.
26

Guiducci, Martina. "Metodo delle Direzioni Alternate per la ricostruzione di immagini Poissoniane." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19501/.

Abstract:
The aim of image reconstruction is to remove the blur from the acquired image and reduce the noise, so as to obtain an image as close as possible to the exact object. In mathematical terms, this reconstruction translates into the minimization of an objective function made up of two terms: the Kullback-Leibler divergence, which represents the distance between the acquired image and the reconstructed image, and an L1-norm regularization term, which expresses additional information about the solution. However, the Kullback-Leibler divergence involves a logarithm; therefore, when the image to be reconstructed contains many black pixels, a non-negativity constraint must be imposed on the argument of the logarithm and on the solution. The minimization problem studied in this thesis is thus a constrained optimization problem, in which the solution is forced to be non-negative. To this end, the algorithm implemented is the Alternating Direction Method of Multipliers (ADMM). The alternating direction method in turn requires the Orthant-Wise Limited-memory Quasi-Newton method (OWL-QN), a quasi-Newton method designed for solving general large-scale problems regularized in the L1 norm. To determine the search direction, it requires at each iteration the solution of a linear system whose coefficient matrix is the Hessian. This way of computing the search direction does not take the structure of the Hessian into account. A modification of the OWL-QN method was therefore proposed that exploits the peculiar characteristics of the problem, and hence the structure of the Hessian, so that the latter can be inverted quickly in a Fourier space, making the method more efficient. Finally, an experimental analysis was conducted on the image under study, comparing the methods employed.
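In symbols, the constrained problem described above can be written as follows; the notation (A for the blur operator, bg for a constant background, y for the acquired image) is assumed here for illustration:

$$\min_{x \ge 0}\; \mathrm{KL}(Ax + bg,\, y) + \lambda\|x\|_{1}, \qquad \mathrm{KL}(z, y) = \sum_{i}\Big(z_i - y_i + y_i \log \frac{y_i}{z_i}\Big),$$

where the logarithm is what forces the non-negativity of Ax + bg, and ADMM splits the KL term, the L1 term and the constraint into separately tractable subproblems.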
27

Tyaglova, Svetlana. "De la déviance à la norme discursive. Médiation didactique pour l'acquisition d'une seconde langue étrangère (Français) par les bilingues (L1 Russe, L2 Anglais)." Thesis, Paris 10, 2010. http://www.theses.fr/2010PA100096.

Abstract:
This research is driven by the aim of improving the acquisition of French as a second foreign language in Russia (notably at Kemerovo State University, in the Faculty of Translation and Interpreting), where it is generally taught without building on pre-existing knowledge. Yet this knowledge, especially when the first foreign language is English, can be valuable, because transfer from the mother tongue and from the first foreign language can be exploited, provided their interferences are controlled. In this way, we save teaching time and can develop the autonomy and metacognition of the students who need it. Our research shows that students often translate, sometimes from the mother tongue, qualified as the first base language in the theory of R. Lado (1957), and sometimes from English (the second base language). If they cannot control interferences, this spontaneous process leads them to linguistic and cultural errors. In this situation, the recommendations most often advanced are to "avoid" or to "forbid" this natural recourse. We propose, on the contrary, to master it through strategies founded on principles that develop the students' metacognition. The present study examines above all deviations from the norm in French phonetics courses for beginning students. It offers strategies of correction, post-correction and prevention based on principles that are also applicable to other linguistic levels.
28

Elf, Tora. "Percepción de la participación comunicativa en la conversación peninsular por parte de hablantes de español (L1 y L2) residentes en Suecia." Thesis, Stockholm University, Department of Spanish, Portuguese and Latin American Studies, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-8070.

Abstract:
The objective of this study is to investigate the attitudes held toward a communicative style that is not one's own. We assume that, in intercultural encounters, these attitudes can cause misunderstandings. To that end, we observe the reactions of a group of informants, speakers of Spanish (L1 and L2) residing in Sweden, to conversations filmed in Spain. One condition for the selection of the informants was that they not belong to the speech community of the Peninsular variety of Spanish. Another condition was that they be residents of Sweden. This means that the contrastive cultural parameter is, in focused form, adherence to the communicative culture of Sweden, whether as country of origin or of adoption.

The work is based on interviews conducted with Latin American and Spanish-speaking Swedish informants. Before the individual interviews, the informants watched two filmed sequences showing parts of two conversations between Spaniards. These filmed conversations, and the personal experience of the informants, served as the basis for the interviews. All are university students of Spanish at an advanced level.

The two main perspectives of this study are linguistic politeness and the sociocultural approach.

The essential point of this study is that, although most of the informants have some knowledge of the politeness strategies of Spaniards and are consequently fairly accustomed to the phenomenon, the reactions recorded show, in general, a negative tendency toward the Peninsular communicative style, and especially toward simultaneous speech and interruption.
29

Rieber, Jochen M. "Control of uncertain systems with l1 and quadratic performance objectives." [S.l. : s.n.], 2007. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-31056.

30

Bertuccioli, Marialetizia. "Metodi numerici per la ricostruzione di immagini di tomosintesi." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6298/.

Abstract:
The thesis addresses the problem of reconstructing tomosynthesis images, a problem that belongs to the class of ill-posed inverse problems and requires regularization techniques to be solved. The work makes two main contributions: an analysis of the reconstruction model based on regularization with the L1 norm, and an evaluation of the efficiency of some of the methods that constitute the state of the art for L1-norm-based approaches, but which are generally applied to deblurring problems and thus not normally used for tomosynthesis.
31

Le, Xuan-Chien. "Improving performance of non-intrusive load monitoring with low-cost sensor networks." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S019/document.

Full text
Abstract:
In smart homes, human intervention in the energy system should be eliminated as much as possible, and an energy management system is required to automatically adjust the power consumption of the electrical devices. Designing such a system requires a load monitoring system, which can be deployed in two ways: intrusive or non-intrusive. The intrusive approach incurs a high deployment cost and too much technical intervention in the power supply. The Non-Intrusive Load Monitoring (NILM) approach, in which the operation of a device is detected from features extracted from the aggregate power consumption, is therefore more promising. The difficulty for any NILM algorithm is the ambiguity among devices with the same power characteristics. To overcome this challenge, in this thesis we propose to use external information to improve the performance of existing NILM algorithms. The first proposed additional features relate to the previous state of each device, such as the state-transition probability or the Hamming distance between the current state and the previous state. They are used to select the most suitable set of operating devices among all possible combinations when solving the l1-norm minimization problem of NILM with a brute-force algorithm. We also propose to use another external feature, the operating probability of each device, provided by an additional Wireless Sensor Network (WSN). Unlike intrusive load monitoring, in this so-called SmartSense system only a subset of all devices is monitored by the sensors, which makes the system far less intrusive. Two approaches are applied in the SmartSense system. The first applies an edge detector to the aggregate power signal and compares the detected step changes with an existing library to identify the corresponding devices. The second solves the l1-norm minimization problem of NILM with a compositional Pareto-algebraic heuristic and dynamic programming algorithms. Simulation results show that the performance of the proposed algorithms improves significantly with the operating probability of the devices monitored by the WSN. Because only part of the devices are monitored, the selected ones must satisfy criteria such as a high usage rate or power signatures that are easily confused with those of other devices.
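To make the brute-force l1 selection concrete, here is a minimal Python sketch of the idea described in the abstract, with a hypothetical four-device library and a Hamming-distance penalty toward the previous state standing in for the thesis's external features; the device names and power values are illustrative, not taken from the thesis.

```python
from itertools import product

# Hypothetical device library (watts); values are illustrative, not thesis data.
DEVICE_POWER = {"fridge": 120.0, "kettle": 2000.0, "lamp": 40.0, "tv": 90.0}

def nilm_brute_force(p_aggregate, prev_state, lam=50.0):
    """Enumerate all on/off combinations and keep the one minimizing the l1
    residual to the aggregate reading, penalized by the Hamming distance to
    the previous state (one of the proposed external features)."""
    names = sorted(DEVICE_POWER)
    best_state, best_cost = None, float("inf")
    for bits in product((0, 1), repeat=len(names)):
        state = dict(zip(names, bits))
        residual = abs(p_aggregate - sum(DEVICE_POWER[n] * s for n, s in state.items()))
        hamming = sum(state[n] != prev_state.get(n, 0) for n in names)
        cost = residual + lam * hamming
        if cost < best_cost:
            best_state, best_cost = state, cost
    return best_state

prev = {"fridge": 1, "kettle": 0, "lamp": 0, "tv": 1}
print(nilm_brute_force(2210.0, prev))  # -> fridge, kettle and tv on
```

With 2^n combinations, plain enumeration is only feasible for small device sets, which is why the heuristics mentioned in the abstract (Pareto-algebraic composition, dynamic programming) matter at scale.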
APA, Harvard, Vancouver, ISO, and other styles
32

Asif, Muhammad Salman. "Dynamic compressive sensing: sparse recovery algorithms for streaming signals and video." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49106.

Full text
Abstract:
This thesis presents compressive sensing algorithms that utilize system dynamics in the sparse signal recovery process. These dynamics may arise due to a time-varying signal, streaming measurements, or an adaptive signal transform. Compressive sensing theory has shown that under certain conditions, a sparse signal can be recovered from a small number of linear, incoherent measurements. The recovery algorithms, however, for the most part are static: they focus on finding the solution for a fixed set of measurements, assuming a fixed (sparse) structure of the signal. In this thesis, we present a suite of sparse recovery algorithms that cater to various dynamical settings. The main contributions of this research can be classified into the following two categories: 1) Efficient algorithms for fast updating of L1-norm minimization problems in dynamical settings. 2) Efficient modeling of the signal dynamics to improve the reconstruction quality; in particular, we use inter-frame motion in videos to improve their reconstruction from compressed measurements. Dynamic L1 updating: We present homotopy-based algorithms for quickly updating the solution for various L1 problems whenever the system changes slightly. Our objective is to avoid solving an L1-norm minimization program from scratch; instead, we use information from an already solved L1 problem to quickly update the solution for a modified system. Our proposed updating schemes can incorporate time-varying signals, streaming measurements, iterative reweighting, and data-adaptive transforms. Classical signal processing methods, such as recursive least squares and the Kalman filters provide solutions for similar problems in the least squares framework, where each solution update requires a simple low-rank update. We use homotopy continuation for updating L1 problems, which requires a series of rank-one updates along the so-called homotopy path. Dynamic models in video: We present a compressive-sensing based framework for the recovery of a video sequence from incomplete, non-adaptive measurements. We use a linear dynamical system to describe the measurements and the temporal variations of the video sequence, where adjacent images are related to each other via inter-frame motion. Our goal is to recover a quality video sequence from the available set of compressed measurements, for which we exploit the spatial structure using sparse representations of individual images in a spatial transform and the temporal structure, exhibited by dependencies among neighboring images, using inter-frame motion. We discuss two problems in this work: low-complexity video compression and accelerated dynamic MRI. Even though the processes for recording compressed measurements are quite different in these two problems, the procedure for reconstructing the videos is very similar.
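The homotopy machinery itself is involved, but the payoff of warm-starting an L1 solver when the data change slightly can be illustrated with a much simpler stand-in; the sketch below uses plain iterative soft-thresholding (ISTA) rather than the thesis's homotopy path-following, and all names and problem sizes are made up.

```python
import numpy as np

def ista(A, y, lam, x0=None, iters=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||_2^2 + lam*||x||_1.
    Passing x0 warm-starts the solve, which is the point of dynamic updating:
    when y changes slightly, the previous solution is a good initial iterate."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = Lipschitz constant
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # l1 prox
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
y = A @ x_true
x_hat = ista(A, y, lam=0.05)                          # cold start
y_new = y + 0.01 * rng.standard_normal(40)            # slightly changed measurements
x_new = ista(A, y_new, lam=0.05, x0=x_hat, iters=30)  # cheap warm-started update
```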
APA, Harvard, Vancouver, ISO, and other styles
33

Oliver, Parera Maria. "Scene understanding from image and video : segmentation, depth configuration." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/663870.

Full text
Abstract:
In this thesis we aim at analyzing images and videos at the object level, with the goal of decomposing the scene into complete objects that move and interact among themselves. The thesis is divided into three parts. First, we propose a segmentation method to decompose the scene into shapes. Then, we propose a probabilistic method, which works with shapes or objects at two different depths, to infer which objects are in front of the others, while completing the ones that are partially occluded. Finally, we propose two video-related inpainting methods. On the one hand, we propose a binary video inpainting method that relies on the optical flow of the video in order to complete shapes across time, taking their motion into account. On the other hand, we propose a method for optical flow inpainting that takes into account the information coming from the frames.
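As a toy illustration of using optical flow to complete shapes across time, the following sketch pushes a binary mask from one frame to the next along a dense flow field; it is a forward nearest-neighbor warp under assumed conventions (flow stored as per-pixel (dy, dx)), not the method of the thesis.

```python
import numpy as np

def propagate_mask(mask, flow):
    """Push a binary mask from frame t to frame t+1 along a dense flow field,
    assumed stored as flow[y, x] = (dy, dx). Forward nearest-neighbor warp."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    for y, x in zip(*np.nonzero(mask)):
        dy, dx = flow[y, x]
        ny, nx = int(round(y + dy)), int(round(x + dx))
        if 0 <= ny < h and 0 <= nx < w:
            out[ny, nx] = 1
    return out

mask = np.zeros((4, 4), dtype=int); mask[1, 1] = 1
flow = np.zeros((4, 4, 2)); flow[1, 1] = (0, 2)   # this pixel moves 2 columns right
print(propagate_mask(mask, flow))                 # mask pixel now at (1, 3)
```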
APA, Harvard, Vancouver, ISO, and other styles
34

Godard, Alexandre. "Espaces Lipschitz-libres, propriété (M) et lissité asymptotique." Paris 6, 2007. http://www.theses.fr/2007PA066438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Bonaccolto, Giovanni. "Quantile regression methods in economics and finance." Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424494.

Full text
Abstract:
In recent years, quantile regression methods have attracted considerable interest in the statistical and econometric literature. This is due to the advantages of the quantile regression approach, mainly the robustness of the results and the possibility of analysing several quantiles of a given random variable. Such features are particularly appealing in the context of economic and financial data, where extreme events assume critical importance. The present thesis is based on quantile regression, with a focus on economics and finance. First of all, we propose new approaches to developing asset allocation strategies on the basis of quantile regression and regularization techniques. It is well known that the quantile regression model minimizes the portfolio's extreme risk whenever attention is placed on the estimation of the left quantiles of the response variable. We show that, by considering the entire conditional distribution of the dependent variable, it is possible to optimize different risk and performance indicators. In particular, we introduce a risk-adjusted profitability measure, useful for evaluating financial portfolios from a pessimistic perspective, since the reward contribution is net of the most favorable outcomes. Moreover, as we consider large portfolios, we also cope with the dimensionality issue by introducing an l1-norm penalty on the asset weights. Secondly, we focus on the determinants of equity risk and their forecasting implications. Several market and macro-level variables influence the evolution of equity risk in addition to the well-known volatility persistence; however, the impact of those covariates might change depending on the risk level, differing between low and high volatility states. By combining equity risk estimates obtained from the Realized Range Volatility, corrected for microstructure noise and jumps, with quantile regression methods, we evaluate, from a forecasting perspective, the impact of the equity risk determinants in different volatility states and, without distributional assumptions on the realized range innovations, recover both point and conditional distribution forecasts. In addition, we analyse how the relationships among the involved variables evolve over time, through a rolling window procedure. The results show evidence of the selected variables' relevant impact and, particularly during periods of market stress, highlight heterogeneous effects across quantiles. Finally, we study the dynamic impact of uncertainty in causing and forecasting the distribution of oil returns and risk. We analyse the relevance of recently developed news-based measures of economic policy uncertainty and equity market uncertainty in causing and predicting the conditional quantiles and distribution of crude oil variations, defined both as returns and squared returns. For this purpose, on the one hand, we study the causality relations in quantiles through a non-parametric testing method; on the other hand, we forecast the conditional distribution on the basis of the quantile regression approach, and the predictive accuracy is evaluated by means of several suitable tests. Given the presence of structural breaks over time, we implement a rolling window procedure to capture the dynamic relations among the variables.
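For readers unfamiliar with the machinery, a quantile regression fit is the minimizer of the pinball (check) loss; the sketch below shows this for two quantile levels on synthetic data, and an l1 penalty on the coefficients, as in the penalized portfolio model above, would simply be added to the objective. Data and solver choice are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(beta, X, y, tau):
    """Quantile-regression (check) loss at level tau."""
    r = y - X @ beta
    return np.mean(np.maximum(tau * r, (tau - 1.0) * r))

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(200)

# Fit the 5% and 95% conditional quantiles; an l1 penalty on beta would be
# added as `+ lam * np.sum(np.abs(beta))` inside the objective.
beta_05 = minimize(pinball_loss, np.zeros(2), args=(X, y, 0.05), method="Nelder-Mead").x
beta_95 = minimize(pinball_loss, np.zeros(2), args=(X, y, 0.95), method="Nelder-Mead").x
print(beta_05, beta_95)
```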
APA, Harvard, Vancouver, ISO, and other styles
36

Porée, Fabienne. "Estimation et suivi de temps de retard pour la tomographie acoustique océanique." Phd thesis, Université Rennes 1, 2001. http://tel.archives-ouvertes.fr/tel-00439634.

Full text
Abstract:
In ocean acoustic tomography (OAT), waves are transmitted through the marine environment, leading to the observation of several versions of the emitted signal with various attenuations and delays. We are interested here in the problem of estimating these time delays. Maximum-likelihood methods have the drawback of requiring a priori knowledge of the number of paths and are sensitive to poor initialization of the parameters. To overcome these limitations, we propose a Bayesian approach based on prior information about the amplitudes of the received paths, and derive two different deconvolution methods. The first uses the output of a quadratic receiver and leads to the minimization of a simple, low-complexity criterion. In the second method, the signal is observed after matched filtering; we introduce a Bernoulli-Gaussian model and solve the problem using Monte Carlo simulations. Applying these two methods to real data requires taking into account the distortions undergone by the signal in the transmitting and receiving transducers, and possibly during propagation. We show how this uncertainty about the waveform can be handled and demonstrate the improvement in results that this brings. Finally, since the time delays of the propagation channel evolve only moderately between successive measurements, the trace-by-trace deconvolution can be followed by a two-dimensional processing of the image obtained by juxtaposing the successive deconvolved traces. The main advantage of the resulting method is that it does not assume a constant number of propagation paths. All of the proposed methods are tested on synthetic data and then on two types of real data.
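A rough Python sketch of the baseline that Bayesian deconvolution improves upon, matched filtering followed by greedy peak picking, may help fix ideas; the waveform, delays, and the minimum-separation heuristic are all invented for illustration.

```python
import numpy as np

def delay_estimates(received, emitted, n_paths, min_sep=5):
    """Matched filter (cross-correlation with the emitted waveform) followed
    by greedy peak picking with a minimum-separation heuristic."""
    mags = np.abs(np.correlate(received, emitted, mode="valid"))
    delays = []
    for _ in range(n_paths):
        t = int(np.argmax(mags))
        delays.append(t)
        mags[max(0, t - min_sep):t + min_sep + 1] = 0.0   # suppress neighborhood
    return sorted(delays)

emit = np.array([1.0, -0.5, 0.25])
rx = np.zeros(64)
rx[10:13] += emit                                    # path 1: delay 10
rx[30:33] += 0.6 * emit                              # path 2: delay 30, attenuated
print(delay_estimates(rx, emit, n_paths=2))          # -> [10, 30]
```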
APA, Harvard, Vancouver, ISO, and other styles
37

Tardivel, Patrick. "Représentation parcimonieuse et procédures de tests multiples : application à la métabolomique." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30316/document.

Full text
Abstract:
Let Y be a Gaussian vector distributed according to N(m, σ²Id_n) and X a matrix of dimension n × p, with Y observed, m unknown, and σ and X known. In the linear model, m is assumed to be a linear combination of the columns of X. In small dimension, when n ≥ p and ker(X) = {0}, there exists a unique parameter β* such that m = Xβ*; we can then rewrite Y = Xβ* + ε. In this small-dimensional Gaussian linear model framework, we construct a new multiple testing procedure controlling the FWER to test the null hypotheses β*_i = 0 for i ∈ [[1, p]]. This procedure is applied in metabolomics through the freeware ASICS, available online; ASICS allows metabolites to be identified and quantified via the analysis of NMR spectra. In high dimension, when n < p, we have ker(X) ≠ {0}, so the parameter β* described above is no longer unique. In the noiseless case, when σ = 0 and thus Y = m, we show that the solutions of the linear system of equations Y = Xβ having a minimal number of non-zero components are obtained by minimizing the lα "norm" with α small enough.
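The α = 1 end of the lα family mentioned in the abstract is the classical basis pursuit problem, which can be written as a linear program; a minimal sketch (using scipy's linprog with the standard positive/negative split of β) is given below. Smaller α makes the problem non-convex and is outside the scope of this sketch; the data are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(X, y):
    """min ||beta||_1 subject to X beta = y, via the standard LP with
    beta = u - v and u, v >= 0 (the convex alpha = 1 case)."""
    n, p = X.shape
    res = linprog(c=np.ones(2 * p),
                  A_eq=np.hstack([X, -X]), b_eq=y,
                  bounds=[(0, None)] * (2 * p))
    return res.x[:p] - res.x[p:]

rng = np.random.default_rng(4)
X = rng.standard_normal((10, 30))
beta_true = np.zeros(30); beta_true[[2, 11]] = [1.5, -0.7]
print(np.round(basis_pursuit(X, X @ beta_true), 3))  # typically recovers the 2-sparse beta
```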
APA, Harvard, Vancouver, ISO, and other styles
38

Kim, Jingu. "Nonnegative matrix and tensor factorizations, least squares problems, and applications." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42909.

Full text
Abstract:
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of the data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite this success, NMF and NTF have been actively developed only in the recent decade, and algorithmic strategies for computing NMF and NTF have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed. First, efficient algorithms for NMF and NTF are investigated based on a connection from the NMF and NTF problems to the nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe the typical structure of the NLS problems arising in NMF and NTF computation and to design a fast algorithm utilizing that structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method. In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for NLS problems, it is not applicable to rank-deficient cases. We show that the active-set method with a proper starting vector can actually solve rank-deficient NLS problems without ever running into rank-deficient least squares problems during its iterations. Going beyond the NLS problems, we show that a block principal pivoting strategy can also be applied to l1-regularized linear regression. The l1-regularized linear regression, also known as the Lasso, has been very popular due to its ability to promote sparse solutions. Solving this problem is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and its variant, which overcome a limitation of previous active-set methods, are proposed for this problem with successful experimental results. Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way. In particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by the observation that features or data items belonging to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented, along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
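The NMF-NLS connection in the abstract can be made concrete with a two-block alternating scheme in which each block update is an NLS problem; the sketch below solves those blocks with scipy's classical active-set nnls rather than the accelerated block principal pivoting method the thesis proposes, so it shows the structure, not the speed. Sizes and data are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(A, k, iters=20, seed=0):
    """Alternating nonnegative least squares for A ~ W @ H with W, H >= 0.
    Each block update is an NLS problem, solved here column-by-column with
    scipy's classical active-set nnls."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W, H = rng.random((m, k)), rng.random((k, n))
    for _ in range(iters):
        H = np.column_stack([nnls(W, A[:, j])[0] for j in range(n)])       # fix W
        W = np.column_stack([nnls(H.T, A[i, :])[0] for i in range(m)]).T   # fix H
    return W, H

A = np.random.default_rng(1).random((30, 20))
W, H = nmf_anls(A, k=5)
print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))   # relative fit error
```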
APA, Harvard, Vancouver, ISO, and other styles
39

Demazeux, Romain. "Centres de Daugavet et opérateurs de composition à poids." Phd thesis, Université d'Artois, 2011. http://tel.archives-ouvertes.fr/tel-00684688.

Full text
Abstract:
The purpose of this thesis is the study of the norm ||G+T|| of a compact perturbation of an operator G acting between Banach spaces. We first approach the problem from the point of view of the Daugavet property: an operator G is a Daugavet centre if every rank-one operator T (or, equivalently, every compact operator) satisfies ||G+T|| = ||G|| + ||T||. In the first chapter, we give examples of Daugavet centres among the weighted composition operators acting on certain function spaces, such as the space C(K) of continuous functions on a perfect compact space K, the disc algebra, or the space of Lipschitz functions on a complete metric space. In the second chapter, we study a slightly weaker property, namely that the equation ||G+T|| = ||G|| + ||T|| holds only for a certain class of rank-one operators; we then call such an operator G an almost Daugavet centre. We characterize almost Daugavet centres in terms of canonical l^1-types and of the thickness of the operator G. This allows us to obtain a characterization of the operators that fix a copy of the space l^1. The point of view of the last chapter is different: we no longer look for a G that "maximizes" the norm of G+T over all compact operators T, but for a compact operator T that minimizes ||G+T||; in other words, we seek to evaluate the essential norm of G. We complete several results obtained in the setting of weighted composition operators acting between different Hardy spaces.
APA, Harvard, Vancouver, ISO, and other styles
40

Čelikovská, Klára. "L1 regrese." Master's thesis, 2020. http://www.nusl.cz/ntk/nusl-415868.

Full text
Abstract:
This thesis is focused on L1 regression, a possible alternative to ordinary least squares regression. L1 regression replaces least squares estimation with least absolute deviations estimation, thus generalizing the sample median in the linear regression model. Unlike ordinary least squares regression, L1 regression enables the loosening of certain assumptions and leads to more robust estimates. Fundamental theoretical results, including the asymptotic distribution of the regression coefficient estimates, hypothesis testing, confidence intervals and confidence regions, are derived. The method is then compared to ordinary least squares regression in a simulation study, with a focus on heavy-tailed distributions and the possible presence of outlying observations.
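Since the least absolute deviations estimator has no closed form, it is usually computed by linear programming; a minimal sketch of the standard residual-splitting LP formulation follows, with made-up data showing the robustness to outliers mentioned above.

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Least absolute deviations: min_beta sum |y - X beta|, as an LP with
    residuals split into r+ - r-, both nonnegative."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(2 * n)])       # cost: sum(r+ + r-)
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])            # X beta + r+ - r- = y
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    return linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds).x[:p]

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(100), rng.standard_normal(100)])
y = X @ np.array([0.5, -1.0]) + rng.standard_normal(100)
y[:5] += 50.0                                               # gross outliers
print(lad_fit(X, y))                                        # stays near (0.5, -1.0)
```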
APA, Harvard, Vancouver, ISO, and other styles
41

Lu, Pei-Hsuan, and 呂姵萱. "L1-Norm Based Adversarial Example against CNN." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/ua49z8.

Full text
Abstract:
Master's degree
National Chung Hsing University
Department of Computer Science and Engineering
106
In recent years, defending against adversarial perturbations of natural examples in order to build robust machine learning models trained by deep neural networks (DNNs) has become an emerging research field at the conjunction of deep learning and security. In particular, MagNet, consisting of an adversary detector and a data reformer, is by far one of the strongest defenses in the black-box setting, where the attacker aims to craft transferable adversarial examples from an undefended DNN model to bypass a defense module without knowing its existence. MagNet can successfully defend against a variety of attacks on DNNs, including the Carlini and Wagner transfer attack based on the L2 distortion metric. However, in this thesis, under the black-box transfer attack setting we show that adversarial examples crafted based on the L1 distortion metric can easily bypass MagNet and fool the target DNN image classifiers on MNIST and CIFAR-10. We also provide theoretical justification of why the considered approach can yield adversarial examples with superior attack transferability, and we conduct extensive experiments on variants of MagNet to verify its lack of robustness to L1-distortion-based transfer attacks. Notably, our results substantially weaken the existing transfer attack assumption of knowing the deployed defense technique when attacking defended DNNs (i.e., the gray-box setting).
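One way to see why L1-metric attacks behave differently from L2 ones: steepest ascent under an l1 budget concentrates the whole perturbation on the single most sensitive coordinate. The toy step below (for a flattened image and a given loss gradient) illustrates this geometry; it is not the attack used in the thesis.

```python
import numpy as np

def l1_steepest_step(x, grad, eps=0.1):
    """One ascent step on the loss under an l1 budget of eps: the optimal
    l1-constrained step spends the whole budget on the coordinate with the
    largest gradient magnitude, so the perturbation stays sparse."""
    i = np.argmax(np.abs(grad))
    delta = np.zeros_like(x)
    delta[i] = eps * np.sign(grad[i])
    return np.clip(x + delta, 0.0, 1.0)      # keep a valid image in [0, 1]

x = np.full(784, 0.5)                        # a flattened 28x28 'image'
grad = np.random.default_rng(5).standard_normal(784)  # stand-in loss gradient
x_adv = l1_steepest_step(x, grad)
print(int(np.count_nonzero(x_adv != x)))     # -> 1 pixel changed
```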
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Zhirong. "Two Affine Scaling Methods for Solving Optimization Problems Regularized with an L1-norm." Thesis, 2010. http://hdl.handle.net/10012/5546.

Full text
Abstract:
In finance, the implied volatility surface is plotted against strike price and time to maturity. The shape of this volatility surface can be identified by fitting the model to what is actually observed in the market. The metric used to measure the discrepancy between the model and the market is usually defined as the mean squared error of the model prices with respect to the market prices. A regularization term can be added to this error metric to make the solution possess some desired properties. The discrepancy that we want to minimize is thus typically a highly nonlinear function of a set of model parameters plus the regularization term, and a monotonically decreasing algorithm is usually adopted to solve this minimization problem. Steepest descent and Newton-type algorithms are iterative methods, but they are local: they use derivative information around the current iterate to find the next iterate. To ensure convergence, line search and trust region methods are two widely used globalization techniques. Motivated by the simplicity of the Barzilai-Borwein method and the convergence properties brought by globalization techniques, we propose a new Scaled Gradient (SG) method for minimizing a differentiable function plus an L1-norm. This non-monotone iterative method only requires gradient information, and a safeguarded Barzilai-Borwein steplength is used in each iteration. An adaptive line search with an Armijo-type condition check is performed in each iteration to ensure convergence. Coleman, Li and Wang proposed another, trust-region approach to solving the same problem; we give a theoretical proof of the convergence of their algorithm. The objective of this thesis is to numerically investigate the performance of the SG method and to establish global and local convergence properties of Coleman, Li and Wang's trust region method proposed in [26]. Some future research directions are also given at the end of this thesis.
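A bare-bones sketch of the SG idea, a gradient step with a safeguarded Barzilai-Borwein steplength followed by the l1 proximal (soft-thresholding) step, is given below; the non-monotone Armijo line search from the thesis is omitted, and all problem data are synthetic.

```python
import numpy as np

def sg_l1(grad_f, x0, lam, iters=200):
    """Gradient step with a safeguarded Barzilai-Borwein steplength, followed
    by soft-thresholding (the prox of lam*||.||_1). The thesis's non-monotone
    Armijo line search is omitted in this sketch."""
    x_prev = x0.copy()
    g_prev = grad_f(x_prev)
    x = x_prev - 1e-3 * g_prev               # small initial step
    for _ in range(iters):
        g = grad_f(x)
        s, d = x - x_prev, g - g_prev
        alpha = np.clip((s @ s) / (s @ d), 1e-6, 1e2) if s @ d > 1e-12 else 1e-3
        x_prev, g_prev = x, g
        z = x - alpha * g
        x = np.sign(z) * np.maximum(np.abs(z) - alpha * lam, 0.0)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50); x_true[[7, 33]] = [3.0, -2.0]
y = A @ x_true
x_hat = sg_l1(lambda x: A.T @ (A @ x - y), np.zeros(50), lam=0.5)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # should be close to [7, 33]
```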
APA, Harvard, Vancouver, ISO, and other styles
43

Sejeso, Matthews Malebogo. "An l1-norm solution of under-determined linear algebraic systems using a hybrid method." Thesis, 2016. http://hdl.handle.net/10539/21638.

Full text
Abstract:
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of requirements for the degree of Master of Science. Johannesburg, 2016.
The l1-norm solution to an under-determined system of linear equations y = Ax is the sparsest solution to the system. In digital signal processing this mathematical problem is known as compressive sensing. Compressive sensing provides a mathematical framework for sampling and reconstructing an analogue signal at a rate far lower than the rate prescribed by standard information theory. Reconstruction from few samples is possible using non-linear optimization algorithms, provided that the signal is sparse and the sensing matrix is incoherent. The major algorithmic challenge in compressive sensing is to efficiently and effectively find sparse solutions from minimal measurements. General-purpose optimization algorithms are not suitable for solving the non-differentiable l1-minimization problem. In this dissertation, we survey the major practical algorithms for finding the l1-norm solution of an under-determined linear system of equations. Specific attention is paid to the computational settings in which individual methods tend to perform well. We propose a hybrid algorithm that combines the complementary strengths of the fixed-point method and the interior-point method: the strong feature of the fixed-point method is its speed, while the strength of the interior-point method is its accuracy. The hybrid algorithm combines the two methods in a probabilistic manner, tending to prioritise the method that is efficient and robust. The computational performance of the hybrid algorithm is tested on simple signal reconstruction problems. The hybrid algorithm is shown to produce recoverability of sparse solutions similar to that of the fixed-point method and the interior-point method, and it is competitive in terms of speed and accuracy with existing methods.
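The probabilistic combination can be sketched independently of the two solvers: keep a running score per method and sample the next solver in proportion to it. The interface below (solvers returning a solution, a success flag, and a runtime) and the score-update rule are invented for illustration, not taken from the dissertation.

```python
import numpy as np

def hybrid_solve(problems, fixed_point, interior_point, seed=0):
    """Keep a score per solver and pick the next solver with probability
    proportional to its score, so the faster / more reliable method ends up
    prioritized. Solvers return (solution, success_flag, runtime)."""
    rng = np.random.default_rng(seed)
    solvers = [fixed_point, interior_point]
    scores = np.array([1.0, 1.0])
    for prob in problems:
        k = rng.choice(2, p=scores / scores.sum())
        _, ok, runtime = solvers[k](prob)
        reward = (1.0 if ok else 0.1) / max(runtime, 1e-3)
        scores[k] = 0.8 * scores[k] + 0.2 * reward   # exponential moving average
    return scores / scores.sum()

# Dummy stand-ins: a fast but occasionally failing method vs a slow, reliable one.
fp = lambda prob: (None, prob % 4 != 0, 0.01)
ip = lambda prob: (None, True, 0.10)
print(hybrid_solve(range(100), fp, ip))   # the fixed-point method gains weight
```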
APA, Harvard, Vancouver, ISO, and other styles
44

Tien, Jin-yen, and 田錦燕. "A Study of Computing the Shortest Vector of the Two-dimensional Modular Lattice in L1-norm." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/68227710654778755014.

Full text
Abstract:
Master's degree
I-Shou University
Master's Program, Department of Information Engineering
95
In this thesis, we present an algorithm for finding the shortest nonzero vector of Lm(a, b) in the L1 norm. The two-dimensional modular lattice Lm(a, b), generated by a vector (a, b) and a modulus m, is a special case of a planar lattice. Firstly, we show that Lm(a, b) can be transformed into the planar lattice L((0, m), (1, w)), where w = a⁻¹b mod m. Secondly, we prove that the shortest nonzero vector in the two-dimensional modular lattice is the vector corresponding to some even-numbered convergent of w/m. Finally, we show that, by computing the greatest common convergent of w/m and (w+1)/m, this even-numbered convergent of w/m can be found quickly. Our algorithm requires O(log m (log log m)²) bit operations if we employ Schönhage's method to compute the one modular inverse operation and the greatest common convergent of w/m and (w+1)/m.
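A simplified version of the continued-fraction idea can be sketched in a few lines: enumerate the convergents p/q of w/m and test the candidate lattice vectors (q, qw − pm) for the smallest l1 norm. The thesis's refinement, jumping straight to the right even-numbered convergent via the greatest common convergent of w/m and (w+1)/m, is not implemented here.

```python
def convergents(num, den):
    """Continued-fraction convergents p/q of num/den."""
    hs, ks = [0, 1], [1, 0]                 # (p_{k-2}, p_{k-1}), (q_{k-2}, q_{k-1})
    out = []
    while den:
        a, num, den = num // den, den, num % den
        hs = [hs[1], a * hs[1] + hs[0]]
        ks = [ks[1], a * ks[1] + ks[0]]
        out.append((hs[1], ks[1]))
    return out

def shortest_l1_vector(m, w):
    """Scan candidate vectors (q, q*w - p*m) of L((0, m), (1, w)) over the
    convergents p/q of w/m and return the smallest in the l1 norm. The thesis
    shows the minimizer corresponds to an even-numbered convergent and finds
    it directly; this sketch simply checks them all."""
    best_norm, best_v = m, (0, m)           # the generator (0, m) as a baseline
    for p, q in convergents(w, m):
        v = (q, q * w - p * m)
        n = abs(v[0]) + abs(v[1])
        if 0 < n < best_norm:
            best_norm, best_v = n, v
    return best_v

print(shortest_l1_vector(12, 5))            # -> (2, -2): 2*(1, 5) - (0, 12)
```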
APA, Harvard, Vancouver, ISO, and other styles
45

Schomburg, Helen. "New Algorithms for Local and Global Fiber Tractography in Diffusion-Weighted Magnetic Resonance Imaging." Doctoral thesis, 2017. http://hdl.handle.net/11858/00-1735-0000-0023-3F8B-F.

Full text
APA, Harvard, Vancouver, ISO, and other styles