Dissertations / Theses on the topic 'Analisi implicita'

To see the other types of publications on this topic, follow the link: Analisi implicita.


Consult the top 50 dissertations / theses for your research on the topic 'Analisi implicita.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Malheiros, Marcelo de Gomensoro. "Analise topologica de um modelo implicito." [s.n.], 1999. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260172.

Full text
Abstract:
Advisor: Wu Shin-Ting
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação
Master's degree
APA, Harvard, Vancouver, ISO, and other styles
2

Marziali, Giacomo. "Analisi di 3 microdelezioni implicate nel disturbo autistico." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/11235/.

Full text
Abstract:
The thesis reports the genetic analysis of a sample of 128 families with Autism Spectrum Disorder, carried out with the "PsychArray" SNP array system (Illumina), which contains over 500,000 probes across the whole genome. These data were used to identify rare, clinically relevant Copy Number Variants (CNVs). Three CNVs were then selected for further investigation: two microdeletions already described as pathogenic (in regions 1p36.32 and 22q13.33, the latter including the SHANK3 gene) turned out to be de novo, while a third microdeletion in the CTNNA3 gene is inherited from the mother. All three CNVs were validated by Real-Time PCR, which also defined their boundaries. Regarding the CTNNA3 microdeletion, since defects in this gene have been implicated in autism through a recessive mechanism, a sequence analysis of all the exons of the gene was also carried out in the members of the family concerned, in order to search for possible point mutations on the non-deleted allele. This analysis did not identify any potentially damaging variant; therefore the CTNNA3 defect does not appear to be the main cause of the autistic phenotype in this family, although it might play a role as a susceptibility factor.
APA, Harvard, Vancouver, ISO, and other styles
3

Chernyshova, Elizaveta. "Expliciter et inférer dans la conversation : modélisation de la séquence d’explicitation dans l’interaction." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE2132/document.

Full text
Abstract:
Cette thèse porte sur la co-construction de la signification en interaction et les manifestations des processus interprétatifs des participants. En s’intéressant au processus d’explicitation, c’est-à-dire le processus par lequel un contenu informationnel devient explicite dans la conversation, elle propose une étude pluridimensionnelle de la séquence conversationnelle en jeu dans ce processus. La co-construction de la signification est ici abordée comme relevant d’une transformation informationnelle et de l’inférence. Nos analyses ont porté sur un corpus de français parlé en interaction, en contexte de repas et apéritifs entre amis. A partir d’une collection de séquences d’explicitation, définies comme des configurations dans lesquelles une inférence est soumise à validation, ce travail propose une analyse multidimensionnelle, portant un double regard sur les données : celui de l’analyse conversationnelle et celui d’une modélisation de la pratique d’explicitation. Ainsi, nous proposons de parcourir cette pratique selon trois axes d’analyse : (a) une analyse séquentielle, s’intéressant au déploiement de la séquence d’explicitation et des éléments la composant, (b) une analyse reposant sur une modélisation de la gestion informationnelle dans cette séquence, et (c) une analyse des formats linguistiques employés pour l’exhibition du processus inférentiel. Un des enjeux de ce travail est l’élaboration d’un modèle conversationnaliste pour la gestion informationnelle et son application à l’analyse des données de langue parlée en interaction.
This dissertation deals with the co-construction of meaning in interaction and the ways in which conversationalists exhibit their interpretative processes. The focus of this study is the process of explicitation, i.e. the process through which an informational content becomes explicit in conversation. By offering a multi-level analysis of the conversational sequences engaged in this practice, the study approaches the co-construction of meaning from the point of view of informational transformation and inference. The analyses presented here have been conducted on a corpus of spoken French in interaction, within the setting of informal encounters between friends around a meal or a drink. The explicitation sequence is defined as a conversational pattern in which an inference is submitted for confirmation. Starting from a collection of these sequences, this study offers a twofold approach: that of conversation analysis, and that of a modeling of the conversational sequence. The practice of making a content explicit is explored along three analytical lines: (a) a sequential analysis, focusing on the deployment of the explicitation sequence and its components; (b) an analysis based on a device elaborated by modeling information management in these sequences; and (c) an analysis of the linguistic designs used when exhibiting the inference. One of the main contributions of the present study is the proposal of a conversationalist model of information management and its application to the analysis of talk-in-interaction.
APA, Harvard, Vancouver, ISO, and other styles
4

Péchoux, Romain. "Analyse de la complexité des programmes par interprétation sémantique." Thesis, Vandoeuvre-les-Nancy, INPL, 2007. http://www.theses.fr/2007INPL084N/document.

Full text
Abstract:
Il existe de nombreuses approches développées par la communauté Implicit Computational Complexity (ICC) permettant d'analyser les ressources nécessaires à la bonne exécution des algorithmes. Dans cette thèse, nous nous intéressons plus particulièrement au contrôle des ressources à l'aide d'interprétations sémantiques. Après avoir rappelé brièvement la notion de quasi-interprétation ainsi que les différentes propriétés et caractérisations qui en découlent, nous présentons les différentes avancées obtenues dans l'étude de cet outil : nous étudions le problème de la synthèse qui consiste à trouver une quasi-interprétation pour un programme donné, puis, nous abordons la question de la modularité des quasi-interprétations. La modularité permet de diminuer la complexité de la procédure de synthèse et de capturer un plus grand nombre d'algorithmes. Après avoir mentionné différentes extensions des quasi-interprétations à des langages de programmation réactifs, bytecode ou d'ordre supérieur, nous introduisons la sup-interprétation. Cette notion généralise la quasi-interprétation et est utilisée dans des critères de contrôle des ressources afin d'étudier la complexité d'un plus grand nombre d'algorithmes dont des algorithmes sur des données infinies ou des algorithmes de type diviser pour régner. Nous combinons cette notion à différents critères de terminaison comme les ordres RPO, les paires de dépendance ou le size-change principle et nous la comparons à la notion de quasi-interprétation. En outre, après avoir caractérisé des petites classes de complexité parallèles, nous donnons quelques heuristiques permettant de synthétiser des sup-interprétations sans la propriété sous-terme, c'est à dire des sup-interprétations qui ne sont pas des quasi-interprétations. Enfin, dans un dernier chapitre, nous adaptons les sup-interprétations à des langages orientés-objet, obtenant ainsi différents critères pour contrôler les ressources d'un programme objet et de ses méthodes
There are several approaches developed by the Implicit Computational Complexity (ICC) community which try to analyze and control program resources. In this document, we focus our study on resource control with the help of semantic interpretations. After introducing the notion of quasi-interpretation together with its distinct properties and characterizations, we show the results obtained in the study of such a tool: we study the synthesis problem, which consists in finding a quasi-interpretation for a given program, and we tackle the issue of quasi-interpretation modularity. Modularity makes it possible to decrease the complexity of the synthesis procedure and to capture more algorithms. We present several extensions of quasi-interpretations to reactive programming, bytecode verification and higher-order programming. Afterwards, we introduce the notion of sup-interpretation. This notion strictly generalizes that of quasi-interpretation and is used in distinct criteria in order to control the resources of more algorithms, including algorithms over infinite data and algorithms using a divide-and-conquer strategy. We combine sup-interpretations with distinct termination criteria, such as RPO orderings, dependency pairs or the size-change principle, and we compare them to the notion of quasi-interpretation. Using the notion of sup-interpretation, we characterize small parallel complexity classes. We provide some heuristics for sup-interpretation synthesis: we manage to synthesize sup-interpretations without the subterm property, that is, sup-interpretations which are not quasi-interpretations. Finally, we extend sup-interpretations to object-oriented programs, thus obtaining distinct criteria for resource control of object-oriented programs and their methods.
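To give a flavour of the central object, here is a small textbook-style illustration of a quasi-interpretation (standard notation, not an example taken from the thesis itself). For the rewrite rules defining addition on unary integers,

\[
\mathit{add}(0,y) \to y, \qquad \mathit{add}(s(x),y) \to s(\mathit{add}(x,y)),
\]
\[
[0] = 0, \qquad [s](X) = X + 1, \qquad [\mathit{add}](X,Y) = X + Y.
\]

Each assigned function is monotone and has the subterm property ($[\mathit{add}](X,Y) \ge X$ and $\ge Y$), and for every rule the interpretation of the left-hand side dominates that of the right-hand side ($[\mathit{add}(s(x),y)] = X+Y+1 \ge X+Y+1 = [s(\mathit{add}(x,y))]$); since the assignment is polynomially bounded, any value computed by $\mathit{add}$ is polynomially bounded in the size of its arguments.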
APA, Harvard, Vancouver, ISO, and other styles
5

Velasquez, Rafael. "The Implicit Function Theorem." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-151933.

Full text
Abstract:
In this essay we present an introduction to real analysis, with the purpose of proving the Implicit Function Theorem. Our proof relies on other well-known theorems in set theory and real analysis, such as the Heine-Borel Covering Theorem and the Inverse Function Theorem.
I denna uppsats ger vi en introduktion till reell analys, med syftet att bevisa den implicita funktionssatsen. Vårt bevis bygger på andra välkända satser i mängdteori och reell analys som Heine-Borels övertäckningssats och inversa funktionssatsen.
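For orientation, the statement at the heart of the essay can be summarised as follows (a standard formulation of the theorem, paraphrased here rather than quoted from the thesis):

\[
F \in C^1(\mathbb{R}^n \times \mathbb{R}^m,\ \mathbb{R}^m), \quad F(a,b)=0, \quad D_y F(a,b)\ \text{invertible}
\ \Longrightarrow\ \exists\, g \in C^1 \ \text{near } a \ \text{with } g(a)=b \ \text{and } F(x,g(x))=0,
\]
\[
Dg(x) = -\big[D_y F\big(x, g(x)\big)\big]^{-1}\, D_x F\big(x, g(x)\big).
\]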
APA, Harvard, Vancouver, ISO, and other styles
6

Bobadilla, Guadalupe Ulises 1959. "Analise dinamica da interação solo-estrutura para estruturas superficiais utilizando a transformada implicita de Fourier (ImFT)." [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/257862.

Full text
Abstract:
Advisors: Aloisio Ernesto Assan, Persio Leister de Almeida Barros
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Civil, Arquitetura e Urbanismo
Resumo: As características que determinam o comportamento de uma estrutura sob carregamento dinâmico são as massas dos vários elementos, a rigidez dos seus membros e a dissipação de energia. Para avaliar corretamente a resposta dinâmica de uma estrutura levando em conta os efeitos da interação, é necessário incorporar as propriedades dinâmicas do solo dentro da formulação matemática do modelo físico adotado. Referido à superestrutura, os efeitos da interação alteram a resposta estrutural final, devido à inter-relação dinâmica entre o movimento do solo e o movimento da base de fundação. Conseqüentemente, se primeiro se avalia o movimento da base de fundação [produto da interação solo-estrutura (SSI)], a resposta estrutural final poderá ser resolvida depois via análise modal da superestrutura. Esta conceituação é utilizada no presente trabalho. Aqui, todo o processo de análise é feito no domínio da freqüência. A resposta estrutural é avaliada através da chamada Transformada Implícita de Fourier (ImFT), implementando-se para isto um algoritmo computacional que avalia a resposta dinâmica utilizando a ImFT eficientemente. A ImFT é uma avaliação racional das matrizes envolvendo as transformadas discretas de Fourier (DFT), para num mesmo processo matricial achar a resposta dinâmica estrutural diretamente no domínio do tempo. Correntemente, para a análise no domínio da freqüência tem-se utilizado a FFT (Fast Fourier Transform); embora a FFT seja computacionalmente eficiente, apresenta-se aqui a ImFT, um outro processo computacional alternativo à FFT e bastante eficiente para certos tipos de carregamento tais como uma excitação sísmica, que é o carregamento utilizado nesta pesquisa
Abstract: The characteristics that determine the behavior of a structure under dynamic loading are the masses of its various elements, the rigidity of its members and the dissipation of energy. To properly evaluate the dynamic response of a structure taking the effects of the interaction into account, it is necessary to incorporate the dynamic properties of the soil into the mathematical formulation of the adopted physical model. With respect to the superstructure, the interaction effects modify the final structural response due to the dynamic interrelationship between the soil motion and the motion of the foundation base. Consequently, if the motion of the foundation base [the product of soil-structure interaction (SSI)] is evaluated first, the final structural response can then be obtained by modal analysis of the superstructure. This concept is used in this work, in which the whole analysis is carried out in the frequency domain. The structural response is evaluated by the so-called Implicit Fourier Transform (ImFT), for which a computational algorithm that evaluates the structural dynamic response efficiently was implemented. The ImFT is a rational evaluation of the matrices involved in the Discrete Fourier Transform (DFT), so that a single matrix process yields the structural dynamic response directly in the time domain. The FFT (Fast Fourier Transform) has commonly been used for frequency-domain analysis; although the FFT is computationally efficient, the ImFT presented here is an alternative computational procedure that is quite efficient for certain types of loading, such as the seismic excitation used in this study.
Doctorate
Structures
Doctor in Civil Engineering
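To make the frequency-domain workflow described in the abstract above concrete, the sketch below computes the response of a single-degree-of-freedom oscillator with the standard FFT; the system parameters and the pulse load are assumptions for illustration, and the thesis's ImFT matrix formulation is not reproduced here.

    import numpy as np

    # Single-degree-of-freedom system: m*u'' + c*u' + k*u = p(t)
    m, c, k = 1.0, 0.05, 10.0            # assumed mass, damping and stiffness
    dt, n = 0.01, 4096                   # time step and number of samples
    t = np.arange(n) * dt
    p = np.exp(-((t - 5.0) / 0.5) ** 2)  # assumed pulse load standing in for a seismic record

    P = np.fft.rfft(p)                             # load spectrum
    w = 2.0 * np.pi * np.fft.rfftfreq(n, dt)       # angular frequencies
    H = 1.0 / (k - m * w**2 + 1j * c * w)          # complex frequency-response function
    u = np.fft.irfft(H * P, n)                     # displacement history back in the time domain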
APA, Harvard, Vancouver, ISO, and other styles
7

Smith, Matthew Scott. "Implicit Affinity Networks." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/1112.

Full text
Abstract:
Although they clearly exist, affinities among individuals are not all easily identified. Yet, they offer unique opportunities to discover new social networks, strengthen ties among individuals, and provide recommendations. We propose the idea of Implicit Affinity Networks (IANs) to build, visualize, and track affinities among groups of individuals. IANs are simple, interactive graphical representations that users may navigate to uncover interesting patterns. This thesis describes a system supporting the construction of IANs and evaluates it in the context of family history and online communities.
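As a rough sketch of the underlying idea (building edges from overlapping member attributes; the toy profiles, the Jaccard measure and the threshold below are assumptions for illustration, not details of the thesis's system):

    from itertools import combinations

    # Assumed toy profiles: person -> set of declared interests
    profiles = {
        "ana":   {"genealogy", "photography", "hiking"},
        "bruno": {"genealogy", "hiking", "chess"},
        "clara": {"chess", "painting"},
    }

    def jaccard(a, b):
        """Overlap of two attribute sets, between 0 and 1."""
        return len(a & b) / len(a | b)

    # An implicit affinity edge whenever the overlap of interests is strong enough
    threshold = 0.3
    edges = [(p, q, round(jaccard(profiles[p], profiles[q]), 2))
             for p, q in combinations(profiles, 2)
             if jaccard(profiles[p], profiles[q]) >= threshold]
    print(edges)   # [('ana', 'bruno', 0.5)]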
APA, Harvard, Vancouver, ISO, and other styles
8

Mascali, Loriana Grazia Vanessa. "Caratterizzazione della funzione e del comportamento dell'alfa cellula pancreatica: analisi dei meccanismi fisiopatologici implicati nell'insorgenza del diabete." Doctoral thesis, Università di Catania, 2013. http://hdl.handle.net/10761/1459.

Full text
Abstract:
In diabetes, pancreatic alpha cells produce glucagon inappropriately, without being restrained by the inhibitory effect of insulin. The aim of the study was to analyse the function of the alpha cell in terms of glucagon secretion after a physiological stimulus, and the response of these cells to insulin, the physiological inhibitor of secretion, as well as to identify the possible harmful role of these same stimulating agents when they are chronically present at high levels. We also analysed the cellular response to the adverse conditions that occur in diabetes (exposure to FFA, cytokines, insulin-receptor dysfunction) and possible therapeutic strategies. Data from our group have shown that lipotoxicity could be the cause of the altered intracellular insulin signal in alpha cells. Our data have also highlighted, for the first time, the molecular basis of the greater resistance and the different response of alpha cells compared with pancreatic beta cells in terms of cytokine-induced apoptosis. Moreover, by obtaining alpha cells lacking the insulin receptor, we were able to analyse the specific contribution of intracellular insulin signalling both to the secretory and to the proliferative function of these cells, and to understand the molecular mechanisms that allow the alpha cell to compensate, under physiological conditions, for a non-functioning insulin receptor. We also analysed the action of incretin hormones and demonstrated that alpha cells are able to produce GLP1 and that this GLP1 production is induced by chronic exposure to GLP1 itself. This aspect, of considerable interest for the pathogenesis of diabetes, opens up new perspectives: locally produced GLP1 could serve the islet not only for secretory functions but also for trophic and survival functions. The alpha cell could therefore be added to the list of tissues of primary interest, making the relationship between insulin resistance, impaired pancreatic islet function and pharmacological therapy even more complex.
APA, Harvard, Vancouver, ISO, and other styles
9

HUYNH, NGOC MAI MONICA. "Newton-Krylov Dual-Primal Methods for Implicit Time Discretizations in Cardiac Electrophysiology." Doctoral thesis, Università degli studi di Pavia, 2021. http://hdl.handle.net/11571/1447636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kermiche, Lamya. "Dynamique de la surface de volatilité implicite." Grenoble 2, 2007. http://www.theses.fr/2007GRE21036.

Full text
Abstract:
La popularité de la formule de Black et Scholes ne s'est jamais démentie, malgré les écarts constatés entre la réalité et certaines de ses hypothèses. Les modèles de marché ont été proposés pour enrichir ce modèle de base, par la modélisation de la volatilité implicite. L'objet de cette recherche est l'étude empirique de la dynamique de la surface de volatilité implicite. Après avoir étudié chacune des dimensions de la surface séparément, nous avons voulu tenir compte des interactions entre ces dernières. Pour cela, nous avons utilisé une forme fonctionnelle de l'Analyse en Composantes Principales, basée sur une décomposition de Karhunen-Loève. Nous avons ainsi isolé et analysé les principaux facteurs de chocs influençant la surface de volatilité implicite. Nos résultats indiquent un comportement différent des volatilités courtes et longues. En outre, l'étude des séries chronologiques des facteurs obtenus indique que ceux-ci sont bien représentés par des processus à sauts, en particulier le premier facteur, qui représente la variation globale de la surface de volatilité implicite. Nous avons ensuite analysé le contenu informatif de la surface de volatilité implicite, par l'estimation et l'étude des courbes de densité risque neutres. Nous avons ainsi retrouvé les mêmes phénomènes de sauts, dans l'évolution des anticipations des investisseurs. Les applications des modèles proposés sont nombreuses, en particulier pour la gestion du risque en Véga d'un portefeuille d'options, ou encore pour l'évaluation et la couverture d'options exotiques ou de produits dérivés sur la volatilité
The Black and Scholes formula is very popular among market practitioners, despite the differences between reality and its hypotheses. Market models have been proposed to extend this basic model by modelling implied volatility. The aim of this research is the empirical study of the dynamics of the implied volatility surface. After studying each dimension of the surface separately, we incorporated the interactions between them. To do so, we used a functional form of Principal Component Analysis, based on a Karhunen-Loève decomposition. We isolate and study the most important shock factors driving the implied volatility surface. Our results suggest different behaviour for short-term and long-term volatilities. Studying the time series of the obtained factors, we show that these are well represented by jump processes, particularly the first factor, which represents the global variation of the implied volatility surface. We then analyse the informational content of the implied volatility surface, by estimating and studying risk-neutral densities. We find that the same jump phenomena are present in changes in investors' expectations. There are many applications of the proposed models, particularly for Vega hedging of option portfolios, and for the valuation and risk management of exotic options and volatility derivatives.
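A minimal numerical sketch of the factor-extraction step is given below; the surface snapshots are random placeholders on an assumed moneyness-maturity grid, whereas the thesis works with a functional Karhunen-Loève decomposition of market-implied surfaces.

    import numpy as np

    rng = np.random.default_rng(0)
    # Assumed toy data: 250 daily snapshots of an implied-volatility surface
    # discretised on a 10 (moneyness) x 8 (maturity) grid, flattened to vectors.
    snapshots = 0.2 + 0.02 * rng.standard_normal((250, 10 * 8))

    mean_surface = snapshots.mean(axis=0)
    centred = snapshots - mean_surface

    # Principal components via SVD: rows of Vt are the eigenmodes of the surface,
    # columns of U * s are the time series of factor loadings studied in the thesis.
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    print(explained[:3])   # share of variance carried by the first three factors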
APA, Harvard, Vancouver, ISO, and other styles
11

Fountain, David Wilkes. "Implicit systems : orthogonal functions analysis and geometry." Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/15750.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Chang, Jen-Chien Jack. "Implicit solid modeling using interval methods /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/10690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Savarino, Simona. "Proposta di traduzione e analisi di parti del libro "Per dieci minuti" di Chiara Gamberale." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/7187/.

Full text
Abstract:
This thesis offers a translation of selected passages from the book "Per dieci minuti" by Chiara Gamberale. The novel tells the story of Chiara, a young woman who, during a troubled period of her life in which she feels she has lost all her bearings, decides to try a therapy-game: every day, for ten minutes, she must do something new that she has never tried before. This therapeutic treatment is inspired by the theory of the six exercises of Rudolf Steiner, an Austrian pedagogue who describes the six fundamental steps for regaining inner balance. On the one hand, this thesis aims to underline the importance of devoting attention to the translation of a genre that seems to be increasingly successful nowadays: literature on personal development and well-being. On the other hand, it presents an analysis of the history of translation, especially as regards the translation of cultural references, through a comparison between the "normalising" and the "exoticising" approaches. It also addresses the thorny, because complex and fascinating, subject of the "untranslatables", that is, cultural implicits: words for which it is impossible to find a direct equivalent in other languages, since they describe the specific habits of a given community or are the product of a culture's specific and unique vision of the world. Through this translation I myself had to face a whole series of translation challenges, linked above all to the cultural context and to the writer's style: for example, I translated parts in dialect and passages in rhyme, and I even had to deliberately reproduce grammatical errors in order to match the linguistic register of certain characters.
APA, Harvard, Vancouver, ISO, and other styles
14

Pischedda, Francesca. "La violenza contro le donne e il sessismo implicito nel discorso giornalistico scritto. Analisi di due micro-corpora in lingua italiana e francese." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/5895/.

Full text
Abstract:
The thesis is organised in four parts. The first, feminist in outlook, offers an overview of femicide as a social phenomenon and of the related international legal situation. The second deals in general with the quality press, the medium chosen for the linguistic analysis. The third part presents a micro-corpus of Italian press coverage on the theme of femicide, and the fourth a micro-corpus of French press coverage of the "Affaire DSK", both accompanied by an analysis of their lexical and discursive components (Analyse du discours). It is a comparative study whose results have made it possible to highlight and demonstrate how the Italian and French quality press tend to implicitly convey a sexist, stereotyped and discriminatory image of femicide and of the victim of violence.
APA, Harvard, Vancouver, ISO, and other styles
15

Sengers, Arnaud. "Schémas semi-implicites et de diffusion-redistanciation pour la dynamique des globules rouges." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM032/document.

Full text
Abstract:
Dans ce travail, nous nous sommes intéressés à la mise en place de schémas semi-implicites pour l’amélioration des simulations numériques du déplacement d’un globule rouge dans le sang. Nous considérons la méthode levelset, où l’interface fluide-structure est représentée comme la ligne de niveau 0 d’une fonction auxiliaire et le couplage est effectué en ajoutant un terme source dans l’équation fluide. Le principe de ces schémas semi-implicites est de prédire la position et la forme de la structure par une équation de la chaleur et d’utiliser cette prédiction pour obtenir un terme de force dans l’équation fluide plus précis. Ce type de schémas semi-implicites a d’abord été mis en place dans le cadre d’un système diphasique ou d’une membrane élastique immergée afin d’utiliser un plus grand pas de temps que pour un couplage explicite. Cela a permis d’améliorer les conditions sur le pas de temps et ainsi augmenter l’efficacité globale de l’algorithme complet par rapport à un schéma explicite classique. Pour étendre ce raisonnement au cas d’un globule rouge, nous proposons un algorithme pour simuler le flot de Willmore en dimension 2 et 3. Notre méthode s’inspire des méthodes de mouvements d’interface générés par diffusion et nous arrivons à obtenir un flot non linéaire d’ordre 4 uniquement avec des résolutions d’équations de la chaleur. Pour assurer la conservation du volume et de l’aire d’un globule rouge, nous proposons ensuite une méthode de correction qui déplace légèrement l’interface afin de recoller aux contraintes. La combinaison des deux étapes précédentes décrit le comportement d’un globule rouge laissé au repos. Nous validons cette méthode en obtenant une forme d’équilibre d’un globule rouge. Nous proposons enfin un schéma semi-implicite dans le cas d’un globule rouge qui ouvre la voie vers l’utilisation de cette méthode comme prédicteur de l’algorithme de couplage complet.
In this work, we propose new semi-implicit schemes to improve the numerical simulation of the motion of an immersed red blood cell. We consider the level-set method, where the interface is described as the 0 isoline of an auxiliary function and the fluid-structure coupling is done by adding a source term to the fluid equation. The idea of these semi-implicit schemes is to predict the position and the shape of the structure through a heat equation and to use this prediction to improve the accuracy of the source term in the fluid equation. This type of semi-implicit scheme was first implemented in the case of a multiphase flow and of an immersed elastic membrane, and showed better temporal stability than an explicit scheme, resulting in improved global efficiency. In order to extend this method to the case of a red blood cell, we propose an algorithm to compute the Willmore flow in dimensions 2 and 3. In the spirit of diffusion-generated motion methods, our method simulates a nonlinear fourth-order flow by only solving heat equations. To ensure the conservation of the volume and area of the vesicle, we add to the method a correction step that slightly moves the interface so that the constraints are recovered. The combination of these two steps allows us to compute the behavior of a red blood cell left at rest. We validate this method by obtaining the convergence to an equilibrium shape in both 2D and 3D. Finally, we introduce a semi-implicit scheme in the case of a red blood cell that shows how this method can be used as a prediction step in the complete coupling algorithm.
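As an illustration of the elementary building block mentioned above, here is one backward-Euler (implicit) heat step on a periodic 1-D grid; the grid size, diffusivity and initial profile are assumptions for illustration, while the thesis applies such steps to level-set fields in 2-D and 3-D.

    import numpy as np

    # One backward-Euler (implicit) step of u_t = nu * u_xx on a periodic 1-D grid:
    # (I - dt*nu*L) u^{n+1} = u^n, with L the discrete Laplacian.
    n, nu, dt = 200, 1.0, 1e-3            # assumed grid size, diffusivity and time step
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    h = x[1] - x[0]
    u = np.sign(np.sin(2 * np.pi * x))    # assumed initial, level-set-like profile

    # Periodic second-difference matrix
    L = (np.roll(np.eye(n), 1, axis=1) - 2 * np.eye(n) + np.roll(np.eye(n), -1, axis=1)) / h**2
    u_new = np.linalg.solve(np.eye(n) - dt * nu * L, u)   # stable even though dt*nu/h^2 >> 1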
APA, Harvard, Vancouver, ISO, and other styles
16

Ciuca, Diana M. "Reducing Subjectivity: Meditation and Implicit Bias." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/cmc_theses/1213.

Full text
Abstract:
Implicit association of racial stereotypes is brought about by social conditioning (Greenwald & Krieger, 2006). This conditioning can be explained by attractor networks (Sharp, 2011). Reducing implicit bias through meditation can show the effectiveness of reducing the rigidity of attractor networks, thereby reducing subjectivity. Mindfulness meditation has been shown to reduce bias after a single guided session conducted before performing an Implicit Association Test (Lueke & Gibson, 2015). Attachment to socially conditioned racial bias should become less prevalent through practicing meditation over time. An experimental model is proposed to test this claim, along with a reconceptualization of consciousness based in meditative practice.
APA, Harvard, Vancouver, ISO, and other styles
17

Matsangos, Apostolos. "La conversation ordinaire : son lexique et ses implicites." Aix-Marseille 1, 2003. http://www.theses.fr/2003AIX10049.

Full text
Abstract:
This study is devoted to how conversation works and proposes concepts and tools that we put to the test on a Greek and a French corpus. We present the concept of conversation within a multidimensional approach to dialogue that treats the lexical object as heterogeneous. We deal with Greek and French slang and adopt a conversational perspective that views the various social styles as individual styles. We then turn to implicit meanings, with an emphasis on social competence in its linguistic dimension, that of playful dialogue. The analysis of the corpus reveals a way of using language that grounds a more "situated", more interpretative definition of conversation, and opens up several lines of work: the conversational approach, which takes into account everything the "code" provides us with and that is taken up and modified by singular individuals, seems relevant for bringing out linguistic creativity; and proposing an "ordinary" view of language opens onto the reality of a "distributed", "situated" subject who, endowed with social competence, re-adapts it according to his or her motivation to play with the language.
APA, Harvard, Vancouver, ISO, and other styles
18

Ayelén, Biscarra María, Karina Conde, and Mariana Cremonte. "Trends in the study of implicit alcohol related cognition." Pontificia Universidad Católica del Perú, 2017. http://repositorio.pucp.edu.pe/index/handle/123456789/101166.

Full text
Abstract:
According to the dual process model, the interaction between explicit (controlled) and implicit (automatic) cognitions would allow the understanding of irrational actions like addictive behaviors. This model has gained great popularity among addiction researchers, leading to an exponential growth in publications on implicit alcohol related cognition (IAC). Hence, the goal of this article is to identify trends in the study of IAC by means of a bibliometric and content analysis of the empirical studies published up to May, 2013. Throughout this paper, the studied topics of IAC were characterized, the most prolific countries, authors and journals were recognized, the most cited publications were detected and the most employed methods were identified.
De acuerdo al modelo del doble procesamiento, la interacción entre cogniciones explícitas (controladas) e implícitas (automáticas) permitiría entender acciones irracionales, tales como los comportamientos adictivos. Este modelo ha ganado mucha popularidad entre quienes investigan el consumo de sustancias, produciéndose un crecimiento exponencial de las publicaciones sobre Cogniciones Implícitas hacia el Alcohol (CIA). Por ello, el objetivo de este artículo es describir las tendencias en el estudio de la CIA mediante un análisis bibliométrico y de contenido de los estudios empíricos publicados hasta mayo del 2013. A lo largo de este trabajo se caracterizan las temáticas de las CIA encontradas y se identifican los países, autores y revistas más productivas, las publicaciones más citadas y los métodos más utilizados.
De acordo com o modelo de processamento dual, a interação entre cognições explícitas (controladas) e implícitas (automáticas) audaría as ações irracionais, tais como os comportamentos aditivos. Este modelo ganhou muita popularidade entre os pesquisadores do consumo de substâncias, produzindo um crescimento exponencial de as publicações sobre cognições implícitas relacionadas com o álcool (CIA). Portanto, o objetivo deste artigo é descrever as tendências no estudo da CIA através de um análise bibliométrico e de conteúdo de estudos empíricos publicados até maio de 2013. Ao longo deste artigo são caracterizados os temas da CIA encontrados e são identificados países, autores e revistas mais produtivos, as publicações mais citadas e os métodos mais utilizados.
APA, Harvard, Vancouver, ISO, and other styles
19

Vigani, G. "Regolazione dei meccanismi di acquisizione e di omeostasi del ferro in piante a strategia I : analisi biochimica e molecolare degli aspetti metabolici implicati." Doctoral thesis, Università degli Studi di Milano, 2008. http://hdl.handle.net/2434/49106.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Siu, Mei-ling Jacqueline. "Would students' causal attributions and implicit theories of intelligence be mediated by teachers' feedback on their performance." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B29791261.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

MACCA, Emanuele. "Shock-Capturing methods: Well-Balanced Approximate Taylor and Semi-Implicit schemes." Doctoral thesis, Università degli Studi di Palermo, 2022. https://hdl.handle.net/10447/556029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Boaretto, Francesca. "Individuazione e caratterizzazione di geni implicati nelle paraparesi spastiche ereditarie." Doctoral thesis, Università degli studi di Padova, 2009. http://hdl.handle.net/11577/3426096.

Full text
Abstract:
Hereditary spastic paraplegias (HSP) represent a group of single-gene disorders characterised by degeneration of the corticospinal tract axons, leading to slowly progressive lower extremity spasticity and weakness. HSP is termed 'complicated' if additional symptoms such as dementia, extrapyramidal disturbance or peripheral neuropathy occur. So far 39 different chromosomal loci have been identified for HSP, which is inherited as an autosomal dominant, autosomal recessive or X-linked trait. Mutations in 18 genes involved in intracellular trafficking, axonal transport and mitochondrial function have been identified in HSP patients. Such a scenario makes it very difficult to decide which HSP form, and consequently which gene, should be investigated in an affected subject. During the last three years many isolated patients and some families with a complicated HSP phenotype have been studied in our lab in order to identify the genomic locus and/or the gene involved in the disease. We first investigated a subgroup of ten subjects with a specific severe phenotype characterized by the following major features: hydrocephalus, mental retardation, spasticity of the legs, and adducted thumbs (CRASH syndrome). Such patients were investigated by direct sequencing for mutations in the coding regions of L1CAM (neural cell adhesion molecule L1), a gene involved in CRASH syndrome. Five novel mutations and one already known mutation were detected in six unrelated patients. The large majority of the identified mutations were localized in the extracellular domain, which plays a primary role in homo- and heterophilic protein-protein interactions. In the patients without causative mutations in the L1CAM coding region, a duplication analysis using Multiplex Ligation-dependent Probe Amplification and Real-Time PCR was performed. None of the analyzed patients showed such a rearrangement. In the second part of the present work, three families affected by a complicated form of recessive HSP were investigated. In the first one, characterized by HSP with mental impairment, a linkage analysis for the known HSP loci was performed. The candidate genes within the most reliable positive region were directly sequenced without any significant result. Further studies are needed to identify the molecular event responsible for HSP in this family. In the second family, in which two siblings were affected by HSP with suspected thin corpus callosum, haplotype analysis was performed within the region 15q21.1 between the markers D15S994 and D15S978, where the SPG11 gene is located. As the two affected members shared the same genotype, the SPG11 gene was investigated by direct sequencing. Apparently both subjects carried a homozygous 3 bp insertion at the splice site of exon 39 (c.7000-3_-4insAGG). However, further investigations demonstrated that they were compound heterozygous, with the previously described insertion on one allele and a 2.6 kb intragenic deletion between introns 36 and 39 (c.6754_7152del1397) on the other. The third family, with eleven brothers born from consanguineous parents, had already been studied by genome-wide search and fine mapping in a previous project. Such analysis allowed the identification of a region cosegregating with the disease, located on chromosome 21q11.2-q21.1. Only the three subjects affected by HSP complicated by mild mental retardation and distal motor neuropathy share the same genotype at this locus.
Candidate gene analysis detected a single nucleotide substitution in the 3'UTR of the STCH gene (heat shock protein 70 family member 13) that cosegregates with HSP. Such a variant was never detected in 300 healthy subjects from the same population. In order to evaluate whether this variant might affect transcription as well as RNA stability, the mouse embryonic spinal cord-neuroblastoma cell line (NSC34) was transfected with a vector expressing the luciferase gene fused with the wild-type or mutant STCH 3'UTR. Preliminary results suggest that the STCH 3'UTR single nucleotide substitution significantly affects luciferase activity. In order to understand whether this substitution could lead to the activation of a cryptic miRNA target site, a selection of putative miRNAs able to interact with the mutated STCH 3'UTR was performed using in silico and in vitro approaches. This analysis suggests that hsa-miR-134, hsa-miR-194, hsa-miR-637, hsa-miR-758 and hsa-miR-924 might be involved in the pathogenic role of this variant. However, further functional studies are needed before final conclusions can be drawn.
Le paraparesi spastiche ereditarie (HSP) sono un gruppo di disordini neurodegenerativi caratterizzate da progressiva spasticità e debolezza degli arti inferiori. Nelle forme complicate si possono osservare altre manifestazioni neurologiche o non neurologiche associate alla spasticità. I dati presenti in letteratura sulle HSP ad oggi riportano 39 loci mappati su diversi cromosomi e sono descritte sia forme a trasmissione autosomica dominante, che recessiva, che X-linked. Nei pazienti affetti da HSP sono state trovate mutazioni in 18 diversi geni coinvolti nel trafficking intracellulare, nel trasporto assonale e in anomalie nel funzionamento dei mitocondri. In un tale scenario in cui la quantità di informazioni raccolte sono molte, non è facile scegliere quale strada intraprendere per determinare da quale forma di HSP un paziente risulti affetto, né tanto meno accrescere, con dati di rilievo, le conoscenze generali. In questi tre anni sono stati studiati molti casi isolati ed alcuni casi familiari che presentavano un fenotipo di HSP complicata, con l’intenzione di individuare il locus e/o il gene coinvolto nei diversi pazienti. In primo luogo è stato analizzato un campione di 10 soggetti con uno grave fenotipo caratterizzato da spasticità agli arti inferiori, idrocefalo, ritardo mentale e pollici addotti (sindrome di CRASH). In questi individui, mediante sequenziamento diretto sono state studiate le regioni codificanti del gene L1CAM, associato alla sindrome di CRASH. Cinque nuove mutazioni sono state trovate in altrettanti pazienti non correlati, più una descritta in precedenza. La maggior parte delle mutazioni identificate in questo studio sono localizzate nella porzione extracellulare della proteina matura che svolge un ruolo primario nelle iterazioni omo- ed etero-filiche proteina-proteina. Nei pazienti privi di mutazioni puntiformi nelle regioni codificanti è stata condotta un’analisi di duplicazione del gene L1CAM mediante differenti metodiche (Multiplex Ligation-dependent Probe Amplification e Real Time-PCR). Nessuno degli individui analizzati si è dimostrato essere portatore di tali riarrangiamenti. Nella seconda parte di questo lavoro si è proceduto con un’indagine molecolare su 3 famiglie affette da paraparesi spastica complicata a trasmissione autosomica recessiva. Nella prima famiglia (Fam. 1) con HSP associata a deficit cognitivo è stata eseguita un’indagine preliminare di esclusione dei loci coinvolti nella HSP. Sono stati valutati geni candidati nella regione di linkage più promettente, ma non sono state trovate mutazioni causative. Ulteriori analisi saranno necessarie per comprendere quale forma di paraparesi spastica sia responsabile della patologia negli affetti di questa famiglia. Nella seconda famiglia (Fam. 2) a cui è stata diagnosticata, in due fratelli, una forma di HSP complicata da sospetto assottigliamento del corpo calloso si è proceduto con la caratterizzazione della regione 15q21.1 dove mappa il geneSPG11 tra i marcatori D15S994 e D15S978. Data la condivisione del genotipo negli individui affetti si è proceduto con il sequenziamento del gene SPG11. Apparentemente i due fratelli affetti risultavano omozigoti per un’inserzione di tre paia di basi a livello del sito di splicing dell’esone 39 (c.7000-3_-4insAGG; NM_025137). Ulteriori analisi hanno dimostrato che sono invece portatori di due mutazioni diverse, la mutazione descritta inizialmente ed una delezione di 2,76 kb tra gli introni 36 e 39 (c.6754_7152del1397). 
Per il terzo nucleo familiare, 11 fratelli nati da genitori consanguinei (Fam. 3), in passato erano già state eseguite delle analisi (genome-wide search e fine mapping) per cui era stata individuata una regione di omozigosità sul cromosoma 21. Solo gli individui affetti, tre fratelli con HSP complicata da un lieve ritardo mentale e neuropatia periferica, in questa regione condividevano l’assetto genotipico. L’analisi dei geni candidati ha portato all’individuazione di una sostituzione di un singolo nucleotide (T?C) sul 3’UTR del gene STCH (Heat shock protein 70 family member 13) localizzato in posizione 21q11.2 che cosegrega con la patologia e non risulta presente in 300 individui sani analizzati. Uno studio funzionale preliminare ha permesso di valutare gli effetti di questa variazione nella linea cellulare murina motoneurone-simile NSC34. Usando un vettore modificato esprimente il gene della luciferasi fuso con il 3’UTR del gene STCH si è potuto osservare una differente attività trascrizionale evidenziata come minor attività della luciferasi in presenza dell’allele mutato. Tenuto conto che tale sostituzione nucleotidica potrebbe attivare un sito target di appaiamento criptico per un microRNA è stata eseguita un’analisi in silico ed in vitro per selezionare i microRNA che potrebbero interagire con il trascritto di tale gene. Al momento appaiono candidati i miRNA: hsa-miR-134, hsa-miR-194, hsa-miR-637, hsa-miR-758 e hsa-miR-924.
APA, Harvard, Vancouver, ISO, and other styles
23

Kaloč, Jiří. "Hodnocení vlivu znečištění ovzduší na cenu bydlení v Praze." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-149893.

Full text
Abstract:
This thesis addresses the economic valuation of environmental goods, namely estimating willingness to pay using the hedonic price model. The theoretical part describes the impact air pollution has on the environment and human health. Methods of valuation are also discussed, with special attention to the hedonic approach. The goal of this thesis is to evaluate the impact of air pollution by applying the hedonic model to the real-estate market in Prague, thus giving the authorities a basis for their air-pollution management.
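In the hedonic framework the implicit price of air quality is read off a regression of house prices on their characteristics; a common semi-logarithmic specification (the regressors below are illustrative assumptions, not the thesis's estimated model) is

\[
\ln P_i = \beta_0 + \beta_1\,\mathrm{area}_i + \beta_2\,\mathrm{rooms}_i + \gamma\,\mathrm{PM}_{10,i} + \varepsilon_i,
\qquad
\frac{\partial P_i}{\partial \mathrm{PM}_{10,i}} = \gamma\, P_i,
\]

so the coefficient $\gamma$ times the house price gives the marginal implicit price of (and hence the marginal willingness to pay to avoid) one additional unit of pollution.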
APA, Harvard, Vancouver, ISO, and other styles
24

Renaudeau, Julien. "Continuous formulation of implicit structural modeling discretized with mesh reduction methods." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0075/document.

Full text
Abstract:
La modélisation structurale consiste à approximer les structures géologiques du sous-sol en un modèle numérique afin d'en visualiser la géométrie et d'y effectuer des calculs d'estimation et de prédiction. L'approche implicite de la modélisation structurale utilise des données de terrain interprétées pour construire une fonction volumétrique sur le domaine d'étude qui représente la géologie. Cette fonction doit honorer les observations, interpoler entre ces dernières, et extrapoler dans les zones sous-échantillonnées tout en respectant les concepts géologiques. Les méthodes actuelles portent cette interpolation soit sur les données, soit sur un maillage. Ensuite, le problème de modélisation est posé selon la discrétisation choisie : par krigeage dual sur les points de donnée ou en définissant un critère de rugosité sur les éléments du maillage. Dans cette thèse, nous proposons une formulation continue de la modélisation structurale par méthodes implicites. Cette dernière consiste à minimiser une somme de fonctionnelles arbitraires. Les contraintes de donnée sont imposées avec des fonctionnelles discrètes, et l'interpolation est contrôlée par des fonctionnelles continues. Cette approche permet de (i) développer des liens entre les méthodes existantes, (ii) suggérer de nouvelles discrétisations d'un même problème de modélisation, et (iii) modifier le problème de modélisation pour mieux honorer certains cas géologiques sans dépendre de la discrétisation. Nous portons également une attention particulière à la gestion des discontinuités telles que les failles et les discordances. Les méthodes existantes nécessitent soit la création de zones volumétriques avec des géométries complexes, soit la génération d'un maillage volumétrique dont les éléments sont conformes aux surfaces de discontinuité. Nous montrons, en explorant des méthodes sans maillage locales et des concepts de réduction de maillage, qu'il est possible d'assurer l'interpolation des structures tout en réduisant les contraintes liées à la gestion des discontinuités. Deux discrétisations de notre problème de minimisation sont suggérées : l'une utilise les moindres carrés glissants avec des critères optiques pour la gestion des discontinuités, et l'autre utilise des fonctions issues de la méthode des éléments finis avec le concept de nœuds fantômes pour les discontinuités. Une étude de sensibilité et une comparaison des deux méthodes sont proposées en 2D, ainsi que quelques exemples en 3D. Les méthodes développées dans cette thèse ont un grand impact en termes d'efficacité numérique et de gestion de cas géologiques complexes. Par exemple, il est montré que notre problème de minimisation au sens large apporte plusieurs solutions pour la gestion de cas de plis sous-échantillonnés et de variations d'épaisseur dans les couches stratigraphiques. D'autres applications sont également présentées tels que la modélisation d'enveloppe de sel et la restauration mécanique
Implicit structural modeling consists in approximating geological structures into a numerical model for visualization, estimations, and predictions. It uses numerical data interpreted from the field to construct a volumetric function on the domain of study that represents the geology. The function must fit the observations, interpolate in between, and extrapolate where data are missing while honoring the geological concepts. Current methods support this interpolation either with the data themselves or using a mesh. Then, the modeling problem is posed depending on these discretizations: performing a dual kriging between data points or defining a roughness criterion on the mesh elements. In this thesis, we propose a continuous formulation of implicit structural modeling as a minimization of a sum of generic functionals. The data constraints are enforced by discrete functionals, and the interpolation is controlled by continuous functionals. This approach enables to (i) develop links between the existing methods, (ii) suggest new discretizations of the same modeling problem, and (iii) modify the minimization problem to fit specific geological issues without any dependency on the discretization. Another focus of this thesis is the efficient handling of discontinuities, such as faults and unconformities. Existing methods require either to define volumetric zones with complex geometries, or to mesh volumes with conformal elements to the discontinuity surfaces. We show, by investigating local meshless functions and mesh reduction concepts, that it is possible to reduce the constraints related to the discontinuities while performing the interpolation. Two discretizations of the minimization problem are then suggested: one using the moving least squares functions with optic criteria to handle discontinuities, and the other using the finite element method functions with the concept of ghost nodes for the discontinuities. A sensitivity analysis and a comparison study of both methods are performed in 2D, with some examples in 3D. The developed methods in this thesis prove to have a great impact on computational efficiency and on handling complex geological settings. For instance, it is shown that the minimization problem provides the means to manage under-sampled fold structures and thickness variations in the layers. Other applications are also presented such as salt envelope surface modeling and mechanical restoration
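A 1-D analogue of the generic minimisation described above, discretised on a regular grid, can be sketched as follows; the data points, the second-difference roughness operator and the trade-off weight are illustrative assumptions rather than the discretisations studied in the thesis.

    import numpy as np

    n = 100                                   # grid nodes discretising the model domain

    # Assumed scattered "horizon" data: (node index, scalar field value)
    data = [(10, 0.0), (45, 1.0), (80, 2.0)]

    # Sampling operator S picking the field at the data nodes, and the data vector d
    S = np.zeros((len(data), n))
    d = np.zeros(len(data))
    for row, (idx, val) in enumerate(data):
        S[row, idx] = 1.0
        d[row] = val

    # Second-difference operator: the discrete counterpart of a continuous roughness functional
    D2 = np.diff(np.eye(n), 2, axis=0)

    lam = 1e-2                                # trade-off between honouring data and smoothness
    A = S.T @ S + lam * (D2.T @ D2)
    u = np.linalg.solve(A, S.T @ d)           # implicit scalar field: fits the data, smooth elsewhere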
APA, Harvard, Vancouver, ISO, and other styles
25

SILVA, FILHO Paulo de Barros e. "Static analysis of implicit control flow: resolving Java reflection and Android intents." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/17637.

Full text
Abstract:
FACEPE
Implicit or indirect control flow allows a transfer of control to a procedure without having to call the procedure explicitly in the program. Implicit control flow is a staple design pattern that adds flexibility to system design. However, it is challenging for a static analysis to compute or verify properties about a system that uses implicit control flow. When a static analysis encounters a procedure call, the analysis usually approximates the call’s behavior by a summary, which conservatively generalizes the effects of any target of the call. In previous work, a static analysis that verifies security properties was developed for Android apps, but failed to achieve high precision in the presence of implicit control flow. This work presents static analyses for two types of implicit control flow that frequently appear in Android apps: Java reflection and Android intents. In our analyses, the summary of a method is the method’s signature. Our analyses help to resolve where control flows and what data is passed. This information improves the precision of downstream analyses, which no longer need to make conservative assumptions about implicit control flow, while maintaining the soundness. We have implemented our techniques for Java. We enhanced an existing security analysis with a more precise treatment of reflection and intents. In a case study involving ten real-world Android apps that use both intents and reflection, the precision of the security analysis was increased on average by two orders of magnitude. The precision of two other downstream analyses was also improved.
Fluxo de controle implícito, ou indireto, permite que haja uma transferência de controle para um procedimento sem que esse procedimento seja invocado de forma explícita pelo programa. Fluxo de controle implícito é um padrão de projeto comum e bastante utilizado na prática, que adiciona flexibilidade no design de um sistema. Porém, é um desafio para uma análise estática ter que computar e verificar propriedades sobre um sistema que usa fluxos de controle implícito. Quando uma análise estática encontra uma chamada a uma procedimento, geralmente a análise aproxima o comportamento da chamada de acordo com o sumário do método, generalizando de uma forma conservadora os efeitos da chamada ao procedimento. Em trabalho anterior, uma análise estática de segurança foi desenvolvida para aplicações Android, mas falhou em obter uma alta precisão na presença de fluxos de controle implícito. Este trabalho apresenta uma análise estática para dois tipos de fluxos de controle implícito que aparecem frequentemente em aplicações Android: Java reflection e Android intents. Nas nossas análises, o sumário de um método é a assinatura do método. Nossas análises ajudam a descobrir para onde o controle flui e que dados estão sendo passados. Essa informação melhora a precisão de outras análises estáticas, que não precisam mais tomar medidas conservadoras na presença de fluxo de controle implícito. Nós implementamos a nossa técnica em Java. Nós melhoramos uma análise de segurança existente através de um tratamento mais preciso em casos de reflection e intents. Em um estudo de caso envolvendo dez aplicações Android reais que usam reflection e intents, a precisão da análise de segurança aumentou em duas ordens de magnitude. A precisão de outras duas análises estáticas também foi melhorada.
APA, Harvard, Vancouver, ISO, and other styles
26

Gamber, Edward. "Empirical identification of the risk shifting aspect of labor market implicit contracts." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/50019.

Full text
Abstract:
Much of the recent work in the area of implicit contract theory hypothesizes that firms and workers differ in their attitudes towards risk. The optimal wage and employment contract calls for shifting some of the risk associated with a randomly fluctuating marginal product of labor from the more risk averse party to the less risk averse party. The purpose of this dissertation is to explore the empirical implications of this risk shifting hypothesis. In particular, the following question is addressed: "How can we empirically identify whether risk shifting is occurring in the labor market?” Chapter 2 explores this question in the context of an implicit contract model with nominal variables and a randomly fluctuating price level. Under the usual assumption of risk neutral firms and risk averse workers the implications of the model are refuted by the industry level nominal wage stylized facts. Under the assumption that risk neutral workers insure risk averse firms the model is capable of explaining the stylized facts about the co-movements in nominal wages and employment. Chapter 3 explores this question in the context of a long-term implicit contract model with bankruptcy constraints. It is shown that if risk neutral firms insure risk averse workers then the real wage will respond asymmetrically to permanent and temporary revenue function disturbances. In particular, the real wage will respond more to a given permanent shock than to a temporary shock of the same size. Chapter 4 empirically tests this asymmetric wage response implication. A frequency domain technique is developed for decomposing a measure of revenue function disturbances into its permanent and temporary components and the real wage is regressed on each component. A sample of 12 4-digit SIC code industries are tested. The industry wage responses are estimated separately and as a system of seemingly unrelated regressions. Estimated separately, the results support the asymmetric response implication for 7 of the 12 industries at the .10 level of significance and 6 of the 12 industries at the .05 level. Estimated as a system the joint asymmetric response hypothesis is supported at the .01 level of significance for the 12 industries.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
27

Alahmadi, Dimah. "Recommender systems based on online social networks : an Implicit Social Trust And Sentiment analysis approach." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/recommender-systems-based-on-online-social-networks-an-implicit-social-trust-and-sentiment-analysis-approach(ac03f7e5-4fc0-4c4a-bace-82188823eb84).html.

Full text
Abstract:
Recommender systems (RSs) provide personalised suggestions of information or products relevant to user's needs. RSs are considered as powerful tools that help users to find interesting items matching their own taste. Although RSs have made substantial progress in theory and algorithm development and have achieved many commercial successes, how to utilise the widely available information on Online Social Networks (OSNs) has largely been overlooked. Noticing this gap in existing research on RSs and taking into account a user's selection being greatly influenced by his/her trusted friends and their opinions, this thesis proposes a novel personalised Recommender System framework, so-called Implicit Social Trust and Sentiment (ISTS) based RSs. The main motivation was to overcome the overlooked use of OSNs in Recommender Systems and to utilise the widely available information from such networks. This work also designs solutions to a number of challenges inherent to the RSs domain, such as accuracy, cold-start, diversity and coverage. ISTS improves the existing recommendation approaches by exploring a new source of data from friends' short posts in microbloggings. In the case of new users who have no previous preferences, ISTS maps the suggested recommendations into numerical rating scales by applying the three main components. The first component is measuring the implicit trust between friends based on their intercommunication activities and behaviour. Owing to the need to adapt friends' opinions, the implicit social trust model is designed to include the trusted friends and give them the highest weight of contribution in recommendation encounter. The second component is inferring the sentiment rating to reflect the knowledge behind friends' short posts, so-called micro-reviews. The sentiment behind micro-reviews is extracted using Sentiment Analysis (SA) techniques. To achieve the best sentiment representation, our approach considers the special natural environment in OSNs brief posts. Two Sentiment Analysis methodologies are used: a bag of words method and a probabilistic method. The third ISTS component is identifying the impact degree of friends' sentiments and their level of trust by using machine learning algorithms. Two types of machine learning algorithms are used: classification models and regressions models. The classification models include Naive Bayes, Logistic Regression and Decision Trees. Among the three classification models, Decision Trees show the best Mean absolute error (MAE) at 0.836. Support Vector Regression performed the best among all models at 0.45 of MAE. This thesis also proposes an approach with further improvement over ISTS, namely Hybrid Implicit Social Trust and Sentiment (H-ISTS). The enhanced approach applies improvements by optimising trust parameters to identify the impact of the features (re-tweets and followings/followers list) on recommendation results. Unlike the ISTS which allocates equal weight to trust features, H-ISTS provides different weights to determine the different effects of the two trust features. As a result, we found that H-ISTS improved the MAE to be 0.42 which is based on Support Vector Regression. Further, it increases the number of trust features from two to five features in order to include the influence of these features in rating predictions. The integration of the new approach H-ISTS with a Collaborative Filtering recommender system, in particular memory-based, is investigated next. 
Therefore, existing users with a history of ratings can receive recommendations by fusing their own tastes and their friends' preferences using the two types of memory-based methods: user-based and item-based. H-ISTSitem, the integration of H-ISTS with the item-based method, provides the lowest error at 0.7091. The experiments show that diversity is better achieved using H-ISTSuser, the integration of H-ISTS with the user-based technique. To evaluate the performance of these approaches, two real social datasets are collected from Twitter. To verify the proposed framework, experiments are conducted and the results are compared against the most relevant baselines, which confirms that RSs can be successfully improved using OSNs. These enhancements demonstrate the effectiveness and promise of the proposed approach for RSs.
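A minimal sketch of the general idea behind ISTS (the trust values, sentiment scores and weighting rule below are invented for illustration and are not the thesis' implementation): a cold-start user's rating is predicted as a trust-weighted average of friends' sentiment-derived ratings.

# Toy illustration; trust and sentiment values are assumed, not computed as in ISTS.
def predict_rating(friends):
    """friends: list of dicts with 'trust' in [0, 1] and 'sentiment' in [1, 5]."""
    total_trust = sum(f["trust"] for f in friends)
    if total_trust == 0:
        return None  # no trusted signal available
    return sum(f["trust"] * f["sentiment"] for f in friends) / total_trust

friends = [
    {"trust": 0.9, "sentiment": 4.5},  # frequently interacting friend
    {"trust": 0.2, "sentiment": 2.0},  # weak tie
]
print(predict_rating(friends))  # pulled towards the trusted friend's opinion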
APA, Harvard, Vancouver, ISO, and other styles
28

Morgan, William Edmund. "A fully implicit stochastic model for hydraulic fracturing based on the discontinuous deformation analysis." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53073.

Full text
Abstract:
In recent years, hydraulic fracturing has led to a dramatic increase in the worldwide production of natural gas. In a typical hydraulic fracturing treatment, millions of gallons of water, sand and chemicals are injected into a reservoir to generate fractures in the reservoir that serve as pathways for fluid flow. Recent research has shown that both the effectiveness of fracturing treatments and the productivity of fractured reservoirs can be heavily influenced by the presence of pre-existing natural fracture networks. This work presents a fully implicit hydro-mechanical algorithm for modeling hydraulic fracturing in complex fracture networks using the two-dimensional discontinuous deformation analysis (DDA). Building upon previous studies coupling the DDA to fracture network flow, this work emphasizes various improvements made to stabilize the existing algorithms and facilitate their convergence. Additional emphasis is placed on validation of the model and on extending the model to the stochastic characterization of hydraulic fracturing in naturally fractured systems. To validate the coupled algorithm, the model was tested against two analytical solutions for hydraulic fracturing, one for the growth of a fixed-length fracture subject to constant fluid pressure, and the other for the growth of a viscosity-storage dominated fracture subject to a constant rate of fluid injection. Additionally, the model was used to reproduce the results of a hydraulic fracturing experiment performed using high-viscosity fracturing fluid in a homogeneous medium. Very good agreement was displayed in all cases, suggesting that the algorithm is suitable for simulating hydraulic fracturing in homogeneous media. Next, this work explores the relationship between the maximum tensile stress and Mohr-Coulomb fracture criteria used in the DDA and the critical stress intensity factor criteria from linear elastic fracture mechanics (LEFM). The relationship between the criteria is derived, and the ability of the model to capture the relationship is examined for both Mode I and Mode II fracturing. The model was then used to simulate the LEFM solution for a toughness-storage dominated bi-wing hydraulic fracture. Good agreement was found between the numerical and theoretical results, suggesting that the simpler maximum tensile stress criteria can serve as an acceptable substitute for the more rigorous LEFM criteria in studies of hydraulic fracturing. Finally, this work presents a method for modeling hydraulic fracturing in reservoirs characterized by pre-existing fracture networks. The ability of the algorithm to correctly model the interaction mechanism of intersecting fractures is demonstrated through comparison with experimental results, and the method is extended to the stochastic analysis of hydraulic fracturing in probabilistically characterized reservoirs. Ultimately, the method is applied to a case study of hydraulic fracturing in the Marcellus Shale, and the sensitivity of fracture propagation to variations in rock and fluid parameters is analyzed.
APA, Harvard, Vancouver, ISO, and other styles
29

Griffay, Gérard. "Modélisation thermique globale du procédé de cokéfaction. Résolution numérique par différences finies implicites bidimensionnelles." Aix-Marseille 1, 1988. http://www.theses.fr/1988AIX11145.

Full text
Abstract:
Numerical modelling of heat transfer using a two-dimensional implicit scheme, over a study domain encompassing three adjacent ovens together with two pairs of twinned heating flues. The general energy equation for porous media that is used accounts for the different transfer modes encountered during the coking of coal (conduction, radiation, convection, internal energy sources, evaporation, recondensation).
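A generic sketch of one implicit (backward Euler) heat-conduction step on a uniform 2-D grid follows (Dirichlet walls, constant diffusivity; the oven geometry, coupled transfer modes and source terms of the actual coking model are not reproduced here):

# Minimal 2-D implicit diffusion step: (I - dt*alpha*L) u_new = u_old.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50                      # interior grid points per direction
h, dt, alpha = 1.0 / (n + 1), 1e-3, 1.0

# 2-D Laplacian as a Kronecker sum of 1-D second-difference operators.
I = sp.identity(n, format="csr")
D = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n), format="csr") / h**2
L = sp.kron(I, D) + sp.kron(D, I)

A = sp.identity(n * n, format="csr") - dt * alpha * L
u = np.ones(n * n)                        # initial temperature field
u = spla.spsolve(A.tocsc(), u)            # one implicit time step
print(u.reshape(n, n)[n // 2, n // 2])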
APA, Harvard, Vancouver, ISO, and other styles
30

Hietala, Jonas. "A Comparison of Katz-eig and Link-analysis for Implicit Feedback Recommender Systems." Thesis, Linköpings universitet, Artificiell intelligens och integrerad datorsystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119169.

Full text
Abstract:
Recommendations are becoming more and more important in a world where there is an abundance of possible choices, and e-commerce and content providers are featuring recommendations prominently. Recommendations based on explicit feedback, where the user gives feedback for example with ratings, have been a popular research subject. Implicit feedback recommender systems, which passively collect information about the users, are an area of growing interest. They make it possible to generate recommendations based purely on a user's interaction history without requiring any explicit input from the users, which is commercially useful for a wide range of businesses. This thesis builds a recommender system based on implicit feedback using the recommendation algorithms katz-eig and link-analysis, and analyzes and implements strategies for learning optimized parameters for different datasets. The resulting system forms the foundation for Comordo Technologies' commercial recommender system.
Rekommendationer blir viktigare och viktigare i en värld där det finns ett överflöd av möjliga val och där e-handel och innehållsleverantörer använder rekommendationer flitigt. Rekommendationer baserad på explicit återkoppling, där användare ger återkoppling med till exempel betyg, har varit ett populärt forskningsområde. Rekommendationssystem med implicit återkoppling som passivt samlar in information om användarna är ett område som blir mer och mer intressant. Det gör det möjligt att generera rekommendationer endast baserat på en användares interaktionshistoria utan krav på explicit input från användarna, vilket är kommersiellt användbart för en rad olika versamheter. Den här uppsatsen bygger ett rekommendationssystem med implicit återkoppling med rekommendationsalgoritmerna katz-eig och link-analysis och analyserar och implementerar optimeringsstrategier för inlärning av optimerade parameterar för olika dataset. Systemet lägger grunden för Comordo Technologies kommersiella rekommendationssystem.
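A minimal sketch of a Katz-style score on implicit feedback (the toy interaction matrix and the damping parameter beta are assumptions; the thesis' katz-eig presumably combines the Katz index with an eigendecomposition and tuned parameters, which is not reproduced here):

# Katz item-item similarity from a binary user-item interaction matrix.
import numpy as np

R = np.array([[1, 1, 0],      # users x items, implicit interactions (0/1)
              [0, 1, 1],
              [1, 0, 1]], dtype=float)
A = R.T @ R                   # item co-occurrence graph
np.fill_diagonal(A, 0)

beta = 0.1                    # must satisfy beta < 1 / spectral_radius(A)
K = np.linalg.inv(np.eye(3) - beta * A) - np.eye(3)   # sum_k beta^k A^k

user = 0
scores = R[user] @ K          # rank items by accumulated Katz paths
print(np.argsort(-scores))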
APA, Harvard, Vancouver, ISO, and other styles
31

Haynes, Cody D. "Examining the Relationship Between Functions of Self-Directed Violence and the Suicide Implicit Association Test." TopSCHOLAR®, 2015. http://digitalcommons.wku.edu/theses/1544.

Full text
Abstract:
Suicide and non-suicidal self-injury are concerning and prevalent phenomena in the United States; as a result, much research has been undertaken in order to investigate these topics (Centers for Disease Control and Prevention, 2015a). Although the exploration of risk factors is a common approach, other novel approaches have been developed in order to better understand self-directed violence (Klonsky & May, 2013). One of these is a focus on functions served by these behaviors, which is theorized to contribute to grasping their etiologies and help provide effective treatment (Glenn & Klonsky, 2011). Another approach is investigating implicit cognition and self-associations’ influences on the development of self-directed violence (Glashouwer et al., 2010). The current study expanded on previous research by using these two novel approaches simultaneously, and measuring the association between the functional aspects of self-directed violence and the Suicide Implicit Association Test. Participants for this study included 32 adolescent inpatients hospitalized at River Valley Behavioral Health Hospital. The Suicide Implicit Association Test served as the independent variable in this study. The following measures served as dependent variables: the Inventory of Statements About Self-Injury, the Self-Harm Behavior Questionnaire, and the Suicide Attempt Self-Injury Interview. Regression analyses revealed non-significant associations for both intrapersonal (β=1.44, S.E.=.91, p=.13) and interpersonal (β=.004, S.E.=.5, p=.99) functions. Poisson regression analyses revealed non-significant associations for both intrapersonal (β=.01, S.E.=.21, p=.97, CI: -.41, .42) and interpersonal (β=.60, S.E.=.51, p=.24, 95% CI: -.40, 1.60) functions. A logistic regression analysis was used to examine the association between Suicide Implicit Association Test scores and number of previous suicide attempts, and this revealed a high odds ratio [OR = 4.56, 95% CI: .36, 57.76]. Poisson regression analysis was used to examine the relationship between Suicide Implicit Association Test scores and the frequency of previous non-suicidal self-injury, and this revealed a significant positive association (β=.99, S.E.=.07, p=.00, 95% CI: .86, 1.13). Poisson regression analysis was used to examine the relationship between Suicide Implicit Association Test scores and the severity of previous suicidal ideation, and this revealed a significant positive association (β=1.09, S.E.=.23, p=.00, 95% CI: .65, 1.54).
APA, Harvard, Vancouver, ISO, and other styles
32

Onay, Oguz Kaan. "Approximate Factorization Using Acdi Method On Hybrid Grids And Parallelization Of The Scheme." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615589/index.pdf.

Full text
Abstract:
In this thesis study, a fast implicit iteration scheme called the Alternating Cell Directions Implicit method is combined with an Approximate Factorization scheme. This application aims to offer a mathematically well defined version of the Alternating Cell Directions Implicit Method and to increase the accuracy of the iteration scheme that is being used for the numerical solution of the partial differential equations. The iteration scheme presented here is tested using the unsteady diffusion equation, the Laplace equation and the advection-diffusion equation. The accuracy, convergence character and stability character of the scheme are compared with suitable iteration schemes for structured and unstructured quadrilateral grids. Besides, it is shown that the proposed scheme is applicable to triangular and hybrid polygonal grids. A transonic full potential solver is generated using the current scheme. The flow around a 2-D cylinder is solved for subcritical and supercritical cases. Axi-symmetric flow around a cylinder is selected as a benchmark problem since the potential flow around bodies with a blunt leading edge is a more challenging problem than slender bodies. Besides, it is shown that the method is naturally appropriate for parallelization using a shared-memory approach without using domain decomposition applications. The parallelization that is performed here is partially line, partially point parallelization. The performance of the application is presented for a 3-D unsteady diffusion problem using Cartesian cells and a 2-D unsteady diffusion problem using both structured and unstructured quadrilateral cells.
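The approximate-factorization idea can be illustrated with a generic sketch: the 2-D implicit diffusion operator is replaced by the product of two 1-D implicit operators, each solved as a sequence of tridiagonal systems (the cell-direction ordering of the actual ACDI scheme on hybrid grids is not reproduced; grid size and step sizes are arbitrary).

# (I - dt*(Dxx + Dyy)) is approximated by (I - dt*Dxx)(I - dt*Dyy),
# solved by banded solves along each direction in turn.
import numpy as np
from scipy.linalg import solve_banded

n, h, dt, alpha = 64, 1.0 / 65, 1e-4, 1.0
r = alpha * dt / h**2

# Banded form of the 1-D operator (I - dt*alpha*Dxx) with Dirichlet ends.
ab = np.zeros((3, n))
ab[0, 1:] = -r            # super-diagonal
ab[1, :] = 1 + 2 * r      # main diagonal
ab[2, :-1] = -r           # sub-diagonal

u = np.random.rand(n, n)  # field at time level m

# Sweep 1: implicit in x (one tridiagonal solve per row).
u = np.column_stack([solve_banded((1, 1), ab, u[i, :]) for i in range(n)]).T
# Sweep 2: implicit in y (one tridiagonal solve per column).
u = np.column_stack([solve_banded((1, 1), ab, u[:, j]) for j in range(n)])
print(u.mean())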
APA, Harvard, Vancouver, ISO, and other styles
33

Triquet, Frédéric Chaillou Christophe Meseure Philippe. "Habillage de modèles mécaniques facettisation temps réel de surfaces implicites /." [S.l.] : [s.n.], 2001. http://www.univ-lille1.fr/bustl-grisemine/pdf/extheses/50376-2001-255-256.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Roy, Thomas. "Time-Stepping Methods in Cardiac Electrophysiology." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32626.

Full text
Abstract:
Modelling in cardiac electrophysiology results in a complex system of partial differential equations (PDE) describing the propagation of the electrical wave in the heart muscle coupled with a highly nonlinear system of ordinary differential equations (ODE) describing the ionic activity in the cardiac cells. This system forms the widely accepted bidomain model or its slightly simpler version, the monodomain model. To a large extent, the stiffness of the whole model depends on the choice of the ionic model, which varies in terms of complexity and realism. These simulations require accurate and, depending on the ionic model used, possibly very stable numerical methods. At this time, solving these models numerically requires CPU time of around one day per heartbeat. Therefore, it is necessary to use the most efficient method for these simulations. This research focuses on the comparison and analysis of several time-stepping methods: explicit or semi-implicit, operator splitting, deferred correction and Rush-Larsen methods. The goal is to find the optimal method for the ionic model used. For our analysis, we used the monodomain model but our results apply to the bidomain model as well. We compare the methods for three ionic models of varying complexity and stiffness: the Mitchell-Schaeffer models with only 2 variables, the more realistic Beeler-Reuter model with 8 variables, and the stiff and very complex ten Tuscher-Noble-Noble-Panfilov (TNNP) models with 17 variables. For each method, we derived absolute stability criteria of the spatially discretized monodomain model and verified that the theoretical critical time steps obtained closely match the ones in numerical experiments. Convergence tests were also conducted to verify that the numerical methods achieve an optimal order of convergence on the model variables and derived quantities (such as speed of the wave, depolarization time), and this in spite of the local non-differentiability of some of the ionic models. We looked at the efficiency of the different methods by comparing computational times for similar accuracy. Conclusions are drawn on the methods to be used to solve the monodomain model based on the model stiffness and complexity, measured respectively by the most negative eigenvalue of the model's Jacobian and the number of variables, and based on strict stability and accuracy criteria.
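As an illustration of one of the schemes compared in the thesis, here is a minimal Rush-Larsen step for a single gating variable (the rate functions below are generic assumptions, not the Mitchell-Schaeffer, Beeler-Reuter or TNNP coefficients): the gate ODE dw/dt = (w_inf - w)/tau is integrated exactly over a step with w_inf and tau frozen at the current voltage.

import numpy as np

def rush_larsen_step(w, V, dt, w_inf, tau):
    """One exponential (Rush-Larsen) update for gate w at membrane voltage V."""
    return w_inf(V) + (w - w_inf(V)) * np.exp(-dt / tau(V))

# Assumed toy rate functions, for demonstration only.
w_inf = lambda V: 1.0 / (1.0 + np.exp(-(V + 40.0) / 10.0))
tau = lambda V: 1.0 + 5.0 * np.exp(-((V + 40.0) / 30.0) ** 2)

w, V, dt = 0.1, -60.0, 0.05
for _ in range(100):
    w = rush_larsen_step(w, V, dt, w_inf, tau)
print(w)   # relaxes towards w_inf(V) and stays stable for any dt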
APA, Harvard, Vancouver, ISO, and other styles
35

McNelis, Kathleen. "The underlying dimensionality of people's implicit job theories across cognitive sets : implications for comparable worth /." The Ohio State University, 1985. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487262513406512.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Christie, Michael Alexander. "Multiple memory systems: contributions of human and animal serial reaction time tasks." Thesis, University of Canterbury. Psychology, 2001. http://hdl.handle.net/10092/1379.

Full text
Abstract:
Human memory systems have been divided into two broad domains, one responsible for 'declarative memory' and the other for 'non-declarative memory'. The evidence for multiple memory systems is reviewed with respect to the human SRT, a sensitive measure of non-declarative memory. A qualitative review of the human SRT literature concludes that damage to extrapyramidal brain systems disrupts SRT performance whereas limbic system neuropathology (LSN) leaves performance intact. However, a meta-analysis of the SRT literature with neuropathological patients revealed unexpectedly that patients with explicit memory disorders are impaired on the SRT task, although less severely than patients with extrapyramidal damage. Other evidence suggested that the apparent SRT impairment in humans with LSN might be due to the additional pathology (eg frontal) often evident in these patients. A brief review of the animal evidence for multiple memory systems concluded that, like humans, animals too have multiple memory systems but none of the animal tasks used to model non-declarative memory make good conceptual or behavioural contact with the corresponding human tasks. Thus a novel animal-analogue of the human-SRT task, the 'fan-maze', was developed. Although rats displayed a reasonable ability to perform the fan-maze SRT task it was abandoned due to technical and conceptual problems in favour of a better design. The second new SRT task used intra-cranial self-stimulation to promote prolonged, rapid and continuous responding. A control study determined that the optimal conditions for sequence learning was a single large (2820 trial) session. Intact rats that experienced a switch from the repeating to a random sequence under these conditions demonstrated a clear interference effect, the primary measure of SRT performance. A lesion study used these optimal conditions and showed that small caudate lesions impaired, whereas small hippocampal lesions facilitated, rat-SRT performance. Hence, this second task has proven to be a valid animal-analogue of the human SRT task, as rats performed it in a manner similar to that shown by humans and relied on the same neural substrate to perform the task as humans. In addition, this second task resolved the discrepancy of the LSN meta-analysis. Quantitative findings are reviewed in light of theories and studies presented earlier in the thesis. Limitations of the thesis are identified and suggestions are made as to future SRT research in animals or humans.
APA, Harvard, Vancouver, ISO, and other styles
37

Técourt, Jean-Pierre. "Sur le calcul effectif de la topologie de courbes et surfaces implicites." Nice, 2005. http://www.theses.fr/2005NICE4057.

Full text
Abstract:
Dans cette thèse nous nous sommes intéressés au problème du calcul effectif de la topologie de courbes et surfaces implicites. On peut distinguer quatres travaux différents: Dans une première partie, on présente un algorithme permettant de calculer la topologie d'une courbe de R3 définie comme intersection de deux surfaces algébriques. C'est à dire le calcul d'un graphe de points isotope à la courbe de départ. Puis on détaille un algorithme de calcul d'un arrangement de quadriques par balayage, basé sur une décomposition en ``trapézoides'' du plan de balayage. La troisième partie est consacré à un algorithme de triangulation de surfaces algébriques. Cet algorithme basé sur le calcul d'une stratification de Whitney de la surface est le premier fournissant un maillage isotopique à la surface de départ y compris dans le cas de surfaces singulières. Enfin, on étudie une famille de surfaces paramétrées, les surfaces de Steiner, apportant des réponses aux problèmes de classification effective, implicitisation et calcul d'antécénts
In this thesis, we are interested in the effective computation of the topology of implicit curves and surfaces. Four different contributions can be distinguished. In the first part, we present an algorithm for computing the topology of a curve of R3 defined as the intersection of two implicit surfaces; more precisely, we compute a graph of points isotopic to the original curve. Then we detail a sweeping algorithm to compute an arrangement of quadrics based on a ``trapezoidal'' decomposition of the sweeping plane. The third part is devoted to an algorithm for the triangulation of algebraic surfaces. This algorithm, based on the computation of a Whitney stratification of the surface, is the first to provide an isotopic meshing of the original surface even for singular surfaces. Finally, we study a family of parametrized surfaces, the Steiner surfaces, providing answers to the problems of effective classification, implicitization and computation of preimages.
APA, Harvard, Vancouver, ISO, and other styles
38

Fournier, Marc. "Nouvelles représentations volumiques implicites appliquées au traitement de maillages." Université Louis Pasteur (Strasbourg) (1971-2008), 2008. https://publication-theses.unistra.fr/restreint/theses_doctorat/2008/FOURNIER_Marc_2008.pdf.

Full text
Abstract:
Les données brutes produites par les scanners 3D doivent être traitées afin de reconstruire la forme géométrique des objets numérisés. Le filtrage des données pour réduire le bruit de numérisation introduit par le scanner à l’étape d’acquisition ainsi que la fusion des données à partir de plusieurs scans partiels d’un même objet pour obtenir une description unique et complète de l’objet sont deux étapes importantes dans le processus de reconstruction de l’objet 3D numérique. La transformée en distance scalaire (TDS) d’un maillage 3D est une représentation volumique des données qui décrit la surface d’un objet numérisé de façon implicite. Ce concept est utilisé dans la littérature répertoriée sur le sujet pour réaliser plusieurs opérations sur les maillages telles que la fusion des données. Cette thèse propose de nouvelles représentations implicites complémentaires à la TDS pour améliorer la précision de cette représentation. Une nouvelle transformée en distance vectorielle (TDV) est proposée et les algorithmes de triangulation utilisés pour reconstruire le maillage résultant de la TDS sont adaptés à la nouvelle TDV. La méthode de fusion de maillages appliquée sur la TDS est également adaptée à la TDV afin de produire des résultats de meilleure qualité. Deux méthodes de filtrage adaptatif appliquées sur la TDV sont développées pour diminuer le bruit de numérisation tout en préservant les caractéristiques géométriques des maillages. Une nouvelle transformée en distance réversible (TDR) est aussi introduite pour préserver la topologie des maillages initiaux suite aux traitements des maillages dans le domaine implicite de la transformée en distance. Les applications de filtrage et de fusion développées pour la TDV sont adaptées à la TDR. Des métriques d’évaluation sont mises au point pour mesurer de façon quantitative la qualité des maillages résultants des applications sur les nouvelles transformées
Raw data produced by 3D scanners need to be processed to reconstruct the surface of scanned objects. Mesh filtering to reduce acquisition noise introduced by the scanner and mesh fusion to integrate multiple scans of an object into a complete description of the object are two important steps in the reconstruction process of a scanned object. The scalar field distance transform (SDT) of a 3D mesh is a volumetric representation of the mesh which implicitly describes the surface of a scanned object. This alternative representation is used in the literature to perform many mesh operations such as mesh fusion. This thesis introduces new implicit representations based on the SDT to improve the precision of this alternative representation. A new vector field distance transform (VDT) is proposed and triangulation algorithms used to reconstruct the SDT resulting mesh are adapted to the new VDT. The mesh fusion method applied to the SDT is also adapted to the VDT to improve the results quality of this mesh operation using the distance transform alternative representations. Two adaptive filtering methods applied on the VDT are designed to reduce acquisition noise while preserving the meshes geometric features. A new reversible distance transform (RDT) is also proposed to preserve the initial meshes topology when processing meshes in the implicit distance transform domain. Mesh filtering and mesh fusion applications designed for the VDT are adapted to the RDT
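A rough sketch of the starting point of such pipelines, a scalar (unsigned) distance field sampled from surface points on a voxel grid, is given below; the vector-valued and reversible transforms proposed in the thesis refine this basic construction (the sample data and grid resolution here are arbitrary).

# Plain unsigned distance field from scattered surface samples on a voxel grid.
import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(500, 3)              # stand-in for scanned surface samples
tree = cKDTree(points)

n = 32
axis = np.linspace(0.0, 1.0, n)
gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
voxels = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)

dist, _ = tree.query(voxels)                  # nearest surface sample per voxel
sdt = dist.reshape(n, n, n)                   # scalar field distance transform
print(sdt.min(), sdt.max())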
APA, Harvard, Vancouver, ISO, and other styles
39

Balez, Ralph. "Pygmalion au laboratoire : contribution à la modélisation de l'influence implicite des attentes théoriques de l'expérimentateur sur les participants." Paris 10, 2007. http://www.theses.fr/2007PA100193.

Full text
Abstract:
L'objet de cette thèse est en premier lieu une étude critique des théories portant sur le biais de confirmation des attentes telles qu'elles ont été développées par la psychologie sociale anglo-saxonne pour l'essentiel. Cette présentation permet de dégager un modèle de ces études dont les applications expérimentales constituent le volet empirique. L'idée de base est que l'attente et le comportement explicite et implicite du chercheur font partie de la définition de la situation expérimentale. L'expérimentateur est susceptible de transmettre implicitement ses attentes théoriques aux sujets. De ce fait, les sujets sont susceptibles de se comporter de façon à les valider. Deux voies sont possibles pour aborder le problème de la confirmation par les sujets des attentes des chercheurs. On peut mettre en place des précautions méthodologiques qui tentent de limiter cet artefact ou, d'une façon moins économique on peut utiliser la méta-expérimentation pour observer le phénomène, en le considérant non plus comme un biais, mais comme un phénomène psychosocial à part entière. Dans cette dernière perspective, nous proposerons une synthèse des variables susceptibles de jouer un rôle dans l'émergence du phénomène de la confirmation des attentes du chercheur en psychologie. Puis, nous présenterons cinq études empiriques opérationnalisant celui-ci. Nous concluons ce travail en approfondissant le rôle institutionnel et psychosociologique de l'expérimentateur dans l'émergence du phénomène et en proposant des procédures afin d'en prolonger l'étude
The researcher's expectations, and his/her explicit and implicit behaviour combine to form a certain definition of the experimental situation. The researcher may transmit his/her theoretical expectations to the subjects of his/her research during the course of an experiment. The subjects may be susceptible to reconstructing these perceived demands and behave in ways that confirm them. This phenomenon is currently referred to as either confirmation bias, researcher's effect, or the "Pygmalion effect". This thesis is a critical presentation of the theories and studies developed by some social psychologists. We focus on the relationship established between the researcher and the participant. We analyze the experimental situation through the study of its three components: context, topic and researcher (Lemaine, 1975). We subsequently use three kinds of basic precautions to limit the manner in which this artefact could be integrated into the research protocol. The investigation before/after, and the initial and formal steps method are explained and discussed here. These methods apply to all experimental situations that involve interaction between a researcher and each participant. The various elements are synthesized in an operating model called "EMIR", which proposes to use a meta-experiment in order to study "the researcher's effect", not as a bias but as a separate psychological phenomenon. Next, we present five empirical studies of this phenomenon. We conclude this thesis by questioning the psychosocial role of the researcher in the experimental situation. To approach this question we rely on the contributions of the meta-experiment problematics
APA, Harvard, Vancouver, ISO, and other styles
40

Triquet, Frédéric. "Habillage de modèles mécaniques : facettisation temps réel de surfaces implicites." Lille 1, 2001. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2001/50376-2001-255-256.pdf.

Full text
Abstract:
Cette thèse s'inscrit dans le cadre des simulateurs chirurgicaux pédagogiques développés au LIFL. Ces simulateurs nécessitent l'animation temps réel de corps basée sur la physique. C'est l'objet de notre bibliothèque SPORE qui permet entre autres de construire un objet géométrique détaillé autour d'un modèle mécanique généralement trop grossier pour être affiché directement. Cette opération s'appelle l'habillage. Mon travail de thèse s'est intéressé à ce procédé géométrique. En particulier, les surfaces implicites à squelette forment une possibilité intéressante pour fournir un habillage puisqu'elles permettent, grâce à l'opération de mélange appelée blending, d'obtenir des formes complexes à partir de quelques points mécaniques. Cependant, ces surfaces nécessitent des algorithmes spécifiques pour leur affichage. Je me suis basé sur l'algorithme des Marching Cubes, réputé lent, mais en l'enrichissant de plusieurs améliorations combinées à des structures de données efficaces et des algorithmes optimisés. Ces améliorations ne se bornent pas qu'à des accélérations : j'apporte également aux problèmes de facettisations ambigues une solution compatible avec nos contraintes de temps-réel. De plus, grâce à une méthode originale mon implantation peut controler le blending en permettant à l'utilisateur de spécifier là où les fonctions de mélange des surfaces implicites doivent s'appliquer. Mon implantation en C++, sous forme de librairie, permet de facettiser en temps réel des surfaces composées de plusieurs centaines de primitives sur un ordinateur de gamme moyenne. Nous l'utilisons dans un simulateur.
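A minimal sketch of a skeleton-based implicit field with blending follows (the kernel shape, radii and iso-value are assumptions, not SPORE's): each point primitive contributes a smooth, compactly supported kernel, contributions are summed, and a marching-cubes pass over grid samples of such a field would extract the triangle mesh.

import numpy as np

skeleton = np.array([[0.0, 0.0, 0.0],
                     [1.2, 0.0, 0.0]])        # two point primitives
R, iso = 1.0, 0.5                             # influence radius, iso-value

def field(p):
    d2 = np.sum((skeleton - p) ** 2, axis=1) / R**2
    k = np.where(d2 < 1.0, (1.0 - d2) ** 2, 0.0)   # compactly supported kernel
    return np.sum(k)                                # blending = summation

# The surface is the level set {p : field(p) = iso}.
print(field(np.array([0.6, 0.0, 0.0])) > iso)       # point in the blended region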
APA, Harvard, Vancouver, ISO, and other styles
41

Zhang, Hong. "Efficient Time Stepping Methods and Sensitivity Analysis for Large Scale Systems of Differential Equations." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/50492.

Full text
Abstract:
Many fields in science and engineering require large-scale numerical simulations of complex systems described by differential equations. These systems are typically multi-physics (they are driven by multiple interacting physical processes) and multiscale (the dynamics takes place on vastly different spatial and temporal scales). Numerical solution of such systems is highly challenging due to the dimension of the resulting discrete problem, and to the complexity that comes from incorporating multiple interacting components with different characteristics. The main contributions of this dissertation are the creation of new families of time integration methods for multiscale and multiphysics simulations, and the development of industrial-strength tools for sensitivity analysis. This work develops novel implicit-explicit (IMEX) general linear time integration methods for multiphysics and multiscale simulations typically involving both stiff and non-stiff components. In an IMEX approach, one uses an implicit scheme for the stiff components and an explicit scheme for the non-stiff components such that the combined method has the desired stability and accuracy properties. Practical schemes with favorable properties, such as maximized stability, high efficiency, and no order reduction, are constructed and applied in extensive numerical experiments to validate the theoretical findings and to demonstrate their advantages. The approximate matrix factorization (AMF) technique exploits the structure of the Jacobian of the implicit parts, which may lead to further efficiency improvement of IMEX schemes. We have explored the application of AMF within some high order IMEX Runge-Kutta schemes in order to achieve high efficiency. Sensitivity analysis gives quantitative information about the changes in a dynamical model's outputs caused by small changes in the model inputs. This information is crucial for data assimilation, model-constrained optimization, inverse problems, and uncertainty quantification. We develop a high performance software package for sensitivity analysis in the context of stiff and nonstiff ordinary differential equations. Efficiency is demonstrated by direct comparisons against existing state-of-the-art software on a variety of test problems.
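A minimal IMEX sketch (first-order Euler splitting; the stiff operator and the nonstiff coupling below are invented for illustration, while the dissertation develops higher-order general linear and Runge-Kutta IMEX families): the stiff linear part is treated implicitly and the nonstiff part explicitly.

# One IMEX-Euler step: (I - dt*A) y_{n+1} = y_n + dt * g(y_n).
import numpy as np

A = np.array([[-1000.0, 0.0],
              [0.0, -0.5]])                            # stiff linear operator
g = lambda y: np.array([np.sin(y[1]), 0.1 * y[0]])     # nonstiff coupling (assumed)

y, dt, I = np.array([1.0, 1.0]), 0.01, np.eye(2)
for _ in range(100):
    y = np.linalg.solve(I - dt * A, y + dt * g(y))
print(y)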
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
42

Salman, Lubna Hussein. "L'implicite dans "A la recherche du temps perdu" : étude sur un aspect du discours proustien." Phd thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00984982.

Full text
Abstract:
The implicit is defined as content that is present in discourse without being formally expressed. Presupposition and implicature are the two fundamental concepts of this notion. They act as information implied in the discourse, whose essence can be grasped or deciphered with the help of pragmatics and enunciative linguistics. Proustian discourse makes remarkable use of this notion and its concepts. This work is entirely devoted to the study of the implicit in Marcel Proust's À la recherche du temps perdu. The development of this notion in the work takes shape, on the one hand, through the verbal interaction of Proust's characters and, on the other, through the discourse of the narrator, which moves towards a new tendency: that of an implicit narrator and an implicit narratee. The interest of this study lies in categorising the Proustian implicit, shedding light on its linguistic status, and then on the way it is used.
APA, Harvard, Vancouver, ISO, and other styles
43

Lindén, David. "Exploration of implicit weights in composite indicators : The case of resilience assessment of countries’ electricity supply." Thesis, KTH, Hållbar utveckling, miljövetenskap och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-239687.

Full text
Abstract:
Composite indicators, also called indices, are widely used synthetic measures for ranking and benchmarking alternatives across complex concepts. The aim of constructing a composite indicator is, among other things, to simplify and condense the information of a plurality of underlying indicators. However, to avoid misleading results, it is important to ensure that the construction is performed in a transparent and representative manner. To this end, this thesis aims to aid the construction of the Electricity Supply Resilience Index (ESRI) – which is a novel energy index, developed within the Future Resilient Systems (FRS) programme at the Singapore-ETH Centre (SEC) – by looking at the complementary and fundamental component of index aggregation, namely the weighting of the indicators. Normally, weights are assigned to reflect the relative importance of each indicator, based on stakeholders’ or decision-makers’ preferences. Consequently, the weights are often perceived to be importance coefficients, independent from the dataset under analysis. However, it has recently been shown that the structure of the dataset and correlations between the indicators often have a decisive effect on each indicator’s importance in the index. In fact, their importance rarely coincides with the assigned weights. This phenomenon is sometimes referred to as implicit weights. The aim of this thesis is to assess the implicit weights in the aggregation of ESRI.  For this purpose, a six-step analytical framework, based on a novel variance-based sensitivity analysis approach, is presented and applied to ESRI. The resulting analysis shows that statistical dependencies between ESRI’s underlying indicators have direct implications on the outcome values – the equal weights assigned a-priori do not correspond to an equal influence from each indicator. Furthermore, when attempting to optimise the weights to balance the contribution of each indicator, it is found that this would require a highly unbalanced set of weights and come at the expense of representing the indicators in an effective manner. Thereby, it can be concluded that there are significant dependencies between the indicators and that their correlations need to be accounted for to achieve a balanced and representative index construction. Guided by these findings, this thesis provides three recommendations for improving the statistical representation and conceptual coherence of ESRI. These include: (1) avoid aggregating a negatively correlated indicator – keep it aside, (2) remove a conceptually problematic indicator – revise its construction or conceptual contribution, and (3) aggregate three collinear and conceptually intersecting indicators into a sub-index, prior to aggregation – limit their overrepresentation. By revising the index according to these three recommendations, it is found that ESRI showcases a greater conceptual and statistical coherence. It can thus be concluded that the analytical framework, proposed in this thesis, can aid the development of representative indices.
Kompositindikatorer (eller index) är populära verktyg som ofta används vid rankning och benchmarking av olika alternativ utifrån komplexa koncept. Syftet med att konstruera ett index är, bland annat, att förenkla och sammanfatta informationen från ett flertal underliggande indikatorer. För att undvika missvisande resultat är det därmed viktigt att konstruera index på ett transparent och representativt sätt. Med detta i åtanke, avser denna uppsats att stödja konstruktionen av Electricity Supply Resilience Index (ESRI) – vilket är ett nyutvecklat energiindex, framtaget inom Future Resilient Systems (FRS) programmet på Singapore-ETH Centre (SEC). Detta görs genom att studera ett vanligt fenomen (s.k. implicita vikter) som gör sig gällande i ett av konstruktionsstegen, då de underliggande indikatorerna ska viktas och aggregeras till ett index. I detta steg tilldelas vanligtvis vikter till de enskilda indikatorerna som ska spegla deras relativa betydelse i indexet. Det har dock nyligen visats att datastrukturen och korrelationer mellan indikatorerna har en avgörande påverkan på varje indikators betydelse i indexet, vilket ibland kan vara helt oberoende av vikten de tilldelats. Detta fenomen kallas ibland för implicita vikter, då de ej är explicit tilldelade utan uppkommer från datastrukturen. Syftet med denna uppsatts är således att undersöka de implicita vikterna i aggregationen av ESRI.  För detta ändamål sker en tillämpning och utökning av en nyutvecklad variansbaserad känslighetsanalys, baserad på olinjär regression, för bedömning av implicita vikter i kompositindikatorer. Resultaten från denna analys visar att statistiska beroenden mellan ESRIs underliggande indikatorer har direkt inverkan på varje indikators betydelse i indexet. Detta medför att vikterna ej överensstämmer med indikatorernas betydelse. Följaktligen utförs en vikt-optimering, för att balansera bidraget från varje indikator. Utifrån resultaten av denna vikt-optimering kan det konstateras att det inte är tänkbart att balansera bidraget från varje indikator genom att justera vikterna. Om så görs, skulle det ske på bekostnad av att kunna representera varje indikator på ett effektivt sätt. Därmed kan slutsatsen dras att det finns tydliga beroenden mellan indikatorer och att deras korrelationerna måste tas i hänsyn för att uppnå en balanserad och representativ indexkonstruktion. Utifrån dessa insikter presenteras tre rekommendationer för att förbättra den statistiska representationen och konceptuella samstämmigheten i ESRI. Dessa innefattar: (1) Undvik att aggregera en negativt korrelerad indikator - behåll den vid sidan av, (2) ta bort en konceptuellt problematisk indikator - revidera dess konstruktion eller konceptuella bidrag, och (3) sammanställ tre kollinära och konceptuellt överlappande indikatorer i ett sub-index, före aggregering - begränsa deras överrepresentation. När dessa rekommendationer implementerats står det klart att den reviderade ESRI påvisar en förbättrad konceptuell och statistisks samstämmighet. Därmed kan det fastställas att det analytiska verktyg som presenteras i denna uppsats kan bidra till utvecklingen av representativa index.
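A toy illustration of the implicit-weights phenomenon (the data and the linear proxy are assumptions, not ESRI's indicators or the thesis' variance-based estimator): with equal nominal weights but correlated indicators, the share of index variance associated with each indicator is far from equal.

import numpy as np

rng = np.random.default_rng(1)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)     # strongly correlated with x1
x3 = rng.normal(size=n)                      # independent indicator
X = np.column_stack([x1, x2, x3])

w = np.array([1 / 3, 1 / 3, 1 / 3])          # equal nominal weights
index = X @ w

# Squared correlation with the index as a simple stand-in for the
# first-order sensitivity effect discussed in the thesis.
influence = np.array([np.corrcoef(X[:, j], index)[0, 1] ** 2 for j in range(3)])
print(influence / influence.sum())           # x1 and x2 dominate, x3 is diluted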
APA, Harvard, Vancouver, ISO, and other styles
44

Jakobsson, Ina, and Emmalinn Knutsson. "Explicit or Implicit Grammar? - Grammar Teaching Approaches in Three English 5 Textbooks." Thesis, Malmö universitet, Fakulteten för lärande och samhälle (LS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-34559.

Full text
Abstract:
Grammar is an essential part of language learning. Thus, it is important that teachers know how to efficiently teach grammar to students, and with what approach - explicitly or implicitly as well as through Focus on Forms (FoFs), Focus on Form (FoF) or Focus on Meaning (FoM). Furthermore, the common use of textbooks in English education in Sweden makes it essential to explore how these present grammar. Therefore, to make teachers aware of what grammar teaching approach a textbook has, this degree project intends to examine how and to what degree English textbooks used in Swedish upper secondary schools can be seen to exhibit an overall explicit or implicit approach to grammar teaching. The aim is to analyze three English 5 textbooks that are currently used in classrooms in Sweden, through the use of relevant research regarding grammar teaching as well as the steering documents for English 5 in Swedish upper secondary school. The analysis was carried out with the help of a framework developed by means of research on explicit and implicit grammar teaching as well as the three grammar teaching approaches FoFs, FoF and FoM. Thus, through the textbook analysis, we set out to investigate whether the textbooks present grammar instruction explicitly or implicitly and through FoFs, FoF or FoM. After having collected research on the topic of how to teach grammar, it became apparent that researchers on grammar teaching agree that FoF is the most beneficial out of the three above mentioned approaches, and thus, we decided to take a stand for this approach throughout the project. The results of this study showed that two out of three textbooks used overall implicit grammar teaching through FoM. Moreover, one out of the three textbooks used overall explicit grammar teaching through an FoF approach.
APA, Harvard, Vancouver, ISO, and other styles
45

Leprovost, Damien. "Découverte et analyse des communautés implicites par une approche sémantique en ligne : l'outil WebTribe." Phd thesis, Université de Bourgogne, 2012. http://tel.archives-ouvertes.fr/tel-00866489.

Full text
Abstract:
With the rise of Web 2.0 and the collaborative technologies attached to it, the Web has today become a vast platform for exchanges between users. Most websites are currently either dedicated to the social interactions of their users or offer tools to develop these interactions. Our work focuses on understanding these exchanges, as well as the community structures that arise from them, by means of a semantic approach. To meet the comprehension needs of website analysts and other community managers, we analyse these community structures in order to extract essential characteristics such as their thematic centres and central contributors. Our semantic analysis relies in particular on lightweight reference ontologies to define several new metrics, such as temporal semantic centrality and semantic propagation probability. We use an online approach in order to follow user activity in real time, within our community analysis tool WebTribe. We have implemented and tested our methods on data extracted from real social communication systems on the Web.
APA, Harvard, Vancouver, ISO, and other styles
46

Koobus, Bruno. "Algorithmes multigrille et algorithmes implicites pour les écoulements compressibles turbulents." Nice, 1994. http://www.theses.fr/1994NICE4808.

Full text
Abstract:
This thesis presents, on the one hand, work on multigrid algorithms and, on the other, a study of an implicit scheme for computing turbulent flows. In the first part, we first propose a finite-volume agglomeration multigrid method for solving advection-diffusion equations on unstructured meshes. This multigrid algorithm is then extended to simplified compressible Navier-Stokes equations. A linearised implicit scheme is used for the time advancement, and the solution of the linear system by agglomeration multigrid is studied. An abstract analysis of a parallel multigrid solver based on a residual decomposition is then presented. A convergence proof is given, based on a smoothing property and an approximation property. In this approach, a filtering of the correction is performed on the fine level. In the second part, a review of the main turbulence models is first proposed. We then present a numerical method that combines a mixed finite element/finite volume approximation on unstructured meshes with a linearised implicit scheme for the time advancement. Particular attention was paid to the implicit treatment of the source term. The turbulence models used are low-Reynolds two-equation k- models. Computations were carried out for a subsonic duct flow and a supersonic flat-plate flow.
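The smoothing/coarse-grid-correction structure underlying such multigrid methods can be sketched on a 1-D Poisson problem (weighted Jacobi smoother, full-weighting restriction, linear interpolation; agglomeration multigrid applies the same structure to control volumes on unstructured meshes, which is not reproduced here):

import numpy as np

def laplacian(n, h):
    """Dense 1-D Poisson matrix with Dirichlet boundaries (illustration only)."""
    return (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(A, b, x, iters=3, omega=2.0 / 3.0):
    d = np.diag(A)
    for _ in range(iters):
        x = x + omega * (b - A @ x) / d
    return x

def restrict(r):                              # full weighting to the coarse grid
    return 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(ec, nf):                          # linear interpolation back to fine
    e = np.zeros(nf)
    e[1::2] = ec
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    return e

nf, hf = 63, 1.0 / 64
Af, bf, x = laplacian(nf, hf), np.ones(nf), np.zeros(nf)
Ac = laplacian((nf - 1) // 2, 2 * hf)

for _ in range(10):                           # two-grid cycles
    x = jacobi(Af, bf, x)                                 # pre-smoothing
    ec = np.linalg.solve(Ac, restrict(bf - Af @ x))       # coarse-grid correction
    x = jacobi(Af, bf, x + prolong(ec, nf))               # prolongation + post-smoothing
print(np.linalg.norm(bf - Af @ x))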
APA, Harvard, Vancouver, ISO, and other styles
47

Timesli, Abdelaziz. "Simulation du soudage par friction et malaxage à l'aide de méthodes sans maillage." Thesis, Université de Lorraine, 2013. http://www.theses.fr/2013LORR0111/document.

Full text
Abstract:
Le procédé de soudage par friction et malaxage est un procédé récent qui a été développé au sein de l'institut de soudure britannique "The Welding Institute" au début des années 90. Ce procédé, utilisé généralement en aéronautique, est sans apport de matière et permet de souder principalement des alliages d'aluminium difficilement soudables par les procédés classiques de soudage. Il consiste à malaxer le matériau de base à l'aide d'un outil constitué d'un pion et d'un épaulement frottant sur les faces supérieures des tôles à souder. La modélisation de ce procédé est très complexe puisque ce dernier implique des couplages entre des phénomènes mécaniques, thermiques et métallurgiques. Le malaxage dans le procédé de soudage FSW est difficile à simuler à l'aide de la méthode des éléments finis (en lagrangien) puisque la zone proche de l'outil de soudage est le siège de grandes déformations. Donc le remaillage est nécessaire. Cependant, le remaillage est cher et très difficile pour les problèmes tridimensionnels. Par ailleurs, après un remaillage, il est nécessaire d'interpoler les champs (vitesses, contraintes,...) correspondant à la solution courante, ce qui peut introduire des erreurs supplémentaires dans le calcul (on parle de diffusion numérique). Nous proposons dans ce travail des modèles basés sur la méthode sans maillage dite "Smoothed Particle Hydrodynamics SPH" et la méthode des moindres carrés mobiles (Moving Least Square MLS) pour la simulation de ce procédé. Ces modèles sont formulés dans le cadre lagrangien et utilisent la forme forte des équations aux dérivées partielles. Le premier modèle basé sur SPH considère la zone de soudure comme un fluide non newtonien faiblement compressible et dont la viscosité dépend de la température. Ce modèle est proposé pour la simulation numérique du comportement thermomécanique d'un matériau soudé par le procédé FSW. Dans le deuxième modèle, un algorithme itératif implicite de premier ordre a été proposé, pour simuler le malaxage de la matière dans le cas d'un matériau viscoplastique, en utilisant la méthode MLS et la technique de collocation. Le troisième modèle est un algorithme implicite d'ordre élevé basée sur le couplage de la méthode MLS et la Méthode Asymptotique Numérique MAN. Cet algorithme permet de réduire le temps de calcul par rapport à l'algorithme itératif implicite de premier ordre. La validation de ces trois modèles proposés a été faite par le code industriel Fluent
Friction stir welding is a recent process that has been developed by the British Welding Institute TWI "The Welding Institute" since the 1990s. This process, generally used in aerospace, does not need additional material and mainly allows joining plates of aluminum alloys which are difficult to weld by the classical welding processes. It consists in mixing the base material using a tool comprising a pin and a shoulder which heats the plates to be welded by friction. The modeling of this process is very complex since it involves the coupling between mechanical, thermal and metallurgical phenomena. The mixing in the FSW welding process is difficult to simulate using the finite element method in a Lagrangian framework since the area near the welding tool undergoes large deformations. So a remeshing procedure is often required. However, remeshing can be very expensive and difficult to perform for three-dimensional problems. Moreover, after the remeshing step, it is necessary to interpolate the fields (velocities, stresses, ...) corresponding to the current solution, which may lead to additional errors in the calculation (called numerical diffusion). In this work, we propose models based on the meshless Smoothed Particle Hydrodynamics (SPH) and Moving Least Squares (MLS) methods for the simulation of this welding process. These models are formulated in a Lagrangian framework and use the strong form of the partial differential equations. The first model, based on SPH, treats the welding zone as a weakly compressible non-Newtonian fluid whose viscosity depends on the temperature. This model is proposed for the numerical simulation of the thermo-mechanical behavior of a material welded by the FSW process. The second model is a first order implicit iterative algorithm proposed to simulate material mixing in the case of a visco-plastic behavior, using the MLS method and the collocation technique. The third model is a high order implicit algorithm based on the coupling of the MLS method and the Asymptotic Numerical Method (ANM). This algorithm reduces the computation time compared with the first order implicit iterative algorithm. The validation of these three proposed models was carried out against the industrial code Fluent.
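Two basic SPH ingredients, the 2-D cubic-spline kernel and summation density, can be sketched as follows (the material model, viscosity law and boundary handling of the thesis are omitted; the particle layout and smoothing length are arbitrary):

import numpy as np

def cubic_spline_w(r, h):
    """Standard 2-D cubic spline kernel W(r, h)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

# Particles on a small 2-D lattice, all with equal mass.
xs, ys = np.meshgrid(np.arange(10), np.arange(10))
pos = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
mass, h = 1.0, 1.3

# Summation density: rho_i = sum_j m_j W(|x_i - x_j|, h)
dists = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
rho = (mass * cubic_spline_w(dists, h)).sum(axis=1)
print(rho.mean())   # interior particles approach the lattice density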
APA, Harvard, Vancouver, ISO, and other styles
48

Moussaoui, Kamal. "Implicites idéologiques des manuels de français au Maroc : 1979-1993." Rouen, 1996. http://www.theses.fr/1996ROUEL264.

Full text
Abstract:
La reconstitution des visées didactiques et sociologiques, à partir des éléments multiples rencontrés dans les textes proposés aux collégiens marocains, renseigne sur des préoccupations latentes. Dans ce travail, nous avons montré par des exemples nombreux et significatifs les valeurs que les manuels de français veulent installer chez ceux à qui ils s'adressent. L'enracinement de l'apprenant dans son environnement socioculturel ne permet pas une réelle ouverture sur la civilisation de la langue-cible et contribue par conséquent à la méconnaissance de soi et de l'autre
Starting from the many elements found in the texts proposed to Moroccan lower-secondary pupils, the reconstruction of the didactic and sociological aims sheds light on some latent preoccupations. In this thesis, we have shown, through many significant examples, the values that the French textbooks seek to instil in those they address. Rooting the learner in his sociocultural environment does not allow a real openness to the civilisation of the target language and consequently contributes to ignorance of oneself and of the other.
APA, Harvard, Vancouver, ISO, and other styles
49

Prévost, Stéphanie. "Modélisation implicite et visualisation multi-échelle par squelette à union de boules et graphe de recouvrement." Reims, 2001. http://www.theses.fr/2001REIMS014.

Full text
Abstract:
In order to meet the needs of biologists faced with the evolution of data-acquisition systems, we have developed a software platform intended, in time, to allow the study, manipulation and analysis of spatio-temporal biomedical data, their needs having guided our choices. The aim of this thesis is therefore to propose a hybrid and global system: hybrid through its use of tools drawn from image analysis as well as image synthesis, such as the Euclidean distance map, showing that close collaboration between these two fields is possible and fruitful for speeding up the processes involved; global through its ability to carry out, from a single model, different applications useful to biologists. The cornerstone of this system is the implicit union-of-balls model, whose main characteristics are to be exact, compact, and dual between the spaces Z3 and R3. Through its first two properties, this model allows us to address the problem of the quantity of information to be processed while preserving the same quality. . .
APA, Harvard, Vancouver, ISO, and other styles
50

Pechoux, Romain. "Analyse de la complexité des programmes par interprétation sémantique." Phd thesis, Institut National Polytechnique de Lorraine - INPL, 2007. http://tel.archives-ouvertes.fr/tel-00321917.

Full text
Abstract:
Many approaches developed by the Implicit Computational Complexity (ICC) community make it possible to analyse the resources needed for the proper execution of algorithms. In this thesis, we are particularly interested in controlling resources by means of semantic interpretations.
After briefly recalling the notion of quasi-interpretation as well as the various properties and characterisations that follow from it, we present the advances obtained in the study of this tool: we study the synthesis problem, which consists in finding a quasi-interpretation for a given program, and then address the question of the modularity of quasi-interpretations. Modularity makes it possible to reduce the complexity of the synthesis procedure and to capture a larger number of algorithms. After mentioning various extensions of quasi-interpretations to reactive, bytecode and higher-order programming languages, we introduce the sup-interpretation. This notion generalises the quasi-interpretation and is used in resource-control criteria in order to study the complexity of a larger number of algorithms, including algorithms over infinite data and divide-and-conquer algorithms. We combine this notion with various termination criteria such as RPO orderings, dependency pairs and the size-change principle, and we compare it with the notion of quasi-interpretation. Moreover, after characterising small parallel complexity classes, we give some heuristics for synthesising sup-interpretations without the subterm property, that is, sup-interpretations that are not quasi-interpretations. Finally, in a last chapter, we adapt sup-interpretations to object-oriented languages, thereby obtaining various criteria for controlling the resources of an object program and its methods.
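As a hedged illustration of the kind of object studied (a textbook example, not taken from the thesis itself): for the rewrite rules add(0, y) → y and add(s(x), y) → s(add(x, y)), the assignment [0] = 0, [s](X) = X + 1, [add](X, Y) = X + Y is monotone, has the subterm property, and bounds each right-hand side, which is exactly the quasi-interpretation condition:

\[
[\mathtt{add}(\mathtt{0}, y)] = 0 + Y \ge Y = [y],
\qquad
[\mathtt{add}(\mathtt{s}(x), y)] = (X + 1) + Y \ge (X + Y) + 1 = [\mathtt{s}(\mathtt{add}(x, y))] .
\]

Such an interpretation certifies that the size of add's output is bounded (here linearly) in the size of its inputs.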
APA, Harvard, Vancouver, ISO, and other styles