
Dissertations / Theses on the topic 'Multiples priors'


Consult the top 50 dissertations / theses for your research on the topic 'Multiples priors.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Lacaussade, Charles-Thierry. "Evaluation d'actifs financiers et frictions de marché." Electronic Thesis or Diss., Université Paris sciences et lettres, 2024. http://www.theses.fr/2024UPSLD021.

Full text
Abstract:
This thesis aims to provide innovative theoretical and empirical methods for valuing securities to economics researchers, market makers, and market participants, including brokers, dealers, asset managers, and regulators. We propose an extension of the Fundamental Theorem of Asset Pricing (FTAP) tailored to markets with financial frictions. Hence, our asset pricing methodologies allow for more tractable bid and ask prices, as observed in financial markets. This thesis provides both theoretical models and an empirical application of the pricing rule with bid-ask spreads. In the first chapter, we introduce two straightforward closed-form pricing expressions for securities in two-date markets, encompassing a variety of frictions (transaction costs, taxes, commission fees). This result relies on a novel absence-of-arbitrage condition tailored to markets with frictions that considers potential buy and sell strategies. Furthermore, both asset pricing models rely on non-additive probability measures. The first is a Choquet pricing rule, for which we offer a particular case adapted for calibration, and the second is a multiple-priors pricing rule. In the second chapter, as a step toward generalizing our asset pricing models, we provide the necessary and sufficient conditions for multi-period pricing rules characterized by bid-ask spreads. We extend the multi-period version of the Fundamental Theorem of Asset Pricing by assuming the existence of market frictions. We show that it is possible to model a dynamic multi-period pricing problem with a one-stage pricing problem when the filtration is frictionless, which is equivalent to assuming the martingale property, which is in turn equivalent to assuming price consistency. Finally, in the third chapter, we give the axiomatization of a particular class of Choquet pricing rules, namely rank-dependent pricing rules, assuming the absence of arbitrage and put-call parity. Rank-dependent pricing rules have the appealing feature of being easily calibrated because the non-additive probability measure takes the form of a distorted objective probability. Therefore, we offer an empirical study of these rank-dependent pricing rules through a parametric calibration on market data to explore the impact of market frictions on prices. We also study the empirical validity of put-call parity. Furthermore, we investigate the impact of time to expiration (time value) and moneyness (intrinsic value) on the shape of the distortion function. The resulting rank-dependent pricing rules always exhibit greater accuracy than the benchmark (FTAP). Finally, we relate market frictions to the market's risk aversion.
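As a concrete illustration of the rank-dependent idea described in this abstract, and not the thesis's own calibrated model, the following sketch prices a three-state claim with a hypothetical power distortion of the objective probabilities; the concave distortion produces an ask price above, and a bid price below, the frictionless expected value:

```python
import numpy as np

# Minimal sketch (not the thesis's model) of a rank-dependent Choquet
# pricing rule: objective probabilities of a discrete payoff are distorted
# by a hypothetical concave function w, generating a bid-ask spread.

def w(p, gamma=0.7):
    """Hypothetical power distortion of the objective probability."""
    return p ** gamma

def choquet(payoffs, probs, gamma=0.7):
    """Choquet expectation: decision weights from distorted tail probabilities."""
    order = np.argsort(payoffs)[::-1]                 # best outcome first
    x = np.asarray(payoffs, float)[order]
    p = np.asarray(probs, float)[order]
    tails = w(np.cumsum(p), gamma)                    # w(P[X >= x_(i)])
    weights = np.diff(np.concatenate(([0.0], tails)))
    return float(x @ weights)

payoffs = [90.0, 100.0, 110.0]                        # invented contingent claim
probs = [0.3, 0.4, 0.3]

ask = choquet(payoffs, probs)                         # price asked from a buyer
bid = -choquet([-v for v in payoffs], probs)          # price bid to a seller
mid = float(np.dot(payoffs, probs))                   # frictionless benchmark
print(f"bid {bid:.2f} < E[X] {mid:.2f} < ask {ask:.2f}")
```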
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Ang. "Diffuse optical tomography with multiple priors /." Thesis, Connect to Dissertations & Theses @ Tufts University, 2005.

Find full text
Abstract:
Thesis (Ph.D.)--Tufts University, 2005.
Advisers: David A. Boas; Yaacov Shapira. Submitted to the Dept. of Physics. Includes bibliographical references (leaves 113-126). Access restricted to members of the Tufts University community. Also available via the World Wide Web.
APA, Harvard, Vancouver, ISO, and other styles
3

Blanchard, Romain. "Application du contrôle stochastique en théorie de la décision avec croyances multiples et non dominées en temps." Thesis, Reims, 2017. http://www.theses.fr/2017REIMS006/document.

Full text
Abstract:
This dissertation revolves around three general themes: uncertainty, utility, and no-arbitrage. In the first chapter, we establish the existence of an optimal portfolio for an investor trading in a multi-period, discrete-time financial market without uncertainty and maximising the expected utility of terminal wealth. We consider general non-concave, non-smooth random utility functions defined on the positive half-line. The proof is based on dynamic programming and measure-theoretic tools. In the next three chapters, we introduce the concept of Knightian uncertainty and adopt the multiple-priors, non-dominated, discrete-time framework introduced by B. Bouchard and M. Nutz (Arbitrage and duality in nondominated discrete-time models). In this setting, in the second chapter we study the notion of quasi-sure no-arbitrage introduced by Bouchard and Nutz and propose two equivalent formulations: a quantitative and a geometric characterisation. We also introduce a stronger no-arbitrage condition that simplifies some of the measurability difficulties. In the third chapter, we build on the results of the previous chapter to study the maximisation of multiple-priors, non-dominated, worst-case expected utility for investors trading in a multi-period, discrete-time financial market, for general concave utility functions defined on the positive half-line and unbounded from above. The proof again uses a dynamic programming framework together with measurable selection. Finally, the last chapter formulates a utility indifference pricing model for an investor trading in a multi-period, discrete-time financial market. We prove that, under suitable conditions, the multiple-priors utility indifference prices of a contingent claim converge to its multiple-priors superreplication price.
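To make the worst-case (multiple-priors) objective in this abstract concrete, here is a toy sketch rather than the dissertation's model: a one-period market with three states, an invented finite set of priors, and a log investor who picks the risky share maximising the minimum expected utility over the priors by grid search:

```python
import numpy as np

# Toy sketch of worst-case expected utility maximisation over a finite,
# invented set of priors; all numbers are illustrative only.

returns = np.array([-0.2, 0.0, 0.3])          # hypothetical state returns
priors = np.array([[0.5, 0.3, 0.2],           # hypothetical belief set
                   [0.3, 0.4, 0.3],
                   [0.2, 0.3, 0.5]])

def worst_case_utility(alpha, wealth=1.0):
    """min over priors of E[log terminal wealth] with fraction alpha at risk."""
    terminal = wealth * (1.0 + alpha * returns)
    if np.any(terminal <= 0):                 # log utility needs positive wealth
        return -np.inf
    return min(float(p @ np.log(terminal)) for p in priors)

grid = np.linspace(0.0, 1.0, 1001)
best = max(grid, key=worst_case_utility)      # robust (maxmin) optimum
print(f"robust optimal risky share: {best:.3f}")
```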
APA, Harvard, Vancouver, ISO, and other styles
4

Dumont, Julien. "Optimisation conjointe de l'émetteur et du récepteur par utilisation des a priori du canal dans un contexte MIMO." Marne-la-Vallée, 2006. http://www.theses.fr/2006MARN0310.

Full text
Abstract:
In this thesis, we address various aspects of the use of information about the propagation environment, in a MIMO context, in order to optimise the transmitter and the receiver(s). The ideal situation, in which the channel is known perfectly and instantaneously at the transmitter and the corresponding strategies are defined immediately, is an extremely strong assumption; this motivates the search for methods that use quantities that are simpler and/or more robust to estimate, over practical time scales. Here we have sought precisely to describe or provide strategies that use various kinds of prior information about the channel. First, we established a strategy for achieving the capacity of separable, correlated Rician channels (Chapter 1). To this end, we first derived a deterministic expression for the mutual information of such channels, taking certain statistics of the environment into account, and then maximised it with respect to the covariance of the inputs; it is therefore a transmission strategy. This was also an opportunity to use mathematical tools from the theory of large random matrices, which gives a fine demonstration of its capabilities here by answering a problem of great theoretical complexity. Next, we evaluated the impact of a practical implementation of certain transmission strategies in broadcast systems, first to identify criteria for choosing coders for in-situ use, and then to determine whether the use of such strategies at the transmitter is sufficiently robust to estimation errors, or whether it would be more appropriate to use TDMA (Chapter 2). The broadcast strategy, often considered very sensitive to the channel and to channel errors, turns out to be relatively robust and efficient in a realistic implementation, even more so than the TDMA solution, even though the latter requires only minimal channel information (the SINR of each receiver). Finally, for the practical case of adjacent-channel interference, whose notable influence on UMTS we have demonstrated, we studied how certain channel parameters can support the decision for an appropriate reception strategy (Chapter 3). Simply knowing the distance between the mobile and the interfering base station makes it possible to choose between different receiver-side solutions, including several that we propose, to better combat ACI. This decision can moreover be taken by the transmitter if it has this distance available. We see here how a simple piece of information about the channel can be used by a receiver to manage channel variations and the quality of its link. It is worth favouring transmission strategies in the sense that, since the resources available at the base station are often greater, shifting a certain amount of processing, and hence complexity, to the transmitter reduces the computational demands on the receiver. The receiver can then devote more resources to other tasks, or simply increase its battery life, which, let us recall, is one of the main limitations of the receivers envisaged for next-generation systems.
The solutions developed in this work go in this direction and help to show that, even for rather dissimilar approaches and problems, exploiting partial information about the channel is a solution from which a significant improvement in the performance of MIMO systems can generally be expected.
APA, Harvard, Vancouver, ISO, and other styles
5

McShane, Charlene. "Prior medical history, drug exposure and risk of multiple myeloma." Thesis, Queen's University Belfast, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.678815.

Full text
Abstract:
To date, very little is known about the aetiology of the plasma cell disorder multiple myeloma (MM) and its precursor condition monoclonal gammopathy of undetermined significance (MGUS). Chronic antigenic stimulation and, more recently, medications have been investigated as potential aetiological risk factors; however, findings from observational studies have been largely inconsistent. This thesis aimed to explore the impact of medical history and drug exposure on the risk of developing MGUS and MM. A systematic review of the literature revealed an elevated risk of MGUS/MM in association with prior autoimmune disease, in particular pernicious anaemia. The findings of this study were further supported by a population-based nested case-control study carried out within the UK Clinical Practice Research Datalink (CPRD). Similarly, an increased risk of MGUS and MM following exposure to common community-acquired infections was observed in studies carried out within the CPRD and the USA SEER-Medicare dataset. Autoimmune diseases and infections diagnosed after MGUS were not associated with progression to MM or associated lymphoproliferative disorders within the CPRD dataset. Oral statin and bisphosphonate use was investigated as a risk factor for the development of MGUS/MM and MGUS progression using the UK CPRD dataset. While there was evidence of a reduced risk of MGUS/MM in association with oral statin use, an increased risk of both MGUS and MM was observed among oral bisphosphonate users, most likely as a result of detection bias and/or reverse causality. Post-diagnostic statin use was also associated with a reduced risk of MGUS progression to any lymphoproliferative disorder, but not MM. Overall, the studies conducted as part of this thesis support a role for chronic antigenic stimulation in the development of MGUS and MM, and suggest a potential role for statins as chemopreventive agents in the MGUS/MM setting. Further research is, however, warranted to confirm these findings.
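For readers unfamiliar with the case-control methodology behind these results, the sketch below computes an odds ratio with a 95% confidence interval from an invented 2x2 table; the counts are purely illustrative and not from the thesis:

```python
import math

# Toy odds-ratio calculation of the kind underlying nested case-control
# studies; the 2x2 counts below are invented for illustration only.

exposed_cases, unexposed_cases = 120, 880        # e.g. prior autoimmune disease
exposed_controls, unexposed_controls = 300, 3700

odds_ratio = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Standard error of log(OR) via the Woolf method, then a 95% Wald interval.
se_log_or = math.sqrt(sum(1.0 / n for n in (exposed_cases, unexposed_cases,
                                            exposed_controls, unexposed_controls)))
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```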
APA, Harvard, Vancouver, ISO, and other styles
6

Nordin, Henrik, and Gustav Klockby. "Bestämningsfaktorer till regionala bostadspriser : En analys av de svenska länen för perioden 1993-2012." Thesis, Linköpings universitet, Nationalekonomi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-111160.

Full text
Abstract:
The housing market is one of the largest asset markets in a country. Changes in housing prices therefore have a big impact on the individual household, the financial system and the economy as a whole. Owing to the housing market's vital role in society, many scientific studies have sought to illuminate the dynamics of the Swedish housing market. These earlier studies have more often than not taken a metropolitan perspective or compared the Swedish housing market with other countries. This study, however, divides the Swedish housing market to the regional county level with the purpose of analysing the determinants of housing prices in terms of county-specific variables. By analysing housing prices through county-specific factors, a contributing goal of this study is to deepen the understanding of the dynamics of the Swedish housing market. In this study we used multiple regression in order to work with panel data. The fixed effects model fitted our purpose well, which is why that kind of model was used to estimate housing prices for each Swedish county. The conclusions drawn in this study are that disposable income, population density and employment rate are all statistically significant at the one percent level in explaining housing prices at the county level. We have also found that, during the observed period, the relative differences in housing prices between the counties have increased. Finally, the differences found between actual housing prices and estimated housing prices can be explained by the assumption that the housing market is driven by emotions and speculation.
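A minimal sketch of the within (fixed-effects) panel estimator the study relies on, run on synthetic data; the regressor names mirror the abstract, but every number below is fabricated purely for illustration:

```python
import numpy as np
import pandas as pd

# Synthetic panel: counties indexed by i, years by t, with unobserved
# county effects that the within transformation removes.

rng = np.random.default_rng(0)
n_counties, n_years = 21, 20
county = np.repeat(np.arange(n_counties), n_years)

df = pd.DataFrame({
    "county": county,
    "income": rng.normal(size=county.size),
    "density": rng.normal(size=county.size),
    "employment": rng.normal(size=county.size),
})
effects = rng.normal(size=n_counties)            # county heterogeneity
df["price"] = (0.8 * df["income"] + 0.5 * df["density"]
               + 0.3 * df["employment"] + effects[county]
               + rng.normal(scale=0.1, size=county.size))

# Within transformation: demean every variable by county, then plain OLS.
demeaned = df.groupby("county").transform(lambda s: s - s.mean())
X = demeaned[["income", "density", "employment"]].to_numpy()
y = demeaned["price"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["income", "density", "employment"], beta.round(3))))
```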
APA, Harvard, Vancouver, ISO, and other styles
7

Gustafsson, Alexander, and Sebastian Wogenius. "Modelling Apartment Prices with the Multiple Linear Regression Model." Thesis, KTH, Matematisk statistik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-146735.

Full text
Abstract:
This thesis examines the factors that are of most statistical significance for the sales prices of apartments in the Stockholm City Centre. The factors examined are address, area, balcony, construction year, elevator, fireplace, floor number, maisonette, monthly fee, penthouse and number of rooms. On the basis of this examination, a model for predicting apartment prices is constructed. In order to evaluate how the factors influence the price, this thesis analyses sales statistics; the mathematical method used is the multiple linear regression model. In a minor case study and literature review, included in this thesis, the relationship between proximity to public transport and apartment prices in Stockholm is examined. The result of this thesis is that it is possible to construct a model, from the factors analysed, which can predict apartment prices in the Stockholm City Centre with a coefficient of determination of 91% and a 95% confidence interval of two million SEK. Furthermore, the conclusion can be drawn that the model predicts lower-priced apartments more accurately. In the case study and literature review, the results support the hypothesis that proximity to public transport has a positive effect on an apartment's price. However, such a variable should be treated with caution because the purpose of the modelling differs between an individual application and a socio-economic application.
APA, Harvard, Vancouver, ISO, and other styles
8

Clapham, Eric S. "Picture priming multiple primes under conditions of normal and limited awareness /." abstract and full text PDF (UNR users only), 2009. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3355576.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Franciscani, Juliana de Fátima [UNESP]. "Consenso Iterativo: geração de implicantes primos para minimização de funções booleanas com múltiplas saídas." Universidade Estadual Paulista (UNESP), 2016. http://hdl.handle.net/11449/144517.

Full text
Abstract:
With the evolution and spread of equipment built with microtechnology and nanotechnology, ever smaller, more efficient and lower-power circuits are needed. Methods for minimizing Boolean functions are relevant because they enable the optimization of logic circuits, generating circuits with the same functionality but minimized. Studies in the area of Boolean function minimization have been carried out for a long time and are being adapted to new technologies. The generation of the prime implicants of a Boolean function is one of the steps toward covering the minterms of the function and, consequently, toward obtaining the minimum-cost function. In this work, the first phase of the Quine-McCluskey method for Boolean functions with multiple outputs (QMM) was implemented for comparison with the proposed methods GPMultiplo and MultiGeraPlex (based on the philosophy of the GeraPlex algorithm). The proposed methods generate the prime implicants of a Boolean function with multiple outputs and use the iterative consensus operation to compare two terms. The results obtained by comparing GPMultiplo, MultiGeraPlex and the first phase of the QMM method show that the proposed methods are more feasible and advantageous, requiring less execution time and memory, generating fewer implicants, and performing fewer comparisons between terms.
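To illustrate the core operation named in this abstract, here is a toy single-output version of the consensus step; terms are cubes written per variable as '0', '1' or '-' (don't care), and the thesis's methods extend this idea to multiple outputs:

```python
# Toy single-output sketch of the iterative-consensus operation: the
# consensus of two cubes exists iff they conflict in exactly one position;
# that position becomes don't-care and the remaining literals are merged.

def consensus(a, b):
    """Consensus cube of two terms, or None if they conflict in != 1 position."""
    conflicts = [i for i, (x, y) in enumerate(zip(a, b))
                 if '-' not in (x, y) and x != y]
    if len(conflicts) != 1:
        return None
    merged = [('-' if i in conflicts
               else x if y == '-'          # keep the more specific literal
               else y if x == '-'
               else x)                     # x == y here
              for i, (x, y) in enumerate(zip(a, b))]
    return ''.join(merged)

# x1*x3' (1-0) and x1*x2*x3 (111) conflict only in x3; their consensus
# is x1*x2 (11-), a candidate prime implicant.
print(consensus("1-0", "111"))   # -> 11-
print(consensus("10-", "01-"))   # -> None (conflicts in two positions)
```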
APA, Harvard, Vancouver, ISO, and other styles
10

Franciscani, Juliana de Fátima. "Consenso Iterativo : geração de implicantes primos para minimização de funções booleanas com múltiplas saídas /." Ilha Solteira, 2016. http://hdl.handle.net/11449/144517.

Full text
Abstract:
Advisor: Alexandre Cesar Rodrigues Silva
With the evolution and spread of equipment built with microtechnology and nanotechnology, ever smaller, more efficient and lower-power circuits are needed. Methods for minimizing Boolean functions are relevant because they enable the optimization of logic circuits, generating circuits with the same functionality but minimized. Studies in the area of Boolean function minimization have been carried out for a long time and are being adapted to new technologies. The generation of the prime implicants of a Boolean function is one of the steps toward covering the minterms of the function and, consequently, toward obtaining the minimum-cost function. In this work, the first phase of the Quine-McCluskey method for Boolean functions with multiple outputs (QMM) was implemented for comparison with the proposed methods GPMultiplo and MultiGeraPlex (based on the philosophy of the GeraPlex algorithm). The proposed methods generate the prime implicants of a Boolean function with multiple outputs and use the iterative consensus operation to compare two terms. The results obtained by comparing GPMultiplo, MultiGeraPlex and the first phase of the QMM method show that the proposed methods are more feasible and advantageous, requiring less execution time and memory, generating fewer implicants, and performing fewer comparisons between terms.
Master's
APA, Harvard, Vancouver, ISO, and other styles
11

Gil, Pelluch Laura. "Effects of personal epistemology beliefs, task conditions and prior knowledge on understanding of multiple texts." Doctoral thesis, Universitat de València, 2009. http://hdl.handle.net/10803/10236.

Full text
Abstract:
One of the major challenges of a knowledge society is that students, as well as other citizens, must learn to understand and integrate information from multiple textual sources. Still, the task and reader characteristics that may facilitate or constrain such intertextual processes are not well understood by researchers. In four studies, we compare the effects of summary and argument essay tasks when undergraduates read seven different texts on a particular scientific topic, and we examine whether these effects are moderated by some characteristics of the reader. In the first study, we explore and compare the dimensionality of personal epistemology with respect to climate change across the contexts of Norwegian and Spanish students. Additionally, we examine relationships between topic-specific epistemic beliefs and the variables of gender, topic knowledge, and topic interest in the two contexts. Even though considerable cross-cultural generalizability in dimensionality was demonstrated, this research also draws attention to the cultural embeddedness of topic-specific epistemic beliefs. In the second study, we compare the effects of summary and argument tasks on the students' comprehension and integration of information about climate change and, using the Spanish results of the first study, we examine whether the effect of tasks might be influenced by students' epistemic beliefs. Contrary to our predictions, we found that an instruction to write summaries may lead to better understanding and integration than an instruction to write argument essays. We also found that beliefs about the certainty of knowledge can, in some instances, moderate the effect of task on comprehension performance. The third and fourth experiments were designed to clarify previous conflicting findings regarding the effects of summary and argument tasks on the understanding of multiple texts. We examine whether the effect of both tasks may depend on some characteristics of the learning situation (i.e., reading amount and reading environment) or on the reader's prior knowledge of the topic. Results showed that an argument task is not always beneficial in comparison to a summary task and indicated that differences in prior knowledge can influence the effect of task on both surface and deep understanding of multiple documents. Educational implications are discussed.
APA, Harvard, Vancouver, ISO, and other styles
12

Alhajri, Rana Ali. "Integrating multiple individual differences in web-based instruction." Thesis, Brunel University, 2014. http://bura.brunel.ac.uk/handle/2438/8514.

Full text
Abstract:
There has been an increasing focus on web-based instruction (WBI) systems which accommodate individual differences in educational environments. Many of those studies have focused on investigating learners' behaviour to understand their preferences, performance and perception when using hypermedia systems. The studies reviewed in this thesis focus extensively on performance measurement attributes such as the time a user spends in the system, the score gained, and the number of pages visited in the system. However, there is a dearth of studies which explore the relationships between such attributes in measuring performance level. Statistical analysis and data mining techniques were used in this study. We built a WBI program based on existing designs which accommodated learners' preferences. We evaluated the proposed system by comparing its results with related studies. Then, we investigated the impact of related individual differences on learners' preferences, performance and perception after interacting with our WBI program. We found that some individual differences, and their combination, had an impact on learners' preferences when choosing navigation tools. Consequently, it was clear that the related individual differences altered a learner's preferences. Thus, we investigated further to understand how multiple individual differences (Multi-ID) could affect learners' preferences, performance and perception. We found that Multi-ID clearly altered the learner's preferences and performance. Thus, designers of WBI applications need to consider combinations of individual differences rather than these differences individually. Our findings also showed that the relationships between attributes had an impact on measuring the performance level of learners with Multi-ID. The key contribution of this study lies in the following three aspects: firstly, investigating the impact of our proposed system, which uses three system features in its design, on a learner's behaviour; secondly, exploring the influence of Multi-ID on a learner's preferences, performance and perception; and thirdly, combining the three measurement attributes to understand performance level.
APA, Harvard, Vancouver, ISO, and other styles
13

BARBOSA, Ana Clara de Oliveira Ferraz. "Avaliação de critérios de compatibilidade entre pares de primers para otimização de sistemas multiplex de genotipagem." Universidade Federal de Goiás, 2010. http://repositorio.bc.ufg.br/tede/handle/tde/1291.

Full text
Abstract:
The progress of molecular biology and genetics has led to the appearance of several molecular markers that detect genetic polymorphism directly in the DNA. Among these markers are the microsatellites (SSR), which stand out for their high degree of polymorphism. The use of these markers for individual genotyping has evolved toward multiplex systems, which allow many SSR fragments to be detected and analyzed simultaneously. There are currently several articles in the literature discussing the criteria to be used in primer design for PCR, and various software tools are available for this purpose. However, there are still few studies and tools for analysing the compatibility between pairs of primers for use in multiplex systems, where multiple fragments are amplified simultaneously by PCR. This work evaluated different compatibility criteria between pairs of primers. A set of 74 combinations of primer pairs, involving the amplification of 94 SSR loci, was evaluated in duplex systems. The same combinations were evaluated according to different criteria, including the degree of complementarity between primers, the magnitude of the differences in denaturation temperatures (Tm), and the tendency toward annealing between primer pairs based on the Gibbs free energy resulting from their association. The comparison between the different criteria allowed the identification of a set of criteria with a positive predictive value of 94%. These criteria were implemented in a software tool called Multiplexer which, from the analysis of primer-pair sequences, suggests compatible combinations for use in multiplex genotyping systems. The use of this tool can considerably reduce the laboratory costs of PCR-based genotyping.
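Two of the compatibility criteria discussed above, similar melting temperatures and low 3'-end complementarity, can be sketched as follows; the Wallace-rule Tm and the thresholds here are textbook simplifications, not the thesis's calibrated criteria:

```python
# Illustrative primer-pair compatibility check under simplified criteria.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def wallace_tm(primer):
    """Rough melting temperature: Tm = 2*(A+T) + 4*(G+C) degrees C (Wallace rule)."""
    at = sum(primer.count(b) for b in "AT")
    gc = sum(primer.count(b) for b in "GC")
    return 2 * at + 4 * gc

def three_prime_complementarity(p1, p2, window=5):
    """Count complementary bases when the two 3' ends are aligned head-to-head."""
    tail1, tail2 = p1[-window:], p2[-window:][::-1]
    return sum(COMPLEMENT[x] == y for x, y in zip(tail1, tail2))

def compatible(p1, p2, max_tm_gap=5, max_comp=3):
    """Accept a pair whose Tm values are close and whose 3' ends barely pair."""
    return (abs(wallace_tm(p1) - wallace_tm(p2)) <= max_tm_gap
            and three_prime_complementarity(p1, p2) <= max_comp)

print(compatible("AGCTGACCTGAAGT", "TTCAGGCATGGATC"))   # True for this toy pair
```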
APA, Harvard, Vancouver, ISO, and other styles
14

Söderholm, Simon. "The Complex Genetics of Multiple Sclerosis : A preliminary study of MS-associated SNPs prior to a larger genotyping project." Thesis, Linköpings universitet, Institutionen för fysik, kemi och biologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129423.

Full text
Abstract:
Biomedical research has been revolutionized by recent technological advances, both in the fields of molecular biology and computer science, turning biomolecular and genetic research into "big data science". One of the main objectives has been to improve our understanding of complex human diseases. Among those diseases, multiple sclerosis (MS) is considered one of the most common. MS is a chronic autoimmune disease that causes inflammation of and damage to the central nervous system. In this study, a set of bioinformatics analyses was conducted on SNP data as an initial step to gain more information prior to an upcoming genotyping project. The results showed extensive regulatory properties for the 761 selected SNPs, which is consistent with current scientific knowledge, and also identified another 332 SNPs in linkage with these. During the study, however, some issues were also identified, which need to be addressed going forward.
APA, Harvard, Vancouver, ISO, and other styles
15

Coetzee, G. J. "A comparison of the Philips price earnings multiple model and the actual future price earnings multiple of selected companies listed on the Johannesburg stock exchange." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51561.

Full text
Abstract:
Thesis (MBA)--Stellenbosch University, 2000.
The price earnings multiple is a valuation ratio and is published widely in the media as a comparative instrument for investment decisions. It is used to compare companies' valuation levels and their future growth/franchise opportunities. There have been numerous research studies on the price earnings multiple, but no study has been able to design or derive a model that successfully predicts the future price earnings multiple, where the current stock price and the following year-end earnings per share are used. The most widely accepted method of share valuation is to discount the future cash flows at an appropriate discount rate. Popular and widely used stock valuation models are the Dividend Discount Model and the Gordon Model. Both models assume that future dividends are the cash flows to the shareholder. Thomas K. Philips, the chief investment officer at Paradigm Asset Management in New York, constructed a valuation model at the end of 1999, which he published in The Journal of Portfolio Management. The model (the Philips price earnings multiple model) was derived from the Dividend Discount Model and calculates an implied future price earnings multiple. The Philips price earnings multiple model includes the following independent variables: the cost of equity, the return on equity and the dividend payout ratio. Each variable in the model is a calculated present year-end point value, which was used to calculate the implied future price earnings multiple (present-year stock price divided by following year-end earnings per share). This study used five years (1995-2000) of historical year-end data to calculate the implied and actual future price earnings multiples. Of the 225 Johannesburg Stock Exchange listed companies studied, only 36 met the criteria of the Philips price earnings multiple model. Correlation and population mean tests were conducted on the implied and constructed data sets. These showed that the Philips price earnings multiple model was unsuccessful in predicting the future price earnings multiple at a statistical significance level of 0.20. The Philips price earnings multiple model is substantially more complex than the Dividend Discount Model and involves greater restrictions and more assumptions. It is a theoretical instrument that can be used to analyse hypothetical companies (for which all model assumptions and restrictions are met). The Philips price earnings multiple model thus has little to no applicability in the practical valuation of the stock prices of Johannesburg Stock Exchange listed companies.
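The abstract does not reproduce Philips's formula, so the sketch below instead shows the closely related Gordon-model implied forward P/E built from the same three inputs (cost of equity, return on equity, payout ratio), for intuition only:

```python
# Not Philips's exact formula (the thesis gives that); a Gordon-model
# implied forward P/E from the same three inputs, for intuition only:
#   g = ROE * (1 - payout),   P0 / E1 = payout / (r - g),   valid for r > g.

def implied_forward_pe(cost_of_equity, roe, payout):
    growth = roe * (1.0 - payout)          # sustainable growth rate
    if cost_of_equity <= growth:
        raise ValueError("requires cost of equity > growth")
    return payout / (cost_of_equity - growth)

# Invented inputs: r = 14%, ROE = 18%, payout = 40% -> implied P/E of 12.5.
print(implied_forward_pe(cost_of_equity=0.14, roe=0.18, payout=0.4))
```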
APA, Harvard, Vancouver, ISO, and other styles
16

Kwadjane, Jean-Marc. "Apport de la connaissance a priori de la position de l'émetteur sur les algorithmes MIMO adaptatifs en environnement tunnel pour les métros." Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10208/document.

Full text
Abstract:
This thesis focuses on the design of adaptive algorithms for wireless communications with multiple antennas at the transmitter and the receiver (MIMO) in subway tunnel environments. MIMO technology meets the need for high data rates and robust transmission. In tunnels, however, this performance is reduced by spatial correlation. In this thesis, we studied MIMO precoding algorithms that use channel state information (CSI) at the transmitter. Generally, these algorithms require a feedback link to transmit the CSI. To minimize the loss of spectral efficiency due to the feedback link, we selected precoders from the literature that reduce the feedback rate. We built a complete and realistic simulation chain to evaluate the performance of these precoders, taking into account several levels of quantity and quality of CSI. Simulations were carried out on both theoretical and measured channels. We also assessed the impact of the impulsive noise characteristic of the railway environment. Assuming impulsive noise modelled by a Cauchy law, we propose a receiver adapted to this noise and a theoretical upper bound on the error probability of the max-dmin precoder in uncorrelated environments. The characterization of the MIMO propagation channel in tunnels also provided detailed knowledge of the channel characteristics as a function of position in the tunnel. We therefore propose a precoder based on knowledge of the correlation matrix and study the possibility of removing the feedback link thanks to knowledge of the channel statistics based on the position in the tunnel.
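As a rough illustration of the robustness idea behind a Cauchy-adapted receiver, and not the thesis's actual design, the sketch below performs maximum-likelihood detection over a tiny real-valued MIMO link by replacing the usual squared-error metric with the Cauchy negative log-likelihood; all sizes, the constellation and the scale gamma are invented:

```python
import numpy as np
from itertools import product

# Toy ML detection under Cauchy-distributed (impulsive) noise: minimise
# sum(log(1 + (residual/gamma)^2)) instead of the Gaussian L2 metric.

def cauchy_metric(y, H, x, gamma=1.0):
    """Negative log-likelihood (up to constants) of y = Hx + Cauchy noise."""
    r = y - H @ x
    return float(np.sum(np.log1p((r / gamma) ** 2)))

rng = np.random.default_rng(1)
H = rng.normal(size=(2, 2))                  # known 2x2 real channel
x_true = np.array([1.0, -1.0])               # BPSK symbols per antenna
y = H @ x_true + 0.1 * rng.standard_cauchy(size=2)

# Exhaustive search over the finite alphabet with the robust metric.
candidates = [np.array(c) for c in product([-1.0, 1.0], repeat=2)]
x_hat = min(candidates, key=lambda x: cauchy_metric(y, H, x))
print("detected:", x_hat, "true:", x_true)
```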
APA, Harvard, Vancouver, ISO, and other styles
17

Rivenbark, David. "UNCERTAINTY, IDENTIFICATION, AND PRIVACY: EXPERIMENTS IN INDIVIDUAL DECISION-MAKING." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2266.

Full text
Abstract:
The alleged privacy paradox states that individuals report high values for personal privacy while at the same time reporting behavior that contradicts a high privacy value. This is a misconception. Reported privacy behaviors are explained by asymmetric subjective beliefs. Beliefs may or may not be uncertain, and non-neutral attitudes towards uncertainty are not necessary to explain behavior. This research was conducted in three related parts. Part one presents an experiment in individual decision making under uncertainty. Ellsberg's canonical two-color choice problem was used to estimate attitudes towards uncertainty. Subjects believed bets on the color of the ball drawn from Ellsberg's ambiguous urn were equally likely to pay. Estimated attitudes towards uncertainty were insignificant. Subjective expected utility explained subjects' choices better than uncertainty aversion and the uncertain priors model. A second treatment tested Vernon Smith's conjecture that preferences in Ellsberg's problem would be unchanged when the ambiguous lottery is replaced by a compound objective lottery. The use of an objective compound lottery to induce uncertainty did not affect subjects' choices. The second part of this dissertation extended the concept of uncertainty to commodities where quality and the accuracy of a quality report were potentially ambiguous. The uncertain priors model is naturally extended to allow for potentially different attitudes towards these two sources of uncertainty, quality and accuracy. As they relate to privacy, quality and the accuracy of a quality report are seen as metaphors for online security and consumer trust in e-commerce, respectively. The results of parametric structural tests were mixed. Subjects made choices consistent with neutral attitudes towards uncertainty in both the quality and accuracy domains. However, allowing for uncertainty aversion in the quality domain and not the accuracy domain outperformed the alternative, which only allowed for uncertainty aversion in the accuracy domain. Finally, part three integrated a public-goods game and punishment opportunities with the Becker-DeGroot-Marschak mechanism to elicit privacy values, replicating previously reported privacy behaviors. The procedures developed elicited punishment (consequence) beliefs and information confidentiality beliefs in the context of individual privacy decisions. Three contributions are made to the literature. First, by using cash rewards as a mechanism to map actions to consequences, the study eliminated hypothetical bias as a confounding behavioral factor, which is pervasive in the privacy literature. Econometric results support the "privacy paradox" at levels greater than 10 percent. Second, the roles of asymmetric beliefs and attitudes towards uncertainty were identified using parametric structural likelihood methods. Subjects were, in general, uncertainty neutral and believed "bad" events were more likely to occur when their private information was not confidential. A third contribution is a partial test to determine which uncertain process, loss of privacy or the resolution of consequences, is of primary importance to individual decision-makers. Choices were consistent with uncertainty-neutral preferences in both the privacy and consequences domains.
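The Becker-DeGroot-Marschak mechanism mentioned in part three can be illustrated with a toy simulation (invented payoff numbers, not the study's design): stating one's true value maximises expected payoff, which is what makes the mechanism incentive-compatible for eliciting privacy values:

```python
import numpy as np

# Toy BDM simulation: the subject states an asking price for a piece of
# private information, a random offer is drawn, and the sale happens iff
# offer >= ask. All numbers here are invented for illustration.

rng = np.random.default_rng(42)
true_value = 4.0                          # subject's actual value of privacy
offers = rng.uniform(0, 10, size=200_000)

for ask in (2.0, 4.0, 6.0):               # understate, truthful, overstate
    # If the offer meets the ask, the subject sells and receives the offer;
    # otherwise they keep the item and retain its value.
    payoff = np.where(offers >= ask, offers, true_value)
    print(f"ask={ask:.1f}  expected payoff={payoff.mean():.3f}")

# Truthful reporting (ask == true value) yields the highest expected payoff,
# which is why BDM is incentive-compatible for eliciting values.
```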
Ph.D.
Department of Economics
Business Administration
Economics PhD
APA, Harvard, Vancouver, ISO, and other styles
18

Childress, Duane Allen. "A model for evaluating proposals from multiple vendors which have different prices and lead times." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA306819.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Ball, Nicholas W. "Decay: A Series of Prints Dealing with the Decay of Biomorphic Forms through Multiple States." Kent State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=kent1276637560.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Reeves, S. "Exploring the effect of subliminal single-word and multiple-word primes on working memory performance." Thesis, Canterbury Christ Church University, 2015. http://create.canterbury.ac.uk/14174/.

Full text
Abstract:
This PhD thesis focused on subliminal priming, that is, the presentation of information below conscious awareness (Vernon, 2009), which has been shown to influence both cognitive and affective behaviours. Information can be presented subliminally using both ‘Single-Word’ and ‘Multiple-Word’ written primes, although the two prime types have not yet been compared. Currently there is no reported optimal procedure for the presentation of subliminal stimuli, thus such a comparison could guide future research concerning prime choice. Hence, this thesis empirically compared the effects produced by Single-Word and Multiple-Word primes in a series of experiments. In Experiment 1 96 participants were subliminally stimulated with one of six alternative primes, three Single-Word primes (cognitive: intelligent; affective: one; neutral-control: walking), and three Multiple-Word primes (cognitive: I am intelligent; affective: mommy and I are one; neutral-control: people are walking), and their performance measured on a range of cognitive (e.g., working memory, intelligence, selective attention) and affective (e.g., mood and state anxiety) tasks. Results of Experiment 1 showed no clear change in participants’ intelligence, selective attention, mood, or state anxiety. However, post hoc analysis found participants’ significantly improved their working memory performance following exposure to all positive (e.g., cognitive and affective) subliminal primes, regardless of prime type (i.e., Single-Word and Multiple-Word). Experiment 2 followed this up by exploring the effect of subliminal priming on working memory performance. Sixty participants were primed with one of the six subliminal stimuli to assess whether the non-differential effect between prime types found in Experiment 1 was due to the varied length of time between the end of subliminal exposure and the onset of the task. Results found all participants’ performance iv improved regardless of prime type and prime content and thus was concluded to reflect a practice effect. Experiment 3 considered that the absence of any subliminal priming may have been due to participants’ potential lack of motivation to attain the goal of improving working memory. Hence, 106 participants were primed with one of the six subliminal stimuli and their motivation to achieve this goal was enhanced using falsepositive feedback on performance and reading a false article extract highlighting the benefit of a good working memory. Results found, despite increased motivation to improve working memory, that subliminal priming did not have any effect on performance. Experiment 4 considered whether the specific content of the subliminal stimuli, the short prime-target stimulus onset asynchrony (SOA), or the type of task could be accountable for the null results. Thus, in addition to enhancing participant motivation, the content of the stimuli were refined to become more task-relevant, the prime-target SOA was extended from 14ms to 514ms to allow more time for unconscious processing. Eighty-three participants were primed with one of four subliminal stimuli; two Single-Word primes (memory-specific: remember; neutralcontrol: walking) and two Multiple-Word primes (memory-specific: I can remember well; neutral-control: people are walking), and performance was measured using two working memory tasks. Results found all participants’ performance improved on both working memory tasks regardless of prime type and prime content and was concluded to reflect a practice effect. 
Finally, a meta-analysis conducted on the data from the Conceptual Span Task from all four experiments confirmed an improvement in performance over time but no evidence of any subliminal priming effects. Overall, this thesis found it was not possible to establish a difference between the two prime types, although findings indicate that subliminal priming may not be able to improve performance of the phonological loop component of working memory.
APA, Harvard, Vancouver, ISO, and other styles
21

Rotshtein, Regina. "Coordination of Theory and Evidence and the Role of Personal Epistemology and Prior Knowledge When Reading About the Controversial Topic of Vitamin Supplement Use." University of Toledo / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo156104274336628.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

TRABUCCHI, Marta. "European electricity day ahead market : a multiple time series approach." Doctoral thesis, Università degli studi di Bergamo, 2015. http://hdl.handle.net/10446/31961.

Full text
Abstract:
The energy market reform is a complex restructuring process that first liberalized Member State electricity markets and is gradually fostering their integration into the Single European Market. Even if national markets are still characterized by several differences in production structures, regulation shapes a common market design at the European level and voluntary measures have been adopted to promote market integration. In this framework, Power Exchanges have taken a key role, as shown by the growing volumes traded on their different segments, and electricity price forecasting has become an interesting research field. Up to now, most contributions on short-term forecasting of day-ahead electricity prices do not include the possibility of dynamic interactions between several interconnected markets, even though the recent empirical literature highlights cointegration in the CWE area. After a primer on the economics of electricity markets and an analysis of the regulatory and market framework, the present work proposes a multiple time series approach to electricity price forecasting, joining the two strands of empirical literature on market integration and day-ahead price forecasting. Accounting for the presence of market integration enlarges the model information set, so it may potentially improve forecasting performance. This thesis considers hourly day-ahead electricity prices for eight European countries (Austria, Belgium, France, Germany, Italy, Netherlands, Slovenia and Switzerland) for the period May 2010–July 2013. At present, an in-depth comparison between multiple and simple time series forecasting accuracy does not allow stating that multiple time series models, especially those including potential cointegration relationships between day-ahead electricity markets, greatly improve forecasting performance compared to simple time series models. The adoption of multiple time series may lead to better results only in some hours, while in other hours simple time series models outperform multiple time series ones (especially ramp-up hours in the morning).
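As a rough illustration of the multiple time series approach described above, the following sketch fits a vector error correction model (VECM) to a pair of simulated cointegrated price series and produces a short forecast; the data, lag order and cointegration rank are illustrative assumptions, not the thesis setup.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
T = 1000
trend = np.cumsum(rng.normal(size=T))               # common stochastic trend
p_a = trend + rng.normal(scale=0.5, size=T)         # price series, market A
p_b = 0.8 * trend + rng.normal(scale=0.5, size=T)   # cointegrated market B
prices = np.column_stack([p_a, p_b])

# one cointegration relation, one lagged difference, constant in the relation
res = VECM(prices, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(res.predict(steps=24)[:3])                    # forecast next 24 "hours"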
APA, Harvard, Vancouver, ISO, and other styles
23

Strömberg, Peter, Mattias Hedman, and Madeleine Broberg. "Forecasting the House Price Index in Stockholm County 2011-2014 : A multiple regression analysis of four influential macroeconomic variables." Thesis, Mälardalens högskola, Akademin för hållbar samhälls- och teknikutveckling, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-12600.

Full text
Abstract:
Purpose of the research: The purpose is to forecast the future trend of housing prices in Stockholm County 2011-2014 based on estimated slope coefficients of selected explanatory variables 1993-2010. Thereafter, the obtained forecast is discussed with respect to other non-quantifiable concepts within behavioral economics. Method: Multiple regression technique with a deductive and explorative approach. Empirical data: Quantitative. Conclusion: The future trend of housing prices in Stockholm County is forecasted to be positively sloped throughout the years 2011-2014, but for 2011 the forecast reveals that the increase in house prices will taper off. Nevertheless, behavioral economics offers some insights about the trend in the housing market and suggests that house prices might include a portion of abnormal returns.
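The forecasting procedure described here can be sketched in a few lines: estimate slope coefficients by ordinary least squares on historical data, then apply them to assumed future values of the regressors. The variables and numbers below are synthetic stand-ins, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 72                                   # e.g. quarterly data 1993-2010
X = np.column_stack([
    rng.normal(4, 1, n),                 # mortgage rate (%)
    rng.normal(2, 1, n),                 # GDP growth (%)
    rng.normal(2, 0.5, n),               # inflation (%)
    rng.normal(7, 1, n),                 # unemployment (%)
])
hpi = 100 + X @ np.array([-8.0, 5.0, 1.0, -3.0]) + rng.normal(0, 5, n)

ols = sm.OLS(hpi, sm.add_constant(X)).fit()
x_future = sm.add_constant(np.array([[3.5, 2.5, 2.0, 6.5]]), has_constant="add")
print(ols.predict(x_future))             # point forecast of the index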
APA, Harvard, Vancouver, ISO, and other styles
24

Yang, Yue, and Viorica Gonta. "The relationship between volatility of price multiples and volatility of stock prices : A study of the Swedish market from 2003 to 2012." Thesis, Umeå universitet, Handelshögskolan vid Umeå universitet (USBE), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-72769.

Full text
Abstract:
The purpose of our study was to examine the relationship between the volatility of price multiples and the volatility of stock prices in the Swedish market from 2003 to 2012. Our focus was on the price-to-earnings ratio and the price-to-book ratio. Some previous studies showed a link between price multiples and the volatility of stock prices, which made us question whether there should also be a link between the volatility of the price multiples and the volatility of the stock prices. The importance of this subject is accentuated by the financial crisis, as we provide investors with information regarding the movements of price multiples and stock prices. Moreover, we test whether the volatility of the price multiples can be used to create a prediction model for the volatility of stock prices. We also fill a gap, as there is no previous literature on this topic. We conducted quantitative research using statistical tests, such as the correlation test and the linear regression test. For our data sample we chose the Sweden Datastream index. We first calculated the volatility using the GARCH model and then continued with our statistical tests. The results of our tests showed that there is a relationship between the volatility of the price multiples and the volatility of the stock prices in the Swedish market over the past ten years. Our findings show that the correlation coefficients vary across industries and over time in both strength and direction. The second part of our tests concerns the linear regression tests, mainly calculating the coefficient of determination. Our results show that the volatility of the price multiples does explain changes in the volatility of stock prices. Thus, the volatility of the P/E ratio and the volatility of the P/B ratio can be used in creating a prediction model for the volatility of stock prices. Nevertheless, we also find that this model is best suited when the economic situation is unstable (i.e. crisis, bad economic outlook), as both the correlation coefficient and the coefficient of determination had the highest values in the last five years, with a peak in 2008.
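A minimal sketch of the two-step procedure described in this abstract, with synthetic data standing in for the Swedish series: estimate GARCH(1,1) conditional volatilities for a price multiple and a stock return series, then test the correlation between the two volatility series.

```python
import numpy as np
from arch import arch_model
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
r_stock = rng.normal(0, 1, 500)               # stand-in for stock returns
r_pe = 0.6 * r_stock + rng.normal(0, 1, 500)  # stand-in for P/E changes

fit = lambda r: arch_model(r, vol="GARCH", p=1, q=1).fit(disp="off")
vol_stock = fit(r_stock).conditional_volatility
vol_pe = fit(r_pe).conditional_volatility

corr, p_value = pearsonr(vol_pe, vol_stock)
print(f"correlation={corr:.2f}, p={p_value:.3g}")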
APA, Harvard, Vancouver, ISO, and other styles
25

Hallberg, Robert. "Target Classification Based on Kinematics." Thesis, Linköpings universitet, Reglerteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81216.

Full text
Abstract:
Modern aircraft are getting more and better sensors. As a result of this, the pilots are getting more information than they can handle. To solve this problem one can automate the information processing and instead provide the pilots with conclusions drawn from the sensor information. An aircraft’s movement can be used to determine which class (e.g. commercial aircraft, large military aircraft or fighter) it belongs to. This thesis focuses on comparing three classification schemes: a Bayesian classification scheme with uniform priors, the Transferable Belief Model and a Bayesian classification scheme with entropic priors. The target is modeled by a jump Markov linear system that switches between different modes (fly straight, turn left, etc.) over time. A marginalized particle filter that spreads its particles over the possible mode sequences is used for state estimation. Simulations show that the results from the Bayesian classification scheme with uniform priors and the Bayesian classification scheme with entropic priors are almost identical. The results also show that the Transferable Belief Model is less decisive than the Bayesian classification schemes. This effect is argued to come from the least committed principle within the Transferable Belief Model. A fixed-lag smoothing algorithm is introduced to the filter and it is shown that the classification results are improved. The advantage of having a filter that remembers the full mode sequence (such as the marginalized particle filter) rather than one that just determines the current mode (such as an interacting multiple model filter) is also discussed.
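A toy sketch of the Bayesian classification idea described above: the class posterior is updated recursively from class-conditional likelihoods of an observed kinematic feature. The feature, likelihood models and classes below are illustrative assumptions, not the thesis models.

```python
import numpy as np
from scipy.stats import norm

classes = ["commercial", "large military", "fighter"]
# class-conditional models for an observed turn rate (deg/s) -- assumptions
turn_rate_models = [norm(0.5, 0.5), norm(1.0, 1.0), norm(6.0, 3.0)]

def classify(observations, prior):
    post = np.array(prior, dtype=float)
    for y in observations:
        post *= np.array([m.pdf(y) for m in turn_rate_models])  # Bayes update
        post /= post.sum()
    return post

obs = [5.2, 7.1, 4.8]                  # aggressive manoeuvring
uniform = np.ones(3) / 3
print(dict(zip(classes, classify(obs, uniform).round(3))))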
APA, Harvard, Vancouver, ISO, and other styles
26

Rahal, Abbas. "Bayesian Methods Under Unknown Prior Distributions with Applications to The Analysis of Gene Expression Data." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42408.

Full text
Abstract:
The local false discovery rate (LFDR) is one of many existing statistical methods for analyzing multiple hypothesis testing. As a Bayesian quantity, the LFDR is based on the prior probability of the null hypothesis and a mixture distribution of null and non-null hypotheses. In practice, the LFDR is unknown and needs to be estimated. The empirical Bayes approach can be used to estimate that mixture distribution. Empirical Bayes does not require complete information about the prior and hyperprior distributions, as hierarchical Bayes does. When we do not have enough information at the prior level, then instead of placing a distribution at the hyperprior level as in the hierarchical Bayes model, empirical Bayes estimates the prior parameters from the data, often via the marginal distribution. In this research, we developed new Bayesian methods under unknown prior distributions. A set of adequate prior distributions may be defined using Bayesian model checking by setting a threshold on the posterior predictive p-value, prior predictive p-value, calibrated p-value, Bayes factor, or integrated likelihood. We derive a set of adequate posterior distributions from that set. In order to obtain a single posterior distribution instead of a set of adequate posterior distributions, we use a blended distribution, which minimizes the relative entropy of a set of adequate prior (or posterior) distributions to a "benchmark" prior (or posterior) distribution. We present two approaches to generate a blended posterior distribution, namely updating-before-blending and blending-before-updating. The blended posterior distribution can be used to estimate the LFDR by considering the nonlocal false discovery rate as a benchmark and the different LFDR estimators as an adequate set. The likelihood ratio can often be misleading in multiple testing, unless it is supplemented by adjusted p-values or posterior probabilities based on sufficiently strong prior distributions. In case of unknown prior distributions, they can be estimated by empirical Bayes methods or blended distributions. We propose a general framework for applying the laws of likelihood to problems involving multiple hypotheses by bringing together multiple statistical models. We have applied the proposed framework to data sets from genomics, COVID-19 and other areas.
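The two-groups form of the LFDR mentioned above can be sketched in a few lines: lfdr(z) = π0 f0(z) / f(z), with the mixture density f estimated from the data (here by a kernel density estimate) and π0 treated as known. All numbers below are illustrative, not from the thesis.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(3)
pi0 = 0.9                                    # proportion of true nulls (known here)
z = np.concatenate([rng.normal(0, 1, 9000),  # null z-scores
                    rng.normal(3, 1, 1000)]) # non-null z-scores

f = gaussian_kde(z)                          # estimate of the mixture density f
lfdr = lambda x: np.clip(pi0 * norm.pdf(x) / f(x), 0, 1)
print(lfdr(np.array([0.0, 2.0, 4.0])))       # high near 0, small in the tail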
APA, Harvard, Vancouver, ISO, and other styles
27

Bennett, Maxine Sarah. "Improving the efficiency of clinical trial designs by using historical control data or adding a treatment arm to an ongoing trial." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/271133.

Full text
Abstract:
The most common type of confirmatory trial is a randomised trial comparing the experimental treatment of interest to a control treatment. Confirmatory trials are expensive and take a lot of time in the planning, set-up and recruitment of patients. Efficient methodology in clinical trial design is critical to save both time and money and allow treatments to become available to patients quickly. Often there are data available on the control treatment from a previous trial. These historical data are often used to design new trials, forming the basis of sample size calculations, but are not used in the analysis of the new trial. Incorporating historical control data into the design and analysis could potentially lead to more efficient trials. When the historical and current control data agree, incorporating historical control data could reduce the number of control patients required in the current trial and therefore the duration of the trial, or increase the precision of parameter estimates. However, when the historical and current data are inconsistent, there is a potential for biased treatment effect estimates, inflated type I error and reduced power. We propose two novel weights to assess agreement between the current and historical control data: a probability weight based on tail area probabilities; and a weight based on the equivalence of the historical and current control data parameters. For binary outcome data, agreement is assessed using the posterior distributions of the response probability in the historical and current control data. For normally distributed outcome data, agreement is assessed using the marginal posterior distributions of the difference in means and the ratio of the variances of the current and historical control data. We consider an adaptive design with an interim analysis. At the interim, the agreement between the historical and current control data is assessed using the probability or equivalence probability weight approach. The allocation ratio is adapted to randomise fewer patients to control when there is agreement and revert back to a standard trial design when there is disagreement. The final analysis is Bayesian, utilising the power prior approach with a fixed weight. The operating characteristics of the proposed design are explored and we show how the equivalence bounds can be chosen at the design stage of the current study to control the maximum inflation in type I error. We then consider a design where a treatment arm is added to an ongoing clinical trial. For many disease areas, there are often treatments in different stages of the development process. We consider the design of a two-arm parallel group trial where it is planned to add a new treatment arm during the trial. This could potentially save money, patients, time and resources. The addition of a treatment arm creates a multiple comparison problem. Dunnett (1955) proposed a design that controls the family-wise error rate when comparing multiple experimental treatments to control and determined the optimal allocation ratio. We have calculated the correlation between test statistics for the method proposed by Dunnett when a treatment arm is added during the trial and only concurrent controls are used for each treatment comparison. We propose an adaptive design where the sample sizes of all treatment arms are increased to control the family-wise error rate. We explore adapting the allocation ratio once the new treatment arm is added to maximise the overall power of the trial.
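For a binary outcome, the fixed-weight power prior analysis described above admits a compact closed form: the historical control counts enter a Beta posterior down-weighted by the weight w, so that w = 0 ignores the historical data and w = 1 pools them. The counts and weight below are illustrative assumptions.

```python
from scipy.stats import beta

y0, n0 = 30, 100   # historical control: responses / patients
y, n = 14, 50      # current control: responses / patients
w = 0.5            # fixed power prior weight (e.g. set after the interim check)
a, b = 1, 1        # initial Beta(1, 1) prior

posterior = beta(a + w * y0 + y, b + w * (n0 - y0) + (n - y))
print(posterior.mean(), posterior.interval(0.95))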
APA, Harvard, Vancouver, ISO, and other styles
28

Rahil, Abdulla. "Dispatchable operation of multiple electrolysers for demand side response and the production of hydrogen fuel : Libyan case study." Thesis, De Montfort University, 2018. http://hdl.handle.net/2086/17439.

Full text
Abstract:
Concerns over environmental issues and the depletion of fossil fuels have acted as twin driving forces for the development of renewable energy and its integration into existing electricity grids. The variable nature of renewable energy generators affects the ability to balance supply and demand across electricity networks; however, the use of energy storage and demand-side response techniques is expected to help relieve this situation. One possibility in this regard might be the use of water electrolysis to produce hydrogen while providing industrial-scale DSR services. This would be facilitated by the use of tariff structures that incentivize the operation of electrolysers as dispatchable loads. This research has been carried out to answer the following question: what is the feasibility of using electrolysers to provide industrial-scale demand-side response for grid balancing while producing hydrogen at a competitive price? The hydrogen thus produced can then be used, and indeed sold, as a clean automotive fuel. To these ends, two common types of electrolyser, alkaline and PEM, are examined in considerable detail. In particular, two cost scenarios for system components are considered, namely those for 2015 and 2030. The coastal city of Darnah in Libya was chosen as the basis for this case study, where renewable energy can be produced via wind turbines and photovoltaics (PVs), and where there are currently six petrol stations serving the city that can be converted to hydrogen refuelling stations (HRSs). Under 2015 costs, all scenarios for both PEM and alkaline electrolysers were found to partly meet the project aims, but with a high cost of hydrogen due to high system capital costs, a low social cost of carbon and limited government support. By 2030, however, hydrogen is expected to become a good option for energy storage and clean fuel for several reasons, such as the expected drop in capital costs, improvements in equipment efficiency, and the expectation of a higher social cost of carbon. Penetration of hydrogen into the energy sector requires strong governmental support, by either establishing or modifying policies and energy laws to increasingly support renewable energy usage. Government support could effectively bring forward the date at which hydrogen becomes techno-economically viable (i.e. sooner than 2030).
APA, Harvard, Vancouver, ISO, and other styles
29

CIOCIOLA, GIUSEPPE. "Dynamics of Commodity Prices. A Potential Function Approach with Numerical Implementation." Doctoral thesis, Università degli studi di Bergamo, 2013. http://hdl.handle.net/10446/28630.

Full text
Abstract:
In the present analysis a nonlinear model is discussed in order to capture the presence of several forces acting in commodity markets and the difficulty of disentangling their relative price impacts. Global commodity markets have experienced significant price swings in recent years. Analysts offer two explanations, not mutually exclusive: market forces and speculative expectations. Commodity prices seem to indicate that various factors are acting in a very complex way. We start from one specific feature: the price clustering phenomenon, which is the tendency of prices to concentrate in a number of attraction regions, preferring some values over others. Commodities are in the process of becoming mainstream. The mean-reverting class of diffusion models is not able to model the phenomenon of multiple attraction regions. In the potential function approach the price is modelled as a diffusion process governed by a potential function. A fundamental step is to fit the multimodal density of the invariant distribution. We postulate a parametric form of the distribution in the framework of finite mixture models and the Expectation-Maximization algorithm. A procedure for identifying and estimating the potential function and the diffusion parameter is provided. Applications to crude oil and soybean prices capture the essential characteristics of the data remarkably well. An underlying assumption is that the potential function and long-term volatility do not change with time. New market conditions and new attraction regions can form, changing the shape of the potential and the magnitude of long-term volatility. We investigate changes in the shape of the potential, which reflect new price equilibrium levels (attraction regions) and hence new market conditions. The model allows one to generate copies of the observed price series with the same invariant distribution, useful for applications requiring a large number of independent price trajectories. A goodness-of-fit test for the SDE model is provided, together with a numerical implementation of the analysis.
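A sketch of the estimation idea described above: fit a finite Gaussian mixture to observed prices via EM and read off a potential from the invariant density of dX = -U'(X)dt + σdW, for which p(x) ∝ exp(-2U(x)/σ²), i.e. U(x) = -(σ²/2) log p(x) up to a constant. The data below are simulated, not the crude oil or soybean series, and σ is an assumed value.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# two "attraction regions", e.g. around 60 and 90 dollars
prices = np.concatenate([rng.normal(60, 4, 600), rng.normal(90, 6, 400)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(prices.reshape(-1, 1))
grid = np.linspace(40, 110, 200).reshape(-1, 1)
log_density = gmm.score_samples(grid)          # log p(x) on the grid

sigma = 1.0                                    # assumed long-term volatility
U = -(sigma**2 / 2) * log_density              # potential, up to a constant
print("deepest well near:", float(grid.ravel()[np.argmin(U)]))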
APA, Harvard, Vancouver, ISO, and other styles
30

Forti, Frank C. "Black & white continuous tone printing using multiple negative working plates, so that each plate prints an equal segment of a determined density range /." Online version of thesis, 1986. http://hdl.handle.net/1850/8746.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

BISSLER, ALEXANDER, and SHERVIN GHAHESTANI. "Key Factors in Driving Sustainability Initiatives in the Supply Chain : A multiple case study of manufacturing companies." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299655.

Full text
Abstract:
The manufacturing industry accounts for a significant amount of released carbon emissions, and manufacturers experience pressure from stakeholders to address sustainability issues and contribute to the UN Sustainable Development Goals. However, the lack of verified frameworks for achieving a sustainable supply chain makes it difficult for manufacturers to revisit their supply chain strategies. This study benchmarks how manufacturing companies in Sweden achieve sustainability in their supply chains by examining the a priori and post-implementation factors required, as well as how the sustainability initiatives are structured. The study is a multiple case study, which started with a literature review to gain a relevant understanding of the research problem. Semi-structured interviews were held with seven case companies in different sectors of the manufacturing industry and with two environmental consultants. The findings show that manufacturing companies achieve sustainability in their supply chains by combining a priori and post-implementation factors with aspects regarding the structure of the initiatives. Top management commitment, with management providing the necessary means to drive the initiative, is a crucial a priori factor. Moreover, the findings point to the importance of defining ownership of the tasks in the initiative, and top management should integrate sustainability in the business model and have a budget for sustainability initiatives. Prioritizing the activities with the largest value creation is important, which a materiality analysis facilitates. Training employees and management on sustainability, and encouraging employees to find green improvements, are necessary. Post-implementation, adopting a circular process is critical, while also ensuring sufficient resources throughout the initiatives. Moreover, the findings highlight strict governance with clearly defined ownership over time, the more decentralized the better. A cross-functional organization is advantageous for achieving the above-mentioned factors. The reason for pursuing an initiative must be defined to enable clear goals. Backcasting and a materiality analysis are useful tools to create measurable goals accordingly, and the goals should be scientifically approved by the Science Based Targets initiative. In the execution, using previous experiences on internal and external platforms aids the case companies with their goal-conflict prioritization. Lastly, frequent follow-ups are critical, and the follow-up process should follow international standards. It is necessary to have a defined, traceable process for follow-ups in order to view progress.
APA, Harvard, Vancouver, ISO, and other styles
32

Hartmann, L. "Perceived ambiguity, ambiguity attitude and strategic ambiguity in games." Thesis, University of Exeter, 2019. http://hdl.handle.net/10871/35581.

Full text
Abstract:
This thesis contributes to the theoretical work on decision and game theory when decision makers or players perceive ambiguity. The first article introduces a new axiomatic framework for ambiguity aversion and provides axiomatic characterizations for important preference classes that had thus far lacked characterizations. The second article introduces a new axiom called Weak Monotonicity, which is shown to play a crucial role in the multiple prior model. It is shown that for many important preference classes, the assumption of monotonic preferences is a consequence of the other axioms and does not have to be assumed. The third article introduces an intuitive definition of perceived ambiguity in the multiple prior model. It is shown that the approach allows an application to games where players perceive strategic ambiguity. A very general equilibrium existence result is given. The modelling capabilities of the approach are highlighted through the analysis of examples. The fourth article applies the model from the previous article to a specific class of games with a lattice structure. We perform comparative statics on perceived ambiguity and ambiguity attitude. We show that more optimism does not necessarily lead to higher equilibria when players have Alpha-Maxmin preferences. We present necessary and sufficient conditions on the structure of the prior sets for this comparative statics result to hold. The introductory chapter provides the basis for the four articles in this thesis. An overview of axiomatic decision theory, decision-making under ambiguity and ambiguous games is given. It introduces and discusses the most relevant results from the literature.
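As a worked toy example (not from the thesis) of the Alpha-Maxmin evaluation over a set of priors mentioned in the fourth article: V(f) = α min_{p∈P} E_p[u(f)] + (1−α) max_{p∈P} E_p[u(f)], where α captures ambiguity attitude. The act, the prior set and the α values below are illustrative assumptions.

```python
import numpy as np

u_f = np.array([10.0, 0.0])            # utility of act f in states (s1, s2)
prior_set = [np.array([p, 1 - p]) for p in (0.3, 0.5, 0.7)]  # set of priors P

def alpha_maxmin(u, priors, alpha):
    """Alpha-Maxmin value: alpha * worst-case EU + (1 - alpha) * best-case EU."""
    evs = [p @ u for p in priors]
    return alpha * min(evs) + (1 - alpha) * max(evs)

for alpha in (1.0, 0.5, 0.0):          # pure maxmin, balanced, pure maxmax
    print(alpha, alpha_maxmin(u_f, prior_set, alpha))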
APA, Harvard, Vancouver, ISO, and other styles
33

Ekman, Emelie, and Frida Bergkvist. "Fastighetsbolagens kapplöpning till börsen : En kvantitativ studie över makroekonomiska faktorers påverkan på antalet börsintroduktioner." Thesis, Södertörns högskola, Institutionen för samhällsvetenskaper, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-29438.

Full text
Abstract:
Objective: This thesis aims to gain a deeper understanding of IPO activity by real estate firms and why its volume varies over time. The objective is also to determine the impact of macroeconomic factors on the volume of initial public offerings. Method: This study uses a quantitative method where macroeconomic factors are used as predictors in a multiple regression analysis, with the IPO volume of real estate firms as the dependent variable. Theoretical references: The basic theories used in this thesis are the Efficient Market Hypothesis, the FDW model, and the Capital Demand Hypothesis. Previous theses covering IPOs form the fundamental basis of this study. Results: The results show a negative correlation between the IPO volume of real estate firms and the interest rate, as well as the economic cycle. The study also finds a positive correlation between IPO volume and both stock prices and stock market volatility. The results do not show any significant correlation between IPO volume and the inflation rate.
APA, Harvard, Vancouver, ISO, and other styles
34

Hohotă, Valentina Gabriela. "La construction des identités carcérales dans le discours des prisonniers : approche comparée français et roumain." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOL005/document.

Full text
Abstract:
Our thesis, Construction of prison-related identities in detainees’ speech: a comparative study of the French and Romanian fields, proposes an analysis of the French and Romanian prison environments from a sociolinguistic perspective. We put forth a multidisciplinary analysis of the prison environment and of prison speech with the purpose of understanding the linguistic manifestations and behaviours of the subject-speakers making up our sample group. As concerns the putting into practice of detainees’ speech, our thesis considers it as a premise for the expression of the multiple identity of the detained subject-speaker and as a means for his social reintegration into the new social group. The principle underlying the current research is that of the exploitation of results. In order to have a complex vision of the prison world, we bring together linguistic and non-linguistic sciences which allow us, on the one hand, to understand the prison environment as a closed and dichotomous environment and, on the other hand, to open up new sociolinguistic lines of research. The current thesis was built as a result of fieldwork which meant getting into contact with 100 persons in custody and getting to know in situ what we call prison-related intimacy. We start our scientific approach by defining this concept to underline the distance between the deviant individual and the regulatory collectivity, more exactly to point out the essence of social psychology, which is "the conflict between the individual and society". Our research analyses two aspects of the detained subject-speaker: in his daily environment, during the process of socializing and building the multiple prison-related identity by means of unofficial relationships, and in official communication situations. The thesis is structured in three parts, with a total of 6 chapters. In the first part, we concentrate on the social context characterizing the two prison environments, the latter progressively becoming a support point for discussing prison-related identities.
APA, Harvard, Vancouver, ISO, and other styles
35

Minas, Luís Guerra da Gama. "A evolução dos preços do imobiliário na União Europeia entre 2000-2017." Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/20314.

Full text
Abstract:
Master's degree in Business Sciences
Real estate prices were the theme chosen for this Master's Final Paper. The analysis covers the countries belonging to the European Union, and the object of study is to understand the evolution of real estate prices and which determinants most influence their behaviour between 2000 and 2017. In order to better understand the real estate market and review existing studies, a literature review was carried out, which also supports the research of this dissertation. Social and economic determinants were chosen and correlated with the real estate price index using a multiple linear regression model. The main results point to determinants such as the financial crisis, the sovereign debt crisis and the ECB's policies as very significant, as well as public debt and the unemployment rate, among others. The study's findings are in line with previous studies and serve the purpose of strengthening the information about the complexity of the real estate market and its determinants.
APA, Harvard, Vancouver, ISO, and other styles
36

Favero, Luiz Paulo Lopes. "O mercado imobiliário residencial da região metropolitana de São Paulo: uma aplicação de modelos de comercialização hedônica de regressão e correlação canônica." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/12/12139/tde-05122005-151150/.

Full text
Abstract:
This work investigates the market for residential launches in the Metropolitan Region of Sao Paulo using hedonic pricing models. It is based on the Theory of Attributes proposed by Lancaster and on the hedonic models and sub-market equilibrium approach proposed by Rosen and Palmquist, which make it possible to analyze the relative importance of "bundles" of attributes for each different social and demographic group previously defined by factor analysis, using 11 social and demographic variables related to each municipality of the Metropolitan Region of Sao Paulo and each district of the City of Sao Paulo. Using a survey conducted with specialists and buyers of residential launches, and through specific advertisements, explanatory and dependent hedonic variables were defined to be included in Box-Cox multiple regression and canonical correlation models, under the perspectives of demand and supply, for each social and demographic group. The proposed method allows the determination and evaluation of the representative "bundles" of attributes composing the commercial conditions of residential launches in the Metropolitan Region of Sao Paulo real estate market, making it possible to verify the existence of eventual gaps between demand and supply behaviour and allowing the comparison of the relative importance of each variable among the social and demographic groups. Thus, the application of the method can support private and public initiatives, allowing the establishment of new launch strategies designed for each specific kind of real estate development, according to consumers' preferences and local characteristics.
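A sketch of a Box-Cox hedonic regression on synthetic data: transform the launch price with the Box-Cox family, then regress it on attribute variables. The attributes and coefficients below are illustrative, not the thesis variables.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import boxcox

rng = np.random.default_rng(5)
n = 300
area = rng.uniform(40, 200, n)                  # floor area (m2)
bedrooms = rng.integers(1, 5, n).astype(float)  # number of bedrooms
price = 2000 * area * np.exp(0.1 * bedrooms + rng.normal(0, 0.2, n))

y, lam = boxcox(price)                          # ML estimate of lambda
fit = sm.OLS(y, sm.add_constant(np.column_stack([area, bedrooms]))).fit()
print(f"lambda={lam:.2f}", fit.params.round(4))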
APA, Harvard, Vancouver, ISO, and other styles
37

Belharbi, Soufiane. "Neural networks regularization through representation learning." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR10/document.

Full text
Abstract:
Neural network models, and deep models in particular, are among the leading, state-of-the-art models in machine learning and have been applied in many different domains. The most successful deep neural models are the ones with many layers, which greatly increases their number of parameters. Training such models requires a large number of training samples, which is not always available. One of the fundamental issues in neural networks is overfitting, which is the issue tackled in this thesis. This problem often occurs when the training of large models is performed using few training samples. Many approaches have been proposed to prevent the network from overfitting and improve its generalization performance, such as data augmentation, early stopping, parameter sharing, unsupervised learning, dropout, batch normalization, etc. In this thesis, we tackle the neural network overfitting issue from a representation learning perspective by considering the situation where few training samples are available, which is the case in many real-world applications. We propose three contributions. The first one, presented in chapter 2, is dedicated to dealing with structured output problems to perform multivariate regression when the output variable y contains structural dependencies between its components. Our proposal aims mainly at exploiting these dependencies by learning them in an unsupervised way. Validated on a facial landmark detection problem, learning the structure of the output data was shown to improve the network generalization and speed up its training. The second contribution, described in chapter 3, deals with the classification task, where we propose to exploit prior knowledge about the internal representation of the hidden layers in neural networks. This prior is based on the idea that samples within the same class should have the same internal representation. We formulate this prior as a penalty that we add to the training cost to be minimized. Empirical experiments over MNIST and its variants showed an improvement in network generalization when using only few training samples. Our last contribution, presented in chapter 4, shows the interest of transfer learning in applications where only few samples are available. The idea consists in re-using the filters of pre-trained convolutional networks that have been trained on large datasets such as ImageNet. Such pre-trained filters are plugged into a new convolutional network with new dense layers. Then, the whole network is trained on a new task. In this contribution, we provide an automatic system based on such a learning scheme, with an application to the medical domain. In this application, the task consists in localizing the third lumbar vertebra in a 3D CT scan. A pre-processing of the 3D CT scan to obtain a 2D representation and a post-processing to refine the decision are included in the proposed system. This work has been done in collaboration with the clinic "Rouen Henri Becquerel Center", which provided us with data.
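The representation prior of chapter 2's companion, chapter 3, can be sketched as a penalty added to the classification loss that pulls hidden representations of same-class samples toward their class centroid. The penalty form and its weight below are assumptions for illustration, not necessarily the authors' exact formulation.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
        self.out = nn.Linear(128, 10)
    def forward(self, x):
        h = self.hidden(x)            # internal representation
        return self.out(h), h

def representation_penalty(h, y):
    """Mean squared distance of each hidden vector to its class centroid."""
    loss = h.new_zeros(())
    for c in y.unique():
        hc = h[y == c]
        loss = loss + ((hc - hc.mean(0)) ** 2).sum()
    return loss / h.shape[0]

model, ce = MLP(), nn.CrossEntropyLoss()
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
logits, h = model(x)
loss = ce(logits, y) + 0.1 * representation_penalty(h, y)  # 0.1: assumed weight
loss.backward()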
APA, Harvard, Vancouver, ISO, and other styles
38

Kulikova, Maria. "Reconnaissance de forme pour l'analyse de scène." Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00477661.

Full text
Abstract:
This thesis is composed of two main parts. The first part is dedicated to the problem of classifying tree species using shape descriptors, combined or not with radiometric or texture descriptors. We notably show that shape information improves classifier performance. To this end, a study of the shapes of tree crowns extracted from colour-infrared aerial images is first carried out, using a methodology for analyzing the shapes of closed continuous curves in a shape space based on the notion of geodesic paths under two metrics in appropriate spaces: a non-elastic metric using the representation of the curve by its angle function, and an elastic metric induced by a square-root representation called the q-function. A preliminary step necessary for classification is the extraction of the tree crowns. In the second part, we therefore address the problem of extracting objects of arbitrary complex shape from very high resolution remote sensing images. We build a model based on marked point processes; its originality lies in handling objects of arbitrary shape, as opposed to objects of parametric shape such as ellipses or rectangles. The selected shapes are obtained by local minimization of an active-contour-type energy with different shape priors incorporated. The objects of the final (optimal) configuration are then selected among the candidates by a multiple birth-and-death dynamics coupled with a simulated annealing scheme. The approach is validated on very high resolution images of forested areas provided by the Swedish University of Agricultural Sciences.
APA, Harvard, Vancouver, ISO, and other styles
39

Souza, Leticia Vasconcellos de. "Congruência modular nas séries finais do ensino fundamental." Universidade Federal de Juiz de Fora, 2015. https://repositorio.ufjf.br/jspui/handle/ufjf/1441.

Full text
Abstract:
This work is aimed at teachers working in the final grades of elementary school. It seeks to show that it is possible to introduce the study of modular congruence in this educational segment, making it easier to solve numerous problem situations. The motivation for choosing this theme is the possibility of simplifying the solution of many exercises worked on at this stage of education, which even appear in admission exams for military schools and in mathematical Olympiads at this level. We begin with a brief summary of the integers and their basic operations, also recalling the concept of prime numbers, where the sieve of Eratosthenes is presented, and the lcm (least common multiple) and the gcd (greatest common divisor), along with the Euclidean algorithm. We present some examples of problem situations and solved exercises involving remainders left by a division, and then give the definition of modular congruence. Finally, we present suggestions for exercises to be worked on in the classroom, with brief solutions.
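A classroom-style example of the kind of remainder problem this work targets: find the remainder of 2^100 divided by 7. Since 2^3 = 8 ≡ 1 (mod 7) and 100 = 3·33 + 1, we get 2^100 = (2^3)^33 · 2 ≡ 1^33 · 2 ≡ 2 (mod 7); the one-liner below checks this.

```python
# 2**3 = 8 ≡ 1 (mod 7) and 100 = 3*33 + 1, so
# 2**100 = (2**3)**33 * 2 ≡ 1**33 * 2 ≡ 2 (mod 7).
print(pow(2, 100, 7))  # -> 2, confirming the hand computation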
APA, Harvard, Vancouver, ISO, and other styles
40

Fenyi, Alexis. "Typage moléculaire des maladies neurodégénératives dues à l’agrégation de la protéine alpha synucléine." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS053.

Full text
Abstract:
The aggregation of α-synuclein protein has been shown to be associated with Parkinson's disease, dementia with Lewy bodies, and multiple system atrophy, called synucleinopathies. Increasing amount of evidences suggest that synucleinopathies are prion diseases. Some aspects are missing for α-synuclein to be recognized as a prion, such as the existence of strains associated to synucleinopathies. During my thesis I set up a reliable method to amplify α-synuclein-rich deposits from patients tissues. I validated the method using all synucleinopathies tissues. This should allow the identification of α-synuclein strain related to each synucleinopathy. In addition, I also documented cleaning procedures for materials soiled with various amyloid fibers, in order to reduce the risk of contamination. Finally, I was associated to a study that shows the propagation abilities of different α-synuclein assemblies in a neuronal network mimicking human cortico-cortical connections. These results open the way to structural and functional studies of the amplified deposits
APA, Harvard, Vancouver, ISO, and other styles
41

Vega, López Norma Alicia. "Comprensión de múltiples textos expositivos: relaciones entre conocimiento previo y autorregulación." Doctoral thesis, Universitat Ramon Llull, 2011. http://hdl.handle.net/10803/9282.

Full text
Abstract:
The overall objective of the dissertation was to analyze the relationships between the level of prior knowledge of the topic (high vs. low), self-regulation processes (planning, monitoring and strategy use) and levels of comprehension of multiple expository texts (performance on an inferential task, surface comprehension and knowledge transfer). Forty students of the Bachelor's degree in Educational Sciences with a specialization in Chemical-Biological Sciences participated in the study. Data collection included the recording of think-aloud protocols as well as various measures of comprehension.

First, the results of the study show that there is no significant correlation between the level of prior knowledge and the different levels of comprehension. However, effect-size analysis indicates a tendency for students with low prior knowledge to perform better on the inferential task, whereas students with high prior knowledge performed well only on the surface-comprehension measure. Second, effect-size analysis indicates that a high level of prior knowledge has a relevant impact on greater use of self-regulation strategies. Third, a positive relationship was found between the planning and monitoring processes for the group of students with low prior knowledge, while no significant relationships were found for the high-knowledge students. Fourth, the results indicate a negative relationship between the planning process and inferential-task performance for the low-knowledge group, whereas for the high-knowledge students a significant relationship was found between the planning processes and inferential-task performance. Finally, regression analyses show that neither the level of prior knowledge nor the self-regulation processes predict the variance in levels of comprehension. These results are discussed in terms of theory and research on both single- and multiple-text comprehension and of theories of self-regulated learning.
APA, Harvard, Vancouver, ISO, and other styles
42

O'Leary, Rebecca A. "Informed statistical modelling of habitat suitability for rare and threatened species." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/17779/1/Rebecca_O%27Leary_Thesis.pdf.

Full text
Abstract:
In this thesis a number of statistical methods have been developed and applied to habitat suitability modelling for rare and threatened species. Data available on these species are typically limited, so developing models from them can be problematic and may produce prediction biases. To address these problems, this thesis has three aims. The first aim is to develop and implement frequentist and Bayesian statistical modelling approaches for these types of data. The second aim is to develop and implement expert elicitation methods. The third aim is to apply these novel approaches to case studies of rare and threatened Australian species with the intention of habitat suitability modelling. The first aim is fulfilled by investigating two innovative approaches for habitat suitability modelling, together with a sensitivity analysis of the second approach to its priors. The first approach is a new multilevel framework developed to model the species distribution at multiple scales and identify excess zeros (absences outside the species range); applying a statistical modelling approach to the identification of excess zeros has not previously been conducted. The second approach is an extension and application of Bayesian classification trees to modelling the habitat suitability of a threatened species, the first 'real' application of this approach in ecology. Lastly, the sensitivity of Bayesian classification trees to their priors is examined for a real case study, which has not previously been done. To address the second aim, expert elicitation methods are developed, extended and compared in this thesis: one elicitation approach is extended from previous research, three elicitation methods are compared, and one new elicitation approach is proposed. These approaches are illustrated for habitat suitability modelling of a rare species, with the opinions of one or two experts elicited. The first approach uses a simple questionnaire, in which expert opinion is elicited on whether increasing values of a covariate increases, decreases or does not substantively impact on a response. This approach is extended to express this information as a mixture of three normally distributed prior distributions, which are then combined with available presence/absence data in a logistic regression. This is one of the first elicitation approaches within the habitat suitability modelling literature that is appropriate for experts with limited statistical knowledge and can be used to elicit information from single or multiple experts. Three relatively new approaches to eliciting expert knowledge in a form suitable for Bayesian logistic regression are compared, one of which is the questionnaire approach; the comparison includes a summary of the advantages and disadvantages of the three methods, the results from the elicitations, and a comparison of the prior and posterior distributions. An expert elicitation approach is also developed for classification trees, in which the size and structure of the tree is elicited; numerous elicitation approaches have been proposed for logistic regression, but none had been suggested for classification trees. The last aim of this thesis is addressed in all chapters, since the statistical approaches proposed and extended here have been applied to real case studies. Two case studies are examined in this thesis.
The first is the rare native Australian thistle (Stemmacantha australis), for which the dataset contains a large number of absences distributed over the majority of Queensland and a small number of presence sites confined to South-East Queensland; this case study motivated the multilevel modelling framework. The second case study is the threatened Australian brush-tailed rock-wallaby (Petrogale penicillata), to which the application and sensitivity analysis of Bayesian classification trees and all the expert elicitation approaches investigated in this thesis are applied. This work has several implications for the conservation and management of rare and threatened species. The novel statistical approaches addressing the first aim extend currently existing methods, or propose new ones, for the identification of current and potential habitat, and we demonstrate that each method achieves better model predictions than standard techniques. The elicitation approaches addressing the second aim ensure that expert knowledge in various forms can be harnessed for habitat modelling, a particular benefit for rare and threatened species, which typically have limited data. Throughout, innovations in statistical methodology are both motivated and illustrated via habitat modelling for two rare and threatened species: the native thistle Stemmacantha australis and the brush-tailed rock-wallaby Petrogale penicillata.
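To make the questionnaire-based elicitation concrete, the following is a minimal numpy/scipy sketch, not the thesis's implementation, of the idea described above: an expert's "increases / no effect / decreases" judgement about a covariate is encoded as a three-component normal mixture prior on a logistic-regression slope and combined with presence/absence data by MAP estimation. The simulated data and the mixture weights, means and standard deviations are all illustrative assumptions.

```python
# Illustrative sketch only: a three-component normal mixture prior on a
# logistic-regression coefficient, combined with presence/absence data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(size=200)                      # one habitat covariate
true_beta = 1.2
p = 1.0 / (1.0 + np.exp(-(0.3 + true_beta * x)))
y = rng.binomial(1, p)                        # simulated presence/absence

# The expert said "increasing x increases presence", so the positive
# component gets most of the weight (assumed values, for illustration).
weights = np.array([0.8, 0.15, 0.05])         # increases / no effect / decreases
means   = np.array([1.0, 0.0, -1.0])
sds     = np.array([1.0, 0.3, 1.0])

def neg_log_posterior(theta):
    b0, b1 = theta
    eta = b0 + b1 * x
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))   # Bernoulli logistic
    logprior = np.log(np.sum(weights * norm.pdf(b1, means, sds)))
    return -(loglik + logprior)

beta_map = minimize(neg_log_posterior, x0=[0.0, 0.0]).x
print("MAP estimate (intercept, slope):", beta_map)
```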
APA, Harvard, Vancouver, ISO, and other styles
43

O'Leary, Rebecca A. "Informed statistical modelling of habitat suitability for rare and threatened species." Queensland University of Technology, 2008. http://eprints.qut.edu.au/17779/.

Full text
Abstract:
In this thesis a number of statistical methods have been developed and applied to habitat suitability modelling for rare and threatened species. Data available on these species are typically limited, so developing models from them can be problematic and may produce prediction biases. To address these problems, this thesis has three aims. The first aim is to develop and implement frequentist and Bayesian statistical modelling approaches for these types of data. The second aim is to develop and implement expert elicitation methods. The third aim is to apply these novel approaches to case studies of rare and threatened Australian species with the intention of habitat suitability modelling. The first aim is fulfilled by investigating two innovative approaches for habitat suitability modelling, together with a sensitivity analysis of the second approach to its priors. The first approach is a new multilevel framework developed to model the species distribution at multiple scales and identify excess zeros (absences outside the species range); applying a statistical modelling approach to the identification of excess zeros has not previously been conducted. The second approach is an extension and application of Bayesian classification trees to modelling the habitat suitability of a threatened species, the first 'real' application of this approach in ecology. Lastly, the sensitivity of Bayesian classification trees to their priors is examined for a real case study, which has not previously been done. To address the second aim, expert elicitation methods are developed, extended and compared in this thesis: one elicitation approach is extended from previous research, three elicitation methods are compared, and one new elicitation approach is proposed. These approaches are illustrated for habitat suitability modelling of a rare species, with the opinions of one or two experts elicited. The first approach uses a simple questionnaire, in which expert opinion is elicited on whether increasing values of a covariate increases, decreases or does not substantively impact on a response. This approach is extended to express this information as a mixture of three normally distributed prior distributions, which are then combined with available presence/absence data in a logistic regression. This is one of the first elicitation approaches within the habitat suitability modelling literature that is appropriate for experts with limited statistical knowledge and can be used to elicit information from single or multiple experts. Three relatively new approaches to eliciting expert knowledge in a form suitable for Bayesian logistic regression are compared, one of which is the questionnaire approach; the comparison includes a summary of the advantages and disadvantages of the three methods, the results from the elicitations, and a comparison of the prior and posterior distributions. An expert elicitation approach is also developed for classification trees, in which the size and structure of the tree is elicited; numerous elicitation approaches have been proposed for logistic regression, but none had been suggested for classification trees. The last aim of this thesis is addressed in all chapters, since the statistical approaches proposed and extended here have been applied to real case studies. Two case studies are examined in this thesis.
The first is the rare native Australian thistle (Stemmacantha australis), for which the dataset contains a large number of absences distributed over the majority of Queensland and a small number of presence sites confined to South-East Queensland; this case study motivated the multilevel modelling framework. The second case study is the threatened Australian brush-tailed rock-wallaby (Petrogale penicillata), to which the application and sensitivity analysis of Bayesian classification trees and all the expert elicitation approaches investigated in this thesis are applied. This work has several implications for the conservation and management of rare and threatened species. The novel statistical approaches addressing the first aim extend currently existing methods, or propose new ones, for the identification of current and potential habitat, and we demonstrate that each method achieves better model predictions than standard techniques. The elicitation approaches addressing the second aim ensure that expert knowledge in various forms can be harnessed for habitat modelling, a particular benefit for rare and threatened species, which typically have limited data. Throughout, innovations in statistical methodology are both motivated and illustrated via habitat modelling for two rare and threatened species: the native thistle Stemmacantha australis and the brush-tailed rock-wallaby Petrogale penicillata.
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Jun Hong, and 陳俊宏. "Two-step images deblurring via multiple priors." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/82935138551664201001.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
Academic year 104 (2015-2016)
Deblurring from a single blurred image is a challenging task in computer vision: estimating the unknown blur kernel and recovering the original image is an ill-posed problem. Many effective deblurring methods exist for natural images; however, few of them perform well on face images. Based on an L0-norm prior, we propose a two-step method for image deblurring. The proposed method requires neither a facial dataset to initialize the contour gradients nor complex filtering strategies. In the first step, we combine the L0-norm prior with our local smoothness prior to estimate the blur kernel; with simple Gaussian filtering, we preserve the smooth regions of the sharp image. In the second step, we refine the kernel estimate: to discard low-intensity pixels (likely noise) in the kernel, we impose sparsity on it through L0-norm regularization. Experimental results demonstrate that the proposed algorithm performs well on facial images.
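As a rough illustration of the kernel-refinement step described above (discarding low-intensity kernel pixels via an L0-style sparsity constraint), the following Python sketch applies a hard threshold, in the spirit of the L0 proximal operator, to a toy kernel and renormalizes it; the threshold value and the kernel are assumptions, not the authors' code.

```python
# Illustrative sketch, not the authors' code: the kernel-refinement step.
# Low-intensity kernel entries (treated as noise) are removed with a hard
# threshold and the kernel is renormalized so that it still sums to one.
import numpy as np

def refine_kernel(k: np.ndarray, lam: float = 0.05) -> np.ndarray:
    """Zero out entries below lam * max(k), then renormalize."""
    k = k.copy()
    k[k < lam * k.max()] = 0.0            # discard low-intensity pixels
    s = k.sum()
    return k / s if s > 0 else k

# Toy example: a 5x5 Gaussian-like kernel corrupted by small positive noise.
rng = np.random.default_rng(1)
xx, yy = np.meshgrid(np.arange(5) - 2, np.arange(5) - 2)
kernel = np.exp(-(xx**2 + yy**2) / 2.0) + 0.02 * rng.random((5, 5))
kernel /= kernel.sum()
print(refine_kernel(kernel))
```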
APA, Harvard, Vancouver, ISO, and other styles
45

Liu, Ta Yuan, and 劉大源. "Physical Layer Secrecy in Multiple-Input Multiple-Output Wireless Systems with No A Priori Channel State Information." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/31906490346794495023.

Full text
Abstract:
Doctoral dissertation
National Tsing Hua University
Institute of Communications Engineering
Academic year 104 (2015-2016)
This dissertation examines the transmission of confidential messages over a wireless wiretap system with no a priori channel state information (CSI) at any terminal. The studies can be divided into two parts. The first part focuses on conventional training-based transmission schemes and examines the tradeoff between training and data transmission in wiretap channels; the second part makes no assumption on the transmission scheme and evaluates the asymptotic performance of such a system at high SNR. More specifically, in the first part, training-based transmission schemes are considered for multi-input single-output (MISO) Rayleigh block fading wiretap channels, where each block consists of a training phase followed by a data transmission phase. By taking the cost of obtaining CSI into account, this work considers the joint design of training and data transmission in physical-layer secret communication systems, and examines the role of artificial noise (AN), a key component in many physical-layer secret communication techniques, in both of these phases. In particular, AN in the training phase is used to prevent the eavesdropper from obtaining accurate CSI, whereas AN in the data transmission phase can be used to mask the transmission of the confidential message. By considering AN-assisted training and secrecy beamforming schemes, upper and lower bounds on the achievable secrecy rate are derived in a closed-form approximation that is asymptotically tight at high signal-to-noise ratio (SNR). Then, by maximizing the approximate achievable secrecy rate, the optimal power allocation between signal and AN in both the training and data transmission phases is obtained for both conventional and AN-assisted training-based schemes. We show that the use of AN is necessary to achieve a high secrecy rate at high SNR, and that its use in the training phase can be more efficient than in the data transmission phase when the coherence time is large. However, at low SNR, the use of AN provides no advantage, since CSI is difficult to obtain in this case; in fact, allocating channel resources for training is then inefficient and one can actually do better without it. Numerical results are presented to verify our theoretical claims. Even though training-based transmission schemes have been widely adopted in practice, the optimality of such an approach is unknown, and it is in fact disproved in conjunction with the secrecy beamforming scheme mentioned in the first part. Therefore, a more general and fundamental study of the wiretap channel with no CSI anywhere is considered in the second part of this dissertation. In particular, we consider a multiple-input multiple-output (MIMO) Rayleigh block fading wiretap channel where the source, the destination, and the eavesdropper have nt, nr and ne antennas, respectively. The length of the coherence interval, within which the channel coefficients remain constant but across which they vary independently from block to block, is denoted by T. The performance at high SNR is evaluated in terms of the secure degrees of freedom (s.d.o.f.) when T ≥ 2 min(nt, nr). We show that, in this case, the s.d.o.f. is exactly equal to (min(nt, nr) − ne)(T − min(nt, nr))/T. The first multiplicative term in this expression can be interpreted as the loss of ne spatial degrees of freedom at both the transmitter and the legitimate receiver due to the ne receive antennas at the eavesdropper. The second term can be viewed as the ratio of s.d.o.f. remaining after expending resources to acquire CSI at the legitimate receiver. We prove that this s.d.o.f. can be achieved by employing a constant-norm channel input, which can be viewed as a generalization of discrete signalling to multiple dimensions. We also show that multiple dimensions in both space and time are needed to achieve a non-zero s.d.o.f. for systems without CSI; that is, one cannot achieve a positive s.d.o.f. with either a long coherence time in a single-antenna system or multiple antennas in a very short (T = 1) coherence-time channel. The techniques developed in the second part are also used to examine the performance of a noncoherent network coding system with multiple hops of intermediate relays. A relay recruitment problem is considered for the case where some of the relays are untrustworthy and may be subject to eavesdropping. The source wishes to enlist their help while keeping the message secret from the eavesdropper. By employing random linear network coding at the relays, the problem can be modeled as a noncoherent finite-field wiretap channel. The secrecy capacity is examined and the input distribution is optimized using an efficient projection-based gradient descent algorithm. The untrusted relay recruitment problem is discussed based on the derived secrecy capacity. An interesting scenario is analyzed where each potentially insecure relay may be randomly eavesdropped with a certain probability. Our asymptotic analysis reveals that, with enough untrusted relays, there exists a threshold on the eavesdropping probability below which all untrusted relays should be recruited. Numerical results are presented to illustrate and verify our theoretical claims.
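A quick numeric check of the s.d.o.f. expression stated in the abstract, as a hedged illustration only (the function name and the example antenna configurations are ours, not the dissertation's):

```python
# Hedged illustration: the s.d.o.f. expression from the abstract,
#   s.d.o.f. = (min(nt, nr) - ne) * (T - min(nt, nr)) / T,
# stated for coherence times T >= 2 * min(nt, nr).
def sdof(nt: int, nr: int, ne: int, T: int) -> float:
    m = min(nt, nr)
    assert T >= 2 * m, "expression stated for T >= 2 * min(nt, nr)"
    return max(m - ne, 0) * (T - m) / T

print(sdof(nt=4, nr=4, ne=1, T=8))   # (4 - 1) * (8 - 4) / 8 = 1.5
print(sdof(nt=2, nr=2, ne=0, T=4))   # (2 - 0) * (4 - 2) / 4 = 1.0
# Consistent with the abstract: more eavesdropper antennas (ne), or a larger
# share of the block spent acquiring CSI, reduces the secure degrees of freedom.
```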
APA, Harvard, Vancouver, ISO, and other styles
46

Dottl, Susan Lysaker. "Women in multiple roles: perceived feedback, prior socialization, and psychological well-being." 1994. http://catalog.hathitrust.org/api/volumes/oclc/31317549.html.

Full text
Abstract:
Thesis (Ph. D.)--University of Wisconsin--Madison, 1994.
Typescript. Includes bibliographical references (leaves 115-141).
APA, Harvard, Vancouver, ISO, and other styles
47

Moreno, González Othón M. "Information structures and their effects on consumption decisions and prices." 2013. http://hdl.handle.net/2152/21968.

Full text
Abstract:
This work analyzes the effects that different information structures on the demand side of the market have on consumption decisions and the way prices are determined. We develop three theoretical models to address this issue in a systematic way. First, we focus our attention on consumers' awareness, or lack thereof, of substitute products in the market and the strategic interaction between firms competing in prices and costly advertising in such an environment. We find that prior information held by consumers can drastically change the advertising equilibrium predictions. In particular, we provide sufficient conditions for the existence of three types of equilibria, in addition to one previously found in the literature, and provide a necessary condition for a fourth type of equilibrium. Additionally, we show that the effect of the resulting advertising strategies on the expected transaction price is qualitatively significant, although ambiguous when compared to the case of a newly formed market. We can establish, however, that the transaction price is increasing in the size of the smaller firm's captive market. In the second chapter, we study the optimal timing to buy a durable good with an embedded option to resell it at some point in the future, as well as its reservation price, where the agent faces Knightian uncertainty about the process generating the market prices. The problem is modeled as a stopping problem with multiple priors in continuous time with infinite horizon. We find that the direction of the change in the buyer's reservation price depends on the particular parametrization of the model. Furthermore, the change in the buying threshold due to an increase in ambiguity is greater as the fraction of the market price at which the agent can resell the good decreases, and the value of the embedded option is decreasing in the perceived level of ambiguity. Finally, we introduce Knightian uncertainty into a model of price search by letting the consumers be ambiguous regarding the industry's cost of production. We characterize the equilibria of this game for high and low levels of the search cost and show that firms extract abnormal profits for low realizations of the marginal cost. Furthermore, we show that, as the search cost goes to zero, the equilibrium of the game under the low-cost regime does not converge to Bertrand marginal-cost pricing; instead, firms follow a mixed strategy that includes all prices between the high and low production costs.
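Since several entries in this list build on the multiple-priors (max-min) criterion used here, the following is a minimal Python sketch, under assumed toy payoffs and priors, of that evaluation rule: a payoff is valued at its worst-case expectation over a set of priors, so enlarging the set (more ambiguity) can only lower the value, consistent with the abstract's finding that ambiguity depresses the value of the embedded resale option.

```python
# Illustrative sketch only: max-min (multiple-priors) evaluation of a payoff.
import numpy as np

payoff = np.array([0.0, 1.0, 2.0, 3.0])            # payoff in each state

def worst_case_value(priors):
    """Expected payoff under the least favourable prior in the set."""
    return min(float(p @ payoff) for p in priors)

baseline = [np.array([0.25, 0.25, 0.25, 0.25])]    # a single prior
ambiguous = baseline + [np.array([0.4, 0.3, 0.2, 0.1]),
                        np.array([0.1, 0.2, 0.3, 0.4])]

print(worst_case_value(baseline))    # 1.5
print(worst_case_value(ambiguous))   # 1.0: the worst prior tilts to low states
```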
APA, Harvard, Vancouver, ISO, and other styles
48

Lin, Tung, and 林彤. "Augmented Lagrange Multiplier Algorithm Using l0-Norm Image Prior In Breast Tomosynthesis." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/w22qqw.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Biomedical Engineering and Environmental Sciences
Academic year 106 (2017-2018)
Breast tomosynthesis differs from computed tomography (CT) in its limited projection angle; in other words, breast tomosynthesis can be viewed as an underdetermined system. Applying a conventional CT reconstruction technique such as filtered back projection (FBP) to tomosynthesis leads to undesirable streak and phantom artifacts. Model-based iterative algorithms, which carefully adjust the image at each iteration, are therefore better suited to tomosynthesis image reconstruction. Thanks to the rapid development of compressive sensing, a new class of image reconstruction methods shows promise in reconstructing a three-dimensional image from limited-angle or low-dose computed tomography; methods of this kind, such as total variation and dictionary learning, also benefit breast tomosynthesis. The alternating direction method of multipliers (ADMM) is an algorithm that can handle many kinds of optimization problems, and this versatility is one of its main attractions. In this thesis, we propose an image reconstruction algorithm based on the ADMM framework that uses an L0-norm-smoothed image as a prior, imposing a sparsity constraint on the image. To close the gap in convergence rate with existing state-of-the-art iterative algorithms, we use an adaptive step size in the gradient-descent part of our algorithm and backtracking to ensure convergence. Results from simulation experiments show that the proposed algorithm yields good image quality under the assumption that the image to be reconstructed is strongly sparse.
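The following Python sketch illustrates, on a toy underdetermined system rather than a real tomosynthesis projector, the generic loop the abstract describes: a gradient step on the data-fidelity term with backtracking line search and an adaptive step size, followed by a hard-threshold step standing in for the L0-norm sparsity prior. All sizes, the random matrix, and the threshold are assumptions for illustration.

```python
# Toy sketch only, not the thesis's algorithm. Reconstruct a sparse signal
# from underdetermined measurements b = A x by alternating (i) a gradient
# step on the data-fidelity term f(x) = 0.5 * ||A x - b||^2 with Armijo
# backtracking, and (ii) a hard-threshold step imposing L0-style sparsity.
import numpy as np

rng = np.random.default_rng(2)
n, m = 100, 40                            # unknowns vs. measurements
A = rng.normal(size=(m, n))               # toy stand-in for the projector
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0
b = A @ x_true

def f(x):
    r = A @ x - b
    return 0.5 * float(r @ r)             # data-fidelity term

x, t = np.zeros(n), 1.0
for _ in range(200):
    g = A.T @ (A @ x - b)                 # gradient of f at x
    while f(x - t * g) > f(x) - 0.5 * t * float(g @ g):
        t *= 0.5                          # backtracking line search
    x = x - t * g
    x[np.abs(x) < 0.1] = 0.0              # hard threshold: L0 sparsity prior
    t = min(2.0 * t, 1.0)                 # let the step size adapt upward

# In this noise-free toy the recovered support usually matches the true one.
print("recovered:", np.flatnonzero(x), "true:", np.flatnonzero(x_true))
```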
APA, Harvard, Vancouver, ISO, and other styles
49

Klausewitz, S. Kay. "How prior life experiences influence teaching: Multiple case studies of mature -age elementary student teachers." 2005. https://scholarworks.umass.edu/dissertations/AAI3179892.

Full text
Abstract:
Researchers say that what really differentiates mature age students is not age as much as it is life experiences. How and in what ways does that influence the preparation of pre-service teachers? What happens in the classroom is more related to the teacher than any other variable. All, and especially older student teachers, bring rich experiences and images into the classroom that affect their attitudes, approach, and decision-making. The overall purpose of this research was to learn how life experiences of mature age student teachers influence their learning to teach children in an elementary classroom. Participants are five students between the ages of 38 and 45, who did their student teaching practicum within a traditional teacher preparation program. Data was gathered from three in-depth interviews, three classroom observations with field notes and video tapes, and from selected documents. The Rainbow of Life Roles (Super, 1980) was used to supplement interviews about the life experiences of each participant. Stimulated Recall (Bloom, 1953 and others) was used to discover what past experiences influenced decision making and problem solving. Interview questions focused on participants' interpretation of their life experiences, their perspectives of themselves as learners, workers, and parents, and their ideas about teaching. Based on the data, the following conclusions were reached. (1) Life experiences, from activities such as other jobs, parenting, travel, reading, coaching, and community work were embedded in the perspectives of the emerging teacher serving as a lens or filter through which decisions were made in the classroom. (2) Life experiences provided connections to build upon or barriers to be reconstructed. Examination of prior experiences and beliefs will help to reconstruct these experiences into meaningful ideas about teaching that will be more than an overlay experience that may be washed out in the early rigors of learning to teach. Implications for teacher education include the need for promotion of the examination of prior life experiences to integrate self-knowledge with theory and practice and to remove possible barriers to the development of solid teaching practices.
APA, Harvard, Vancouver, ISO, and other styles
50

STEHLÍKOVÁ, Dagmar. "Návrh a testování multiplex-PCR primerů pro detekci původců bakteriální skvrnitosti rajčete." Master's thesis, 2015. http://www.nusl.cz/ntk/nusl-203323.

Full text
Abstract:
The subject of this work is to develop a multiplex-PCR assay for the specific detection of plant-pathogenic bacteria of the genus Xanthomonas causing bacterial spot of tomato. PCR primers for the detection of groups A (X. euvesicatoria), B (X. vesicatoria), C (X. perforans) and D (X. gardneri) were developed based on DNA sequences obtained by sequencing and from the GenBank database (NCBI). Four primer pairs - Xe_shotgun_104, Xe_shotgun_1819, Xv_atpD_403, Xp_efP_202 - were designed and subsequently thoroughly tested and optimized for parallel detection of these bacteria. The specificity of the primers was tested on a large panel of bacterial strains pathogenic to tomato and related crops. Following the protocol described above, X. vesicatoria, X. euvesicatoria, X. perforans and X. gardneri can be quickly and reliably identified in a single multiplex-PCR assay.
APA, Harvard, Vancouver, ISO, and other styles