Theses on the topic "Incomplete approaches"

Follow this link to see other types of publications on the topic: Incomplete approaches.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles.


Consult the 23 best theses for your research on the topic "Incomplete approaches".

Next to every source in the list of references there is an "Add to bibliography" button. Press the button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Nguyen, Yen Thi Hong. "Time-frequency distributions: approaches for incomplete non-stationary signals". Thesis, University of Leeds, 2018. http://etheses.whiterose.ac.uk/19681/.

Full text
Abstract
There are many sources of waveforms or signals around us. They can be natural phenomena such as sound and light, or invisible ones such as electromagnetic fields and voltage. Getting an insight into these waveforms helps explain the mysteries surrounding our world, and spectral analysis (i.e. the Fourier transform) is one of the most significant approaches to analyzing a signal. Nevertheless, Fourier analysis cannot provide a time-dependent spectral description of spectrum-varying, i.e. non-stationary, signals. In these cases, time-frequency distributions are employed instead of the traditional Fourier transform. A variety of methods have been proposed to obtain time-frequency representations (TFRs), such as the spectrogram or the Wigner-Ville distribution. Time-frequency distributions (TFDs) indeed offer a better signal interpretation in a two-dimensional time-frequency plane, which the Fourier transform fails to give. Nevertheless, in the case of incomplete data, the time-frequency displays are obscured by artifacts and become highly noisy. Signal time-frequency features are then hard to extract and cannot be used for further data processing. In this thesis, we propose two methods to deal with compressed observations. The first applies compressive sensing with a novel chirp dictionary. This method assumes that any windowed signal can be approximated by a sum of chirps, and then performs sparse reconstruction from windowed data in the time domain. A few improvements in computational complexity are also included. The second method uses a fixed kernel as well as adaptive optimal kernels. This work is also based on the assumption that any windowed signal can be approximately represented by a sum of chirps. Since any chirp's auto-terms only occupy a certain area in the ambiguity domain, the kernel can be designed to remove the other regions, where auto-terms do not reside. In this manner, not only cross-terms but also missing-sample artifacts are mitigated significantly. The two proposed approaches yield better performance in estimating the time-frequency signatures of the signals, as demonstrated with both synthetic and real signals. Note that this thesis only considers non-stationary signals whose frequency changes slowly with time, because signals with rapidly varying frequency are not sparse in the time-frequency domain, so compressive sensing techniques and sparse reconstruction cannot be applied. Also, data with random missing samples are obtained by randomly choosing sample positions and replacing those samples with zeros.
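As a minimal sketch of the missing-data setup described above, assuming SciPy: a chirp (a slowly frequency-varying, hence time-frequency-sparse signal) has randomly chosen samples replaced with zeros, and a plain windowed spectrogram then shows the noise-like artifacts that the thesis's sparse reconstructions are designed to remove. The sampling rate, chirp parameters and 50% missing rate are illustrative choices, not values from the thesis.

```python
import numpy as np
from scipy.signal import chirp, spectrogram

rng = np.random.default_rng(0)
fs = 1024                                   # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
x = chirp(t, f0=50, f1=200, t1=1.0)         # slowly varying instantaneous frequency

# Random missing samples, replaced with zeros as in the thesis's setup
keep = rng.random(t.size) < 0.5
x_missing = np.where(keep, x, 0.0)

# Windowed TFR: the spectrogram of the zero-filled record exhibits the
# noise-like artifacts that sparse reconstruction is meant to remove
f, tt, S = spectrogram(x_missing, fs=fs, nperseg=128, noverlap=96)
print(S.shape)                              # (n_frequencies, n_time_frames)
```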
2

Albore, Alexandre. "Translation-based approaches to automated planning with incomplete information and sensing". Doctoral thesis, Universitat Pompeu Fabra, 2012. http://hdl.handle.net/10803/78939.

Full text
Abstract
Artificial Intelligence Planning is about acting in order to achieve a desired goal. Under incomplete information, the task of finding the actions needed to achieve the goal can be modelled as a search problem in the belief space. This task is costly, as belief space is exponential in the number of states, which is exponential in the number of variables. Good belief representations and heuristics are thus critical for scaling up in this setting. The translation-based approach to automated planning with incomplete information deals with both issues by casting the problem of search in belief space to a search problem in state space, where each node of the search space represents a belief state. We develop plan synthesis tools that use translated versions of planning problems under uncertainty, with partial or null sensing available. We show formally under which conditions the introduced translations are polynomial, and capture all and only the plans of the original problems. We study empirically the value of these translations.
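As a toy illustration of the underlying search space (not the thesis's translations, which compile beliefs into single state-space nodes), the sketch below represents a belief state as the set of states deemed possible; a conformant plan must reach the goal in all of them. All variables and actions are invented.

```python
# Toy conformant-planning setup: a belief state is the frozenset of states
# considered possible. Names and actions are invented for illustration.

def progress(belief, action):
    """Apply a deterministic action (state -> state) to every possible state."""
    return frozenset(action(s) for s in belief)

def is_goal(belief, goal_test):
    """With incomplete information, a plan must achieve the goal in ALL states."""
    return all(goal_test(s) for s in belief)

# Two boolean variables; the second one's initial value is unknown
initial_belief = frozenset({(0, 0), (0, 1)})
set_second = lambda s: (s[0], 1)        # action that makes variable 2 true
goal = lambda s: s[1] == 1

print(is_goal(initial_belief, goal))                         # False: not guaranteed yet
print(is_goal(progress(initial_belief, set_second), goal))   # True: a 1-step plan exists
```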
3

Villaverde, Michael. "Stochastic optimization approaches to pricing, hedging and investment in incomplete markets". Thesis, University of Cambridge, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.616209.

Full text
4

Iddrisu, Abdul-Karim. "Sensitivity analysis approaches for incomplete longitudinal data in a multi-centre clinical trial". Doctoral thesis, Faculty of Science, 2019. http://hdl.handle.net/11427/31396.

Full text
Abstract
The first major contribution of the thesis is the development of a sensitivity analysis strategy for dealing with incomplete longitudinal data. The second important contribution is the setting up of a simulation experiment to evaluate the performance of some of the sensitivity analysis approaches. The third contribution is that the thesis offers recommendations on which sensitivity analysis strategy to use and in what circumstances. It is recommended that, when drawing statistical inferences in the presence of missing data, methods of analysis based on plausible scientific assumptions should be used. One major issue is that such assumptions cannot be verified using the data at hand. Sensitivity analysis should therefore be performed to investigate the robustness of statistical inferences to plausible alternative assumptions about the missing data. The thesis implemented various sensitivity analysis strategies on incomplete longitudinal CD4 count data in order to investigate the effect of tuberculous pericarditis (TBP) treatment on CD4 count changes over time. The first contribution was achieved by formulating a primary analysis (which assumes that the data are missing at random) and then conducting sensitivity analyses to assess whether statistical inferences under the primary analysis model are sensitive to models that assume the data are not missing at random. The second contribution was achieved via a simulation experiment involving hypotheses on how the sensitivity analysis strategies would perform under varying rates of missing values and under model mis-specification. The third contribution was achieved based on our experience from the development and application of the sensitivity analysis strategies as well as the simulation experiment. Using the CD4 count data, we observed that statistical inferences under the primary analysis formulation are robust to the sensitivity analysis formulations, suggesting that the mechanism that generated the missing CD4 count measurements is likely to be missing at random. The results also revealed that TBP treatment does not interact with the HIV/AIDS treatment and had no significant effect on CD4 count changes over time. Our simulation results show that the sensitivity analysis strategies produced unbiased statistical inferences except when a strategy was inappropriately applied in a given trial setting or was mis-specified. Although the methods considered were applied to data in the IMPI trial setting, they can also be applied to clinical trials with similar settings. A sensitivity analysis strategy may give biased results not only because it has been mis-specified, but also because it has been applied in a wrongly defined trial setting. We therefore strongly encourage analysts to study these sensitivity analysis frameworks carefully, together with a clear and precise definition of the trial objective, in order to decide which sensitivity analysis strategy to use.
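A minimal sketch of one standard sensitivity analysis of this kind, a delta adjustment: impute dropouts under a MAR working model, then shift the imputed values by a sensitivity parameter delta towards "not missing at random" and watch how the estimate moves. The data, the regression imputation and the delta grid are illustrative assumptions, not the IMPI analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy longitudinal outcome with dropout at follow-up
n = 500
y1 = rng.normal(100, 20, n)                 # baseline CD4-like outcome
y2 = y1 + rng.normal(10, 15, n)             # true follow-up outcome
dropout = rng.random(n) < 0.3               # 30% missing at follow-up
y2_obs = np.where(dropout, np.nan, y2)

def mean_with_delta(y1, y2_obs, dropout, delta):
    """MAR regression imputation, shifted by delta for dropouts.
    delta = 0 reproduces the MAR primary analysis; delta < 0 assumes
    dropouts do worse than comparable completers (an MNAR departure)."""
    comp = ~dropout
    b1, b0 = np.polyfit(y1[comp], y2_obs[comp], 1)   # slope, intercept
    imputed = b0 + b1 * y1[dropout] + delta
    return (y2_obs[comp].sum() + imputed.sum()) / len(y1)

for delta in (0, -10, -20, -30):            # tipping-point style sweep
    print(delta, round(mean_with_delta(y1, y2_obs, dropout, delta), 2))
```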
5

Caliskan, Nilufer. "Asset Pricing Models: Stochastic Volatility And Information-based Approaches". Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608213/index.pdf.

Full text
Abstract
We present two option pricing models, both different from the classical Black-Scholes-Merton model. The first model, suggested by Heston, considers the case where the asset price volatility is stochastic. For this model we study the asset price process and give in detail the derivation of the European call option price process. The second model, suggested by Brody-Hughston-Macrina, describes the observation of certain information about the claim perturbed by a noise represented by a Brownian bridge. Here we also study in detail the properties of this noisy information process and give the derivations of both asset price dynamics and the European call option price process.
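For the first model, a minimal Euler Monte Carlo sketch of Heston dynamics with a European call payoff; the thesis derives the price analytically, whereas this is just a simulation check. All parameter values are illustrative, and full truncation of the variance is one common discretization choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative Heston parameters (not the thesis's)
S0, V0, r = 100.0, 0.04, 0.02
kappa, theta, sigma, rho = 2.0, 0.04, 0.3, -0.7   # mean reversion, vol-of-vol, correlation
T, K, n_steps, n_paths = 1.0, 100.0, 252, 100_000
dt = T / n_steps

S = np.full(n_paths, S0)
V = np.full(n_paths, V0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_paths)
    Vp = np.maximum(V, 0.0)                 # "full truncation" keeps the variance usable
    S *= np.exp((r - 0.5 * Vp) * dt + np.sqrt(Vp * dt) * z1)
    V += kappa * (theta - Vp) * dt + sigma * np.sqrt(Vp * dt) * z2

call = np.exp(-r * T) * np.maximum(S - K, 0.0).mean()
print(f"Heston Monte Carlo call price ~ {call:.3f}")
```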
6

Comerford, Liam. "Artificial neural network approaches and compressive sensing techniques for stochastic process estimation and simulation subject to incomplete data". Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2046540/.

Full text
Abstract
This research is themed around the development of tools for discrete analysis of stochastic processes subject to limited or missing data; more specifically, estimation of stochastic process power spectra from which new process time-histories may be simulated. In this context, the author proposes three novel approaches to power spectrum estimation subject to missing data, which comprise the main body of this work. Of particular importance is the fact that all three approaches are adaptable for use in both stationary and evolutionary power spectrum estimation. Numerous arrangements of missing data are tested to simulate a range of possible scenarios and demonstrate the versatility of the proposed methodologies. The first of the three approaches uses an artificial neural network (ANN) based model for stochastic process power spectrum estimation subject to limited/missing data. In this regard, an appropriately defined ANN is utilized to capture the stochastic pattern in the available data in an "average sense". Next, the extrapolation capabilities of the ANN are exploited for generating realizations of the underlying stochastic process. Finally, power spectrum estimates are derived based on established frequency analysis (e.g. Fourier analysis) or versatile joint time-frequency analysis techniques (e.g. harmonic wavelets) for the cases of stationary and non-stationary stochastic processes, respectively. One of the significant advantages of the approach is that no a priori knowledge about the data is assumed. The second approach uses compressive sensing (CS) to solve the same problem. In this setting, further assumptions are imposed on the nature of the underlying process of interest than in the ANN case, in particular that of sparsity in the frequency domain. The advantage is that, compared to the ANN, significant improvements in efficiency and accuracy are achieved, with increased reliability for larger amounts of missing data. Specifically, first an appropriate basis is selected for expanding the signal recorded in the time domain. As with the ANN approach, Fourier and harmonic wavelet bases are utilized. Next, an L1-norm minimization procedure is performed to obtain the sparsest representation of the signal in the selected basis. Further, an adaptive basis procedure is introduced that significantly improves results when working with stochastic process record ensembles. The final approach is somewhat different, in that it aims to quantify uncertainty in power spectrum estimation subject to missing data rather than provide deterministic predictions. By relying on relatively relaxed assumptions for the missing data, utilizing fundamental concepts from probability theory, and resorting to Fourier and harmonic wavelet based representations of stationary and non-stationary stochastic processes, respectively, a closed-form expression is derived for the probability density function (PDF) of the power spectrum value corresponding to a specific frequency. Numerical examples demonstrate the large extent to which any given single estimate using deterministic methods, even for small amounts of missing data, may be unrepresentative of the target spectrum. In this regard, the probabilistic approach can potentially be used to bound deterministic estimates, providing specific validation criteria for missing data reconstruction.
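A minimal sketch of the compressive-sensing step for the Fourier-basis case: a frequency-sparse record with 40% of its samples missing is recovered by plain iterative soft thresholding (ISTA), a simple L1-minimization scheme. The signal, missing rate, threshold and iteration count are illustrative; the adaptive-basis and harmonic-wavelet machinery of the thesis is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 256
t = np.arange(n)
x = np.sin(2 * np.pi * 12 * t / n) + 0.5 * np.sin(2 * np.pi * 40 * t / n)
observed = rng.random(n) < 0.6                 # 40% of the samples are missing

def A(c):                                      # synthesis: coefficients -> kept samples
    return np.fft.ifft(c).real[observed]

def At(y):                                     # (scaled) adjoint: kept samples -> coefficients
    z = np.zeros(n)
    z[observed] = y
    return np.fft.fft(z)

y = x[observed]
c = np.zeros(n, dtype=complex)
lam = 5.0                                      # soft-threshold level (hand-tuned)
for _ in range(200):
    g = c + At(y - A(c))                       # gradient step on the data misfit
    c = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - lam, 0.0)

peaks = np.argsort(np.abs(c[: n // 2]))[-2:]
print(sorted(peaks.tolist()))                  # ~ [12, 40]: the two true frequencies
```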
7

Tran, Trong Hieu. "Méthodes d'optimisation hybrides pour des problèmes de routages avec profits". Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30367.

Full text
Abstract
Combinatorial optimization is an essential branch of computer science and mathematical optimization that deals with problems involving a discrete and finite set of decision variables. In such problems, the main objective is to find an assignment that satisfies a set of specific constraints and optimizes a given objective function. One of the main challenges is that these problems can be hard to solve in practice. In many cases, incomplete methods are preferred to complete methods, since the latter may have difficulty solving large-scale problems within a limited amount of time, whereas incomplete methods can quickly produce high-quality solutions, which is a critical point in numerous applications. In this thesis, we investigate hybrid approaches that enhance incomplete search by exploiting complete search techniques. We deal with a concrete case study, the vehicle routing problem with profits. In particular, we aim to boost incomplete search algorithms by extracting knowledge during the search process and reasoning with the knowledge acquired in the past. The core idea is two-fold: (i) to learn conflicting solutions (that violate some constraints or are suboptimal) and exploit them to avoid reconsidering the same solutions and to guide the search, and (ii) to exploit good features of elite solutions in order to generate new solutions of higher quality. Furthermore, we investigate the development of a generic framework that decomposes and exchanges information between sub-modules to efficiently solve complex routing problems possibly involving optional customers, multiple vehicles, multiple time windows, multiple side constraints, and/or time-dependent transition times. The effectiveness of the proposed approaches is shown by various experiments on both standard benchmarks (e.g., the Orienteering Problem and its variants) and real-life datasets from the aerospace domain (e.g., the Earth observation satellite scheduling problem), possibly involving uncertain profits.
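A toy sketch of the conflict-learning idea on an orienteering-style selection problem: subsets of customers whose total time exceeds the budget are recorded as nogoods and never re-explored by the local search. The instance, the move, and the pruning rule are all invented for illustration.

```python
import random

random.seed(4)

# Ten customers, each with a profit and a service/travel time; select a
# subset maximizing profit within a time budget (orienteering-style toy).
profits = {c: random.randint(5, 20) for c in range(10)}
times = {c: random.randint(1, 9) for c in range(10)}
BUDGET = 25

conflicts = set()            # infeasible subsets learned during the search

def feasible(sel):
    if any(ng <= sel for ng in conflicts):   # pruned by a learned conflict
        return False
    if sum(times[c] for c in sel) > BUDGET:
        conflicts.add(frozenset(sel))        # learn a new nogood
        return False
    return True

best, best_profit = frozenset(), 0
current = frozenset()
for _ in range(2000):                        # simple randomized local search
    cand = current ^ {random.randrange(10)}  # flip one customer in or out
    if feasible(cand):
        current = cand
        p = sum(profits[x] for x in current)
        if p > best_profit:
            best, best_profit = current, p

print(sorted(best), best_profit)
```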
8

Papalaskari, Mary-Angela. "Minimal consequence: a semantic approach to reasoning with incomplete information". Thesis, University of Edinburgh, 1988. http://hdl.handle.net/1842/19214.

Full text
9

Herrmann, Felix J., Deli Wang, Gilles Hennenfent and Peyman P. Moghaddam. "Seismic data processing with curvelets: a multiscale and nonlinear approach". Society of Exploration Geophysicists, 2007. http://hdl.handle.net/2429/557.

Full text
Abstract
In this abstract, we present a nonlinear curvelet-based sparsity promoting formulation of a seismic processing flow, consisting of the following steps: seismic data regularization and the restoration of migration amplitudes. We show that the curvelet’s wavefront detection capability and invariance under the migration-demigration operator lead to a formulation that is stable under noise and missing data.
10

Paparistodemo, Marios. "Multinomial lattices and a quadratic programming approach for optimal replication in incomplete markets". Thesis, Imperial College London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.271650.

Full text
11

Maximin, Grégory. "Les formes organisationnelles hybrides dans le secteur des télécommunications et des nouvelles technologies". Thesis, Antilles, 2022. http://www.theses.fr/2022ANTI0734.

Full text
Abstract
The aim of this thesis is to show by what strategic and organizational means firms in sectors that are intensive in new technologies and innovation cope with asset specificity and with the dynamics of economic globalization. New Institutional Economics (NIE) guides our work through a contractualist conception of the firm, drawing on transaction cost economics and the theory of incomplete contracts; other approaches to the firm are also discussed in order to enrich the academic scope of the thesis. Moreover, the notion of strategic alliance is central to our work, and it allows us to articulate the concepts of Organization and Market so as to highlight certain firm dynamics in a market economy. The thesis also considers strategic alliances from two further perspectives: the multinationalization of innovative firms, and the organizational response that strategic alliances constitute in the face of the specificities of innovative sectors. Game theory and the concept of Nash equilibrium help us model and describe the dynamics of firms in the telecommunications sector, while the dynamics of other innovative sectors are studied through the notion of strategic alliance. The influence of public decisions and policies on telecommunications regulation and on stimulating innovation is also discussed; it can be seen in particular in competition policy, when the public regulator allows competing firms in innovative sectors to cooperate upstream, for example in joint research and development. Research and development models are presented in the thesis. The thesis also includes empirical data highlighting the weight of research-and-development-intensive sectors, and hence of innovation, in the economy; the geographical scope of these data is international. Building on the notion of strategic alliance, the thesis details organizational processes and behavioral strategies (game models) that encourage innovation, sustain strong asset specificity, and provide the responsiveness firms need in a globalized economy. We conclude by identifying the organizational forms best suited to cope with the strong asset specificity of the telecommunications sector: the post-merger firm and the relational joint venture; for the latter, we demonstrate that the presence of relational contracts reduces the incentive to renegotiate and protects against contractual opportunism.
12

Menon, R. "Translational bioinformatics and systems biology approaches to genetic and transcriptional data in complex human disorders". Doctoral thesis, Università degli Studi di Milano, 2013. http://hdl.handle.net/2434/221054.

Full text
Abstract
Human complex diseases are caused by genetic and environmental factors. Genome-wide association studies (GWAS) aim to identify common variants predisposing to those disorders. However, to date, the data generated from such studies have not been extensively explored to identify the molecular and functional framework hosting the susceptibility genes. We reconstructed the multiple sclerosis (MS) genetic interactome and searched for interactions with genes predisposing to neurodegenerative or autoimmune diseases such as Parkinson's disease (PD), Alzheimer's disease (AD), multiple sclerosis (MS), rheumatoid arthritis (RA) and type 1 diabetes (T1D). We observed that several genes predisposing to the other autoimmune or neurodegenerative disorders may come into contact with the MS interactome, suggesting that susceptibility to distinct diseases may converge towards common molecular and biological networks. In order to test this hypothesis, we performed pathway enrichment analyses on each disease interactome independently. Several aspects of immune function and growth factor signaling pathways appeared in all autoimmune diseases. Further, paired analyses of the disease interactomes revealed significant molecular and functional relatedness among the diseases. The shift from single genes to molecular frameworks via a systems biology approach therefore highlighted several known pathogenic processes, indicating that changes in these functions might be driven or sustained by the framework linked to genetic susceptibility. Notably, MS is a complex disease of the central nervous system (CNS), but many of the susceptibility genes play a role in the immune system. Interestingly, the most widely used therapeutic drugs in MS are either immunosuppressive or immunomodulatory agents, indicating that targeting the peripheral immune system is beneficial to patients with this CNS disorder. Next, we measured global gene expression in peripheral blood mononuclear cells (PBMCs) from MS and healthy subjects to discover disease genes, molecular biomarkers and drug targets. Extending the bioinformatics analysis of the transcriptome data to the network-biology level enabled us to identify a few crucial transcriptional regulators in MS. Further, as a first step towards translational research, studies were conducted in the animal model of MS, based on the outcomes of the bioinformatics analysis. Significant amelioration of disease activity was observed in diseased animals treated with a drug targeting the SP1 transcription factor, compared to the untreated group. Hence, disease transcriptomics combined with network-biology analysis provided a powerful platform for the identification of functional networks and molecular targets in MS.
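A minimal sketch of the over-representation test underlying such pathway enrichment analyses, assuming SciPy: a one-sided hypergeometric tail probability for the overlap between a gene list and a pathway. All counts are invented.

```python
from scipy.stats import hypergeom

# Made-up numbers for illustration
N = 20000        # genes in the background universe
K = 150          # genes annotated to the pathway
n = 300          # genes in the disease interactome / gene list
k = 12           # overlap between the list and the pathway

# P(overlap >= k) under random sampling without replacement
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value = {p_value:.2e}")
```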
13

Rasch, Vibeke. "Unsafe abortion in Tanzania: an empathetic approach to improve post-abortion quality of care". Stockholm, 2003. http://diss.kib.ki.se/2003/91-7349-554-9.

Full text
14

Gamble, Christopher Thomas. "A Bayesian chromosome painting approach to detect signals of incomplete positive selection in sequence data : applications to 1000 genomes". Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:e1f3b484-59b9-4703-ae09-67079408c424.

Full text
Abstract
Methods to detect patterns of variation associated with ongoing positive selection often focus on identifying regions of the genome with extended haplotype homozygosity, indicative of recently shared ancestry. While these methods have been shown to be powerful, they face two major challenges. First, they are constructed to detect variation associated with a classical selective sweep: a single haplotype background gets swept up to a higher than expected frequency given its age. Recent studies have shown that other forms of positive selection, e.g. selection on standing variation, may be more prevalent than previously thought. Under such evolution, a mutation that is already segregating in the population becomes beneficial, possibly as a result of an environmental change. The second challenge with these methods is that they base their inference on non-parametric tests of significance, which can result in uncontrolled false positive rates. We tackle these problems in two ways. First, by exploiting a widely used model in population genomics, we construct a new approach to detect regions where a subset of the chromosomes are much more related than expected genome-wide. Using this metric we show that it is sensitive both to classical selective sweeps and to soft selective sweeps, e.g. selection on standing variation. Second, building on existing methods, we construct a Bayesian test which bi-partitions chromosomes at every position based on their allelic type and tests for association between chromosomes carrying one allele and a significantly reduced time to common ancestor. Using simulated data we show that this approach is powerful, fast and robust for detecting signals of positive selection in sequence data. Moreover, by comparing our model to existing techniques, we show that we have similar power to detect recent classical selective sweeps and considerably greater power to detect soft selective sweeps. We apply our method, ABACUS, to three human populations using data from the 1000 Genomes Project. Using existing and novel candidates of positive selection, we show that the results of ABACUS and existing methods are comparable in regions of classical selection, and arguably superior in regions that show evidence of recent selection on standing variation.
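A toy computation of the quantity these methods build on, extended haplotype homozygosity (EHH): the probability that two randomly drawn chromosomes are identical over a window extending from a core SNP. The simulated 0/1 haplotypes are illustrative; under neutrality the statistic decays quickly with distance, and unusually slow decay is the sweep signal.

```python
import numpy as np

rng = np.random.default_rng(5)

n_hap, n_snp, core = 100, 201, 100
haps = rng.integers(0, 2, size=(n_hap, n_snp))   # one 0/1 row per chromosome

def ehh(haps, core, dist):
    """P(two random chromosomes are identical from the core SNP out to
    `dist` sites on either side)."""
    seg = haps[:, core - dist: core + dist + 1]
    _, counts = np.unique(seg, axis=0, return_counts=True)
    n = haps.shape[0]
    return (counts * (counts - 1)).sum() / (n * (n - 1))

for d in (0, 2, 5, 10):
    print(d, round(ehh(haps, core, d), 3))       # decays quickly without selection
```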
15

Wan, Wei. "A New Approach to the Decomposition of Incompletely Specified Functions Based on Graph Coloring and Local Transformation and Its Application to FPGA Mapping". PDXScholar, 1992. https://pdxscholar.library.pdx.edu/open_access_etds/4698.

Full text
Abstract
The thesis presents a new approach to the decomposition of incompletely specified functions and its application to FPGA (Field Programmable Gate Array) mapping. Five methods are developed in order to efficiently perform decomposition and mapping to lookup-table based FPGAs: Variable Partitioning, Graph Coloring, Bond Set Encoding, CLB Reusing and Local Transformation. 1) Variable Partitioning is a high-quality heuristic method used to find the "best" partitions, avoiding the very time-consuming testing of all possible decomposition charts, which is impractical when the input function has many variables. 2) Graph Coloring is another high-quality heuristic, used to perform a quasi-optimum don't-care assignment; it makes it possible for the program to accept incompletely specified functions and to assign the unspecified part of the function quasi-optimally. 3) The Bond Set Encoding algorithm is used to simplify the decomposed blocks during the process of decomposition. 4) The CLB Reusing algorithm is used to reduce the number of CLBs used in the final mapped circuit. 5) The Local Transformation concept is introduced to transform non-decomposable functions into decomposable ones, thus making it possible to apply the decomposition method to FPGA mapping. All the above methods are incorporated into a program named TRADE, which performs global optimization over the input functions, whereas most existing methods recursively perform local optimization over some kind of network-like graph, and few of them can handle incompletely specified functions. Cube calculus is used in the TRADE program; the operations are global and very fast. A short description of the TRADE program and an evaluation of the results are provided at the end of the thesis. For many benchmarks the TRADE program gives better results than any program published in the literature.
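A minimal sketch of the graph-coloring step: rows of a decomposition chart that are pairwise incompatible form a graph, and a greedy coloring assigns don't-cares so that same-colored rows can be merged. The incompatibility graph here is invented, not taken from TRADE.

```python
# adj maps each chart row to the set of rows it is incompatible with
def greedy_coloring(adj):
    color = {}
    for v in sorted(adj, key=lambda u: -len(adj[u])):   # largest degree first
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(adj)) if c not in used)
    return color

# Rows with the same color are compatible and can be merged, which fixes
# the don't-care entries; fewer colors means a simpler decomposed block.
incompat = {0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {1}}
print(greedy_coloring(incompat))   # e.g. {0: 0, 1: 1, 2: 1, 3: 0}
```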
16

Samuel, John. "Feeding a data warehouse with data coming from web services. A mediation approach for the DaWeS prototype". Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22493/document.

Full text
Abstract
The role of the data warehouse for business analytics cannot be overstated for any enterprise, irrespective of its size. But the growing dependence on web services has resulted in a situation where enterprise data is managed by multiple autonomous and heterogeneous service providers. We present our approach and its associated prototype DaWeS [Samuel, 2014; Samuel and Rey, 2014; Samuel et al., 2014], a DAta warehouse fed with data coming from WEb Services, to extract, transform and store enterprise data from web services and to build performance indicators from the stored enterprise data, hiding from the end users the heterogeneity of the numerous underlying web services. Its ETL process is grounded on a mediation approach usually used in data integration. This enables DaWeS (i) to be fully configurable in a declarative manner only (XML, XSLT, SQL, datalog) and (ii) to make part of the warehouse schema dynamic so it can be easily updated. (i) and (ii) allow DaWeS managers to shift from development to administration when they want to connect to new web services or to update the APIs (application programming interfaces) of already connected ones. The aim is to make DaWeS scalable and adaptable to smoothly face the ever-changing and growing web services offer. We point out that this also enables DaWeS to be used with the vast majority of actual web service interfaces, defined with basic technologies only (HTTP, REST, XML and JSON) rather than with more advanced standards (WSDL, WADL, hRESTS or SAWSDL), since these more advanced standards are not yet widely used to describe real web services. In terms of applications, the aim is to allow a DaWeS administrator to provide small and medium companies with a service to store and query their business data coming from their usage of third-party services, without having to manage their own warehouse. In particular, DaWeS enables the easy design (as SQL queries) of personalized performance indicators. We present this mediation approach for ETL and the architecture of DaWeS in detail. Besides its industrial purpose, building DaWeS brought forth further scientific challenges, such as the need to optimize the number of web service API operation calls and to handle incomplete information. We propose a bound on the number of calls to web services; this bound is a tool to compare future optimization techniques. We also present a heuristic to handle incomplete information.
17

Wu, Chih-Hsueh and 吳志學. "Incomplete Rank Data Analysis-Likelihood Approach". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/34089144293718157738.

Full text
Abstract
Master's thesis, National Taipei University, Department of Statistics, ROC academic year 93 (2004-2005).
Ranking several items is very common in daily life and produces rank data. Complete rank data arise from a (randomized) complete block design, in which the block is the decision-maker (respondent, voter). Decision-makers can be people, households, firms, judges, or any other decision-making unit. Each decision-maker is asked to rank a set of items (candidates). However, a rank data set may be incomplete, with missing responses. Incomplete rank data usually arise from a (randomized) balanced, partially balanced or unbalanced incomplete block design. Incomplete rank data contain missing responses, and missingness is a ubiquitous problem in medical and social research. In this thesis, we develop a likelihood method to analyze incomplete rank data with missing responses. The proposed method combines the proportional hazards model and the EM algorithm to estimate the relative probability that a certain item is selected as the first rank. In simulation studies, we compare the analysis of incomplete rank data under the proportional hazards model with the proposed likelihood method. The results show that the proposed method gives reasonably good parameter estimates, with small improvements in the mean squared errors (MSEs) of the parameter estimates. The results also show that the likelihood ratio test has a lower type I error rate under the null hypothesis of equal ranking and still achieves good power; the overall performance under equal ranking is better than under unequal ranking. We provide an analysis of real data with missing responses, arising from an unbalanced block design conducted by a coffee company to determine consumers' preference for various flavors added to their coffee. We show that the proposed method yields smaller variances of the parameter estimates than an analysis that ignores the missing responses. Furthermore, the rank ordering of the real data based on the proposed likelihood method is the same as that from the proportional hazards model without considering the missing responses.
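A minimal sketch of a likelihood of this kind: a Plackett-Luce-style model (a discrete stage-wise analogue of the proportional hazards formulation) whose choice probabilities accept incomplete rankings directly; it is fitted below by generic numerical maximization rather than the thesis's EM algorithm, and the rankings are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Each ranking lists only the items the respondent actually ranked, best
# first, so incomplete rankings enter the likelihood without imputation.
rankings = [(0, 1, 2), (2, 0), (1, 0, 3), (3, 1), (0, 2, 1, 3)]
n_items = 4

def neg_log_lik(beta):
    w = np.exp(np.concatenate(([0.0], beta)))   # item worths; beta_0 = 0 for identifiability
    ll = 0.0
    for r in rankings:
        rest = list(r)
        while len(rest) > 1:
            # Stage-wise choice: P(top item wins among the items still in the pool)
            ll += np.log(w[rest[0]]) - np.log(w[rest].sum())
            rest.pop(0)
    return -ll

res = minimize(neg_log_lik, np.zeros(n_items - 1))
worths = np.exp(np.concatenate(([0.0], res.x)))
print(worths / worths.sum())   # estimated P(item would be ranked first)
```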
18

Su, Hsiu-Te and 蘇修德. "A Robust Watermarking Approach by Using Balanced Incomplete Block Design". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/25871144098121461828.

Full text
Abstract
Master's thesis, National Taipei University, Graduate Institute of Communication Engineering, ROC academic year 95 (2006-2007).
Digital watermarking is a technique by which information can be embedded into digital content. In recent years, digital content has been used extensively, especially over the Internet, and digital copyright protection has therefore become an increasingly serious problem. Digital watermarking can be a good solution owing to its robustness and invisibility. This study presents a new watermarking approach based on balanced incomplete block designs (BIBD). The watermark is embedded into the high-frequency wavelet coefficients of an original image, with characteristics of the human visual system used to control the watermark strength. At the receiver side, the original image is used to extract the embedded watermark, and the mathematical structure of the BIBD is then used to identify the copyright data. Robustness is achieved by exploiting the mathematical structure of the BIBD. Because both invisibility and a large information capacity are satisfied, the approach is also suitable for covert communication.
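A minimal sketch of the BIBD idea, assuming the PyWavelets package: the blocks of a (7,3,1) design (the Fano plane) act as codewords, and because any two blocks share exactly one point, a corrupted or extracted mark can be identified by correlation. The design, embedding positions and strength stand in for the thesis's actual choices, and no human-visual-system model is included.

```python
import numpy as np
import pywt

rng = np.random.default_rng(6)

# Blocks of the (7,3,1) BIBD (Fano plane); any two blocks share one point
BLOCKS = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
          (1, 4, 6), (2, 3, 6), (2, 4, 5)]

def codeword(block_id):
    """7-entry +/-1 incidence vector of one BIBD block."""
    bits = np.zeros(7)
    bits[list(BLOCKS[block_id])] = 1
    return 2 * bits - 1

img = rng.random((64, 64))                    # stand-in for the host image
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')     # high-frequency detail subbands

mark = codeword(block_id=3)
strength = 0.05                               # would come from an HVS model
cD_marked = cD.copy()
cD_marked.flat[:7] += strength * mark         # embed at 7 fixed positions

marked = pywt.idwt2((cA, (cH, cV, cD_marked)), 'haar')

# Non-blind extraction: compare against the original image's coefficients
_, (_, _, cD2) = pywt.dwt2(marked, 'haar')
recovered = np.sign(cD2.flat[:7] - cD.flat[:7])
scores = [float(np.dot(recovered, codeword(b))) for b in range(7)]
print(int(np.argmax(scores)))                 # -> 3, the embedded block id
```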
19

"Pricing and hedging derivative securities in incomplete markets : an e-arbitrage approach". Sloan School of Management, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/2673.

Full text
Abstract
by Dimitris Bertsimas, Leonid Kogan, and Andrew W. Lo.
Cover title.
Includes bibliographical references (p. 57-60).
Partially supported by the MIT Laboratory for Financial Engineering and a Presidential Young Investigator Award with matching funds from Draper Laboratory. DDM-9158118
20

Suh, Joseph Che. "A study of translation strategies in Guillaume Oyono Mbia's plays". Thesis, 2005. http://hdl.handle.net/10500/1687.

Full text
Abstract
This thesis is focused on a study of translation strategies in Guillaume Oyono Mbia's plays. By using the sociological, formalistic and semiotic approaches to literary criticism to inform the analysis of the source texts, and by applying descriptive models outlined within the framework of descriptive translation studies (DTS) to compare the source and target texts, the study establishes that in his target texts Oyono Mbia, a self-translating author, has produced a realistic and convincing portrait of the native Bulu culture and society depicted in his source texts, by adopting the same default preservation and foreignizing strategy employed in his source texts. Oyono Mbia's works, his translation strategies and his translational behaviour are situated in the context of the prevailing trend and attitude (from the sixties to date) of African writers writing in European languages, and it is posited that this category of writers are in effect creative translators and that the strategies they use in their original compositions are the same as those outlined by translation scholars or effectively used by practitioners. These strategies enable the writer and the translator of this category of African literature to preserve the "Africanness" which is the essence and main distinguishing feature of that literature. Contrary to some scholars (cf. Bandia 1993:58) who regard the translation phenomenon evident in the creative writings of African writers writing in European languages as a process which is covert, semantic and secondary, the present study of Oyono Mbia's translation strategies clearly reveals the process as overt, communicative and primary. Taking Oyono Mbia's strategies as a case in point, this study postulates that since, for the most part, the African writer writing in a European language has captured the African content and form in his original creative translation, what the translator simply needs to do is carry over such content and form to the other European language.
Linguistics
D.Litt. et Phil. (Linguistics)
21

"A Bayesian Network Approach to Early Reliability Assessment of Complex Systems". Doctoral diss., 2016. http://hdl.handle.net/2286/R.I.38656.

Full text
Abstract
Bayesian networks are powerful tools in system reliability assessment due to their flexibility in modeling the reliability structure of complex systems. This dissertation develops Bayesian network models for system reliability analysis through the use of Bayesian inference techniques. Bayesian networks generalize fault trees by allowing components and subsystems to be related by conditional probabilities instead of deterministic relationships; thus, they provide analytical advantages in situations where the failure structure is not well understood, especially during the product design stage. In order to tackle this problem, one needs to utilize auxiliary information such as reliability information from similar products and domain expertise. For this purpose, a Bayesian network approach is proposed to incorporate data from functional analysis and parent products. The functions with low reliability and their impact on other functions in the network are identified, so that design changes can be suggested for system reliability improvement. A complex system does not necessarily have all components being monitored at the same time, causing another challenge in the reliability assessment problem. Sometimes a limited number of sensors are deployed in the system to monitor the states of some components or subsystems, but not all of them. Data simultaneously collected from multiple sensors on the same system are analyzed using a Bayesian network approach, and the conditional probabilities of the network are estimated by combining failure information and expert opinions at both the system and component levels. Several data scenarios with discrete, continuous and hybrid data (both discrete and continuous) are analyzed. Posterior distributions of the reliability parameters of the system and components are assessed using simultaneous data. Finally, a Bayesian framework is proposed to incorporate and reconcile different sources of prior information, including expert opinions and component information, in order to form a prior distribution for the system. Incorporating expert opinion in the form of pseudo-observations substantially simplifies statistical modeling, as opposed to the pooling techniques and supra-Bayesian methods used for combining prior distributions in the literature. The proposed methods are demonstrated with several case studies.
Doctoral dissertation, Industrial Engineering, 2016.
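A minimal sketch of the fault-tree generalization described in the abstract: a two-component system whose system node depends on the components through conditional probabilities rather than a deterministic AND gate, with the marginal reliability and one diagnostic query computed by enumeration. All probabilities are invented.

```python
import itertools

p_c1, p_c2 = 0.95, 0.90                      # P(component works)

# P(system works | c1, c2): a noisy relationship, not a hard gate
p_sys = {(1, 1): 0.999, (1, 0): 0.60, (0, 1): 0.40, (0, 0): 0.01}

def p_comp(c, p):                            # component prior
    return p if c == 1 else 1 - p

# Marginal system reliability by total probability
rel = sum(p_comp(c1, p_c1) * p_comp(c2, p_c2) * p_sys[(c1, c2)]
          for c1, c2 in itertools.product((0, 1), repeat=2))
print(f"P(system works) = {rel:.4f}")

# Diagnostic query by Bayes' rule: P(c1 failed | system failed)
num = sum(p_comp(0, p_c1) * p_comp(c2, p_c2) * (1 - p_sys[(0, c2)])
          for c2 in (0, 1))
print(f"P(c1 failed | system failed) = {num / (1 - rel):.3f}")
```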
22

Hsin, Che-Wei and 辛哲瑋. "A Test of judging the number one product in a Marketing Survey: A multinomial approach for Incomplete Rank Data". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/4jwd4b.

Full text
Abstract
Master's thesis, Chung Yuan Christian University, Graduate Institute of Applied Mathematics, ROC academic year 93 (2004-2005).
One particular area of business activity that depends on detailed sampling is marketing: decisions on bringing a new product to market are made on the basis of sample survey data. Data are often obtained from face-to-face surveys. A person asked to answer a survey question may or may not be able to rank the products completely according to his or her experience, and the more products in the survey, the more incomplete ranks in the data set. This thesis categorizes the data into complete rank data and incomplete rank data, and then applies a multinomial approach to obtain reasonable estimators for all possible ranks. A test of the difference between the ranks of two products is also provided, using the delta method to approximate the distribution of the test statistic. Finally, the baseline-category logit model is introduced to see whether this difference changes with the levels of the predictor variable.
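A minimal sketch of such a delta-method test: with multinomial counts of how often each product is ranked first, the variance of the difference of two proportions follows from the multinomial covariance matrix, giving an approximate z-test. The counts are invented.

```python
import numpy as np
from scipy.stats import norm

counts = np.array([120, 95, 85])     # times each of 3 products was ranked first
n = counts.sum()
p = counts / n

# Delta method with gradient (1, -1, 0) on the multinomial proportions:
# Var(p_A - p_B) = [p_A(1-p_A) + p_B(1-p_B) + 2 p_A p_B] / n,
# since Cov(p_A, p_B) = -p_A p_B / n
var = (p[0] * (1 - p[0]) + p[1] * (1 - p[1]) + 2 * p[0] * p[1]) / n
z = (p[0] - p[1]) / np.sqrt(var)
p_value = 2 * norm.sf(abs(z))
print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
```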
23

Freitas, Mauro. "Personalized approach on a smart image search engine, handling default data". Master's thesis, 2014. http://hdl.handle.net/1822/37466.

Full text
Abstract
Master's dissertation in Informatics Engineering.
Search engines such as Google, Yahoo! and Bing are becoming widely used tools. Adjacent to these tools is the need to bring the results closer to each individual user. It is in this area that some work has been developed recently, allowing users to take advantage of the information presented to them rather than receiving it in a random or generically ordered fashion. This process of bringing the user closer to the results is called personalization. Personalization is a process that involves obtaining and storing information about the users of a system, which is later used to adapt the information to be presented. This process is crucial in many situations where content filtering is a requirement, since we deal daily with large amounts of information and it is not always helpful (Doman J., 2012). In this project, the importance of personalization is evaluated in the context of intelligent image search, as a contribution to the CLOUD9-SIS project. We evaluate which information should be processed, how it should be processed and how it should be presented, taking existing search engines into account as examples. These concepts are then used in a new image search system capable of adapting its results according to preferences captured from user interactions. Images were chosen only because CLOUD9-SIS is intended to return images as results; no image-interpretation technique was developed or used.
