Theses on the topic "Computational analysis"

Listed below are the top 50 theses for research on the topic "Computational analysis". Abstracts are included where they are available in the record metadata.

1

Pocock, Matthew Richard. "Computational analysis of genomes". Thesis, University of Cambridge, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.615724.

2

Cattinelli, I. "INVESTIGATIONS ON COGNITIVE COMPUTATION AND COMPUTATIONAL COGNITION". Doctoral thesis, Università degli Studi di Milano, 2011. http://hdl.handle.net/2434/155482.

Abstract
This thesis describes our work at the boundary between Computer Science and Cognitive (Neuro)Science. In particular, (1) we have worked on methodological improvements to clustering-based meta-analysis of neuroimaging data, a technique that makes it possible to collectively assess, in a quantitative way, activation peaks from several functional imaging studies, in order to extract the most robust results in the cognitive domain of interest. Hierarchical clustering is often used in this context, yet it is prone to the problem of non-uniqueness of the solution: a different permutation of the same input data might result in a different clustering result. In this thesis, we propose a new version of hierarchical clustering that solves this problem. We also show the results of a meta-analysis, carried out using this algorithm, aimed at identifying specific cerebral circuits involved in single word reading. Moreover, (2) we describe preliminary work on a new connectionist model of single word reading, named the two-component model because it postulates a cascaded information flow from a more cognitive component, which computes a distributed internal representation for the input word, to an articulatory component, which translates this code into the corresponding sequence of phonemes. Output production starts when the internal code, which evolves in time, reaches a sufficient degree of clarity; this mechanism has been advanced as a possible explanation for behavioral effects consistently reported in the literature on reading, with a specific focus on the so-called serial effects. The model is discussed here in its strengths and weaknesses. Finally, (3) we consider how features that are typical of human cognition can inform the design of improved artificial agents; here, we have focused on modelling concepts inspired by emotion theory. A model of emotional interaction between artificial agents, based on probabilistic finite state automata, is presented: in this model, agents have personalities and attitudes that can change through the course of interaction (e.g. by reinforcement learning) to achieve autonomous adaptation to the interaction partner. Markov chain properties are then applied to derive reliable predictions of the outcome of an interaction. Taken together, these works show how the interplay between Cognitive Science and Computer Science can be fruitful, both for advancing our knowledge of the human brain and for designing increasingly intelligent artificial systems.
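The non-uniqueness problem mentioned above is easy to reproduce: most hierarchical clustering implementations break ties between equally distant merge candidates by input order. The sketch below, a toy average-linkage routine with a fixed tie-breaking rule (not the algorithm proposed in the thesis; data and cluster count are illustrative), shows one generic way to make the merge order deterministic:

```python
import numpy as np

def average_linkage(X, n_clusters):
    # Each cluster is a sorted tuple of point indices. Candidate merges are
    # ranked by (average distance, merged index tuple), so ties in distance
    # are broken by a fixed rule and the merge order is fully deterministic.
    clusters = [(i,) for i in range(len(X))]
    while len(clusters) > n_clusters:
        best_key, best_pair = None, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.mean([np.linalg.norm(X[i] - X[j])
                             for i in clusters[a] for j in clusters[b]])
                key = (d, tuple(sorted(clusters[a] + clusters[b])))
                if best_key is None or key < best_key:
                    best_key, best_pair = key, (a, b)
        a, b = best_pair
        merged = tuple(sorted(clusters[a] + clusters[b]))
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)]
        clusters.append(merged)
    return sorted(clusters)

# Twelve toy "activation peaks" in 3-D coordinates, grouped into 3 clusters.
peaks = np.random.default_rng(0).normal(size=(12, 3))
print(average_linkage(peaks, 3))
```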
3

Shenoy, A. "Computational analysis of facial expressions". Thesis, University of Hertfordshire, 2010. http://hdl.handle.net/2299/4359.

Abstract
This PhD work constitutes a series of inter-disciplinary studies that use biologically plausible computational techniques and experiments with human subjects in analyzing facial expressions. The performance of the computational models and human subjects is analyzed in terms of accuracy and response time. The computational models process images in three stages: pre-processing, dimensionality reduction and classification. The pre-processing of face expression images includes feature extraction and dimensionality reduction. Gabor filters are used for feature extraction as they are among the most biologically plausible computational methods. Several dimensionality reduction methods are used: Principal Component Analysis (PCA), Curvilinear Component Analysis (CCA) and Fisher Linear Discriminant (FLD), followed by classification with Support Vector Machines (SVM) and Linear Discriminant Analysis (LDA). The six basic prototypical facial expressions that are universally accepted are used for the analysis: angry, happy, fear, sad, surprise and disgust. The performance of the computational models in classifying each expression category is compared with that of the human subjects. The Effect size and Encoding face enable the discrimination of the areas of the face specific to a particular expression. The Effect size in particular emphasizes the areas of the face that are involved during the production of an expression. This use of Effect size on faces has not been reported previously in the literature and has shown very interesting results. The detailed PCA analysis showed the significant PCA components specific to each of the six basic prototypical expressions. An important observation from this analysis was that with Gabor filtering followed by non-linear CCA for dimensionality reduction, the dataset vector size may be reduced to a very small number, in most cases just 5 components. The hypothesis that the average response time (RT) for the human subjects in classifying the different expressions is analogous to the distance of the data points from the classification hyper-plane was verified: the harder a facial expression is for human subjects to classify, the closer it lies to the classifier's separating hyper-plane. A bi-variate correlation analysis of the distance measure and the average RT suggested a significant anti-correlation. Signal detection theory (SDT), via the d-prime measure, determined how well the model or the human subjects distinguished an expressive face from a neutral one. On comparison, human subjects are better at classifying surprise, disgust, fear and sad expressions, while the RAW computational model is better able to distinguish angry and happy expressions. To summarize, there seem to be some similarities between the computational models and human subjects in the classification process.
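A minimal sketch of the three-stage pipeline the abstract describes, Gabor feature extraction, dimensionality reduction and classification, using scikit-image and scikit-learn; the images, labels and filter-bank parameters below are synthetic stand-ins, and the thesis's CCA stage and face datasets are not reproduced:

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def gabor_features(img, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    # Mean/std of Gabor magnitude responses over a small filter bank.
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(img, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.std()]
    return np.array(feats)

# Toy data standing in for expression images (e.g. 48x48 face crops).
rng = np.random.default_rng(0)
images = rng.random((40, 48, 48))
labels = rng.integers(0, 6, size=40)      # six basic expressions

X = np.stack([gabor_features(im) for im in images])
# Note the very low-dimensional code (5 components), echoing the finding above.
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="linear"))
clf.fit(X, labels)
print(clf.predict(X[:5]))
```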
4

Etherington, Graham John. "Computational analysis of foodborne viruses". Thesis, University of East Anglia, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.423473.

5

Wen, Wen. "Computational texture analysis and segmentation". Thesis, University of Strathclyde, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358812.

6

Buchala, Samarasena. "Computational analysis of face images". Thesis, University of Hertfordshire, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.431938.

7

Hussain, R. "Computational geometry using Fourier analysis". Thesis, De Montfort University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391483.

8

Ikoma, Hayato. "Computational microscopy for sample analysis". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91427.

Abstract
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 41-44).
Computational microscopy is an emerging technology which extends the capabilities of optical microscopy with the help of computation. One notable example is super-resolution fluorescence microscopy, which achieves sub-wavelength resolution. This thesis explores novel applications of computational imaging methods to fluorescence microscopy and oblique illumination microscopy. In fluorescence spectroscopy, we have developed a novel nonlinear matrix unmixing algorithm to separate fluorescence spectra distorted by absorption effects. By extending the method to tensor form, we have also demonstrated the performance of a nonlinear fluorescence tensor unmixing algorithm on spectral fluorescence imaging. In the future, this algorithm may be applied to fluorescence unmixing in deep tissue imaging. The performance of the two algorithms was examined in simulation and experiments. In another project, we applied switchable multiple oblique illuminations to reflected-light microscopy. While the proposed system is easily implemented compared to existing methods, we demonstrate that the microscope detects the direction of surface roughness whose height is as small as the illumination wavelength.
by Hayato Ikoma.
S.M.
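For contrast with the nonlinear unmixing developed in the thesis, the following sketch shows plain linear spectral unmixing by nonnegative least squares, the baseline that absorption distortion breaks; spectra and abundances are synthetic:

```python
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(450, 700, 120)

def peak(center, width):
    # Gaussian stand-in for a fluorophore emission spectrum.
    return np.exp(-((wavelengths - center) / width) ** 2)

# Known endmember spectra (columns) and a noisy synthetic mixture.
A = np.column_stack([peak(520, 25), peak(580, 30), peak(640, 35)])
true_abundance = np.array([0.6, 0.1, 0.3])
y = A @ true_abundance + 0.01 * np.random.default_rng(1).normal(size=len(wavelengths))

# Recover nonnegative abundances; a nonlinear absorption term would
# violate this linear model and require the thesis's approach instead.
abundance, residual = nnls(A, y)
print(np.round(abundance, 3))   # close to [0.6, 0.1, 0.3]
```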
9

Xu, Yangjian. "Computational analysis of fretting fatigue". Düsseldorf VDI-Verl, 2009. http://d-nb.info/996624554/04.

10

Li, Xiang. "Computational analysis of ultraviolet reactors /". Online version of thesis, 2009. http://hdl.handle.net/1850/11175.

11

SCARDONI, Giovanni. "Computational Analysis of Biological networks". Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/343983.

Abstract
This thesis, treating both topological and dynamic points of view, concerns several aspects of biological network analysis. Regarding the topological analysis of biological networks, the main contribution is the node-oriented point of view: instead of concentrating on global properties of the networks, we analyze them in order to extract properties of single nodes. An excellent way to address this problem is to use node centralities. Node centralities make it possible to identify nodes that have a relevant role in the network structure. This may not be enough when dealing with a biological network, since the role of a protein also depends on its biological activity, which can be detected with lab experiments. Our approach is to integrate centrality analysis with data from biological experiments. A protocol of analysis has been produced, and the CentiScaPe tool for computing network centralities and integrating topological analysis with biological data has been designed and implemented. CentiScaPe has been applied to a human kino-phosphatome network and, following our protocol, kinases and phosphatases with the highest centrality values have been extracted, creating a new subnetwork of the most central kinases and phosphatases. A lab experiment established which of these proteins presented high activation levels, and through CentiScaPe the proteins with both high centrality values and high activation levels were easily identified. The notion of node centrality interference has also been introduced to deal with the central role of nodes in a biological network. It identifies which nodes are most affected by the removal of a particular node, by measuring the variation in their centrality values when that node is removed from the network. The application of node centrality interference to the human kino-phosphatome revealed that different proteins affect the centrality values of different nodes. Similarly, the notion of centrality robustness of a node is introduced. This notion reveals whether the central role of a node depends on other particular nodes in the network, or whether the node is "robust" in the sense that, even if we remove or add other nodes, its central role remains almost unchanged. The dynamic aspects of biological network analysis have been treated from an abstract interpretation point of view. Abstract interpretation is a powerful framework for the analysis of software and is excellent at deriving numerical properties of programs. For pathways, abstract interpretation has been adapted to the analysis of pathway simulations. The intervals domain and the constants domain have been successfully used to automatically extract information about reactant concentrations: the intervals domain determines the range of concentration of the proteins, and the constants domain detects whether a protein concentration becomes constant after a certain time. The other domain of analysis used is the congruences domain, which, applied to pathway simulations, can easily identify regular oscillating behaviour in reactant concentrations. The use of abstract interpretation makes it possible to execute thousands of simulations and to completely and automatically characterize the behaviour of the pathways. It can therefore also be used to solve the problem of parameter estimation, where missing parameters can be detected with a brute-force algorithm combined with the abstract interpretation analysis.
The abstract interpretation approach has been successfully applied to the mitotic oscillator pathway, characterizing the behaviour of the pathway depending on some reactants. To help analyze relations between reactants in the network, the notions of variable interference and variable abstract interference have been introduced and adapted to biological pathway simulations. They make it possible to find relations between properties of different reactants of the pathway. Using the abstract interference techniques we can say, for instance, which range of concentration of a protein can induce an oscillating behaviour of the pathway.
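A toy version of the centrality-interference idea described above, using networkx and betweenness centrality only (CentiScaPe computes many more centralities, and the graph here is a random stand-in for a kinase-phosphatase network):

```python
import networkx as nx

G = nx.barabasi_albert_graph(50, 2, seed=7)   # stand-in for a protein network

def centrality_interference(G, node):
    # Change in betweenness of every remaining node when `node` is removed.
    base = nx.betweenness_centrality(G)
    H = G.copy()
    H.remove_node(node)
    after = nx.betweenness_centrality(H)
    return {v: after[v] - base[v] for v in H.nodes}

delta = centrality_interference(G, 0)
most_affected = sorted(delta, key=lambda v: abs(delta[v]), reverse=True)[:5]
print(most_affected)   # nodes whose central role depends most on node 0
```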
12

Delmotte, Varinthira Duangudom. "Computational auditory saliency". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45888.

Abstract
The objective of this dissertation research is to identify sounds that grab a listener's attention; such sounds are considered salient. The focus here is on investigating the role of saliency in the auditory attentional process. In order to identify these salient sounds, we have developed a computational auditory saliency model inspired by our understanding of the human auditory system and auditory perception. By identifying salient sounds we can obtain a better understanding of how sounds are processed by the auditory system and, in particular, the key features contributing to sound salience. Additionally, studying the salience of different auditory stimuli can lead to improvements in the performance of current computational models in several different areas, by making use of the information obtained about what stands out perceptually to observers in a particular scene. Auditory saliency also helps to rapidly sort the information present in a complex auditory scene. Since our resources are finite, not all information can be processed equally; we must therefore be able to quickly determine the importance of different objects in a scene. Additionally, an immediate response or decision may be required, and in order to respond, the observer needs to know the key elements of the scene. The issue of saliency is closely related to many different areas, including scene analysis. The thesis provides a comprehensive look at auditory saliency. It explores the advantages and limitations of using auditory saliency models through different experiments and presents a general computational auditory saliency model that can be used for various applications.
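As a rough illustration of what an auditory saliency map computes, the sketch below applies a generic center-surround scheme to a spectrogram; this is a common textbook construction, not the dissertation's model, and the test signal is synthetic:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import spectrogram

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
x = 0.1 * np.random.default_rng(0).normal(size=t.size)   # background noise
x[fs:fs + 800] += np.sin(2 * np.pi * 2000 * t[:800])     # brief salient tone

f, tt, S = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
logS = np.log1p(S)

# Saliency as the positive part of a fine-scale minus coarse-scale contrast.
fine = gaussian_filter(logS, sigma=1)     # "center" scale
coarse = gaussian_filter(logS, sigma=8)   # "surround" scale
saliency = np.clip(fine - coarse, 0, None)

peak = np.unravel_index(np.argmax(saliency), saliency.shape)
print(f"most salient: {f[peak[0]]:.0f} Hz at {tt[peak[1]]:.2f} s")
```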
13

Rabiei, Nima. "Decomposition techniques for computational limit analysis". Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284217.

Abstract
Limit analysis is relevant in many practical engineering areas such as the design of mechanical structures or the analysis of soil mechanics. The theory of limit analysis assumes a rigid, perfectly-plastic material to model the collapse of a solid that is subjected to a static load distribution. Within this context, the problem of limit analysis is to consider a continuum subjected to a fixed force distribution consisting of both volume and surface loads; the objective is then to obtain the maximum multiple of this force distribution that causes the collapse of the body. This multiple is usually called the collapse multiplier, and it can be obtained by solving an infinite-dimensional nonlinear optimisation problem. Its computation thus requires two steps: the first is to discretise the corresponding analytical problem by introducing finite-dimensional spaces, and the second is to solve a nonlinear optimisation problem, which represents the major difficulty and challenge in the numerical solution process, since it may become very large and computationally expensive in three-dimensional problems. Recent techniques have allowed scientists to determine upper and lower bounds of the load factor under which the structure will collapse. Despite the attractiveness of these results, their application to practical examples is still hampered by the size of the resulting optimisation process. A remedy for this is the use of decomposition methods and the parallelisation of the corresponding optimisation problem. The aim of this work is to present a decomposition technique which can reduce the memory requirements and computational cost of this type of problem. For this purpose, we exploit an important feature of the underlying optimisation problem: the objective function contains one scalar variable. The main contributions of the thesis are rewriting the constraints of the problem as the intersection of appropriate sets, and proposing efficient algorithmic strategies to iteratively solve the decomposition algorithm.
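After discretisation and linearisation of the yield condition, the collapse-multiplier problem described above becomes a linear program in the stresses and the load factor. A toy instance (all matrices are random stand-ins for a real equilibrium operator) can be written with scipy:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_stress, n_eq = 6, 3
B = rng.normal(size=(n_eq, n_stress))    # toy equilibrium operator
f = rng.normal(size=n_eq)                # reference load distribution
sigma_y = 1.0                            # linearised yield: |sigma_i| <= sigma_y

# Variables z = (sigma_1..sigma_n, lam). Maximise lam -> minimise -lam.
c = np.zeros(n_stress + 1)
c[-1] = -1.0
A_eq = np.hstack([B, f[:, None]])        # equilibrium: B @ sigma + lam * f = 0
b_eq = np.zeros(n_eq)
bounds = [(-sigma_y, sigma_y)] * n_stress + [(0, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("collapse multiplier:", res.x[-1])
```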
14

Petracca, Massimo. "Computational multiscale analysis of masonry structures". Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/393942.

Abstract
Masonry is an ancient building material that has been used throughout history and is still used nowadays. Masonry constitutes the main building technique adopted in historical constructions, and a deep understanding of its behavior is of primary importance for the preservation of our cultural heritage. Despite its extensive usage, masonry has always been used following a trial-and-error approach and rules of thumb, due to a poor understanding of the complex mechanical behavior of such a composite material. Advanced numerical methods are therefore attractive tools to understand and predict the behavior of masonry up to and including its complete failure, allowing estimation of the residual strength and safety of structures. Several numerical methods have been proposed in recent years, based either on a full micro-modeling of masonry constituents or on phenomenological macro models. In between these two approaches, computational homogenization techniques have recently emerged as a promising tool joining their advantages. The problem is split into two scales: the structural scale is treated as an equivalent homogeneous medium, while the complex behavior of the heterogeneous micro-structure is taken into account by solving a micro-scale problem on a representative sample of the micro-structure. The aim of this research is the development of a computational multiscale homogenization technique for the analysis of masonry structures subjected to quasi-static in-plane and out-of-plane loadings. Classical Cauchy continuum theory is used at both scales, i.e. so-called first-order computational homogenization. Due to the brittle nature of masonry constituents, particular attention is given to the problem of strain localization. In this context, the present research proposes an extension of the fracture-energy-based regularization to the two-scale homogenization problem, allowing the use of first-order computational homogenization in problems involving strain localization. The method is first stated for the standard continuum case, and it is applied to the two-dimensional analysis of in-plane loaded shear walls made of periodic brick masonry. Then, the aforementioned method is extended to the case of shell structures for the analysis of out-of-plane loaded masonry walls. For this purpose, a novel homogenization technique based on thick shell theory is developed. For both the in-plane and out-of-plane loading conditions, the accuracy of the proposed method is validated against experimental evidence and micro-model analyses. The regularization properties are also assessed. The obtained results show how computational homogenization is an ideal tool for an accurate evaluation of the structural response of masonry structures, accounting for the complex behavior of its micro-structure.
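Not the thesis's two-scale finite element scheme, but the simplest homogenization fact it builds on: the effective stiffness of a layered brick/mortar composite depends on the loading direction, bracketed by the classical Voigt and Reuss averages (moduli and volume fraction below are illustrative):

```python
# Effective elastic modulus of a layered brick/mortar composite.
E_brick, E_mortar = 15e3, 3e3   # MPa, illustrative values
phi = 0.85                      # brick volume fraction

E_parallel = phi * E_brick + (1 - phi) * E_mortar         # Voigt (parallel to layers)
E_normal = 1.0 / (phi / E_brick + (1 - phi) / E_mortar)   # Reuss (normal to layers)
print(round(E_parallel), round(E_normal))                 # MPa; strongly anisotropic
```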
15

Yan, Rujiao [author]. "Computational Audiovisual Scene Analysis / Rujiao Yan". Bielefeld: Universitätsbibliothek Bielefeld, 2014. http://d-nb.info/1058945572/34.

16

Refig, Andre. "Computational electromagnetic analysis of periodic structures". Thesis, Imperial College London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520979.

17

Leung, Chi-Yin. "Computational methods for integral equation analysis". Thesis, Imperial College London, 1995. http://hdl.handle.net/10044/1/8139.

18

Fraser, Ross Macdonald. "Computational analysis of nucleosome positioning datasets". Thesis, University of Edinburgh, 2006. http://hdl.handle.net/1842/29110.

Abstract
Monomer extension (ME) is an established in vitro experimental technique which maps the positions adopted by reconstituted core histone octamers on a defined DNA sequence. It provides quantitative positioning information, at high resolution, over long continuous stretches of DNA sequence. This technique has been employed to map several genes: globin genes (8 kbp), the beta-lactoglobulin gene (10 kbp) and various imprinting genes (4 kbp). This study explores and analyses this unique dataset, utilising computational and stochastic techniques, to gain insight into the potential influence of nucleosome positioning on the structure and function of chromatin. The first section of this thesis expands upon prior analyses, explores general features of the dataset using common bioinformatics tools, and attempts to relate the quantitative positioning information from ME to data from other commonly used competitive reconstitution protocols. Finally, evidence of a correlation between the in vitro ME dataset and in vivo nucleosome positions for the beta-lactoglobulin gene region is presented. The second section presents the development of a novel method for the analysis of ME maps using Monte Carlo simulation methods. The goal was to use the ME datasets to simulate a higher-order chromatin fibre, taking advantage of the long-range and quantitative nature of the ME datasets. The Monte Carlo simulations have allowed new insights to be gleaned from the datasets. Analysis of the beta-lactoglobulin positioning map indicates the potential for discrete disruption of nucleosomal organisation, at specific physiological nucleosome densities, over regions found to have unusual chromatin structure in vivo. This suggests a correspondence between the quantitative histone octamer positioning information in vitro and the positioning of nucleosomes in vivo. Taken together, these studies lend weight to the hypothesis that nucleosome positioning information encoded within DNA plays a fundamental role in directing chromatin structure in vivo.
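A schematic of the Monte Carlo idea described above: nucleosomes occupy 147 bp on a one-dimensional DNA lattice, cannot overlap, and are inserted or removed with a Metropolis-style rule biased by per-position scores. The scores, density parameter and move scheme below are illustrative stand-ins for the thesis's ME-derived energies:

```python
import numpy as np

rng = np.random.default_rng(0)
L, W = 10000, 147                     # DNA length (bp), nucleosome footprint
affinity = rng.normal(size=L - W)     # stand-in for ME positioning scores
mu = 0.5                              # chemical potential sets the density

positions = set()                     # left edges of placed nucleosomes

def overlaps(p):
    return any(abs(p - q) < W for q in positions)

for step in range(50000):
    if positions and rng.random() < 0.5:          # attempt a removal
        p = rng.choice(sorted(positions))
        if rng.random() < np.exp(-(affinity[p] + mu)):
            positions.remove(p)
    else:                                          # attempt an insertion
        p = rng.integers(0, L - W)
        if not overlaps(p) and rng.random() < np.exp(affinity[p] + mu):
            positions.add(p)

print(len(positions), "nucleosomes placed")
```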
19

Wang, Jianrong. "Computational algorithm development for epigenomic analysis". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/48984.

Abstract
Multiple computational algorithms were developed for analyzing ChIP-seq datasets of histone modifications. For basic ChIP-seq data processing, the problems of ambiguous short sequence read mapping and broad peak calling of diffuse ChIP-seq signals were solved by novel statistical methods. Their performance was systematically evaluated compared with existing approaches. The potential utility of finding meaningful biological information was demonstrated by the applications on real datasets. For biological question driven data mining, several important topics were selected for algorithm developments, including hypothesis-driven insulator prediction, unbiased chromatin boundary element discovery and combinatorial histone modification signature inference. The integrative computational pipeline for insulator prediction not only produced a list of putative insulators but also recovered specific associated chromatin and functional features. Selected predictions have been experimentally validated. The unbiased chromatin boundary element prediction algorithm was feature-free and had the capability to discover novel types of boundary elements. The predictions found a set of chromatin features and provided the first report of tRNA-derived boundary elements in the human genome. The combinatorial chromatin signature algorithm employed chromatin profile alignments for unsupervised inferences of histone modification patterns. The signatures were associated with various regulatory elements and functional activities. Both the computational advantages and the biological discoveries were discussed.
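A bare-bones version of the broad peak calling mentioned above: bin the reads, test each bin against a genome-wide Poisson background, and merge runs of significant bins. Real callers (and the thesis's method) use local backgrounds and multiple-testing control; the counts here are simulated:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
bins = rng.poisson(5, size=2000)            # background read counts per bin
bins[800:900] += rng.poisson(4, size=100)   # a diffuse enriched domain

lam = bins.mean()                           # genome-wide background rate
pvals = poisson.sf(bins - 1, lam)           # P(X >= observed count)
sig = pvals < 1e-3

# Merge consecutive significant bins into candidate broad peaks.
peaks, start = [], None
for i, s in enumerate(sig):
    if s and start is None:
        start = i
    elif not s and start is not None:
        peaks.append((start, i))
        start = None
if start is not None:
    peaks.append((start, len(sig)))

print([p for p in peaks if p[1] - p[0] >= 5])   # keep only broad runs
```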
20

Nikishkov, Yuri G. "Computational stability analysis of dynamical systems". Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/12149.

21

Maranha, J. R. M. "Analysis of embankment dams : computational aspects". Thesis, Swansea University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.638000.

Abstract
Aspects of the numerical simulation of the static behaviour of embankment dams were investigated in this thesis. Different stress update algorithms were compared, and the implicit backward Euler scheme was found to be the most accurate and robust. Particular emphasis was given to the use of a tangent modulus matrix consistent with the backward Euler scheme in the context of the Newton-Raphson method of solution of the nonlinear finite element equations. This combination was found to provide a robust algorithm. A closed-form solution and a numerical procedure to evaluate the consistent tangent modulus matrix were described. The implications of using 2D and 3D formulations of elasto-plastic models were analysed. In particular, 3D versions of the Tresca, Mohr-Coulomb and critical state models were compared to their respective 2D versions. A robust stress update algorithm for the Mohr-Coulomb model formulated in principal stresses was presented. The comparisons were made by means of numerical examples that included a rigid smooth strip footing, a purely cohesive slope and the construction of a dam. A 3D analysis of a hypothetical dam located in a narrow V-shaped valley was made in parallel with a plane strain analysis of the same dam, and the arching effect was evidenced. A framework for incorporating plastic anisotropic behaviour into elasto-plastic constitutive laws for soils, based on a second-order tensor, was described. A rotational anisotropy extension of the 2D version of the Tresca model and a shear anisotropy extension of the 2D version of the critical state model were presented, and aspects of their performance were illustrated by means of numerical examples. An investigation of an algorithm to model collapse settlement of soils having a general constitutive law was conducted. A version of the algorithm particularly suited to the displacement-based finite element method was presented. A back analysis of an actual dam (Beliche dam), incorporating an elasto-plastic model (c.s.m.) and collapse settlement, was performed.
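The backward Euler stress update the abstract singles out reduces, in the simplest one-dimensional case with linear isotropic hardening, to an elastic trial step followed by a closed-form plastic correction; the sketch below (illustrative material constants, not the thesis's multi-axial Mohr-Coulomb or critical state models) shows that structure:

```python
import numpy as np

E, H, sigma_y0 = 200e3, 10e3, 250.0   # Young's modulus, hardening modulus, yield (MPa)

def return_map(eps_p, alpha, deps, sigma_prev):
    # One strain increment: elastic trial, then backward Euler plastic correction.
    sigma_tr = sigma_prev + E * deps
    f_tr = abs(sigma_tr) - (sigma_y0 + H * alpha)
    if f_tr <= 0:
        return sigma_tr, eps_p, alpha              # purely elastic step
    dgamma = f_tr / (E + H)                        # consistency in closed form
    sigma = sigma_tr - E * dgamma * np.sign(sigma_tr)
    return sigma, eps_p + dgamma * np.sign(sigma_tr), alpha + dgamma

sigma, eps_p, alpha = 0.0, 0.0, 0.0
for deps in np.full(20, 2e-4):                     # monotonic stretching
    sigma, eps_p, alpha = return_map(eps_p, alpha, deps, sigma)
print(round(sigma, 1), "MPa")                      # capped just above sigma_y0
```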
22

Waterhouse, Robert Michael. "Computational comparative analysis of insect genomes". Thesis, Imperial College London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.512005.

23

Meade, Andrew Paul. "The computational analysis of DNA sequences". Thesis, University of Reading, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412195.

24

Taroni, Chiara. "Computational analysis of protein-carbohydrate interactions". Thesis, University College London (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300932.

25

Panteli, Maria. "Computational analysis of world music corpora". Thesis, Queen Mary, University of London, 2018. http://qmro.qmul.ac.uk/xmlui/handle/123456789/36696.

Abstract
The comparison of world music cultures has been considered in musicological research since the end of the 19th century. Traditional methods from the field of comparative musicology typically involve the process of manual music annotation. While this provides expert knowledge, the manual input is time-consuming and limits the potential for large-scale research. This thesis considers computational methods for the analysis and comparison of world music cultures. In particular, Music Information Retrieval (MIR) tools are developed for processing sound recordings, and data mining methods are considered to study similarity relationships in world music corpora. MIR tools have been widely used for the study of (mainly) Western music. The first part of this thesis focuses on assessing the suitability of audio descriptors for the study of similarity in world music corpora. An evaluation strategy is designed to capture challenges in the automatic processing of world music recordings, and different state-of-the-art descriptors are assessed. Following this evaluation, three approaches to audio feature extraction are considered, each addressing a different research question. First, a study of singing style similarity is presented. Singing is one of the most common forms of musical expression and it has played an important role in the oral transmission of world music. Hand-designed pitch descriptors are used to model aspects of the singing voice, and clustering methods reveal singing style similarities in world music. Second, a study on music dissimilarity is performed. While musical exchange is evident in the history of world music, it is possible that some music cultures have resisted external musical influence. Low-level audio features are combined with machine learning methods to find music examples that stand out in a world music corpus, and geographical patterns are examined. The last study models music similarity using descriptors learned automatically with deep neural networks. It focuses on identifying music examples that appear to be similar in their audio content but share no (obvious) geographical or cultural links in their metadata. Unexpected similarities modelled in this way uncover possible hidden links between world music cultures. This research investigates whether automatic computational analysis can uncover meaningful similarities between recordings of world music. Applications derive musicological insights from one of the largest world music corpora studied so far. Computational analysis as proposed in this thesis advances the state of the art in the study of world music and expands our knowledge and understanding of musical exchange in the world.
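A minimal corpus-scale sketch of the descriptor-plus-clustering workflow: summarise each recording by mean MFCCs and cluster the summaries. The recordings below are synthetic tones standing in for field recordings, and the thesis's descriptors (pitch contours, learned features) are far richer:

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

sr = 22050
rng = np.random.default_rng(0)

def fake_recording(f0):
    # Synthetic stand-in for a field recording: a harmonic tone plus noise.
    t = np.arange(0, 3.0, 1 / sr)
    return np.sin(2 * np.pi * f0 * t) + 0.1 * rng.normal(size=t.size)

corpus = [fake_recording(f0) for f0 in (110, 112, 220, 225, 440, 445)]

# One fixed-length descriptor per recording: the mean MFCC vector.
feats = np.stack([
    librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1) for y in corpus
])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
print(labels)   # spectrally similar recordings fall in the same cluster
```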
26

Ellis, Daniel Patrick Whittlesey. "Prediction-driven computational auditory scene analysis". Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/11006.

Abstract
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 173-180).
by Daniel P.W. Ellis.
Ph.D.
27

Hinuma, Yoyo. "Computational structure analysis of multicomponent oxides". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44208.

Abstract
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2008.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references.
First-principles density functional theory (DFT) energy calculations combined with the cluster expansion and Monte Carlo techniques are used to understand the cation ordering patterns of multicomponent oxides. Specifically, the lithium-ion battery cathode material LiNi0.5Mn0.5O2 and the thermoelectric material P2-NaxCoO2 (0.5 ≤ x ≤ 1) are investigated in the course of this research. It is found that at low temperature the thermodynamically stable state of LiNi0.5Mn0.5O2 has almost no Li/Ni disorder between the Li-rich and transition-metal-rich (TM) layers, making it most suitable for battery applications. Heating the material above ~600°C causes an irreversible transformation, which yields a phase with 10-12% Li/Ni disorder and partial disorder of cations in the TM layer. Phase diagrams for the NaxCoO2 system were derived from the results of calculations making use of both the Generalized Gradient Approximation (GGA) to DFT and GGA with Hubbard U correction (GGA+U). This enabled us to study how hole localization, or delocalization, on Co affects the ground states and order-disorder transition temperatures of the system. Comparison of ground states, the c lattice parameter and the Na1/Na2 ratio with experimental observations suggests that the GGA results, in which the holes are delocalized, match the experimental results better for 0.5 ≤ x ≤ 0.8. We also present several methodological improvements to the cluster expansion: an approach to limit phase space, and methods to deal with multicomponent, charge-balance-constrained open systems while including both weak, long-range electrostatic interactions and strong, short-range interactions in a single cluster expansion.
by Yoyo Hinuma.
Ph.D.
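A toy of the cluster expansion plus Monte Carlo machinery used above: an Ising-like nearest-neighbour expansion for Na/vacancy ordering on a small periodic lattice, sampled with the Metropolis rule. The single pair interaction and temperature are illustrative; real cluster expansions fit many effective cluster interactions to DFT energies:

```python
import numpy as np

rng = np.random.default_rng(0)
N, J, kT = 20, 0.05, 0.025                 # lattice size, pair interaction (eV), kT (eV)
spin = rng.choice([-1, 1], size=(N, N))    # +1 Na occupied, -1 vacancy

def local_energy(s, i, j):
    # Nearest-neighbour pair terms touching site (i, j), periodic boundaries.
    nn = s[(i+1) % N, j] + s[(i-1) % N, j] + s[i, (j+1) % N] + s[i, (j-1) % N]
    return J * s[i, j] * nn

for step in range(100000):
    i, j = rng.integers(0, N, size=2)
    dE = -2 * local_energy(spin, i, j)     # energy change for flipping the site
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        spin[i, j] *= -1                   # Metropolis accept

print("Na fraction:", (spin == 1).mean())
```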
28

Khan, Zeeshan Rahman. "A computational linguistic analysis of Bangla". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/35460.

Abstract
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 77-78).
by Zeeshan Rahman Khan.
M.Eng.
29

Kao, Chung-Yao, 1972-. "Efficient computational methods for robustness analysis". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/29258.

Abstract
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2002.
Includes bibliographical references (p. 209-215).
Issues of robust stability and performance have dominated the field of systems and control theory because of their practical importance. The recently developed Integral Quadratic Constraint (IQC) based analysis method provides a framework for systematically checking robustness properties of large complex dynamical systems. In IQC analysis, the system to be analyzed is represented as a nominal, Linear Time-Invariant (LTI) subsystem interconnected with a perturbation term. The perturbation is characterized in terms of IQCs. The robustness condition is expressed as a feasibility problem which can be solved using interior-point algorithms. Although the systems to be analyzed have nominal LTI subsystems in many applications, this is not always the case. A typical example is the problem of robustness analysis of the oscillatory behavior of nonlinear systems, where the nominal subsystem is generally Linear Periodically Time-Varying (LPTV). The objective of the first part of this thesis is to develop new techniques for robustness analysis of LPTV systems. Two different approaches are proposed. In the first approach, the harmonic terms of the LPTV nominal model are extracted, and the system is transformed into the standard setup for robustness analysis. Robustness analysis is then performed on the transformed system using the IQC analysis method. In the second approach, we allow the nominal system to remain periodic, and we extend the IQC analysis method to include the case where the nominal system is periodically time-varying. The robustness condition of this new approach is posed as a semi-infinite convex feasibility problem, which requires a new solution method; a computational algorithm is developed for checking the robustness condition. In the second part of the thesis, we consider the optimization problems arising from IQC analysis. The conventional way of solving these problems is to transform them into semi-definite programs, which are then solved using interior-point algorithms. The disadvantage of this approach is that the transformation introduces additional decision variables. In many situations, these auxiliary decision variables become the main computational burden, and the conventional method then becomes very inefficient and time consuming. A number of specialized algorithms are therefore developed to solve these problems more efficiently. The crucial advantage in this development is that it avoids the equivalent transformation. The results of numerical experiments confirm that these algorithms can solve a problem arising from IQC analysis much faster than the conventional approach does.
by Chung-Yao Kao.
Sc.D.
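IQC analysis ultimately produces matrix-inequality feasibility problems. The simplest member of that family, the Lyapunov inequality A'P + PA < 0 with P > 0, can even be solved in closed form for a given stable A, which is enough to illustrate the kind of certificate an SDP solver searches for (the thesis's IQC problems add frequency-dependent multipliers and are far larger):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])   # a stable nominal LTI system

# Solve A'P + PA = -I for P, then verify P is positive definite.
P = solve_continuous_lyapunov(A.T, -np.eye(2))
print(np.linalg.eigvalsh(P))   # all positive => a stability certificate
```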
30

Stone, J. V. "Shape from texture : a computational analysis". Thesis, University of Sussex, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240346.

31

Ballweg, Richard A. III. "Computational Analysis of Heterogeneous Cellular Responses". University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin159216973756476.

32

Adhikari, Param C. "Computational Analysis of Mixing in Microchannels". Youngstown State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1370799440.

33

Yancey, Madison E. "Computational Simulation and Analysis of Neuroplasticity". Wright State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=wright1622582138544632.

34

Teixeira, Bellina Ribau. "Computational methods for microarray data analysis". Master's thesis, Universidade de Aveiro, 2009. http://hdl.handle.net/10773/3989.

Abstract
Master's in Computer and Telematics Engineering
Deoxyribonucleic acid (DNA) microarrays are an important technology for the analysis of gene expression. They allow measuring the expression level of genes across several samples in order to, for example, identify genes whose expression varies with the administration of a certain drug. A microarray slide measures the expression level of thousands of genes in a sample at the same time, and an experiment can include various slides, leading to a lot of data to be processed and analyzed with the aid of computerized means. This dissertation includes a review of methods and software tools used in the analysis of microarray experimental data. It then describes the development of a new data analysis module that aims, using methods for identifying differentially expressed genes, to identify genes that are differentially expressed between two or more experimental groups. Finally, the resulting work is presented, describing its graphical interface and structural design.
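A minimal sketch of the module's core operation under the usual assumptions (a genes-by-samples matrix, two groups): a per-gene Welch t-test with Benjamini-Hochberg control; the expression values are simulated:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_genes = 1000
control = rng.normal(0, 1, size=(n_genes, 6))
treated = rng.normal(0, 1, size=(n_genes, 6))
treated[:50] += 2.0                         # 50 truly up-regulated genes

# Per-gene Welch t-test across the sample axis.
t, p = ttest_ind(treated, control, axis=1, equal_var=False)

# Benjamini-Hochberg step-up at FDR 5%.
order = np.argsort(p)
ranked = p[order] * n_genes / (np.arange(n_genes) + 1)
passed = ranked <= 0.05
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
print("differentially expressed genes:", k)   # indices: order[:k]
```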
35

Gu, Maoshi. "Computational weld analysis for long welds". Dissertation, Mechanical Engineering, Carleton University, Ottawa, 1992.

36

Hache, Hendrik. "Computational analysis of gene regulatory networks". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2009. http://dx.doi.org/10.18452/16043.

Abstract
Gene regulation is accomplished mainly by the interplay of multiple transcription factors. This gives rise to highly complex, cell-type-specific, interwoven structures of regulatory interactions, summarized in gene regulatory networks. In this thesis, I address two approaches to the computational analysis of such networks, forward modeling and reverse engineering. The first part of this thesis is about the Web application GEne Network GEnerator (GeNGe), which I have developed as a framework for the automatic generation of gene regulatory network models. I have developed a novel algorithm for the generation of network structures featuring important biological properties. In order to model the transcriptional kinetics, I have modified an existing non-linear kinetic. This new kinetic is particularly useful for the computational set-up of complex gene regulatory models. GeNGe also supports the generation of various in silico experiments for predicting effects of perturbations as theoretical counterparts of biological experiments. Moreover, GeNGe especially facilitates the collection of benchmark data for evaluating reverse engineering methods. The second part of my thesis is about the development of GNRevealer, a method for reverse engineering of gene regulatory networks from temporal data. This computational approach uses a neural network together with a sophisticated learning algorithm (backpropagation through time). Specialized features developed in the course of my thesis include essential steps in reverse engineering processes such as the establishment of a learning workflow, discretization, and subsequent validation. Additionally, I have conducted a large comparative study using six different reverse engineering applications based on different mathematical backgrounds. The results of the comparative study highlight GNRevealer as the best-performing method among those under study.
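A compact stand-in for the GNRevealer idea: fit a small recurrent model to expression time series by unrolling it over all time steps and backpropagating through time (here via PyTorch autograd), then read putative interactions off the learned weight matrix. Data, network size and training settings are illustrative:

```python
import torch

torch.manual_seed(0)
n_genes, T = 5, 30

# Synthetic expression time series from a known interaction matrix.
W_true = torch.zeros(n_genes, n_genes)
W_true[0, 1], W_true[2, 0], W_true[3, 4] = 2.0, -2.0, 1.5
x = [torch.rand(n_genes)]
for t in range(T - 1):
    x.append(torch.sigmoid(x[-1] @ W_true.T))
x = torch.stack(x)

# Free-running unroll: gradients flow through the whole recurrence (BPTT).
W = torch.zeros(n_genes, n_genes, requires_grad=True)
opt = torch.optim.Adam([W], lr=0.05)
for epoch in range(500):
    opt.zero_grad()
    states = [x[0]]
    for t in range(T - 1):
        states.append(torch.sigmoid(states[-1] @ W.T))
    loss = ((torch.stack(states) - x) ** 2).mean()
    loss.backward()
    opt.step()

print(loss.item())    # fit quality
print(W.detach())     # large |entries| suggest putative regulations
```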
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Tekiroglu, Serra Sinem. "Computational Sensory Analysis of Creative Language". Doctoral thesis, Università degli studi di Trento, 2018. https://hdl.handle.net/11572/368184.

Texto completo
Resumen
Sensory information in language enables us to share perceptual experiences and create a common understanding of the world around us. Especially in creative language, which reveals itself in many forms such as figurative, persuasive, or effective language, sensory factors amplify semantic meaning through their expressive power. Although studies focusing on the perceptual aspects of language have been thriving over the last decade, automatic creative language analysis, with its vivid, non-literal, and semantically complex material, still suffers from a lack of perceptual grounding. In this thesis, we propose exploiting the association between human senses and words as an external device to improve computational linguistic models of creative language. First, we show that the sensory information embedded in word meanings can be obtained by a distributional strategy over language. Second, we show that properly encoded sensory cues can enhance the automatic identification of figurative language. Finally, we argue that exploiting the sensory information residing in the linguistic modality, in combination with information coming from the perceptual modalities, reinforces the computational assessment of multimodal creativity. We present a large-scale sensory lexicon generation approach, followed by its use in two main computational creativity experiments that confirm our arguments: 1) phrase-level and word-level metaphor identification in existing metaphor corpora; 2) creativity appreciation assessment in multimodal advertising prints incorporating the linguistic and visual modalities. The findings of these experiments show that sensory information is an invaluable indicator of the creative aspect of language and makes a significant contribution to state-of-the-art creative language analysis systems.
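The abstract does not detail how the sensory lexicon is generated distributionally. One common strategy, sketched below purely as an illustration (the seed words, embedding source, and scoring scheme are all assumptions, not the thesis' method), scores a word against each of the five senses by its embedding similarity to a small set of seed words:

```python
# Illustrative sketch: score a word's association with each sense by cosine
# similarity between its embedding and the centroid of seed-word embeddings.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

SEEDS = {                          # hypothetical seed lexicon
    "sight": ["bright", "dark", "colorful"],
    "sound": ["loud", "quiet", "melodic"],
    "touch": ["rough", "smooth", "soft"],
    "taste": ["sweet", "bitter", "salty"],
    "smell": ["fragrant", "musty", "pungent"],
}

def sensory_profile(word, embed):
    """embed: dict mapping words to numpy vectors (e.g. word2vec/GloVe)."""
    w = embed[word]
    return {sense: cosine(w, np.mean([embed[s] for s in seeds], axis=0))
            for sense, seeds in SEEDS.items()}

# usage (hypothetical): sensory_profile("sparkling", embed) -> per-sense scores
```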
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Tekiroglu, Serra Sinem. "Computational Sensory Analysis of Creative Language". Doctoral thesis, University of Trento, 2018. http://eprints-phd.biblio.unitn.it/2950/1/PhD_Thesis_polished.pdf.

Texto completo
Resumen
Sensory information in language enables us to share perceptual experiences and create a common understanding of the world around us. Especially in creative language, which reveals itself in many forms such as figurative, persuasive, or effective language, sensory factors amplify semantic meaning through their expressive power. Although studies focusing on the perceptual aspects of language have been thriving over the last decade, automatic creative language analysis, with its vivid, non-literal, and semantically complex material, still suffers from a lack of perceptual grounding. In this thesis, we propose exploiting the association between human senses and words as an external device to improve computational linguistic models of creative language. First, we show that the sensory information embedded in word meanings can be obtained by a distributional strategy over language. Second, we show that properly encoded sensory cues can enhance the automatic identification of figurative language. Finally, we argue that exploiting the sensory information residing in the linguistic modality, in combination with information coming from the perceptual modalities, reinforces the computational assessment of multimodal creativity. We present a large-scale sensory lexicon generation approach, followed by its use in two main computational creativity experiments that confirm our arguments: 1) phrase-level and word-level metaphor identification in existing metaphor corpora; 2) creativity appreciation assessment in multimodal advertising prints incorporating the linguistic and visual modalities. The findings of these experiments show that sensory information is an invaluable indicator of the creative aspect of language and makes a significant contribution to state-of-the-art creative language analysis systems.
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Busetto, Nicolo' <1988>. "A computational analysis of Shakespeare's Sonnets". Master's Degree Thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/11652.

Texto completo
Resumen
The paucity of clear interpretations, narrative frameworks, and common constants has led us to seek an innovative approach to Shakespeare's Sonnets, in order to avoid producing the umpteenth compilation study. The innovation lies in choosing irony as a specific semantic category through which to reach the universal. To do this, the sonnets were annotated using the psychological-linguistic theory known as Appraisal, and, with the aid of artificial intelligence, a new inquiry was carried out that allows the paramount questions regarding this work to be answered. The results led to the conclusion that some sonnets could have been written towards the end of the author's life, and not in the last decade of the 16th century. This emerges from an in-depth evaluation of the author's position, since his intentions were closer to those of his last poetic production, the romances. Moreover, the juxtaposition of human interpretations and artificial intelligence highlighted how the use of irony in the sonnets is a kind of sarcasm with educational ends, so as to convey teaching to posterity.
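The abstract juxtaposes human and machine judgments of irony. A standard way to quantify such agreement (offered here only as an illustration, not claimed to be the thesis' procedure) is Cohen's kappa over the two sets of labels:

```python
# Hedged illustration: agreement between a human annotator's irony labels
# and a classifier's predictions; the labels below are invented placeholders.
from sklearn.metrics import cohen_kappa_score

human   = ["ironic", "plain", "ironic", "plain", "ironic"]
machine = ["ironic", "plain", "plain",  "plain", "ironic"]
print(cohen_kappa_score(human, machine))  # 1.0 would mean perfect agreement
```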
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Frankenstein, William. "Computational Models of Nuclear Proliferation". Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/782.

Texto completo
Resumen
This thesis utilizes social influence theory and computational tools to examine the disparate impact of positive and negative ties on nuclear weapons proliferation. The thesis is broadly divided into two sections: a simulation section, which focuses on government stakeholders, and a large-scale data analysis section, which focuses on public and domestic stakeholders. The simulation section demonstrates that the nonproliferation norm is an emergent behavior of political alliance and hostility networks, and that alliances play a role in present-day nuclear proliferation. The model is robust and captures second-order effects of extended hostility and alliance relations. The large-scale data analysis section demonstrates the role that context plays in sentiment evaluation and highlights how Twitter collection can provide useful input to policy processes. It first presents the results of an on-campus study showing that context plays a role in sentiment assessment. Then, in an analysis of a Twitter dataset of over 7.5 million messages, it assesses the role of noise and biases in online data collection. In a deep dive into the Iranian nuclear agreement, we demonstrate that the Middle East is not facing a nuclear arms race and show that there is a structural hole in the online discussion surrounding nuclear proliferation. By combining both approaches, policy analysts gain a complete and generalizable set of computational tools to assess and analyze disparate stakeholder roles in nuclear proliferation.
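The dissertation's simulation is described only as operating on alliance and hostility networks. The toy sketch below (every number, the update rule, and the "stance" variable are assumptions for illustration, not the dissertation's model) shows the general social-influence idea of state positions co-evolving on a signed network:

```python
# Toy sketch: states update a proliferation "stance" under social influence;
# alliance ties (+1) pull a state toward its allies, hostility ties (-1)
# push it away from adversaries.
import numpy as np

rng = np.random.default_rng(1)
n = 8                                            # number of states
A = rng.choice([-1, 0, 1], size=(n, n), p=[0.2, 0.6, 0.2])
np.fill_diagonal(A, 0)                           # signed relation matrix
stance = rng.random(n)                           # 0 = renounce, 1 = pursue

for step in range(100):
    for i in range(n):
        nbrs = np.nonzero(A[i])[0]
        if nbrs.size:
            pull = np.mean(A[i, nbrs] * (stance[nbrs] - stance[i]))
            stance[i] = np.clip(stance[i] + 0.1 * pull, 0.0, 1.0)
```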
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Demir, H. Ozgur. "Computational Fluid Dynamics Analysis Of Store Separation". Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605294/index.pdf.

Texto completo
Resumen
In this thesis, store separation from two different configurations is solved using computational methods. Two different commercially available CFD codes are used for the present computations: CFD-FASTRAN, an implicit Euler solver, and USAERO, an unsteady panel-method solver coupled with an integral boundary-layer solution procedure. The computational trajectory results are validated against the available experimental data of a generic wing-pylon-store configuration at Mach 0.95. Major trends of the separation are captured. The same configuration is used to compare the unsteady panel method with the Euler solution at Mach 0.3 and 0.6. Major trends are similar, although some differences in lateral and longitudinal displacements are observed. Trajectories of a fuel tank separating from an F-16 fighter aircraft are computed for the wing-alone and full-aircraft configurations at Mach 0.3 using only the unsteady panel code. The results indicate that the effect of the fuselage is to decrease the drag and to increase the side forces acting on the separating fuel tank. It is also observed that the yawing and rolling directions of the separating fuel tank are reversed when it separates from the full aircraft configuration compared to separation from the wing-alone configuration.
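Both solvers ultimately supply aerodynamic loads to a trajectory integration. Schematically, and only as a sketch (the flow solver is stubbed out and every constant is invented), one such point-mass integration might look like:

```python
# Schematic sketch (not the thesis codes): integrate a store's trajectory,
# with aerodynamic loads supplied at each step by a flow solver (stubbed).
import numpy as np

def aero_force(pos, vel):
    """Placeholder for CFD / panel-method loads on the store."""
    return -0.5 * vel * np.linalg.norm(vel)      # crude drag-like term

m, g, dt = 500.0, 9.81, 1e-3                     # mass [kg], gravity, step [s]
pos = np.zeros(3)
vel = np.array([200.0, 0.0, 0.0])                # carriage velocity [m/s]
traj = []
for _ in range(2000):                            # 2 s of separation
    F = aero_force(pos, vel) + np.array([0.0, 0.0, -m * g])
    vel = vel + dt * F / m                       # explicit Euler update
    pos = pos + dt * vel
    traj.append(pos.copy())
```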
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Basaran, Mustafa Bulent. "Computational Analysis Of Advanced Composite Armor Systems". Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/3/12608858/index.pdf.

Texto completo
Resumen
Achieving lightweight armor design has become an important engineering challenge in the last three decades. As weapons become highly sophisticated, so does the ammunition, and potential targets have to be well protected against such threats. In order to provide mobility, light and effective armor protection materials should be used. In this thesis, a silicon carbide armor backed by Kevlar™ composite and impacted orthogonally by a 7.62 mm armor-piercing (AP) projectile at an initial velocity of 850 m/s is analyzed numerically using the AUTODYN hydrocode. As a first step, the ceramic material behavior under impact conditions is validated numerically by comparing the simulation result with a test result obtained from the literature. Then, different numerical simulations are performed by changing the backing material thickness (2, 4, 6, and 8 mm) while the thickness of the ceramic is held constant at 8 mm. From these simulations, the optimum ceramic/composite thickness ratio is sought. The results showed that for backing thickness values of 4, 6, and 8 mm, the projectile could not perforate the armor system. In contrast, for the backing thickness of 2 mm the projectile could penetrate and perforate the armor system, retaining some residual velocity. From these results, it is inferred that the optimum ceramic/composite thickness ratio is about 2 for the silicon carbide and Kevlar configuration.
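The actual study requires a hydrocode such as AUTODYN. As a purely illustrative stand-in, the crude energy-balance toy below shows the shape of the thickness sweep; the projectile mass is approximate and the per-millimeter absorption constants are invented to reproduce the reported trend, not physically calibrated:

```python
# Toy illustration of the thickness sweep (NOT a hydrocode): a crude energy
# balance decides perforation; all material constants are arbitrary.
import math

def residual_velocity(v0, ceramic_mm, backing_mm,
                      m_proj=0.0097,       # ~7.62 mm AP core mass [kg]
                      e_ceramic=180.0,     # toy absorption [J/mm]
                      e_backing=600.0):    # toy absorption [J/mm]
    e_in = 0.5 * m_proj * v0 ** 2
    e_abs = ceramic_mm * e_ceramic + backing_mm * e_backing
    return math.sqrt(max(0.0, 2 * (e_in - e_abs) / m_proj))

for t_back in (2, 4, 6, 8):
    vr = residual_velocity(850.0, 8.0, t_back)
    status = f"residual {vr:.0f} m/s" if vr > 0 else "stopped"
    print(f"backing {t_back} mm -> {status}")
```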
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Unal, Kutlu Ozge. "Computational 3D Fracture Analysis in Axisymmetric Media". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609872/index.pdf.

Texto completo
Resumen
In this study, finite element modeling of three-dimensional elliptic and semi-elliptic cracks in a hollow cylinder is considered. The three-dimensional crack and cylinder are modeled using the finite element analysis program ANSYS. The main objectives of this study are as follows. First, ANSYS Parametric Design Language (APDL) codes are developed to facilitate the modeling of different types of cracks in cylinders. Second, using these codes, the effects of several problem parameters, such as the crack location, the cylinder's radius-to-thickness ratio (R/t), the crack geometry ratio (a/c), and the crack minor axis to cylinder thickness ratio (a/t), on the stress intensity factors of surface and internal cracks are examined. Mechanical and thermal loading cases are considered. The Displacement Correlation Technique (DCT) is used to obtain the stress intensity factors.
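For reference, the textbook form of the displacement correlation relation for the mode-I stress intensity factor is given below; the thesis' exact variant (for example, with quarter-point elements) may differ:

```latex
% Mode-I stress intensity factor from crack-face opening displacements:
K_I = \frac{\mu}{\kappa + 1}\,\sqrt{\frac{2\pi}{r}}\;\Delta u_y ,
\qquad \kappa = 3 - 4\nu \quad \text{(plane strain)}
```

Here Δu_y is the relative opening of the crack faces a small distance r behind the crack front, μ is the shear modulus, and ν is Poisson's ratio.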
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Wareham, Harold Todd. "Systematic parameterized complexity analysis in computational phonology". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ37368.pdf.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Shao, Yang. "Sequential organization in computational auditory scene analysis". Columbus, Ohio: Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1190127412.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Rousseau, Mathieu. "Computational modeling and analysis of chromatin structure". Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116941.

Texto completo
Resumen
The organization of DNA in the nucleus of a cell has long been known to play an important role in processes such as DNA replication and repair, and the regulation of gene expression. Recent advances in microarray and high-throughput sequencing technologies have enabled the creation of novel techniques for measuring certain aspects of the three-dimensional conformation of chromatin in vivo. The data generated by these methods contain both structural information and noise from the experimental procedures. Methods for modeling and analyzing these data to infer three-dimensional chromatin structure will constitute invaluable tools in the discovery of the mechanism by which chromatin structure is mediated. The overall objective of my thesis is to develop robust three-dimensional computational models of DNA and to analyze these data to gain biological insight into the role of chromatin structure in cellular processes. This thesis presents three main results. First, a novel computational modeling and analysis approach for the inference of three-dimensional structure from chromatin conformation capture carbon copy (5C) and Hi-C data: our method, named MCMC5C, is based on Markov chain Monte Carlo sampling and can generate representative ensembles of three-dimensional models from noisy experimental data. Second, our investigation of the relationship between chromatin structure and gene expression during cellular differentiation shows that chromatin architecture is a dynamic structure which adopts an open conformation for actively transcribed genes and a condensed conformation for repressed genes. Third, we developed a support vector machine classifier from 5C data and demonstrate a proof of concept that chromatin conformation signatures can be used to discriminate between human acute lymphoid and myeloid leukemias.
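MCMC5C's scoring and proposal scheme are not given in the abstract. The sketch below shows the general Metropolis-style idea of sampling an ensemble of 3D "bead" models whose pairwise distances match targets derived from contact frequencies; the frequency-to-distance map, move size, and acceptance temperature are all placeholder assumptions:

```python
# Minimal sketch in the spirit of MCMC5C (not the published code): sample
# 3D bead models of a chromatin fragment against 5C/Hi-C-derived distances.
import numpy as np

rng = np.random.default_rng(0)
n = 10                                        # number of beads
freq = rng.random((n, n)) + 0.1
freq = (freq + freq.T) / 2                    # symmetric mock contact map
target = 1.0 / freq                           # crude frequency-to-distance map

def score(X):
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    iu = np.triu_indices(n, 1)
    return np.sum((D[iu] - target[iu]) ** 2)  # disagreement with targets

X = rng.normal(size=(n, 3))
s, beta, models = score(X), 2.0, []
for it in range(20000):
    Y = X.copy()
    Y[rng.integers(n)] += rng.normal(scale=0.1, size=3)  # perturb one bead
    sy = score(Y)
    if sy < s or rng.random() < np.exp(-beta * (sy - s)):
        X, s = Y, sy                          # Metropolis acceptance
    if it % 1000 == 0:
        models.append(X.copy())               # keep an ensemble, not one model
```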
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Serguieva, Antoaneta. "Computational intelligence techniques in asset risk analysis". Thesis, Brunel University, 2004. http://bura.brunel.ac.uk/handle/2438/5424.

Texto completo
Resumen
The problem of asset risk analysis is positioned within the computational intelligence paradigm. We suggest an algorithm for reformulating asset pricing, which involves incorporating imprecise information into the pricing factors through fuzzy variables, together with a calibration procedure for their possibility distributions. Fuzzy mathematics is then used to process the imprecise factors and obtain an asset evaluation. This evaluation is further automated using neural networks with sign restrictions on their weights. While networks of this type had previously been used with only up to two inputs and hypothetical data, here we apply thirty-six inputs and empirical data. To achieve successful training, we modify the Levenberg-Marquardt backpropagation algorithm. The intermediate result is that the fuzzy asset evaluation inherits the features of the factor imprecision and provides the basis for risk analysis. Next, we formulate a risk measure and a risk robustness measure based on the fuzzy asset evaluation under different characteristics of the pricing factors as well as different calibrations. Our database, extracted from DataStream, includes thirty-five companies traded on the London Stock Exchange. For each company, the risk and robustness measures are evaluated, and an asset risk analysis is carried out through these values, indicating their implications for company performance. A comparative company risk analysis is also provided. We then employ both risk measures to formulate a two-step asset ranking method. The assets are initially rated according to the investors' risk preference. In addition, an algorithm is suggested to incorporate the asset robustness information and refine the ranking further, benefiting market analysts. The rationale provided by the ranking technique serves as a point of departure in designing an asset risk classifier. We identify the fuzzy neural network structure of the classifier and develop an evolutionary training algorithm. The algorithm starts with preliminary heuristics for constructing a sufficient training set of assets with various characteristics, revealed by the values of the pricing factors and the asset risk values. The training algorithm then works at two levels: the inner level targets weight optimization, while the outer level efficiently guides the exploration of the search space. The latter is achieved by automatically decomposing the training set into subsets of decreasing complexity and then incrementing backward the corresponding subpopulations of partially trained networks. The empirical results prove that the developed algorithm is capable of training the identified fuzzy network structure, a problem of such complexity that it prevents single-level evolution from attaining meaningful results. The final outcome is an automatic asset classifier based on investors' perceptions of acceptable risk. All the steps described above constitute our approach to reformulating asset risk analysis within the approximate reasoning framework through the fusion of various computational intelligence techniques.
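The sign restrictions on the network weights are the distinctive constraint here. A minimal way to enforce them, sketched below with plain gradient descent standing in for the thesis' modified Levenberg-Marquardt and with all data randomly generated, is to project the weights back onto their required signs after every update:

```python
# Illustrative sketch (not the thesis algorithm): sign-constrained training
# of a linear layer; positive-constrained weights are clipped at zero from
# below, negative-constrained ones from above, after each gradient step.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 36, 1                                  # 36 inputs, as in the abstract
sign = rng.choice([-1.0, 1.0], size=(n_out, n_in))   # required sign of each weight

W = sign * 0.01                                      # start from a feasible point
X = rng.random((100, n_in)); y = rng.random((100, n_out))
for _ in range(500):
    pred = X @ W.T
    grad = (pred - y).T @ X / len(X)                 # mean-squared-error gradient
    W -= 0.1 * grad
    W = np.where(sign > 0, np.maximum(W, 0.0), np.minimum(W, 0.0))  # projection
```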
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Jakee, Khan Md Kamall. "Computational convex analysis using parametric quadratic programming". Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/45182.

Texto completo
Resumen
The class of piecewise linear-quadratic (PLQ) functions is a very important class of functions in convex analysis, since the result of most convex operators applied to a PLQ function is a PLQ function. Although there exists a wide range of algorithms for univariate PLQ functions, recent work has focused on extending these algorithms to PLQ functions of more than one variable. First, we recall a proof in [Convexity, Convergence and Feedback in Optimal Control, PhD thesis, R. Goebel, 2000] that PLQ functions are closed under partial conjugate computation. Then we use recent results on parametric quadratic programming (pQP) to compute the inf-projection of any multivariate convex PLQ function. We implemented the algorithm for bivariate PLQ functions and modified it to compute conjugates. We provide a complete space and time worst-case complexity analysis and show that, for bivariate functions, the algorithm matches the performance of [Computing the Conjugate of Convex Piecewise Linear-Quadratic Bivariate Functions, Bryan Gardiner and Yves Lucet, Mathematical Programming Series B, 2011] while being easier to extend to higher dimensions.
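For reference, the two convex operators at the heart of this abstract have the standard definitions below (textbook material, not specific to the thesis); when f is PLQ, evaluating either reduces to solving a quadratic program parameterized by s or y, which is what the pQP machinery exploits:

```latex
% Legendre-Fenchel conjugate and inf-projection (partial minimization):
f^{*}(s) = \sup_{x \in \mathbb{R}^n} \big\{ \langle s, x \rangle - f(x) \big\},
\qquad
g(y) = \inf_{x \in \mathbb{R}^m} f(x, y).
```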
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Khan, Asad Iqbal. "Computational schemes for parallel finite element analysis". Thesis, Heriot-Watt University, 1994. http://hdl.handle.net/10399/1422.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Jonsson, Pall Freyr. "Computational analysis of protein-protein interaction networks". Thesis, University College London (University of London), 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.439848.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.