Dissertations / Theses on the topic 'Partial Data processing'

Listed below are the top 50 dissertations and theses for research on the topic 'Partial Data processing.'

Where the metadata make them available, you can also download the full text of a publication as a PDF and read its abstract online.

Browse dissertations and theses from a wide variety of disciplines and organise your bibliography correctly.

1

Wu, Qinyi. "Partial persistent sequences and their applications to collaborative text document editing and processing." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44916.

Full text
Abstract:
In a variety of text document editing and processing applications, it is necessary to keep track of the revision history of text documents by recording changes and the metadata of those changes (e.g., user names and modification timestamps). Recent Web 2.0 document editing and processing applications, such as real-time collaborative note taking and wikis, require fine-grained shared access to collaborative text documents as well as efficient retrieval of metadata associated with different parts of those documents. Current revision control techniques only support coarse-grained shared access and are inefficient at retrieving metadata of changes at the sub-document granularity.

In this dissertation, we design and implement partial persistent sequences (PPSs) to support real-time collaboration and to manage metadata of changes at fine granularities for collaborative text document editing and processing applications. As a persistent data structure, PPSs have two important features. First, items in the data structure are never removed. We maintain the necessary timestamp information to keep track of both inserted and deleted items and use this information to reconstruct the state of a document at any point in time. Second, PPSs create unique, persistent, and ordered identifiers for items of a document at fine granularities (e.g., a word or a sentence). As a result, we are able to support consistent and fine-grained shared access to collaborative text documents by detecting and resolving editing conflicts based on the revision history, as well as to efficiently index and retrieve metadata associated with different parts of collaborative text documents.

We demonstrate the capabilities of PPSs through two important problems in collaborative text document editing and processing applications: data consistency control and fine-grained document provenance management. The first problem studies how to detect and resolve editing conflicts in collaborative text document editing systems. We approach this problem in two steps. In the first step, we use PPSs to capture data dependencies between different editing operations and define a consistency model more suitable for real-time collaborative editing systems. In the second step, we extend our work to the entire spectrum of collaborations and adapt transactional techniques to build a flexible framework for the development of various collaborative editing systems. The generality of this framework is demonstrated by its ability to specify three different types of collaboration, as exemplified by RCS, MediaWiki, and Google Docs respectively. We precisely specify the programming interfaces of this framework and describe a prototype implementation over Oracle Berkeley DB High Availability, a replicated database management engine.

The second problem, fine-grained document provenance management, studies how to efficiently index and retrieve fine-grained metadata for different parts of collaborative text documents. We use PPSs to design both disk-economic and computation-efficient techniques to index provenance data for millions of Wikipedia articles. Our approach is disk-economic because we only save a few full versions of a document and keep only the delta changes between those full versions. It is also computation-efficient because it avoids having to parse the revision history of collaborative documents to retrieve fine-grained metadata. Compared to MediaWiki, the revision control system for Wikipedia, our system uses less than 10% of the disk space and achieves at least an order-of-magnitude speed-up in retrieving fine-grained metadata for documents with thousands of revisions.
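The abstract above centres on a data structure in which edits never destroy information. The following is a minimal, hypothetical Python sketch of that idea (the class and method names are invented for illustration and are not the dissertation's implementation): items are only marked as deleted, and the document state at any past timestamp can be reconstructed from the stored metadata.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Item:
    """One element of the sequence; never physically removed."""
    value: str
    inserted_at: int                  # timestamp of insertion
    deleted_at: Optional[int] = None  # timestamp of deletion, if any

class PartialPersistentSequence:
    """Toy persistent sequence: edits only append metadata, never erase items."""

    def __init__(self):
        self._items: List[Item] = []  # all items ever inserted, in document order
        self._clock = 0

    def _tick(self) -> int:
        self._clock += 1
        return self._clock

    def insert(self, pos: int, value: str) -> None:
        # pos indexes the full item list (live and deleted items alike)
        self._items.insert(pos, Item(value, self._tick()))

    def delete(self, pos: int) -> None:
        # mark as deleted instead of removing, so the history is preserved
        self._items[pos].deleted_at = self._tick()

    def snapshot(self, t: int) -> List[str]:
        """Reconstruct the visible document state at timestamp t."""
        return [it.value for it in self._items
                if it.inserted_at <= t and (it.deleted_at is None or it.deleted_at > t)]

# usage
doc = PartialPersistentSequence()
doc.insert(0, "hello")   # timestamp 1
doc.insert(1, "world")   # timestamp 2
doc.delete(0)            # timestamp 3
print(doc.snapshot(2))   # ['hello', 'world']
print(doc.snapshot(3))   # ['world']
```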
2

Vitale, Raffaele. "Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90442.

Full text
Abstract:
The present Ph.D. thesis, primarily conceived to support and reinforce the relation between academic and industrial worlds, was developed in collaboration with Shell Global Solutions (Amsterdam, The Netherlands) in the endeavour of applying and possibly extending well-established latent variable-based approaches (i.e. Principal Component Analysis - PCA - Partial Least Squares regression - PLS - or Partial Least Squares Discriminant Analysis - PLSDA) for complex problem solving not only in the fields of manufacturing troubleshooting and optimisation, but also in the wider environment of multivariate data analysis. To this end, novel efficient algorithmic solutions are proposed throughout all chapters to address very disparate tasks, from calibration transfer in spectroscopy to real-time modelling of streaming flows of data. The manuscript is divided into the following six parts, focused on various topics of interest: Part I - Preface, where an overview of this research work, its main aims and justification is given together with a brief introduction on PCA, PLS and PLSDA; Part II - On kernel-based extensions of PCA, PLS and PLSDA, where the potential of kernel techniques, possibly coupled to specific variants of the recently rediscovered pseudo-sample projection, formulated by the English statistician John C. Gower, is explored and their performance compared to that of more classical methodologies in four different application scenarios: segmentation of Red-Green-Blue (RGB) images, discrimination of on-/off-specification batch runs, monitoring of batch processes and analysis of mixture designs of experiments; Part III - On the selection of the number of factors in PCA by permutation testing, where an extensive guideline on how to accomplish the selection of PCA components by permutation testing is provided through the comprehensive illustration of an original algorithmic procedure implemented for such a purpose; Part IV - On modelling common and distinctive sources of variability in multi-set data analysis, where several practical aspects of two-block common and distinctive component analysis (carried out by methods like Simultaneous Component Analysis - SCA - DIStinctive and COmmon Simultaneous Component Analysis - DISCO-SCA - Adapted Generalised Singular Value Decomposition - Adapted GSVD - ECO-POWER, Canonical Correlation Analysis - CCA - and 2-block Orthogonal Projections to Latent Structures - O2PLS) are discussed, a new computational strategy for determining the number of common factors underlying two data matrices sharing the same row- or column-dimension is described, and two innovative approaches for calibration transfer between near-infrared spectrometers are presented; Part V - On the on-the-fly processing and modelling of continuous high-dimensional data streams, where a novel software system for rational handling of multi-channel measurements recorded in real time, the On-The-Fly Processing (OTFP) tool, is designed; Part VI - Epilogue, where final conclusions are drawn, future perspectives are delineated, and annexes are included.
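Part III of the thesis concerns selecting the number of PCA components by permutation testing. The sketch below is a generic, parallel-analysis-style illustration of that idea in Python, not the algorithm developed in the thesis: each column is permuted independently to destroy the correlation structure, and components are retained while their singular values exceed the permutation threshold.

```python
import numpy as np

def n_components_by_permutation(X, n_perm=200, alpha=0.05, seed=0):
    """Estimate how many PCA components carry structure beyond chance."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                      # column-centred data
    sv = np.linalg.svd(Xc, compute_uv=False)     # observed singular values
    perm_sv = np.empty((n_perm, len(sv)))
    for p in range(n_perm):
        # permute each column independently to break between-variable correlation
        Xp = np.column_stack([rng.permutation(col) for col in Xc.T])
        perm_sv[p] = np.linalg.svd(Xp, compute_uv=False)
    threshold = np.quantile(perm_sv, 1 - alpha, axis=0)
    k = 0
    while k < len(sv) and sv[k] > threshold[k]:
        k += 1
    return k

# usage: two informative directions buried in noise
rng = np.random.default_rng(1)
scores = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, 10))
X = scores @ loadings + 0.1 * rng.normal(size=(100, 10))
print(n_components_by_permutation(X))  # typically 2
```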
Vitale, R. (2017). Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90442
3

Karasev, Peter A. "Feedback augmentation of pde-based image segmentation algorithms using application-specific exogenous data." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50257.

Full text
Abstract:
This thesis is divided into five chapters. The scope of problems considered is defined in chapter I. Next, chapter II provides background material on image processing with partial differential equations and a review of prior work in the field. Chapter III covers the medical imaging portion of the research; the key contribution is a control-based algorithm for interactive image segmentation. Applications of the feedback-augmented level set method to fracture reconstruction and surgical planning are shown. Problems in vision-based control are considered in Chapters IV and V. A method of improving performance in closed-loop target tracking using level set segmentation is developed, with unmanned aerial vehicle or next-generation missile guidance being the primary applications of interest. Throughout this thesis, the two application types are connected into a unified viewpoint of open-loop systems that are augmented by exogenous data.
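As a rough illustration of feedback-augmented, PDE-based segmentation, the sketch below adds an exogenous "user click" term to a Chan-Vese-style level-set evolution in Python. The update rule and the Gaussian feedback term are assumptions made for illustration only; they are not the control law developed in the thesis.

```python
import numpy as np

def curvature(phi):
    """Mean curvature of the level-set function phi (finite differences)."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

def evolve(image, phi, clicks_in, clicks_out,
           steps=200, dt=0.2, mu=0.2, gain=5.0, sigma=5.0):
    """Chan-Vese-style region evolution plus a Gaussian nudge around user clicks."""
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    feedback = np.zeros_like(phi)
    for r, c in clicks_in:          # pixels the user marked as "object"
        feedback += np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2 * sigma ** 2))
    for r, c in clicks_out:         # pixels the user marked as "background"
        feedback -= np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2 * sigma ** 2))
    for _ in range(steps):
        inside, outside = phi > 0, phi <= 0
        c1 = image[inside].mean() if inside.any() else 0.0
        c2 = image[outside].mean() if outside.any() else 0.0
        data = -(image - c1) ** 2 + (image - c2) ** 2   # region competition
        phi = phi + dt * (mu * curvature(phi) + data + gain * feedback)
    return phi   # the segmentation is the sign of phi
```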
4

Kalyon, Gabriel. "Supervisory control of infinite state systems under partial observation." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210032.

Full text
Abstract:
A discrete event system is a system whose state space is a discrete set and whose state transition mechanism is event-driven, i.e. its state evolution depends only on the occurrence of discrete events over time. These systems are used in many fields of application (telecommunication networks, aeronautics, aerospace, etc.). The validity of these systems is therefore an important issue, and one way to ensure it is to use supervisory control methods. These methods consist in imposing a given specification on a system by means of a controller which runs in parallel with the original system and restricts its behavior. In this thesis, we develop supervisory control methods where the system can have an infinite state space and the controller has only a partial observation of the system (which implies that the controller must define its control policy from an imperfect knowledge of the system). Unfortunately, this problem is generally undecidable. To overcome this negative result, we use abstract interpretation techniques, which ensure the termination of our algorithms at the cost of overapproximating some computations. The aim of this thesis is to provide the most complete contribution possible to this topic; hence, we consider increasingly realistic problems. More precisely, we start by considering a centralized framework (i.e. the system is controlled by a single controller) and by synthesizing memoryless controllers (i.e. controllers that define their control policy from the current observation received from the system). Next, to obtain better solutions, we consider the synthesis of controllers that record part or all of the execution of the system and use this information to define the control policy. Unfortunately, these methods cannot be used to control an interesting class of systems: distributed systems. We have therefore defined methods that make it possible to control distributed systems with synchronous communications (decentralized and modular methods) and with asynchronous communications (distributed method). Moreover, we have implemented some of our algorithms to experimentally evaluate the quality of the synthesized controllers.

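To make the partial-observation setting concrete, the toy Python sketch below synthesises a memoryless control decision for a small finite system: from an observation, it considers every state compatible with that observation and disables any controllable event that could reach a forbidden state in one step. The system, the observation map and the one-step policy are invented for illustration; the thesis handles infinite state spaces via abstract interpretation, which this sketch does not attempt.

```python
# A toy finite system standing in for the (possibly infinite) systems of the thesis.
transitions = {            # state -> {event: next_state}
    "s0": {"a": "s1", "b": "s2"},
    "s1": {"a": "s3"},
    "s2": {"b": "s3"},
    "s3": {},
}
controllable = {"a", "b"}          # events the controller may disable
bad_states = {"s3"}                # states the specification forbids
observe = {"s0": "o0", "s1": "o1", "s2": "o1", "s3": "o2"}  # partial observation

def control_policy(observation):
    """Memoryless control decision: disable a controllable event if, from ANY
    state compatible with the observation, it may reach a forbidden state."""
    compatible = {s for s, o in observe.items() if o == observation}
    disabled = set()
    for s in compatible:
        for event, nxt in transitions[s].items():
            if event in controllable and nxt in bad_states:
                disabled.add(event)
    return disabled

print(control_policy("o1"))   # both events disabled: each could drive the plant into s3
print(control_policy("o0"))   # empty set: from s0 no single step reaches a forbidden state
```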
Doctorate in Sciences

5

Jett, David B. "Selection of flip-flops for partial scan paths by use of a statistical testability measure." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-12302008-063234/.

Full text
6

Lalevée, André. "Towards highly flexible hardware architectures for high-speed data processing : a 100 Gbps network case study." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0054/document.

Full text
Abstract:
The increase in both the size of modern networks and the diversity of the applications they carry is pushing traditional computing architectures to their limits. Purely software architectures cannot sustain the required throughputs, while purely hardware ones severely lack the flexibility needed to adapt to the diversity of applications. Programmable hardware, such as Field Programmable Gate Arrays (FPGAs), has therefore been investigated. These architectures are usually considered a good trade-off between performance and flexibility, mainly thanks to Dynamic Partial Reconfiguration (DPR), which allows part of the design to be reconfigured at run time. However, this technique has several drawbacks, especially regarding the storage of the configuration files, called bitstreams. To address this issue, bitstream relocation can be deployed, which reduces the number of configuration files required; however, applying it is long, error-prone, and requires specific knowledge of FPGAs. A fully automated design flow has been developed to ease the use of this technique. To provide flexibility in the sequence of treatments performed on our architecture, a flexible and high-throughput communication structure is also required. A Network-on-Chip study and characterization has therefore been carried out with respect to network-processing and bitstream-relocation requirements. Finally, a case study has been developed to validate our approach.
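A back-of-the-envelope illustration of why bitstream relocation reduces storage (the figures are arbitrary and not taken from the thesis):

```python
# With N reconfigurable regions and M hardware modules, a non-relocatable flow
# needs one bitstream per (module, region) pair, whereas relocation needs only
# one bitstream per module.
n_regions, n_modules, bitstream_mib = 8, 12, 4.0

without_relocation = n_modules * n_regions * bitstream_mib   # 384.0 MiB
with_relocation = n_modules * bitstream_mib                  #  48.0 MiB
print(without_relocation, with_relocation)
```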
7

He, Chuan. "Numerical solutions of differential equations on FPGA-enhanced computers." [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1248.

Full text
8

Lazcano, Vanel. "Some problems in depth enhanced video processing." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/373917.

Full text
Abstract:
In this thesis we tackle two problems, namely the data interpolation problem in the context of depth computation, both for images and for videos, and the problem of estimating the apparent movement of objects in image sequences. The first problem deals with the completion of depth data in a region of an image or video where data are missing due to occlusions, unreliable data, damage or loss of data during acquisition. We tackle it in two ways. First, we propose a non-local gradient-based energy which is able to complete planes locally. We consider this model as an extension of the bilateral filter to the gradient domain. We have successfully evaluated our model on the completion of synthetic depth images and of incomplete depth maps provided by a Kinect sensor. The second approach to the problem is an experimental study of the Biased Absolutely Minimizing Lipschitz Extension (biased AMLE for short) for anisotropic interpolation of depth data into big empty regions without information. The AMLE operator is a cone interpolator, but the biased AMLE is an exponential cone interpolator, which makes it better adapted to depth maps of real scenes, which usually present soft convex or concave surfaces. Moreover, the biased AMLE operator is able to expand depth data to huge regions. By endowing the image domain with an anisotropic metric, the proposed method is able to take the underlying geometric information into account in order not to interpolate across the boundaries of objects at different depths. We have proposed a numerical scheme, based on eikonal operators, to compute the solution of the biased AMLE, and we have extended it to video sequences. The second problem deals with the motion estimation of objects in a video sequence, known as optical flow computation. The optical flow problem is one of the most challenging problems in computer vision. Traditional models fail to estimate it in the presence of occlusions and non-uniform illumination. To tackle these problems we propose a variational model to jointly estimate optical flow and occlusions. Moreover, the proposed model is able to deal with a usual drawback of variational methods, namely fast displacements of objects in the scene that are larger than the object itself. The addition of a term that balances gradients and intensities increases the robustness of the proposed model to illumination changes, and the inclusion of supplementary matches given by exhaustive search at specific locations helps to follow large displacements.
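The following is a simplified, isotropic Python sketch of AMLE-style depth completion: unknown pixels are repeatedly replaced by the midpoint of their largest and smallest neighbours, a discrete infinity-Laplacian iteration. The thesis studies the biased, anisotropic variant solved with eikonal operators, which this sketch does not implement.

```python
import numpy as np

def amle_fill(depth, known_mask, n_iter=2000):
    """Fill missing depth values with a discrete AMLE-style iteration.

    Each unknown pixel is repeatedly replaced by the midpoint of the largest
    and smallest of its 4 neighbours; known pixels stay fixed.
    """
    u = depth.astype(float).copy()
    u[~known_mask] = depth[known_mask].mean()      # crude initial guess
    for _ in range(n_iter):
        # 4-neighbour shifts with edge padding
        up    = np.pad(u, ((1, 0), (0, 0)), mode="edge")[:-1, :]
        down  = np.pad(u, ((0, 1), (0, 0)), mode="edge")[1:, :]
        left  = np.pad(u, ((0, 0), (1, 0)), mode="edge")[:, :-1]
        right = np.pad(u, ((0, 0), (0, 1)), mode="edge")[:, 1:]
        stack = np.stack([up, down, left, right])
        update = 0.5 * (stack.max(axis=0) + stack.min(axis=0))
        u = np.where(known_mask, u, update)        # only unknown pixels change
    return u

# usage: a sparse synthetic depth map with two known samples
depth = np.zeros((64, 64))
known = np.zeros_like(depth, dtype=bool)
depth[8, 8], depth[55, 50] = 1.0, 3.0
known[8, 8], known[55, 50] = True, True
filled = amle_fill(depth, known)
```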
9

Nakanishi, Rafael Umino. "Recuperação de objetos tridimensionais utilizando características de séries temporais." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-23112016-112604/.

Full text
Abstract:
With the increasing data storage capacity of databases and personal computers comes the need for computer algorithms capable of automatically processing and retrieving data and information. This is no different for three-dimensional objects stored in files. In this Master's thesis we studied new techniques for processing such objects using an approach that is unusual in the geometric processing area: techniques for analyzing time series, such as scattering wavelets and recurrence plots. For the shape retrieval problem, i.e., given a three-dimensional mesh, finding other meshes that are visually similar, our method extracts a single feature (Gaussian curvature or surface variation, for example) and orders it as a series using the information given by the Fiedler vector. The resulting series is then processed using scattering wavelets, a technique capable of analyzing the temporal behavior of serial data. For this problem, the results are comparable with other approaches reported in the literature that use multiple characteristics to find a matching mesh. In the case of partial retrieval, in which only a part of an object is given as the query, it is necessary to segment the meshes in order to find other parts that are visually similar to the query. By using recurrence plots to analyze the objects, our method can find not only the most similar region within the same (or another) object, but also all the regions that are similar to the query.
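A minimal Python sketch of the pipeline outlined above, with a toy graph standing in for a mesh: order a per-vertex feature by the Fiedler vector of the graph Laplacian and build a recurrence plot from the resulting series. The graph, the feature and all sizes are illustrative assumptions.

```python
import numpy as np

def fiedler_order(adjacency):
    """Order vertices by the Fiedler vector of the graph Laplacian."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    fiedler = eigvecs[:, 1]            # eigenvector of the 2nd-smallest eigenvalue
    return np.argsort(fiedler)

def recurrence_plot(series, eps):
    """Binary recurrence matrix R[i, j] = 1 when |x_i - x_j| < eps."""
    diff = np.abs(series[:, None] - series[None, :])
    return (diff < eps).astype(int)

# usage on a toy "mesh": a cycle graph with a per-vertex feature (e.g. curvature)
n = 20
adjacency = np.zeros((n, n))
for i in range(n):
    adjacency[i, (i + 1) % n] = adjacency[(i + 1) % n, i] = 1
feature = np.sin(np.linspace(0, 4 * np.pi, n))          # stand-in for curvature
series = feature[fiedler_order(adjacency)]              # feature ordered as a series
R = recurrence_plot(series, eps=0.2)
```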
10

Pike, Scott Mason. "Distributed resource allocation with scalable crash containment." Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1092857584.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xiv, 154 p.; also includes graphics, map. Includes bibliographical references (p. 148-154). Available online via OhioLINK's ETD Center
11

Tidmus, Jonathan Paul. "Task and data management for parallel particle tracing." Thesis, University of the West of England, Bristol, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387936.

Full text
12

Wang, Fei, and 王緋. "Complex stock trading strategy based on parallel particle swarm optimization." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B49858889.

Full text
Abstract:
Trading rules have been utilized in the stock market to make profit for more than a century. However, only using a single trading rule may not be sufficient to predict the stock price trend accurately. Although some complex trading strategies combining various classes of trading rules have been proposed in the literature, they often pick only one rule for each class, which may lose valuable information from other rules in the same class. In this thesis, a complex stock trading strategy, namely Performance-based Reward Strategy (PRS), is proposed. PRS combines the seven most popular classes of trading rules in financial markets, and for each class of trading rule, PRS includes various combinations of the rule parameters to produce a universe of 1059 component trading rules in all. Each component rule is assigned a starting weight and a reward/penalty mechanism based on profit is proposed to update these rules’ weights over time. To determine the best parameter values of PRS, we employ an improved time variant Particle Swarm Optimization (PSO) algorithm with the objective of maximizing the annual net profit generated by PRS. Due to the large number of component rules and swarm size, the optimization time is significant. A parallel PSO based on Hadoop, an open source parallel programming model of MapReduce, is employed to optimize PRS more efficiently. By omitting the traditional reduce phase of MapReduce, the proposed parallel PSO avoids the I/O cost of intermediate data and gets higher speedup ratio than previous parallel PSO based on MapReduce. After being optimized in an eight years training period, PRS is tested on an out-of-sample data set. The experimental results show that PRS outperforms all of the component rules in the testing period.
Master of Philosophy, Computer Science
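For illustration, a plain global-best particle swarm optimiser in Python is shown below, with a placeholder objective standing in for "annual net profit of PRS given its parameter vector". The thesis uses an improved time-variant PSO parallelised on Hadoop, which this sketch does not reproduce.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0), seed=0):
    """Plain global-best particle swarm optimisation (maximisation)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))       # positions
    v = np.zeros_like(x)                                    # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

# placeholder objective; in the thesis this would backtest the weighted rule set
profit = lambda params: -np.sum((params - 0.3) ** 2)
best_params, best_profit = pso(profit, dim=5)
print(best_params, best_profit)
```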
13

van der Gracht, Joseph. "Partially coherent image enhancement by source modification." Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/13379.

Full text
14

鄺建華 and Kin-wa Kwong. "Computer-aided parting line and parting surface generation in mould design." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1992. http://hub.hku.hk/bib/B31233119.

Full text
15

Pang, Bo. "Handwriting Chinese character recognition based on quantum particle swarm optimization support vector machine." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3950620.

Full text
16

Michel, Thomas. "Analyse mathématique et calibration de modèles de croissance tumorale." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0222/document.

Full text
Abstract:
In this thesis, we present several works on the study and calibration of partial differential equation models for tumor growth. The first part is devoted to the mathematical study of a model of tumor drug resistance in the case of gastro-intestinal tumor (GIST) metastases to the liver. The model consists of a coupled system of partial differential equations and takes several treatments into account, such as an anti-angiogenic treatment; it is able to reproduce clinical data. We first prove the existence and uniqueness of the solution to this model, and then study the asymptotic behavior of the solution when a model parameter describing the capacity of the tumor to evacuate the necrosis goes to 0. In the second part of this thesis, we present the development of a model of tumor spheroid growth and its calibration against in vitro experimental data. The main objective of this work is to reproduce quantitatively the distribution of proliferating cells in a spheroid as a function of the nutrient concentration. The modeling and calibration were carried out using experimental data consisting of the distribution of proliferating cells in a spheroid.
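As a toy illustration of calibrating a growth model against spheroid measurements, the sketch below fits a logistic volume curve to synthetic data with scipy's curve_fit. The thesis calibrates a spatial PDE model of the proliferating-cell distribution; the logistic model and the data here are only stand-ins.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_volume(t, v0, growth_rate, capacity):
    """Logistic growth: a simple stand-in for a spheroid-growth model."""
    return capacity / (1 + (capacity / v0 - 1) * np.exp(-growth_rate * t))

# synthetic "measurements" of spheroid volume over time (arbitrary units)
t_obs = np.linspace(0, 10, 15)
noise = 0.02 * np.random.default_rng(0).normal(size=t_obs.size)
v_obs = logistic_volume(t_obs, 0.05, 0.9, 2.0) + noise

params, _ = curve_fit(logistic_volume, t_obs, v_obs, p0=[0.1, 0.5, 1.0])
print(params)   # recovered (v0, growth_rate, capacity)
```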
17

Bouktache, Essaid. "Analysis of an adaptive antenna array with intermediate-frequency weighting partially implemented by digital processing /." The Ohio State University, 1985. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487260135357046.

Full text
18

Truter, J. N. J. "Using CAMAC hardware for access to a particle accelerator." Master's thesis, University of Cape Town, 1988. http://hdl.handle.net/11427/17049.

Full text
Abstract:
Includes bibliographical references and index.
The design and implementation of a software interface for the high-level application programs used to control and monitor a particle accelerator is described. Effective methods of interfacing the instrumentation bus system with a real-time multitasking computer operating system were examined and optimized for efficient utilization of the operating-system software and the available hardware. Various methods of accessing the instrumentation bus are implemented, as well as demand-response servicing of the instruments on the bus.
19

Tai, Hio Kuan. "Protein-ligand docking and virtual screening based on chaos-embedded particle swarm optimization algorithm." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3948431.

Full text
20

Anderson, Christopher R. "Evaluation of gigabit links for use in HEP trigger processing." Thesis, University of Liverpool, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367118.

Full text
21

Cheeseman, Bevan. "The Adaptive Particle Representation (APR) for Simple and Efficient Adaptive Resolution Processing, Storage and Simulations." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234245.

Full text
Abstract:
This thesis presents the Adaptive Particle Representation (APR), a novel adaptive data representation that can be used for general data processing, storage, and simulations. The APR is motivated, and designed, as a replacement representation for pixel images to address computational and memory bottlenecks in processing pipelines for studying spatiotemporal processes in biology using Light-sheet Fluorescence Microscopy (LSFM) data. The APR is an adaptive function representation that represents a function in a spatially adaptive way using a set of Particle Cells V and function values stored at particle collocation points P∗. The Particle Cells partition space, and implicitly define a piecewise constant Implied Resolution Function R∗(y) and particle sampling locations. As an adaptive data representation, the APR can be used to provide both computational and memory benefits by aligning the number of Particle Cells and particles with the spatial scales of the function. The APR allows reconstruction of a function value at any location y using any positive weighted combination of particles within a distance of R∗(y). The Particle Cells V are selected such that the error between the reconstruction and the original function, when weighted by a function σ(y), is below a user-set relative error threshold E. We call this the Reconstruction Condition and σ(y) the Local Intensity Scale. σ(y) is motivated by local gain controls in the human visual system, and for LSFM data can be used to account for contrast variations across an image. The APR is formed by satisfying an additional condition on R∗(y), which we call the Resolution Bound. The Resolution Bound relates R∗(y) to a local maximum of the absolute value of the function derivatives within a distance R∗(y) of y. Given restrictions on σ(y), satisfaction of the Resolution Bound also guarantees satisfaction of the Reconstruction Condition. In this thesis, we present algorithms and approaches that find the optimal Implied Resolution Function for general problems in the form of the Resolution Bound using Particle Cells, with an algorithm we call the Pulling Scheme. Here, optimal means the largest R∗(y) at each location. The Pulling Scheme has worst-case linear complexity in the number of pixels when used to represent images. The approach is general in that the same algorithm can be used for general (α,m)-Reconstruction Conditions, where α denotes the function derivative and m the minimum order of the reconstruction. Further, it can also be combined with anisotropic neighborhoods to provide adaptation in both space and time. The APR can be used with both noise-free and noisy data. For noisy data, the Reconstruction Condition can no longer be guaranteed, but numerical results show an optimal range of relative error E that provides a maximum increase in PSNR over the noisy input data. Further, if it is assumed that the Implied Resolution Function satisfies the Resolution Bound, then the APR converges to a biased estimate (constant factor of E) at the optimal statistical rate. The APR continues a long tradition of adaptive data representations and represents a unique trade-off between the level of adaptation of the representation and simplicity, both regarding the APR's structure and its use for processing. Here, we numerically evaluate the adaptation and processing of the APR for use with LSFM data. This is done using both synthetic and LSFM exemplar data.
It is concluded from these results that the APR has the correct properties to provide a replacement for pixel images and to address bottlenecks in processing LSFM data. Removal of these bottlenecks would be achieved by adapting to spatial, temporal and intensity-scale variations in the data. Further, we propose that the simple structure of the general APR could provide benefits in areas such as the numerical solution of differential equations, adaptive regression methods, and surface representation for computer graphics.
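A hedged restatement of the two conditions described above in LaTeX notation (first-order case; the exact formulation in the thesis may differ):

```latex
% Reconstruction Condition: sigma-weighted reconstruction error below E,
% and Resolution Bound: R^*(y) limited by the largest derivative magnitude
% found within distance R^*(y) of y.
\begin{align}
  \frac{\lvert \hat f(y) - f(y) \rvert}{\sigma(y)} &\;\le\; E
      \qquad \text{for all locations } y, \\
  R^*(y)\,\max_{\lvert y' - y \rvert \le R^*(y)} \lvert \partial f(y') \rvert
      &\;\le\; E\,\sigma(y).
\end{align}
```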
22

Cirkic, Mirsad. "Modular General-Purpose Data Filtering for Tracking." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-14917.

Full text
Abstract:

In nearly all modern tracking systems, signal processing is an important part, with state estimation as the fundamental component. To evaluate and reassess different tracking systems in an affordable way, simulations that are in accordance with reality are widely used. Simulation software that is composed of many different simulating modules, such as high-level architecture (HLA) standardized software, is capable of simulating very realistic data and scenarios.

A modular and general-purpose state estimation functionality for filtering provides a profound basis for simulating most modern tracking systems, and that is precisely what is created and implemented in an HLA framework in this thesis work. Some of the most widely used estimators, the iterated Schmidt extended Kalman filter, the scaled unscented Kalman filter, and the particle filter, are chosen to form a toolbox of such functionality. A readily expandable toolbox that offers both the unique and the general features of each filter is designed and implemented; it can be utilized not only in tracking applications but in any application that needs fundamental state estimation. To prepare the user to make full use of this toolbox, the filters' methods are described thoroughly, some of them modified with adjustments discovered in the process.

Furthermore, for the sake of user-friendliness, a linear algebra shell with very straightforward matrix handling is created, using BOOST UBLAS as the underlying numerical library. It is used for the implementation of the filters in C++, which results in very independent and portable code.
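Of the estimators named above, the particle filter is the simplest to sketch. Below is a minimal bootstrap particle filter in Python for a one-dimensional random-walk state; it illustrates the filter family only, not the thesis's toolbox or its C++/uBLAS implementation.

```python
import numpy as np

def bootstrap_particle_filter(measurements, n_particles=500,
                              process_std=0.5, meas_std=1.0, seed=0):
    """Minimal bootstrap particle filter for a 1-D random-walk state."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)      # initial state cloud
    estimates = []
    for z in measurements:
        # propagate through the (random-walk) process model
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # weight by the Gaussian measurement likelihood
        weights = np.exp(-0.5 * ((z - particles) / meas_std) ** 2) + 1e-12
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))   # posterior mean estimate
        # multinomial resampling
        particles = rng.choice(particles, size=n_particles, p=weights)
    return np.array(estimates)

# usage: noisy observations of a slowly drifting state
true_state = np.cumsum(np.full(50, 0.1))
obs = true_state + np.random.default_rng(1).normal(0, 1.0, 50)
print(bootstrap_particle_filter(obs)[-5:])
```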

23

Čirkić, Mirsad. "Modular General-Purpose Data Filtering for Tracking." Thesis, Linköpings universitet, Institutionen för systemteknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-14917.

Full text
Abstract:
In nearly all modern tracking systems, signal processing is an important part, with state estimation as the fundamental component. To evaluate and reassess different tracking systems in an affordable way, simulations that are in accordance with reality are widely used. Simulation software that is composed of many different simulating modules, such as high-level architecture (HLA) standardized software, is capable of simulating very realistic data and scenarios. A modular and general-purpose state estimation functionality for filtering provides a profound basis for simulating most modern tracking systems, and that is precisely what is created and implemented in an HLA framework in this thesis work. Some of the most widely used estimators, the iterated Schmidt extended Kalman filter, the scaled unscented Kalman filter, and the particle filter, are chosen to form a toolbox of such functionality. A readily expandable toolbox that offers both the unique and the general features of each filter is designed and implemented; it can be utilized not only in tracking applications but in any application that needs fundamental state estimation. To prepare the user to make full use of this toolbox, the filters' methods are described thoroughly, some of them modified with adjustments discovered in the process. Furthermore, for the sake of user-friendliness, a linear algebra shell with very straightforward matrix handling is created, using BOOST UBLAS as the underlying numerical library. It is used for the implementation of the filters in C++, which results in very independent and portable code.
24

Lundqvist, Viktor. "A smoothed particle hydrodynamic simulation utilizing the parallel processing capabilites of the GPUs." Thesis, Linköping University, Department of Science and Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-21761.

Full text
Abstract:

Simulating fluid behavior has proven to be a demanding challenge that requires complex computational models and highly efficient data structures. Smoothed Particle Hydrodynamics (SPH) is a particle-based computational model used to simulate fluid behavior that has been found capable of producing convincing results. However, the SPH algorithm is computationally heavy, which makes it cumbersome to work with.

This master's thesis describes how the SPH algorithm can be accelerated by utilizing the GPU's computational resources. It describes a model for how to distribute the workload on the GPU and presents a suitable data structure. In addition, it proposes a method to represent and handle moving objects in the fluid's surroundings. Finally, the performance gain due to the GPU is evaluated by comparing processing times with those of an identical implementation running solely on the CPU.
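As a small illustration of the SPH model itself (not of the GPU parallelisation discussed in the thesis), the sketch below computes SPH densities with the standard poly6 smoothing kernel over all particle pairs on the CPU.

```python
import numpy as np

def sph_density(positions, h=0.3, particle_mass=1.0):
    """SPH density estimate with the standard poly6 smoothing kernel (3-D)."""
    norm = 315.0 / (64.0 * np.pi * h ** 9)
    diff = positions[:, None, :] - positions[None, :, :]      # pairwise offsets
    r2 = np.sum(diff ** 2, axis=-1)
    w = np.where(r2 < h ** 2, norm * (h ** 2 - r2) ** 3, 0.0)  # poly6 kernel values
    return particle_mass * w.sum(axis=1)                       # rho_i = sum_j m W_ij

# usage: a small random cloud of fluid particles
pts = np.random.default_rng(0).uniform(0, 1, size=(200, 3))
rho = sph_density(pts)
print(rho.mean())
```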

25

Wong, Cheok Meng. "A distributed particle swarm optimization for fuzzy c-means algorithm based on an apache spark platform." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3950604.

Full text
26

Pomareda, Sesé Victor. "Signal Processing Approaches to the Detection and Localization of Gas Chemical Sources using Partially Selective Sensors." Doctoral thesis, Universitat de Barcelona, 2013. http://hdl.handle.net/10803/119727.

Full text
Abstract:
Due to recent progress, higher-order chemical instrumentation provides large amounts of data which need automated processing in order to extract relevant information. In most cases, the raw signals or spectra are too complex for manual analysis. The ability to detect, identify and quantitate chemical substances in the gas phase during field operations is required in a huge number of applications; among them, I would like to highlight the need for chemical sensing in diverse humanitarian, safety and security applications. In these cases, it becomes extremely important to continuously monitor the environments where chemicals are spread in order to be ready to act when abnormal events are discovered. In the most critical scenarios the sample cannot simply be taken to the laboratory and analyzed, since an immediate answer is needed. In other scenarios, the area must be explored because the location of the gas source or material of interest is unknown. This exploration can be performed using multiple mobile sensors in order to localize the chemical source or material. Different sensing technologies have been successfully used to detect and identify different chemical substances (gases or volatile compounds). These compounds can be toxic or hazardous, or they can be signatures of the materials to be detected, for instance explosives or drugs. Among these technologies, mobility-based analyzers provide fast responses with high sensitivity. However, IMS instruments are not exempt from problems. Typically, they provide moderate selectivity, with overlapping peaks appearing in the spectra. Moreover, the presence of humidity makes peaks wider, thus worsening the resolving power and the resolution. Furthermore, the response of IMS is non-linear as substance concentration increases, and more than one peak can appear in the spectra due to the same compound. In the present thesis, these problems are addressed and applications using an Ion Mobility Spectrometer (IMS) and a Differential Mobility Analyzer (DMA) are shown. It is demonstrated that multivariate data analysis tools are more effective when dealing with these technologies. For the first time, multivariate data analysis tools have been applied to a novel DMA. It is shown that the DMA could be established as a good instrument for the detection of explosives and the detection and quantitation of VOCs. Furthermore, Multivariate Curve Resolution - Alternating Least Squares (MCR-ALS) is shown to be suitable for the qualitative analysis of IMS spectra when interfering chemicals appear in the spectra and even when their behaviour is non-linear. Partial Least Squares (PLS) methods are demonstrated to work properly for the quantitative analysis of these signals; from this analysis the concentrations of the target substances are obtained. It is also demonstrated in this thesis that the quantitative measurements from these sensors can be integrated into a gas source localization algorithm in order to improve the localization of the source in those scenarios where it is required. It is shown that the new proposal works significantly better in cases where the source strength is weak. This is illustrated by presenting results from simulations generated under realistic conditions. Moreover, real-world data were obtained using a mobile robot equipped with a photoionization detector (PID). Experiments were carried out under forced ventilation and turbulence in indoor and outdoor environments. The results obtained validate the simulation results and confirm that the new localization algorithm can operate effectively in real environments.
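A minimal Python sketch of PLS-based quantitation on synthetic "spectra" with an overlapping interferent, in the spirit of the multivariate calibration described above; the data, peak shapes and model settings are invented for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic stand-in for IMS/DMA spectra: each spectrum is a Gaussian peak whose
# height scales with analyte concentration, plus an interferent peak and noise.
rng = np.random.default_rng(0)
channels = np.linspace(0, 1, 300)
conc = rng.uniform(0.1, 1.0, 80)                      # "true" concentrations
interferent = rng.uniform(0.0, 0.5, 80)
peak = lambda centre, width: np.exp(-0.5 * ((channels - centre) / width) ** 2)
X = (conc[:, None] * peak(0.40, 0.02)
     + interferent[:, None] * peak(0.45, 0.02)
     + 0.01 * rng.normal(size=(80, 300)))

pls = PLSRegression(n_components=2)
pls.fit(X[:60], conc[:60])                            # calibration set
pred = pls.predict(X[60:]).ravel()                    # prediction set
print(np.corrcoef(pred, conc[60:])[0, 1])             # close to 1
```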
APA, Harvard, Vancouver, ISO, and other styles
27

Andress, John Charles. "Development of the BaBar trigger for the investigation of CP violation." Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322355.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Hall, Richard James. "Development of methods for improving throughput in the processing of single particle cryo-electron microscopy data, applied to the reconstruction of E. coli RNA polymerase holoenzyme - DNA complex." Thesis, Imperial College London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.411621.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Fan, Yang, Hidehiko Masuhara, Tomoyuki Aotani, Flemming Nielson, and Hanne Riis Nielson. "AspectKE*: Security aspects with program analysis for distributed systems." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4136/.

Full text
Abstract:
Enforcing security policies on distributed systems is difficult, in particular when a system contains untrusted components. We designed AspectKE*, a distributed AOP language based on a tuple space, to tackle this issue. In AspectKE*, aspects can enforce access control policies that depend on the future behavior of running processes. One of the key language features is the predicates and functions that extract results of static program analysis, which are useful for defining security aspects that have to know about the future behavior of a program. AspectKE* also provides a novel variable binding mechanism for pointcuts, so that pointcuts can uniformly specify join points based on both static and dynamic information about the program. Our implementation strategy performs the fundamental static analysis at load time, so as to keep runtime overheads minimal. We implemented a compiler for AspectKE*, and demonstrate the usefulness of AspectKE* through a security aspect for a distributed chat system.
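The following is a conceptual Python sketch, not AspectKE* syntax, of the general idea of a security aspect that consults the result of a prior static analysis before permitting an operation on a tuple space; the policy table, class and method names are invented for illustration.

```python
# Conceptual illustration only: an "aspect"-like check that uses a policy derived
# from a (hypothetical) load-time static analysis before a process may read a
# tuple. STATIC_ANALYSIS and TupleSpace are invented names, not AspectKE* APIs.

# For each process name, the set of fields its future behaviour is allowed to handle.
STATIC_ANALYSIS = {"logger": {"timestamp"}, "mailer": {"timestamp", "email"}}

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def put(self, tup):
        self._tuples.append(tup)

    def read(self, process, fields):
        # The security "aspect": deny access if the requesting process may later
        # forward a field that the policy forbids it to handle.
        allowed = STATIC_ANALYSIS.get(process, set())
        if not set(fields) <= allowed:
            raise PermissionError(f"{process} denied access to {fields}")
        return [{f: t[f] for f in fields} for t in self._tuples]

space = TupleSpace()
space.put({"timestamp": "2010-01-01", "email": "alice@example.org"})
print(space.read("logger", ["timestamp"]))   # permitted
# space.read("logger", ["email"]) would raise PermissionError
```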
APA, Harvard, Vancouver, ISO, and other styles
30

Silva, Carlos Alexandre Moreira da 1984. "Aplicação de tecnologias analíticas de processo e inteligência artificial para monitoramento e controle de processo de recobrimento de partículas em leito fluidizado." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/266036.

Full text
Abstract:
Advisor: Osvaldir Pereira Taranto
Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Química
Abstract: The chemical, food and pharmaceutical industries have extensively used the fluidization operation in numerous processes, due to its very attractive features that enable effective contact between the solid and fluid phases, which results in high heat and mass transfer rates. However, the bubbling fluidization regime, which is the starting condition of the processes involving this operation, is often affected by the operating conditions. Elevated temperatures, excessive moisture content of the particles and the introduction of liquid into the fluidized bed may lead to instabilities in the fluid-dynamic regime and cause partial or total collapse of the bed, reducing the process efficiency. The maintenance of stable fluidization conditions during particle coating processes in fluidized beds is of fundamental importance to ensure a favorable coating efficiency and to avoid zones without movement and agglomeration of particles in the bed, because these undesirable factors compromise the mixing between the phases and therefore the quality of the final product. Within this context, the use of a system for monitoring and real-time control of particle coating processes is highly desirable to allow operation in stable fluidization regimes and to ensure a uniform coating film and good flowability of the solids. This doctoral thesis applies the Gaussian spectral analysis methodology of the pressure fluctuation signals (Parise et al. (2008)) to the development of control systems based on artificial intelligence (Fuzzy Logic), in order to monitor the stability of the fluidization regime in a particle coating process. Comparisons between the fluid-dynamic conditions of the processes with and without control were analyzed for operations in a laboratory-scale fluidized bed. To assess the early stages of unwanted agglomeration, an in-line monitoring probe (Parsum IPP70) was used. With the application of this automated system, it was possible to associate the stability of fluidization with the degree of agglomeration. The process stopping point could be set at 420 µm (initial particle size 360 µm), beyond which the coating mechanism takes place simultaneously with agglomeration. The monitoring parameters of the regime were able to identify the initial phase of defluidization, and they also made it possible to control the process by Fuzzy Logic and to stabilize the operation at high rates of coating suspension atomized onto the bed
Doctorate
Process Engineering
Doctor in Chemical Engineering
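As a simplified, hedged illustration of the spectral monitoring idea summarized above (not the exact Parise et al. procedure or the thesis code), the sketch below characterizes a pressure fluctuation signal by the mean frequency and spread of its power spectrum, quantities of the kind that could feed a fuzzy controller; the sampling rate and the signal itself are synthetic assumptions.

```python
# Simplified spectral monitoring of fluidization pressure fluctuations (illustrative).
import numpy as np

fs = 400.0                                    # assumed sampling frequency [Hz]
t = np.arange(0, 30, 1 / fs)
pressure = np.sin(2 * np.pi * 3.2 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(pressure - pressure.mean())) ** 2
freqs = np.fft.rfftfreq(pressure.size, d=1 / fs)

weights = spectrum / spectrum.sum()
mean_freq = float(np.sum(freqs * weights))                               # central frequency
std_freq = float(np.sqrt(np.sum((freqs - mean_freq) ** 2 * weights)))    # spectral spread

print(f"mean frequency = {mean_freq:.2f} Hz, spread = {std_freq:.2f} Hz")
# A drift of these parameters over time could serve as an early indicator of
# defluidization inside a fuzzy control loop.
```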
APA, Harvard, Vancouver, ISO, and other styles
31

Oteniya, Lloyd. "Bayesian belief networks for dementia diagnosis and other applications : a comparison of hand-crafting and construction using a novel data driven technique." Thesis, University of Stirling, 2008. http://hdl.handle.net/1893/497.

Full text
Abstract:
The Bayesian network (BN) formalism is a powerful representation for encoding domains characterised by uncertainty. However, before it can be used it must first be constructed, which is a major challenge for any real-life problem. There are two broad approaches, namely the hand-crafted approach, which relies on a human expert, and the data-driven approach, which relies on data. The former approach is useful; however, issues such as human bias can introduce errors into the model. We have conducted a literature review of the expert-driven approach, cherry-picked a number of common methods, and engineered a framework to assist non-BN experts with expert-driven construction of BNs. The latter construction approach uses algorithms to construct the model from a data set. However, construction from data is provably NP-hard. To solve this problem, approximate, heuristic algorithms have been proposed; in particular, algorithms that assume an order between the nodes, therefore reducing the search space. However, traditionally, this approach relies on an expert providing the order among the variables; an expert may not always be available, or may be unable to provide the order. Nevertheless, if a good order is available, these order-based algorithms have demonstrated good performance. More recent approaches attempt to learn a good order and then use an order-based algorithm to discover the structure. To eliminate the need for order information during construction, we propose a search in the entire space of Bayesian network structures; we present a novel approach for carrying out this task, and we demonstrate its performance against existing algorithms that search in the entire space and in the space of orders. Finally, we employ the hand-crafting framework to construct models for the task of diagnosis in a real-life medical domain, dementia diagnosis. We collect real dementia data from clinical practice, and we apply the data-driven algorithms developed to assess the concordance between the reference models developed by hand and the models derived from real clinical data.
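To make the order-based search idea concrete, here is a hedged Python sketch of a K2-style greedy parent selection under a fixed node ordering with a BIC-style score; it is a generic illustration on assumed binary data, not the algorithms developed in the thesis.

```python
# Greedy order-based structure learning sketch (illustrative, binary variables).
import numpy as np
from itertools import product

def family_bic(data, child, parents, arity=2):
    """BIC-style score of one node given a candidate parent set."""
    n = data.shape[0]
    ll = 0.0
    for ps in product(range(arity), repeat=len(parents)):
        mask = np.ones(n, dtype=bool)
        for p, v in zip(parents, ps):
            mask &= data[:, p] == v
        counts = np.array([(data[mask, child] == v).sum() for v in range(arity)])
        total = counts.sum()
        if total == 0:
            continue
        probs = counts / total
        ll += np.sum(counts[counts > 0] * np.log(probs[counts > 0]))
    n_params = (arity - 1) * arity ** len(parents)
    return ll - 0.5 * n_params * np.log(n)

def learn_order_based(data, order, max_parents=2):
    structure = {}
    for i, node in enumerate(order):
        parents, best = [], family_bic(data, node, [])
        candidates = list(order[:i])          # only predecessors in the order
        improved = True
        while improved and len(parents) < max_parents:
            improved = False
            for c in candidates:
                score = family_bic(data, node, parents + [c])
                if score > best:
                    best, chosen, improved = score, c, True
            if improved:
                parents.append(chosen)
                candidates.remove(chosen)
        structure[node] = parents
    return structure

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 500)
b = np.where(rng.random(500) < 0.9, a, 1 - a)        # b depends strongly on a
data = np.column_stack([a, b, rng.integers(0, 2, 500)])
print(learn_order_based(data, order=[0, 1, 2]))      # expected to recover 0 -> 1
```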
APA, Harvard, Vancouver, ISO, and other styles
32

Le, Floch Hervé. "Acquisition des images en microscopie electronique a balayage in situ." Toulouse 3, 1986. http://www.theses.fr/1986TOU30026.

Full text
Abstract:
Image signal acquisition chain of the in-situ scanning electron microscope (MEBIS). Study of the noise sources associated with semiconductor detectors and development of a fabrication process for surface-barrier detection diodes. Design of an electronic board compatible with a microcomputer. This board allows digitisation, storage on floppy disk and display of the images provided by the MEBIS.
APA, Harvard, Vancouver, ISO, and other styles
33

Montes, De Oca Roldan Marco. "Incremental social learning in swarm intelligence systems." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209909.

Full text
Abstract:
A swarm intelligence system is a type of multiagent system with the following distinctive characteristics: (i) it is composed of a large number of agents, (ii) the agents that comprise the system are simple with respect to the complexity of the task the system is required to perform, (iii) its control relies on principles of decentralization and self-organization, and (iv) its constituent agents interact locally with one another and with their environment.

Interactions among agents, either direct or indirect through the environment in which they act, are fundamental for swarm intelligence to exist; however, there is a class of interactions, referred to as "interference", that actually blocks or hinders the agents' goal-seeking behavior. For example, competition for space may reduce the mobility of robots in a swarm robotics system, or misleading information may spread through the system in a particle swarm optimization algorithm. One of the most visible effects of interference in a swarm intelligence system is the reduction of its efficiency. In other words, interference increases the time required by the system to reach a desired state. Thus, interference is a fundamental problem which negatively affects the viability of the swarm intelligence approach for solving important, practical problems.

We propose a framework called "incremental social learning" (ISL) as a solution to the aforementioned problem. It consists of two elements: (i) a growing population of agents, and (ii) a social learning mechanism. Initially, a system under the control of ISL consists of a small population of agents. These agents interact with one another and with their environment for some time before new agents are added to the system according to a predefined schedule. When a new agent is about to be added, it learns socially from a subset of the agents that have been part of the system for some time, and that, as a consequence, may have gathered useful information. The implementation of the social learning mechanism is application-dependent, but the goal is to transfer knowledge from a set of experienced agents that are already in the environment to the newly added agent. The process continues until one of the following criteria is met: (i) the maximum number of agents is reached, (ii) the assigned task is finished, or (iii) the system performs as desired. Starting with a small number of agents reduces interference because it reduces the number of interactions within the system, and thus, fast progress toward the desired state may be achieved. By learning socially, newly added agents acquire knowledge about their environment without incurring the costs of acquiring that knowledge individually. As a result, ISL can make a swarm intelligence system reach a desired state more rapidly.

We have successfully applied ISL to two very different swarm intelligence systems. We applied ISL to particle swarm optimization algorithms. The results of this study demonstrate that ISL substantially improves the performance of these kinds of algorithms. In fact, two of the resulting algorithms are competitive with state-of-the-art algorithms in the field. The second system to which we applied ISL exploits a collective decision-making mechanism based on an opinion formation model. This mechanism is also one of the original contributions presented in this dissertation. A swarm robotics system under the control of the proposed mechanism allows robots to choose from a set of two actions the action that is fastest to execute. In this case, when only a small proportion of the swarm is able to concurrently execute the alternative actions, ISL substantially improves the system's performance.
Doctorate in Engineering Sciences
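A hedged Python sketch of the incremental social learning idea applied to particle swarm optimization follows: the swarm starts small and each newly added particle is initialized near the current global best instead of uniformly at random. The objective function, growth schedule and parameters are illustrative assumptions, not the dissertation's algorithms.

```python
# Incremental social learning applied to PSO (simplified illustration).
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, max_swarm, iters_per_growth = 5, 20, 30
w, c1, c2 = 0.72, 1.49, 1.49

pos = rng.uniform(-5, 5, size=(3, dim))              # start with only 3 particles
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([sphere(p) for p in pos])

while True:
    for _ in range(iters_per_growth):
        g = pbest[np.argmin(pbest_val)]
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([sphere(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
    if len(pos) >= max_swarm:
        break
    # Social learning: the new particle is drawn near the global best rather
    # than being initialized uniformly at random.
    new = pbest[np.argmin(pbest_val)] + 0.1 * rng.normal(size=dim)
    pos, vel = np.vstack([pos, new]), np.vstack([vel, np.zeros(dim)])
    pbest, pbest_val = np.vstack([pbest, new]), np.append(pbest_val, sphere(new))

print("best value found:", pbest_val.min())
```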

APA, Harvard, Vancouver, ISO, and other styles
34

Hrbáček, Jan. "Navigation and Information System for Visually Impaired." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2018. http://www.nusl.cz/ntk/nusl-391817.

Full text
Abstract:
Visual impairment is one of the most common physical disabilities: it is estimated that up to 3% of the population suffers from severe impairment or loss of sight. Blindness significantly impairs the ability to orient oneself and move in the surrounding environment; without knowledge of the layout of the space, otherwise obtained mostly by sight, the affected person simply does not know which way to move towards the goal. The usual solution to the problem of orientation in unknown environments is for the blind person to be accompanied by a sighted guide; this service is, however, very demanding, and the blind person has to rely fully on the guide. This work investigates ways of easing spatial orientation for the impaired by exploiting existing sensing devices and suitable processing of their data. The topic is treated through an analogy with mobile robotics, in whose spirit it is divided into a localization part and a path-planning part. While path-planning methods are largely available, pedestrian localization often suffers from considerable positioning errors, which complicates the use of standard navigation devices by blind users. The position estimate can be improved in several ways, which are examined in the analytical chapter. The presented work first proposes the fusion of a common GPS receiver with a pedestrian odometry unit, which preserves a faithful shape of the trajectory at the local level. To mitigate the remaining drift of the estimate, the use of natural landmarks of the environment referenced to a global position is proposed. Based on existing graph-search formalisms, optimality criteria suitable for choosing a blind person's route through an urban environment are examined. A high-level instruction generator based on fuzzy logic is then built with the motivation of a human-feeling user interface; it is complemented by immediate haptic output correcting heading deviation. The behavior of the proposed principles was evaluated in realistic experiments capturing the specifics of the target urban environment. The results show substantial improvements in both the maximum and the mean positioning-error indicators.
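As a hedged illustration of the GPS/pedestrian-odometry fusion idea described above, the sketch below uses a simple complementary filter on synthetic data: odometry preserves the local trajectory shape while a small gain pulls the estimate toward the absolute but noisy GPS fixes. It is a toy stand-in, not the estimator developed in the thesis.

```python
# Complementary-filter fusion of pedestrian odometry and GPS (illustrative).
import numpy as np

rng = np.random.default_rng(0)
steps = 200
true_pos = np.cumsum(np.tile([0.7, 0.1], (steps, 1)), axis=0)        # walked path [m]
odom_delta = np.diff(true_pos, axis=0, prepend=true_pos[:1]) \
             + rng.normal(0, 0.02, (steps, 2))                       # small per-step error
gps_fix = true_pos + rng.normal(0, 3.0, (steps, 2))                  # noisy but absolute

alpha = 0.05                     # how strongly GPS corrects the dead-reckoning estimate
est = np.zeros((steps, 2))
est[0] = gps_fix[0]
for k in range(1, steps):
    predicted = est[k - 1] + odom_delta[k]        # dead reckoning keeps local shape
    est[k] = (1 - alpha) * predicted + alpha * gps_fix[k]

print("final position error [m]:", np.linalg.norm(est[-1] - true_pos[-1]))
```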
APA, Harvard, Vancouver, ISO, and other styles
35

Jamier, Robert. "Génération automatique de parties opératives de circuits VLSI de type microprocesseur." Phd thesis, Grenoble INPG, 1986. http://tel.archives-ouvertes.fr/tel-00322276.

Full text
Abstract:
The Apollon datapath compiler presented in this thesis automatically generates the mask layout of datapaths for microprocessor-type VLSI circuits from a register-transfer-level behavioral description consisting of an unordered set of datapath instructions. A datapath instruction is formed of a set of datapath actions with a predefined format (transfers, unary or binary operations, and input/output) that must execute in parallel within at most two datapath cycles. A datapath cycle comprises 4 phases, corresponding to the 4 execution phases of a transfer between 2 registers. Apollon is based on a model derived from the datapath of the MC68000. This model provides at the same time: an architectural model (the datapath is formed of a set of sub-datapaths aligned on two buses that run through all the elements of a sub-datapath); a timing model (an operation takes 2 cycles, a transfer only one); an electrical model (the buses are complemented and precharged); and a topological model (the floor plan is based on the slice structure commonly called bit slice). The compiler first generates the architecture of the datapath, then the mask specifications from this architecture. To generate the datapath architecture within a reasonable time, the compiler must resort to heuristics. To generate the mask layout, the compiler uses the Lubrick silicon assembler, which automatically assembles and connects the basic cells of the functional elements of the datapath. The mask specifications are generated from the specifications of predefined cells in an NMOS library.
APA, Harvard, Vancouver, ISO, and other styles
36

Srinivasamurthy, Ajay. "A Data-driven bayesian approach to automatic rhythm analysis of indian art music." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/398986.

Full text
Abstract:
Large and growing collections of a wide variety of music are now available on demand to music listeners, necessitating novel ways of automatically structuring these collections using different dimensions of music. Rhythm is one of the basic music dimensions and its automatic analysis, which aims to extract musically meaningful rhythm related information from music, is a core task in Music Information Research (MIR). Musical rhythm, similar to most musical dimensions, is culture-specific and hence its analysis requires culture-aware approaches. Indian art music is one of the major music traditions of the world and has complexities in rhythm that have not been addressed by the current state of the art in MIR, motivating us to choose it as the primary music tradition for study. Our intent is to address unexplored rhythm analysis problems in Indian art music to push the boundaries of the current MIR approaches by making them culture-aware and generalizable to other music traditions. The thesis aims to build data-driven signal processing and machine learning approaches for automatic analysis, description and discovery of rhythmic structures and patterns in audio music collections of Indian art music. After identifying challenges and opportunities, we present several relevant research tasks that open up the field of automatic rhythm analysis of Indian art music. Data-driven approaches require well curated data corpora for research and efforts towards creating such corpora and datasets are documented in detail. We then focus on the topics of meter analysis and percussion pattern discovery in Indian art music. Meter analysis aims to align several hierarchical metrical events with an audio recording. Meter analysis tasks such as meter inference, meter tracking and informed meter tracking are formulated for Indian art music. Different Bayesian models that can explicitly incorporate higher level metrical structure information are evaluated for the tasks and novel extensions are proposed. The proposed methods overcome the limitations of existing approaches and their performance indicate the effectiveness of informed meter analysis. Percussion in Indian art music uses onomatopoeic oral mnemonic syllables for the transmission of repertoire and technique, providing a language for percussion. We use these percussion syllables to define, represent and discover percussion patterns in audio recordings of percussion solos. We approach the problem of percussion pattern discovery using hidden Markov model based automatic transcription followed by an approximate string search using a data derived percussion pattern library. Preliminary experiments on Beijing opera percussion patterns, and on both tabla and mridangam solo recordings in Indian art music demonstrate the utility of percussion syllables, identifying further challenges to building practical discovery systems. The technologies resulting from the research in the thesis are a part of the complete set of tools being developed within the CompMusic project for a better understanding and organization of Indian art music, aimed at providing an enriched experience with listening and discovery of music. The data and tools should also be relevant for data-driven musicological studies and other MIR tasks that can benefit from automatic rhythm analysis.
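A hedged Python sketch of the approximate string search step over percussion syllables follows; the syllable sequence and pattern are invented placeholders and the HMM-based transcription stage is not shown.

```python
# Approximate search for a percussion-syllable pattern in a transcription (illustrative).
def edit_distance(a, b):
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def approximate_search(sequence, pattern, max_dist=1):
    """Return start indices where the pattern matches within max_dist edits."""
    m = len(pattern)
    return [start for start in range(len(sequence) - m + 1)
            if edit_distance(sequence[start:start + m], pattern) <= max_dist]

transcription = ["ta", "dhin", "dhin", "dha", "ta", "tin", "dhin", "dha"]
pattern = ["dhin", "dhin", "dha"]
print(approximate_search(transcription, pattern))    # indices of near-matches
```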
APA, Harvard, Vancouver, ISO, and other styles
37

Matulík, Martin. "Modelování a animace biologických struktur." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-377662.

Full text
Abstract:
The following work deals with the digital modelling and animation of biological structures. Software tools for computer-generated imagery (CGI) that are well proven in common practice are evaluated, as well as tools for specific activities available inside the chosen software environment. Among the vast pool of modelling approaches, tools suitable for the creation and representation of the selected structures are discussed, along with the tools essential for their subsequent animation. Possible rendering approaches and their parameters, in relation to the quality of the resulting computer-generated images, are discussed as well. The above-mentioned approaches are then utilized for the modelling, physical simulation and animation of erythrocyte flow through a blood vessel. The resulting output of the work is a series of digital images suitable for creating a video sequence containing the above-mentioned animation in a form digestible by the end user.
APA, Harvard, Vancouver, ISO, and other styles
38

Ugail, Hassan. "3D data modelling and processing using partial differential equations." 2007. http://hdl.handle.net/10454/2672.

Full text
Abstract:
In this paper we discuss techniques for 3D data modelling and processing where the data are usually provided as point clouds arising from 3D scanning devices. The particular approach we adopt in modelling 3D data involves the use of Partial Differential Equations (PDEs). In particular, we show how the continuous and discrete versions of elliptic PDEs can be used for data modelling. We show that using PDEs it is intuitively possible to model data corresponding to complex scenes. Furthermore, we show that the data can be stored in a compact format in the form of PDE boundary conditions. In order to demonstrate the methodology we utilise several examples of a practical nature.
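As a hedged, minimal illustration of the discrete elliptic-PDE idea (not the paper's formulation), the sketch below fills in the interior of a grid from boundary values alone by relaxing the Laplace equation, which is the sense in which data can be stored compactly as PDE boundary conditions.

```python
# Jacobi relaxation of the Laplace equation from boundary conditions (illustrative).
import numpy as np

n = 50
u = np.zeros((n, n))
u[0, :] = 1.0                          # boundary conditions: top edge held at 1,
u[-1, :] = 0.0                         # bottom at 0, sides at a linear ramp
u[:, 0] = np.linspace(1.0, 0.0, n)
u[:, -1] = np.linspace(1.0, 0.0, n)

for _ in range(2000):
    interior = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    if np.max(np.abs(interior - u[1:-1, 1:-1])) < 1e-6:
        u[1:-1, 1:-1] = interior
        break
    u[1:-1, 1:-1] = interior

print("reconstructed value at the centre:", u[n // 2, n // 2])
```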
APA, Harvard, Vancouver, ISO, and other styles
39

"Partial EBGM and face synthesis methods for non-frontal recognition." 2009. http://library.cuhk.edu.hk/record=b5894015.

Full text
Abstract:
Cheung, Kin Wang.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 76-82).
Abstract also in Chinese.
Chapter 1. INTRODUCTION
1.1. Background
1.1.1. Introduction to Biometrics
1.1.2. Face Recognition in General
1.1.3. A Typical Face Recognition System Architecture
1.1.4. Face Recognition in Surveillance Cameras
1.1.5. Face recognition under Pose Variation
1.2. Motivation and Objectives
1.3. Related Works
1.3.1. Overview of Pose-invariant Face Recognition
1.3.2. Standard Face Recognition Setting
1.3.3. Multi-Probe Setting
1.3.4. Multi-Gallery Setting
1.3.5. Non-frontal Face Databases
1.3.6. Evaluation Metrics
1.3.7. Summary of Non-frontal Face Recognition Settings
1.4. Proposed Methods for Non-frontal Face Recognition
1.5. Thesis Organization
Chapter 2. PARTIAL ELASTIC BUNCH GRAPH MATCHING
2.1. Introduction
2.2. EBGM for Non-frontal Face Recognition
2.2.1. Overview of Baseline EBGM Algorithm
2.2.2. Modified EBGM for Non-frontal Face Matching
2.3. Experiments
2.3.1. Experimental Setup
2.3.2. Experimental Results
2.4. Discussions
Chapter 3. FACE RECOGNITION BY FRONTAL VIEW SYNTHESIS WITH CALIBRATED STEREO CAMERAS
3.1. Introduction
3.2. Proposed Method
3.2.1. Image Rectification
3.2.2. Face Detection
3.2.3. Head Pose Estimation
3.2.4. Virtual View Generation
3.2.5. Feature Localization
3.2.6. Face Morphing
3.3. Experiments
3.3.1. Data Collection
3.3.2. Synthesized Results
3.3.3. Experiment Setup
3.3.4. Experiment Results on FERET database
3.3.5. Experiment Results on CAS-PEAL-R1 database
3.4. Discussions
3.5. Summary
Chapter 4. EXPERIMENTS, RESULTS AND OBSERVATIONS
4.1. Experiment Setup
4.2. Experiment Results
4.3. Discussions
Chapter 5. CONCLUSIONS
Chapter 6. BIBLIOGRAPHY
APA, Harvard, Vancouver, ISO, and other styles
40

Mueller, Bernd. "Package optimisation model : [a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Engineering and Industrial Management at Massey University, Palmerston North, New Zealand] EMBARGOED until 1 January 2013." 2010. http://hdl.handle.net/10179/1413.

Full text
Abstract:
A bulk export orientated company has to optimise its packaging to be able to compete in a globalised world. It is therefore important to maximise the container load to save shipping costs. This can be done in different ways: by changing the product weight, the packaging material or size, the pallet/container size or, for some products, the bulk density. With so many parameters affecting the container load, finding the best packaging solution is difficult. To solve the problem, an add-on for the existing packaging optimisation software Cape Pack, called SADIE, was developed. SADIE automates the process of data input into Cape Pack and allows browsing of different packaging combinations in a short time. The main feature of SADIE is that it allows testing complete weight and/or bulk density ranges in one query. For each weight and bulk density combination to be tested, it calculates the starting dimensions for a regular slotted case (RSC) with a 2:1:2 ratio, which, for an RSC, is the ratio that uses the minimum quantity of board. Those dimensions are then, together with many other parameters, transferred into the Cape Pack Design mode, where the new packaging solution is calculated and transferred back to SADIE. The data coming from SADIE were tested for consistency and were also used for physical pack size validations, both successfully. Packaging solutions for products with higher bulk densities could be optimised. A new packaging solution calculated for salted butter could save 231 containers per annum. Depending on the destination of the butter, cost savings from 184,000 US$ to 577,500 US$ would be possible. The results show that improvements in container load are possible, especially for products in a higher bulk density range, like butter and cheese. An increase in container load for whole milk powder (WMP) might be possible if another packaging system is used, whereas for skim milk powder (SMP), with its higher densities compared to WMP, the program can calculate an improved container load without a change to the packaging system used.
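As a hedged illustration of the 2:1:2 starting-dimension idea mentioned in the abstract, the sketch below derives RSC dimensions in the ratio L:W:H = 2:1:2 from a required internal volume and estimates the blank area with the common approximation (2L + 2W)(H + W), ignoring glue flaps and allowances; the numbers are examples, not values from the thesis.

```python
# Starting RSC dimensions for a 2:1:2 ratio and an approximate blank area (illustrative).
def rsc_from_volume(volume_cm3, ratio=(2.0, 1.0, 2.0)):
    rl, rw, rh = ratio
    unit = (volume_cm3 / (rl * rw * rh)) ** (1.0 / 3.0)
    length, width, height = rl * unit, rw * unit, rh * unit
    blank_area = (2 * length + 2 * width) * (height + width)   # no glue flap / allowances
    return length, width, height, blank_area

L, W, H, area = rsc_from_volume(12000.0)       # e.g. 12 litres of product
print(f"L={L:.1f} cm, W={W:.1f} cm, H={H:.1f} cm, blank area={area:.0f} cm^2")
```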
APA, Harvard, Vancouver, ISO, and other styles
41

Stals, Linda. "Parallel multigrid on unstructured grids using adaptive finite element methods." Phd thesis, 1995. http://hdl.handle.net/1885/138505.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Correia, Filipe Miguel Silva. "Privacy-Enhanced Query Processing in a Cloud-Based Encrypted DBaaS (Database as a Service)." Master's thesis, 2018. http://hdl.handle.net/10362/130557.

Full text
Abstract:
In this dissertation, we researched techniques to support trustable and privacy-enhanced solutions for online applications accessing "always encrypted" data in remote DBaaS (database-as-a-service) or Cloud SQL-enabled backend solutions. Although solutions for SQL querying of encrypted databases have been proposed in recent research, they fail to provide: (i) flexible multimodal query facilities including online image searching and retrieval as extended queries to conventional SQL-based searches, (ii) searchable cryptographic constructions for image indexing, searching and retrieving operations, (iii) reusable client appliances for transparent integration with multimodal applications, and (iv) performance and effectiveness validations for Cloud-based DBaaS integrated deployments. At the same time, the study of partial homomorphic encryption and multimodal searchable encryption constructions is still an ongoing research field. In this research direction, a study and practical evaluation of such cryptographic constructions is essential, in order to assess those methods and techniques towards the materialization of effective solutions for practical applications. The objective of the dissertation is to design, implement and experimentally evaluate a security middleware solution, implementing a client/client-proxy/server appliance software architecture, to support the execution of applications requiring online multimodal queries on "always encrypted" data maintained in outsourced cloud DBaaS backends. This objective includes support for SQL-based text queries enhanced with searchable encrypted image-retrieval capabilities. We implemented a prototype of the proposed solution and conducted an experimental benchmarking evaluation to observe the effectiveness, latency and performance conditions in supporting those queries. The dissertation delivers the envisaged security middleware solution as an experimental and usable solution that can be extended for future test-bench evaluations using different real cloud DBaaS deployments, as offered by well-known cloud providers.
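As a hedged, self-contained illustration of the partially homomorphic property the dissertation builds on (textbook Paillier with toy primes, in no way secure and not the middleware's actual construction), the sketch below shows that the product of two ciphertexts decrypts to the sum of the plaintexts, so a server can aggregate values it cannot read.

```python
# Toy Paillier cryptosystem demonstrating additive homomorphism (NOT secure).
import math
import secrets

p, q = 17, 19                               # toy primes; real keys are >= 2048 bits
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)         # modular inverse (Python 3.8+)

def encrypt(m):
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(12), encrypt(30)
c_sum = (c1 * c2) % n2                      # homomorphic addition on ciphertexts
print("decrypted sum:", decrypt(c_sum))     # 42, recovered without seeing 12 or 30
```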
APA, Harvard, Vancouver, ISO, and other styles
43

Zhu, Jihai. "Low-complexity block dividing coding method for image compression using wavelets : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Systems Engineering at Massey University, Palmerston North, New Zealand." 2007. http://hdl.handle.net/10179/704.

Full text
Abstract:
Image coding plays a key role in multimedia signal processing and communications. JPEG2000 is the latest image coding standard; it uses the EBCOT (Embedded Block Coding with Optimal Truncation) algorithm. EBCOT exhibits excellent compression performance, but with high complexity. The need to reduce this complexity while maintaining performance similar to EBCOT has inspired a significant amount of research activity in the image coding community. Within the development of image compression techniques based on wavelet transforms, the EZW (Embedded Zerotree Wavelet) and SPIHT (Set Partitioning in Hierarchical Trees) algorithms have played an important role. The EZW algorithm was the first breakthrough in wavelet-based image coding. The SPIHT algorithm achieves performance similar to EBCOT, but with fewer features. Another very important algorithm is SBHP (Sub-band Block Hierarchical Partitioning), which attracted significant investigation during the JPEG2000 development process. In this thesis, the history of the development of the wavelet transform is reviewed, and implementation issues for wavelet transforms are discussed. The four main coding methods for image compression using wavelet transforms mentioned above are studied in detail. More importantly, the factors that affect coding efficiency are identified. The main contribution of this research is the introduction of a new low-complexity coding algorithm for image compression based on wavelet transforms. The algorithm is based on block dividing coding (BDC) with an optimised packet assembly. Our extensive simulation results show that the proposed algorithm outperforms JPEG2000 in lossless coding, even though a narrow gap remains in lossy coding situations.
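As a hedged illustration of the transform stage shared by the coders discussed above (EZW, SPIHT, SBHP, EBCOT), the sketch below performs one level of the 2D Haar wavelet decomposition, producing the LL, LH, HL and HH sub-bands that a coder such as the proposed block dividing coding would then quantize and entropy-code; it is not the thesis algorithm.

```python
# Single-level 2D Haar wavelet decomposition (illustrative, unnormalized averaging form).
import numpy as np

def haar2d_level(img):
    """One decomposition level; image dimensions must be even."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0          # horizontal average
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0          # horizontal detail
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

image = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar2d_level(image)
print(ll.shape, float(np.abs(hh).max()))             # four 4x4 sub-bands
```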
APA, Harvard, Vancouver, ISO, and other styles
44

Wu, Kuan-Sheng, and 吳冠陞. "GPS tracking loop designs using the unscented particle filter with data wipe-off processing." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/31449949323485492295.

Full text
Abstract:
Master's thesis
National Taiwan Ocean University
Department of Communications and Navigation Engineering
102 (academic year, ROC calendar)
In this thesis, a vector tracking loop design based on the UPF with mitigation capability is proposed for Global Positioning System (GPS) position determination. The I (in-phase) and Q (quadrature) accumulator outputs from the GPS correlators are used as the observational measurements of the UPF to estimate the code delay, phase, and carrier Doppler. Since the measurements (I and Q) and the system states are highly nonlinearly related, nonlinear filters are potentially useful. A vector tracking loop design based on pre-filters is selected, and the navigation filters used include the UPF and the UKF. The vector tracking loop performs better than the scalar tracking loop: the scalar tracking loop uses independent parallel tracking loops, whereas the vector-based tracking loop technique exploits the correlation between each satellite signal and the user dynamics. For certain problems, a Gaussian assumption cannot be applied with confidence. One adequate method is to use a Bayesian filter/particle filter (PF) to improve the estimation capability. The PF possesses superior performance compared to the EKF and UKF as an alternative estimator for dealing with nonlinear, non-Gaussian systems. However, the degeneracy of particles and the accumulation of estimation errors in the PF are difficult to overcome. To handle the problem of heavy-tailed probability distributions, one strategy is to incorporate the UKF into the PF as the proposal distribution, leading to the UPF. A performance evaluation of the UPF compared to the conventional approaches is carried out. To achieve higher tracking performance in weak-signal environments, the integration time in a GPS receiver has to be increased, but the tracking accuracy of a weak GPS signal is degraded by possible data bit sign reversals every 20 ms within the integration interval. The GPS navigation message contains information that is necessary to perform navigation computations (e.g. satellite orbit and timing information). The navigation data are binary phase-shift keying (BPSK) modulated onto the GPS carrier with a bit duration of 20 ms (i.e., 50 bit/s). To solve this problem, a data wipe-off algorithm based on a pre-detection method is presented to detect and remove data bit sign reversals every 20 ms, so that the coherent integration interval can be extended beyond 20 ms. The proposed data wipe-off (DWO) method uses the carrier phase in the vector tracking loop and has the advantage of estimating the data bits continuously and simply. KEYWORDS: Kalman Filter, Unscented Particle Filter (UPF), Tracking Loop, Vector Tracking Loop, Data Wipe-off.
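A hedged sketch of the data wipe-off idea follows: the navigation-bit sign of each 20 ms block of 1 ms in-phase correlator outputs is estimated by pre-detection and removed, so the blocks can be summed coherently beyond 20 ms. The data are synthetic and this is not the DWO estimator of the thesis.

```python
# Data wipe-off by pre-detection of navigation bit signs (illustrative).
import numpy as np

rng = np.random.default_rng(0)
bits = np.array([1, -1, -1, 1, 1])                    # one navigation bit per 20 ms
i_prompt = np.concatenate([b * (0.5 + rng.normal(0, 0.4, 20)) for b in bits])

blocks = i_prompt.reshape(-1, 20)                     # twenty 1 ms sums per bit
est_signs = np.sign(blocks.sum(axis=1))               # pre-detection of the bit sign
wiped = blocks * est_signs[:, None]                   # wipe off the data bits

coherent_100ms = wiped.sum()                          # 100 ms coherent integration
naive_100ms = i_prompt.sum()                          # bit flips partially cancel
print("estimated bits:", est_signs.astype(int))
print(f"coherent sum with DWO: {coherent_100ms:.1f}, without: {naive_100ms:.1f}")
```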
APA, Harvard, Vancouver, ISO, and other styles
45

YEH, JAI-WEI, and 葉家維. "The Collective Motion of Active Particle in Micro-channel: From Channel Fabrication to Image Processing and Data Analysis." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/f59733.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Physics
107 (academic year, ROC calendar)
Active particles suspended in a micro-confinement with tunable self-propulsion strength constitute one of the simplest non-equilibrium systems. Microscopically, the motion of an active particle in a micro-confinement is governed by the interplay of background thermal fluctuations, interactions between particles, self-propulsion, and boundary effects. The balance of the first two factors alone corresponds to Brownian motion in thermal equilibrium; the last two factors drive the system away from equilibrium. Because of the complex properties of the system and its potential applications, it has attracted researchers' attention in recent years. The collective motion of active particles in open space is well studied, but the collective motion of active particles in micro-confinement still lacks experimental investigation, owing to the difficulty of fabricating a stable micro-confinement for long-term particle observation. In this work, SiO2 particles with a hemispherical Au coating are used as the active particles. The suspension of active particles is injected into a micro-channel, and the collective motion is recorded with a microscope system. With our self-developed micro-channel and particle-tracking computer program, the long-term collective behavior is analyzed. It is found that the self-propelling force drives the system away from equilibrium. Finally, we fabricate a long-term stable micro-channel for active particle observation, which is an important step toward further investigations.
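As a hedged illustration of a standard analysis step for tracked active particles (not the thesis' analysis code), the sketch below computes the mean squared displacement of a synthetic self-propelled trajectory; ballistic growth at short lag times and diffusive growth at long lag times is the typical active-particle signature.

```python
# Mean squared displacement (MSD) from a synthetic active-particle trajectory.
import numpy as np

rng = np.random.default_rng(0)
n_frames, dt, v0, D = 2000, 0.05, 1.0, 0.1            # assumed units
angles = np.cumsum(rng.normal(0, 0.2, n_frames))      # slowly varying heading
steps = v0 * dt * np.column_stack([np.cos(angles), np.sin(angles)]) \
        + np.sqrt(2 * D * dt) * rng.normal(size=(n_frames, 2))
positions = np.cumsum(steps, axis=0)

def msd(pos, max_lag=200):
    lags = np.arange(1, max_lag + 1)
    values = np.array([np.mean(np.sum((pos[lag:] - pos[:-lag]) ** 2, axis=1))
                       for lag in lags])
    return lags, values

lags, values = msd(positions)
print("MSD at lags 10 and 100 frames:", values[9], values[99])
```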
APA, Harvard, Vancouver, ISO, and other styles
46

Cheeseman, Bevan. "The Adaptive Particle Representation (APR) for Simple and Efficient Adaptive Resolution Processing, Storage and Simulations." Doctoral thesis, 2017. https://tud.qucosa.de/id/qucosa%3A30873.

Full text
Abstract:
This thesis presents the Adaptive Particle Representation (APR), a novel adaptive data representation that can be used for general data processing, storage, and simulations. The APR is motivated, and designed, as a replacement representation for pixel images to address computational and memory bottlenecks in processing pipelines for studying spatiotemporal processes in biology using Light-sheet Fluorescence Microscopy (LSFM) data. The APR is an adaptive function representation that represents a function in a spatially adaptive way using a set of Particle Cells V and function values stored at particle collocation points P∗. The Particle Cells partition space, and implicitly define a piecewise constant Implied Resolution Function R∗(y) and particle sampling locations. As an adaptive data representation, the APR can be used to provide both computational and memory benefits by aligning the number of Particle Cells and particles with the spatial scales of the function. The APR allows reconstruction of a function value at any location y using any positive weighted combination of particles within a distance of R∗(y). The Particle Cells V are selected such that the error between the reconstruction and the original function, when weighted by a function σ(y), is below a user-set relative error threshold E. We call this the Reconstruction Condition and σ(y) the Local Intensity Scale. σ(y) is motivated by local gain controls in the human visual system, and for LSFM data can be used to account for contrast variations across an image. The APR is formed by satisfying an additional condition on R∗(y), which we call the Resolution Bound. The Resolution Bound relates R∗(y) to a local maximum of the absolute value of the function derivatives within a distance R∗(y) of y. Given restrictions on σ(y), satisfaction of the Resolution Bound also guarantees satisfaction of the Reconstruction Condition. In this thesis, we present algorithms and approaches that find the optimal Implied Resolution Function for general problems in the form of the Resolution Bound using Particle Cells, with an algorithm we call the Pulling Scheme. Here, optimal means the largest R∗(y) at each location. The Pulling Scheme has worst-case linear complexity in the number of pixels when used to represent images. The approach is general in that the same algorithm can be used for general (α,m)-Reconstruction Conditions, where α denotes the function derivative and m the minimum order of the reconstruction. Further, it can also be combined with anisotropic neighborhoods to provide adaptation in both space and time. The APR can be used with both noise-free and noisy data. For noisy data, the Reconstruction Condition can no longer be guaranteed, but numerical results show an optimal range of relative error E that provides a maximum increase in PSNR over the noisy input data. Further, if it is assumed the Implied Resolution Function satisfies the Resolution Bound, then the APR converges to a biased estimate (constant factor of E) at the optimal statistical rate. The APR continues a long tradition of adaptive data representations and represents a unique trade-off between the level of adaptation of the representation and simplicity, regarding both the APR's structure and its use for processing. Here, we numerically evaluate the adaptation and processing of the APR for use with LSFM data. This is done using both synthetic and LSFM exemplar data.
It is concluded from these results that the APR has the correct properties to provide a replacement of pixel images and address bottlenecks in processing for LSFM data. Removal of the bottleneck would be achieved by adapting to spatial, temporal and intensity scale variations in the data. Further, we propose that the simple structure of the general APR could provide benefits in areas such as the numerical solution of differential equations, adaptive regression methods, and surface representation for computer graphics.
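As a hedged one-dimensional toy illustration of the Reconstruction Condition stated above (not the APR data structure or the Pulling Scheme), the sketch below places particles more densely where the gradient is large, reconstructs the function from the nearest particle, and checks the error weighted by a simple local intensity scale against a threshold E.

```python
# Toy 1D adaptive sampling and Reconstruction Condition check (illustrative).
import numpy as np

f = lambda y: np.exp(-50 * (y - 0.5) ** 2)            # sharp feature at y = 0.5
y_dense = np.linspace(0, 1, 2001)
grad = np.abs(np.gradient(f(y_dense), y_dense))

# Particle density grows with the local gradient magnitude.
density = 1.0 + 50.0 * grad / grad.max()
cdf = np.cumsum(density)
cdf /= cdf[-1]
particles = np.interp(np.linspace(0, 1, 300), cdf, y_dense)
values = f(particles)

# Piecewise-constant reconstruction from the nearest particle.
idx = np.abs(y_dense[:, None] - particles[None, :]).argmin(axis=1)
reconstruction = values[idx]

E = 0.1
sigma = np.maximum(np.abs(f(y_dense)), 0.1)           # crude local intensity scale
max_weighted_error = np.max(np.abs(f(y_dense) - reconstruction) / sigma)
print(f"max weighted error = {max_weighted_error:.3f}, below E: {max_weighted_error <= E}")
```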
APA, Harvard, Vancouver, ISO, and other styles
47

Schulte, Lukas. "New Computational Tools for Sample Purification and Early-Stage Data Processing in High-Resolution Cryo-Electron Microscopy." Doctoral thesis, 2018. http://hdl.handle.net/11858/00-1735-0000-002E-E54F-B.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Akova, Ferit. "A nonparametric Bayesian perspective for machine learning in partially-observed settings." Thesis, 2014. http://hdl.handle.net/1805/4825.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Robustness and generalizability of supervised learning algorithms depend on the quality of the labeled data set in representing the real-life problem. In many real-world domains, however, we may not have full knowledge of the underlying data-generating mechanism, which may even have an evolving nature introducing new classes continually. This constitutes a partially-observed setting, where it would be impractical to obtain a labeled data set exhaustively defined by a fixed set of classes. Traditional supervised learning algorithms, assuming an exhaustive training library, would misclassify a future sample of an unobserved class with probability one, leading to an ill-defined classification problem. Our goal is to address situations where such assumption is violated by a non-exhaustive training library, which is a very realistic yet an overlooked issue in supervised learning. In this dissertation we pursue a new direction for supervised learning by defining self-adjusting models to relax the fixed model assumption imposed on classes and their distributions. We let the model adapt itself to the prospective data by dynamically adding new classes/components as data demand, which in turn gradually make the model more representative of the entire population. In this framework, we first employ suitably chosen nonparametric priors to model class distributions for observed as well as unobserved classes and then, utilize new inference methods to classify samples from observed classes and discover/model novel classes for those from unobserved classes. This thesis presents the initiating steps of an ongoing effort to address one of the most overlooked bottlenecks in supervised learning and indicates the potential for taking new perspectives in some of the most heavily studied areas of machine learning: novelty detection, online class discovery and semi-supervised learning.
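As a hedged toy illustration of the self-adjusting idea described above, the sketch below runs a Chinese-restaurant-process style assignment in which each new observation may either join an existing class or open a new one, so the number of classes is not fixed in advance; it is a generic nonparametric sketch, not the dissertation's inference procedure.

```python
# Chinese restaurant process: classes are "discovered" as observations arrive.
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0                       # concentration: how readily new classes appear
class_counts = []

for _ in range(200):              # stream of observations (assignments only, no features)
    total = sum(class_counts)
    probs = np.array(class_counts + [alpha], dtype=float) / (total + alpha)
    choice = rng.choice(len(probs), p=probs)
    if choice == len(class_counts):
        class_counts.append(1)    # a previously unobserved class appears
    else:
        class_counts[choice] += 1

print("number of discovered classes:", len(class_counts))
```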
APA, Harvard, Vancouver, ISO, and other styles
49

Agbugba, Emmanuel Emenike. "Hybridization of particle Swarm Optimization with Bat Algorithm for optimal reactive power dispatch." Diss., 2017. http://hdl.handle.net/10500/23630.

Full text
Abstract:
This research presents a Hybrid Particle Swarm Optimization with Bat Algorithm (HPSOBA) approach to solving the Optimal Reactive Power Dispatch (ORPD) problem. The primary objective is to minimize the active power transmission losses by optimally setting the control variables within their limits while ensuring that the equality and inequality constraints are not violated. Particle Swarm Optimization (PSO) and the Bat Algorithm (BA), both nature-inspired algorithms, have become viable options for solving very difficult optimization problems such as ORPD. Although PSO requires high computational time, it converges quickly, while BA requires less computational time and can switch automatically from exploration to exploitation when the optimum is imminent. This research integrates the respective advantages of the PSO and BA algorithms into a hybrid tool denoted HPSOBA. HPSOBA combines the fast convergence of PSO with the lower computation time of BA to obtain a better optimal solution by incorporating BA's frequency into the PSO velocity equation in order to control the pace. The HPSOBA, PSO, and BA algorithms were implemented in MATLAB and tested on three benchmark test functions (Griewank, Rastrigin, and Schwefel) and on the IEEE 30- and 118-bus test systems to solve ORPD without a DG unit. A modified IEEE 30-bus test system was further used to validate the proposed hybrid algorithm for optimal placement of a DG unit to minimize active power transmission line losses. By comparison, the HPSOBA results proved superior to those of the PSO and BA methods. To check whether the performance of HPSOBA could be improved further, it was extended with three new modifications to form a modified hybrid approach denoted MHPSOBA. The MHPSOBA was validated on the IEEE 30-bus test system for the ORPD problem, and the results show that the HPSOBA algorithm outperforms the modified version (MHPSOBA).
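Based only on the description above, a minimal sketch of how a bat-algorithm frequency term might be folded into the PSO velocity update to "control the pace" could look as follows; the exact update rule, coefficient values, and variable names are assumptions rather than the author's implementation.

import numpy as np

def hpsoba_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
                    f_min=0.0, f_max=2.0, rng=None):
    """One velocity update of a hypothetical PSO/BA hybrid for a single particle.

    A bat-algorithm frequency f, drawn uniformly in [f_min, f_max], scales the
    pull toward the global best so the step size varies between iterations.
    """
    rng = rng or np.random.default_rng()
    f = f_min + (f_max - f_min) * rng.random(x.shape)   # BA frequency term
    r1, r2 = rng.random(x.shape), rng.random(x.shape)   # stochastic PSO factors
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * f * (gbest - x)

# The position update would then clip each control variable to its limits,
# e.g. x = np.clip(x + hpsoba_velocity(v, x, pbest, gbest), x_min, x_max).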
Electrical and Mining Engineering
M. Tech. (Electrical Engineering)
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Andy Bowei. "Application of quantitative analysis in treatment of osteoporosis and osteoarthritis." Thesis, 2013. http://hdl.handle.net/1805/3662.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
As our population ages, treating bone and joint ailments is becoming increasingly important. Both osteoporosis, a bone disease characterized by decreased bone mineral density, and osteoarthritis, a joint disease characterized by the degeneration of cartilage at the ends of bones, are major causes of reduced mobility and increased pain. To combat these diseases, many treatments are offered, including drugs and exercise, and much biomedical research is being conducted. However, how can we get the most out of the research we perform and the treatments we already have? One approach is computational analysis and mathematical modeling. In this thesis, quantitative methods of analysis are applied in different ways to two systems: osteoporosis and osteoarthritis. A mouse model simulating osteoporosis is treated with salubrinal and knee loading. The bone and cell data are used to formulate a system of differential equations modeling the response of bone to each treatment. Using Particle Swarm Optimization, optimal treatment regimens are found, including a consideration of budgetary constraints. Additionally, an in vitro model of osteoarthritis in chondrocytes receives RNA silencing of Lrp5. Microarray analysis of gene expression is used to further elucidate how Lrp5 regulates ADAMTS5, an aggrecanase associated with cartilage degradation, including the development of a mathematical model. The mathematical model of osteoporosis reveals a quick response to salubrinal and a delayed but substantial response to knee loading. Consideration of cost-effectiveness showed that as budgetary constraints increased, treatment did not start until later. The quantitative analysis of ADAMTS5 regulation suggested the involvement of IL1B and p38 MAPK. This research demonstrates the application of quantitative methods to increase the usefulness of biomedical and biomolecular research into treatments and signaling pathways. Further work using these techniques can help uncover a bigger picture of osteoarthritis's mode of action and ideal treatment regimens for osteoporosis.
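As a toy illustration of the kind of differential-equation treatment model described in this abstract, the sketch below uses a single state variable, invented rate constants, and simple on/off treatment schedules; none of these are taken from the thesis.

import numpy as np
from scipy.integrate import odeint

def bone_response(B, t, k_loss, a_sal, a_load, sal_start, load_start):
    """Toy model: bone mineral density B decays at rate k_loss and is restored
    by two treatments, salubrinal (quick-acting) and knee loading (delayed)."""
    u_sal = 1.0 if t >= sal_start else 0.0
    u_load = 1.0 if t >= load_start else 0.0
    return -k_loss * B + a_sal * u_sal + a_load * u_load

t = np.linspace(0.0, 30.0, 301)                                    # days
B = odeint(bone_response, 1.0, t, args=(0.05, 0.02, 0.04, 2.0, 10.0))

# A swarm optimiser could then search over treatment start times and doses,
# with a penalty or constraint for the budget, as the abstract describes.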
APA, Harvard, Vancouver, ISO, and other styles
