Dissertations on the topic "Action algorithms"

To see other types of publications on this topic, follow the link: Action algorithms.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Action algorithms".

Each work in the list of references is accompanied by an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when this information is available in the metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Cox, Jürgen 1970. "Solution of sign and complex action problems with cluster algorithms." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8646.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 2001.
Includes bibliographical references (p. [105]-109) and index.
Two kinds of models are considered which have a Boltzmann weight that is either not real, or real but not positive, so that standard Monte Carlo methods are not applicable. These sign or complex action problems are solved with the help of cluster algorithms. In each case improved estimators for the Boltzmann weight are constructed which are real and positive. The models considered belong to two classes: fermionic and non-fermionic models. An example of a non-fermionic model is the Potts model approximation to QCD at non-zero baryon density. The three-dimensional three-state Potts model captures the qualitative features of this theory. It has a complex action, and so the Boltzmann weight cannot be interpreted as a probability. The complex action problem is solved by using a cluster algorithm. The improved estimator for the complex phase of the Boltzmann factor is real and positive and is used for importance sampling. The first-order deconfinement transition line is investigated and the universal behavior at its critical endpoint is studied.
An example of a fermionic model with a sign problem is staggered fermions with 2 flavors in 3+1 dimensions. Here the sign is connected to the permutation sign of fermion world lines and is of nonlocal nature. Cluster flips change the topology of the fermion world lines and have a well-defined effect on the permutation sign, independent of the other clusters. The sign problem is solved by suppressing those clusters whose contribution to the partition function and observables of interest would be zero. We confirm that the universal critical behavior of the finite-temperature chiral phase transition is that of the three-dimensional Ising model. We also study staggered fermions with one flavor in 2+1 dimensions and confirm that the chiral phase transition then belongs to the universality class of the two-dimensional Ising model.
by Jürgen Cox.
Ph.D.
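The cluster-improved estimators above are specific to the models studied in the thesis. As a generic illustration of the reweighting identity that a sign problem obstructs — ⟨O⟩ = ⟨O·s⟩ over |W| divided by ⟨s⟩ over |W|, with s = sign(W) — here is a minimal sketch on a toy set of signed weights (both the weights and the observable are invented for illustration):

```python
import math

# Toy "sign problem": the weights w(x) may be negative, so they cannot be
# used directly as Monte Carlo probabilities.  Reweighting writes the
# average as <O> = <O*s>_{|w|} / <s>_{|w|}, with s = sign(w).
xs = range(10)
w = [math.sin(0.7 * x + 0.3) for x in xs]   # some of these weights are negative
O = [x * x for x in xs]                     # an arbitrary observable

# Direct (exact) average with the signed weights
direct = sum(wi * oi for wi, oi in zip(w, O)) / sum(w)

# Reweighted form: sample measure |w|, fold the sign into the estimator
Z = sum(abs(wi) for wi in w)
p = [abs(wi) / Z for wi in w]               # legal (positive) probabilities
s = [1.0 if wi >= 0 else -1.0 for wi in w]
num = sum(pi * si * oi for pi, si, oi in zip(p, s, O))
den = sum(pi * si for pi, si in zip(p, s))
reweighted = num / den
```

The two averages agree exactly; in an actual simulation the right-hand side is estimated by sampling, and the estimate degrades as ⟨s⟩ approaches zero, which is what the improved cluster estimators are designed to avoid.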
2

Oppon, Ekow CruickShank. "Synergistic use of promoter prediction algorithms: a choice of small training dataset?" Thesis, University of the Western Cape, 2000. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_8222_1185436339.

Abstract:

Promoter detection, especially in prokaryotes, has always been an uphill task and may remain so, because of the many varieties of sigma factors employed by various organisms in transcription. The situation is made more complex by the fact that any seemingly unimportant sequence segment may be turned into a promoter sequence by an activator or repressor (if the actual promoter sequence is made unavailable). Nevertheless, a computational approach to promoter detection has to be pursued for a number of reasons. The obvious one that comes to mind is the long and tedious process involved in elucidating promoters in the 'wet' laboratories, not to mention the financial aspect of such endeavours. Promoter detection/prediction in an organism with few characterized promoters (M. tuberculosis), as envisaged at the beginning of this work, was never going to be easy. Even for the few known Mycobacterial promoters, most of the respective sigma factors associated with their transcription were not known. Had that information (promoter-sigma) been available, the research would have focused on categorizing the promoters according to sigma factors and training the methods on the respective categories, assuming there would have been enough training data for the respective categories. Most promoter detection/prediction studies have been carried out on E. coli because of the availability of a number of experimentally characterized promoters (approximately 310). Even then, no researcher to date has extended the research to the entire E. coli genome.

3

Wanek, J. F. "Direct action of radiation on mummified cells : modelling of computed tomography by Monte Carlo algorithms." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1432807/.

Abstract:
X-ray imaging is a non-destructive and preferred method in paleopathology to reconstruct the history of ancient diseases. Sophisticated imaging technologies such as computed tomography (CT) have become common for the investigation of skeletal disorders in human remains. The effects of CT exposure on ancient cells have not been quantitatively examined in the past and may be important for subsequent genetic analysis. To remedy this shortcoming, different Monte Carlo models were developed to simulate X-ray irradiation of ancient cells. Effects of mummification and physical processes were considered by using two sizes of cells and three different phantom tissues that enclosed the investigated cell cluster. This cluster was positioned at the isocentre of a CT scanner model, where the cell hit probabilities P(0, 1, …, n) were calculated according to the Poisson distribution. To study the impact of the dominant physical process, CT scans for X-ray spectra of 80 and 120 kVp were simulated. The calculated risk of DNA damage according to the multi-hit, multi-target model revealed that the probability of two DNA hits was pNT = 0.001 with cell size 6×6×10 μm³ (NT6610) and pNT = 0.00033 with cell size 4×4×6 μm³ (NT446) for normal tissue (NT) at 80 kVp. A further decrease in DNA damage was observed with pNT = 0.0006 (NT6610) and pNT = 0.00009 (NT446) at 120 kVp. All values of p are in good agreement with those given by the X-ray risk of cancer. It is concluded that the probability of ancient DNA (aDNA) damage following CT imaging depends on the number and volume of fragments m, with paDNA < pNT^m (m ≥ 1). Increasing the number of aDNA fragments m is associated with rapidly decreasing aDNA damage through X-ray imaging.
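The per-cell hit probabilities above come from the Poisson distribution. As a minimal sketch of how such probabilities are computed (the mean hit number below is a made-up illustration, not a value from the thesis):

```python
import math

def poisson_pmf(k, lam):
    """P(exactly k hits) for a mean hit number lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def p_at_least(n, lam):
    """P(n or more hits) = 1 minus the first n Poisson terms."""
    return 1.0 - sum(poisson_pmf(k, lam) for k in range(n))

# Illustrative mean number of hits per cell per scan (hypothetical value)
lam = 0.05
p_two_hits = p_at_least(2, lam)   # chance of a "two-hit" event in one cell
```

For a small mean, the two-hit probability is roughly lam²/2, which is why halving the cell volume (and hence the mean hit number) reduces the damage probability much faster than linearly.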
4

Umuhoza, Denise. "Metric of trust for mobile ad hoc networks using source routing algorithms." Thesis, University of the Western Cape, 2006. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_8946_1183465240.

Abstract:

This thesis proposes and presents the technical details of new probabilistic metrics of trust in the links of wireless ad hoc networks for unobservable communications. In covert communication networks, only the end nodes are aware of the communication characteristics of the overall path. The most widely used ad hoc network protocols are overviewed, as are the routing protocols of ad hoc networks with trust considerations, along with Destination Sequence Routing, a protocol that can be used in distributed ad hoc network settings for path discovery. It establishes a path through which all packets sent by a source must pass to the destination. The end nodes are responsible for examining the statistics of the received packets and deriving inferences on path feature variations, which are used for computing new trust metrics. When a path is judged not trustworthy based on the metrics, Destination Sequence Routing is informed to undertake a new trusted path discovery between the end points. The thesis adds a new feature, based on the quality-of-service parameters of the path, to create trust in the links in recognition of attacks.

5

Brualla, Barberà Llorenç. "Path integral Monte Carlo. Algorithms and applications to quantum fluids." Doctoral thesis, Universitat Politècnica de Catalunya, 2002. http://hdl.handle.net/10803/6577.

Abstract:
Path integral Monte Carlo (PIMC) is a method suitable for quantum liquid simulations at finite temperature. This thesis presents a study of PIMC, covering the theory and algorithms related to it, followed by two applications of PIMC to current research problems of quantum fluids in the Boltzmann regime.
The first part encompasses a study of the different ingredients of a PIMC code: action, sampling and physical property estimators. Particular attention has been paid to Li-Broughton's higher order approximation to the action. Regarding sampling, several collective movement methods have been derived, including the bisection algorithm, that has been thoroughly tested. We also include a study of estimators for different physical properties, such as, the energy (through the thermodynamic and virial estimators), the pair distribution function, the structure factor, and the momentum distribution.
In relation to the momentum distribution, we have developed a novel algorithm for its estimation, the trail method. It surmounts some of the problems exposed by previous approaches, such as the open chain method or McMillan's algorithm.
The Richardson extrapolation used within PIMC simulations is another contribution of this thesis; up until now, this extrapolation had not been used in this context. We present studies of the dependence of the energy on the number of "beads", along with the improvement provided by the Richardson extrapolation.
Inasmuch as our goal is to perform research of quantum liquids at finite temperature, we have produced a library of codes, written from scratch, that implement most of the features theoretically developed. The most elaborated parts of these codes are included in some of the appendixes.
The second part shows two different applications of the algorithms coded. We present results of a PIMC calculation of the momentum distribution of Ne and normal 4He at low temperatures. In the range of temperatures analysed, exchanges can be disregarded and both systems are considered Boltzmann quantum liquids. Their quantum character is well reflected in their momentum distributions, which show clear departures from the classical limit. The PIMC momentum distributions are sampled using the trail method. Kinetic energies of both systems, as a function of temperature and at a fixed density, are also reported.
Finally, the solid-liquid neon phase transition along the 35 K isotherm has been characterized. While the thermodynamic properties of the solid phase are well known, the behaviour of some properties, such as the energy or the density, during the transition presents some uncertainties. For example, experimental data for the phase diagram, which determines the solid and liquid boundaries, show sizeable differences. The temperature chosen is high enough that Bose or Fermi statistics corrections are small, although the system is strongly quantum mechanical. The results obtained show a discontinuity in the kinetic energy during the transition.
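Richardson extrapolation, mentioned above, combines estimates at two discretisations to cancel the leading error term. A minimal sketch with a toy error model (the exact value and the bias coefficients are invented, and the bias is taken to be O(h²), as for a second-order scheme — this is not PIMC data):

```python
def richardson(f, h, order=2):
    """Richardson extrapolation: combine f(h) and f(h/2) to cancel the
    leading O(h**order) error term of the estimate f."""
    k = 2 ** order
    return (k * f(h / 2) - f(h)) / (k - 1)

# Toy estimator with a known h**2 bias: E(h) = E_exact + c*h**2 + d*h**4
E_exact, c, d = -1.5, 0.8, 0.3
E = lambda h: E_exact + c * h ** 2 + d * h ** 4

plain_error = abs(E(0.1) - E_exact)                # O(h**2) bias remains
extrap_error = abs(richardson(E, 0.1) - E_exact)   # only O(h**4) remains
```

In a PIMC setting, h would play the role of the imaginary-time step (inversely related to the number of beads), and the extrapolation removes the leading bead-number bias at the cost of a second run with a finer discretisation.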
6

Празднікова, Маргарита Олександрівна. "Додаток на базі Android «Кіногід» для керування контентом" [An Android-based application "Kinogid" for content management]. Bachelor's thesis, КПІ ім. Ігоря Сікорського, 2020. https://ela.kpi.ua/handle/123456789/34968.

Abstract:
Structure and scope of the work. The explanatory note of the diploma project consists of six sections and contains 8 figures, 5 tables, 5 appendices, and 45 sources. The diploma project is devoted to the development of the Android-based application "Kinogid". The section on the analysis of existing solutions for systematizing information about film screenings reviews the existing services, their functions, and the shortcomings that need to be improved. The section on the analysis of programming languages and technologies for Android application development considers the theoretical aspects of the implementation methods, all the possible complications and shortcomings of the different systems involved in Android application development, and theoretical background on the topic of the diploma. The methods of application development, the algorithms of actions for the task at hand, and the solutions to problems that could arise are shown. The section on the features of implementing the software application for the mobile platform gives theoretical information about the algorithms that were used. The section on the analysis of the application explains the methods of testing the application and an analysis of further improvements.
7

Stephanos, Dembe. "Machine Learning Approaches to Dribble Hand-off Action Classification with SportVU NBA Player Coordinate Data." Digital Commons @ East Tennessee State University, 2021. https://dc.etsu.edu/etd/3908.

Abstract:
Recently, strategies of National Basketball Association teams have evolved with the skillsets of players and the emergence of advanced analytics. One of the most effective actions in dynamic offensive strategies in basketball is the dribble hand-off (DHO). This thesis proposes an architecture for a classification pipeline for detecting DHOs in an accurate and automated manner. This pipeline consists of a combination of player tracking data and event labels, a rule set to identify candidate actions, manually reviewing game recordings to label the candidates, and embedding player trajectories into hexbin cell paths before passing the completed training set to the classification models. This resulting training set is examined using the information gain from extracted and engineered features and the effectiveness of various machine learning algorithms. Finally, we provide a comprehensive accuracy evaluation of the classification models to compare various machine learning algorithms and highlight their subtle differences in this problem domain.
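The hexbin-cell-path embedding is specific to the thesis's pipeline. As a rough sketch of the general idea — discretising a continuous court trajectory into a sequence of visited cells — here is a square-grid stand-in (the cell size and coordinates are arbitrary, and real hexbin cells are hexagonal rather than square):

```python
def cell_path(traj, cell=5.0):
    """Map an (x, y) trajectory to the sequence of grid cells it visits,
    dropping consecutive repeats (a crude stand-in for hexbin embedding)."""
    path = []
    for x, y in traj:
        c = (int(x // cell), int(y // cell))
        if not path or path[-1] != c:
            path.append(c)
    return path
```

Discretising this way turns raw coordinate streams into short symbolic sequences, which is what makes them usable as features for the downstream classifiers.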
8

Gill, Harnavpreet Singh. "Computationally Robust Algorithms for Hypoid Gear Cutting and Contact Line Determination using Ease-Off Methodology." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587499768039312.

9

Chaaraoui, Alexandros Andre. "Vision-based Recognition of Human Behaviour for Intelligent Environments." Doctoral thesis, Universidad de Alicante, 2014. http://hdl.handle.net/10045/36395.

Abstract:
A critical requirement for achieving ubiquity of artificial intelligence is to provide intelligent environments with the ability to recognize and understand human behaviour. If this is achieved, proactive interaction can occur and, more interestingly, a great variety of services can be developed. In this thesis we aim to support the development of ambient-assisted living services with advances in human behaviour analysis. Specifically, visual data analysis is considered in order to detect and understand human activity at home. As part of an intelligent monitoring system, single- and multi-view recognition of human actions is performed, along with several optimizations and extensions. The present work may pave the way for more advanced human behaviour analysis techniques, such as the recognition of activities of daily living, personal routines and abnormal behaviour detection.
10

Seita, Marcelo Ruiz. "Simulação multi agente em mercados financeiros artificiais utilizando algoritmos genéticos" [Multi-agent simulation in artificial financial markets using genetic algorithms]. Repositório Institucional do FGV, 2014. http://hdl.handle.net/10438/11936.

Abstract:
Aiming to establish a methodology capable of segregating market moments and identifying the characteristics of the investors acting in a given financial market, this study employs simulations created by an agent-based artificial financial market, using a genetic algorithm to fit these simulations to the real observed historical data. For this purpose, an application was developed for Bovespa index futures contracts. The methodology could easily be extended to other financial markets by simply changing the model's parameters. Building on the foundations established by Toriumi et al. (2011), significant contributions were achieved, enhancing knowledge of the chosen target market, of artificial financial market modelling techniques, and of the application of genetic algorithms to financial markets, resulting in experiments and analyses that suggest the efficacy of the methodology proposed herein.
11

Arseneau, Shawn. "Robust image segmentation towards an action recognition algorithm." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=30234.

Abstract:
To facilitate proper recognition of a human's action from a video sequence, several key features must first be determined. Initially, the person performing the action must be isolated from the background scene. This information is then used to decipher pertinent action attributes that may include the center of mass, contours, and regions of motion. It is these characteristics that will become the feature elements in the recognition of a person's actions.
This thesis will investigate the various image processing tools available to obtain the aforementioned action attributes. The applicability of filters, background removal techniques, skin-tone matching, and contouring schemes will all be investigated. A thorough comparison with both existing and novel approaches to action recognition is then discussed. Overall, the temporal based algorithm is best suited for an action recognition application as the spatially based approaches rely too heavily on a priori knowledge of the background scene.
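The temporally based approach favoured above rests on detecting change between consecutive frames. A minimal sketch of the simplest such cue, frame differencing on toy grey-level frames (the threshold value and frames are arbitrary, not from the thesis):

```python
def motion_mask(prev, curr, thresh=25):
    """Temporal differencing: flag the pixels whose grey-level change
    between two consecutive frames exceeds a threshold."""
    return [[abs(c - p) > thresh for p, c in zip(row_p, row_c)]
            for row_p, row_c in zip(prev, curr)]

# Two tiny 2x2 "frames": only the top-left pixel changes substantially
prev = [[0, 0], [0, 0]]
curr = [[30, 0], [0, 10]]
mask = motion_mask(prev, curr)
```

Unlike the spatial approaches criticised in the abstract, this cue needs no prior model of the background scene, only the previous frame.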
12

Arseneau, Shawn. "Robust image segmentation towards an action recognition algorithm." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0033/MQ64210.pdf.

13

Duan, Jie. "Active Control of Vehicle Powertrain Noise Applying Frequency Domain Filtered-x LMS Algorithm." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1243614246.

14

Singh, Krishna Kant. "Algorithmes pour la dynamique moléculaire restreinte de manière adaptative" [Algorithms for adaptively restrained molecular dynamics]. Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM067/document.

Abstract:
Molecular dynamics (MD) is often used to simulate large and complex systems. However, simulating such complex systems over experimental time scales is still computationally challenging. In fact, the most computationally expensive step in MD is the computation of forces between particles. Adaptively Restrained Molecular Dynamics (ARMD) is a recently introduced particle simulation method that switches positional degrees of freedom on and off during the simulation. Since force computations mainly depend upon inter-atomic distances, the force computation between particles whose positional degrees of freedom are off (restrained particles) can be avoided; forces involving active particles (particles with positional degrees of freedom on) are computed. In order to take advantage of the adaptivity of ARMD, we designed novel algorithms to compute and update forces efficiently. We designed algorithms not only to construct neighbor lists, but also to update them incrementally. Additionally, we designed a single-pass incremental force-update algorithm that is almost two times faster than the previously designed two-pass incremental algorithm. These proposed algorithms are implemented and validated in the LAMMPS MD simulator; however, they can be applied to other MD simulators. We assessed our algorithms on diverse benchmarks in both the microcanonical (NVE) and canonical (NVT) ensembles. In the NVE ensemble, ARMD allows users to trade precision for speed, while in the NVT ensemble it makes it possible to compute statistical averages faster. Last, we introduce parallel algorithms for single-pass incremental force computations to take advantage of adaptive restraints using the Message Passing Interface (MPI) standard.
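The incremental neighbor-list algorithms are the thesis's contribution; as a baseline for comparison, here is a brute-force (non-incremental) Verlet-style neighbor-list build, the kind of construction the incremental variants improve upon (positions and cutoff are illustrative):

```python
def neighbor_list(pos, cutoff):
    """Brute-force neighbor list: for each particle, the indices of all
    others within the cutoff distance (pos holds 3D coordinate tuples)."""
    n = len(pos)
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(pos[i], pos[j]))
            if d2 <= cutoff ** 2:       # compare squared distances: no sqrt
                nbrs[i].append(j)
                nbrs[j].append(i)
    return nbrs
```

This build is O(n²); production codes use cell lists to reach O(n), and the point of the incremental schemes described above is to avoid redoing even that work for particle pairs whose positional degrees of freedom are restrained.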
15

Sharp, Graham R. "Recognition algorithms for actions of permutation groups on pairs." Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.244602.

16

Sosnowski, Scott T. "Approximate Action Selection For Large, Coordinating, Multiagent Systems." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1459468867.

17

Bonnefoy, Antoine. "Elimination dynamique : accélération des algorithmes d'optimisation convexe pour les régressions parcimonieuses" [Dynamic screening: accelerating convex optimization algorithms for sparse regressions]. Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4011/document.

Abstract:
Applications in signal processing and machine learning make frequent use of sparse regressions. The resulting convex problems, such as the LASSO, can be solved efficiently with first-order algorithms, which are general and have good convergence properties. However, those algorithms suffer from the dimension of the problem, which dictates the complexity of their iterations. In this thesis we study approaches, based on screening tests, aimed at reducing the computational cost at the iteration level. Such approaches build upon the idea that it is worth dedicating some small computational effort to locating inactive atoms and removing them from the dictionary in a preprocessing stage, so that the regression algorithm, working with a smaller dictionary, then converges faster to the solution of the initial problem. We believe that there is an even more efficient way to screen the dictionary and obtain a greater acceleration: inside each iteration of the regression algorithm, one may take advantage of the algorithm's computations to obtain a new screening test for free, with increasing screening effects along the iterations. The dictionary is henceforth screened dynamically instead of being screened statically, once and for all, before the first iteration. Our first contribution is the formalisation of this principle and its application to first-order algorithms for the resolution of the LASSO and Group-LASSO. In a second contribution, this general principle is combined with active-set methods, whose goal is also to accelerate the resolution of sparse regressions. Applying the two complementary methods to first-order algorithms leads to strong acceleration performance.
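The screening tests themselves are problem-specific; as a baseline, here is a minimal ISTA solver for the LASSO, the kind of first-order iteration that dynamic screening accelerates (a real implementation would shrink the dictionary between iterations; this sketch keeps it fixed, and the example data are invented):

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Plain ISTA for the LASSO: min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    A dynamic screening test would drop columns of A along the way;
    here the dictionary stays fixed for simplicity."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# With an orthonormal dictionary the solution is soft(b, lam) in closed form
x_hat = ista(np.eye(2), np.array([3.0, 0.5]), lam=1.0)
```

Each iteration costs two matrix-vector products with A; removing provably inactive columns shrinks those products, which is exactly where the screening savings come from.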
18

Novoa, Del Toro Elva Maria. "Detecting active modules in multiplex biological networks." Thesis, Aix-Marseille, 2020. http://theses.univ-amu.fr.lama.univ-amu.fr/200514_NOVOADELTORO_173hc776k263go601dzf_TH(1).pdf.

Abstract:
L'expression des gènes est régulée dans le temps, les types de cellules et les conditions. Nous avons de nos jours accès à des technologies nous permettant de mesurer l'expression des gènes. Nous pouvons donc calculer les différences d'expression génique entre patients et témoins, et identifier ainsi les gènes dont l'expression est dérégulées. Nous pouvons aussi essayer de trouver un enrichissement des fonctions cellulaires à partir de la liste des gènes dérégulés. Dans ce contexte, j'ai analysé les données d'expression transcriptomiques de patients ayant le syndrome progéria de Hutchinson-Gilford (HGPS) et aux témoins sains. Ces analyses ont conduit à l'identification ARNs candidats pour validation expérimentale.À l'intérieur des cellules les molécules n'agissent pas isolément, mais interagissent pour accomplir ses fonctions. Actuellement, nous disposons de techniques pour déchiffrer ces interactions à grande échelle. Les interactions peut être représenté comme des réseaux, où les noeuds représentent des molécules, et les arêtes représentent des relations physiques et/ou fonctionnelles. L'hypothèse principale de ma thèse est que des sous-réseaux denses et dérégulé correspondent aux processus cellulaires affectés chez les patients. J'ai intégré des données d'expression génique et des réseaux pour identifier de tels modules. J'ai développé MOGAMUN, un algorithme génétique multi-objectif qui recherche des modules actifs. MOGAMUN est le premier algorithme d'identification de modules actifs capable de considérer les réseaux multiplexes, i.e. réseaux composés de différentes couches d'interactions biologiques
Gene expression is regulated in time, across cell types and conditions. We now have access to technologies allowing us to measure gene expression. We can therefore calculate the differences in gene expression between patients and controls, thereby identifying deregulated genes. We can also try to find significant enrichment of one or more cellular functions from the list of deregulated genes. I analyzed transcriptomics data corresponding to Hutchinson-Gilford Progeria Syndrome (HGPS) patients and healthy controls. Our analyses led to the identification of candidate RNAs for experimental validation. Inside cells, molecules do not act in isolation but interact to accomplish their functions. Nowadays, we have experimental techniques to decipher these interactions on a large scale. Biological interactions can be represented as networks, where nodes represent molecules and edges represent physical and/or functional relationships. The main hypothesis I followed during my thesis is that dense subnetworks associated with an overall expression deregulation correspond to affected cellular processes in patients. I integrated gene expression data and networks to identify such modules. I developed MOGAMUN, a multi-objective genetic algorithm that searches for active modules. MOGAMUN is the first active-module identification algorithm able to consider multiplex networks, i.e. networks composed of different layers of biological interactions.
Styles: APA, Harvard, Vancouver, ISO, etc.
19

Bellino, Kathleen Ann. "Computational Algorithms for Face Alignment and Recognition." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/32847.

Full text of the source
Abstract:
Real-time face recognition has recently become available to government and industry due to developments in face recognition algorithms, human head detection algorithms, and faster, lower-cost computers. Despite these advances, however, there are still critical issues that affect the performance of real-time face recognition software. This paper addresses the problem of off-centered and out-of-pose faces in pictures, particularly with regard to the eigenface method for face recognition. We first demonstrate how the representation of faces by the eigenface method, and ultimately the performance of the software, depend on the location of the eyes in the pictures. The eigenface method for face recognition is then described: specifically, the creation of a face basis using the singular value decomposition, the reduction of dimension, and the unique representation of faces in the basis. Two different approaches for aligning the eyes in images are presented. The first considers the rotation of images using the orthogonal Procrustes problem. The second locates features in images using energy-minimizing active contours. We then conclude with a simple and fast algorithm for locating faces in images. Future research is also discussed.
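The eigenface construction named here (mean subtraction, SVD of the face matrix, dimension reduction, projection onto the basis) can be sketched in a few lines. Function names and array shapes are our own illustrative choices, not the thesis's code.

```python
import numpy as np

def eigenface_basis(faces, k):
    """faces: (n_pixels, n_images) matrix, one vectorized image per column.
    Returns the mean face and the top-k eigenfaces (left singular vectors
    of the mean-centered face matrix)."""
    mean = faces.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, U[:, :k]

def represent(face, mean, U):
    """Coordinates of a vectorized face in the reduced eigenface basis."""
    return U.T @ (face - mean.ravel())
```

A training face is recovered exactly when k covers the rank of the centered matrix (at most n_images - 1), illustrating why a small basis can represent the training set well.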
Master of Science
Styles: APA, Harvard, Vancouver, ISO, etc.
20

Nkansah-Gyekye, Yaw. "An intelligent vertical handoff decision algorithm in next generation wireless networks." Thesis, University of the Western Cape, 2010. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_2726_1307443785.

Full text of the source
Abstract:

The objective of the thesis research is to design vertical handoff decision algorithms that allow mobile field workers and other mobile users equipped with contemporary multimode mobile devices to communicate seamlessly in the NGWN. To tackle this objective, we used fuzzy logic and fuzzy inference systems to design a handoff initiation algorithm that can handle imprecision and uncertainty in data and process multiple vertical handoff initiation parameters (criteria); used the fuzzy multiple-attribute decision-making method and context awareness to design an access network selection function that can handle a tradeoff among many handoff metrics, including quality-of-service requirements (such as network conditions and system performance), mobile terminal conditions, power requirements, application types, user preferences, and a price model; used genetic algorithms and simulated annealing to optimise the access network selection function so as to dynamically select the optimal available access network for handoff; and focused in particular on an interesting use case: vertical handoff decision between mobile WiMAX and UMTS access networks. The implementation of our handoff decision algorithm provides a network selection mechanism to help mobile users select the best among all available wireless access networks, that is, the one that provides always-best-connected services.
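As a rough illustration of the access network selection step, the sketch below scores candidate networks with a simple weighted-sum multi-attribute rule. It is a crude stand-in: the thesis's fuzzy inference, context awareness and GA/simulated-annealing optimisation are not reproduced, and the metric names and weights are invented for the example.

```python
import numpy as np

def select_network(networks, weights):
    """networks: dict name -> dict of metric -> value. All metrics are
    benefit-oriented (higher is better); cost metrics (price, battery
    drain) should be negated by the caller. Returns the name with the
    highest weighted, min-max normalized score."""
    names = list(networks)
    metrics = list(weights)
    M = np.array([[networks[n][m] for m in metrics] for n in names], float)
    span = M.max(axis=0) - M.min(axis=0)
    span[span == 0] = 1.0                       # constant column -> no influence
    M = (M - M.min(axis=0)) / span              # normalize each metric to [0, 1]
    w = np.array([weights[m] for m in metrics], float)
    scores = M @ (w / w.sum())
    return names[int(np.argmax(scores))]
```

With one network dominating on every metric, the rule selects it regardless of the weight split.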

Styles: APA, Harvard, Vancouver, ISO, etc.
21

Bahramgiri, Mohsen. "Algorithmic approaches to graph states under the action of local Clifford groups." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/38936.

Full text of the source
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 2007.
Includes bibliographical references (p. 87-88).
Graph states are quantum states (quantum codes) in q^n-dimensional space ... (q being a power of some prime number) which can be described by graphs with edges labeled from the field of order q, Fq. Graph states are determined as a common eigenvector of independent elements of the n-fold Pauli group, on which the local Clifford group has a natural action. This action induces a natural action of the local Clifford group on graph states and hence on graphs. Locally equivalent graphs can be described using this action. For q a prime number, two graphs are locally equivalent when they lie on the same orbit of this action, in other words, when there is an element of the local Clifford group mapping one graph to the other. When q is a power of a prime number, the definition of this action is the natural generalization of the prime case. We translate the action of local Clifford groups on graphs into a set of linear and quadratic equations in the field Fq. In the case that q is odd, given two arbitrary graphs, we present an efficient algorithm (polynomial in n) to verify whether these graphs are locally equivalent or not. Moreover, we present a computational method to calculate the number of inequivalent graph states. We give some estimates on the size of the orbits of this action on graphs, and prove that when q is equal to 2 or is odd, the number of inequivalent quantum codes (i.e., the number of equivalence classes) is equal to ..., which is essentially as large as the total number of graphs.
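For the binary case q = 2 (not the odd-characteristic algorithms that are the thesis's main subject), local Clifford equivalence of graph states is known to correspond to sequences of local complementations of the underlying graphs (Van den Nest et al.). A minimal sketch of that graph operation:

```python
import numpy as np

def local_complement(adj, v):
    """Local complementation of a simple graph at vertex v: toggle every
    edge inside the neighbourhood of v. Over F2, repeated applications
    generate the orbit of graphs whose graph states are local-Clifford
    equivalent (the q = 2 case)."""
    A = adj.copy() % 2
    nbrs = np.flatnonzero(A[v])
    for i in nbrs:
        for j in nbrs:
            if i < j:
                A[i, j] ^= 1      # toggle edge (i, j) ...
                A[j, i] ^= 1      # ... keeping the matrix symmetric
    return A
```

Local complementation is an involution: applying it twice at the same vertex returns the original graph, so the operation partitions graphs into orbits.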
by Mohsen Bahramgiri.
Ph.D.
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Ahmed, Fareed. "Un nouvel a priori de formes pour les contours actifs." Thesis, Tours, 2014. http://www.theses.fr/2014TOUR4008/document.

Full text of the source
Abstract:
Active contours are widely used for image segmentation, and many implementations exist; the greedy algorithm is regarded as one of the fastest and most stable. No matter which implementation is employed, segmentation results suffer greatly in the presence of occlusion, contextual noise, concavities or abnormal deformation of shape. If some prior knowledge about the shape of the object is available, its addition to an existing model can greatly improve segmentation results. In this thesis, the inclusion of such shape constraints in explicit active contours is implemented. These shape priors are introduced through robust Fourier-based descriptors, which make them invariant to translation, scaling and rotation and enable the deformable model to converge towards the prior shape even in the presence of occlusion and contextual noise. Unlike most existing methods, which compare the reference shape and the evolving contour in the spatial domain by applying inverse transforms, our proposed method performs such comparisons entirely in the descriptor space. This not only decreases the computational time but also makes our method independent of the number of control points chosen for the description of the active contour. This formulation, however, may introduce certain anomalies in the phase of the descriptors, which affects the rotation invariance. This problem is solved by an original algorithm. Experimental results clearly indicate that the inclusion of these shape priors significantly improves the segmentation results of the active contour model being used.
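The translation/scale/rotation normalization of Fourier descriptors can be sketched in a few lines. This is the standard magnitude-based normalization; note that the thesis additionally corrects the phase bias, so rotation information is not simply discarded as it is here.

```python
import numpy as np

def fourier_descriptors(contour, k=8):
    """contour: (n, 2) array of uniformly sampled boundary points.
    Returns k descriptor magnitudes invariant to translation, scale
    and rotation (a common normalization scheme)."""
    z = contour[:, 0] + 1j * contour[:, 1]   # contour as a complex signal
    c = np.fft.fft(z)
    c[0] = 0.0                    # drop the DC term -> translation invariance
    c = c / np.abs(c[1])          # divide by |c_1|   -> scale invariance
    return np.abs(c[1:k + 1])     # keep magnitudes  -> rotation invariance
```

A rotated, scaled and translated copy of a shape yields the same descriptor vector, which is what allows shape comparison directly in the descriptor space.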
Styles: APA, Harvard, Vancouver, ISO, etc.
23

Duvenage, Eugene. "miRNAMatcher: High throughput miRNA discovery using regular expressions obtained via a genetic algorithm." Thesis, University of the Western Cape, 2008. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_5752_1266536340.

Full text of the source
Abstract:

In summary, techniques currently exist to discover miRNA; however, they require many calculations to be performed during identification, limiting their use at a genomic level. Machine learning techniques currently provide the best results by combining a number of calculated and statistically derived features to identify miRNA candidates, yet almost all of these still include computationally intensive secondary-structure calculations. The aim of this project is to produce a miRNA identification process that minimises and simplifies the number of computational elements required during identification.
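The appeal of a regular-expression matcher is that scanning is cheap compared with secondary-structure folding. The sketch below is purely illustrative: the pattern is hand-written as a stand-in for one evolved by the genetic algorithm, and its arm/loop lengths are invented, not the thesis's learned expressions.

```python
import re

# Hypothetical pattern: a loose hairpin-like layout (G/C-anchored arm,
# short loop, second arm) matched on sequence alone, with no folding step.
CANDIDATE = re.compile(
    r"[ACGU]{0,5}"                  # flexible 5' flank
    r"[GC][ACGU]{18,24}"            # first arm, anchored on G/C
    r"[ACGU]{4,12}"                 # loop region
    r"[GC][ACGU]{18,24}"            # second arm
)

def scan(seq):
    """Return (start, end) spans of hairpin-like candidate regions.
    DNA input is accepted: T is mapped to U before matching."""
    rna = seq.upper().replace("T", "U")
    return [m.span() for m in CANDIDATE.finditer(rna)]
```

A genome-scale scan with such a pattern is linear-time per window, which is the cost profile the project is after; the GA's role is to evolve patterns whose matches agree with known miRNA.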

Styles: APA, Harvard, Vancouver, ISO, etc.
24

Ndiaye, Eugene. "Safe optimization algorithms for variable selection and hyperparameter tuning." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT004/document.

Full text of the source
Abstract:
Massive and automatic data processing requires the development of techniques able to filter out the most important information. Among these methods, those with sparse structures have been shown to improve the statistical and computational efficiency of estimators in a large-dimension context. They can often be expressed as the solution of a regularized empirical risk minimization and generally lead to non-differentiable optimization problems in the form of a sum of a smooth term, measuring the quality of the fit, and a non-smooth term, penalizing complex solutions. Although it has considerable advantages, such a way of including prior information unfortunately introduces many numerical difficulties, both for solving the underlying optimization problem and for calibrating the level of regularization. Solving these issues has been at the heart of this thesis. A recently introduced technique, called "Screening Rules", proposes to ignore some variables during the optimization process by benefiting from the expected sparsity of the solutions. These elimination rules are said to be safe when the procedure guarantees not to reject any variable wrongly. In this work, we propose a unified framework for identifying important structures in these convex optimization problems and we introduce the "Gap Safe Screening Rules". They allow significant gains in computational time thanks to the dimensionality reduction induced by this method. In addition, they can be easily inserted into iterative algorithms and apply to a large number of problems. To find a good compromise between minimizing risk and introducing a learning bias, (exact) homotopy continuation algorithms offer the possibility of tracking the curve of the solutions as a function of the regularization parameter. However, they exhibit numerical instabilities due to several matrix inversions and are often expensive in large dimension. Another weakness is that a worst-case analysis shows that their exact complexities are exponential in the dimension of the model parameter. Allowing approximate solutions makes it possible to circumvent the aforementioned drawbacks by approximating the curve of the solutions. In this thesis, we revisit the approximation techniques for regularization paths given a predefined tolerance and we propose an in-depth analysis of their complexity w.r.t. the regularity of the loss functions involved. Hence, we propose optimal algorithms as well as various strategies for exploring the parameter space. We also provide a calibration method (for the regularization parameter) that enjoys global convergence guarantees for the minimization of the empirical risk on the validation data. Among sparse regularization methods, the Lasso is one of the most celebrated and studied. Its statistical theory suggests choosing the level of regularization according to the amount of variance in the observations, which is difficult to use in practice because the variance of the model is often an unknown quantity. In such cases, it is possible to jointly optimize the regression parameter as well as the level of noise. These concomitant estimates, which appeared in the literature under the names Scaled Lasso and Square-Root Lasso, provide theoretical results as sharp as those of the Lasso while being independent of the actual noise level of the observations. Although presenting important advances, these methods are numerically unstable and the currently available algorithms are expensive in computation time. We illustrate these difficulties and propose modifications based on smoothing techniques to increase the stability of these estimators, as well as a faster algorithm to compute them.
Styles: APA, Harvard, Vancouver, ISO, etc.
25

Swathanthira, Kumar Murali Murugavel Manjakkattuvalasu. "Implementation of an actuator placement, switching algorithm for active vibration control in flexible structures." Link to electronic thesis, 2002. http://www.wpi.edu/Pubs/ETD/Available/etd-1120102-210634.

Full text of the source
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: Actuator placement algorithm; piezoelectric actuators; LQR; Galerkin; supervisory control; active vibration control; FEA; switching policy; dSPACE. Includes bibliographical references (p. 58-64).
Styles: APA, Harvard, Vancouver, ISO, etc.
26

Salcburger, Martin. "Semiaktivní systém odpružení vozidla." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2016. http://www.nusl.cz/ntk/nusl-254452.

Full text of the source
Abstract:
The aim of this diploma thesis is to design semi-active suspension control algorithms for a quarter-car model, to compare their performance with respect to ride comfort and driving safety, and subsequently to set up the control algorithms for use in the Adams software. The result of this thesis should be a determination of the effect of the semi-active suspension control system on vehicle behavior.
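One common semi-active algorithm of the kind such theses compare is two-state skyhook control, which can be sketched on a quarter-car model. All parameter values below are illustrative, not those of the thesis model, and the integration scheme is a simple semi-implicit Euler.

```python
import numpy as np

def simulate(road, dt=1e-3, ms=300.0, mu=40.0, ks=20e3, kt=180e3,
             c_lo=300.0, c_hi=3000.0):
    """Quarter-car model (sprung mass ms, unsprung mass mu) driven by a
    road profile, with two-state skyhook switching of the damper.
    Returns the sprung-mass displacement history."""
    zs = vs = zu = vu = 0.0
    out = []
    for zr in road:
        # skyhook logic: high damping only when the damper force
        # opposes the sprung-mass velocity (dissipative for comfort)
        c = c_hi if vs * (vs - vu) > 0 else c_lo
        fs = ks * (zu - zs) + c * (vu - vs)   # suspension force on ms
        ft = kt * (zr - zu)                   # tyre force on mu
        a_s, a_u = fs / ms, (ft - fs) / mu
        vs += a_s * dt; zs += vs * dt         # semi-implicit Euler update
        vu += a_u * dt; zu += vu * dt
        out.append(zs)
    return np.array(out)
```

For a step road input, the sprung mass should settle near the step height; comfort and safety comparisons in such studies are then made on acceleration and tyre-deflection histories.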
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Carneiro, Milena Bueno Pereira. "Reconhecimento de íris utilizando algoritmos genéticos e amostragem não uniforme." Universidade Federal de Uberlândia, 2010. https://repositorio.ufu.br/handle/123456789/14276.

Full text of the source
Abstract:
The automatic recognition of individuals through iris characteristics is an efficient biometric technique that is widely studied and applied around the world. Many image processing stages are necessary to make possible the representation and interpretation of the iris information. This work presents the state of the art in iris recognition systems, citing the most remarkable works and the different techniques applied to perform each processing stage. Implementations of each processing stage using traditional techniques are presented and, afterwards, two innovative methods are proposed with the common objective of benefiting the system. The first processing stage is the localization of the iris region in an eye image. The first method proposed in this work presents an algorithm that achieves iris localization through the use of so-called Memetic Algorithms. The new method is compared to a classical method and the obtained results show advantages concerning efficiency and processing time. In another processing stage there must be a sampling of pixels from the iris region, from which the information used to differentiate individuals is extracted. Traditionally, this sampling is accomplished uniformly along the whole iris region. A pre-processing method is proposed which suggests a non-uniform pixel sampling of the iris region, with the objective of selecting the group of pixels that carry the most information about the iris structure. The search for this group of pixels is done through Genetic Algorithms. The application of the new method improves the efficiency of the system and also allows the generation of smaller templates. In this work, a study of the so-called Active Shape Models is also carried out and their application to iris region segmentation is evaluated. To execute the simulations and the evaluation of the methods, two public and free iris image databases were used: the UBIRIS database and the MMU database.
Doctor of Science
Styles: APA, Harvard, Vancouver, ISO, etc.
28

Greenwood, Aaron Blake. "Implementation of Adaptive Filter Algorithms for the Suppression of Thermoacoustic Instabilities." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/31299.

Full text of the source
Abstract:
The main goal of this work was to develop adaptive filter algorithms and test their performance in active combustion control. Several algorithms were incorporated, divided into gradient descent algorithms and pattern searches. The algorithms were tested on three separate platforms. The first was an analog electronic simulator, which uses a second-order acoustics model and a first-order low-pass filter to simulate the flame dynamics of an unstable tube combustor. The second was a flat-flame, methane-air Rijke tube. The third can be considered a quasi-LDI liquid-fuel combustor with a thermal output of approximately 30 kW. Actuation included the use of an acoustic actuator for the Rijke tube and a proportional throttling valve for the liquid-fuel rig. Proportional actuation, pulsed actuation, and subharmonic control were all investigated throughout this work. The proportional actuation tests on the Rijke tube combustor showed that, in general, the gradient descent algorithms outperformed the pattern search algorithms. Although the pattern search algorithms were able to suppress the pressure signal to levels comparable to the gradient descent algorithms, the convergence time was lower for the gradient descent algorithms. The gradient algorithms were also superior in the presence of actuator authority limitations. The pulsed actuation on the Rijke tube showed that convergence time is decreased for this type of actuation, since there is a fixed-amplitude control signal and the algorithms do not have to search for a sufficient magnitude. It was shown that subharmonic control could be used in conjunction with the algorithms: control was achieved at the second and third subharmonics, and was maintained for much higher subharmonics. The cost surface of the liquid-fuel rig was obtained as the mean squared error of the combustor pressure as a function of the magnitude and phase of the controller. The adaptive algorithms achieved some suppression of the pressure oscillations but did not converge to the optimal phase shown in the cost surface. Simulations using the data from this cost surface were also performed. With the addition of a probing function, the algorithms were able to converge to a near-optimal condition.
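The pattern-search side of the comparison can be sketched on a synthetic cost surface, with the controller's gain and phase as the two search variables. The quadratic surface below is an illustrative stand-in for the measured mean-squared pressure, not the rig's actual response, and the search is a plain compass search with step halving.

```python
import numpy as np

def cost(gain, phase):
    """Illustrative stand-in for the measured mean-squared pressure:
    minimized at gain = 1.2 and cos(phase) = cos(2.0)."""
    return (gain - 1.2) ** 2 + 0.5 * (np.cos(phase) - np.cos(2.0)) ** 2

def compass_search(f, x0, step=0.5, tol=1e-4, max_iter=1000):
    """Derivative-free pattern search: probe +/- each coordinate,
    keep any improving move, halve the step when none improves."""
    x = np.array(x0, float)
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):
            if f(*(x + step * d)) < f(*x):
                x = x + step * d
                improved = True
                break
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x
```

In the combustor setting, each call to the cost function corresponds to measuring the mean-squared pressure over a short averaging window at the current controller setting, which is why convergence time (number of probes) matters.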
Master of Science
Styles: APA, Harvard, Vancouver, ISO, etc.
29

Araújo, Willian Fernandes. "As narrativas sobre os algoritmos do Facebook : uma análise dos 10 anos do feed de notícias." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/157660.

Full text of the source
Abstract:
This study follows the construction of the Facebook News Feed throughout its first ten years (2006-2016). The objective of this research is to describe the way this mechanism and the notion of algorithm were compounded, enacted and transformed during that period. This is achieved through an analysis of the digital content (referred to here as 'textual devices') that publicly constructs what the News Feed is and how it functions. This analysis describes the actors involved within this narrative, mapping their objectives and effects. The sample is constructed beginning with the textual devices published on Facebook's institutional websites: Facebook Blog and Facebook Newsroom. Following the reading of more than 1,000 texts by Facebook and other agents (users, content producers, media, activists, etc.), the most relevant publications were selected, emphasizing situations of change, conflict and controversy. The research approach, based on science and technology studies (STS) and actor-network theory (ANT), involved constructing a body of procedures used to describe the performative character of texts. The current study found that during the development of the News Feed, Facebook's notion of algorithm has gone through three different phases, referred to here respectively as the Edgerank Algorithm, the Right Algorithm and the User-centered Algorithm. One of the most interesting findings was that the changes in the News Feed are primarily oriented towards the objective of creating engagement by keeping users connected to Facebook. Engagement is an important commodity within the rationality that emerged from this scenario. It is argued that the News Feed development may be seen as a continuous flow. Another important finding is the notion called the algorithmic norm, a normative logic of visibility that rules the relationship between content producers and the News Feed. The algorithmic norm tends to enact specific judgements and to punish content producers who do not follow what Facebook calls good practices.
Styles: APA, Harvard, Vancouver, ISO, etc.
30

Rousselle, Jean-Jacques. "Les contours actifs, une méthode de ségmentation : application à l'imagerie médicale." Tours, 2003. http://www.theses.fr/2003TOUR4032.

Full text of the source
Abstract:
Image segmentation methods are numerous; all have advantages but none gives full satisfaction, and all must be adapted to the application to be carried out. Active contours, or deformable models, make it possible to avoid chaining the contour points, but they require the adjustment of many parameters. The active contours we have studied are implemented using a greedy algorithm. First, we propose a variant based on minimization by a genetic algorithm. Then we present three approaches to set the parameters which control the evolution of the contour. Design of experiments makes it possible, from a set of images, to choose an efficient set of parameters very quickly. Genetic algorithms can also be used to optimize the parameters. Finally, we describe an original approach where the parameters are local and drawn at random. These autonomous snakes allow the contours to evolve without any tuning. The applications use various images, in particular medical images.
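The greedy scheme discussed above can be sketched compactly. The following is a minimal illustration of one pass of a Williams-Shah-style greedy active contour, not the thesis's implementation: each vertex moves to the position in its 3x3 neighborhood that minimizes a weighted sum of continuity, curvature and edge-attraction terms, and the weights alpha, beta, gamma are exactly the kind of parameters the thesis tunes by design of experiments or genetic algorithms. The gradient image and the fixed-endpoint convention are illustrative assumptions.

```python
import math

def greedy_snake_step(points, grad, alpha, beta, gamma, fixed_ends=True):
    """One pass of a greedy active-contour update (Williams-Shah style).

    points : list of (x, y) vertices of an open contour
    grad   : 2D list, image gradient magnitude (higher = stronger edge)
    alpha, beta, gamma : continuity, curvature and edge-attraction weights
    """
    pts = list(points)
    n = len(pts)
    # average vertex spacing, used by the continuity term
    avg = sum(math.dist(pts[i], pts[i + 1]) for i in range(n - 1)) / (n - 1)
    rng = range(1, n - 1) if fixed_ends else range(n)
    for i in rng:
        prev, nxt = pts[i - 1], pts[(i + 1) % n]
        cx0, cy0 = pts[i]
        best, best_e = pts[i], None
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cx, cy = cx0 + dx, cy0 + dy
                if not (0 <= cy < len(grad) and 0 <= cx < len(grad[0])):
                    continue
                cont = abs(math.dist((cx, cy), prev) - avg)
                curv = (prev[0] - 2 * cx + nxt[0]) ** 2 + (prev[1] - 2 * cy + nxt[1]) ** 2
                e = alpha * cont + beta * curv - gamma * grad[cy][cx]
                if best_e is None or e < best_e:
                    best, best_e = (cx, cy), e
        pts[i] = best  # greedy: move immediately, later vertices see the update
    return pts
```

With beta dominant a jagged vertex is smoothed; with gamma dominant a vertex is pulled onto a strong-gradient pixel.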
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Bornschlegell, Augusto Salomao. "Optimisation aérothermique d'un alternateur à pôles saillants pour la production d'énergie électrique décentralisée." Thesis, Valenciennes, 2012. http://www.theses.fr/2012VALE0023/document.

Повний текст джерела
Анотація:
La présente étude concerne l’étude d’optimisation thermique d’une machine électrique. Un modèle nodal est utilisé pour la simulation du champ de température. Ce modèle résout l’équation de la chaleur en trois dimensions, en coordonnées cylindriques et en régime transitoire ou permanent. On prend en compte les deux mécanismes de transport les plus importants : la conduction et la convection. L’évaluation de ce modèle est effectuée par l’intermédiaire de 13 valeurs de débits de référence. C’est en faisant varier ces variables qu’on évalue la performance du refroidissement dans la machine. Avant de partir sur l’étude d’optimisation de cette géométrie, on a lancé une étude d’optimisation d’un cas plus simple afin de mieux comprendre les différents outils d’optimisation disponibles. L’expérience acquise avec les cas simples est utilisée dans l’optimisation thermique de la machine. La machine est thermiquement évaluée sur la combinaison de deux critères : la température maximale et la température moyenne. Des contraintes ont été additionnées afin d’obtenir des résultats physiquement acceptables. Le problème est résolu à l’aide des méthodes de gradient (Active-set et Point-Intérieur) et des Algorithmes Génétiques.
This work addresses the thermal optimization of an electrical machine. A lumped (nodal) model is used to simulate the temperature field. This model solves the heat equation in three dimensions, in cylindrical coordinates and in transient or steady state. We consider the two most important transport mechanisms: conduction and convection. The evaluation of this model is performed by means of 13 design variables that correspond to the main flow rates of the equipment. We analyse the machine's cooling performance by varying these 13 flow rates. Before starting the study of such a complicated geometry, we picked a simpler case in order to better understand the variety of available optimization tools. The experience obtained on the simpler case is applied to the thermal optimization problem of the electrical machine. The machine is evaluated from the thermal point of view by combining two criteria: the maximum and the mean temperature. Constraints are used to keep the problem physically consistent. We solved the problem using gradient-based methods (Active-set and Interior-Point) and Genetic Algorithms.
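To illustrate the kind of optimization described above, here is a hedged sketch of a genetic algorithm minimizing a weighted combination of maximum and mean temperature over a vector of flow rates, under a fixed total-flow constraint. The toy lumped model (node temperature inversely proportional to its flow) and all numbers are assumptions for illustration, not the thesis's nodal model.

```python
import random

def objective(q, power, w=0.5):
    # toy lumped model: node temperature rises with power, falls with flow
    temps = [p / max(f, 1e-9) for p, f in zip(power, q)]
    return w * max(temps) + (1 - w) * sum(temps) / len(temps)

def normalize(q, total):
    # enforce the total-flow constraint sum(q) == total
    s = sum(q)
    return [total * f / s for f in q]

def ga_minimize(power, total_flow, pop_size=40, gens=60, seed=0):
    rng = random.Random(seed)
    n = len(power)
    pop = [normalize([rng.random() + 0.1 for _ in range(n)], total_flow)
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda q: objective(q, power))
        nxt = pop[:2]  # elitism: keep the two best designs
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # select among the better half
            t = rng.random()
            child = [t * x + (1 - t) * y for x, y in zip(a, b)]  # blend crossover
            if rng.random() < 0.3:  # mutation: perturb one flow rate
                i = rng.randrange(n)
                child[i] *= rng.uniform(0.5, 1.5)
            nxt.append(normalize(child, total_flow))
        pop = nxt
    return min(pop, key=lambda q: objective(q, power))
```

As expected, the optimizer routes more flow toward the hotter node than a uniform split does.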
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Hossain, M. Alamgir, M. O. Tokhi, and Keshav P. Dahal. "Impact of algorithm design in implementing real-time active control systems." Springer, 2004. http://hdl.handle.net/10454/2597.

Повний текст джерела
Анотація:
This paper presents an investigation into the impact of algorithm design for real-time active control systems. An active vibration control (AVC) algorithm for flexible beam systems is employed to demonstrate the critical impact of design on real-time control applications. The AVC algorithm is analyzed, designed in various forms and implemented to explore this impact. Finally, the real-time computing performance of the algorithms is compared and discussed to demonstrate the merits of the different design mechanisms through a set of experiments.
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Loth, Manuel. "Algorithmes d'Ensemble Actif pour le LASSO." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2011. http://tel.archives-ouvertes.fr/tel-00845441.

Повний текст джерела
Анотація:
This thesis addresses the computation of the LASSO (Least Absolute Shrinkage and Selection Operator), together with related issues, in the field of regression. This operator has attracted growing attention since its introduction by Robert Tibshirani in 1996, owing to its ability to produce or identify sparse linear models from noisy observations, sparsity meaning that only a few among many explanatory variables appear in the proposed model. This selection is produced by adding to the least-squares method a constraint or penalty on the sum of the absolute values of the linear coefficients, also called the l1 norm of the coefficient vector. After reviewing the motivations, principles and issues of regression, linear estimators, the least-squares method, model selection and regularization, the two equivalent formulations of the LASSO, constrained and regularized, are presented; both define a non-trivial computational problem for associating an estimator with a set of observations and a selection parameter. A brief history of the algorithms solving this problem is given, and the two approaches for handling the non-differentiability of the l1 norm are presented, as well as the equivalence of these problems with a quadratic program. The second part focuses on the practical aspects of algorithms for solving the LASSO. One of them, proposed by Michael Osborne in 2000, is reformulated. This reformulation consists in giving a general definition and explanation of the active-set method, which generalizes the simplex algorithm to convex programming, then in progressively specializing it to LASSO programming, and in addressing the optimization of the algebraic computations.
Although it essentially describes the same algorithm as Michael Osborne's, the presentation given here aims to expose its mechanisms clearly and uses different variables. Besides helping to better understand this visibly underestimated algorithm, the angle from which it is presented sheds light on the new fact that the same method applies naturally to the regularized formulation of the LASSO, and not only to the constrained formulation. The popular homotopy method (also known as LAR-LASSO or LARS) is then presented as a derivation of the active-set method, leading to an alternative and somewhat simplified formulation of this algorithm, which provides the LASSO solutions for every value of its parameter. It is shown that, contrary to the results of a recent study by Jerome H. Friedman, implementations of these algorithms following these reformulations are more efficient in terms of computation time than a coordinate descent method. The third part studies the extent to which these three algorithms (active set, homotopy, and coordinate descent) can handle certain special cases, and can be applied to extensions of the LASSO or to other similar problems. The special cases include degeneracies, such as the presence of linearly dependent variables, or the simultaneous selection/deselection of variables. This last issue, which had been neglected in previous works, is explained here more fully and a simple and efficient solution is provided. Another special case is LASSO selection from a very large, or even infinite, number of variables, a case for which the active-set method presents a major advantage. One extension of the LASSO is its transposition to an online learning setting, where it is desirable or necessary to solve the problem on a set of observations that evolves over time.
Again, the limited flexibility of the homotopy method disqualifies it in favor of the other two. Another extension is the use of the l1 penalty with cost functions other than the l2 norm of the residual, or in combination with other penalties, and it is recalled or established to what extent and in what way each algorithm can be transposed to these problems.
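The thesis's own contribution is the active-set and homotopy reformulation; the coordinate-descent competitor it benchmarks against can be sketched in a few lines. This is a generic textbook implementation (cyclic soft-thresholding updates with residual maintenance), not the thesis's code.

```python
def soft_threshold(z, t):
    # proximal operator of the l1 norm
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for min_w 0.5*||y - Xw||^2 + lam*||w||_1.

    X is a list of rows; the residual r = y - Xw is maintained incrementally
    so each coordinate update costs O(n).
    """
    n, p = len(X), len(X[0])
    w = [0.0] * p
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    r = y[:]
    for _ in range(n_iter):
        for j in range(p):
            # correlation of column j with the partial residual (w_j removed)
            rho = sum(X[i][j] * (r[i] + X[i][j] * w[j]) for i in range(n))
            w_new = soft_threshold(rho, lam) / col_sq[j]
            if w_new != w[j]:
                for i in range(n):
                    r[i] -= X[i][j] * (w_new - w[j])
                w[j] = w_new
    return w
```

On an orthonormal design this reduces to soft-thresholding each correlation, which makes the sparsifying effect of the l1 penalty easy to see.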
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Aidarous, Yasser. "Optimisation des modèles actifs d'apparence." Rennes 1, 2012. http://www.theses.fr/2012REN1S037.

Повний текст джерела
Анотація:
Nous utilisons les modèles actifs d'apparence (AAM) pour l'alignement de bouches sous différentes expressions. Cependant, la prédiction utilisée par cette méthode s'avère très sensible à l'initialisation et consommatrice à la fois en temps de calcul et en espace mémoire. Nous remplaçons la prédiction classique des AAM par une optimisation basée sur le simplexe de Nelder & Mead. Nous utilisons une combinaison de Gaussiennes pour initialiser le simplexe et contraindre les solutions proposées, à l'aide d'un calcul d'appartenance en temps réel, à appartenir à un espace d'apparences plausibles. Les paramètres d'apparence sont divisés en deux ensembles traités différemment : les paramètres dominants, qui ont une grande influence sur le modèle, et les paramètres récessifs, qui influent sur les détails de l'objet modélisé. Deux nouvelles méthodes sont proposées : la première applique un simplexe adapté à l'ensemble des paramètres d'apparence, et la deuxième exploite l'optimum d'un premier simplexe adapté, appliqué aux paramètres dominants, suivi d'un autre appliqué à l'ensemble des paramètres. Tout en présentant des taux de convergence supérieurs et des intervalles d'initialisation plus larges que ceux des AAM classiques, les méthodes proposées sont moins consommatrices en mémoire et en temps de traitement. La comparaison des méthodes proposées permet de privilégier l'utilisation de l'une ou de l'autre méthode suivant les performances imposées à l'algorithme d'alignement.
We use active appearance models (AAM) to align mouths under different expressions. However, the prediction used by this method is very sensitive to initialization and is time and memory consuming. We replace the classical prediction of AAM with a Nelder & Mead simplex optimization. We use a Gaussian mixture to initialize the simplex and to constrain the proposed solutions, using a real-time membership computation, to belong to a space of plausible appearances. The appearance parameters are divided into two sets which are treated differently: a set of dominant parameters, which have a great influence on the model, and a set of recessive parameters, which control the details of the modeled object. Two new methods are suggested: the first applies an adapted simplex to all appearance parameters, and the second uses the optimum of an adapted simplex applied to the dominant parameters, followed by another one applied to all parameters. Convergence time and memory used by the simplex-based methods are lower than those of classical AAM. These methods have higher convergence rates, and their initialization intervals are wider than those of classical AAM. The comparison of the two proposed methods allows us to select one of them depending on the required performance of the application.
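The Nelder & Mead simplex at the heart of the proposed methods can be illustrated with a minimal implementation. This is the generic reflection/expansion/contraction/shrink scheme on a toy objective, not the adapted, Gaussian-initialized simplex of the thesis.

```python
def nelder_mead(f, simplex, n_iter=200, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimal Nelder-Mead simplex minimization.

    simplex : list of n+1 vertices (lists of coordinates) for an n-dim problem
    alpha, gamma, rho, sigma : reflection, expansion, contraction, shrink factors
    """
    for _ in range(n_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        n = len(best)
        # centroid of all vertices except the worst
        cen = [sum(p[i] for p in simplex[:-1]) / (len(simplex) - 1) for i in range(n)]
        refl = [cen[i] + alpha * (cen[i] - worst[i]) for i in range(n)]
        if f(refl) < f(simplex[-2]):
            if f(refl) < f(best):
                # reflection is the new best: try expanding further
                exp = [cen[i] + gamma * (refl[i] - cen[i]) for i in range(n)]
                simplex[-1] = exp if f(exp) < f(refl) else refl
            else:
                simplex[-1] = refl
        else:
            # contract toward the worst vertex
            con = [cen[i] + rho * (worst[i] - cen[i]) for i in range(n)]
            if f(con) < f(worst):
                simplex[-1] = con
            else:
                # shrink the whole simplex toward the best vertex
                simplex = [best] + [[best[i] + sigma * (p[i] - best[i])
                                     for i in range(n)] for p in simplex[1:]]
    return min(simplex, key=f)
```

The method needs only function values, no gradients, which is what makes it usable for appearance-model fitting where the cost is an image-matching error.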
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Moura, Carolina Silva de. "Associações sociotécnicas: mediações algorítmicas e a economia das ações no Facebook." Universidade Federal de Goiás, 2018. http://repositorio.bc.ufg.br/tede/handle/tede/8427.

Повний текст джерела
Анотація:
Fundação de Amparo à Pesquisa do Estado de Goiás - FAPEG
This study seeks to map the economy mediated by the association between algorithms and the user on Facebook, focusing on commercial pages. This economy maintains a direct relationship with the processes of collecting, storing and articulating data, favoring the personalization of content. The first part presents the project with regard to its problem, objectives and relevance. The second continues with a conceptual discussion about algorithms, their developments, research perspectives, and which of these are pursued in this analysis. The third addresses Actor-Network Theory, the theoretical proposal of this research, since it treats human and non-human agents symmetrically, especially in the light of Latour (1994a, 2012). It then explores ways of implementing the empirical research, grounded in the development of a cartography (VENTURINNI, 2010; BARROS, KASTRUP, 2009; ROMAGNOLI, 2009; PRADO FILHO, TETI, 2013). Taking this as a reference, the empirical exercise was carried out from the researcher's profile on Facebook, which was observed for a month based on the selection of some variables internal and external to the platform. The subsequent reading exposes the data observed in this analysis, which point to an economy based on the circulation of segmented content grounded in the user's actions: the user's actions prompt translations by the algorithm which reverberate in what is exposed in the News Feed, and thus the user also carries out translations in a network that is made and remade constantly.
O projeto busca cartografar a economia mediada pela associação entre os algoritmos e o usuário no Facebook voltando-se às páginas comerciais. Economia essa que estabelece relação direta com os processos de coleta, armazenamento e articulação de dados favorecendo a personalização de conteúdo. Na primeira parte apresenta-se o projeto quanto ao problema, objetivos e relevância. A segunda dá seguimento à discussão conceitual sobre algoritmos, seus desdobramentos, perspectivas de pesquisa e quais delas são buscadas nessa análise. A terceira aborda a Teoria Ator-Rede, proposta teórica para a pesquisa por considerar simétricos agentes humanos e não humanos a luz especialmente de Latour (1994a, 2012). Por conseguinte se explora as formas de implementação da pesquisa empírica fundamentada no desenvolvimento de uma cartografia (VENTURINNI, 2010; BARROS, KASTRUP, 2009; ROMAGNOLI, 2009; PRADO FILHO, TETI, 2013). Tomando isso como referência, o exercício empírico, foi realizado a partir do perfil da pesquisadora no Facebook que durante um mês foi observado com base na seleção de algumas variáveis internas e externas à plataforma. A leitura seguinte expõe os dados observados nessa análise que apontam para uma economia pautada na circulação de conteúdos segmentados com base nas ações do usuário, em que o agir dele promove traduções do algoritmo as quais reverberam no que é exposto no Feed de Notícias e assim o usuário também realiza suas traduções em uma rede que se faz e refaz de modo constante.
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Liu, Siwei. "Apport d'un algorithme de segmentation ultra-rapide et non supervisé pour la conception de techniques de segmentation d'images bruitées." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4371.

Повний текст джерела
Анотація:
La segmentation d'image constitue une étape importante dans le traitement d'image et de nombreuses questions restent ouvertes. Il a été montré récemment, dans le cas d'une segmentation à deux régions homogènes, que l'utilisation de contours actifs polygonaux fondés sur la minimisation d'un critère issu de la théorie de l'information permet d'aboutir à un algorithme ultra-rapide qui ne nécessite ni paramètre à régler dans le critère d'optimisation, ni connaissance a priori sur les fluctuations des niveaux de gris. Cette technique de segmentation rapide et non supervisée devient alors un outil élémentaire de traitement. L'objectif de cette thèse est de montrer les apports de cette brique élémentaire pour la conception de nouvelles techniques de segmentation plus complexes, permettant de dépasser un certain nombre de limites, et en particulier : d'être robuste à la présence dans les images de fortes inhomogénéités ; de segmenter des objets non connexes par contour actif polygonal sans complexifier les stratégies d'optimisation ; de segmenter des images multi-régions tout en estimant de façon non supervisée le nombre de régions homogènes présentes dans l'image. Nous avons pu aboutir à des techniques de segmentation non supervisées fondées sur l'optimisation de critères sans paramètre à régler et ne nécessitant aucune information sur le type de bruit présent dans l'image. De plus, nous avons montré qu'il était possible de concevoir des algorithmes basés sur l'utilisation de cette brique élémentaire, permettant d'aboutir à des techniques de segmentation rapides et dont la complexité de réalisation est faible dès lors que l'on possède une telle brique élémentaire.
Image segmentation is an important step in many image processing systems, and many problems remain unsolved. It has recently been shown that when the image is composed of two homogeneous regions, polygonal active contour techniques based on the minimization of a criterion derived from information theory achieve an ultra-fast algorithm which requires neither parameters to tune in the optimized criterion nor a priori knowledge of the gray-level fluctuations. This algorithm can then be used as a fast and unsupervised processing module. The objective of this thesis is therefore to show how this ultra-fast and unsupervised algorithm can be used as a module in the conception of more complex segmentation techniques, making it possible to overcome several limits, in particular: to be robust to the presence of strong inhomogeneity in the image, which is often inherent in the acquisition process, such as non-uniform illumination, attenuation, etc.; to segment disconnected objects by polygonal active contour without complicating the optimization strategy; and to segment multi-region images while estimating in an unsupervised way the number of homogeneous regions in the image. For each of these three problems, unsupervised segmentation techniques based on the optimization of Minimum Description Length criteria have been obtained, which require neither parameter tuning by the user nor a priori information on the kind of noise in the image. Moreover, it has been shown that fast segmentation techniques can be achieved using this segmentation module, while keeping the implementation complexity low.
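The minimum-description-length idea behind these criteria can be illustrated in one dimension: choose the boundary between two homogeneous regions that minimizes a description-length cost. The Gaussian code-length formula and the parameter-cost term below are simplifying assumptions; the criteria in the thesis are derived more carefully and operate on polygonal contours, not 1D signals.

```python
import math

def two_region_split(signal):
    """Choose the boundary minimizing a simple MDL-style two-region cost.

    Sketch of the minimum-description-length principle: under a Gaussian
    noise assumption, the code length of a region of n samples with
    empirical variance v is, up to constants, (n/2)*log(v), plus a small
    charge for coding the region parameters.
    """
    n = len(signal)
    best_k, best_cost = None, None
    for k in range(2, n - 1):  # keep at least 2 samples per region
        cost = 0.0
        for seg in (signal[:k], signal[k:]):
            m = sum(seg) / len(seg)
            v = sum((x - m) ** 2 for x in seg) / len(seg)
            cost += 0.5 * len(seg) * math.log(max(v, 1e-12))  # data code length
            cost += 0.5 * math.log(len(seg))                  # parameter cost
        if best_cost is None or cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```

No noise level or threshold is supplied by the user: the boundary that makes both regions statistically homogeneous yields the shortest description, which is the sense in which such criteria are parameter-free.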
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Oz, Sinan. "Implement Of Three Segmentation Algorithms For Ct Images Of Torso." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12612866/index.pdf.

Повний текст джерела
Анотація:
Many practical applications in the field of medical image processing require valid and reliable segmentation of images. In this dissertation, we propose three different semi-automatic segmentation frameworks for 2D upper-torso medical images to construct a 3D geometric model of the torso structures. In the first framework, an extended version of Otsu's method for three-level thresholding and a recursive connected component algorithm are combined. The segmentation process is accomplished by first using the Extended Otsu's method and then labeling in each consecutive slice. Since there is no information about pixel positions in the outcome of the Extended Otsu's method, we perform some processing after labeling to connect pixels belonging to the same tissue. In the second framework, the Chan-Vese (CV) method, which is an example of active contour models, and a recursive connected component algorithm are used together. The segmentation process is achieved using the CV method without edge information as the stopping criterion. In the third and last framework, the combination of the watershed transformation and K-means is used as the segmentation method. After the segmentation operation, labeling is performed to determine the medical structures. In addition, the segmentation and labeling operations are carried out for each consecutive slice in each framework. The results of each framework are compared quantitatively with manual segmentation results to evaluate their performance.
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Gupta, Vaibhav. "A Characterization of Wireless Network Interface Card Active Scanning Algorithms." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_theses/28.

Повний текст джерела
Анотація:
In this thesis, we characterize the proprietary active scanning algorithms of several wireless network interface cards. Our experiments are the first of their kind to observe the complete scanning process as the wireless network interface cards probe all the channels in the 2.4GHz spectrum. We discuss: 1) the correlation between channel popularity during active scanning and access point channel deployment popularity; 2) statistics on the number of probe request frames sent on each channel; 3) the channel probe order; and 4) the dwell time. The knowledge gained from characterizing wireless network interface cards is important for the following reasons: 1) it helps one understand how active scanning is implemented in different hardware and software; 2) it can be useful in identifying a wireless rogue host; 3) it can help implement active scanning in network simulators; and 4) it can radically influence research in familiar fields such as link-layer handovers and the effective deployment of access points.
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Nguyen, Thi Thanh. "Algorithmes gloutons orthogonaux sous contrainte de positivité." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0133/document.

Повний текст джерела
Анотація:
De nombreux domaines applicatifs conduisent à résoudre des problèmes inverses où le signal ou l'image à reconstruire est à la fois parcimonieux et positif. Si la structure de certains algorithmes de reconstruction parcimonieuse s'adapte directement pour traiter les contraintes de positivité, il n'en va pas de même des algorithmes gloutons orthogonaux comme OMP et OLS. Leur extension positive pose des problèmes d'implémentation car les sous-problèmes de moindres carrés positifs à résoudre ne possèdent pas de solution explicite. Dans la littérature, les algorithmes gloutons positifs (NNOG, pour “Non-Negative Orthogonal Greedy algorithms”) sont souvent considérés comme lents, et les implémentations récemment proposées exploitent des schémas récursifs approchés pour compenser cette lenteur. Dans ce manuscrit, les algorithmes NNOG sont vus comme des heuristiques pour résoudre le problème de minimisation L0 sous contrainte de positivité. La première contribution est de montrer que ce problème est NP-difficile. Deuxièmement, nous dressons un panorama unifié des algorithmes NNOG et proposons une implémentation exacte et rapide basée sur la méthode des contraintes actives avec démarrage à chaud pour résoudre les sous-problèmes de moindres carrés positifs. Cette implémentation réduit considérablement le coût des algorithmes NNOG et s'avère avantageuse par rapport aux schémas approximatifs existants. La troisième contribution consiste en une analyse de reconstruction exacte en K étapes du support d'une représentation K-parcimonieuse par les algorithmes NNOG lorsque la cohérence mutuelle du dictionnaire est inférieure à 1/(2K-1). C'est la première analyse de ce type
Non-negative sparse approximation arises in many application fields such as biomedical engineering, fluid mechanics, astrophysics, and remote sensing. Some classical sparse algorithms can be straightforwardly adapted to deal with non-negativity constraints. On the contrary, the non-negative extension of orthogonal greedy algorithms is a challenging issue, since the unconstrained least-squares subproblems are replaced by non-negative least-squares subproblems which do not have closed-form solutions. In the literature, non-negative orthogonal greedy (NNOG) algorithms are often considered to be slow. Moreover, some recent works exploit approximate schemes to derive efficient recursive implementations. In this thesis, NNOG algorithms are introduced as heuristic solvers dedicated to L0 minimization under non-negativity constraints. It is first shown that the latter L0 minimization problem is NP-hard. The second contribution is a unified framework for NNOG algorithms together with an exact and fast implementation, where the non-negative least-squares subproblems are solved using the active-set algorithm with warm-start initialisation. The proposed implementation significantly reduces the cost of NNOG algorithms and appears to be more advantageous than existing approximate schemes. The third contribution consists of a unified K-step exact support recovery analysis of NNOG algorithms when the mutual coherence of the dictionary is lower than 1/(2K-1). This is the first analysis of this kind.
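A minimal sketch of the non-negative greedy loop described above: select the atom most positively correlated with the residual, then re-fit all selected coefficients under the non-negativity constraint. The inner non-negative least-squares step is solved here by projected coordinate descent, a deliberate simplification of the warm-started active-set solver the thesis proposes.

```python
def nn_greedy(X, y, k, inner_iters=100):
    """Non-negative orthogonal greedy sketch (not the thesis's implementation).

    X : list of rows (the dictionary), y : observation, k : max sparsity.
    Returns a dict {column index: non-negative coefficient}.
    """
    n, p = len(X), len(X[0])
    support, w = [], {}
    r = y[:]  # residual y - Xw
    for _ in range(k):
        # correlations of the residual with the unselected columns
        scores = {j: sum(X[i][j] * r[i] for i in range(n))
                  for j in range(p) if j not in support}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= 0:
            break  # no atom can decrease the residual with a positive weight
        support.append(j_best)
        w[j_best] = 0.0
        # projected coordinate descent on the current support (NNLS surrogate)
        for _ in range(inner_iters):
            for j in support:
                rho = sum(X[i][j] * (r[i] + X[i][j] * w[j]) for i in range(n))
                nrm = sum(X[i][j] ** 2 for i in range(n))
                w_new = max(rho / nrm, 0.0)  # clip at zero: non-negativity
                if w_new != w[j]:
                    for i in range(n):
                        r[i] -= X[i][j] * (w_new - w[j])
                    w[j] = w_new
    return w
```

The key structural points the sketch preserves are the positive-correlation selection rule and the full re-fit of the support at each step, which is what distinguishes orthogonal greedy schemes from simple matching pursuit.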
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Mathema, Najma. "Predicting Plans and Actions in Two-Player Repeated Games." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8683.

Повний текст джерела
Анотація:
Artificial intelligence (AI) agents will need to interact with both other AI agents and humans. One way to enable effective interaction is to create models of associates that help to predict the modeled agents' actions, plans, and intentions. If AI agents are able to predict what other agents in their environment will be doing in the future and can understand the intentions of these other agents, the AI agents can use these predictions in their planning and decision-making and in assessing their own potential. Prior work [13, 14] introduced the S# algorithm, which is designed as a robust algorithm for many two-player repeated games (RGs) to enable cooperation among players. Because S# generates actions, has (internal) experts that seek to accomplish an internal intent, and associates plans with each expert, it is a useful algorithm for exploring intent, plan, and action in RGs. This thesis presents a graphical Bayesian model for predicting the actions, plans, and intents of an S# agent. The same model is also used to predict human actions. The actions, plans and intentions associated with each S# expert are (a) identified from the literature and (b) grouped by expert type. The Bayesian model then uses its transition probabilities to predict the action and expert type from observing human or S# play. Two techniques were explored for translating probability distributions into specific predictions: Maximum A Posteriori (MAP) and an Aggregation approach. The Bayesian model was evaluated for three RGs (Prisoner's Dilemma, Chicken and Alternator) as follows. The prediction accuracy of the model was compared to predictions from machine learning models (J48, multilayer perceptron and Random Forest) as well as from the fixed strategies presented in [20]. Prediction accuracy was obtained by comparing the model's predictions against the actual player's actions. Accuracy for plan and intent prediction was measured by comparing predictions to the actual plans and intents followed by the S# agent.
Since the plans and the intents of human players were not recorded in the dataset, this thesis does not measure the accuracy of the Bayesian model against actual human plans and intents. Results show that the Bayesian model effectively models the actions, plans, and intents of the S# algorithm across the various games. Additionally, the Bayesian model outperforms other methods for predicting human actions. When the games do not allow players to communicate using so-called “cheap talk”, the MAP-based predictions are significantly better than the Aggregation-based predictions. There is no significant difference in the performance of MAP-based and Aggregation-based predictions for modeling human behavior when cheap talk is allowed, except in the game of Chicken.
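The prediction machinery can be illustrated with a stripped-down version of the Bayesian update: maintain a posterior over (hypothetical) expert types from observed actions, then predict the next action by MAP. The two expert types and their action likelihoods below are invented for illustration; the model in the thesis additionally tracks plans and transition probabilities.

```python
def posterior_over_experts(priors, likelihoods, observations):
    """Bayesian update of a belief over expert types from observed actions.

    likelihoods[e][a] is the probability that expert type e plays action a.
    """
    post = dict(priors)
    for a in observations:
        post = {e: p * likelihoods[e][a] for e, p in post.items()}
        z = sum(post.values())
        post = {e: p / z for e, p in post.items()}  # renormalize
    return post

def map_action(post, likelihoods, actions):
    """MAP prediction of the next action: marginalize over expert types,
    then take the most probable action."""
    scores = {a: sum(post[e] * likelihoods[e][a] for e in post) for a in actions}
    return max(scores, key=scores.get)
```

After a few consistent observations the posterior concentrates on one expert type, which is the mechanism that lets the model attribute an intent to observed play.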
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Cabell, Randolph H. III. "A Principal Component Algorithm for Feedforward Active Noise and Vibration Control." Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30461.

Повний текст джерела
Анотація:
A principal component least mean square (PC-LMS) adaptive algorithm is described that has considerable benefits for large control systems used to implement feedforward control of single frequency disturbances. The algorithm is a transform domain version of the multichannel filtered-x LMS algorithm. The transformation corresponds to the principal components of the transfer function matrix between the sensors and actuators in a control system at a single frequency. The method is similar to other transform domain LMS algorithms because the transformation can be used to accelerate convergence when the control system is ill-conditioned. This ill-conditioning is due to actuator and sensor placement on a continuous structure. The principal component transformation rotates the control filter coefficient axes to a more convenient coordinate system where (1) independent convergence factors can be used on each coordinate to accelerate convergence, (2) insignificant control coordinates can be eliminated from the controller, and (3) coordinates that require excessive control effort can be eliminated from the controller. The resulting transform domain algorithm has lower computational requirements than the filtered-x LMS algorithm. The formulation of the algorithm given here applies only to single frequency control problems, and computation of the decoupling transforms requires an estimate of the transfer function matrix between control actuators and error sensors at the frequency of interest. The feasibility of the method was demonstrated in real-time noise control experiments involving 48 microphones and 12 control actuators mounted on a closed cylindrical shell. Convergence of the PC-LMS algorithm was more stable than the filtered-x LMS algorithm. In addition, the PC-LMS controller produced more noise reduction with less control effort than the filtered-x LMS controller in several tests.
Ph. D.
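The single-frequency setting makes the control update especially simple: at one frequency, signals are complex phasors, and each principal-component coordinate of the decoupled system reduces to an independent scalar loop of the following form. This sketch shows one such scalar complex LMS loop under an assumed known transfer function g; the PC-LMS contribution described above is precisely to rotate the multichannel problem into independent coordinates like this one, each with its own convergence factor.

```python
def single_freq_lms(d, g, mu=0.2, n_iter=200):
    """Complex LMS update for one tone and one actuator/sensor pair.

    d : complex disturbance phasor at the error sensor
    g : complex actuator-to-sensor transfer function at the tone frequency
    Returns the adapted control weight; the residual error phasor is d + g*w.
    """
    w = 0j
    for _ in range(n_iter):
        e = d + g * w                  # error phasor at the sensor
        w -= mu * g.conjugate() * e    # filtered-x style gradient step
    return w
```

The error contracts by the factor (1 - mu*|g|^2) per iteration, so the loop converges for mu*|g|^2 < 2 toward the exact cancelling weight w = -d/g; in the decoupled coordinates mu can be chosen per coordinate against the corresponding singular value.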
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Johansson, Sven. "Active Control of Propeller-Induced Noise in Aircraft : Algorithms & Methods." Doctoral thesis, Karlskrona, Ronneby : Blekinge Institute of Technology, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00171.

Повний текст джерела
Анотація:
In the last decade, acoustic noise has increasingly come to be regarded as a problem. In cars, boats, trains and aircraft, low-frequency noise reduces comfort. Lightweight materials and more powerful engines are used in high-speed vehicles, resulting in a general increase in interior noise levels. Low-frequency noise is annoying and, during periods of long exposure, it causes fatigue and discomfort. The masking effect which low-frequency noise has on speech reduces speech intelligibility. Attenuation of low-frequency noise is therefore sought in a wide range of applications in order to improve comfort and speech intelligibility. The use of conventional passive methods to attenuate low-frequency noise is often impractical since considerable bulk and weight are required; in transportation, added weight is associated with high fuel consumption. In order to overcome the problems of ineffective passive suppression of low-frequency noise, the technique of active noise control has become of considerable interest. The fundamental principle of active noise control is based on secondary sources producing ``anti-noise.'' Destructive interference between the generated and the primary sound fields results in noise attenuation. Active noise control systems significantly increase the capacity for attenuating low-frequency noise without a major increase in volume and weight. This doctoral dissertation deals with the topic of active noise control within the passenger cabin in aircraft, and within headsets. The work focuses on methods, controller structures and adaptive algorithms for attenuating tonal low-frequency noise produced by synchronized or moderately synchronized propellers generating beating sound fields. The control algorithm is a central part of an active noise control system. A multiple-reference feedforward controller based on the novel actuator-individual normalized Filtered-X Least-Mean-Squares algorithm is introduced, yielding significant attenuation of such periodic noise.
This algorithm is of the LMS type and, owing to the novel normalization, can also be regarded as a Newton-type algorithm. The new algorithm combines low computational complexity with high performance, which makes it suitable for systems with a large number of control sources and control sensors, reducing the computational power required by the control system. The computational power of the DSP hardware is limited, so algorithms with high computational complexity allow fewer control sources and sensors to be used, often with reduced noise attenuation as a result. In applications such as controlling aircraft cabin noise, where a large multiple-channel system is needed to control the relatively complex interior sound field, it is of great importance to keep down the computational complexity of the algorithm so that a large number of loudspeakers and microphones can be used. The dissertation presents theoretical work, off-line computer experiments and practical real-time experiments using the actuator-individual normalized algorithm. The computer experiments are principally based on real-life cabin noise data recorded during flight in a twin-engine propeller aircraft and in a helicopter. The practical experiments were carried out in a full-scale fuselage section from a propeller aircraft.
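The kind of adaptive feedforward controller discussed above can be illustrated by a minimal single-channel sketch of the filtered-x LMS update with a conventional power normalization. The thesis's actuator-individual normalized, multiple-reference, multiple-channel algorithm is not reproduced here, and all names (`secondary_path`, `mu`, `n_taps`) are illustrative.

```python
import numpy as np

# Minimal single-channel filtered-x LMS (FxLMS) sketch with a conventional
# power normalization. The actuator-individual normalization introduced in
# the dissertation is a multiple-channel refinement not shown here.
def fxlms(reference, primary, secondary_path, n_taps=32, mu=0.05):
    """Adapt an FIR control filter so the secondary source cancels the
    primary disturbance observed at the error microphone."""
    reference = np.asarray(reference, dtype=float)
    primary = np.asarray(primary, dtype=float)
    s = np.asarray(secondary_path, dtype=float)  # loudspeaker-to-mic model
    w = np.zeros(n_taps)                         # control filter taps
    # Reference filtered through the secondary-path model ("filtered x").
    x_filt = np.convolve(reference, s)[: len(reference)]
    y_hist = np.zeros(len(s))                    # recent controller outputs
    errors = np.zeros(len(reference))
    for n in range(n_taps, len(reference)):
        x = reference[n - n_taps:n][::-1]        # controller regressor
        y = w @ x                                # anti-noise sample
        y_hist = np.roll(y_hist, 1)
        y_hist[0] = y
        # Residual at the microphone: primary noise plus the control
        # signal propagated through the secondary path.
        errors[n] = primary[n] + s @ y_hist
        xf = x_filt[n - n_taps:n][::-1]          # filtered-reference regressor
        # Normalized LMS step: divide by the filtered-reference power.
        w -= mu * errors[n] * xf / (xf @ xf + 1e-8)
    return w, errors
```

With a tonal reference correlated with the primary noise, the residual decays toward zero; in the multiple-channel aircraft-cabin setting, a corresponding update runs per actuator, which is where the per-actuator normalization studied in the thesis enters.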
Noise in our everyday environment can have a negative effect on our health. Low-frequency noise occurs in many settings, for example in cars, boats and aircraft. Low-frequency noise is usually not harmful to hearing, but it can be tiring and make conversation difficult for people in an exposed environment. Reducing the noise level improves speech intelligibility and increases comfort. Attenuating low-frequency noise with traditional passive methods, such as absorbers and reflectors, is usually ineffective: large, bulky absorbers are needed to attenuate this type of noise, and heavy partitions are required to prevent the noise from being transmitted from one space to another. Active methods are better suited to attenuating low-frequency noise. They are based on the principle that a wave superimposed on another wave in antiphase cancels it out. Noise attenuation is obtained by generating a sound field that is as strong as the noise but in antiphase with it. Active noise control methods provide effective attenuation of low-frequency noise while leaving the volume of, for example, the car interior or the boat/aircraft cabin essentially unaffected. In addition, the weight of the vehicle can be reduced, which benefits fuel consumption. In most applications the character of the noise, i.e. its level and frequency content, varies. Following these variations requires an adaptive (self-tuning) control system that governs the generation of the anti-noise. In propeller aircraft, the dominant frequencies of the cabin noise are related to the rotational speed of the propellers, so the frequencies to be attenuated are known. A tachometer signal is used to generate signals, so-called reference signals, at the frequencies to be attenuated. These are processed by a control system that generates signals for the loudspeakers, which in turn produce the anti-noise.
To adjust the loudspeaker signals so that effective attenuation is obtained, microphones placed in the cabin are used to measure the noise. Achieving effective noise attenuation in a room, such as an aircraft cabin, requires several loudspeakers and microphones, which in turn requires an advanced control system. The doctoral dissertation ''Active Control of Propeller-Induced Noise in Aircraft'' treats various methods for reducing cabin noise originating from the propellers. Different control system structures are presented, together with computational algorithms for tuning the system. For large systems, where many loudspeakers and microphones are used and several frequencies are to be attenuated, it is important that generating the anti-noise does not demand excessive computational capacity. The methods treated provide effective attenuation at low computational cost. Parts of the material presented in the dissertation were included in an EU project aimed at noise suppression in propeller aircraft, in which several European aircraft manufacturers participated. The dissertation also treats active noise control in headsets used by helicopter pilots; in this application, active noise control has been used to improve speech intelligibility.
43

Shieh, Yih-Dar. "Arithmetic Aspects of Point Counting and Frobenius Distributions." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4108/document.

Full text of the source
Abstract:
This thesis consists of two parts. Part 1 studies the decomposition of cohomology groups for a family of non-hyperelliptic genus 3 curves with an involution, and the benefit of such a decomposition in the computation of Frobenius using Kedlaya's algorithm. The involution of such a curve C induces a degree 2 morphism to an elliptic curve E, which gives a decomposition of Jac(C) into E and an abelian surface A, from which the Frobenius on C can be recovered. On E, the characteristic polynomial of the Frobenius can be computed using an efficient algorithm that is fast in practice. By working with the subgroup V of $H^1_{MW}(C)$, we obtain a better constant than a direct application of Kedlaya's method to C. To my knowledge, this is the first use of the decomposition of the cohomology induced by a decomposition (up to isogeny) of the Jacobian in Kedlaya's algorithm. In Part 2, I propose a new approach to Frobenius distributions and Sato-Tate groups, which uses the orthogonality relations of the irreducible characters of the Lie group USp(2g) and its subgroups. To this end, I first present a simple method for computing the irreducible characters of USp(2g), and then I develop an algorithm based on the Brauer-Klimyk formula. The advantages of this new approach are examined in detail. I also use the family of curves from Part 1 as a case study. The analyses and comparisons show that the character-theory approach is a more intrinsic and very promising tool for the study of Sato-Tate groups.
This thesis consists of two parts. Part 1 studies the decomposition of cohomology groups induced by automorphisms for a family of non-hyperelliptic genus 3 curves with an involution, and I investigate the benefit of such a decomposition in the computation of Frobenius using Kedlaya's algorithm. The involution of a curve C in this family induces a degree 2 map to an elliptic curve E, which gives a decomposition of the Jacobian of C into E and an abelian surface A, from which the Frobenius on C can be recovered. On E, the characteristic polynomial of the Frobenius can be computed using an efficient and fast algorithm. By working with the cohomology subgroup V of $H^1_{MW}(C)$, we get a constant speed-up over a straightforward application of Kedlaya's method to C. To my knowledge, this is the first use of a decomposition of the cohomology induced by an isogeny decomposition of the Jacobian in Kedlaya's algorithm. In Part 2, I propose a new approach to Frobenius distributions and Sato-Tate groups, which uses the orthogonality relations of the irreducible characters of the compact Lie group USp(2g) and its subgroups. To this end, I first present a simple method to compute the irreducible characters of USp(2g), and then I develop an algorithm based on the Brauer-Klimyk formula. The advantages of this new approach to Sato-Tate groups are examined in detail; the results show that the error grows slowly. I also use the family of genus 3 curves studied in Part 1 as a case study. The analyses and comparisons show that the character theory approach is a more intrinsic and very promising tool for studying Sato-Tate groups.
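As standard background for the character-theoretic approach (this is textbook representation theory, not the thesis's algorithm itself): for a compact group $G$ with normalized Haar measure $\mu$, the irreducible characters satisfy

```latex
\int_G \chi_i(g)\,\overline{\chi_j(g)}\,d\mu(g) = \delta_{ij},
\qquad\text{so}\qquad
\langle f, \chi_i \rangle = \int_G f(g)\,\overline{\chi_i(g)}\,d\mu(g)
```

recovers the multiplicity of $\chi_i$ in a class function $f$. Matching such inner products, with $f$ built from normalized Frobenius traces estimated by point counting, against their values for the candidate subgroups of $USp(2g)$ is one way these relations can identify the Sato-Tate group.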
44

Crook, Deborah. "Polynomial invariants of the Euclidean group action on multiple screws : a thesis submitted to the Victoria University of Wellington in fulfilment of the requirements for the degree of Master of Science in Mathematics /." ResearchArchive@Victoria e-Thesis, 2009. http://hdl.handle.net/10063/1205.

Full text of the source
45

Mishchenko, Kateryna. "Numerical Algorithms for Optimization Problems in Genetical Analysis." Doctoral thesis, Västerås : School of Education, Culture and Communication, Mälardalen University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-650.

Full text of the source
46

Mair, Patrick, Kurt Hornik, and Leeuw Jan de. "Isotone Optimization in R: Pool-Adjacent-Violators Algorithm (PAVA) and Active Set Methods." American Statistical Association, 2009. http://epub.wu.ac.at/3993/1/isotone.pdf.

Full text of the source
Abstract:
In this paper we give a general framework for isotone optimization. First we discuss a generalized version of the pool-adjacent-violators algorithm (PAVA) to minimize a separable convex function subject to simple chain constraints. Beyond general convex functions, we extend existing PAVA implementations in terms of observation weights, approaches for tie handling, and responses from repeated measurement designs. Since isotone optimization problems can be formulated as convex programming problems with linear constraints, we then develop a primal active set method to solve such problems. This methodology is applied to specific loss functions relevant in statistics. Both approaches are implemented in the R package isotone. (authors' abstract)
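The basic pool-adjacent-violators step can be sketched as follows. This is a plain least-squares version with optional observation weights, not the package's generalized implementation covering arbitrary convex functions and tie handling, and the names are illustrative.

```python
# Minimal sketch of the pool-adjacent-violators algorithm (PAVA) for
# weighted least-squares isotonic regression under a simple chain
# (nondecreasing) constraint.
def pava(y, w=None):
    """Return the nondecreasing fit to y with optional weights w."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    # Each block stores [weighted mean, total weight, number of points].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Pool adjacent blocks while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, c1 + c2])
    # Expand block means back to a full-length fitted sequence.
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit
```

For example, `pava([3, 1, 2, 4])` pools the violating prefix into a constant block, giving `[2.0, 2.0, 2.0, 4]`; the active set method in the paper solves the same class of problems via the dual characterization of the linear constraints.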
47

Feng, Tao. "Design and Analysis of Efficient Adaptive Algorithms for Active Control of Vehicle Interior Sound." University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1490354549915601.

Full text of the source
48

Schulz, Sergio Luiz. "Metodologia para a alocação ótima discreta de sensores e atuadores piezoelétricos na simulação do controle de vibrações em estruturas de materiais compósitos laminados." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/62047.

Full text of the source
Abstract:
The main objective of vibration control is its reduction or minimization through automatic modification of the structural response. In many situations this is necessary to ensure structural stability and to achieve the high mechanical performance required in several technical areas, such as aerospace, civil and mechanical engineering, as well as biotechnology, including the micro- and nano-mechanical scales. One alternative is the use of smart structures, which result from the combination of sensors and actuators integrated into a mechanical structure with a suitable control method. The main objective of this work is the development of computational routines for the finite element simulation of the active control of smart structures such as slender shells, plates and beams of laminated composite material with layers of piezoelectric material acting as sensors and/or actuators. This research is characterized by the use of the GPL-T9 element, with three nodes and six mechanical degrees of freedom per node plus one electrical degree of freedom per piezoelectric layer; by the evaluation of two control methods, the Proportional-Integral-Derivative (PID) and the Linear Quadratic Regulator (LQR), including the Modal LQR; and by the optimization of the placement of piezoelectric patches through a Genetic Algorithm (GA). Several applications are presented and the results obtained are compared with those available in the specialized literature.
The main objective of vibration control is to reduce or even minimize vibration by automatically modifying the structural response. This is sometimes necessary to increase structural stability and to attain high mechanical performance in areas such as aerospace, civil and mechanical engineering and biotechnology, across macro, micro and nanomechanical scales. An alternative is to use a smart structure, which results from the combination of integrated sensors and actuators in a mechanical structure and a suitable control method. The main objective of this work is the development of a computational code to simulate, using finite elements, active control in smart structures such as slender shells, plates and beams of composite materials with embedded piezoelectric layers acting as actuators and sensors. This research is characterized by the use of the GPL-T9 element, with three nodes, six mechanical degrees of freedom per node and one electrical degree of freedom per piezoelectric layer; by the evaluation of two control methods, the Proportional-Integral-Derivative (PID) and the Linear Quadratic Regulator (LQR), including the Modal LQR; and finally by the optimization of piezoelectric patch placement using a Genetic Algorithm (GA). Several examples are presented and compared with those obtained by other authors.
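The LQR design step evaluated in the work can be illustrated on a single vibration mode. This is a generic textbook sketch using SciPy's Riccati solver, not the thesis's laminated composite shell model; the state-space matrices and weights are illustrative only.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR design for one lightly damped vibration mode in state-space form
# x = [displacement, velocity]; the control input stands in for a
# piezoelectric actuator. All numbers here are illustrative.
wn, zeta = 2 * np.pi * 5.0, 0.01           # 5 Hz mode with light damping
A = np.array([[0.0, 1.0],
              [-wn**2, -2 * zeta * wn]])   # modal dynamics
B = np.array([[0.0], [1.0]])               # actuator influence
Q = np.diag([wn**2, 1.0])                  # penalize displacement, velocity
R = np.array([[1e-3]])                     # penalize control effort

# Solve the continuous-time algebraic Riccati equation and form the
# optimal state-feedback gain for u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
closed_loop_poles = np.linalg.eigvals(A - B @ K)
```

A genetic algorithm, as used in the thesis for patch placement, would wrap such a design step in an optimization loop, scoring each candidate actuator layout (which changes B) by the resulting closed-loop damping.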
49

Plachkov, Alex. "Soft Data-Augmented Risk Assessment and Automated Course of Action Generation for Maritime Situational Awareness." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35336.

Full text of the source
Abstract:
This thesis presents a framework capable of integrating hard (physics-based) and soft (people-generated) data for the purpose of achieving increased situational assessment (SA) and effective course of action (CoA) generation upon risk identification. The proposed methodology is realized through the extension of an existing Risk Management Framework (RMF). In this work, the RMF’s SA capabilities are augmented via the injection of soft data features into its risk modeling; the performance of these capabilities is evaluated via a newly-proposed risk-centric information fusion effectiveness metric. The framework’s CoA generation capabilities are also extended through the inclusion of people-generated data, capturing important subject matter expertise and providing mission-specific requirements. Furthermore, this work introduces a variety of CoA-related performance measures, used to assess the fitness of each individual potential CoA, as well as to quantify the overall chance of mission success improvement brought about by the inclusion of soft data. This conceptualization is validated via experimental analysis performed on a combination of real-world and synthetically-generated maritime scenarios. It is envisioned that the capabilities put forth herein will take part in a greater system, capable of ingesting and seamlessly integrating vast amounts of heterogeneous data, with the intent of providing accurate and timely situational updates, as well as assisting in operational decision making.
50

Ganti, Mahapatruni Ravi Sastry. "New formulations for active learning." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51801.

Full text of the source
Анотація:
In this thesis, we provide computationally efficient algorithms with provable statistical guarantees for the problem of active learning, using ideas from sequential analysis. We provide a generic algorithmic framework for active learning in the pool setting, and instantiate this framework using ideas from learning with experts, stochastic optimization, and multi-armed bandits. For the problem of learning a convex combination of a given set of hypotheses, we provide a stochastic mirror descent based active learning algorithm in the stream setting.
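The pool setting referred to above can be illustrated, generically, by the classical uncertainty-sampling heuristic below. This is not the thesis's experts/bandits instantiation, and all names are illustrative.

```python
import numpy as np

# Generic sketch of pool-based active learning via uncertainty sampling:
# repeatedly query labels for the pool points the current model is least
# certain about, instead of labeling the whole pool.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, steps=200):
    """Plain gradient-descent logistic regression (labels in {0, 1})."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def active_learn(X_pool, oracle, n_init=4, n_queries=20, seed=1):
    """`oracle(indices)` returns the true labels of the queried points."""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), n_init, replace=False))
    for _ in range(n_queries):
        w = train_logreg(X_pool[labeled], oracle(np.array(labeled)))
        p = sigmoid(X_pool @ w)
        unlabeled = np.setdiff1d(np.arange(len(X_pool)), labeled)
        # Least-certain point: predicted probability closest to 1/2.
        labeled.append(unlabeled[np.argmin(np.abs(p[unlabeled] - 0.5))])
    return train_logreg(X_pool[labeled], oracle(np.array(labeled)))
```

On a linearly separable pool, a few dozen targeted queries typically suffice for an accurate classifier; the thesis replaces this heuristic query rule with principled criteria carrying statistical guarantees.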