Dissertations on the topic "CNN ALGORITHM"

To see other types of publications on this topic, follow the link: CNN ALGORITHM.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "CNN ALGORITHM".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever such data are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile an accurate bibliography.

1

Shaif, Ayad. "Predictive Maintenance in Smart Agriculture Using Machine Learning : A Novel Algorithm for Drift Fault Detection in Hydroponic Sensors." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42270.

Full text of the source
Abstract:
The success of Internet of Things solutions allowed the establishment of new applications such as smart hydroponic agriculture. One typical problem in such an application is the rapid degradation of the deployed sensors. Traditionally, this problem is resolved by frequent manual maintenance, which is considered ineffective and may harm the crops in the long run. The main purpose of this thesis was to propose a machine learning approach for automating the detection of sensor fault drifts. In addition, the solution's operability was investigated in a cloud computing environment in terms of response time. This thesis proposes a detection algorithm that utilizes RNNs to predict sensor drifts from time-series data streams. The detection algorithm, later named the Predictive Sliding Detection Window (PSDW), consists of both forecasting and classification models. Three different RNN algorithms, i.e., LSTM, CNN-LSTM, and GRU, were designed to predict sensor drifts using forecasting and classification techniques. The algorithms were compared against each other in terms of relevant accuracy metrics for forecasting and classification. The operability of the solution was investigated by developing a web server that hosted the PSDW algorithm on an AWS computing instance. The resulting forecasting and classification algorithms were able to make reasonably accurate predictions for this particular scenario. More specifically, the forecasting algorithms achieved relatively low RMSE values of ~0.6, while the classification algorithms obtained an average F1-score and accuracy of ~80%, but with a high standard deviation. However, the response time was ~5700% slower during the simulation of the HTTP requests. The obtained results suggest the need for future investigations to improve the accuracy of the models and to experiment with other computing paradigms for more reliable deployments.
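The thesis does not reproduce its code in this abstract; as a rough, hypothetical sketch of the sliding-window idea, a minimal Keras LSTM forecaster whose prediction error is thresholded to flag drift (window size, layer sizes and the synthetic stream are all assumptions) could look like this:

```python
import numpy as np
from tensorflow.keras import layers, models

def make_windows(series, window=32):
    # Slice a 1-D sensor stream into (window -> next value) training pairs.
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y  # the LSTM expects (samples, timesteps, features)

# Synthetic stand-in for a hydroponic sensor stream; a real PSDW setup
# would use the logged time-series data instead.
stream = np.sin(np.linspace(0, 60, 2000)) + np.random.normal(0, 0.05, 2000)
X, y = make_windows(stream)

model = models.Sequential([
    layers.Input(shape=(X.shape[1], 1)),
    layers.LSTM(64),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# Flag a drift when the forecast error over a recent detection window grows
# well beyond the error level observed early in the stream.
errors = np.abs(model.predict(X, verbose=0).ravel() - y)
print("drift suspected:", errors[-100:].mean() > 3 * errors[:100].mean())
```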
APA, Harvard, Vancouver, ISO, and other styles
2

Reiling, Anthony J. "Convolutional Neural Network Optimization Using Genetic Algorithms." University of Dayton / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1512662981172387.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Brandt, Carl-Simon, Jonathan Kleivard, and Andreas Turesson. "Convolutional, adversarial and random forest-based DGA detection : Comparative study for DGA detection with different machine learning algorithms." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-20103.

Full text of the source
Abstract:
Malware is becoming more intelligent as static methods for blocking communication with Command and Control (C&C) servers are becoming obsolete. Domain Generation Algorithms (DGAs) are a common evasion technique that generates pseudo-random domain names for communicating with C&C servers in a way that is difficult to detect using handcrafted methods. Trying to detect DGAs by looking at the domain name is a broad and efficient approach to detecting malware-infected hosts. This gives us the possibility of detecting a wider assortment of malware compared to other techniques, even without knowledge of the malware's existence. Our study compared the effectiveness of three different machine learning classifiers: Convolutional Neural Network (CNN), Generative Adversarial Network (GAN) and Random Forest (RF) when recognizing patterns and identifying these pseudo-random domains. The results indicate that CNN differed significantly from GAN and RF. It achieved 97.46% accuracy in the final evaluation, while RF achieved 93.89% and GAN achieved 60.39%. In the future, network traffic (efficiency) could be a key component to examine, as productivity may be harmed if the network is overburdened by domain identification using machine learning algorithms.
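As a loose illustration of how a character-level CNN can score domain names (not the study's actual model; the toy domains, character vocabulary and layer sizes are invented for the example):

```python
import numpy as np
from tensorflow.keras import layers, models

# Toy stand-in data; the thesis trains on real benign and DGA domain lists.
domains = ["google", "wikipedia", "xjw9qk3zr", "qpzm8vty2"]
labels = np.array([0, 0, 1, 1])  # 0 = benign, 1 = DGA-generated

maxlen = 30
charset = "abcdefghijklmnopqrstuvwxyz0123456789-"
char_to_idx = {c: i + 1 for i, c in enumerate(charset)}  # 0 is padding

def encode(domain):
    # Map characters to integer ids and pad to a fixed length.
    ids = [char_to_idx.get(c, 0) for c in domain[:maxlen]]
    return ids + [0] * (maxlen - len(ids))

X = np.array([encode(d) for d in domains])

model = models.Sequential([
    layers.Input(shape=(maxlen,)),
    layers.Embedding(len(charset) + 1, 16),
    layers.Conv1D(64, 3, activation="relu"),   # character n-gram filters
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),     # P(domain is DGA-generated)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, labels, epochs=10, verbose=0)
```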
APA, Harvard, Vancouver, ISO, and other styles
4

El-Shafei, Ahmed. "Time multiplexing of cellular neural networks." Thesis, University of Kent, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365221.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

MOREIRA, André Luis Cavalcanti. "An adaptable storage slicing algorithm for content delivery networks." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/17331.

Full text of the source
Abstract:
Several works study the performance of Content Delivery Networks (CDNs) under various network infrastructure and demand conditions. Many strategies have been proposed to deal with aspects inherent to the CDN distribution model. Though mostly very effective, a traditional CDN approach of statically positioned elements often fails to meet quality of experience (QoE) requirements when network conditions suddenly change. CDN adaptation is a key feature in this process, and some studies go even further and try to also deal with demand elasticity by providing an elastic infrastructure (cloud computing) to such CDNs. Each Content Provider (CP) is served only the amount of storage space and network throughput that it needs and pays only for what has been used. Some IaaS providers offer simple CDN services on top of their infrastructure. However, in general, there is a lack of PaaS tools to rapidly create a CDN. There is no standard or open source software able to deliver CDN as a service for each tenant through well-known managers. A PaaS CDN should be able to implement a content delivery service in a cloud environment, provision and orchestrate each tenant, monitor usage, and make decisions on the planning and dimensioning of resources. This work introduces a framework for the allocation of resources of a CDN in a multi-tenant environment. The framework is able to provision and orchestrate multi-tenant virtual CDNs and can be seen as a step towards a PaaS CDN. A simple dot-product-based module for network change detection is presented and a more elaborate multi-tenant resource manager model is defined. We solve the resulting ILP problem using both branch and bound and an efficient cache slicing algorithm that employs a three-phase heuristic for the orchestration of multi-tenant virtual CDNs. We finally show that a distributed algorithm with limited local information may also offer reasonable resource allocation while using limited coordination among the different nodes. A self-organization behavior emerges when some of the nodes reach consensus.
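The "dot-product-based module" above is not spelled out in the abstract; one plausible reading, sketched here with invented demand vectors and an invented threshold, is a cosine-similarity test between consecutive per-content demand snapshots:

```python
import numpy as np

def demand_changed(prev, curr, threshold=0.9):
    """Flag a change when the normalized dot product (cosine similarity)
    between consecutive per-content demand vectors drops below a threshold."""
    cos = np.dot(prev, curr) / (np.linalg.norm(prev) * np.linalg.norm(curr))
    return cos < threshold

# Hypothetical request counts per content item at two sampling instants.
prev = np.array([120.0, 30.0, 5.0, 80.0])
curr = np.array([10.0, 300.0, 90.0, 15.0])
print(demand_changed(prev, curr))  # True: the demand profile shifted sharply
```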
APA, Harvard, Vancouver, ISO, and other styles
6

Yavaş, Gökhan. "Algorithms for Characterizing Structural Variation in Human Genome." Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1279345476.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Tamaki, Suguru. "Improved Algorithms for CNF Satisfiability Problems." 京都大学 (Kyoto University), 2006. http://hdl.handle.net/2433/68895.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Wathugala, Wathugala Gamage Dulan Manujinda. "Formal Modeling Can Improve Smart Transportation Algorithm Development." Thesis, University of Oregon, 2017. http://hdl.handle.net/1794/22608.

Full text of the source
Abstract:
Ensuring algorithms work accurately is crucial, especially when they drive safety-critical systems like self-driving cars. We formally model a published distributed algorithm for autonomous vehicles to collaborate and pass through an intersection. Models are built and validated using the Labelled Transition System Analyser (LTSA). Our models reveal situations leading to deadlocks and crashes in the algorithm. We demonstrate two approaches to gain insight about a large and complex system without modeling the entire system: modeling a subsystem (if the subsystem has issues, so does the super system), and modeling a fast-forwarded state (which reveals problems that can arise later in a process). Some productivity tools developed for distributed system development are also presented. Manulator, our distributed system simulator, enables quick prototyping and debugging on a single workstation. LTSA-O, an extension to LTSA, listens to messages exchanged in an execution of a distributed system and validates it against a model.
APA, Harvard, Vancouver, ISO, and other styles
9

Pallotti, Davide. "Integrazione di dati di disparità sparsi in algoritmi per la visione stereo basati su deep-learning." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16633/.

Full text of the source
Abstract:
Stereo vision consists of extracting depth information from a scene starting from a left view and a right view. The problem reduces to determining corresponding points in the two images, which in the case of rectified images are shifted only horizontally, by a distance called disparity. Among traditional stereo algorithms, SGM and its implementation rSGM stand out. SGM minimizes a cost function defined over a cost volume, which measures the similarity of the neighbourhoods of potential homologous points for numerous disparity values. The ability of convolutional neural networks (CNNs) to carry out perception tasks has revolutionized the approach to stereo vision. One example of a stereo CNN is GC-Net, well suited to experimentation given its limited number of parameters. GC-Net also produces the disparity map from a cost volume, obtained by combining features extracted from the two views. The goal of this thesis is to integrate externally suggested sparse disparity data into a stereo algorithm, with the aim of improving accuracy. The proposed idea is to use the known data associated with sparse points to modulate the values corresponding to those same points in the cost volume. We first experiment with this approach on GC-Net. Initially we use disparities sampled randomly from the ground truth: this allows us to verify the soundness of the method and simulates the use of a low-resolution depth sensor. We then employ the outputs of SGM and rSGM, again sampled randomly, asking whether this already yields a first improvement over GC-Net alone. Next, we test the applicability of this same method to a traditional algorithm, rSGM, using only the ground truth as a source of disparities. Finally, we return to the idea of providing GC-Net with the help of rSGM, but select only the most promising points according to a confidence measure computed with the neural network CCNN.
APA, Harvard, Vancouver, ISO, and other styles
10

Essink, Wesley. "CNC milling toolpath generation using genetic algorithms." Thesis, University of Bath, 2017. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.715245.

Full text of the source
Abstract:
The prevalence of digital manufacturing in creating increasingly complex products with small batch sizes requires effective methods for production process planning. Toolpath generation is one of the challenges for manufacturing technologies that function based on the controlled movement of an end effector against a workpiece. The current approaches for determining suitable tool paths are highly dependent on machine structure, manufacturing technology and product geometry. This dependence can be very expensive in a volatile production environment where the products and the resources change quickly. In this research, a novel approach for the flexible generation of toolpaths using a mathematical formulation of the desired objective is proposed. The approach, based on optimisation techniques, is developed by discretising the product space into a number of grid points and determining the optimal sequence in which the tool tip visits these points. To demonstrate the effectiveness of the approach, the context of milling has been chosen and a genetic algorithm has been developed to solve the optimisation problem. The results show that with meta-heuristic methods, flexible tool paths can indeed be generated for industrially relevant parts using existing computational power. Future computing platforms, including quantum computers, could extend the applicability of the proposed approach to much more complex domains for instantaneous optimisation of the detailed manufacturing process plan.
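As a hedged sketch of the core idea, ordering discretised grid points with an evolutionary loop, the following toy genetic algorithm minimises tool-tip travel over a small grid; the representation, operators and parameters are assumptions, not the thesis's implementation:

```python
import math
import random

points = [(x, y) for x in range(5) for y in range(5)]  # discretised surface

def path_length(order):
    # Total tool-tip travel along the visiting sequence.
    return sum(math.dist(points[a], points[b]) for a, b in zip(order, order[1:]))

def mutate(order):
    # 2-opt style mutation: reverse a random sub-segment of the sequence.
    child = order[:]
    i, j = sorted(random.sample(range(len(child)), 2))
    child[i:j] = reversed(child[i:j])
    return child

pop = [random.sample(range(len(points)), len(points)) for _ in range(50)]
for _ in range(300):
    pop.sort(key=path_length)
    survivors = pop[:10]                                  # elitist selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print("best toolpath length:", round(path_length(min(pop, key=path_length)), 2))
```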
APA, Harvard, Vancouver, ISO, and other styles
11

Karaouzene, Thomas. "Bioinformatique et infertilité : analyse des données de séquençage haut-débit et caractérisation moléculaire du gène DPY19L2." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAS041/document.

Full text of the source
Abstract:
In the last decade, the investigation of genetic diseases has been revolutionized by the rise of high-throughput sequencing (HTS). Thanks to these new techniques it is now possible to analyze the totality of the coding sequences of an individual (exome sequencing) or even the sequences of his entire genome or transcriptome. The understanding of a pathology and of the genes associated with it now depends on our ability to identify causal variants within a plethora of technical artifacts and benign variants. HTS is expected to be particularly useful in the field of infertility, as this pathology is expected to be highly genetically heterogeneous and only a few genes have so far been associated with it. My thesis focuses on male infertility and is divided into two main parts: HTS data analysis of infertile men, and the molecular characterization of a specific phenotype, globozoospermia. Several thousand distinct variants can be identified in a single exome, so effective informatics is essential in order to obtain a short and actionable list of variants. It is for this purpose that I developed an HTS data analysis pipeline performing successively all bioinformatics analysis steps: 1) read mapping along a reference genome, 2) genotype calling, 3) variant annotation, and 4) the filtering of the variants considered non-relevant for the analysis. Performing all these interdependent steps within a single pipeline is a good way to calibrate them and therefore to reduce the number of erroneous calls. This pipeline has been used in five studies and allowed the identification of variants impacting candidate genes that may explain the patients' infertility phenotype. All these variants have been experimentally validated using Sanger sequencing. I also took part in the genetic and molecular investigations which demonstrated that the absence of the DPY19L2 gene induces male infertility due to globozoospermia, the presence in the ejaculate of only round-headed and acrosomeless spermatozoa. Most patients with globozoospermia have a homozygous deletion of the whole gene. I contributed to the characterization of the mechanisms responsible for this recurrent deletion; then, using Dpy19l2 knockout (KO) mice, I carried out a comparative study of the testicular transcriptomes of wild-type and Dpy19l2-/- KO mice. This study highlighted a dysregulation of 76 genes in KO mice. Among them, 23 are involved in nucleic acid and protein binding, which may explain the acrosome anchoring defects observed in the sperm of globozoospermic patients. My work allowed a better understanding of globozoospermia and the development of an HTS data analysis pipeline. The latter allowed the identification of more than 15 human gametogenesis genes involved in different infertility phenotypes.
APA, Harvard, Vancouver, ISO, and other styles
12

Presutti, Pasquale. "Algoritmo per la generazione di mappe depth da immagini stereo con CNN." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find the full text of the source
Abstract:
The work was carried out by first studying in depth the topics related to the stereo correspondence problem and then, after a general analysis of convolutional neural networks and of the methodologies already available for solving stereo matching through deep learning, implementing a CNN architecture able to return a disparity map, exploiting theories from the world of computer vision.
APA, Harvard, Vancouver, ISO, and other styles
13

Aguilar, Cesar David. "Self-tuning algorithm for dsp-based motion control in cnc applications." FIU Digital Commons, 1997. http://digitalcommons.fiu.edu/etd/1145.

Full text of the source
Abstract:
This thesis describes the development of an adaptive control algorithm for Computerized Numerical Control (CNC) machines implemented in a multi-axis motion control board based on the TMS320C31 DSP chip. The adaptive process involves two stages: Plant Modeling and Inverse Control Application. The first stage builds a non-recursive model of the CNC system (plant) using the Least-Mean-Square (LMS) algorithm. The second stage consists of the definition of a recursive structure (the controller) that implements an inverse model of the plant by using the coefficients of the model in an algorithm called Forward-Time Calculation (FTC). In this way, when the inverse controller is implemented in series with the plant, it will pre-compensate for the modification that the original plant introduces in the input signal. The performance of this solution was verified at three different levels: Software simulation, implementation in a set of isolated motor-encoder pairs and implementation in a real CNC machine. The use of the adaptive inverse controller effectively improved the step response of the system in all three levels. In the simulation, an ideal response was obtained. In the motor-encoder test, the rise time was reduced by as much as 80%, without overshoot, in some cases. Even with the larger mass of the actual CNC machine, decrease of the rise time and elimination of the overshoot were obtained in most cases. These results lead to the conclusion that the adaptive inverse controller is a viable approach to position control in CNC machinery.
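As a hedged illustration of the first stage, Plant Modeling with LMS, the following sketch identifies a non-recursive (FIR) model of an unknown plant; the synthetic plant, step size and tap count are assumptions, not the thesis's values:

```python
import numpy as np

def lms_identify(u, d, taps=16, mu=0.01):
    """Build a non-recursive (FIR) model of the plant with the LMS rule:
    w <- w + mu * e * x, where e = d[n] - w.x for the current input window x."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(u)):
        x = u[n - taps + 1:n + 1][::-1]   # u[n], u[n-1], ..., u[n-taps+1]
        e = d[n] - np.dot(w, x)
        w += mu * e * x
    return w

# Hypothetical plant: an unknown 4-tap FIR response driven by white noise.
rng = np.random.default_rng(0)
u = rng.standard_normal(5000)
true_h = np.array([0.5, 0.3, 0.15, 0.05])
d = np.convolve(u, true_h)[:len(u)]
print(np.round(lms_identify(u, d, taps=4), 3))  # should approach true_h
```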
APA, Harvard, Vancouver, ISO, and other styles
14

Dosi, Shubham. "Optimization and Further Development of an Algorithm for Driver Intention Detection with Fuzzy Logic and Edit Distance." Master's thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-202567.

Full text of the source
Abstract:
Inspired by the idea of Vision Zero, there is a lot of work to be done in the field of advanced driver assistance systems to develop safer systems. Driver intention detection, with a prediction of the upcoming behavior of the driver, is one possible way to reduce fatalities in road traffic. Driver intention detection provides an early warning of the driver's behavior to an Advanced Driver Assistance System (ADAS) and at the same time reduces the risk of non-essential warnings. This significantly reduces the warning-dilemma problem and makes the system safer. A driving maneuver prediction can be regarded as an implementation of the driver's behavior, so the aim of this thesis is to determine the driver's intention by early prediction of a driving maneuver using Controller Area Network (CAN) bus data. The focus of this thesis is to optimize and further develop an algorithm for driver intention detection with fuzzy logic and the edit distance method. At first, the basics concerning driver intention detection are described, as there exist different ways to determine it; this work uses CAN bus data. The algorithm overview and the design parameters are described next to give an idea of how the algorithm functions. Then different implementation tasks for optimizing and further developing the algorithm are explained. The main aim of these implementation tasks is to improve the overall performance of the algorithm with respect to the True Positive Rate (TPR), False Positive Rate (FPR) and earliness values. At the end, the results are validated to check the algorithm's performance under different conditions, and a test drive is performed to evaluate the real-time capability of the algorithm. Lastly, the use of the driver intention detection algorithm to make an ADAS safer is described in detail: the early warning information can be fed to an ADAS, for example an automatic collision avoidance or lane change assistance system, to further improve its safety.
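As a small, hypothetical illustration of the edit-distance component (the symbol alphabet and the maneuver template below are invented, not taken from the thesis), a symbolized CAN-signal stream can be compared against a maneuver template with the classic Levenshtein distance:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

# Hypothetical symbol streams: fuzzified CAN signals vs. a lane-change template.
observed = "SSALLL"   # S = straight, A = accelerating, L = drifting left
template = "SALLLL"
print(edit_distance(observed, template))  # small distance -> likely lane change
```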
APA, Harvard, Vancouver, ISO, and other styles
15

Resiga, Alin. "Design Optimization for a CNC Machine." PDXScholar, 2018. https://pdxscholar.library.pdx.edu/open_access_etds/4257.

Full text of the source
Abstract:
Minimizing cost and optimizing nonlinear problems are important for industries in order to stay competitive. The need for optimization strategies provides significant benefits for companies when providing quotes for products. Accurate and easily attained estimates allow for less waste, tighter tolerances, and better productivity. The Nelder-Mead simplex method with exterior penalty functions was employed to find optimum machining parameters. Two case studies were presented for optimizing cost and time in a multiple-tool scenario. In this study, the optimum machining parameters for milling operations were investigated. Cutting speed and feed rate are considered the most impactful design variables across each operation. A single-tool process and scalable multiple-tool milling operations were studied. Various optimization methods were discussed. The Nelder-Mead simplex method proved to be simple and fast.
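A minimal sketch of a Nelder-Mead search with exterior penalty functions, using an invented machining-cost model and bounds rather than the thesis's actual formulation, could be written as:

```python
import numpy as np
from scipy.optimize import minimize

def machining_cost(x):
    # Illustrative cost model: a cycle-time term plus a tool-wear term.
    v, f = x  # cutting speed (m/min) and feed rate (mm/rev)
    return 1.0 / (v * f) + 1e-5 * v ** 2

def penalized(x, r=1e3):
    # Exterior penalty: add r * violation^2 for each constraint g_i(x) <= 0
    # that is broken, so infeasible points cost more the further out they are.
    v, f = x
    g = [50 - v, v - 300, 0.05 - f, f - 0.8]
    return machining_cost(x) + r * sum(max(0.0, gi) ** 2 for gi in g)

res = minimize(penalized, x0=np.array([100.0, 0.2]), method="Nelder-Mead")
print(res.x, res.fun)
```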
APA, Harvard, Vancouver, ISO, and other styles
16

Zvokel, Kenneth M. "Can a rhino be taught to draw? : a look at path control algorithms." Virtual Press, 1989. http://liblink.bsu.edu/uhtbin/catkey/543993.

Full text of the source
Abstract:
In today's high-tech industrialized world, we are always looking for faster and more reliable ways to produce goods. Robotics offers us a possible replacement for the human worker, but can a robot reliably perform the same tasks as a human arm, for example? The complex problem of teaching a robot to move its hand in some well-defined path can be broken down into a variety of algorithms. These path control algorithms generally compute some path description equation, which is used to generate path points either in terms of the Cartesian coordinates of the robot's work cell or the robot's joint variables. Common functions used in the path generation process include cubic spline functions and linear functions. This research project tests a variety of algorithms on a relatively simple robot in order to perform the task of drawing shapes (lines, squares, circles) on planes (horizontal and vertical) in the workcell. By studying the paths drawn, we can determine the effect of each algorithm on the path control process, as well as the effect of plane positioning, robot structure, and the robot's controller.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
17

Ibn, Khedher Hatem. "Optimization and virtualization techniques adapted to networking." Thesis, Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0007/document.

Full text of the source
Abstract:
In this thesis, we designed and implemented a tool which performs optimizations that reduce the number of migrations necessary for a delivery task. We present our work on virtualization in the context of the replication of video content servers. The work covers the design of a virtualization architecture together with several algorithms that can reduce overall costs and improve system performance. The thesis is divided into several parts: optimal solutions, greedy (heuristic) solutions for reasons of scalability, orchestration of services, multi-objective optimization, service planning in complex active networks, and the integration of the algorithms in a real platform. The thesis is supported by models, implementations and simulations which provide results that showcase our work, quantify the importance of evaluating optimization techniques, and analyze the trade-off between reducing operator cost and enhancing the end-user satisfaction index.
APA, Harvard, Vancouver, ISO, and other styles
18

Magnusson, Filip. "Evaluating Deep Learning Algorithms for Steering an Autonomous Vehicle." Thesis, Linköpings universitet, Programvara och system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153450.

Full text of the source
Abstract:
With self-driving cars on the horizon, vehicle autonomy and its problems are a hot topic. In this study we use convolutional neural networks to make a robot car avoid obstacles. The robot car has a monocular camera, and our approach is to use the images taken by the camera as input and then output a steering command. Using this method the car is to avoid any object in front of it. In order to lower the amount of training data needed, we use models that are pretrained on ImageNet, a large image database containing millions of images. The models are then trained on our own dataset, which consists of images taken directly by the robot car while driving around. The images are labeled with the steering command used while taking the image. While training, we experiment with different numbers of frozen layers. A frozen layer is a layer that has been pretrained on ImageNet but is not trained on our dataset. The Xception, MobileNet and VGG16 architectures are tested and compared to each other. We find that a lower number of frozen layers produces better results, and our best model, which used the Xception architecture, achieved 81.19% accuracy on our test set. During a qualitative test the car avoided collisions 78.57% of the time.
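A hedged sketch of the frozen-layer setup described above, using Keras and MobileNet; the frozen-layer count and the three-way steering head are assumptions for illustration:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet

base = MobileNet(weights="imagenet", include_top=False,
                 input_shape=(224, 224, 3), pooling="avg")

n_frozen = 20                     # the study varies this count; 20 is arbitrary
for layer in base.layers[:n_frozen]:
    layer.trainable = False       # keep the ImageNet weights in early layers
for layer in base.layers[n_frozen:]:
    layer.trainable = True        # fine-tune the rest on the robot-car images

model = models.Sequential([
    base,
    layers.Dense(3, activation="softmax"),  # e.g. steer left / forward / right
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, steering_labels, ...)  # dataset not shown here
```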
APA, Harvard, Vancouver, ISO, and other styles
19

Rodríguez, Ramos Julio César. "Diseño de un algoritmo metaheurístico Grasp para la mejoría de un algoritmo minincrease aplicado a la asignación eficiente de incidentes en una mesa de ayuda." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2014. http://tesis.pucp.edu.pe/repositorio/handle/123456789/6110.

Full text of the source
Abstract:
The help desk is an important area in the resolution of information technology incidents in companies, both internally (for the company itself and its employees) and externally (for the customers to whom the company offers its services and products). However, planning the resolution of incidents is difficult because of their unpredictability and spontaneity. Such incidents affect business continuity in diverse ways, with consequences and resolution times of varying magnitude. Likewise, the help desk technicians have varying resolution times and different work experience, and they are a finite number of people. This problem is known in task assignment as "online stochastic assignment". The MinIncrease algorithm solves online stochastic assignment problems. The difficulty, however, lies in the fact that the technicians are people of diverse experience who, in a help desk environment, may be divided into technicians with extensive or little experience. It is not appropriate for the best technician to be assigned trivial incidents, nor for a technician to stay idle until an incident of suitable difficulty appears. That is why the MinIncrease algorithm alone is not enough. This project presents the design of a GRASP metaheuristic algorithm to improve a MinIncrease algorithm. The combination of these algorithms allows incidents, even though they arrive unexpectedly, to be assigned to the help desk technicians efficiently.
APA, Harvard, Vancouver, ISO, and other styles
20

Thompson, Michael Blaine. "Development of a Five-Axis Machining Algorithm in Flat End Mill Roughing." BYU ScholarsArchive, 2005. https://scholarsarchive.byu.edu/etd/320.

Full text of the source
Abstract:
To further the research done in machining complex surfaces, Jensen [1993] developed an algorithm that matches the normal curvature at a point along the surface with the resultant radius formed by tilting a standard flat end mill. The algorithm, called Curvature Matched Machining (CM2), is faster and more efficient than conventional three-axis machining [Jensen 1993, Simpson 1995 & Kitchen 1996]. Despite the successes of CM2 there are still many areas available for research. Consider the machining of a mold or die. The complex nature of a mold requires at least 20-30 weeks of lead time. Of those 20-30 weeks, 50% is spent in machining, and of that time 50-65% is spent in rough machining. For a mold or die that amounts to 7 to 8 weeks of rough machining. A reduction in machining time of as little as 10-15% would therefore amount to almost one week of time savings. As can be seen, small improvements in time and efficiency in rough machining can yield significant results [Fallbohmer 1996]. This research developed an algorithm focused on reducing the overall machining time for parts and surfaces, particularly in rough machining. The algorithm incorporates principles of three-axis rough cutting with five-axis CM2, hence Rough Curvature Matched Machining (RCM2). In doing so, the algorithm "morphs" planar machining slices to the semi-roughed surface, allowing the finish pass to be completed in one pass. This roughing algorithm offers significant time savings over current roughing techniques.
APA, Harvard, Vancouver, ISO, and other styles
21

Hai-Chau, Le, Hiroshi Hasegawa, and Kenichi Sato. "Hierarchical optical path network design algorithm that can best utilize WSS/WBSS based cross-connects." IEEE, 2009. http://hdl.handle.net/2237/14026.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
22

Molina, Moreno Benjamin. "Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming." Doctoral thesis, Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/31637.

Full text of the source
Abstract:
This thesis was created within the research line on Content Distribution Mechanisms in IP Networks, which has carried out its activity in different research projects, in the course "Content Distribution Mechanisms in IP Networks" of the doctoral programme "Telecommunications" taught by the Department of Communications of the UPV and, currently, in the Master's Degree in Communication Technologies, Systems and Networks. The growth of the Internet is widely known, both in number of clients and in generated traffic. This brings clients a multimedia interface over which data, voice, video, music, etc. can converge. While this represents a business opportunity along multiple dimensions, the aspect of scalability must be addressed seriously: the average performance of a system should not degrade as the number of clients or the volume of requested information increases. The study and analysis of web content and streaming distribution using CDNs is the object of this project. The approach is a generalist one, ignoring network-layer solutions such as IP multicast, as well as resource reservation, since they are not natively available in the Internet infrastructure. This leads to the introduction of the application layer as the coordinating framework for content distribution. Among these networks, also called overlay networks, a Content Delivery Network (CDN) has been chosen. This type of application-level network is highly scalable and allows full control over the resources and functionality of all the elements of its architecture. This makes it possible to evaluate the performance of a CDN that distributes multimedia content in terms of required bandwidth, response time experienced by clients, perceived quality, distribution mechanisms, time to live when using caching, etc. CDNs were born at the end of the nineties with the main objective of eliminating or attenuating the so-called flash-crowd effect, caused by a massive influx of clients. Currently, this type of network is directing most of its efforts towards the capacity to offer streaming media over the Internet. For a detailed analysis, this thesis proposes an initial simplified CDN model, at both the theoretical and the practical level. On the theoretical side, a mathematical model that allows a CDN to be evaluated analytically is presented. This model becomes considerably more complex as new functionalities are introduced, so a simulation model is proposed and developed which allows, on the one hand, the validity of the mathematical framework to be checked and, on the other hand, a comparative framework to be established for the practical implementation of the CDN, a task carried out in the final phase of the thesis. In this way, the results obtained cover theory, simulation and practice.
Molina Moreno, B. (2013). Estudio, análisis y desarrollo de una red de distribución de contenido y su algoritmo de redirección de usuarios para servicios web y streaming [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31637
APA, Harvard, Vancouver, ISO, and other styles
23

McLaughlin, Victoria L. "Can Application of Artifact Reduction Algorithm or Increasing Scan Resolution Improve CBCT Diagnostic Accuracy of TAD - Tooth Root Contact?" The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1616485015766912.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
24

Kondamari, Pramod Sai, and Anudeep Itha. "A Deep Learning Application for Traffic Sign Recognition." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21890.

Full text of the source
Abstract:
Background: Traffic Sign Recognition (TSR) is particularly useful for novice drivers and self-driving cars. Driver Assistance Systems (DAS) involve automatic traffic sign recognition. Efficient classification of traffic signs is required in DAS and unmanned vehicles for safe navigation. Convolutional Neural Networks (CNN) are known for establishing promising results in the field of image classification, which inspired us to employ this technique in our thesis. Computer vision is a process that is used to understand images and retrieve data from them. OpenCV is a Python library used to detect traffic sign images in real time. Objectives: This study deals with an experiment to build a CNN model which can classify traffic signs in real time effectively using OpenCV. The model is built with low computational cost. The study also includes an experiment where various combinations of parameters are tuned to improve the model's performance. Methods: The experimentation method involves building a CNN model based on a modified LeNet architecture with four convolutional layers, two max-pooling layers and two dense layers. The model is trained and tested with the German Traffic Sign Recognition Benchmark (GTSRB) dataset. Parameter tuning with different combinations of learning rate and epochs is done to improve the model's performance. Later this model is used to classify images presented to the camera in real time. Results: Graphs depicting the accuracy and loss of the model before and after parameter tuning are presented. An experiment is done to classify the traffic sign image presented to the camera using the CNN model. High probability scores are achieved during the process, which are presented. Conclusions: The results show that the proposed model achieved 95% accuracy with an optimum number of epochs, i.e., 30, and the default optimum value of the learning rate, i.e., 0.001. High probabilities, i.e., above 75%, were achieved when the model was tested on new real-time data.
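A plausible Keras sketch of such a modified LeNet, with four convolutional layers, two max-pooling layers and two dense layers as described above (the filter counts and dense width are assumptions):

```python
from tensorflow.keras import layers, models

# Four convolutional layers, two max-pooling layers and two dense layers,
# matching the architecture described above; filter counts are assumptions.
model = models.Sequential([
    layers.Input(shape=(32, 32, 1)),
    layers.Conv2D(60, (5, 5), activation="relu"),
    layers.Conv2D(60, (5, 5), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(30, (3, 3), activation="relu"),
    layers.Conv2D(30, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(500, activation="relu"),
    layers.Dense(43, activation="softmax"),  # GTSRB has 43 sign classes
])
model.compile(optimizer="adam",  # Adam's default learning rate is 0.001
              loss="categorical_crossentropy", metrics=["accuracy"])
```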
APA, Harvard, Vancouver, ISO, and other styles
25

Popescu, Andrei-Alin. "Novel algorithms and methodology to help unravel secrets that next generation sequencing data can tell." Thesis, University of East Anglia, 2015. https://ueaeprints.uea.ac.uk/58571/.

Full text of the source
Abstract:
The genome of an organism is its complete set of DNA nucleotides, spanning all of its genes and also its non-coding regions. It contains most of the information necessary to build and maintain an organism. It is therefore no surprise that sequencing the genome provides an invaluable tool for the scientific study of an organism. Via the inference of an evolutionary (phylogenetic) tree, DNA sequences can be used to reconstruct the evolutionary history of a set of species. DNA sequences, or genotype data, have also proven useful for predicting an organism's phenotype (i.e. observed traits) from its genotype. This is the objective of association studies. While methods for finding the DNA sequence of an organism have existed for decades, the recent advent of Next Generation Sequencing (NGS) has meant that the availability of such data has increased to such an extent that the computational challenges that now form an integral part of biological studies can no longer be ignored. By focusing on phylogenetics and Genome-Wide Association Studies (GWAS), this thesis aims to help address some of these challenges. As a consequence, this thesis is in two parts, the first one centring on phylogenetics and the second one on GWAS. In the first part, we present theoretical insights for reconstructing phylogenetic trees from incomplete distances. This problem is important in the context of NGS data, as incomplete pairwise distances between organisms occur frequently with such input, and ignoring taxa for which information is missing can introduce undesirable bias. In the second part we focus on the problem of inferring population stratification between individuals in a dataset due to reproductive isolation. While powerful methods for doing this have been proposed in the literature, they tend to struggle when faced with the sheer volume of data that comes with NGS. To help address this problem we introduce the novel PSIKO software and show that it scales very well when dealing with large NGS datasets.
APA, Harvard, Vancouver, ISO, and other styles
26

Sievers, Jacob, and Rex Wagenius. "Algorithmic composition from text : How well can a computer generated song express emotion?" Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-146295.

Full text of the source
Abstract:
Algorithmic composition has long been a subject of research in computer science. In this report we investigate how well this knowledge can be applied to generating an emotional song from a piece of text, such as a text message, a tweet or a poem. The algorithm described in this paper uses Markov chains to generate a melody from just a sentence and a mood (happy or sad). The melody can then be output through voice software such as Vocaloid, which is used here. The results show that a simple algorithm can be used to generate songs that communicate the intended emotion.
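As an illustrative sketch of the Markov-chain idea (the note set and transition table below are invented; the thesis derives mood-dependent probabilities of its own):

```python
import random

# First-order Markov chain over note names; this transition table is invented,
# whereas the thesis uses mood-dependent transition probabilities.
happy_transitions = {
    "C": ["E", "G", "C"], "E": ["G", "C", "D"],
    "G": ["C", "E", "A"], "D": ["E", "G"], "A": ["G", "C"],
}

def generate_melody(n_notes, transitions, start="C"):
    melody, note = [start], start
    for _ in range(n_notes - 1):
        note = random.choice(transitions[note])  # uniform pick of a successor
        melody.append(note)
    return melody

# One note per word keeps the melody aligned with the input sentence.
sentence = "what a wonderful sunny day today".split()
print(generate_melody(len(sentence), happy_transitions))
```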
APA, Harvard, Vancouver, ISO, and other styles
27

Luquet, Philippe. "Horn renommage partiel et littéraux purs dans les formules CNF." Caen, 2000. http://www.theses.fr/2000CAEN2035.

Full text of the source
Abstract:
In the first part we formalize the idea that a conjunction of clauses S which is not Horn-renamable may nevertheless be partially Horn-renamable. For every formula S we show that there exists a maximal and unique subset B of the variables of S such that S is Horn-renamable relative to B. This set is called the Horn base of S. The remainder of S is obtained from S by deleting every clause that contains at least one literal over B. We show that if S has no unit clause, then S is satisfiable if and only if the remainder of S is satisfiable. We then show that this simplification process can be repeated until a formula with an empty Horn base is obtained; the formula obtained at the end is called the iterated remainder of S. We go on to define a new family of polynomial classes for the SAT problem. The notion of the Horn base also yields a simple characterization of q-Horn formulas. We show that the Horn base and the remainder of S can be computed in linear time, and that the iterated remainder can be obtained in quadratic time. The algorithms we describe rely on the implication graph expressing the constraints of the Horn renaming. The second part is devoted to the study of random SAT instances with respect to pure literals. We first carried out an experimental study on random 3-SAT instances in order to detect which formulas contain pure literals, and we observe a threshold phenomenon. We then prove analytically the existence of this threshold, in identical terms for the three most commonly used probabilistic models. Finally, we present surprising experimental results which show that random 2-SAT instances have very particular properties with respect to monotony.
APA, Harvard, Vancouver, ISO, and other styles
28

Maxim, Dorin. "Analyse probabiliste des systèmes temps réel." Phd thesis, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00923006.

Full text of the source
Abstract:
Critical real-time embedded systems integrate complex architectures that evolve constantly in order to incorporate new functionalities required by the end users of the systems (automotive, avionics, railway, etc.). These new architectures have a direct impact on the variability of the temporal behaviour of real-time systems. This variability leads to significant over-provisioning if the system design is based solely on worst-case reasoning. Probabilistic approaches propose solutions based on the probability of occurrence of the worst-case values in order to avoid over-provisioning while still satisfying real-time constraints. The main objectives of this work are to propose new analysis techniques for probabilistic real-time systems and ways of reducing the complexity of these analyses, as well as to propose optimal fixed-priority scheduling algorithms for systems with execution times described by random variables. The results we present in this work have been proven safe to use for hard real-time systems, which are the main object of our work. Our analysis of systems with several probabilistic parameters has been shown to be considerably less pessimistic than other types of analyses. This analysis, combined with optimal scheduling algorithms appropriate for probabilistic real-time systems, can help system designers better assess the feasibility of a system, in particular of systems deemed infeasible by deterministic analyses or scheduling algorithms.
APA, Harvard, Vancouver, ISO, and other styles
29

Cheifetz, Nicolas. "Détection et classification de signatures temporelles CAN pour l’aide à la maintenance de sous-systèmes d’un véhicule de transport collectif." Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1077/document.

Full text of the source
Abstract:
This thesis is mainly dedicated to the fault detection step occurring in a process of industrial diagnosis. This work is motivated by the monitoring of two complex subsystems of a transit bus which impact the availability of vehicles and their maintenance costs: the brake and the door systems. This thesis describes several tools that monitor the operation of these systems. We choose a pattern recognition approach based on the analysis of data collected from a new IT architecture on board the buses. The proposed methods sequentially detect a structural change in a data stream and take advantage of prior knowledge of the monitored systems. The detector applied to the brakes is based on the output variables (related to the brake system) of a physical dynamic model of the vehicle, which is experimentally validated in this work. The detection step is then performed by multivariate control charts on multidimensional data. The detection strategy dedicated to the doors deals with data collected by embedded sensors during opening and closing cycles, with no need for a physical model. We propose a sequential testing approach using a generative model to describe the functional data. This regression model allows multidimensional curves to be segmented into several regimes. The model parameters are estimated via a specific EM algorithm in a semi-supervised mode. The results obtained from simulated and real data highlight the effectiveness of the proposed methods for both the brakes and the doors.
APA, Harvard, Vancouver, ISO, and other styles
30

Celik, Vakkas. "Development Of Strategies For Reducing The Worst-case Messageresponse Times On The Controller Area Network." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614075/index.pdf.

Full text of the source
Abstract:
The controller area network (CAN) is the de-facto standard for in-vehicle communication. The growth of time-critical applications in modern cars leads to a considerable increase in the message traffic on CAN. Hence, it is essential to determine efficient message schedules on CAN that guarantee that all communicated messages meet their timing constraints. The aim of this thesis is to develop offset scheduling strategies that
APA, Harvard, Vancouver, ISO, and other styles
31

Onofri, Claudio. "Design e sviluppo di un nuovo algoritmo di segmentazione basato su CNN per la stima della volumetria atriale sinistra in pazienti con fibrillazione atriale." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25390/.

Full text of the source
Abstract:
In this study a new algorithm was developed for segmenting the left atrial endocardium in patients affected by atrial fibrillation. Unlike conventional segmentation techniques, a convolutional neural network called U-Net was used, trained in a supervised fashion and tested with GE MRI data and the corresponding binary segmentation labels, on volumes belonging to 154 different patients. The volumes of 123 patients (80%) were used for training, and 31 patients (20%) were dedicated to testing. CNNs require a long time for training (hours), whereas they operate feed-forward, at the speed of algebraic computation, at test time. The raw network outputs were post-processed in order to remove spurious volume components, i.e., those not connected to the left atrial endocardial (LAE) volume. Manual binary semantic segmentations of the test images were also available and were taken as the gold-standard reference. The registration between the reference binary volume and the purified network output volume was visualized for a qualitative comparison, and metrics such as the Dice coefficient and the Hausdorff distance were applied for a quantitative analysis; these measures were extracted and tabulated. On the basis of these results a statistical analysis was carried out to examine the overall behaviour of the network, using statistics such as the mean and standard deviation, weighted by the volume size of each patient and computed on the error defined as the difference between the two volumes. Overall, the network generalized well for all patients. With a mean of 12.2 cm^3 and a standard deviation of 11.7 cm^3, the network showed a tendency to overestimate the
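The Dice coefficient named above is a standard overlap metric; as a small sketch, it can be computed between a predicted and a reference binary volume as follows (the toy volumes are invented stand-ins for the network output and the manual gold standard):

```python
import numpy as np

def dice(seg, ref):
    """Dice coefficient between two binary volumes (1 = perfect overlap)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

# Toy 3-D binary volumes standing in for the network output and the manual
# gold-standard segmentation.
rng = np.random.default_rng(1)
ref = rng.random((64, 64, 32)) > 0.7
seg = ref.copy()
seg[:4] = ~seg[:4]        # perturb a few slices to simulate prediction error
print(round(dice(seg, ref), 3))
```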
32

Kiselev, Ilya. "Can algorithmic trading beat the market? : An experiment with S&P 500, FTSE 100, OMX Stockholm 30 Index." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Economics, Finance and Statistics, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-19495.

Full text of the source
Abstract:
The research at hand aims to assess the effectiveness of algorithmic trading by comparing it with different benchmarks represented by several types of indexes. How large are the returns that algorithmic trading can generate, taking into account the costs of the informational and trading infrastructure needed to implement robot trading? To get the result, it is necessary to compare two opposite trading strategies: 1) algorithmic trading, implemented by a high-frequency trading robot (based on a statistical arbitrage strategy) and a trend-following trading robot (based on the Exponential Moving Average indicator with a variable factor of smoothing); 2) an index investing strategy (classical "buy and hold" index strategies, implemented with four different types of indexes: capitalization-weighted indexing, fundamental indexing, equal-weighted indexing, and risk-based indexation/minimal variance). According to the results, at the current phase of market development it is theoretically possible for algorithmic trading (and especially high-frequency strategies) to exceed the returns of an index strategy, but two important factors should be noted: 1) taking into account all the costs of organizing high-frequency trading (brokerage and stock exchange commissions, trade-related infrastructure maintenance, etc.), the difference in returns (in favour of the high-frequency strategy) will be much smaller; 2) given that market efficiency grows every year (see further in the thesis) and that the returns of high-frequency strategies tend to decrease with time (see further in the thesis), it is logical to assume that ever larger investments in trading infrastructure will be needed to keep the returns of high-frequency trading strategies above the results of index investing strategies.
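The trend-following robot above uses an exponential moving average with a variable smoothing factor. The exact variant is not specified in the abstract; the sketch below implements one common reading, a Kaufman-style adaptation of the smoothing constant to an efficiency ratio of directional movement over total movement.

```python
import numpy as np

def adaptive_ema(prices, window=10, fast=2, slow=30):
    """EMA whose smoothing factor varies with an efficiency ratio
    (Kaufman-style adaptive moving average; illustrative, not the thesis's
    exact indicator)."""
    prices = np.asarray(prices, dtype=float)
    out = np.empty_like(prices)
    out[:window] = prices[:window]              # seed the recursion
    for t in range(window, len(prices)):
        change = abs(prices[t] - prices[t - window])
        volatility = np.abs(np.diff(prices[t - window:t + 1])).sum()
        er = change / volatility if volatility > 0 else 0.0  # efficiency ratio
        sc = (er * (2 / (fast + 1) - 2 / (slow + 1)) + 2 / (slow + 1)) ** 2
        out[t] = out[t - 1] + sc * (prices[t] - out[t - 1])
    return out

# Usage on a synthetic random-walk price series
prices = np.cumsum(np.random.default_rng(1).normal(size=250)) + 100
signal = adaptive_ema(prices)
```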
33

Singer, J. B. "Why solutions can be hard to find : a featural theory of cost for a local search algorithm on random satisfiability instances." Thesis, University of Edinburgh, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.661976.

Full text of the source
Abstract:
The local search algorithm WSAT is one of the most successful algorithms for solving the archetypal NP-complete problem of satisfiability (SAT). It is notably effective at solving RANDOM-3-SAT instances near the so-called "satisfiability threshold", which are thought to be universally hard. However, WSAT still shows a peak in search cost near the threshold and large variations in cost over different instances. Why are solutions to the threshold instances so hard to find using WSAT? What features characterise threshold instances which make them difficult for WSAT to solve? We make a number of significant contributions to the analysis of WSAT on these high-cost random instances, using the recently-introduced concept of the backbone of a SAT instance. The backbone is the set of literals which are implicates of an instance. We find that the number of solutions predicts the cost well for small-backbone instances but is much less relevant for the large-backbone instances which appear near the threshold and dominate in the overconstrained region. We undertake a detailed study of the behaviour of the algorithm during search and uncover some interesting patterns. These patterns lead us to introduce a measure of the backbone fragility of an instance, which indicates how persistent the backbone is as clauses are removed. We propose that high-cost random instances for WSAT are those with large backbones which are also backbone-fragile. We suggest that the decay in cost for WSAT beyond the satisfiability threshold, which has perplexed a number of researchers, is due to decreasing backbone fragility. Our hypothesis makes three correct predictions. First, a measure of the backbone robustness of an instance (the opposite of backbone fragility) is negatively correlated with the WSAT cost when other factors are controlled for. Second, backbone-minimal instances (which are 3-SAT instances altered so as to be more backbone-fragile) are unusually hard for WSAT. Third, the clauses most often unsatisfied during search are those whose deletion has the most effect on the backbone.
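For readers unfamiliar with WSAT, a minimal WalkSAT-style loop looks as follows. This sketch uses the total unsatisfied-clause count as its greedy criterion rather than the usual break-count, so it is only meant to show the flip dynamics studied above.

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=100_000, seed=0):
    """Minimal WSAT/WalkSAT: clauses are lists of nonzero ints (positive =
    variable, negative = negated variable). Returns an assignment or None."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign
        clause = rng.choice(unsat)
        if rng.random() < p:                       # random-walk move
            var = abs(rng.choice(clause))
        else:                                      # greedy move
            def cost(v):                           # unsat clauses after flipping v
                assign[v] = not assign[v]
                c = sum(not any(sat(l) for l in cl) for cl in clauses)
                assign[v] = not assign[v]
                return c
            var = min((abs(l) for l in clause), key=cost)
        assign[var] = not assign[var]
    return None

# (x1 v x2) & (~x1 v x3) & (~x2 v ~x3)
print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3) is not None)
```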
34

Argenta, Aline da Silva. "Sistemáticas de gestão de layout para aprimoramento dos fluxos de uma biblioteca universitária." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/172713.

Full text of the source
Abstract:
The service sector is of fundamental importance to the global economy, but the layout of organizations in this sector is typically not approached with the same intensity with which physical arrangement is discussed in industrial environments. The objective of this dissertation is to apply layout design systematics with a view to planning and improving the physical arrangement and grouping of resources of a university library. Its specific goals are the application of systematic layout planning (SLP) for the positioning of resources and the organization of the library's flows, and the adaptation of the Close Neighbor algorithm for grouping bibliographic materials (books) on shelves according to their subject area. To this end, the study first presents the characteristics of the Library of the Faculty of Pharmacy of UFRGS (where the study was applied), the analysis of the movement of people and materials, the proposed approach, and the guidelines for organizing the physical arrangement of the library and of the book collection. Among other operational procedures, meetings were held with the library staff and the direction of the Faculty of Pharmacy in order to establish priorities and define the desired characteristics for the physical arrangement of the space under study. Next, the selected layout proposal was implemented, followed by a discussion of the library's performance before and after the implementation of the new layout; this discussion was based both on numerical results (quantitative analysis) and on the perception of the team involved (qualitative analysis).
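The Close Neighbor adaptation above orders bibliographic materials by similarity. As a loose, hypothetical analogue (the thesis adapts the algorithm from cellular manufacturing; this is not its exact formulation), the sketch below greedily chains books by Jaccard similarity of invented subject tags so that related titles end up adjacent on a shelf.

```python
def jaccard(a, b):
    """Jaccard similarity between two tag sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def chain_by_similarity(items):
    """items: dict name -> set of subject tags. Greedily appends the most
    similar remaining item, producing an ordered shelf list."""
    remaining = dict(items)
    name, tags = remaining.popitem()
    order = [name]
    while remaining:
        nxt = max(remaining, key=lambda k: jaccard(tags, remaining[k]))
        tags = remaining.pop(nxt)
        order.append(nxt)
    return order

books = {"Pharmacology I": {"pharma", "clinical"},
         "Clinical Chemistry": {"clinical", "chem"},
         "Organic Chemistry": {"chem", "organic"}}
print(chain_by_similarity(books))
```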
35

Mirna, Kapetina. "Adaptivna estimacija parametara sistema opisanih iracionalnim funkcijama prenosa." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2017. https://www.cris.uns.ac.rs/record.jsf?recordId=105033&source=NDLTD&language=en.

Full text of the source
Abstract:
The subject of this research is system identification and adaptive parameter estimation for a wide class of linear processes. The proposed approaches for adaptive parameter estimation can be applied to systems described by transfer functions of arbitrary form, including systems with delays, distributed-parameter systems, fractional-order systems, and other systems described by irrational transfer functions. Finally, an offline algorithm for the identification of a CNG system is proposed which does not assume any a priori known model structure.
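To make the setting concrete, the sketch below fits the parameters of one irrational transfer function, a gain-delay-pole model G(s) = K e^{-Ls}/(s + a), to noisy frequency-response data by batch nonlinear least squares. The thesis's contribution is adaptive (online) estimation, so this is only an illustration of the model class, with invented parameter values.

```python
import numpy as np
from scipy.optimize import least_squares

def G(params, w):
    """Irrational transfer function K * exp(-L s) / (s + a) on the jw axis."""
    K, L, a = params
    s = 1j * w
    return K * np.exp(-L * s) / (s + a)

w = np.logspace(-2, 1, 200)                      # frequency grid (rad/s)
true = (2.0, 0.5, 1.3)                           # gain, delay, pole (assumed)
noise = 0.01 * np.random.default_rng(2).normal(size=w.size * 2).view(complex)
data = G(true, w) + noise                        # "measured" frequency response

resid = lambda p: np.abs(G(p, w) - data)         # real-valued residuals
fit = least_squares(resid, x0=[1.0, 0.1, 1.0], bounds=([0, 0, 0], np.inf))
print(fit.x)  # should be close to (2.0, 0.5, 1.3)
```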
36

Zhao, Liyi. "Video analysis of head kinematics in boxing matches using OpenCV library under Macintosh platform : How can the Posit algorithm be used in head kinematic analysis?" Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-182170.

Full text of the source
Abstract:
The division of Neuronic Engineering at KTH focuses its research on head and neck biomechanics. Finite Element (FE) models of the human neck and head have been developed to study neck and head kinematics as well as injurious loadings of various kinds; the overall objective is to improve injury prediction through accident reconstruction. This project aims to provide an image analysis tool that helps analysts build models of head motion and make good estimates of head movements, rotation speed, and velocity during head collisions. The applicability of this tool is a predefined set of boxing match videos; the methodology, however, can be extended to the analysis of other kinds of moving head objects. The user of the analysis tool should have a basic idea of how its different functionalities work and how to handle it properly. This project is a computer programming work which involves a study of the background, a study of the methodology, and a programming phase which produces the results of the study.
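Regarding the POSIT question in the title: the legacy cvPOSIT call has disappeared from modern OpenCV bindings, but cv2.solvePnP solves the same 3D-model-to-2D-image pose problem. The sketch below uses made-up head landmarks and camera intrinsics purely to show the call pattern.

```python
import numpy as np
import cv2

# Rough 3-D head landmarks in mm (placeholder model points)
model_pts = np.array([[0, 0, 0], [0, -60, -20], [-40, 30, -30],
                      [40, 30, -30]], dtype=np.float64)
# Corresponding tracked pixel positions in one video frame (placeholders)
image_pts = np.array([[320, 240], [318, 300], [280, 210],
                      [360, 212]], dtype=np.float64)
f = 800.0                                          # assumed focal length (px)
K = np.array([[f, 0, 320], [0, f, 240], [0, 0, 1]])

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix for this frame
print(ok, tvec.ravel())      # differencing R, tvec over frames gives
                             # rotation speed and velocity estimates
```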
37

Zeloufi, Mohamed. "Développement d’un convertisseur analogique-numérique innovant dans le cadre des projets d’amélioration des systèmes d’acquisition de l’expérience ATLAS au LHC." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT115.

Full text of the source
Abstract:
By 2024, the ATLAS experiment plans to operate at luminosities 10 times higher than in the current configuration, so much of the readout electronics must be upgraded. This upgrade is also made necessary by the damage caused by years of accumulated radiation and by device aging. A new Front-End Board (FEB) will be designed for the LAr calorimeter readout electronics. A key device of this board is a radiation-hard Analog-to-Digital Converter (ADC) featuring a resolution of 12 bits at a 40 MS/s sampling rate. Given the large number of readout channels, this ADC must display low power consumption and a small area to ease a multichannel design. The goal of this thesis is to design an innovative ADC that meets these specifications. A Successive Approximation Register (SAR) architecture was selected: it has low power consumption, and many recent works have shown its high compatibility with modern CMOS scaling technologies. However, the SAR has some limitations related to comparator decision errors and mismatches in the capacitor array. Using Matlab, models of two 12-bit SAR-ADC prototypes were created and used to study these limitations carefully, to evaluate their robustness, and to determine how it could be improved in the digital domain. The designs were then made in an IBM 130 nm CMOS technology validated by the ATLAS collaboration for its radiation hardness. Both prototypes use a redundant search algorithm with 14 conversion steps, which tolerates some margin of comparator decision errors and opens the way to a digital calibration of the effects of capacitor mismatch. The digital part of the ADCs is greatly simplified to reduce command-generation delays and dynamic power consumption. This logic follows a monotonic capacitor-switching algorithm, which saves about 70% of the dynamic power consumption compared to the conventional switching algorithm. Using this algorithm, a 50% reduction of the total capacitance is also achieved when our first prototype, with a one-segment capacitive DAC, is compared with a classic SAR architecture. To push the gains in area and consumption further, a second prototype was made by introducing a two-segment DAC array. This brought additional benefits: compared to the first prototype, the occupied area is reduced by a factor of 7.64, the total equivalent capacitance is divided by 12, and the power consumption is improved by a factor of 1.58. The two ADCs consume ~10.3 mW and ~6.5 mW, respectively, and occupy areas of ~2.63 mm2 and ~0.344 mm2. A foreground digital calibration algorithm was used to compensate the capacitor mismatch effects, and high-frequency open-loop reference-voltage buffers were designed to allow high-speed, high-accuracy charging and discharging of the DAC capacitor array. In electrical simulations, both prototypes reach an ENOB better than 11 bits while operating at 40 MS/s, with simulated INL of +1.14/-1.1 LSB and +1.66/-1.72 LSB, respectively. The preliminary test results of the first prototype are very close to those of a commercial 12-bit reference ADC on our test board: after calibration, it reaches an ENOB of 10.5 bits and an INL of +1/-2.18 LSB. However, due to a test-board failure, the measurements of the second prototype are less accurate; in these circumstances it reaches an ENOB of 9.77 bits and an INL of +7.61/-1.26 LSB. Furthermore, the current test board limits the operating speed to ~9 MS/s. An improved board has therefore been designed to achieve a better ENOB at the targeted 40 MS/s; the new measurements will be published in the future.
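The core of any SAR converter is a comparator-driven binary search. The toy model below shows that principle for an ideal 12-bit converter; a design like the one above additionally uses 14 redundant steps, a segmented capacitive DAC, and monotonic switching, none of which are modelled here.

```python
def sar_convert(vin, vref=1.0, bits=12):
    """Ideal SAR conversion loop: test each bit from MSB to LSB, keeping it
    when the comparator sees the input above the trial DAC level."""
    code = 0
    for k in reversed(range(bits)):
        trial = code | (1 << k)                 # tentatively set bit k
        dac = vref * trial / (1 << bits)        # DAC output for this code
        if vin >= dac:                          # comparator decision
            code = trial                        # keep the bit
    return code

lsb = 1.0 / 4096
print(sar_convert(0.6180))          # ideal code, ~ floor(0.6180 * 4096)
print(sar_convert(0.6180) * lsb)    # reconstructed voltage, within 1 LSB
```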
38

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text of the source
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors mounted on the vehicles are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach, studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly, since it is unable to measure the distance to objects; here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
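The causal/non-causal contrast above can be made concrete with a constant-velocity Kalman filter followed by a Rauch-Tung-Striebel smoother, one standard smoothing choice (the abstract does not prescribe this exact smoother). The backward pass refines each state using future measurements, which is only possible offline on recorded sequences.

```python
import numpy as np

def kf_rts(zs, dt=0.1, q=1.0, r=0.5):
    """1-D position measurements -> filtered and RTS-smoothed [pos, vel]."""
    F = np.array([[1, dt], [0, 1]]); H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x, P = np.zeros(2), np.eye(2) * 10
    xs, Ps, xps, Pps = [], [], [], []
    for z in zs:                                   # forward (causal) pass
        xp, Pp = F @ x, F @ P @ F.T + Q
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)
        x = xp + (K @ (z - H @ xp)).ravel()
        P = (np.eye(2) - K @ H) @ Pp
        xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
    xs_s = [xs[-1]]                                # backward (non-causal) pass
    for t in range(len(zs) - 2, -1, -1):
        C = Ps[t] @ F.T @ np.linalg.inv(Pps[t + 1])
        xs_s.insert(0, xs[t] + C @ (xs_s[0] - xps[t + 1]))
    return np.array(xs), np.array(xs_s)

zs = np.sin(np.linspace(0, 3, 31)) + 0.2 * np.random.default_rng(3).normal(size=31)
filtered, smoothed = kf_rts(zs)   # smoothed states are typically less noisy
```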
39

Medina, Nolazco Javier Denis. "Implementación de un algoritmo genético para la optimización de flujo vehicular aplicado a la fase de tiempos en las intersecciones de un corredor vial." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2015. http://tesis.pucp.edu.pe/repositorio/handle/123456789/7096.

Full text of the source
Abstract:
This final-year project seeks to contribute a possible solution to the traffic problem on Lima's main roads. Taking advantage of the existing traffic-light infrastructure, this work focuses on modifying and optimizing the phase times of the traffic lights to achieve an adequate traffic flow. The behaviour of traffic flow at the intersections of a road corridor is studied experimentally, and a genetic algorithm is proposed to adapt these phase times so as to help reduce the time lost in traffic.
Thesis
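As an illustration of the approach described above, the sketch below runs a small genetic algorithm over green-phase durations for a corridor of intersections; the fitness function is a placeholder that any delay model or microsimulation would replace, and all parameters are invented.

```python
import random

N_INTERSECTIONS, LO, HI = 5, 10.0, 90.0       # green-time bounds in seconds

def total_delay(phases):
    """Placeholder fitness: pretend 45 s is ideal at every intersection."""
    return sum((g - 45.0) ** 2 for g in phases)

def evolve(pop_size=40, gens=100, pm=0.2, seed=4):
    rng = random.Random(seed)
    pop = [[rng.uniform(LO, HI) for _ in range(N_INTERSECTIONS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            # tournament selection of two parents
            a, b = (min(rng.sample(pop, 3), key=total_delay) for _ in range(2))
            # uniform crossover
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < pm:                 # gaussian mutation of one gene
                i = rng.randrange(N_INTERSECTIONS)
                child[i] = min(HI, max(LO, child[i] + rng.gauss(0, 5)))
            nxt.append(child)
        pop = nxt
    return min(pop, key=total_delay)

print([round(g, 1) for g in evolve()])
```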
40

Andersson, Robin. "Combining Anomaly- and Signaturebased Algorithms for IntrusionDetection in CAN-bus : A suggested approach for building precise and adaptiveintrusion detection systems to controller area networks." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43450.

Full text of the source
Abstract:
With the digitalization and ever greater computerization of personal vehicles, new attack surfaces are introduced, challenging the security of the in-vehicle network. No computer system can ever be fully secured, nor can all attack methods be learned in advance so as to prevent every break-in. Instead, with sophisticated methods, we can focus on detecting and preventing attacks performed against a system. The current state of the art of such methods, named intrusion detection systems (IDS), is divided into two main approaches. One approach makes its models very confident in detecting malicious activity, but only activity that the model has previously learned. The second approach is very good at constructing models for detecting any type of malicious activity, even kinds never studied by the model before, but with less confidence. In this thesis, a new approach is suggested with a redesigned architecture for an intrusion detection system called Multi-mixed IDS, taking a middle ground between the two standardized approaches and trying to combine both sides' strengths while eliminating their weaknesses. The thesis aims to deliver a proof of concept for a new approach within the current state of the art of the CAN-bus security research field. It also provides background knowledge about CAN and intrusion detection systems, discussing their strengths and weaknesses in further detail, along with a brief overview of a handpicked set of research contributions from the field. Further, a simple architecture is suggested: three individual detection models are trained, combined, and tested against a CAN-bus dataset. Finally, the results are examined and evaluated. The suggested approach shows somewhat poor results compared to other algorithms proposed within the field, but it also shows good potential if better decision methods can be found between the individual algorithms that constitute the model.
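A hedged, minimal sketch of the "multi-mixed" idea follows: a signature rule set and a timing-anomaly check over CAN frames, fused with a simple OR. The decision logic between detectors is precisely what the thesis identifies as the open problem; all IDs and thresholds here are toy values.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    can_id: int
    dlc: int
    interarrival_ms: float

KNOWN_BAD_IDS = {0x7FF}                        # toy signature database

def signature_alert(f: Frame) -> bool:
    """Signature side: flag frames matching known-attack rules."""
    return f.can_id in KNOWN_BAD_IDS

def anomaly_alert(f: Frame, expected_period=10.0, tol=0.5) -> bool:
    """Anomaly side: normal CAN traffic is periodic, so a large timing
    deviation from the learned period is suspicious."""
    return abs(f.interarrival_ms - expected_period) / expected_period > tol

def mixed_ids(f: Frame) -> bool:
    """Naive fusion: alert if either detector fires."""
    return signature_alert(f) or anomaly_alert(f)

frames = [Frame(0x120, 8, 10.1), Frame(0x7FF, 8, 10.0), Frame(0x120, 8, 1.2)]
print([mixed_ids(f) for f in frames])          # [False, True, True]
```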
41

Karlsson, Daniel M. "Mapping The Valleys of The Uncanny : An investigation into a process and method, colliding with questions relating to what can be known to be real, within the field of algorithmic composition. Or if you prefer: The roles of instrumentation and timbre, as they unwittingly conspire to designate access, power, status, work and ultimately class." Thesis, Kungl. Musikhögskolan, Institutionen för komposition, dirigering och musikteori, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kmh:diva-3115.

Full text of the source
Abstract:
We are free from the shackles of the finite and of the physical world. Sound now enjoys morphological freedom through a myriad of transformations; it is malleable to the utmost degree. We have at our disposal an astounding plethora of tools with which we can manipulate and organise sound. This thesis project is a collection of musical materials that explore the idea of The Uncanny Valley as it relates to music being real, fake, or some strange combination of the two. This thesis project is primarily one in which I produce sound files; in a secondary capacity, I am also producing a text file. In this text I aim to present some of my thoughts on how my work writing code and making music might be connected, in some hopefully interesting ways, to my field. I am unlikely to be able to adequately convey my own origin myth. Instead I will focus on stories I have been told about music, throughout my life, inside and outside of academia. I have a strong suspicion that these stories have shaped my coming into being as a composer. However difficult the task of introspection, and ultimately of knowing oneself, proves to be, I at least regard these stories as a source of clues as to why I am driven to do the things that I do.
42

Cisterna, Malloco César Enrique. "Segmentación de clientes activos de una entidad financiera empleando el algoritmo de K-means y árbol de decisión." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2021. https://hdl.handle.net/20.500.12672/17359.

Full text of the source
Abstract:
The financial institution has currently identified its clients according to their interaction with physical and digital channels, distinguishing active clients (42%) from inactive clients (58%), so it is fundamental to be able to carry out differentiated commercial actions on this universe of clients. An active client is defined as one who has performed monetary and non-monetary operations through the bank's digital channels within the last six months, or who has performed operations through physical channels within the last six months. The business areas in charge of campaigns therefore decided to prioritize commercial action on active clients, around one million seven hundred thousand clients per month. However, different commercial actions are desired according to the profile of active clients, since not all of them share the same profile. The present work therefore consists of the segmentation of active clients, developed within the Business Analytics area, which is responsible for client profiling and segmentation. Through this segmentation, business managers will be able to carry out commercial actions to manage the established KPIs, namely cross-selling, the use of credit or debit cards, and increased use of digital channels. The segmentation provides an accurate view of the profile of active clients, making it possible to offer products that fit their needs and thereby increase the KPIs.
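A common way to implement this kind of segmentation, sketched below with invented features, is to cluster clients with K-means and then fit a shallow decision tree on the cluster labels so that each segment can be read off as business rules; the thesis pairs these same two algorithms.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
# Placeholder client features: digital ops, branch ops, average balance
X = np.column_stack([rng.poisson(6, 1000),
                     rng.poisson(2, 1000),
                     rng.gamma(2.0, 500, 1000)])

# Stage 1: discover segments
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Stage 2: make the segments explainable as rules
tree = DecisionTreeClassifier(max_depth=3).fit(X, labels)
print(export_text(tree, feature_names=["digital_ops", "branch_ops", "balance"]))
```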
43

Angeles, Bocanegra Oscar Raúl, and Quispe Cesar Abel Melgarejo. "Algoritmo de clustering utilizando k-means e índice de validación Rose turi para la segmentación de clientes de la Caja Rural Prymera." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2012. https://hdl.handle.net/20.500.12672/12131.

Full text of the source
Abstract:
Companies today need to exploit the information they hold about their clients. In particular, Caja Prymera needs to identify groups of clients in order to direct its resources and efforts to each group individually. Clustering techniques are very useful for obtaining groups that internally share similar characteristics while being heterogeneous among themselves. A study was therefore carried out to select the most suitable technique for the client segmentation problem; the K-means algorithm complemented with the Rose Turi validity index was chosen for its low computational cost, its ease of implementation, and its ability to determine the optimal number of clusters. Additionally, to validate the efficiency of the proposed technique, the Davies-Bouldin index was implemented for comparison with Rose Turi. The results indicate that the proposed technique obtained clusters with an effectiveness 25% higher than that obtained with the Davies-Bouldin index, and that in processing-time efficiency the proposed technique is 17% better.
Professional sufficiency work
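The validity index named above appears in the clustering literature as the Ray-Turi index (the spelling "Rose Turi" is presumably a variant of it). The sketch below computes a Ray-Turi-style intra/inter ratio for several values of k alongside scikit-learn's Davies-Bouldin score, mirroring the comparison made in the abstract on synthetic data.

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def ray_turi(X, labels, centers):
    """Intra-cluster compactness divided by the minimum squared distance
    between centroids; smaller is better, so the k minimizing it is kept."""
    intra = np.mean(np.sum((X - centers[labels]) ** 2, axis=1))
    inter = min(np.sum((a - b) ** 2) for a, b in combinations(centers, 2))
    return intra / inter

# Three well-separated synthetic client groups
X = np.vstack([np.random.default_rng(6).normal(loc, 0.4, (150, 2))
               for loc in ([0, 0], [4, 0], [2, 3])])

for k in range(2, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k,
          round(ray_turi(X, km.labels_, km.cluster_centers_), 3),
          round(davies_bouldin_score(X, km.labels_), 3))
```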
44

Morcillo, Suárez Carlos. "Analysis of genetic polymorphisms for statistical genomics: tools and applications." Doctoral thesis, Universitat Pompeu Fabra, 2011. http://hdl.handle.net/10803/78126.

Full text of the source
Abstract:
New approaches are needed to manage and analyze the enormous quantity of biological data generated by modern technologies. Existing solutions are often fragmented and uncoordinated and thus require considerable bioinformatics skills from users. Three applications have been developed, illustrating different strategies to help users without extensive IT knowledge get the most out of their data. SNPator is an easy-to-use suite that integrates all the usual tools for genetic association studies, from initial quality control procedures to final statistical analysis. CHAVA is an interactive visual application for CNV calling from aCGH data; it presents data visually to help assess the quality of the calling and assists in the optimization process. Haplotype Association Pattern Analysis visually presents data from exhaustive genomic haplotype associations, so that users can recognize patterns of possible associations that cannot be detected by single-SNP tests.
45

Buchholz, Sven. "Adaptivitätssensitive Platzierung von Replikaten in Adaptiven Content Distribution Networks." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2005. http://nbn-resolving.de/urn:nbn:de:swb:14-1120985057132-65763.

Full text of the source
Abstract:
Adaptive Content Distribution Networks (A-CDNs) are application-independent, distributed infrastructures that use content adaptation and distributed replication of contents to allow the scalable delivery of adaptable multimedia contents to heterogeneous clients. The replica placement in an A-CDN is controlled by the placement mechanisms of the A-CDN. As opposed to traditional CDNs, which do not take content adaptation into consideration, a replica placement mechanism in an A-CDN has to decide not only which object shall be stored in which surrogate but also which representation or representations of the object to replicate. Traditional replica placement mechanisms are incapable of taking different representations of the same object into consideration; A-CDNs that use them may therefore only replicate generic or statically adapted representations. The replication of statically adapted representations reduces the sharing of the replicas, while the replication of generic representations results in adaptation costs and delays with every request. That is why this dissertation proposes the application of adaptation-aware replica placement mechanisms. By taking the adaptability of the contents into account, adaptation-aware replica placement mechanisms may replicate generic, statically adapted, and even partially adapted representations of an object, and are thus able to balance between static and dynamic content adaptation. The dissertation is targeted at evaluating the performance advantages of taking knowledge about the adaptability of contents into consideration when calculating a placement of replicas in an A-CDN. The problem of adaptation-aware replica placement is therefore formalized as an optimization problem; algorithms for solving it are proposed and implemented in a simulator. The underlying simulation model describes an Internet-wide distributed A-CDN used for the delivery of JPEG images to heterogeneous mobile and stationary clients. Based on this model, the performance of the adaptation-aware replica placement mechanisms is evaluated and compared to that of traditional replica placement mechanisms. The simulations prove that the adaptation-aware approach is superior to traditional replica placement mechanisms in many cases, depending on the system and load model as well as the storage capacity of the surrogates of the A-CDN. However, if the loads of different types of clients hardly overlap, or with sufficient storage capacity within the surrogates, the adaptation-aware approach has no significant advantages over the application of traditional replica placement mechanisms.
46

Lorenzo, Valentín Gil. "Detección de rasgos en imágenes con ruido: Una aproximación con funciones LISA en procesos puntuales espaciales." Doctoral thesis, Universitat Jaume I, 2005. http://hdl.handle.net/10803/10496.

Full text of the source
Abstract:
This doctoral thesis takes up a problem of real interest: the detection of clusters of points, which we call features, in digitized images, in the presence of other points that are not of interest, which we call noise. The goal is to separate and classify them. The latest contributions in this field assume no probability model for the spatial distribution of the points in the image, using distances to the k-th nearest neighbour, varying the value of k, and applying the EM algorithm. Other works, which serve as a starting point, define local functions that capture second-order characteristics around each individual point. Combining these two ideas, why not compute vectors of LISA functions associated with each point, capturing second-order (i.e. aggregation) characteristics of the point pattern, and classify the functions into feature and noise, so that the corresponding original points are classified as well? This is the motivation of the thesis, and its development is the process followed to obtain satisfactory results. In particular, the thesis begins with a chapter on the basic concepts of spatial point processes. It goes on to present the techniques used so far, and then develops the necessary methodology for individual second-order LISA functions. Two chapters, based on simulation studies and real cases, analyse the multivariate methods of multidimensional scaling and clustering, as well as different types of distances between LISA functions.
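One ingredient mentioned above, separating feature points from noise via distances to the k-th nearest neighbour, is easy to sketch; the thesis itself goes further and classifies per-point LISA function vectors, which this toy example does not do.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
cluster = rng.normal([0.5, 0.5], 0.03, (120, 2))     # a dense "feature"
noise = rng.uniform(0, 1, (200, 2))                  # background noise
pts = np.vstack([cluster, noise])

k = 8
dists, _ = cKDTree(pts).query(pts, k=k + 1)          # column 0 is the point itself
dk = dists[:, k]                                     # distance to k-th neighbour
threshold = np.median(dk)                            # crude two-class split
is_feature = dk < threshold

# Fraction classified as feature among true cluster points vs true noise
print(is_feature[:120].mean(), is_feature[120:].mean())
```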
47

Dos, Reis Daniel. "Évaluation électromagnétique en régime diffusif de défauts et objets 3D enfouis : du modèle d'interaction à l'inversion de données." Paris 7, 2001. http://www.theses.fr/2001PA077186.

Full text of the source
48

CHEIFETZ, Nicolas. "Détection et classification de signatures temporelles CAN pour l'aide à la maintenance de sous-systèmes d'un véhicule de transport collectif." Phd thesis, Université Paris-Est, 2013. http://tel.archives-ouvertes.fr/tel-01066894.

Full text of the source
Abstract:
The problem studied in this thesis essentially concerns the fault detection step of an industrial diagnosis process. This work is motivated by the monitoring of two complex subsystems of a transit bus that impact vehicle availability and maintenance costs: the braking system and the door system. The thesis describes several tools dedicated to monitoring the operation of these two systems. We choose a pattern recognition approach based on the analysis of data collected in operation from a new telematics architecture embedded in the buses. The proposed methods sequentially detect a structural change in a data stream processed on the fly and integrate available knowledge about the monitored systems. The detector applied to the brakes relies on the output variables (related to braking) of a physical dynamic model of the vehicle, which is experimentally validated within this work; the detection step is then carried out by multivariate control charts on multidimensional data. The detection strategy dedicated to the doors deals directly with data collected by embedded sensors during opening and closing cycles, without an a priori physical model. We propose a sequential hypothesis test fed by a generative model that represents the functional data. This regression model segments multidimensional curves into several regimes, and its parameters are estimated by an EM-type algorithm in a semi-supervised mode. The results obtained from real and simulated data demonstrate the effectiveness of the proposed methods for both the brake and the door studies.
49

Escamilla, Fuster Joan. "Eficiencia Energética y Robustez en Problemas de Scheduling." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/64062.

Full text of the source
Abstract:
Many industrial problems can be modelled as scheduling problems in which resources are assigned to tasks so as to minimize the completion time, the use of resources, idle time, etc. Several scheduling problems exist that represent different kinds of situations appearing in real-world problems, the job-shop scheduling problem (JSP) being the most widely used. In JSP there are different jobs, every job has different tasks, and these tasks have to be executed by different machines. JSP can be extended to other problems in order to model more realistic settings. In this work we use the job shop with operators, JSO(n,p), where each task must also be assisted by one operator from a limited set of them. Additionally, we extend the classical JSP to a job-shop scheduling problem where machines can consume different amounts of energy to process tasks at different rates (JSMS); in JSMS, each operation has to be executed by a machine that can work at different speeds. Scheduling problems consider optimization indicators such as processing time, quality, and cost; however, governments and companies are also interested in energy consumption due to the rising demand and price of fuel, the reduction in energy commodity reserves, and growing concern about global warming. In this thesis, we develop new metaheuristic search techniques to model and solve the JSMS problem. Robustness is a common feature in real-life problems: a system persists if it remains running and maintains its main features despite continuous perturbations, changes, or incidences. We develop a technique to solve the JSO(n,p) problem with the aim of obtaining optimized and robust solutions. We also develop a dual model that relates optimality criteria with energy consumption and robustness/stability in the JSMS problem. This model is committed to protecting dynamic tasks against further incidences in order to obtain robust and energy-aware solutions, and it is evaluated with a memetic algorithm to compare its behaviour against the original model. In the JSMS problem there is a relationship between energy efficiency, robustness, and makespan, so the relationship between these three objectives is studied and analytical formulas are proposed to analyse it. The results show the trade-off between makespan and robustness, and the direct relationship between robustness and energy efficiency. To reduce the makespan and process the tasks faster, energy consumption has to be increased. When energy consumption is low, it is because the machines are not working at their highest speed; if an incidence appears, the speed of these machines can be increased in order to recover the time lost to the incidence. Hence robustness is directly related to energy consumption. Robustness is also directly related to makespan because, when the makespan increases, there are more gaps in the solution, and incidences can be absorbed by these natural buffers. The combination of robustness and stability gives the proposal added value: even when an incidence cannot be directly absorbed by the disrupted task, it can be repaired by involving only a small number of tasks. Finally, we propose two different techniques to manage rescheduling in the JSMS problem. This work represents a breakthrough in the state of the art of scheduling problems and, in particular, of the problem where energy consumption can be controlled by the rate of the machines.
Escamilla Fuster, J. (2016). Eficiencia Energética y Robustez en Problemas de Scheduling [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/64062
Thesis
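The speed/energy/robustness coupling described above can be miniaturized to a single task, as below: running below top speed costs time but saves energy and leaves slack that can absorb an incidence by speeding up. All numbers are invented.

```python
# Toy model of the JSMS speed/energy trade-off: a task of fixed work can run
# at several machine speeds; higher speed shortens the schedule but consumes
# more energy, and running below top speed leaves recoverable headroom.
SPEEDS = {1.0: 10.0, 1.5: 18.0, 2.0: 30.0}   # speed -> power (energy per time)

def task_cost(work, speed):
    duration = work / speed
    energy = SPEEDS[speed] * duration
    return duration, energy

for s in SPEEDS:
    d, e = task_cost(work=12.0, speed=s)
    slack = d - task_cost(12.0, 2.0)[0]       # time recoverable at max speed
    print(f"speed {s}: duration {d:.1f}, energy {e:.1f}, "
          f"recoverable slack {slack:.1f}")
```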
50

Chandran, Sarath, and Mathews Jithin Abraham. "Simulation and Optimization of CNC controlled grinding processes : Analysis and simulation of automated robot finishing process." Thesis, Högskolan i Halmstad, Maskinteknisk produktframtagning (MTEK), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-30709.

Full text of the source
Abstract:
Products with complicated shapes require a superior surface finish to perform their intended function. Despite significant developments in technology, finishing operations are still performed semi-automatically or manually, relying on the skills of the machinist. The pressure to produce products of the best quality in the shortest lead time has made it highly inconvenient to depend on traditional methods; there is thus a rising need for automation, which has become a resource for remaining competitive in the manufacturing industry. The diminishing return of trading quality for time in finishing operations signifies the importance of having a pre-determined trajectory (tool path) that produces an optimum surface in the least possible machining time. Tool path optimization for finishing processes considering tool kinematics currently receives relatively little attention; the available automation in grinding processes centres on the dynamics of machining. In this work we provide an overview of optimizing the tool path using evolutionary algorithms, considering the significance of process dynamics and kinematics. The process efficiency of the generated tool movements is studied based on an evaluation of the relative importance of the finishing parameters. Surface quality is analysed using MATLAB, and optimization is performed on the basis of peak-to-valley height. Surface removal characteristics are analysed based on the process variables that most likely impact surface finish. The research results indicate that the tool path is the most significant parameter determining the surface quality of a finishing operation. The inter-dependency of parameters was also studied using Taguchi design of experiments. Possible combinations of various tool paths and tool-influencing parameters are presented to realize a surface that exhibits the lowest errors.
European Horizon 2020 Project SYMPLEXITY
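Since the optimization above is driven by peak-to-valley height, here is a small sketch of that objective on a synthetic profile: a total peak-to-valley value (Rt-style) and a five-zone average (Rz-style). This is a generic surface metric, not the thesis's MATLAB implementation.

```python
import numpy as np

def peak_to_valley(profile):
    """Rt-style metric: highest peak minus deepest valley of the profile."""
    return float(np.max(profile) - np.min(profile))

def rz_five_zone(profile):
    """Rz-style metric: mean peak-to-valley over five equal zones, a more
    robust variant that damps the effect of a single outlier spike."""
    zones = np.array_split(np.asarray(profile, float), 5)
    return float(np.mean([z.max() - z.min() for z in zones]))

x = np.linspace(0, 2 * np.pi, 500)
profile = 0.8 * np.sin(12 * x) + 0.1 * np.random.default_rng(8).normal(size=x.size)
print(peak_to_valley(profile), rz_five_zone(profile))
```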