Doctoral dissertations on the topic "Algorithm efficiency"


Consult the 50 best doctoral dissertations for your research on the topic "Algorithm efficiency".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided the relevant details are available in the metadata.

Browse doctoral dissertations from a wide range of disciplines and compile the corresponding bibliographies.

1

Morgan, Wiley Spencer. "Increasing the Computational Efficiency of Combinatoric Searches". BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6528.

Full text of the source
Abstract:
A new algorithm for the enumeration of derivative superstructures of a crystal is presented. The algorithm will help increase the efficiency of computational material design methods such as cluster expansion by increasing the size and diversity of the types of systems that can be modeled. Modeling potential alloys requires the exploration of all possible configurations of atoms. Additionally, modeling the thermal properties of materials requires knowledge of the possible ways of displacing the atoms. One solution to finding all symmetrically unique configurations and displacements is to generate the complete list of possible configurations and remove those that are symmetrically equivalent. This approach, however, suffers from the combinatoric explosion that happens when the supercell size is large, when there are more than two atom types, or when atomic displacements are included in the system. The combinatoric explosion is a problem because the large number of possible arrangements makes finding the relatively small number of unique arrangements for these systems impractical. The algorithm presented here is an extension of an existing algorithm [Hart & Forcade (2008a); Hart & Forcade (2009a); Hart, Nelson & Forcade (2012a)] to include the extra configurational degree of freedom from the inclusion of displacement directions. The algorithm makes use of another recently developed algorithm for the Pólya counting theorem [Pólya (1937); Pólya & Read (1987); Rosenbrock, Morgan, Hart, Curtarolo & Forcade (2015)] to inform the user of the total number of unique arrangements before performing the enumeration and to ensure that the list of unique arrangements will fit in system memory. The algorithm also uses group theory to eliminate large classes of arrangements rather than eliminating arrangements one by one. The three major topics of this paper are presented in this order: first the Pólya algorithm, second the new algorithm for eliminating duplicate structures, and third the algorithm's extension to include displacement directions. With these tools, it is possible to avoid the combinatoric explosion and enumerate previously inaccessible systems, including those that contain displaced atoms.
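The counting step described in this abstract rests on the Pólya/Burnside idea: the number of symmetrically distinct arrangements is the group average of the arrangements left fixed by each symmetry operation. A minimal illustrative sketch of that principle, using a toy ring of four sites under rotations (the group, site count and colors are hypothetical inputs, not the paper's crystal-specific code):

```python
def count_distinct_colorings(n_sites, n_colors, group):
    """Burnside/Polya count: average, over the symmetry group, of the number
    of colorings left fixed by each permutation.

    group: list of permutations; perm[i] gives the image of site i.
    A coloring is fixed by a permutation iff it is constant on each cycle,
    so that permutation fixes exactly n_colors ** (number of cycles) colorings.
    """
    total = 0
    for perm in group:
        seen, cycles = set(), 0
        for start in range(n_sites):
            if start in seen:
                continue
            cycles += 1
            i = start
            while i not in seen:          # walk one cycle of the permutation
                seen.add(i)
                i = perm[i]
        total += n_colors ** cycles
    return total // len(group)            # Burnside's lemma

# Toy example: 4 lattice sites on a ring, symmetry group = cyclic rotations.
rotations = [tuple((i + r) % 4 for i in range(4)) for r in range(4)]
print(count_distinct_colorings(4, 2, rotations))   # -> 6 distinct arrangements
```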
2

Batbayar, Batsukh. "Improving Time Efficiency of Feedforward Neural Network Learning". RMIT University. Electrical and Computer Engineering, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090303.114706.

Full text of the source
Abstract:
Feedforward neural networks have been widely studied and used in many applications in science and engineering. The training of this type of network is mainly undertaken using the well-known backpropagation based learning algorithms. One major problem with this type of algorithm is the slow training convergence speed, which hinders their applications. In order to improve the training convergence speed of this type of algorithm, many researchers have developed different improvements and enhancements. However, the slow convergence problem has not been fully addressed. This thesis makes several contributions by proposing new backpropagation learning algorithms based on the terminal attractor concept to improve the existing backpropagation learning algorithms such as the gradient descent and Levenberg-Marquardt algorithms. These new algorithms enable fast convergence both far from and close to the ideal weights. In particular, a new fast convergence mechanism is proposed which is based on the fast terminal attractor concept. Comprehensive simulation studies are undertaken to demonstrate the effectiveness of the proposed backpropagation algorithms with terminal attractors. Finally, three practical application cases of time series forecasting, character recognition and image interpolation are chosen to show the practicality and usefulness of the proposed learning algorithms, with comprehensive comparative studies against existing algorithms.
3

Freund, Robert M. "Theoretical Efficiency of A Shifted Barrier Function Algorithm for Linear Programming". Massachusetts Institute of Technology, Operations Research Center, 1989. http://hdl.handle.net/1721.1/5185.

Full text of the source
Abstract:
This paper examines the theoretical efficiency of solving a standard-form linear program by solving a sequence of shifted-barrier problems of the form: minimize c^T x - ε Σ_{j=1..n} ln(x_j + ε h_j) subject to Ax = b and x + ε h > 0, for a given and fixed shift vector h > 0, and for a sequence of values of ε > 0 that converges to zero. The resulting sequence of solutions to the shifted-barrier problems will converge to a solution to the standard-form linear program. The advantage of using the shifted-barrier approach is that a starting feasible solution is unnecessary, and there is no need for a Phase I-Phase II approach to solving the linear program, either directly or through the addition of an artificial variable. Furthermore, the algorithm can be initiated with a "warm start," i.e., an initial guess of a primal solution x that need not be feasible. The number of iterations needed to solve the linear program to a desired level of accuracy will depend on a measure of how close the initial solution x is to being feasible. The number of iterations will also depend on the judicious choice of the shift vector h. If an approximate center of the dual feasible region is known, then h can be chosen so that the guaranteed fractional decrease in ε at each iteration is (1 - 1/(6√n)), which contributes a factor of 6√n to the number of iterations needed to solve the problem. The paper also analyzes the complexity of computing an approximate center of the dual feasible region from a "warm start," i.e., an initial (possibly infeasible) guess π of a solution to the center problem of the dual. Key words: linear program, interior-point algorithm, center, barrier function, shifted-barrier function, Newton step.
4

Khudhair, Ali Dheyaa. "A Simplified Routing Algorithm for Energy Efficiency in Wireless Sensor Networks". Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1885751071&sid=8&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text of the source
5

Lindberg, Joakim, and Martin Steier. "Efficiency of the hybrid AC3-tabu search algorithm for solving Sudoku puzzles". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166421.

Full text of the source
Abstract:
There are many different algorithms for solving Sudoku puzzles, with one of the newer algorithms being the hybrid AC3-tabu search algorithm. Since the algorithm has not been subject of much research, the aim of this thesis is to increase the knowledge of it. This thesis evaluates the efficiency of the hybrid AC3-tabu search algorithm by analyzing how quickly it solves puzzles compared to two other solving algorithms: one using brute-force search, and one combining human solving techniques with brute-force search. This thesis also investigates if there is a correlation between the number of puzzle clues and the solving time for the hybrid AC3-tabu search algorithm. The results show that the hybrid AC3-tabu search algorithm is less efficient than the two other algorithms, and that there seems to be a correlation between the number of clues and the solving time for the algorithm. The conclusion is that due to the algorithm’s low efficiency and some of its characteristics, it is not suitable for solving Sudoku puzzles.
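For context on the AC3 component of the hybrid solver: arc consistency repeatedly prunes candidate digits that conflict with a peer cell whose value is already decided. A simplified constraint-propagation sketch in that spirit (an illustration only; the thesis pairs full AC3 with tabu search):

```python
def peers(r, c):
    """Cells sharing a row, column, or 3x3 box with (r, c)."""
    box_r, box_c = 3 * (r // 3), 3 * (c // 3)
    same = {(r, j) for j in range(9)} | {(i, c) for i in range(9)}
    same |= {(box_r + i, box_c + j) for i in range(3) for j in range(3)}
    same.discard((r, c))
    return same

def propagate(domains):
    """Prune candidates until no solved cell can eliminate anything else.

    domains: dict mapping (row, col) -> set of still-possible digits.
    Returns False if some cell loses all candidates (dead end), True otherwise.
    """
    changed = True
    while changed:
        changed = False
        for cell, dom in domains.items():
            if len(dom) == 1:                 # solved cell constrains its peers
                v = next(iter(dom))
                for p in peers(*cell):
                    if v in domains[p]:
                        domains[p].discard(v)
                        if not domains[p]:
                            return False
                        changed = True
    return True

# Usage sketch: domains = {(r, c): {given_digit} or set(range(1, 10))}; propagate(domains).
```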
6

Chen, Daven 1959. "COMPARISON OF SCIRTSS EFFICIENCY WITH D-ALGORITHM APPLICATION TO ITERATIVE NETWORKS (TEST)". Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/275572.

Full text of the source
7

Burger, Christoph, and Roy J. Hartfield. "Propeller performance analysis and multidisciplinary optimization using a genetic algorithm". Auburn, Ala., 2007. http://repo.lib.auburn.edu/2007%20Fall%20Dissertations/Burger_Christoph_57.pdf.

Full text of the source
8

Selek, I. (István). "Novel evolutionary methods in engineering optimization—towards robustness and efficiency". Doctoral thesis, University of Oulu, 2009. http://urn.fi/urn:isbn:9789514291579.

Full text of the source
Abstract:
In industry there is a high demand for algorithms that can efficiently solve search problems. Evolutionary Computing (EC), a class of heuristics, has proven to be well suited to solving search problems, especially optimization tasks. It earned that position because of its flexibility, scalability and robustness. However, despite their advantages and increasing popularity, there are numerous open questions in this research area, many of them related to the design and tuning of the algorithms. A neutral technique called Pseudo Redundancy, together with related concepts such as the Updated Objective Grid (UOG), is proposed to tackle the mentioned problem, making an evolutionary approach more suitable for "real world" applications while increasing its robustness and efficiency. The proposed UOG technique achieves neutral search by objective function transformation(s), resulting in several advantageous features. (a) It simplifies the design of an evolutionary solver by giving population sizing principles and directions for choosing the right selection operator. (b) The updated objective grid technique is adaptive without introducing additional parameters, so no parameter tuning is required to adjust UOG to different environments, which introduces robustness. (c) The UOG algorithm is simple and computationally cheap. (d) It boosts the performance of an evolutionary algorithm on high dimensional (constrained and unconstrained) problems. The theoretical and experimental results from artificial test problems included in this thesis clearly show the potential of the proposed technique. In order to demonstrate the power of the introduced methods under "real" circumstances, the author additionally designed EAs and performed experiments on two industrial optimization tasks, although only one project is detailed in this thesis while the other is referenced. As the main outcome of this thesis, the author provided an evolutionary method to compute (optimal) daily water pump schedules for the water distribution network of Sopron, Hungary. The algorithm is currently working in industry.
9

Kassa, Hailu Belay, Shenko Chura Aredo, and Estifanos Yohannes Menta. "ENERGY EFFICIENT ADAPTIVE SECTOR-BASED USER CLUSTERING ALGORITHM FOR CELLULAR NETWORK". International Foundation for Telemetering, 2016. http://hdl.handle.net/10150/624220.

Full text of the source
Abstract:
In this paper, we propose an adaptive, multi-sector-based user clustering algorithm which increases energy efficiency in a cellular network. Adaptive sectoring with dynamically changing sector angles is illustrated with a number of randomly distributed mobile stations. Transmitted power is equally shared by the sectors before adaptive user clustering. The sector angles vary from 30 to 360 degrees by merging neighboring sectors, and a sector is switched off until its user density exceeds a threshold (Td). The Td value is computed from the total number of users that the cell can accommodate divided by the area of the cell. Sectors with density below Td have a transmit power that approaches zero (a sleeping state), so the cumulative power is saved. Simulation results show that an average of 45% to 50% of energy can be saved in 10 iterations.
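A compact sketch of the sleep decision summarised above; the function signature and the equal-area toy sectors are illustrative assumptions, with Td computed as in the abstract (users the cell can accommodate divided by the cell area):

```python
def sector_states(users_per_sector, cell_capacity, cell_area, sector_areas):
    """Decide which sectors sleep: density below Td means near-zero transmit power.

    Td follows the abstract: total users the cell can accommodate over the cell area.
    All names and units here are illustrative, not the paper's simulation setup.
    """
    td = cell_capacity / cell_area
    states = []
    for users, area in zip(users_per_sector, sector_areas):
        density = users / area
        states.append("active" if density > td else "sleep")
    return states

# Toy example: 6 equal-area sectors in a cell that can serve 60 users over area 6.0.
print(sector_states([2, 15, 0, 22, 9, 1], cell_capacity=60, cell_area=6.0,
                    sector_areas=[1.0] * 6))
```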
10

Silva, Cauane Blumenberg. "Adaptive tiling algorithm based on highly correlated picture regions for the HEVC standard". Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/96040.

Full text of the source
Abstract:
This Master's thesis proposes an adaptive algorithm that is able to dynamically choose suitable tile partitions for intra- and inter-predicted frames in order to reduce the impact on coding efficiency arising from such partitioning. Tiles are novel parallelism-oriented tools that integrate the High Efficiency Video Coding (HEVC) standard, which divide the frame into independent rectangular regions that can be processed in parallel. To enable the parallelism, tiles break the coding dependencies across their boundaries, leading to coding efficiency impacts. These impacts can be even higher if tile boundaries split highly correlated picture regions, because most of the coding tools use context information during the encoding process. Hence, the proposed algorithm clusters the highly correlated picture regions inside the same tile to reduce the inherent coding efficiency impact of using tiles. To wisely locate the highly correlated picture regions, image characteristics and encoding information are analyzed, generating partitioning maps that serve as the algorithm input. Based on these maps, the algorithm locates the natural context breaks of the picture and defines the tile boundaries on these key regions. This way, the dependency breaks caused by the tile boundaries match the natural context breaks of a picture, thus minimizing the coding efficiency losses caused by the use of tiles. The proposed adaptive tiling algorithm, in some cases, provides over 0.4% and over 0.5% of BD-rate savings for intra- and inter-predicted frames respectively, when compared to uniform-spaced tiles, an approach which does not consider the picture context to define the tile partitions.
11

Nalluri, Purnachand. "A fast motion estimation algorithm and its VLSI architecture for high efficiency video coding". Doctoral thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/15442.

Full text of the source
Abstract:
Doctorate in Electrical Engineering
Video coding has been used in applications like video surveillance, video conferencing, video streaming, video broadcasting and video storage. In a typical video coding standard, many algorithms are combined to compress a video. However, one of those algorithms, motion estimation, is the most complex task. Hence, it is necessary to implement this task in real time by using appropriate VLSI architectures. This thesis proposes a new fast motion estimation algorithm and its implementation in real time. The results show that the proposed algorithm and its motion estimation hardware architecture outperform the state of the art. The proposed architecture operates at a maximum operating frequency of 241.6 MHz and is able to process 1080p@60Hz video with all variable block sizes specified in the HEVC standard, as well as with a motion vector search range of up to ±64 pixels.
12

Défossez, Gautier. "Le système d'information multi-sources du Registre général des cancers de Poitou-Charentes. Conception, développement et applications à l'ère des données massives en santé". Thesis, Poitiers, 2021. http://theses.univ-poitiers.fr/64594/2021-Defossez-Gautier-These.

Full text of the source
Abstract:
Population-based cancer registries (PBCRs) are the international reference tool for providing a comprehensive (unbiased) picture of the burden, incidence and severity of cancer in the general population. Their work in classifying and coding diagnoses according to international rules gives the final data a specific quality and a comparability in time and space, thus building a decisive knowledge base for describing the evolution of cancers and their management in an uncontrolled environment. Cancer registration is based on a thorough investigative process whose complexity is largely related to the ability to access all the relevant data concerning the same individual and to gather them efficiently. Created in 2007, the General Cancer Registry of Poitou-Charentes (RGCPC) is a recent-generation cancer registry, started at a time conducive to rethinking how to optimize the registration process. Driven by the computerization of medical data and the increasing interoperability of information systems, the RGCPC has experimented over 10 years with a multi-source information system combining innovative methods of information processing and representation, based on the reuse of standardized data usually produced for other purposes. In a first section, this work presents the founding principles and the implementation of a system capable of gathering large amounts of highly qualified and structured data, semantically aligned so that they lend themselves to algorithmic approaches. Data are collected on a multiannual basis from 110 partners representing seven data sources (clinical, biological and medico-administrative data). Two algorithms assist the cancer registrar by dematerializing the manual tasks usually carried out prior to tumor registration. A first algorithm automatically generates the tumors and their various components (publication), and a second algorithm represents the care pathway of each individual as an ordered sequence of time-stamped events that can be accessed within a secure interface (publication). Supervised machine learning techniques are tested to work around the possible lack of coding of pathology reports (publication). The second section focuses on the wide field of research and evaluation opened up by the availability of this integrated information system. Data linkage with other datasets was tested, within the framework of regulatory authorizations, to enhance the contextualization and knowledge of care pathways, and thus to support the strategic role of PBCRs in the real-life evaluation of care practices and health services research (proof of concept): screening, molecular diagnosis, cancer treatment, pharmacoepidemiology (four main publications). Data from the RGCPC were linked with those from the REIN registry (chronic end-stage renal failure) as a use case for experimenting with a prototype platform dedicated to the collaborative sharing of massive health data (publication). The last section of this work proposes an open discussion on the relevance of the proposed solutions to the requirements of quality, cost and transferability, and then sets out the prospects and expected benefits in the field of surveillance, evaluation and research in the era of big health data.
13

Potter, Christopher C. J. "Kernel Selection for Convergence and Efficiency in Markov Chain Monte Carlo". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/249.

Full text of the source
Abstract:
Markov Chain Monte Carlo (MCMC) is a technique for sampling from a target probability distribution, and has risen in importance as faster computing hardware has made possible the exploration of hitherto difficult distributions. Unfortunately, this powerful technique is often misapplied by poor selection of the transition kernel for the Markov chain that is generated by the simulation. Some kernels are used without being checked against the convergence requirements for MCMC (total balance and ergodicity), but in this work we prove the existence of a simple proxy for total balance that is not as demanding as detailed balance, the most widely used standard. We show that, for discrete-state MCMC, if a transition kernel is equivalent when it is "reversed" and applied to data which is also "reversed", then it satisfies total balance. We go on to prove that the sequential single-variable update Metropolis kernel, where variables are simply updated in order, does indeed satisfy total balance for many discrete target distributions, such as the Ising model with uniform exchange constant. Also, two well-known papers by Gelman, Roberts, and Gilks (GRG) [1, 2] have proposed the application of the results of an interesting mathematical proof to the realistic optimization of Markov Chain Monte Carlo computer simulations. In particular, they advocated tuning the simulation parameters to select an acceptance ratio of 0.234. In this paper, we point out that although the proof is valid, applying its result to practical computations is not advisable, as the simulation algorithm considered in the proof is so inefficient that it produces very poor results under all circumstances. The algorithm used by Gelman, Roberts, and Gilks is also shown to introduce subtle time-dependent correlations into the simulation of intrinsically independent variables. These correlations are of particular interest since they will be present in all simulations that use multi-dimensional MCMC moves.
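The sequential single-variable Metropolis kernel discussed above is easy to sketch for the Ising example; the 1D toy chain, temperature and seed below are illustrative values, not settings from the thesis:

```python
import math, random

def sweep(spins, beta):
    """One sequential single-variable Metropolis sweep over a 1D Ising chain.

    Sites are visited in a fixed order (not chosen at random), which is the
    kernel family whose total balance is discussed in the abstract.
    """
    n = len(spins)
    for i in range(n):                      # fixed visiting order
        # energy change of flipping spin i, periodic neighbours, uniform coupling
        delta_e = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if delta_e <= 0 or random.random() < math.exp(-beta * delta_e):
            spins[i] = -spins[i]            # accept the single-site flip
    return spins

# Toy chain of 20 spins, inverse temperature 0.5 (both values are arbitrary).
random.seed(0)
state = [random.choice([-1, 1]) for _ in range(20)]
for _ in range(100):
    sweep(state, beta=0.5)
print(state)
```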
14

Schimuneck, Matias Artur Klafke. "Adaptive Monte Carlo algorithm to global radio resources optimization in H-CRAN". Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/169922.

Full text of the source
Abstract:
By 2020, cellular networks are expected to increase their coverage area 10-fold, support 100-fold more user equipment, and raise data rate capacity 1000-fold in comparison with current cellular networks. The dense deployment of small cells is considered a promising solution to reach such aggressive improvements, since it moves the antennas closer to the users, achieving higher data rates due to the signal quality at short distances. However, operating a massive number of antennas can significantly increase the energy consumption of the network infrastructure. Furthermore, the large insertion of new radios brings greater spectral interference between the cells. In this scenario, the optimal management of radio resources becomes essential due to its impact on the quality of service provided to the users. For example, low transmission powers can leave users without connection, while high transmission powers can contribute to inter-radio interference. Furthermore, unplanned reuse of the radio resources raises interference, resulting in low data transmission per radio resource, while under-reuse of radio resources limits the overall data transmission capacity. A solution to control the transmission power, assign the spectral radio resources, and ensure the service to the users is essential. In this thesis, we propose an Adaptive Monte Carlo algorithm to perform global energy-efficient resource allocation for Heterogeneous Cloud Radio Access Network (H-CRAN) architectures, which are forecast as future fifth-generation (5G) networks. We argue that our global proposal offers an efficient solution to resource allocation for both high and low density scenarios. Our contributions are threefold: (i) the proposal of a global approach to the radio resource assignment problem in the H-CRAN architecture, whose stochastic character ensures an overall solution space sampling; (ii) a critical comparison between our global solution and a local model; (iii) the demonstration that, for high density scenarios, Energy Efficiency is not a well-suited metric for efficient allocation, considering data rate capacity, fairness, and served users. Moreover, we compare our proposal against three state-of-the-art resource allocation algorithms for 5G networks.
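As a rough illustration of the Monte Carlo idea described in the abstract (sample the global assignment space at random and keep the best-scoring allocation), the sketch below scores candidate assignments by sum rate per watt; the scoring function, problem sizes and one-block-per-user restriction are placeholders rather than the thesis model:

```python
import random

def monte_carlo_allocate(n_users, n_blocks, rate, power, n_samples=10_000):
    """Sample random user-to-resource-block assignments and keep the most
    energy-efficient one (total rate divided by total power). Purely illustrative;
    assumes n_users <= n_blocks so each user gets a distinct block.

    rate[u][b]  : achievable rate if user u gets block b
    power[u][b] : transmit power needed for that assignment
    """
    best, best_score = None, float("-inf")
    for _ in range(n_samples):
        blocks = list(range(n_blocks))
        random.shuffle(blocks)
        assign = {u: blocks[u] for u in range(n_users)}   # one block per user
        tot_rate = sum(rate[u][b] for u, b in assign.items())
        tot_power = sum(power[u][b] for u, b in assign.items())
        score = tot_rate / tot_power
        if score > best_score:
            best, best_score = assign, score
    return best, best_score

# Toy usage with random per-user, per-block rates and powers.
rng = random.Random(7)
R = [[rng.uniform(1, 10) for _ in range(6)] for _ in range(4)]
P = [[rng.uniform(0.5, 2) for _ in range(6)] for _ in range(4)]
print(monte_carlo_allocate(4, 6, R, P, n_samples=2000))
```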
15

Netzén, Örn André. "The Efficiency of Financial Markets Part II : A Stochastic Oscillator Approach". Thesis, Umeå universitet, Företagsekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-170753.

Full text of the source
Abstract:
Over a long period of time, researchers have investigated the efficiency of financial markets. The widely accepted theory on the subject is the Efficient Market Hypothesis, which states that prices of financial assets are set efficiently. A common way to test this hypothesis is to analyze the returns generated by technical trading rules which use historical prices in an attempt to predict future price development. This is also what this study aims to do. Using adjusted daily closing prices ranging over 2007 to 2019 for 5120 stocks listed on the U.S. stock market, this study tests a momentum trading strategy called the stochastic oscillator in an attempt to beat a buy and hold strategy of the Russell 3000 stock market index. The stochastic oscillator is constructed in three different ways, the Fast%K, the Fast%D and the Slow%D, the difference being that a smoothing parameter is used in the Fast%D and Slow%D in an attempt to reduce the number of whiplashes or false trading signals. The mean returns of the technical trading strategies are tested against the mean returns of the buy and hold strategy using a non-parametric bootstrap methodology, and the risk-adjusted returns in terms of Sharpe ratios are compared for the different strategies. The results show no significant difference between the mean returns of the buy and hold strategy and any of the technical trading strategies. Further, the buy and hold strategy delivers a higher risk-adjusted return compared to the technical trading strategies, although only by a small margin. Regarding the smoothing parameter applied to the strategies, it seems to fulfill its purpose by reducing the number of trades and slightly increasing the mean returns of the technical trading strategies. Finally, for deeper insight into the subject, a reading of "The efficiency of financial markets: A dual momentum trading strategy on the Swedish stock market" by Netzén Örn (2018) is recommended.
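The three oscillator variants compared in the thesis follow the textbook definitions: %K relates the latest close to the highest high and lowest low over a lookback window, and the %D lines smooth it with short moving averages. A small sketch of those formulas (the 14/3/3 window lengths are common defaults used here only for illustration):

```python
def sma(xs, k):
    """Simple moving averages over a sliding window of length k."""
    return [sum(xs[i - k + 1:i + 1]) / k for i in range(k - 1, len(xs))]

def stochastic_oscillator(close, high, low, n=14):
    """Fast %K, Fast %D and Slow %D of the classic momentum indicator.

    Assumes each lookback window has some price range (highest high > lowest low).
    """
    fast_k = []
    for i in range(n - 1, len(close)):
        hh = max(high[i - n + 1:i + 1])   # highest high of the lookback window
        ll = min(low[i - n + 1:i + 1])    # lowest low of the lookback window
        fast_k.append(100 * (close[i] - ll) / (hh - ll))
    fast_d = sma(fast_k, 3)               # smoothing reduces whiplash signals
    slow_d = sma(fast_d, 3)
    return fast_k, fast_d, slow_d
```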
16

Lu, Qing. "Applications of the genetic algorithm optimisation approach in the design of high efficiency microwave class E power amplifiers". Thesis, Northumbria University, 2012. http://nrl.northumbria.ac.uk/13340/.

Full text of the source
Abstract:
In this thesis, Genetic Algorithm (GA) optimisation methods are studied and, for the first time, used to design high efficiency microwave class E power amplifiers (PAs) and associated load patch antennas. The difficulty in designing high efficiency PAs is that power transistors are highly non-linear and classical design techniques only work for resistive loads. There are currently no highly efficient and accurate procedures for designing high efficiency PAs. To achieve a simplified and accurate design procedure, GA and new quadratic design equations are introduced and applied. The performance analysis is based on linear switch models and non-linear circuit push-pull methods. The results of the analytical calculations and experimental verification showed that the power added efficiency (PAE) of the PAs depends mainly on the losses of the active device itself and is nearly independent of the losses of its harmonic networks. Hence, it has been proven that the cheap PCB material FR4 can be used to design high efficiency class E PAs, and it is also shown that low-Q-factor networks have only a minor effect on efficiency, allowing a wide bandwidth to be obtained. In addition, a new procedure for designing class E PAs is introduced and applied. The active device (ATF 34143) is used. Good agreement was obtained between the predicted analyses and the simulation results (from Microwave Office (AWR) and Agilent ADS software). For the practical realization, class E PAs were fabricated and tested using PCB FR4. The practical results validate the computer simulations: the PAE of the class E PAs is more than 71% and the gain is over 3.8 dB when the input power (Pin) equals 14 dBm at 2 GHz.
17

Sciullo, Luca. "Energy-efficient wireless sensor networks via scheduling algorithm and radio Wake-up technology". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14539/.

Full text of the source
Abstract:
One of the most important requirements for wireless sensor networks (WSNs) is the energy efficiency, since sensors are usually fed by a battery that cannot be replaced or recharged. Radio wake-up - the technology that lets a sensor completely turn off and be reactivated by converting the electromagnetic field of radio waves into energy - is now one of the most emergent strategies in the design of wireless sensor networks. This work presents Scheduled on Demand Radio WakeUp (SORW), a flexible scheduler designed for a wireless sensor network where duty cycling strategy and radio wake-up technology are combined in order to optimize the network lifetime. In particular, it tries to keep sensors sleeping as much as possible, still guaranteeing a minimum number of detections per unit of time. Performances of SORW are provided through the use of OMNet++ simulator and compared to results obtained by other basic approaches. Results show that with SORW it is possible to reach a theoretical lifetime of several years, compared to simpler schedulers that only reach days of activity of the network.
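As a very rough sketch of the scheduling idea summarised above (keep nodes asleep as long as the required number of detections per time unit is still met, waking only as many as needed), consider the toy round-based scheduler below; the interface, the battery-based tie-breaking and all parameters are invented for illustration and do not reproduce SORW itself:

```python
def schedule_round(nodes, min_detections, detection_rate):
    """Wake just enough sleeping nodes to guarantee the detection requirement.

    nodes: dict node_id -> remaining battery (arbitrary units)
    detection_rate: detections per round contributed by one awake node
    Returns the set of node ids to wake this round; nodes with the most
    remaining battery are woken first to balance energy consumption.
    """
    needed = -(-min_detections // detection_rate)   # ceiling division
    by_battery = sorted(nodes, key=nodes.get, reverse=True)
    awake = set(by_battery[:needed])
    for nid in awake:
        nodes[nid] -= 1                              # awake nodes spend energy
    return awake

# Toy run: 5 nodes, at least 6 detections per round, 3 detections per awake node.
batteries = {"n1": 10, "n2": 7, "n3": 9, "n4": 4, "n5": 8}
for _ in range(3):
    print(schedule_round(batteries, min_detections=6, detection_rate=3))
```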
18

Vu, Chinh Trung. "An Energy-Efficient Distributed Algorithm for k-Coverage Problem in Wireless Sensor Networks". Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_theses/40.

Full text of the source
Abstract:
Wireless sensor networks (WSNs) have recently received a great deal of attention due to their numerous attractive applications in many different fields. Sensors and WSNs possess a number of special characteristics that make them very promising in many applications, but these also impose constraints that make issues in sensor networks particularly difficult. These issues may include topology control, routing, coverage, security, and data management. In this thesis, we focus our attention on the coverage problem. Firstly, we define the Sensor Energy-efficient Scheduling for k-coverage (SESK) problem. We then solve it by proposing a novel, completely localized and distributed scheduling approach, named Distributed Energy-efficient Scheduling for k-coverage (DESK), such that the energy consumption among all the sensors is balanced and the network lifetime is maximized while still satisfying the k-coverage requirement. Finally, in the related work section we conduct an extensive survey of the existing literature that focuses on the coverage problem.
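The k-coverage requirement itself is easy to state in code: every monitored point must lie within the sensing range of at least k active sensors. A small verification sketch (the discrete point set and circular sensing ranges are simplifying assumptions):

```python
def is_k_covered(points, sensors, sensing_range, k):
    """Check that every sample point is covered by at least k active sensors.

    points:  list of (x, y) positions that must stay monitored
    sensors: list of (x, y) positions of the currently active sensors
    """
    r2 = sensing_range ** 2
    for (px, py) in points:
        covering = sum((px - sx) ** 2 + (py - sy) ** 2 <= r2 for sx, sy in sensors)
        if covering < k:
            return False
    return True

# Toy check: 2-coverage of four points by three sensors with sensing range 5.
points = [(1, 1), (2, 4), (4, 2), (3, 3)]
active = [(0, 0), (5, 5), (2, 2)]
print(is_k_covered(points, active, sensing_range=5, k=2))
```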
19

Parthasarathy, Nikhil Kaushik. "An efficient algorithm for blade loss simulations applied to a high-order rotor dynamics problem". Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/189.

Full text of the source
Abstract:
In this thesis, a novel approach is presented for blade loss simulation of an aircraft gas turbine rotor mounted on rolling element bearings with squeeze film dampers, seal rub and enclosed in a flexible housing. The modal truncation augmentation (MTA) method provides an efficient tool for modeling this large order system with localized nonlinearities in the ball bearings. The gas turbine engine, which is composed of the power turbine and gas generator rotors, is modeled with 38 lumped masses. A nonlinear angular contact bearing model is employed, which has ball and race degrees of freedom and uses a modified Hertzian contact force between the races and balls and for the seal rub. This combines a dry contact force and viscous damping force. A flexible housing with seal rub is also included whose modal description is imported from ANSYS. Prediction of the maximum contact load and the corresponding stress on an elliptical contact area between the races and balls is made during the blade loss simulations. A finite-element based squeeze film damper (SFD), which determines the pressure profile of the oil film and calculates damper forces for any type of whirl orbit is utilized in the simulation. The new approach is shown to provide efficient and accurate predictions of whirl amplitudes, maximum contact load and stress in the bearings, transmissibility, thermal growths, maximum and minimum damper pressures and the amount of unbalanced force for incipient oil film cavitation. It requires about 4 times less computational time than the traditional approaches and has an error of less than 5 %.
20

Sklavounos, Dimitris C. "Detection of abnormal situations and energy efficiency control in Heating Ventilation and Air Conditioning (HVAC) systems". Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/12843.

Full text of the source
Abstract:
This research is related to the control of energy consumption and efficiency in building Heating Ventilation and Air Conditioning (HVAC) systems and is primarily concerned with controlling the function of heating. The main goal of this thesis is to develop a control system that can achieve the following two main control functions: a) detection of unexpected indoor conditions that may result in unnecessary power consumption, and b) energy efficiency control regarding the optimal balancing of two parameters: the required energy consumption for heating versus the thermal comfort of the occupants. Methods of both orientations were developed in a multi-zone space composed of nine zones, where each zone is equipped with a wireless node consisting of temperature and occupancy sensors, while all the scattered nodes together form a wireless sensor network (WSN). The main methods of both control functions utilize the potential of the deterministic subspace identification (SID) predictive model, which provides the predicted temperature of the zones. In the main method for detecting unexpected situations that can directly affect the thermal condition of the indoor space and cause energy consumption (abnormal situations), the predicted temperature from the SID model is compared with the real temperature, and possible temperature deviations that indicate unexpected situations are thus detected. The method successfully detects two situations: the high infiltration gain due to unexpected cold air intake from the external surroundings through potential unforeseen openings (windows, exterior doors, opened ceilings, etc.) as well as the high heat gain due to the onset of fire. With the support of the Cumulative Sum (CUSUM) statistical algorithm for abrupt change detection, the detection of temperature deviations is accomplished accurately in a very short time. In an initial approach, the CUSUM algorithm is first evaluated for detecting power deviations due to the above situations caused by the aforementioned exogenous factors. The predicted zone temperature from the SID model is also utilized appropriately by the main method of the second control function, for energy efficiency control. The time needed for the temperature of a zone to reach the thermal comfort threshold from a low initial value is measured from the predicted temperature evolution, and this measurement underpins the logic of a control criterion for applying proactive heating to the unoccupied zones or not. Additional key points for the control criterion of the method are the occupation time of the zones as well as the remaining time of the occupants in the occupied zones. Two scenarios are examined: the first scenario with two adjacent zones where one is occupied and the other is not, and the second scenario with a multi-zone space where the occupants are moving through the zones in a cascade mode. Gamma and Pareto probability distributions model the occupation times of the two-zone scenario, while an exponential distribution models the cascade scenario as the least favorable case. The mobility of the occupants is modeled with a semi-Markov process, and the method provides satisfactory and reasonable results. In an initial approach, the proactive heating of the zones is evaluated with specific algorithms that appropriately handle the occupation time in the zones.
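The CUSUM test used above for abrupt-change detection has a compact standard form: accumulate deviations of the residual (here, measured minus predicted temperature) beyond an allowance k and raise an alarm once the running sum crosses a threshold h. A minimal one-sided sketch with illustrative parameter values:

```python
def cusum_alarms(measured, predicted, k=0.5, h=4.0):
    """One-sided CUSUM on the residual between measured and predicted temperature.

    k: allowance (slack) subtracted from each deviation
    h: decision threshold; crossing it signals an abrupt change
    Returns indices at which an alarm is raised (the sum resets after each alarm).
    k and h are illustrative values, not the thesis' tuned parameters.
    """
    s, alarms = 0.0, []
    for i, (m, p) in enumerate(zip(measured, predicted)):
        s = max(0.0, s + (m - p) - k)   # accumulate only positive drift
        if s > h:
            alarms.append(i)
            s = 0.0                      # restart after reporting the change
    return alarms

# Toy example: the model predicts a constant 21 °C; an upward drift (e.g. heat
# gain from a fire-like event) triggers alarms once the cumulative sum exceeds h.
measured = [21.0, 21.1, 20.9, 23.0, 24.5, 25.0, 25.5]
print(cusum_alarms(measured, predicted=[21.0] * 7))
```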
21

Kartal, Koc Elcin. "An Algorithm For The Forward Step Of Adaptive Regression Splines Via Mapping Approach". Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12615012/index.pdf.

Full text of the source
Abstract:
In high dimensional data modeling, Multivariate Adaptive Regression Splines (MARS) is a well-known nonparametric regression technique to approximate the nonlinear relationship between a response variable and the predictors with the help of splines. MARS uses piecewise linear basis functions which are separated from each other by break points (knots) for function estimation. The model estimating function is generated in two stepwise procedures: forward selection and backward elimination. In the first step, a general model including too many basis functions and knot points is generated; in the second one, the basis functions contributing least to the overall fit are eliminated. In the conventional adaptive spline procedure, knots are selected from a set of distinct data points, which makes the forward selection procedure computationally expensive and leads to high local variance. To avoid these drawbacks, it is possible to select the knot points from a subset of data points, which leads to data reduction. In this study, a new method (called S-FMARS) is proposed to select the knot points by using a self-organizing-map-based approach which transforms the original data points to a lower dimensional space. Thus, fewer knot points need to be evaluated for model building in the forward selection step of the MARS algorithm. The results obtained from simulated datasets and from six real-world datasets show that the proposed method is time efficient in model construction without degrading the model accuracy and prediction performance. In this study, the proposed approach is applied to the MARS and CMARS methods as an alternative to their forward step, improving them by decreasing their computing time.
22

Holmgren, Faghihi Josef, i Paul Gorgis. "Time efficiency and mistake rates for online learning algorithms : A comparison between Online Gradient Descent and Second Order Perceptron algorithm and their performance on two different data sets". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260087.

Full text of the source
Abstract:
This dissertation investigates the differences between two online learning algorithms, Online Gradient Descent (OGD) and the Second-Order Perceptron (SOP) algorithm, and how well they perform on different data sets in terms of mistake rate, time cost and number of updates. Studying different online learning algorithms and how they perform in different environments helps in understanding and developing new strategies to handle further online learning tasks. The study includes two different data sets, Pima Indians Diabetes and Mushroom, together with the LIBOL library for testing. The results in this dissertation show that Online Gradient Descent performs better overall on the tested data sets. On the first data set, Online Gradient Descent recorded a notably lower mistake rate. On the second data set, although it recorded a slightly higher mistake rate, the algorithm was remarkably more time efficient compared to the Second-Order Perceptron. Future work would include a wider range of testing with more, and different, data sets as well as other related algorithms. This would lead to better results and higher credibility.
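Online Gradient Descent, one of the two compared algorithms, has a very small core: after each incoming example, take a gradient step on the loss of that single example and count a mistake if the prediction was wrong. A hedged sketch for binary classification with hinge loss (the learning rate and loss choice are illustrative, not the LIBOL configuration):

```python
def online_gradient_descent(stream, dim, eta=0.1):
    """Run OGD on a stream of (features, label) pairs with labels in {-1, +1}.

    Uses the hinge loss; counts a mistake whenever the current model
    misclassifies the incoming example before being updated.
    """
    w = [0.0] * dim
    mistakes = 0
    for x, y in stream:
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        if margin <= 0:
            mistakes += 1                    # prediction was wrong (or undecided)
        if margin < 1:                       # hinge loss active: subgradient step
            w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w, mistakes

# Toy stream: points labeled by the sign of their first coordinate.
data = [([1.0, 0.2], 1), ([-0.8, 0.1], -1), ([0.6, -0.4], 1), ([-1.2, 0.3], -1)]
print(online_gradient_descent(data, dim=2))
```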
23

Dobson, William Keith. "Method for Improving the Efficiency of Image Super-Resolution Algorithms Based on Kalman Filters". Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/math_theses/82.

Full text of the source
Abstract:
The Kalman Filter has many applications in control and signal processing but may also be used to reconstruct a higher resolution image from a sequence of lower resolution images (or frames). If the sequence of low resolution frames is recorded by a moving camera or sensor, where the motion can be accurately modeled, then the Kalman filter may be used to update pixels within a higher resolution frame to achieve a more detailed result. This thesis outlines current methods of implementing this algorithm on a scene of interest and introduces possible improvements for the speed and efficiency of this method by use of block operations on the low resolution frames. The effects of noise on camera motion and various blur models are examined using experimental data to illustrate the differences between the methods discussed.
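The per-pixel state update that such Kalman-filter super-resolution methods rely on is the standard predict/update recursion; a minimal scalar sketch with invented noise values (not the thesis' multi-frame motion model):

```python
def kalman_1d(measurements, q=1e-3, r=0.05, x0=0.0, p0=1.0):
    """Scalar Kalman filter: constant-state model with process noise q and
    measurement noise r. Returns the filtered estimates. All noise values
    and the initial state are illustrative placeholders."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # predict: state is assumed constant, uncertainty grows by q
        p = p + q
        # update: blend prediction and measurement by the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Toy run: noisy observations of a pixel whose true intensity is about 0.8.
print(kalman_1d([0.71, 0.86, 0.78, 0.84, 0.79, 0.82]))
```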
24

Gendre, Victor Hugues. "Predicting short term exchange rates with Bayesian autoregressive state space models: an investigation of the Metropolis Hastings algorithm forecasting efficiency". The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1437399395.

Full text of the source
25

Ramarathinam, Venkatesh. "A control layer algorithm for ad hoc networks in support of urban search and rescue (USAR) applications". [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000604.

Full text of the source
26

Usman, Modibo. "The Effect of the Implementation of a Swarm Intelligence Algorithm on the Efficiency of the Cosmos Open Source Managed Operating System". Thesis, Northcentral University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10810882.

Full text of the source
Abstract:

As the complexity of mankind's day-to-day challenges increases, so does the need to optimize known solutions to accommodate this increase in complexity. Today's computer systems use the Input, Processing, and Output (IPO) model as a way to deliver efficiency and optimization in human activities. Since the relative quality of an output utility derived from an IPO-based computer system is closely coupled to the quality of its input media, the measure of the Optimal Quotient (OQ) is the ratio of input to output, which is 1:1. This relationship ensures that all IPO-based computers are not just linearly predictable, but also characterized by the Garbage In Garbage Out (GIGO) design concept. While current IPO-based computer systems have been relatively successful at delivering some measure of optimization, there is a need to examine alternative methods of achieving optimization (Li & Malik, 2016). The purpose of this quantitative research study, through an experimental research design, is to determine the effects of the application of a Swarm Intelligence algorithm on the efficiency of the Cosmos Open Source Managed Operating System.

By incorporating swarm intelligence into an improved IPO design, this research addresses the need for optimization in computer systems through the creation of an improved operating system Scheduler. The design of a Swarm Intelligence Operating System (SIOS) is an attempt to solve some inherent vulnerabilities and problems of complexity and optimization otherwise unresolved in the design of conventional operating systems. This research will use the Cosmos open source operating system as a test harness to ensure improved internal validity while the subsequent measurement between the conventional and improved IPO designs will demonstrate external validity to real world applications.

27

Vasudevan, Meera. "Profile-based application management for green data centres". Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/98294/1/Meera_Vasudevan_Thesis.pdf.

Full text of the source
Abstract:
This thesis presents a profile-based application management framework for energy-efficient data centres. The framework is based on a concept of using Profiles that provide prior knowledge of the run-time workload characteristics to assign applications to virtual machines. The thesis explores the building of profiles for applications, virtual machines and servers from real data centre workload logs. This is then used to inform static and dynamic application assignment, and consolidation of applications.
28

Zhang, Ying. "Bayesian D-Optimal Design for Generalized Linear Models". Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/30147.

Full text of the source
Abstract:
Bayesian optimal designs have received increasing attention in recent years, especially in biomedical and clinical trials. Bayesian design procedures can utilize the available prior information on the unknown parameters so that a better design can be achieved. However, a difficulty in dealing with Bayesian design is the lack of efficient computational methods. In this research, a hybrid computational method, which consists of the combination of a rough global optimum search and a more precise local optimum search, is proposed to efficiently search for Bayesian D-optimal designs for multi-variable generalized linear models. In particular, Poisson regression models and logistic regression models are investigated. Designs are examined for a range of prior distributions, and the equivalence theorem is used to verify design optimality. Design efficiency for various models is examined and compared with non-Bayesian designs. Bayesian D-optimal designs are found to be more efficient and robust than non-Bayesian D-optimal designs. Furthermore, the idea of the Bayesian sequential design is introduced and the Bayesian two-stage D-optimal design approach is developed for generalized linear models. With the incorporation of the first-stage data information into the second stage, the two-stage design procedure can improve the design efficiency and produce more accurate and robust designs. The Bayesian two-stage D-optimal designs for Poisson and logistic regression models are evaluated based on simulation studies. The Bayesian two-stage optimal design approach is superior to the one-stage approach in terms of a design efficiency criterion.
Ph. D.
Style APA, Harvard, Vancouver, ISO itp.
29

Plociennik, Kai. "From Worst-Case to Average-Case Efficiency – Approximating Combinatorial Optimization Problems". Doctoral thesis, Universitätsbibliothek Chemnitz, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-65314.

Pełny tekst źródła
Streszczenie:
In theoretical computer science, various notions of efficiency are used for algorithms. The most commonly used notion is worst-case efficiency, which is defined by requiring polynomial worst-case running time. Another commonly used notion is average-case efficiency for random inputs, which is roughly defined as having polynomial expected running time with respect to the random inputs. Depending on the actual notion of efficiency one uses, the approximability of a combinatorial optimization problem can be very different. In this dissertation, the approximability of three classical combinatorial optimization problems, namely Independent Set, Coloring, and Shortest Common Superstring, is investigated for different notions of efficiency. For the three problems, approximation algorithms are given, which guarantee approximation ratios that are unachievable by worst-case efficient algorithms under reasonable complexity-theoretic assumptions. The algorithms achieve polynomial expected running time for different models of random inputs. On the one hand, classical average-case analyses are performed, using totally random input models as the source of random inputs. On the other hand, probabilistic analyses are performed, using semi-random input models inspired by the so-called smoothed analysis of algorithms. Finally, the expected performance of well-known greedy algorithms for random inputs from the considered models is investigated. Also, the expected behavior of some properties of the random inputs themselves is considered.
Style APA, Harvard, Vancouver, ISO itp.
30

Negrea, Andrei Liviu. "Optimization of energy efficiency for residential buildings by using artificial intelligence". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI090.

Pełny tekst źródła
Streszczenie:
La consommation, en général, représente le processus d’utilisation d’un type de ressource où des économies doivent être réalisées. La consommation d’énergie est devenue l’un des principaux problèmes d’urbanisation et de crise énergétique, car l’épuisement des combustibles fossiles et le réchauffement climatique mettent en péril l’utilisation de l’énergie des plantes. Cette thèse présent une méthode d’économie d’énergie a été adoptée pour la réduction de consommation d’énergie prévu le secteur résidentiel et les maisons passives. Un modèle mathématique basé sur des mesures expérimentales a été développé pour simuler le comportement d’un laboratoire d’essai de l’UPB. Le protocole expérimental a été réalisé à la suite d’actions telles que : la construction de bases de données sur les paramètres, la collecte de données météorologiques, l’apport de flux auxiliaires tout en considérant le comportement humain. L’algorithme de contrôle-commande du système est capable de maintenir une température constante à l’intérieur du bâtiment avec une consommation minimale d’énergie. Les mesures et l’acquisition de données ont été configurées à deux niveaux différents: les données météorologiques et les données sur les bâtiments. La collection de données est faite sur un serveur qui a été mis en œuvre dans l’installation de test en cours d’exécution d’un algorithme complexe qui peut fournir le contrôle de consommation d’énergie. La thèse rapporte plusieurs méthodes numériques pour envisage la consommation d’énergie, utilisée avec l’algorithme de contrôle. Un cas expérimental basé sur des méthodes de calcul dynamiques pour les évaluations de performance énergétique de construction a été faite à Grenade, en Espagne, l’information qui a été plus tard utilisée dans cette thèse. L’estimation des paramètres R-C avec la prévision du flux de chaleur a été faite en utilisant la méthode nodal, basée sur des éléments physiques, des données d’entrée et des informations météorologiques. La prévision d’énergie de consommation présent des résultats améliorés tandis que la collecte de données IoT a été téléchargée sur une carte à base de système de tarte aux framboises. Tous ces résultats ont été stables montrant des progrès impressionnants dans la prévision de la consommation d’énergie et leur application en énergie
Consumption, in general, is the process of using a type of resource for which savings need to be made. Energy consumption has become one of the main issues of urbanization and the energy crisis, as fossil fuel depletion and global warming threaten the planet's energy use. In this thesis, an automatic energy control was developed to reduce energy consumption in residential areas and passive house buildings. A mathematical model founded on empirical measurements was developed to capture the behavior of a testing laboratory at Universitatea Politehnica din București (University Politehnica of Bucharest, Romania). The experimental protocol involved actions such as building the parameter database, collecting weather data, and accounting for auxiliary heat flows while considering the controlling factors. The control algorithm maintains a comfortable temperature within the building with minimum energy consumption. Measurements and data acquisition were set up on two different levels: weather data and building data. The collected data are gathered on a server implemented in the testing facility, which runs a complex algorithm that controls energy consumption. The thesis reports several numerical methods for estimating the energy consumption that are further used with the control algorithm. An experimental case study based on dynamic calculation methods for building energy performance assessment was carried out in Granada, Spain, and the resulting information was later used in this thesis. Estimation of the model parameters (resistances and capacities), together with prediction of the heat flow, was performed using the nodal method, based on physical elements, input data, and weather information. Prediction of energy consumption using state-space modeling shows improved results, while IoT data collection was uploaded to a Raspberry Pi system. All these results were stable, showing impressive progress in the prediction of energy consumption and its application in the energy field.
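A minimal sketch of the nodal (R-C) idea on a single thermal node, with illustrative resistance, capacitance, and control values rather than the parameters identified in the thesis:

```python
# Minimal 1R1C nodal model: C * dT/dt = (T_out - T_in) / R + Q_heater.
# R, C, the setpoint and the on/off heater are illustrative, not the fitted model.
def simulate(t_out_series, dt=300.0, R=0.005, C=2.0e7, setpoint=21.0, q_max=4000.0):
    t_in, energy_j = 20.0, 0.0
    history = []
    for t_out in t_out_series:
        q = q_max if t_in < setpoint else 0.0          # simple on/off control
        t_in += dt * ((t_out - t_in) / R + q) / C      # explicit Euler step
        energy_j += q * dt
        history.append(t_in)
    return history, energy_j / 3.6e6                   # kWh

temps, kwh = simulate([5.0] * 288)   # one day of constant 5 degC outdoor temperature
print(f"final indoor T = {temps[-1]:.2f} degC, heating energy = {kwh:.2f} kWh")
```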
Style APA, Harvard, Vancouver, ISO itp.
31

Bizkevelci, Erdal. "A Control Algorithm To Minimize Torque Ripple And Acoustic Noise Of Switched Reluctance Motors". Phd thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12609866/index.pdf.

Pełny tekst źródła
Streszczenie:
Despite their simple construction, robustness, and low manufacturing cost, the application areas of SR motors have remained limited due to high levels of acoustic noise and torque ripple. In this thesis work, two different types of controllers are designed and implemented in order to minimize acoustic noise and torque ripple, which are considered the major problems of SR motors. In this scope, the possible acoustic noise sources are investigated first. A sliding mode controller is designed and implemented to reduce the shaft torque ripple, which is considered a major source of acoustic noise. The performance of the controller is tested experimentally, and it is observed that the reduction of torque ripple is significant, especially in the low-speed region. The torque ripple minimization performance of the controller is also tested at different speeds while the acoustic noise levels are recorded simultaneously. By comparing the torque ripple reduction with the recorded noise levels, the correlation between acoustic noise and shaft torque ripple is investigated. The results indicate that torque ripple is not a major source of acoustic noise in SR motors. After this finding, the radial force, the other possible acoustic noise source of the SRM, is taken into consideration. The effects of control parameters on the radial force and the motor efficiency are investigated via simulations. With the intuition obtained from this analysis, a switching-angle neuro-controller is designed to minimize the peak level of the radial forces. The performance of this controller is verified through noise recordings under steady-state conditions. Based on the radial force simulations and the acoustic noise measurements, it is deduced that the radial force is the major source of acoustic noise. In addition, another controller is designed and implemented that increases the average torque per ampere in order to increase the efficiency of the motor. This controller is effective at increasing the efficiency but does not guarantee operation at maximum efficiency.
Style APA, Harvard, Vancouver, ISO itp.
32

Hassan, Aakash. "Improving the efficiency, power quality, and cost-effectiveness of solar PV systems using intelligent techniques". Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2023. https://ro.ecu.edu.au/theses/2676.

Pełny tekst źródła
Streszczenie:
Growing energy demand, depleting fossil fuels, and increasing environmental concerns are driving the adoption of clean and sustainable energy sources. Renewable energy sources are now believed to play a critical role in alleviating the deteriorating environment, supplying power to remote areas with no access to the grid, and overcoming the energy crisis by reducing the stress on existing power networks. Therefore, an upsurge in the development of renewables-based energy systems has been observed during the previous few decades. In particular, solar PV technology has demonstrated extraordinary growth due to readily available solar energy, technological advancement, and a decline in costs. However, its low power conversion efficiency, intermittency, high capital cost, and low power quality are the major challenges to further uptake. This research intends to enhance the overall performance of PV systems by providing novel solutions at all levels of the PV system hierarchy. The first level investigated is solar energy to PV power conversion, for which an efficient maximum power point tracking (MPPT) method is developed. Secondly, dc to ac power conversion is explored, and an optimal PV system sizing approach that abides by power quality constraints is developed. Finally, smart power management strategies are investigated to use the energy produced by solar PV efficiently, such that the minimum cost of energy can be achieved while considering various technical constraints. The methods involve a Genetic Algorithm (GA) for finding the optimal parameters, mathematical models, MATLAB/Simulink simulations of the solar PV system (including PV arrays, dc/dc converter with MPPT, batteries, dc/ac inverter, and electric load), and experimental testing of the developed MPPT method and power management strategies at the smart energy lab, Edith Cowan University. Highly dynamic weather and electricity consumption data encompassing multiple seasons are used to test the viability of the developed methods. The results show that the developed hybrid MPPT technique outperforms conventional techniques by offering a tracking efficiency above 99%, a tracking speed of less than 1 s, and almost zero steady-state oscillations under rapidly varying environmental conditions. Additionally, the developed MPPT technique can also track the global maximum power point during partial shading conditions. The analyses of power quality at the inverter's terminal voltage and current waveforms revealed that the solar PV capacity, battery size, and LC filter parameters are critical for the reliable operation of a solar PV system and may result in poor power quality, leading to system failure if not selected properly. On the other hand, the optimal system parameters found through the developed methodology can produce a solar PV system design with minimum cost and conformance to international power quality standards. The comparison between the grid-connected and stand-alone solar PV systems reveals that, for the studied case, the grid-connected system is more economical than the stand-alone system but produces higher life-cycle emissions. It was also found that, for grid-tied PV systems, the minimum cost of energy can be achieved at an optimal renewable-to-grid ratio. Additionally, applying a time-varying tariff yields a slightly lower energy cost than an anytime flat tariff.
A sensitivity analysis of the reliability index, i.e., the loss of power supply probability (LPSP), demonstrates that for stand-alone PV systems there is an inverse relationship between LPSP and the cost of energy. In contrast, for grid-connected systems, the cost of energy does not vary significantly with changes in LPSP.
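A minimal sketch, under toy assumptions, of how LPSP and capital cost could be evaluated for one candidate PV/battery sizing, the quantities a GA would then trade off; the hourly series and cost figures are invented:

```python
# Illustrative loss-of-power-supply-probability (LPSP) and capital-cost evaluation
# for a stand-alone PV + battery system; pv_kw and batt_kwh are the decision
# variables that a sizing GA would tune.  All numbers are toy values.
def lpsp_and_cost(pv_kw, batt_kwh, solar_frac, load_kwh, pv_cost=1000.0, batt_cost=300.0):
    soc, unmet, demand = batt_kwh, 0.0, 0.0
    for f, load in zip(solar_frac, load_kwh):          # hourly series
        net = pv_kw * f - load                          # surplus (+) or deficit (-) in kWh
        if net >= 0:
            soc = min(batt_kwh, soc + net)              # charge, limited by capacity
        else:
            supplied = min(soc, -net)                   # discharge what the battery holds
            soc -= supplied
            unmet += (-net) - supplied                  # remaining deficit is lost load
        demand += load
    lpsp = unmet / demand
    capital = pv_kw * pv_cost + batt_kwh * batt_cost    # simplistic capital cost
    return lpsp, capital

solar = [0, 0, 0, 0, 0, 0, .1, .3, .5, .7, .8, .9, .9, .8, .7, .5, .3, .1, 0, 0, 0, 0, 0, 0]
load = [1.0] * 24
print(lpsp_and_cost(pv_kw=4.0, batt_kwh=10.0, solar_frac=solar, load_kwh=load))
```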
Style APA, Harvard, Vancouver, ISO itp.
33

Vu, Chinh Trung. "Distributed Energy-Efficient Solutions for Area Coverage Problems in Wireless Sensor Networks". Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/cs_diss/37.

Pełny tekst źródła
Streszczenie:
Wireless sensor networks (WSNs) have recently attracted a great deal of attention due to their numerous attractive applications in many different fields. Sensors and WSNs possess a number of special characteristics that make them very promising in a wide range of applications, but they also put on them lots of constraints that make issues in sensor network particularly challenging. These issues may include topology control, routing, coverage, security, data management and many others. Among them, coverage problem is one of the most fundamental ones for which a WSN has to watch over the environment such as a forest (area coverage) or set of subjects such as collection of precious renaissance paintings (target of point coverage) in order for the network to be able to collect environment parameters, and maybe further monitor the environment. In this dissertation, we highly focus on the area coverage problem. With no assumption of sensors’ locations (i.e., the sensor network is randomly deployed), we only consider distributed and parallel scheduling methods with the ultimate objective of maximizing network lifetime. Additionally, the proposed solutions (including algorithms, a scheme, and a framework) have to be energy-efficient. Generally, we investigate numerous generalizations and variants of the basic coverage problem. Those problems of interest include k-coverage, composite event detection, partial coverage, and coverage for adjustable sensing range network. Various proposed algorithms. In addition, a scheme and a framework are also suggested to solve those problems. The scheme, which is designed for emergency alarming applications, specifies the guidelines for data and communication patterns that significantly reduce the energy consumption and guarantee very low notification delay. For partial coverage problem, we propose a universal framework (consisting of four strategies) which can take almost any complete-coverage algorithm as an input to generate an algorithm for partial coverage. Among the four strategies, two pairs of strategies are trade-off in terms of network lifetime and coverage uniformity. Extensive simulations are conducted to validate the efficiency of each of our proposed solutions.
Style APA, Harvard, Vancouver, ISO itp.
34

Costa, Luis Henrique Magalhães. "UTILIZAÇÃO DE UM ALGORITMO GENÉTICO HÍBRIDO NA OPERAÇÃO DE SISTEMAS DE ABASTECIMENTO DE ÁGUA COM ÊNFASE NA EFICIÊNCIA ENERGÉTICA". Universidade Federal do Ceará, 2010. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=4756.

Pełny tekst źródła
Streszczenie:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
COSTA, L.H.M. Utilização de um algoritmo genético híbrido na operação de sistemas de abastecimento de água com ênfase na eficiência energética. Fortaleza, 2010. 146 p. Tese (Doutorado) - Universidade Federal do Ceará, Fortaleza, 2010. Em geral, as regras operacionais dos Sistemas de Abastecimento de Água (SAAs) visam à garantia da continuidade do abastecimento público, sem a consideração da variação da tarifa energética ao longo do dia. Este fato ocasiona o aumento do custo energético gerado pelos motores das bombas em funcionamento. Entretanto, além da utilização eficiente da tarifa energética, outros aspectos devem ser considerados na operação de um SAA, tais como a gama de combinações possíveis de regras operacionais, a variação da demanda hídrica e a manutenção dos níveis dos reservatórios e das pressões nos pontos de consumo dentro de seus limites pré-estabelecidos. Isto motivou o desenvolvimento desta pesquisa, que tem como objetivo fornecer ao operador condições de operacionalidade nas estações elevatórias do sistema de forma racional, não dependendo somente de sua experiência profissional. Desta forma, apresenta-se neste trabalho um modelo computacional de apoio à tomada de decisão com vistas à minimização dos gastos com energia elétrica. Para tanto, fundamenta-se na junção da técnica dos Algoritmos Genéticos (AGs) e do simulador hidráulico EPANET. O AG é responsável pela busca de estratégias operacionais com custo energético reduzido, enquanto que a avaliação do desempenho hidráulico dessas estratégias é feita pelo EPANET. Além disso, devido à alta aleatoriedade característica dos AGs, foi incorporado ao mesmo um conjunto de algoritmos determinísticos visando tornar o processo o menos estocástico possível. Com o acoplamento destes algoritmos ao AG padrão desenvolveu-se um Algoritmo Genético Híbrido (AGH). A metodologia proposta foi avaliada por meio de três estudos de caso, sendo dois hipotéticos e um real, localizado na cidade de Ourém, em Portugal. Os resultados obtidos nos três estudos de caso demonstram a superioridade do AGH em relação ao AG padrão, tanto pelo encontro de melhores soluções, como na redução considerável do tempo computacional demandado para tal feito. Finalmente, espera-se que o desenvolvimento dessa metodologia possa contribuir para o uso de modelos de otimização na operação de SAAs em tempo real.
COSTA, L.H.M. Use of a hybrid genetic algorithm in the operation of water supply systems considering energy efficiency. Fortaleza, 2010. 146 p. Thesis (Doctorate) - Federal University of Ceará, Fortaleza, 2010. In general, operational rules applied to water distribution systems are created to assure continuity of the public water supply, without taking into account the variation of energy costs during the day. This raises the energy costs incurred by the pumps. Furthermore, besides the rational use of energy by the pumps, there are other aspects which should be considered in order to achieve an optimized operation of a water transmission system, such as the daily variation of the water demand and the requirements regarding minimum and maximum water levels in the tanks and pressures at the nodes of the water network. The objective of the present work is to develop a computer code which determines an optimized operation rule for the system that reaches minimum energy costs for the pumps. The system is based on the use of Genetic Algorithms (GA) and the hydraulic network simulator EPANET. The GA is responsible for the search for rules with low energy costs, and the hydraulic calculations are done by EPANET. Besides, one major innovation proposed by this research is the introduction of a Hybrid Genetic Algorithm, which reduces the stochastic character of the standard GA. The proposed methodology was applied to three study cases: two hypothetical and one real, located in the city of Ourém, Portugal. The results of these three study cases clearly show the superiority of the hybrid GA over the standard GA. The hybrid GA not only obtained better solutions but also took much less time to run. Finally, it is expected that the use of this methodology will lead to more real-time applications.
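A hedged sketch of the kind of GA fitness function described above: pump energy cost under a time-of-use tariff plus penalties for reservoir-level violations. Here simulate_hydraulics() is a hypothetical stand-in for the EPANET evaluation used in the thesis, and all rates are invented.

```python
# Sketch of a GA fitness used to score a 24-hour on/off pump schedule.
def simulate_hydraulics(schedule):
    """Hypothetical surrogate: returns (energy_kwh_per_hour, reservoir_levels)."""
    levels, level = [], 3.0
    for on in schedule:
        level += 0.25 if on else -0.15        # pumping fills, demand drains (toy rates)
        levels.append(level)
    return [15.0 if on else 0.0 for on in schedule], levels

def fitness(schedule, tariff, lvl_min=1.0, lvl_max=5.0, penalty=1e3):
    energy, levels = simulate_hydraulics(schedule)
    cost = sum(e * t for e, t in zip(energy, tariff))
    violations = sum(max(0, lvl_min - l) + max(0, l - lvl_max) for l in levels)
    return cost + penalty * violations        # the GA minimizes this value

tariff = [0.08] * 7 + [0.20] * 13 + [0.08] * 4        # cheap night, expensive day
schedule = [1] * 7 + [0] * 13 + [1] * 4               # pump mostly off-peak
print(fitness(schedule, tariff))
```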
Style APA, Harvard, Vancouver, ISO itp.
35

WANG, YI-NING, i 王翊寧. "Bandwidth-Efficient Fast Algorithm for High Efficiency Video Coding". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/bdp32g.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Kaohsiung First University of Science and Technology
Master's Program, Department of Computer and Communication Engineering
106
Thanks to today's rapidly developing technology, 4G/LTE mobile telecommunication has spread worldwide and driven the rapid growth of new-media industries. With higher requirements for good quality and high resolution in video and webcam applications, the bandwidth and the amount of compressed data needed to transmit video have to grow. In order to keep video performance high under efficient data compression, more complicated mathematical calculations are required. In the newest HEVC standard, the CU is quite diversified in order to match different resolution requirements as well as to support higher resolutions. Since the bandwidth for audio and video on mobile Internet devices is limited, our major target is to address the bandwidth problem of high-resolution video, that is, to narrow the bandwidth. This thesis proposes a Bandwidth-Rate-Distortion Optimization (BRDO) algorithm, which is based on Rate-Distortion Optimization. The algorithm distributes bandwidth and search area according to the size of the Rate-Distortion Cost (RDCost). It not only lowers bandwidth usage but also maintains quality and bit rate. On average, more than 56% of bandwidth usage is saved and encoding time decreases by more than 60%. The hardware architecture was implemented using Synopsys tools (Verilog, Verdi, Design Compiler, Synthesis, PrimeTime®, PrimePower®) and the TSMC 90nm CLN90G cell library. The speed of our design was 1.1 GHz under the worst-case simulation, and the power consumption was 0.873 mW.
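As a rough illustration of distributing a bandwidth or search-range budget in proportion to RD cost (the budget, limits, and costs below are hypothetical, not the BRDO parameters):

```python
# Sketch: give coding units with larger RD cost a larger share of the search budget.
def allocate_search_range(rd_costs, total_budget=256, min_range=8, max_range=64):
    total = sum(rd_costs)
    ranges = []
    for cost in rd_costs:
        share = int(round(total_budget * cost / total))   # proportional share
        ranges.append(max(min_range, min(max_range, share)))
    return ranges

print(allocate_search_range([1200.0, 300.0, 150.0, 900.0]))
```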
Style APA, Harvard, Vancouver, ISO itp.
36

Chi, Haohsien, i 紀浩仙. "A Loading-Balance Algorithm for Improving Efficiency of CORBA". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/01950800360546613013.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Chiao Tung University
Institute of Information Management
92
In traditional distributed systems, the most popular characteristic is load balancing. In CORBA, the architecture proposed by the OMG, this characteristic is also specified. In many published ORBs, vendors use additional agents to handle load balancing. However, this solution has a problem: if the agent fails, the whole system stops working. We therefore propose a simplified model in which load balancing is implemented on the client side. That is, as long as the client is alive and not all service providers have failed, the whole system keeps working with this characteristic. Through simulations and experiments, we repeatedly test the proposed model and compare it with VisiBroker, a published ORB, hoping to answer the following questions: 1. With the proposed model, is the system's performance improved? This question has two parts: one is the system's loading status, and the other is the system's response time (efficiency). 2. Compared to published CORBA software, does this model become more complicated to program? That is, this model is implemented in the application layer, whereas some software embeds it in the underlying architecture; what is the difference when coding or programming? To answer these questions, we implemented the idea and used the results of many experiments to obtain the answers.
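A minimal client-side load-balancing sketch in the spirit described above: the client keeps its own latency estimates, picks the least-loaded replica, and fails over on errors. The replica names, latency values, and probe function are hypothetical, not part of any ORB API.

```python
import random

# Client-side selection: no central agent, so no single point of failure.
replicas = {"srv-a": 0.02, "srv-b": 0.05, "srv-c": 0.03}   # moving-average latency (s)

def call_with_balancing(invoke, replicas, attempts=3):
    for _ in range(attempts):
        target = min(replicas, key=replicas.get)           # least-loaded replica
        try:
            result, elapsed = invoke(target)
            replicas[target] = 0.8 * replicas[target] + 0.2 * elapsed  # update estimate
            return result
        except ConnectionError:
            replicas[target] = float("inf")                # mark failed, fail over
    raise RuntimeError("all replicas failed")

def fake_invoke(target):                                   # stand-in for a remote call
    if target == "srv-a" and random.random() < 0.5:
        raise ConnectionError
    return f"reply from {target}", random.uniform(0.01, 0.06)

print(call_with_balancing(fake_invoke, replicas))
```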
Style APA, Harvard, Vancouver, ISO itp.
37

Lin, Jia-Zhi, i 林佳志. "Improving Clustering Efficiency by SimHash-based K-Means Algorithm". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/nv495x.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Taipei University of Technology
Graduate Institute of Computer Science and Information Engineering
102
K-Means is one of the most popular clustering methods, but it needs a lot of processing time for similarity calculation, which leads to low performance. Some studies have proposed new methods for finding better initial centroids to provide an efficient way of assigning data points to suitable clusters with reduced time complexity. However, with large amounts of data, the vector dimension becomes higher and needs even more time for similarity calculation. In this paper, we propose a SimHash-based K-Means algorithm that uses dimensionality reduction and Hamming distance to handle large amounts of data. The experimental results show that our proposed method can improve efficiency without significantly affecting effectiveness.
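A compact sketch of the idea: fingerprint each vector with a SimHash-style random-hyperplane hash, then cluster the fingerprints with K-Means-style iterations under Hamming distance. The hash width, data, and centroid update rule are illustrative, not the thesis's exact design.

```python
import random

random.seed(1)

def simhash(vec, planes):
    """Fingerprint: one bit per random hyperplane sign (random-projection hashing)."""
    bits = 0
    for i, plane in enumerate(planes):
        if sum(v * p for v, p in zip(vec, plane)) >= 0:
            bits |= 1 << i
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

def kmeans_simhash(vectors, k, n_bits=32, iters=10):
    dim = len(vectors[0])
    planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    sigs = [simhash(v, planes) for v in vectors]
    centers = random.sample(sigs, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in sigs:
            clusters[min(range(k), key=lambda c: hamming(s, centers[c]))].append(s)
        for c, members in enumerate(clusters):
            if members:                      # new center: per-bit majority vote
                centers[c] = sum(1 << b for b in range(n_bits)
                                 if sum((m >> b) & 1 for m in members) * 2 >= len(members))
    return centers, sigs

data = [[1, 0, 0], [0.9, 0.1, 0], [0, 1, 1], [0, 0.8, 1.2]]
print(kmeans_simhash(data, k=2))
```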
Style APA, Harvard, Vancouver, ISO itp.
38

Liu, Yu-Chu, i 劉又齊. "A Study of Information Hiding and Its Efficiency Algorithm". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/52702980666594969752.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Taichung Institute of Technology
Graduate Institute of Information Technology and Applications
95
Recently, protecting the intellectual property rights of digitized information has become a serious challenge. For this reason, related information hiding technologies are becoming more and more important. In accordance with different requirements, three different schemes are proposed in this thesis. The first scheme presents a new block-based authentication watermarking for verifying the integrity of binary images. The original protected image is partitioned into individual blocks. Each block obtains a hashing message by a hashing function. An exclusive-or operation is performed on the hashing message and watermark values, and the authentication information is thus embedded into the protected image. If a binary image is tampered with by random modification or a counterfeiting attack, the proposed technique can detect which locations have been altered. In many data hiding techniques, simple least-significant-bit (LSB) substitution is the general scheme used to embed secret messages in the cover image. This practice may degrade the quality of the host image, which increases the probability that malicious users will notice the existence of something within the stego-image. As a result, the optimal LSB substitution method was proposed to improve the quality of the image, but the optimal LSB substitution is not easy to find. Therefore, the second scheme proposes an efficient algorithm to solve this problem. In the second scheme, the optimal LSB substitution problem is regarded as a general assignment problem, and the Hungarian algorithm is then used to find the actual optimal LSB substitution. The proposed scheme also does not need a great deal of memory space. The third scheme proposes an effective reversible steganographic technique. The main concept is to exploit the similarity of neighboring pixels. In the proposed scheme, the cover image is divided into non-overlapping groups of neighboring pixels. An error value is then computed for each group, from which a complete error table is derived. The frequency of each error value is summed up, allowing the construction of the error histogram. Finally, the histogram shift scheme is used to hide data. The experimental results show that, with the proposed scheme, both the payload size and the cover image quality are clearly better than with the original histogram shift scheme.
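For the second scheme, a hedged sketch of optimal k-bit LSB substitution posed as an assignment problem and solved with the Hungarian method via SciPy; the cover pixels and secret symbols are toy random data, not the thesis's test images.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Optimal k-bit LSB substitution as an assignment problem (sketch).
# cost[s][t] is the total squared error if every secret symbol s is embedded as t.
k = 2
levels = 2 ** k
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=1000)
secret = rng.integers(0, levels, size=1000)

cost = np.zeros((levels, levels))
for s in range(levels):
    pixels = cover[secret == s]
    for t in range(levels):
        stego = (pixels & ~(levels - 1)) | t          # replace the k LSBs with t
        cost[s, t] = np.sum((stego.astype(int) - pixels.astype(int)) ** 2)

rows, cols = linear_sum_assignment(cost)              # Hungarian method
mapping = dict(zip(rows.tolist(), cols.tolist()))     # optimal substitution s -> t
print(mapping, "total distortion:", cost[rows, cols].sum())
```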
Style APA, Harvard, Vancouver, ISO itp.
39

Chen, chi-sheng, i 陳智聖. "The Algorithm of Constant Efficiency Tracking for Fast Charging". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/28010529108535118376.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Chiao Tung University
Department of Electrical and Control Engineering
98
With the growth of portable electronic devices, lithium batteries play an important role in power management systems. In order to maximize the performance of lithium batteries, high charging efficiency and short charging time are required. Today, the main charging method for lithium batteries is the constant current-constant voltage (CC-CV) method, but it cannot meet the requirement of fast charging. This thesis presents a fast charging method that improves charging speed at the cost of a minimal loss of charging efficiency. First, we investigate the relationship between battery equivalent models and charging efficiency; then we control the charging efficiency to obtain the optimum charging current. Using the proposed algorithm, the charging time improves by 12.4%, while the charging efficiency decreases by only 0.73%.
Style APA, Harvard, Vancouver, ISO itp.
40

Chen, Ting-An, i 陳亭安. "Applying Advanced Operators to Improve the Efficiency of Genetic Algorithm". Thesis, 1999. http://ndltd.ncl.edu.tw/handle/98963780385941744746.

Pełny tekst źródła
Streszczenie:
Master's thesis
Tamkang University
Department of Electrical Engineering
87
The Genetic Algorithm is a very important and effective optimizer because of its global searching capability. In this decade, Genetic Algorithms have been applied to various problems in many disciplines. In general, the search result does not depend on the initial guess, since the GA searches multiple points simultaneously: three operators (selection, crossover, and mutation) are applied to a randomly generated initial population consisting of many individuals to achieve the goal of survival of the fittest. However, the price paid for the multiple-point searching scheme is increased computation time. Hence, various techniques are continuously proposed to improve the computational efficiency, which is quite important for the GA. In this thesis, non-uniform probability density functions are employed in the crossover and mutation operators of the GA during the course of the search to improve computational efficiency. The capability of escaping from local optima is improved such that the global optimum can be reached more easily. In addition, the convergence speed is also increased. Consider the fact that the parameters are encoded during the course of optimization using the GA. After encoding, the leftmost bit is the most significant bit (MSB), while the rightmost bit is the least significant bit (LSB). The correctness of the bits near the MSB determines the correctness of the parameters, while the correctness of the bits near the LSB only determines the precision of the parameters. On the other hand, changes to the bits near the MSB imply a large-range search in parameter space, while changes to the bits near the LSB imply a small-range search in parameter space. In the crossover and mutation operators of a classical GA, this weighting difference between bits is not recognized or implemented; that is, the probability of crossover and mutation is the same for every bit. In this thesis, non-uniform probability density functions are introduced for the crossover and mutation operators. One objective is to enhance the crossover and mutation probability for the bits near the MSB region when the best individual of the current generation is still far from the global optimum region; this increases the GA's capability of escaping from local optima. The other objective is to enhance the crossover and mutation probability for the bits near the LSB region when the best individual of the current generation is near the global optimum region; this increases the convergence speed. To achieve these objectives, mechanisms are required to suitably shift the probability density functions. Therefore, two mechanisms, called the Cyclical GA and the Adaptive GA, are proposed in this thesis, and their efficiency improvements are verified. We found that both GAs work for different testing functions, including those that are hard for a classical GA to converge on.
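A minimal sketch of a bit-position-dependent mutation operator in this spirit: early in the run the probability mass sits near the MSB, and it shifts toward the LSB as the search converges. The linear ramp below is an illustrative choice, not the Cyclical or Adaptive GA rule.

```python
import random

random.seed(0)

def mutate(bits, progress, base_rate=0.05):
    """Non-uniform mutation: progress ~ 0 favors MSB flips (coarse search),
    progress ~ 1 favors LSB flips (fine tuning)."""
    n = len(bits)
    out = list(bits)
    for i in range(n):                       # i = 0 is the MSB
        weight_msb = (n - i) / n             # high for MSB, low for LSB
        weight = (1 - progress) * weight_msb + progress * (1 - weight_msb)
        if random.random() < base_rate * 2 * weight:
            out[i] = 1 - out[i]
    return out

chromosome = [0, 1, 1, 0, 1, 0, 0, 1]
print("early:", mutate(chromosome, progress=0.1))
print("late :", mutate(chromosome, progress=0.9))
```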
Style APA, Harvard, Vancouver, ISO itp.
41

LIN, WEN-BIN, i 林文斌. "A study for improving the efficiency of Frank-Wolfe algorithm". Thesis, 1992. http://ndltd.ncl.edu.tw/handle/35315928245004791342.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Chiao Tung University
Institute of Civil Engineering
80
The Frank-Wolfe algorithm is one of the solution methods for convex nonlinear programming problems, and it is also the method generally used to solve the equilibrium assignment problem for traffic networks. The main drawback of this algorithm is its slow convergence. To address this weakness, Fukushima (1984), LeBlanc (1985), and Weintraub (1985) have modified the algorithm, but this study argues that considerable room for improvement remains and therefore carries the research further. Under stricter convergence requirements, this study improves the computational efficiency of the Frank-Wolfe algorithm in two ways: (1) a complete analysis of Fukushima's method to identify a more suitable strategy, and (2) a combination of the different improvements of Weintraub and Fukushima. Finally, computer tests on the network traffic assignment problem demonstrate the computational efficiency gained by the improvement strategies proposed in this study.
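A compact Frank-Wolfe iteration on a small convex quadratic, showing the linearized subproblem plus line search that the improvements above accelerate; in traffic assignment the linear subproblem is an all-or-nothing shortest-path loading rather than a simplex vertex. The problem data below are illustrative.

```python
import numpy as np

# Frank-Wolfe for min 0.5*x'Qx - b'x over the simplex {x >= 0, sum x = 1}.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 1.5])
x = np.array([0.5, 0.5])

for k in range(50):
    grad = Q @ x - b
    s = np.eye(2)[np.argmin(grad)]          # vertex minimizing the linearization
    d = s - x
    # exact line search for a quadratic: gamma = -grad.d / (d'Qd), clipped to [0, 1]
    denom = d @ Q @ d
    gamma = 1.0 if denom <= 0 else min(1.0, max(0.0, -(grad @ d) / denom))
    x = x + gamma * d

print("solution:", x, "objective:", 0.5 * x @ Q @ x - b @ x)
```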
Style APA, Harvard, Vancouver, ISO itp.
42

林詩凱. "Improving AODV Route Protocol Efficiency with Compromised Route Selection Algorithm". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/79852668330947717872.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Taiwan Normal University
Graduate Institute of Mechatronic Technology
96
A mobile ad hoc network (MANET) is formed by a group of wireless devices (nodes) that can move quickly and for which no centralized management mechanism is available. Communication between mobile nodes is accomplished via nearby mobile hosts exchanging messages. Given limited resources such as network bandwidth, memory capacity, and battery power, the efficiency of the routing scheme in ad hoc networks becomes even more important and challenging. In mobile ad hoc networks, most nodes are mobile, and the routing path may be changed or disrupted quite often due to the movement of hosts on the path. Therefore, finding a reliable routing path is an important issue. In this thesis, a signal-strength coefficient, a node-power coefficient, and a busy-condition coefficient are calculated during route selection when a route request packet establishes a transmission path. A route selection value is calculated from these three coefficients to choose a stable routing path and a backup path. This reduces the break time and latency period and increases the packet arrival rate. The method developed in this thesis can choose a path with high stability and reduce the overall network load. The simulation results show that overall network performance is improved.
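A hedged sketch of combining the three coefficients into a single route-selection value and keeping a backup route; the weights and per-hop fields are illustrative, not the thesis's calibrated coefficients.

```python
# Score each candidate route by signal strength, residual node power, and busyness
# of its hops, then keep the best route plus a backup.
routes = {
    "route-1": [(0.9, 0.8, 0.2), (0.7, 0.9, 0.1)],                  # (signal, power, busy) per hop
    "route-2": [(0.6, 0.5, 0.4), (0.8, 0.6, 0.3), (0.9, 0.9, 0.2)],
}

def route_score(hops, w_sig=0.4, w_pow=0.4, w_busy=0.2):
    # A route is only as strong as its weakest hop, so take the minimum hop score.
    return min(w_sig * s + w_pow * p + w_busy * (1 - b) for s, p, b in hops)

ranked = sorted(routes, key=lambda r: route_score(routes[r]), reverse=True)
primary, backup = ranked[0], ranked[1] if len(ranked) > 1 else None
print("primary:", primary, "backup:", backup)
```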
Style APA, Harvard, Vancouver, ISO itp.
43

Cosgaya, Lozano Adan Jose. "Engineering Algorithms for Solving Geometric and Graph Problems on Large Data Sets". 2011. http://hdl.handle.net/10222/13324.

Pełny tekst źródła
Streszczenie:
This thesis focuses on the engineering of algorithms for massive data sets. In recent years, massive data sets have become ubiquitous and existing computing applications, for the most part, cannot handle these data sets efficiently: either they crash or their performance degrades to a point where they take unacceptably long to process the input. Parallel computing and I/O-efficient algorithms provide the means to process massive amounts of data efficiently. The work presented in this thesis makes use of these techniques and focuses on obtaining practically efficient solutions for specific problems in computational geometry and graph theory. We focus our attention first on skyline computations. This problem arises in decision-making applications and has been well studied in computational geometry and also by the database community in recent years. Most of the previous work on this problem has focused on sequential computations using a single processor, and the algorithms produced are not able to efficiently process data sets beyond the capacity of main memory. Such massive data sets are becoming more common; thus, parallelizing the skyline computation and eliminating the I/O bottleneck in large-scale computations is increasingly important in order to retrieve the results in a reasonable amount of time. Furthermore, we address two fundamental problems of graph analysis that appear in many application areas and which have eluded efforts to develop theoretically I/O-efficient solutions: computing the strongly connected components of a directed graph and topological sorting of a directed acyclic graph. To approach these problems, we designed algorithms, developed efficient implementations and, using extensive experiments, verified that they perform well in practice. Our solutions are based on well understood algorithmic techniques. The experiments show that, even though some of these techniques do not lead to provably efficient algorithms, they do lead to practically efficient heuristic solutions. In particular, our parallel algorithm for skyline computation is based on divide-and-conquer, while the strong connectivity and topological sorting algorithms use techniques such as graph contraction, the Euler technique, list ranking, and time-forward processing.
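A minimal sequential sketch of the 2-D skyline computation by divide and conquer, the in-memory building block that the dissertation parallelizes and makes I/O-efficient; the point set is a toy example.

```python
# 2-D skyline (minimize both coordinates) by divide and conquer.
def skyline(points):
    pts = sorted(points)                     # sort by x, then y
    def solve(lo, hi):
        if hi - lo <= 1:
            return pts[lo:hi]
        mid = (lo + hi) // 2
        left, right = solve(lo, mid), solve(mid, hi)
        best_y = min(y for _, y in left)     # every left point has smaller or equal x
        return left + [(x, y) for x, y in right if y < best_y]
    return solve(0, len(pts))

data = [(1, 9), (2, 4), (3, 7), (4, 3), (5, 6), (6, 2), (7, 8)]
print(skyline(data))   # points not dominated in both dimensions
```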
Style APA, Harvard, Vancouver, ISO itp.
44

Chen, Cheng-Hao, i 陳正浩. "A Fast CU Size Decision Algorithm for High Efficiency Video Coding". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/27k54a.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering
103
High Efficiency Video Coding (HEVC) is the newest video coding standard. It provides better compression performance than existing standards. HEVC adopts a quad-tree structure that allows recursive splitting into four equally sized nodes, starting from the Coding Tree Unit (CTU). The quad-tree structure yields better compression efficiency but requires higher computational complexity. In order to reduce the computational complexity, we propose a fast CU size decision algorithm. The proposed algorithm consists of an adaptive depth range and an early pruning test. First, we use an adaptive depth range instead of a fixed depth range for CTU encoding. Then, for each CU, the early pruning test is performed at each depth level according to Bayes' rule based on the full RD costs. Compared with the HEVC test model 12.0 (HM 12.0), experimental results show that the proposed method reduces encoding time by 60.11% on average, with a 2.4% bitrate increase and a 0.1 dB Y-PSNR loss.
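A sketch of a Bayes-rule early-pruning test of this kind, assuming Gaussian class-conditional RD-cost statistics gathered offline; all numbers are illustrative, not the statistics collected in the thesis.

```python
import math

# Offline statistics for "CU should split" vs. "CU should not split" (illustrative).
stats = {
    "split":    {"mean": 9000.0, "std": 2000.0, "prior": 0.45},
    "no_split": {"mean": 4000.0, "std": 1500.0, "prior": 0.55},
}

def gaussian(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def prune_split(rd_cost, threshold=0.8):
    like = {c: gaussian(rd_cost, s["mean"], s["std"]) * s["prior"] for c, s in stats.items()}
    posterior_no_split = like["no_split"] / (like["split"] + like["no_split"])
    return posterior_no_split > threshold      # True -> skip deeper depth levels

for cost in (3000.0, 6000.0, 12000.0):
    print(cost, "prune" if prune_split(cost) else "keep splitting")
```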
Style APA, Harvard, Vancouver, ISO itp.
45

Wu, Sheng-Yi, i 吳昇益. "Using modified Dijkstra’s algorithm to improve the movement efficiency of robocar". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/97337776645458917800.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Yang-Ming University
Institute of Biomedical Engineering
101
In recent years, telehealthcare has become very popular. Because telehealthcare can keep a watchful eye on the condition of patients or elderly people and can be accessed anytime, anywhere, and from any device, it becomes an ongoing nursing activity. Based on this concept, we built an indoor positioning system using RFID Cartesian grids, which can guide a robocar to a designated location and then assess the patient's circumstances. In this setting, many factors determine whether the destination can be reached, such as location awareness, path finding, and path conditions. In this study, we first introduce passive RFID tags to act as landmarks for location awareness. These landmarks can not only be read by the robocar to determine its present location but also reduce the computing time of the path-finding search. For path planning, we propose an improved graph-based algorithm that takes obstacle avoidance and fewer veers into consideration to generate an efficient navigation path. We tested the efficiency of different path-finding algorithms on the designated map, including Dijkstra's algorithm, a collision-free algorithm (CFA) based on Dijkstra, and our proposed method. Comparing Dijkstra's algorithm with the CFA approach, Dijkstra's algorithm finds the shortest path but collisions occur easily; although the CFA approach increases the distance by 3%, it ensures a collision-free condition. On the other hand, comparing our proposed approach with the CFA approach, our method increases the cruising distance beyond that of CFA because it is not a shortest path. However, the veering angles are adopted to adapt the edge weights to the conditions of robocar cruising, and our results show that the ideal shortest path does not give the minimum time to reach the destination in a practical environment.
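A minimal sketch of a turn-penalty (veering-angle) variant of Dijkstra on a grid, illustrating the weighting idea; the grid, obstacles, and penalty values are invented, not the thesis's map or calibration.

```python
import heapq

# The search state includes the heading, and each move pays a veering penalty on
# top of the step cost, so straighter routes are preferred.
grid_w, grid_h = 6, 4
obstacles = {(2, 1), (2, 2), (4, 2)}
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def turn_penalty(prev, new):
    if prev is None or prev == new:
        return 0.0                       # straight ahead
    if (prev[0] + new[0], prev[1] + new[1]) == (0, 0):
        return 2.0                       # U-turn
    return 0.5                           # 90-degree veer

def plan(start, goal):
    pq = [(0.0, start, None)]            # (cost, position, heading)
    best = {}
    while pq:
        cost, pos, heading = heapq.heappop(pq)
        if pos == goal:
            return cost
        if best.get((pos, heading), float("inf")) <= cost:
            continue
        best[(pos, heading)] = cost
        for dx, dy in MOVES:
            nxt = (pos[0] + dx, pos[1] + dy)
            if 0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h and nxt not in obstacles:
                step = 1.0 + turn_penalty(heading, (dx, dy))
                heapq.heappush(pq, (cost + step, nxt, (dx, dy)))
    return float("inf")

print(plan((0, 0), (5, 3)))
```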
Style APA, Harvard, Vancouver, ISO itp.
46

Chiu, Yi-Chun, i 邱意淳. "High-Efficiency Prony-Based Algorithm for Time-Varying Power Signal Estimation". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/52473577196075665060.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Changhua University of Education
Department of Mechatronics Engineering
99
With the widespread use of nonlinear loads in the power system, harmonic distortion causes serious pollution of power quality. Besides, the power imbalance between generation and load demand makes the fundamental frequency vary with time. These disturbances may introduce operational problems in power system equipment. Therefore, improving power quality has become a great concern for both utilities and customers. Frequency-domain methods have been widely used for signal processing because of their computational efficiency. In addition, most power meters adopt an FFT-based algorithm to analyze harmonics and to show frequency spectra. However, the FFT-based algorithm is less accurate if the system frequency varies and the frequency resolution decreases; the analysis results will show errors caused by the leakage and picket-fence effects. Therefore, how to achieve both high resolution and efficiency is worth investigating. Accordingly, this thesis proposes an improved Prony-based algorithm for harmonics and interharmonics measurement. Not only is the calculation time reduced, but the results are also more accurate, even if the power signals contain frequency variations and non-integer harmonic components. Finally, the thesis applies LabVIEW and dedicated hardware to design a simple setup for measuring power quality signals. The performance of the improved algorithm is validated by testing synthesized and actual signals. Key Words: Harmonics, System Frequency Variation, Fast Fourier Transform, Prony's Method, LabVIEW
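A compact Prony's-method sketch on a synthetic two-tone signal: fit a linear predictor, take the roots of its characteristic polynomial for the frequencies, then solve a least-squares problem for the amplitudes. The model order, sampling rate, and signal are toy values, not the thesis's measurement setup.

```python
import numpy as np

fs, p, n = 1000.0, 4, 200
t = np.arange(n) / fs
x = 1.0 * np.cos(2 * np.pi * 49.7 * t) + 0.3 * np.cos(2 * np.pi * 150.0 * t + 0.5)

# Linear prediction: x[k] = -a1*x[k-1] - ... - ap*x[k-p]
A = np.column_stack([x[p - i - 1:n - i - 1] for i in range(p)])
a = np.linalg.lstsq(A, -x[p:], rcond=None)[0]
roots = np.roots(np.concatenate(([1.0], a)))           # characteristic polynomial roots
freqs = np.angle(roots) * fs / (2 * np.pi)

# Amplitudes via least squares on the matrix of root powers.
V = np.vander(roots, N=n, increasing=True).T
amps = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
for f, h in sorted(zip(freqs, np.abs(amps))):
    if f >= 0:
        print(f"{f:7.2f} Hz  amplitude {2 * h:.3f}")    # pair of conjugate roots per tone
```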
Style APA, Harvard, Vancouver, ISO itp.
47

Li, Yu-Lin, i 李育霖. "Adaptive Traffic Indication Algorithm for Energy Efficiency in IEEE 802.16e Systems". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/84550158708172448917.

Pełny tekst źródła
Streszczenie:
Master's thesis
Chang Gung University
Graduate Institute of Computer Science and Information Engineering
96
The efficiency of the power saving mechanism in wireless communications determines how long a mobile station (MSS) can operate. Due to the centralized control in WiMAX systems, the sleeping period of each subscriber is determined by the base station (BS) based on its service types, traffic loads, and expected sleeping periods. The power saving mechanism uses an exponential backoff sleeping window to determine the sleeping period of each MSS. Some recent studies optimize the sleeping period by estimating the packet inter-arrival time to improve energy efficiency. However, those mechanisms do not reflect the relationship between the traffic load and the available bandwidth. That is, depending on the available bandwidth and the priorities of the connections, lower-priority connections may not receive data immediately and thus waste energy while waiting. Thus, in this paper, we propose an adaptive traffic indication algorithm (ATIA) that lets the MSS extend its sleep when bandwidth is unavailable, and we present an adaptive sleeping window adjustment scheme that trades delay against energy consumption. Simulation results show that ATIA increases the degree of power saving in comparison with IEEE 802.16e; furthermore, ATIA can be combined with other power saving mechanisms and still performs well.
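A minimal sketch of an adaptive sleep-window update in this spirit: exponential backoff while idle, reset on served traffic, and extended sleep when traffic is indicated but bandwidth is unavailable. The parameters and the bandwidth rule are illustrative, not the exact ATIA algorithm.

```python
def next_sleep_window(current, traffic_indicated, bandwidth_available,
                      w_min=2, w_max=1024, extend_factor=2):
    if traffic_indicated and bandwidth_available:
        return w_min                                   # wake up, restart from minimum window
    if traffic_indicated and not bandwidth_available:
        return min(w_max, current * extend_factor)     # data waiting but unserved: keep sleeping
    return min(w_max, current * 2)                     # no traffic: exponential backoff

w = 2
for frame in range(6):
    w = next_sleep_window(w, traffic_indicated=(frame == 4), bandwidth_available=False)
    print(f"frame {frame}: sleep window = {w}")
```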
Style APA, Harvard, Vancouver, ISO itp.
48

Lin, Li-Jyun, i 林豊鈞. "Energy-Efficiency Scheduling Algorithm for Multiframe Real-Time Tasks in DVS Processor". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/42524271524671748259.

Pełny tekst źródła
Streszczenie:
Master's thesis
National University of Kaohsiung
Master's Program, Department of Computer Science and Information Engineering
100
Embedded systems with a video decoder have become a new trend due to mobile multimedia applications and the consumer electronic products used in daily life. Considering low cost and high efficiency when an embedded system plays MPEG video, users require a proper quality of service. However, the amount of encoded data in each frame affects the processing time. If the maximum execution times of tasks are used for the schedulability test, the quality of service of the system can be guaranteed, but this results in higher energy consumption. Reducing the total energy consumption is therefore an important issue. In this thesis, we propose an EDF-based real-time scheduling algorithm for the multiframe task model that takes energy consumption into account. A simulation model is built to investigate the performance of the proposed approach. The capability of the proposed approach is evaluated through a series of simulations, with encouraging results.
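A hedged sketch of pairing EDF schedulability with DVS frequency selection for multiframe tasks, using the worst-case frame in a simple utilization test; this is a sufficient condition only, not the algorithm developed in the thesis, and the task set is invented.

```python
freqs = [0.4, 0.6, 0.8, 1.0]                      # normalized DVS frequency levels
tasks = [                                          # (frame execution times at f = 1.0, period)
    ([2.0, 1.0, 1.0], 10.0),
    ([3.0, 2.0], 15.0),
    ([1.0], 5.0),
]

def lowest_feasible_frequency(tasks, freqs):
    for f in sorted(freqs):                        # try the slowest (cheapest) speed first
        util = sum(max(frames) / f / period for frames, period in tasks)
        if util <= 1.0:                            # EDF utilization bound
            return f
    return None

print("run at normalized frequency:", lowest_feasible_frequency(tasks, freqs))
```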
Style APA, Harvard, Vancouver, ISO itp.
49

Fang, Han-Chiou, i 方瀚萩. "Fast Intra Prediction Algorithm and Design for High Efficiency Video Coding". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/46070691957728794785.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Chiao Tung University
Department of Electronics Engineering, Institute of Electronics
103
Compared to the previous video standard, H.264, High Efficiency Video Coding (HEVC) has significantly higher computational complexity because of more PU size types and more intra prediction modes. To meet real-time encoding demands, this thesis proposes a fast intra prediction algorithm and its hardware design. The fast algorithm can be divided into two parts. The first part is a fast intra prediction unit (PU) size selection that uses gradient-weight-controlled block size selection to reduce the candidate PU sizes to two; these two PU sizes are further reduced to one, based on the SATD distribution, for additional complexity reduction. The required intra prediction modes are reduced by almost half by a simple three-step algorithm. The simulation results show that the proposed algorithm can save 79% of encoding time on average for the all-intra main case compared to the default encoding scheme in HM-9.0rc1, with a 3.9% BD-rate increase. With TSMC 90 nm CMOS technology and a 270 MHz operating frequency, the gate count of this work is about 224.608K and the memory usage is 1.762 Kbytes, supporting 4k×2k 30 fps video encoding.
Style APA, Harvard, Vancouver, ISO itp.
50

Yao, Chiao Yin, i 姚喬尹. "Improving the Efficiency of the Apriori Algorithm for Mining Association Rules". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/65311017636604670428.

Pełny tekst źródła
Streszczenie:
Master's thesis
Southern Taiwan University of Science and Technology
Department of Information Management
98
With the development of information technology, enterprises have many ways to obtain information and can store large numbers of transactions and records in databases. How to find useful information in a database has become a subject to which enterprises pay close attention. Association rule mining is a common data mining technique. With the development of Internet technology and the globalization of business, enterprise transaction databases are constantly changing, and in order to keep mining results accurate on a dynamic database, traditional methods must repeatedly re-mine the data; they generate too many redundant candidate itemsets, which leads to too many database scans; and they scan redundant transaction data because they cannot tell which transactions contain which items. These are the weaknesses of the traditional Apriori algorithm when mining association rules in a dynamic database. This research improves the Apriori process. The proposed algorithm transforms the database from a horizontal to a vertical layout, which avoids scanning redundant transaction data: counting any itemset then requires scanning only two transaction-ID lists rather than the whole database, which increases mining efficiency. The candidate itemset generation process of Apriori is also improved to avoid generating too many candidate itemsets, further increasing mining efficiency. Appropriate update methods are also proposed so that the algorithm can be applied to dynamic databases in real time and correctly, to fit business needs and provide immediate and accurate support for important decision-making.
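A compact sketch of the horizontal-to-vertical transformation and TID-list intersection described above, with toy transactions; the join rule and threshold are illustrative of the general vertical-layout idea, not the thesis's exact procedure.

```python
from itertools import combinations

# Convert transactions to item -> TID lists, then count the support of any itemset
# by intersecting TID lists instead of rescanning the whole database.
transactions = {1: {"a", "b", "c"}, 2: {"a", "c"}, 3: {"a", "d"}, 4: {"b", "c", "e"}}
min_support = 2

tidlists = {}
for tid, items in transactions.items():            # horizontal -> vertical layout
    for item in items:
        tidlists.setdefault(item, set()).add(tid)

frequent = {frozenset([i]): tids for i, tids in tidlists.items() if len(tids) >= min_support}
level = frequent
while level:
    nxt = {}
    for (s1, t1), (s2, t2) in combinations(level.items(), 2):
        union = s1 | s2
        if len(union) == len(s1) + 1:               # join itemsets differing by one item
            tids = t1 & t2                           # support by TID-list intersection
            if len(tids) >= min_support and union not in nxt:
                nxt[union] = tids
    frequent.update(nxt)
    level = nxt

for itemset, tids in sorted(frequent.items(), key=lambda kv: (len(kv[0]), sorted(kv[0]))):
    print(sorted(itemset), "support =", len(tids))
```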
Style APA, Harvard, Vancouver, ISO itp.
