Dissertations / Theses on the topic 'Algorithm efficiency'
Consult the top 50 dissertations / theses for your research on the topic 'Algorithm efficiency.'
Morgan, Wiley Spencer. "Increasing the Computational Efficiency of Combinatoric Searches." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6528.
Batbayar, Batsukh. "Improving Time Efficiency of Feedforward Neural Network Learning." RMIT University, Electrical and Computer Engineering, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090303.114706.
Freund, Robert M. "Theoretical Efficiency of A Shifted Barrier Function Algorithm for Linear Programming." Massachusetts Institute of Technology, Operations Research Center, 1989. http://hdl.handle.net/1721.1/5185.
Khudhair, Ali Dheyaa. "A Simplified Routing Algorithm for Energy Efficiency in Wireless Sensor Networks." Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1885751071&sid=8&Fmt=2&clientId=1509&RQT=309&VName=PQD.
Lindberg, Joakim, and Martin Steier. "Efficiency of the hybrid AC3-tabu search algorithm for solving Sudoku puzzles." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166421.
Chen, Daven, 1959. "COMPARISON OF SCIRTSS EFFICIENCY WITH D-ALGORITHM APPLICATION TO ITERATIVE NETWORKS (TEST)." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/275572.
Burger, Christoph. "Propeller performance analysis and multidisciplinary optimization using a genetic algorithm." Auburn, Ala., 2007. http://repo.lib.auburn.edu/2007%20Fall%20Dissertations/Burger_Christoph_57.pdf.
Selek, I. (István). "Novel evolutionary methods in engineering optimization—towards robustness and efficiency." Doctoral thesis, University of Oulu, 2009. http://urn.fi/urn:isbn:9789514291579.
Kassa, Hailu Belay, Shenko Chura Aredo, and Estifanos Yohannes Menta. "ENERGY EFFICIENT ADAPTIVE SECTOR-BASED USER CLUSTERING ALGORITHM FOR CELLULAR NETWORK." International Foundation for Telemetering, 2016. http://hdl.handle.net/10150/624220.
Silva, Cauane Blumenberg. "Adaptive tiling algorithm based on highly correlated picture regions for the HEVC standard." Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/96040.
Full textThis Master Thesis proposes an adaptive algorithm that is able to dynamically choose suitable tile partitions for intra- and inter-predicted frames in order to reduce the impact on coding efficiency arising from such partitioning. Tiles are novel parallelismoriented tools that integrate the High Efficiency Video Coding (HEVC) standard, which divide the frame into independent rectangular regions that can be processed in parallel. To enable the parallelism, tiles break the coding dependencies across their boundaries leading to coding efficiency impacts. These impacts can be even higher if tile boundaries split highly correlated picture regions, because most of the coding tools use context information during the encoding process. Hence, the proposed algorithm clusters the highly correlated picture regions inside the same tile to reduce the inherent coding efficiency impact of using tiles. To wisely locate the highly correlated picture regions, image characteristics and encoding information are analyzed, generating partitioning maps that serve as the algorithm input. Based on these maps, the algorithm locates the natural context break of the picture and defines the tile boundaries on these key regions. This way, the dependency breaks caused by the tile boundaries match the natural context breaks of a picture, then minimizing the coding efficiency losses caused by the use of tiles. The proposed adaptive tiling algorithm, in some cases, provides over 0.4% and over 0.5% of BD-rate savings for intra- and inter-predicted frames respectively, when compared to uniform-spaced tiles, an approach which does not consider the picture context to define the tile partitions.
Nalluri, Purnachand. "A fast motion estimation algorithm and its VLSI architecture for high efficiency video coding." Doctoral thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/15442.
Video coding has been used in applications like video surveillance, video conferencing, video streaming, video broadcasting and video storage. In a typical video coding standard, many algorithms are combined to compress a video. However, one of those algorithms, motion estimation, is the most complex task. Hence, it is necessary to implement this task in real time by using appropriate VLSI architectures. This thesis proposes a new fast motion estimation algorithm and its implementation in real time. The results show that the proposed algorithm and its motion estimation hardware architecture outperform the state of the art. The proposed architecture operates at a maximum frequency of 241.6 MHz and is able to process 1080p@60Hz video with all possible variable block sizes specified in the HEVC standard, as well as with a motion vector search range of up to ±64 pixels.
Défossez, Gautier. "Le système d'information multi-sources du Registre général des cancers de Poitou-Charentes. Conception, développement et applications à l'ère des données massives en santé." Thesis, Poitiers, 2021. http://theses.univ-poitiers.fr/64594/2021-Defossez-Gautier-These.
Population-based cancer registries (PBCRs) are internationally regarded as the best tool to provide a comprehensive (unbiased) picture of the weight, incidence and severity of cancer in the general population. Their work in classifying and coding diagnoses according to international rules gives the final data a specific quality and comparability in time and space, thus building a decisive knowledge base for describing the evolution of cancers and their management in an uncontrolled environment. Cancer registration is based on a thorough investigative process, whose complexity is largely related to the ability to access all the relevant data concerning the same individual and to gather them efficiently. Created in 2007, the General Cancer Registry of Poitou-Charentes (RGCPC) is a recent-generation cancer registry, started at a time conducive to reflection on how to optimize the registration process. Driven by the computerization of medical data and the increasing interoperability of information systems, the RGCPC has experimented over 10 years with a multi-source information system combining innovative methods of information processing and representation, based on the reuse of standardized data usually produced for other purposes. In a first section, this work presents the founding principles and the implementation of a system capable of gathering large amounts of highly qualified and structured data, with semantic alignment to support algorithmic approaches. Data are collected on a multiannual basis from 110 partners representing seven data sources (clinical, biological and medico-administrative data). Two algorithms assist the cancer registrar by dematerializing the manual tasks usually carried out prior to tumor registration. A first algorithm automatically generates the tumors and their various components (publication), and a second algorithm represents the care pathway of each individual as an ordered sequence of time-stamped events that can be accessed within a secure interface (publication). Supervised machine learning techniques are experimented with to work around the possible lack of codification of pathology reports (publication). The second section focuses on the wide field of research and evaluation opened up by the availability of this integrated information system. Data linkage with other datasets was tested, within the framework of regulatory authorizations, to enhance the contextualization and knowledge of care pathways, and thus to support the strategic role of PBCRs for real-life evaluation of care practices and health services research (proof of concept): screening, molecular diagnosis, cancer treatment, pharmacoepidemiology (four main publications). Data from the RGCPC were linked with those from the REIN registry (chronic end-stage renal failure) as a use case for experimenting with a prototype platform dedicated to the collaborative sharing of massive health data (publication). The last section of this work proposes an open discussion of the relevance of the proposed solutions to the requirements of quality, cost and transferability, and then sets out the prospects and expected benefits in the fields of surveillance, evaluation and research in the era of big data.
Potter, Christopher C. J. "Kernel Selection for Convergence and Efficiency in Markov Chain Monte Carlo." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/249.
Schimuneck, Matias Artur Klafke. "Adaptive Monte Carlo algorithm to global radio resources optimization in H-CRAN." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/169922.
By 2020, cellular networks are expected to cover a 10-fold larger area, support 100-fold more user equipment, and increase data rate capacity 1000-fold in comparison with current cellular networks. The dense deployment of small cells is considered a promising solution to reach such aggressive improvements, since it moves the antennas closer to the users, achieving higher data rates due to the signal quality at short distances. However, operating a massive number of antennas can significantly increase the energy consumption of the network infrastructure. Furthermore, the large-scale insertion of new radios brings greater spectral interference between the cells. In this scenario, the optimal management of radio resources becomes demanding due to its impact on the quality of service provided to the users. For example, low transmission powers can leave users without connection, while high transmission powers can contribute to inter-radio interference. Furthermore, interference can also rise from the unplanned reuse of radio resources, resulting in low data transmission per radio resource, as the under-reuse of radio resources limits the overall data transmission capacity. A solution to control the transmission power, assign the spectral radio resources, and ensure service to the users is therefore essential. In this thesis, we propose an Adaptive Monte Carlo algorithm to perform global energy-efficient resource allocation for Heterogeneous Cloud Radio Access Network (H-CRAN) architectures, which are forecast as future fifth-generation (5G) networks. We argue that our global proposal offers an efficient solution to resource allocation for both high- and low-density scenarios. Our contributions are threefold: (i) the proposal of a global approach to the radio resource assignment problem in the H-CRAN architecture, whose stochastic character ensures an overall sampling of the solution space; (ii) a critical comparison between our global solution and a local model; (iii) the demonstration that, for high-density scenarios, energy efficiency is not a well-suited metric for efficient allocation, considering data rate capacity, fairness, and served users. Moreover, we compare our proposal against three state-of-the-art resource allocation algorithms for 5G networks.
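To illustrate the kind of global stochastic search such a proposal relies on, here is a minimal sketch of a Monte Carlo allocator for a toy cell/resource-block model: whole allocations are sampled at random and the one with the best network energy efficiency (sum rate over total power) is kept. The rate model, constants and function names are illustrative assumptions, not the thesis's actual H-CRAN formulation or its adaptive refinement.

import random
import math

# Toy H-CRAN-style setting (illustrative values, not from the thesis).
N_CELLS = 8                            # small cells / RRHs
N_RBS = 4                              # resource blocks
POWER_LEVELS = [0.1, 0.5, 1.0, 2.0]    # transmit power candidates (W)
NOISE = 1e-3
CIRCUIT_POWER = 0.5                    # static power per active cell (W)

def energy_efficiency(assignment):
    """assignment[i] = (rb, power) chosen for cell i.
    Rate: Shannon capacity with interference from cells sharing the same RB."""
    total_rate, total_power = 0.0, 0.0
    for i, (rb, p) in enumerate(assignment):
        interference = sum(q for j, (rb_j, q) in enumerate(assignment)
                           if j != i and rb_j == rb)
        sinr = p / (NOISE + interference)
        total_rate += math.log2(1.0 + sinr)
        total_power += p + CIRCUIT_POWER
    return total_rate / total_power

def monte_carlo_allocate(samples=20000, seed=1):
    """Global stochastic search: sample complete allocations, keep the best one."""
    rng = random.Random(seed)
    best, best_ee = None, -1.0
    for _ in range(samples):
        cand = [(rng.randrange(N_RBS), rng.choice(POWER_LEVELS))
                for _ in range(N_CELLS)]
        ee = energy_efficiency(cand)
        if ee > best_ee:
            best, best_ee = cand, ee
    return best, best_ee

if __name__ == "__main__":
    alloc, ee = monte_carlo_allocate()
    print("best energy efficiency (bit/s/Hz per W):", round(ee, 3))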
Netzén, Örn André. "The Efficiency of Financial Markets Part II : A Stochastic Oscillator Approach." Thesis, Umeå universitet, Företagsekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-170753.
Lu, Qing. "Applications of the genetic algorithm optimisation approach in the design of high efficiency microwave class E power amplifiers." Thesis, Northumbria University, 2012. http://nrl.northumbria.ac.uk/13340/.
Sciullo, Luca. "Energy-efficient wireless sensor networks via scheduling algorithm and radio Wake-up technology." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14539/.
Vu, Chinh Trung. "An Energy-Efficient Distributed Algorithm for k-Coverage Problem in Wireless Sensor Networks." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_theses/40.
Parthasarathy, Nikhil Kaushik. "An efficient algorithm for blade loss simulations applied to a high-order rotor dynamics problem." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/189.
Sklavounos, Dimitris C. "Detection of abnormal situations and energy efficiency control in Heating Ventilation and Air Conditioning (HVAC) systems." Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/12843.
Kartal Koc, Elcin. "An Algorithm For The Forward Step Of Adaptive Regression Splines Via Mapping Approach." PhD thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12615012/index.pdf.
Full textand in the second one, the least contributing basis functions to the overall fit are eliminated. In the conventional adaptive spline procedure, knots are selected from a set of distinct data points that makes the forward selection procedure computationally expensive and leads to high local variance. To avoid these drawbacks, it is possible to select the knot points from a subset of data points, which leads to data reduction. In this study, a new method (called S-FMARS) is proposed to select the knot points by using a self organizing map-based approach which transforms the original data points to a lower dimensional space. Thus, less number of knot points is enabled to be evaluated for model building in the forward selection of MARS algorithm. The results obtained from simulated datasets and of six real-world datasets show that the proposed method is time efficient in model construction without degrading the model accuracy and prediction performance. In this study, the proposed approach is implemented to MARS and CMARS methods as an alternative to their forward step to improve them by decreasing their computing time
Holmgren, Faghihi Josef, and Paul Gorgis. "Time efficiency and mistake rates for online learning algorithms : A comparison between Online Gradient Descent and Second Order Perceptron algorithm and their performance on two different data sets." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260087.
This thesis investigates the difference between two online learning algorithms, Online Gradient Descent and the Second-Order Perceptron, and how they perform on different data sets, focusing on mistake rate, time efficiency and the number of updates. Studying different online learning algorithms and how they behave in different environments helps in understanding and developing new strategies for handling further online learning problems. The study includes two data sets, Pima Indians Diabetes and Mushroom, and uses the LIBOL library for testing. The results of this thesis show that Online Gradient Descent performs better overall on the tested data sets. For the first data set, Online Gradient Descent showed a considerably lower mistake rate. For the second data set, OGD showed a slightly higher mistake rate, but at the same time the algorithm was remarkably more time efficient compared to the Second-Order Perceptron. Future work includes broader testing with more, and different, data sets and other related algorithms, which would lead to better results and higher credibility.
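To make the compared update rules concrete, below is a minimal sketch of Online Gradient Descent with hinge loss for online binary classification, in the spirit of the algorithms shipped with LIBOL; the learning rate, the synthetic data and the mistake-counting convention are assumptions for illustration, not the thesis's experimental setup.

import numpy as np

def ogd_hinge(stream, dim, eta=0.1):
    """Online Gradient Descent with hinge loss for online binary classification.
    stream yields (x, y) with y in {-1, +1}; returns weights and mistake count."""
    w = np.zeros(dim)
    mistakes = 0
    for x, y in stream:
        margin = y * np.dot(w, x)
        if margin <= 0:
            mistakes += 1        # the current model misclassifies this example
        if margin < 1:           # hinge loss is active: take a subgradient step
            w += eta * y * x
    return w, mistakes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = rng.normal(size=5)
    data = [(x, 1 if x @ true_w > 0 else -1)
            for x in rng.normal(size=(1000, 5))]
    w, m = ogd_hinge(data, dim=5)
    print("mistakes on the stream:", m)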
Dobson, William Keith. "Method for Improving the Efficiency of Image Super-Resolution Algorithms Based on Kalman Filters." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/math_theses/82.
Full textGendre, Victor Hugues. "Predicting short term exchange rates with Bayesian autoregressive state space models: an investigation of the Metropolis Hastings algorithm forecasting efficiency." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1437399395.
Full textRamarathinam, Venkatesh. "A control layer algorithm for ad hoc networks in support of urban search and rescue (USAR) applications." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000604.
Full textUsman, Modibo. "The Effect of the Implementation of a Swarm Intelligence Algorithm on the Efficiency of the Cosmos Open Source Managed Operating System." Thesis, Northcentral University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10810882.
Full textAs the complexity of mankind’s day-to-day challenges increase, so does a need for the optimization of know solutions to accommodate for this increase in complexity. Today’s computer systems use the Input, Processing, and Output (IPO) model as a way to deliver efficiency and optimization in human activities. Since the relative quality of an output utility derived from an IPO based computer system is closely coupled to the quality of its input media, the measure of the Optimal Quotient (OQ) is the ratio of the input to output which is 1:1. This relationship ensures that all IPO based computers are not just linearly predictable, but also characterized by the Garbage In Garbage Out (GIGO) design concept. While current IPO based computer systems have been relatively successful at delivering some measure of optimization, there is a need to examine (Li & Malik, 2016) alternative methods of achieving optimization. The purpose of this quantitative research study, through an experimental research design, is to determine the effects of the application of a Swarm Intelligence algorithm on the efficiency of the Cosmos Open Source Managed Operating System.
By incorporating swarm intelligence into an improved IPO design, this research addresses the need for optimization in computer systems through the creation of an improved operating system Scheduler. The design of a Swarm Intelligence Operating System (SIOS) is an attempt to solve some inherent vulnerabilities and problems of complexity and optimization otherwise unresolved in the design of conventional operating systems. This research will use the Cosmos open source operating system as a test harness to ensure improved internal validity while the subsequent measurement between the conventional and improved IPO designs will demonstrate external validity to real world applications.
Vasudevan, Meera. "Profile-based application management for green data centres." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/98294/1/Meera_Vasudevan_Thesis.pdf.
Zhang, Ying. "Bayesian D-Optimal Design for Generalized Linear Models." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/30147.
Full textPh. D.
Plociennik, Kai. "From Worst-Case to Average-Case Efficiency – Approximating Combinatorial Optimization Problems." Doctoral thesis, Universitätsbibliothek Chemnitz, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-65314.
Negrea, Andrei Liviu. "Optimization of energy efficiency for residential buildings by using artificial intelligence." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI090.
Consumption, in general, is the process of using a type of resource, and it is where savings need to be made. Energy consumption has become one of the main issues of urbanization and the energy crisis, as fossil fuel depletion and global warming threaten the planet's energy use. In this thesis, an automatic control of energy was developed to reduce energy consumption in residential buildings and passive houses. A mathematical model founded on empirical measurements was developed to capture the behavior of a testing laboratory at Universitatea Politehnica din București - Université Politechnica de Bucarest - Roumanie. The experimental protocol was carried out with actions such as building a parameter database, collecting weather data, and recording auxiliary flows while considering the controlling factors. The control algorithm governs the system, which can maintain a comfortable temperature within the building with minimum energy consumption. Measurements and data acquisition have been set up on two different levels: weather data and building data. The collected data are gathered on a server installed in the testing facility, running a complex algorithm which can control energy consumption. The thesis reports several numerical methods for estimating the energy consumption that are further used with the control algorithm. An experimental showcase based on dynamic calculation methods for building energy performance assessment was carried out in Granada, Spain, and this information was later used in the thesis. Estimation of model parameters (resistances and capacities) with prediction of heat flow was made using the nodal method, based on physical elements, input data and weather information. Prediction of energy consumption using state-space modeling shows improved results, while IoT data collection was uploaded to a Raspberry Pi system. All these results were stable, showing impressive progress in the prediction of energy consumption and its application in the energy field.
Bizkevelci, Erdal. "A Control Algorithm To Minimize Torque Ripple And Acoustic Noise Of Switched Reluctance Motors." Phd thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12609866/index.pdf.
Full textHassan, Aakash. "Improving the efficiency, power quality, and cost-effectiveness of solar PV systems using intelligent techniques." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2023. https://ro.ecu.edu.au/theses/2676.
Full textVu, Chinh Trung. "Distributed Energy-Efficient Solutions for Area Coverage Problems in Wireless Sensor Networks." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/cs_diss/37.
Full textCosta, Luis Herinque MagalhÃes. "UTILIZAÃÃO DE UM ALGORITMO GENÃTICO HÃBRIDO NA OPERAÃÃO DE SISTEMAS DE ABASTECIMENTO DE ÃGUA COM ÃNFASE NA EFICIÃNCIA ENERGÃTICA." Universidade Federal do CearÃ, 2010. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=4756.
COSTA, L.H.M. Use of a hybrid genetic algorithm in the operation of water supply systems considering energy efficiency. Fortaleza, 2010. 146 p. Thesis (Doctorate) - Federal University of Ceará, Fortaleza, 2010. In general, operational rules applied to water distribution systems are created to assure continuity of the public water supply, without taking into account variations of energy costs during the day. This increases the energy costs due to the pumps. Besides the rational use of energy by the pumps, there are other aspects which should be considered in order to achieve an optimized operation of a water transmission system, such as the daily variation of the water demand and the requirements regarding minimum and maximum water levels in the tanks and pressure requirements at the nodes of the water network. The objective of the present work is to develop a computer code which determines an optimized operation rule for the system that reaches minimum costs of energy used by the pumps. The system is based on the use of Genetic Algorithms (GA) and the hydraulic network simulator EPANET. The GA part of the system is responsible for the search for rules of low energy cost, and the hydraulic calculations are done by EPANET. Besides, one major innovation proposed by this research is the introduction of a Hybrid Genetic Algorithm in order to reduce the stochastic aspect of the standard GA. The proposed methodology was applied to three study cases: two hypothetical and one real, located in the city of Ourém, Portugal. The results of these three study cases clearly show the superiority of the hybrid GA over the standard GA. The hybrid GA not only obtained better solutions but also took much less time to run. Finally, it is expected that the use of this methodology will lead to more real-time applications.
Wang, Yi-Ning (王翊寧). "Bandwidth-Efficient Fast Algorithm for High Efficiency Video Coding." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/bdp32g.
National Kaohsiung First University of Science and Technology
Master's Program, Department of Computer and Communication Engineering
106
Thanks to today's fast-developing technology, 4G/LTE mobile telecommunication is popularized worldwide, which has created a rapidly growing new-media-related industry. With higher requirements for good quality and high resolution of video, the bandwidth and the amount of compressed coding data for transmitting video have to be expanded. In order to keep high video performance under efficient data compression, more complicated mathematical calculation is a must. In the newest HEVC standard, the CU is quite diversified in order to match different resolution requirements as well as to support higher resolutions. Since the bandwidth for audio and video on mobile internet devices is limited, our major target is to address the bandwidth problem of high-resolution video, that is, to narrow the bandwidth. This thesis puts forward a Bandwidth-Rate-Distortion Optimization (BRDO) algorithm, which is based on Rate-Distortion Optimization. The algorithm distributes bandwidth and search area according to the Rate-Distortion Cost (RDCost). It not only lowers bandwidth usage but also maintains quality and rate. On average, more than 56% of bandwidth usage is saved and encoding time decreases by more than 60%. The hardware architecture was implemented using Synopsys tools (Verilog, Verdi, Design Compiler, Synthesis, PrimeTime®, PrimePower®) and a cell library (TSMC 90nm CLN90G). The speed of our design was 1.1GHz under the worst-case simulation, and the power consumption was 0.873mW.
Chi, Haohsien (紀浩仙). "A Loading-Balance Algorithm for Improving Efficiency of CORBA." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/01950800360546613013.
National Chiao Tung University
Institute of Information Management
92
In traditional distributed systems, the most important characteristic is load balancing. In CORBA, an architecture proposed by the OMG, this characteristic is also pointed out. In many published ORBs, vendors use additional agents to handle load balancing. But we found there is a problem with this solution: if this agent fails, the whole system will not work. So we propose a simplified model in which load balancing is implemented on the client's side. That is, if the client stays alive and not all service providers fail, the whole system will still retain this characteristic. Through simulations and experiments, we repeatedly test the proposed model and compare it with VisiBroker, a published ORB, hoping to answer the following questions: 1. In the proposed model, will the system's performance be improved? This question can be divided into two parts: one is the system's loading status, and the other is the system's response time (efficiency). 2. Compared to published CORBA software, will this model become more complicated to program? That is, this model is implemented at the application layer, while some software embeds it in the underlying architecture; what is the difference when coding or programming? To answer the above questions, we implemented this idea and used the results of many experiments to obtain the answers.
Lin, Jia-Zhi (林佳志). "Improving Clustering Efficiency by SimHash-based K-Means Algorithm." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/nv495x.
National Taipei University of Technology
Graduate Institute of Computer Science and Information Engineering
102
K-Means is one of the popular methods for clustering, but it needs a lot of processing time for similarity calculation, which causes lower performance. Some studies proposed new methods for finding better initial centroids to provide an efficient way of assigning the data points to suitable clusters with reduced time complexity. However, with a large amount of data, the vector dimension becomes higher and more time is needed for similarity calculation. In this paper, we propose a SimHash-based K-Means algorithm that uses dimensionality reduction and Hamming distance to handle large amounts of data. The experimental results showed that our proposed method can improve efficiency without significantly affecting effectiveness.
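A minimal sketch of the general idea described above, assuming a random-hyperplane SimHash: each high-dimensional vector is reduced to a short binary signature, and a K-Means-style loop then assigns points by Hamming distance and updates centroids by a per-bit majority vote. The signature length, centroid update and toy data are assumptions, not the thesis's exact design.

import numpy as np

def simhash_signatures(X, n_bits=64, seed=0):
    """Random-hyperplane SimHash: one sign bit per hyperplane for every vector."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_bits, X.shape[1]))
    return (X @ planes.T > 0).astype(np.uint8)        # shape (n_samples, n_bits)

def hamming(a, b):
    return np.count_nonzero(a != b)

def simhash_kmeans(X, k, n_bits=64, iters=20, seed=0):
    """K-Means in signature space: Hamming distance instead of Euclidean."""
    sigs = simhash_signatures(X, n_bits, seed)
    rng = np.random.default_rng(seed)
    centroids = sigs[rng.choice(len(sigs), k, replace=False)]
    labels = np.zeros(len(sigs), dtype=int)
    for _ in range(iters):
        labels = np.array([min(range(k), key=lambda c: hamming(s, centroids[c]))
                           for s in sigs])
        for c in range(k):                             # per-bit majority vote
            members = sigs[labels == c]
            if len(members):
                centroids[c] = (members.mean(axis=0) >= 0.5).astype(np.uint8)
    return labels

if __name__ == "__main__":
    X = np.vstack([np.random.normal(0, 1, (50, 200)),
                   np.random.normal(5, 1, (50, 200))])
    print(simhash_kmeans(X, k=2)[:10], "...")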
Liu, Yu-Chu (劉又齊). "A Study of Information Hiding and Its Efficiency Algorithm." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/52702980666594969752.
National Taichung Institute of Technology
Graduate Institute of Information Technology and Applications
95
Recently, protecting the intellectual property rights of digitized information has become a serious challenge. For this reason, related information hiding technologies are becoming more and more important. In accordance with different requirements, three different schemes are proposed in this thesis. The first scheme presents a new block-based authentication watermarking for verifying the integrity of binary images. The original protected image is partitioned into individual blocks. For each block, a hash message is obtained by a hashing function. An exclusive-or operation is performed on the hash message and the watermark values, and thus the authentication information is embedded into the protected image. If a binary image is tampered with by random modification or a counterfeiting attack, the proposed technique can detect which locations have been altered. In many data hiding techniques, simple least-significant-bit (LSB) substitution is the general scheme used to embed a secret message in the cover image. This practice may harm the quality of the host image, which increases the probability that malicious users will notice the existence of something within the stego-image. As a result, the optimal LSB substitution method was proposed to improve the quality of the image, but the optimal LSB substitution solution is not easy to find. Therefore, the second scheme proposes an efficient algorithm as an attempt to solve the above problem. In the second proposed scheme, the optimal LSB substitution problem is regarded as a general assignment problem, and then the Hungarian algorithm is used to find the actual optimal LSB substitution solution. The proposed scheme also does not need a great deal of memory space. The third scheme proposes an effective reversible steganographic technique. The main concept is to utilize the similarity of neighboring pixels. In the proposed scheme, the cover image is divided into non-overlapping groups of neighboring pixels. An error value is then computed for each group, from which a complete error table can be derived. Then, the frequency of each error value is summed up, allowing for the construction of the error histogram. Finally, the histogram shift scheme is used to hide data. The experimental results show that by using the proposed scheme, the payload size and cover image quality are both obviously better than with the original histogram shift scheme.
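A minimal sketch of the second scheme's core idea as stated, treating the choice of an optimal LSB substitution mapping as an assignment problem; scipy's linear_sum_assignment stands in for the Hungarian algorithm, and the squared-difference distortion model and 2-bit group size are simplifying assumptions rather than the thesis's exact formulation.

import numpy as np
from scipy.optimize import linear_sum_assignment

def optimal_lsb_mapping(cover_groups, secret_digits, base=4):
    """Pose optimal LSB substitution as an assignment problem.

    cover_groups[i]  : original k-bit LSB value of pixel i (0..base-1)
    secret_digits[i] : secret value to embed in pixel i (0..base-1)
    Returns a bijective mapping secret value -> embedded value that minimises
    the total squared embedding distortion."""
    cost = np.zeros((base, base))
    for orig, s in zip(cover_groups, secret_digits):
        for v in range(base):
            cost[s, v] += (orig - v) ** 2
    rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
    return {int(r): int(c) for r, c in zip(rows, cols)}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cover = rng.integers(0, 4, 10000)          # 2-LSB values of the cover pixels
    secret = rng.integers(0, 4, 10000)
    print(optimal_lsb_mapping(cover, secret))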
Chen, Chi-Sheng (陳智聖). "The Algorithm of Constant Efficiency Tracking for Fast Charging." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/28010529108535118376.
National Chiao Tung University
Department of Electrical and Control Engineering
98
With the growth of portable electronic devices, lithium batteries play an important role in power management systems. In order to maximize the performance of lithium batteries, high charging efficiency and short charging time are required. Today, the main charging method for lithium batteries is the constant current-constant voltage (CC-CV) method, but it cannot meet the requirement of fast charging. This thesis presents a fast charging method which improves charging speed at the cost of a minimal loss in charging efficiency. First, we investigate the relationship between battery equivalent models and charging efficiency; then we control the charging efficiency to obtain the optimum charging current. Using the proposed algorithm, the charging time improves by 12.4%, while the charging efficiency decreases by only 0.73%.
Chen, Ting-An (陳亭安). "Applying Advanced Operators to Improve the Efficiency of Genetic Algorithm." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/98963780385941744746.
Tamkang University
Department of Electrical Engineering
87
The Genetic Algorithm is a very important and effective optimizer because of its global searching capability, and in recent years Genetic Algorithms have been applied to various problems in many disciplines. In general, the searching result does not depend on the initial guess since a GA searches multiple points simultaneously: three operators (selection, crossover and mutation) are applied to a randomly generated initial population consisting of many individuals to achieve the goal of survival of the fittest. However, the price paid for the multiple-point searching scheme is increased computation time. Hence, various techniques are continuously proposed to improve computational efficiency, which is quite important for GAs. In this thesis, non-uniform probability density functions are employed in the crossover and mutation operators of the GA during the course of searching to improve computational efficiency. The capability of escaping from local optima is improved such that the global optimum can be reached more easily, and the convergence speed is also raised. Consider the fact that the parameters are encoded during optimization with a GA. After encoding, the leftmost bit is the most significant bit (MSB), while the rightmost bit is the least significant bit (LSB). The correctness of the bits near the MSB determines the correctness of the parameters, while the correctness of the bits near the LSB only determines the precision of the parameters. On the other hand, changes of the bits near the MSB imply a large-range search in parameter space, while changes of the bits near the LSB imply a small-range search. In the crossover and mutation operators of a classical GA, this weighting difference between bits is not recognized or exploited; that is, the probability of crossover and mutation is the same for each bit. In this thesis, non-uniform probability density functions are introduced for the crossover and mutation operators. One objective is to enhance the crossover and mutation probability for the bits near the MSB region when the best individual of the current generation is still far from the global optimum region, which increases the GA's capability of escaping from local optima. The other objective is to enhance the crossover and mutation probability for the bits near the LSB region when the best individual of the current generation is near the global optimum region, which increases the convergence speed. To achieve these objectives, mechanisms are required to suitably shift the probability density functions. Therefore, two mechanisms, called the Cyclical GA and the Adaptive GA, are proposed in this thesis, and their efficiency improvements are examined. We found that both GAs work for different testing functions, including those that are hard to converge for a classical GA.
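A minimal sketch of the bit-position-weighted idea, assuming a simple linear weighting: mutation probability is concentrated near the MSB while the search is still far from the optimum (coarse, large jumps) and shifted toward the LSB as it converges (fine tuning). The progress measure and probability shapes are illustrative, not the Cyclical or Adaptive schedules proposed in the thesis.

import random

def bit_mutation_probs(n_bits, progress, base_rate=0.05):
    """Non-uniform per-bit mutation probabilities.
    progress in [0, 1]: 0 = far from the optimum (favour the MSB),
    1 = close to it (favour the LSB). The average probability stays at base_rate."""
    weights = []
    for i in range(n_bits):                 # i = 0 is the MSB, n_bits - 1 the LSB
        pos = i / (n_bits - 1)
        weights.append((1.0 - progress) * (1.0 - pos) + progress * pos)
    total = sum(weights)
    return [base_rate * n_bits * w / total for w in weights]

def mutate(chromosome, progress):
    """Flip each bit with its position-dependent probability."""
    probs = bit_mutation_probs(len(chromosome), progress)
    return [b ^ 1 if random.random() < p else b
            for b, p in zip(chromosome, probs)]

if __name__ == "__main__":
    print("early (MSB-biased):", [round(p, 3) for p in bit_mutation_probs(8, 0.1)])
    print("late  (LSB-biased):", [round(p, 3) for p in bit_mutation_probs(8, 0.9)])
    random.seed(1)
    print("mutated chromosome:", mutate([0] * 8, progress=0.5))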
Lin, Wen-Bin (林文斌). "A study for improving the efficiency of Frank-Wolfe algorithm." Thesis, 1992. http://ndltd.ncl.edu.tw/handle/35315928245004791342.
National Chiao Tung University
Institute of Civil Engineering
80
The Frank-Wolfe algorithm is one of the methods for solving convex nonlinear programming problems, and it is also the method generally used to solve the equilibrium assignment problem in traffic networks. The main drawback of this algorithm is its slow convergence. To address this weakness, Fukushima (1984), LeBlanc (1985), and Weintraub (1985) have modified the algorithm; this study argues that there is still considerable room for improvement and therefore pursues further enhancements. Under stricter convergence requirements, this study improves the computational efficiency of the Frank-Wolfe algorithm in two ways: (1) a complete analysis of Fukushima's method to find more suitable strategies; (2) combining the different improvements of Weintraub and Fukushima. Finally, computer tests on network traffic assignment problems are used to demonstrate the computational efficiency gained by the improvement strategies proposed in this study.
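For reference, a minimal sketch of the plain Frank-Wolfe iteration on a simple convex problem over the probability simplex; the modifications by Fukushima and Weintraub discussed above change the direction-finding and step-size rules and are not reproduced here.

import numpy as np

def frank_wolfe_simplex(grad, x0, n_iter=200):
    """Frank-Wolfe for min f(x) over the probability simplex.
    Each iteration solves a linear subproblem (its optimum is a simplex vertex),
    then moves toward that vertex with the classic 2/(k+2) step size."""
    x = x0.copy()
    for k in range(n_iter):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # linear minimiser over the simplex
        gamma = 2.0 / (k + 2.0)        # diminishing step size
        x = (1 - gamma) * x + gamma * s
    return x

if __name__ == "__main__":
    # f(x) = 0.5 * ||x - t||^2 with a target t that lies on the simplex
    t = np.array([0.7, 0.2, 0.1, 0.0])
    x0 = np.full(4, 0.25)
    print(frank_wolfe_simplex(lambda x: x - t, x0).round(3))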
林詩凱. "Improving AODV Route Protocol Efficiency with Compromised Route Selection Algorithm." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/79852668330947717872.
National Taiwan Normal University
Graduate Institute of Mechatronic Technology
96
Mobile ad hoc networks (MANETs) are formed by a group of wireless devices (nodes) that can move quickly and have no centralized management mechanism. Communication between mobile nodes is accomplished by nearby mobile hosts interchanging messages. Given limited resources such as network bandwidth, memory capacity, and battery power, the efficiency of the routing scheme in ad hoc networks becomes more important and challenging. In mobile ad hoc networks, most nodes are mobile and the routing path may be changed or disrupted quite often due to the movement of hosts on the path. Therefore, finding a reliable routing path is an important issue. In this thesis, a signal strength coefficient, a node power coefficient and a busy-condition coefficient are calculated during route selection, when a request packet is used to establish a transmission path. A route selection value computed from these three coefficients is used to choose a stable routing path and a backup path. This can reduce the break time and latency, and increase the packet arrival rate. The method developed in this thesis can choose a path with high stability and reduce the overall network load. The simulation results show that overall network performance is improved.
Cosgaya, Lozano Adan Jose. "Engineering Algorithms for Solving Geometric and Graph Problems on Large Data Sets." 2011. http://hdl.handle.net/10222/13324.
Chen, Cheng-Hao (陳正浩). "A Fast CU Size Decision Algorithm for High Efficiency Video Coding." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/27k54a.
National Cheng Kung University
Department of Electrical Engineering
103
High Efficiency Video Coding (HEVC) is the newest video coding standard. It provides better compression performance than the existing standards. HEVC adopts a quad-tree structure which allows recursive splitting into four equally sized nodes, starting from the Coding Tree Unit (CTU). The quad-tree structure yields better compression efficiency, but it requires higher computational complexity. In order to reduce the computational complexity, we propose a fast CU size decision algorithm. The proposed algorithm consists of an adaptive depth range and an early pruning test. First, we use an adaptive depth range instead of a fixed depth range for CTU encoding. Then, for each CU, the early pruning test is performed at each depth level according to Bayes' rule based on the full RD costs. Compared with the HEVC test model 12.0 (HM 12.0), experimental results show the proposed method achieves a reduction of encoding time by 60.11%, an increase of bitrate by 2.4%, and a 0.1 dB Y-PSNR loss, on average.
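A minimal sketch of a Bayes-rule pruning test of the kind described: the full RD costs of CUs that were eventually split and not split are modelled as two Gaussians, and splitting stops when the posterior favours "no split". The statistics, prior and threshold handling are illustrative assumptions, not code from HM 12.0 or the thesis.

import math

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def prune_split(rd_cost, stats, prior_split=0.5):
    """Bayes-rule early pruning: stop splitting the CU when
    P(no-split | rd_cost) >= P(split | rd_cost).
    stats holds per-class (mean, variance) of full RD costs gathered online."""
    post_split = gaussian_pdf(rd_cost, *stats["split"]) * prior_split
    post_keep = gaussian_pdf(rd_cost, *stats["no_split"]) * (1.0 - prior_split)
    return post_keep >= post_split          # True -> prune (skip the sub-CUs)

if __name__ == "__main__":
    # Illustrative statistics: CUs that end up split tend to have larger RD costs.
    stats = {"split": (9000.0, 4.0e6), "no_split": (3000.0, 1.5e6)}
    for cost in (1500.0, 5500.0, 12000.0):
        print(cost, "-> prune" if prune_split(cost, stats) else "-> keep splitting")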
Wu, Sheng-Yi (吳昇益). "Using modified Dijkstra's algorithm to improve the movement efficiency of robocar." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/97337776645458917800.
National Yang-Ming University
Institute of Biomedical Engineering
101
In recent years, telehealthcare has become very popular. Because telehealthcare can keep a watchful eye on information about patients or elderly people, and can be handled anytime, anywhere and by any device, it becomes an ongoing nursing activity. Based on this concept, we built an indoor positioning system using RFID Cartesian grids, which can guide a robocar to move to a designated location in order to observe the patient's circumstances. In this setting, many factors determine whether the destination can be reached, such as location awareness, path finding and path conditions. In this study, we first introduce passive RFID tags to act as landmarks for location awareness. These landmarks can not only be read by the robocar to determine its present location, but also reduce the computing time of the path-finding process. For path planning, we propose an improved graph-based algorithm that takes obstacle avoidance and fewer veers into consideration, to generate an efficient path for navigation. We tested the efficiency of different path-finding algorithms on the designated map, including Dijkstra's algorithm, a collision-free algorithm (CFA) based on Dijkstra, and our proposed method. Comparing Dijkstra's algorithm with the CFA approach, Dijkstra's algorithm could find the shortest path but collisions easily occurred; although the CFA approach increases the distance by 3%, it ensures a collision-free condition. On the other hand, comparing our proposed approach with the CFA approach, our method increases the cruising distance relative to CFA because it does not follow the shortest path. However, the aim of adopting veering angles is to amend the weighting scheme to fit the conditions of mobile robocar cruising, and our results proved that the ideal shortest path does not give the minimum time to reach destinations in a practical environment.
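A minimal sketch of a Dijkstra variant that penalises veering, the kind of modification described above: the search state includes the heading, and every change of direction adds a turn cost so that paths with fewer veers are preferred. The grid, the costs and the obstacle handling are placeholders, not the thesis's map or weighting.

import heapq

def dijkstra_few_turns(grid, start, goal, turn_cost=2.0):
    """Shortest path on a 4-connected grid with an extra cost per direction change.
    grid[r][c] == 1 marks an obstacle. Search state = (cell, heading)."""
    rows, cols = len(grid), len(grid[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    pq = [(0.0, start, None)]                 # (cost, cell, heading)
    best = {}
    while pq:
        cost, cell, heading = heapq.heappop(pq)
        if cell == goal:
            return cost
        if best.get((cell, heading), float("inf")) <= cost:
            continue
        best[(cell, heading)] = cost
        for d in moves:
            r, c = cell[0] + d[0], cell[1] + d[1]
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                step = 1.0 + (turn_cost if heading is not None and d != heading else 0.0)
                heapq.heappush(pq, (cost + step, (r, c), d))
    return float("inf")

if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [0, 1, 1, 0],
            [0, 0, 0, 0]]
    print(dijkstra_few_turns(grid, (0, 0), (2, 3)))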
Chiu, Yi-Chun (邱意淳). "High-Efficiency Prony-Based Algorithm for Time-Varying Power Signal Estimation." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/52473577196075665060.
National Changhua University of Education
Department of Mechatronics Engineering
99
With the widespread use of nonlinear loads in the power system, harmonic distortion causes serious pollution of power quality. Besides, the power unbalance between generation and load demand makes the fundamental frequency vary with time. These disturbances may introduce operational problems in power system equipment. Therefore, improving power quality has become a great concern for both utilities and customers. Frequency-domain methods have been widely used for signal processing because of their computational efficiency. In addition, most power meters adopt an FFT-based algorithm to analyze the harmonics and to show the frequency spectra. However, the FFT-based algorithm is less accurate if the system frequency varies and the frequency resolution decreases; the analysis results will show errors caused by the leakage and picket-fence effects. Therefore, how to achieve both high resolution and efficiency is worth investigating. According to the aforementioned facts, this thesis proposes an improved Prony-based algorithm for harmonic and interharmonic measurement. Not only is the calculation time reduced, but the results are also more accurate, even if the power signals contain frequency variations and non-integer harmonic components. Finally, the thesis applies LabVIEW and dedicated hardware to design a simple setup for measuring power quality signals. The performance of the improved algorithm is validated by testing synthesized and actual signals. Key Words: Harmonics, System Frequency Variation, Fast Fourier Transform, Prony's Method, LabVIEW
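A minimal sketch of the classical Prony procedure that such an algorithm builds on: fit a linear-prediction model to the samples, take the roots of the prediction polynomial to obtain the (possibly inter-harmonic) frequencies and damping, then solve a least-squares problem for amplitudes and phases. The model order and sampling settings below are assumptions for illustration, not the thesis's improved algorithm.

import numpy as np

def prony(x, order, fs):
    """Classical Prony analysis of N samples x taken at sample rate fs.
    Returns per-mode frequency (Hz), damping, amplitude and phase."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # 1) Linear prediction: x[n] = -(a1*x[n-1] + ... + ap*x[n-p])
    A = np.column_stack([x[order - 1 - k: N - 1 - k] for k in range(order)])
    a, *_ = np.linalg.lstsq(A, -x[order:N], rcond=None)
    # 2) Roots of the prediction polynomial give the complex modes z_i
    z = np.roots(np.concatenate(([1.0], a)))
    freq = np.angle(z) * fs / (2.0 * np.pi)
    damp = np.log(np.abs(z)) * fs
    # 3) Least-squares fit of complex amplitudes on the Vandermonde matrix
    V = np.vander(z, N, increasing=True).T          # V[n, i] = z_i**n
    h, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return freq, damp, np.abs(h), np.angle(h)

if __name__ == "__main__":
    fs = 3200.0
    t = np.arange(0, 0.2, 1.0 / fs)
    # Off-nominal fundamental plus a small interharmonic component.
    x = 1.0 * np.cos(2 * np.pi * 50.2 * t) + 0.1 * np.cos(2 * np.pi * 182.0 * t)
    f, d, amp, ph = prony(x, order=4, fs=fs)
    keep = f > 0                                    # real signal: keep positive-frequency modes
    print(np.round(f[keep], 1), np.round(2 * amp[keep], 3))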
Li, Yu-Lin (李育霖). "Adaptive Traffic Indication Algorithm for Energy Efficiency in IEEE 802.16e Systems." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/84550158708172448917.
Chang Gung University
Graduate Institute of Computer Science and Information Engineering
96
The efficiency of the power saving mechanism in wireless communications influences how long a mobile subscriber station (MSS) can operate. Due to the centralized control in WiMAX systems, the sleeping period of each subscriber is determined by the base station (BS) based on the service types, traffic loads, and expected sleeping periods. The power saving mechanism uses an exponential backoff sleeping window to determine the sleeping period of each MSS. In recent research, some approaches optimize the sleeping period by estimating the packet inter-arrival time to improve energy efficiency. However, those mechanisms do not reflect the relationship between traffic load and available bandwidth. That is, depending on the available bandwidth and the priorities of connections, lower-priority connections cannot receive data immediately and waste energy while waiting. Thus, in this paper, we propose an adaptive traffic indication algorithm (ATIA) that lets an MSS extend its sleep when bandwidth is unavailable, and we present an adaptively adjusted sleeping window scheme trading delay against energy consumption. Simulation results show that ATIA increases the degree of power saving in comparison with IEEE 802.16e; furthermore, they show that ATIA can be combined with other power saving mechanisms and still achieve good performance.
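A minimal sketch of the binary-exponential sleep window that the standard power saving behaviour uses and that an adaptive scheme would modify: the sleep interval doubles after every negative traffic indication up to a maximum and resets when traffic (or, in the ATIA spirit, usable bandwidth) is indicated. The window bounds are illustrative assumptions.

def sleep_intervals(traffic_indications, t_min=2, t_max=64):
    """Generate successive sleep-window lengths (in frames).
    traffic_indications[i] is True when the BS signals buffered data
    (or, per the ATIA idea, when bandwidth is actually available)."""
    window = t_min
    schedule = []
    for has_traffic in traffic_indications:
        schedule.append(window)
        if has_traffic:
            window = t_min                   # wake up and restart the window
        else:
            window = min(window * 2, t_max)  # keep sleeping, back off exponentially
    return schedule

if __name__ == "__main__":
    # Quiet period, then traffic arrives, then quiet again.
    indications = [False, False, False, False, True, False, False]
    print(sleep_intervals(indications))      # [2, 4, 8, 16, 32, 2, 4]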
Lin, Li-Jyun (林豊鈞). "Energy-Efficiency Scheduling Algorithm for Multiframe Real-Time Tasks in DVS Processor." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/42524271524671748259.
National University of Kaohsiung
Master's Program, Department of Computer Science and Information Engineering
100
Embedded systems with video decoders have become a new trend due to mobile multimedia applications and the consumer electronic products required in daily life. For considerations of low cost and high efficiency when an embedded system plays MPEG video, users require a proper quality of service. However, the amount of encoded data in each frame affects the processing time. If the maximum execution times of tasks are used for the schedulability test, the quality of service of the system can be guaranteed, but this results in higher energy consumption. Reducing total energy consumption is therefore an important issue. In this thesis, we propose an EDF-based real-time scheduling algorithm that considers energy consumption for the multiframe task model. A simulation model is built to investigate the performance of the proposed approach, whose capability is evaluated by a series of simulations with encouraging results.
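A minimal sketch of the standard reasoning such an approach builds on: under EDF a periodic task set is schedulable when total utilisation is at most 1, so a DVS processor can be slowed roughly to the utilisation computed from the multiframe execution times. The frame handling below is a simplification for illustration, not the thesis's algorithm.

def multiframe_utilisation(tasks, use_max=False):
    """tasks: list of (frames, period), where frames lists the per-frame WCETs of
    a multiframe task. Uses the average frame demand unless use_max is set."""
    u = 0.0
    for frames, period in tasks:
        c = max(frames) if use_max else sum(frames) / len(frames)
        u += c / period
    return u

def dvs_speed(tasks, f_max=1.0):
    """Lowest normalised processor speed that keeps the EDF utilisation <= 1."""
    u = multiframe_utilisation(tasks)
    return u * f_max if u <= 1.0 else None

if __name__ == "__main__":
    # An MPEG-like decoder whose frames (I, P, B, B) need different execution times,
    # plus a simple periodic task.
    tasks = [([8.0, 4.0, 2.0, 2.0], 40.0), ([3.0], 20.0)]
    print("average-based utilisation:", multiframe_utilisation(tasks))
    print("worst-case utilisation   :", multiframe_utilisation(tasks, use_max=True))
    print("suggested DVS speed      :", dvs_speed(tasks))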
Fang, Han-Chiou (方瀚萩). "Fast Intra Prediction Algorithm and Design for High Efficiency Video Coding." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/46070691957728794785.
National Chiao Tung University
Department of Electronics Engineering and Institute of Electronics
103
Compared to the previous video standard H.264, High Efficiency Video Coding (HEVC) has significantly higher computational complexity because of more PU size types and more intra prediction modes. To meet real-time encoding demands, this thesis proposes a fast intra prediction algorithm and its design. The fast algorithm can be divided into two parts. The first part is a fast intra prediction unit (PU) size selection: a gradient-weight-controlled block size selection reduces the candidate PU sizes to two, and these two PU sizes are further reduced to one based on the SATD distribution. The required intra prediction modes are reduced by almost half by a simple three-step algorithm. The simulation results show that the proposed algorithm can save 79% of encoding time on average for the all-intra main case compared to the default encoding scheme in HM-9.0rc1, with a 3.9% BD-rate increase. With TSMC 90 nm CMOS technology and a 270 MHz operating frequency, the gate count of this work is about 224.608K and the memory usage is 1.762 Kbytes to support 4k×2k 30 fps video encoding.
Yao, Chiao-Yin (姚喬尹). "Improving the Efficiency of the Apriori Algorithm for Mining Association Rules." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/65311017636604670428.
Southern Taiwan University of Science and Technology
Department of Information Management
98
With the development of information technology, enterprises have many ways to gather information and can store large numbers of transactions and records in databases. Finding useful information in these databases has become a subject that enterprises pay close attention to, and association rule mining is one of the most common data mining techniques for this purpose. With the development of Internet technology and the globalization of business, enterprise transaction databases change constantly. To keep mining results accurate in a dynamic database, traditional mining methods must unavoidably re-mine the data again and again; they generate too many redundant candidate itemsets, which causes too many database scans, and they scan redundant transaction data because they cannot recognize which transactions the items belong to. These are the weaknesses of the traditional Apriori algorithm when mining association rules in a dynamic database. This research is based on the Apriori algorithm and improves its process. The proposed algorithm transforms the database from a horizontal to a vertical layout, which avoids scanning redundant transaction data: the support of any itemset can be counted by scanning only two records of the transformed database, which increases mining efficiency. The candidate itemset generation process of Apriori is also improved, which avoids generating too many candidate itemsets and further increases mining efficiency. Appropriate update methods are also proposed so that the algorithm can be used on a dynamic database in real time and correctly, to fit business needs and provide immediate and accurate support for important decision-making.
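A minimal sketch of the vertical-layout idea described above (often called a tidset or Eclat-style representation): each item maps to the set of transaction IDs containing it, and the support of any candidate itemset is obtained by intersecting tidsets instead of rescanning the whole database. Candidate generation here is a plain Apriori-style join; the update methods for dynamic databases are not sketched.

from itertools import combinations

def vertical_format(transactions):
    """Transform a horizontal database into item -> set of transaction IDs."""
    tids = {}
    for tid, items in enumerate(transactions):
        for item in items:
            tids.setdefault(item, set()).add(tid)
    return tids

def frequent_itemsets(transactions, min_support):
    tids = vertical_format(transactions)
    # Frequent 1-itemsets
    current = {frozenset([i]): t for i, t in tids.items() if len(t) >= min_support}
    result = dict(current)
    k = 2
    while current:
        candidates = {}
        keys = list(current)
        for a, b in combinations(keys, 2):
            union = a | b
            if len(union) == k and union not in candidates:
                support_set = current[a] & current[b]   # tidset intersection
                if len(support_set) >= min_support:
                    candidates[union] = support_set
        result.update(candidates)
        current, k = candidates, k + 1
    return {tuple(sorted(s)): len(t) for s, t in result.items()}

if __name__ == "__main__":
    db = [("bread", "milk"), ("bread", "diaper", "beer", "eggs"),
          ("milk", "diaper", "beer", "cola"), ("bread", "milk", "diaper", "beer"),
          ("bread", "milk", "diaper", "cola")]
    for itemset, support in sorted(frequent_itemsets(db, 3).items()):
        print(itemset, support)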