To see the other types of publications on this topic, follow the link: Algorithm performance.

Dissertations on the topic "Algorithm performance"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 dissertations for research on the topic "Algorithm performance".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Wang, Lingyun. „Feeder Performance Analysis with Distributed Algorithm“. Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/31949.

Abstract:
How to evaluate the performance of an electric power distribution system unambiguously and quantitatively is not easy. How to accurately measure its efficiency for a whole year, using real-time hour-by-hour Locational Marginal Price data, is difficult. How to utilize distributed computing technology to accomplish these tasks in a timely fashion is challenging. This thesis addresses the issues mentioned above by investigating feeder performance analysis of electric power distribution systems with a distributed algorithm. Feeder performance analysis computes a modeled circuit's performance over an entire year, listing key circuit performance parameters such as efficiency, loading, losses, cost impact, power factor, three-phase imbalance, capacity usage and others, providing detailed operating information for the system and an overview of the performance of every circuit in the system. A diakoptics tearing method and Graph Trace Analysis based distributed computing technology is utilized to speed up the calculation. A general distributed computing architecture is established and a distributed computing algorithm is described. To the best of the author's knowledge, it is the first time that this detailed performance analysis is researched, developed and tested, using a diakoptics based tearing method and Graph Trace Analysis to split the system so that it can be analyzed with distributed computing technology.
Master of Science
2

Pochet, Juliette. „Evaluation de performance d’une ligne ferroviaire suburbaine partiellement équipée d’un automatisme CBTC“. Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC005.

Abstract:
In high-density area, the demand for railway transportation is continuously increasing. Operating companies turn to new intelligent signaling and control systems, such as Communication Based Train Control (CBTC) systems previously deployed on underground systems only. CBTC systems operate trains in automatic pilot and lead to increase the line capacity without expensive modification of infrastructures. They can also include a supervision module in charge of adapting train behavior according to operating objectives and to disturbances, increasing line robustness. In the literature of real-time traffic management, various methods have been proposed to supervise and reschedule trains, on the one hand for underground systems, on the other hand for railway systems. Making the most of the state-of-the-art in both fields, the presented work intend to contribute to the design of supervision and rescheduling functions of CBTC systems operating suburban railway systems. Our approach starts by designing a supervision module for a standard CBTC system. Then, we propose a rescheduling method based on a model predictive control approach and a multi-objective optimization of automatic train commands. In order to evaluate the performances of a railway system, it is necessary to use a microscopic simulation tool including a CBTC model. In this thesis, we present the tool developed by SNCF and named SIMONE. It allows realistic simulation of a railway system and a CBTC system, in terms of functional architecture and dynamics. The presented work has been directly involved in the design and implementation of the tool. Eventually, the proposed rescheduling method was tested with the tool SIMONE on disturbed scenarios. The proposed method was compared to a simple heuristic strategy intending to recover delays. The proposed multi-objective method is able to provide good solutions to the rescheduling problem and over-performs the simple strategy in most cases, with an acceptable process time. We conclude with interesting perspectives for future work
3

Musselman, Roger D. „Robustness a better measure of algorithm performance“. Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion-image.exe/07Sep%5FMusselman.pdf.

Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, September 2007.
Thesis Advisor(s): Sanchez, Paul J. "September 2007." Description based on title screen as viewed on October 25, 2007. Includes bibliographical references (p. 55-56). Also available in print.
4

Chu, Yijing, and 褚轶景. „Recursive local estimation: algorithm, performance and applications“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B49799320.

Abstract:
Adaptive filters are frequently employed in many applications, such as, system identification, adaptive echo cancellation (AEC), active noise control (ANC), adaptive beamforming, speech signal processing and other related problems, in which the statistic of the underlying signals is either unknown a priori, or slowly-varying. Given the observed signals under study, we shall consider, in this dissertation, the time-varying linear model with Gaussian or contaminated Gaussian (CG) noises. In particular, we focus on recursive local estimation and its applications in linear systems. We base our development on the concept of local likelihood function (LLF) and local posterior probability for parameter estimation, which lead to efficient adaptive filtering algorithms. We also study the convergence performance of these algorithms and their applications by theoretical analyses. As for applications, another important one is to utilize adaptive filters to obtain recursive hypothesis testing and model order selection methods. It is known that the maximum likelihood estimate (MLE) may lead to large variance or ill-conditioning problems when the number of observations is limited. An effective approach to address these problems is to employ various form of regularization in order to reduce the variance at the expense of slightly increased bias. In general, this can be viewed as adopting the Bayesian estimation, where the regularization can be viewed as providing a certain prior density of the parameters to be estimated. By adopting different prior densities in the LLF, we derive the variable regularized QR decomposition-based recursive least squares (VR-QRRLS) and recursive least M-estimate (VR-QRRLM) algorithms. An improved state-regularized variable forgetting factor QRRLS (SR-VFF-QRRLS) algorithm is also proposed. By approximating the covariance matrix in the RLS, new variable regularized and variable step-size transform domain normalized least mean square (VR-TDNLMS and VSS-TDNLMS) algorithms are proposed. Convergence behaviors of these algorithms are studied to characterize their performance and provide useful guidelines for selecting appropriate parameters in practical applications. Based on the local Bayesian estimation framework for linear model parameters developed previously, the resulting estimate can be utilized for recursive nonstationarity detection. This can be cast under the problem of hypothesis testing, as the hypotheses can be viewed as two competitive models between stationary and nonstationary to be selected. In this dissertation, we develop new regularized and recursive generalized likelihood ratio test (GLRT), Rao’s and Wald tests, which can be implemented recursively in a QRRLS-type adaptive filtering algorithm with low computational complexity. Another issue to be addressed in nonstationarity detection is the selection of various models or model orders. In particular, we derive a recursive method for model order selection from the Bayesian Information Criterion (BIC) based on recursive local estimation. In general, the algorithms proposed in this dissertation have addressed some of the important problems in estimation and detection under the local and recursive Bayesian estimation framework. They are intrinsically connected together and can potentially be utilized for various applications. In this dissertation, their applications to adaptive beamforming, ANC system and speech signal processing, e.g. adaptive frequency estimation and nonstationarity detection, have been studied. 
For adaptive beamforming, the difficulties in determining the regularization or loading factor have been explored by automatically selecting the regularization parameter. For ANC systems, to combat uncertainties in the secondary path estimation, regularization techniques can be employed. Consequently, a new filtered-x VR-QRRLM (Fx-VR-QRRLM) algorithm is proposed and the theoretical analysis helps to address challenging problems in the design of ANC systems. On the other hand, for ANC systems with online secondary-path modeling, the coupling effect of the ANC controller and the secondary path estimator is thoroughly studied by analyzing the Fx-LMS algorithm. For speech signal processing, new approaches for recursive nonstationarity detection with automatic model order selection are proposed, which provides online time-varying autoregressive (TVAR) parameter estimation and the corresponding stationary intervals with low complexity.
Electrical and Electronic Engineering
Doctoral
Doctor of Philosophy
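As background for the recursive estimators this dissertation builds on, the sketch below shows a textbook exponentially weighted RLS filter with a regularized initial inverse-correlation matrix. It is not the VR-QRRLS/VR-QRRLM algorithms proposed in the thesis; the signal model, filter order and forgetting factor are illustrative assumptions.

```python
import numpy as np

def rls_filter(x, d, order=4, lam=0.99, delta=1e2):
    """Exponentially weighted RLS: adapt w so that w @ u_n tracks d[n]."""
    w = np.zeros(order)
    P = delta * np.eye(order)            # regularized initial inverse correlation matrix
    y = np.zeros(len(d))
    for n in range(order - 1, len(d)):
        u = x[n - order + 1:n + 1][::-1]  # x[n], x[n-1], ..., x[n-order+1]
        k = P @ u / (lam + u @ P @ u)     # gain vector
        y[n] = w @ u
        e = d[n] - y[n]                   # a-priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w, y

# toy system-identification example
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
h = np.array([0.6, -0.3, 0.1, 0.05])                     # unknown FIR system
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, _ = rls_filter(x, d)
print("estimated taps:", np.round(w, 3))
```

The QR-decomposition-based and variable-regularized variants studied in the thesis refine exactly this update for numerical robustness and automatic parameter selection.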
5

Hamrin, Niklas, and Nils Runebjörk. „Examining Sorting Algorithm Performance Under System Load“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209765.

Abstract:
In this thesis, we conducted tests to determine how system load affects the speed of commonly used algorithms. The difficulty of obtaining these types of results theoretically, due to the large number of factors that can affect performance, motivated the use of simulation. An implementation was constructed that ran the sorting algorithms Mergesort, Quicksort and Radixsort while the system was put under different levels of stress. This stress was created by generating cache misses using the software stress-ng. The resulting completion times, as well as the cache activity, were logged. The results of the simulations were mostly within expectations: faster algorithms still performed better even under heavy loads and even handled the load better than the competition.
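A minimal harness in the spirit of this study might look as follows. This is a sketch, not the authors' test system: it assumes the stress-ng binary is installed and on the PATH, and it times Python's built-in sort as a stand-in for the Mergesort, Quicksort and Radixsort implementations used in the thesis.

```python
import random
import subprocess
import time

def time_sort(sort_fn, data):
    """Wall-clock one sort of a copy of `data`."""
    copy = list(data)
    t0 = time.perf_counter()
    sort_fn(copy)
    return time.perf_counter() - t0

data = [random.randint(0, 10**6) for _ in range(200_000)]

print(f"unloaded: {time_sort(sorted, data):.3f}s")

# Repeat the measurement while stress-ng thrashes the caches
# (assumes the stress-ng binary is available on this machine).
stress = subprocess.Popen(["stress-ng", "--cache", "4", "--timeout", "30s"])
try:
    time.sleep(1)                                # let the stressor workers spin up
    print(f"under cache stress: {time_sort(sorted, data):.3f}s")
finally:
    stress.terminate()
```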
6

Jonas, Mario Ricardo Edward. „High performance computing and algorithm development: application of dataset development to algorithm parameterization“. Thesis, University of the Western Cape, 2006. http://etd.uwc.ac.za/index.php?module=etd&amp.

Abstract:
A number of technologies exist that captures data from biological systems. In addition, several computational tools, which aim to organize the data resulting from these technologies, have been created. The ability of these tools to organize the information into biologically meaningful results, however, needs to be stringently tested. The research contained herein focuses on data produced by technology that records short Expressed Sequence Tags (EST's).
7

Anderson, Roger J. „Characterization of Performance, Robustness, and Behavior Relationships in a Directly Connected Material Handling System“. Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/26967.

Abstract:
In the design of material handling systems with complex and unpredictable dynamics, conventional search and optimization approaches that are based only on performance measures offer little guarantee of robustness. Using evidence from research into complex systems, the use of behavior-based optimization is proposed, which takes advantage of observed relationships between complexity and optimality with respect to both performance and robustness. Based on theoretical complexity measures, particularly algorithmic complexity, several simple complexity measures are created. The relationships between these measures and both performance and robustness are examined, using a model of a directly connected material handling system as a backdrop. The fundamental causes of the relationships and their applicability in the proposed behavior-based optimization approach are discussed.
Ph. D.
8

Martins, Wellington Santos. „Algorithm performance on a general purpose parallel computer“. Thesis, University of East Anglia, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296870.

9

Balasubramanian, Priya. „Interfacing VHDL performance models to algorithm partitioning tools“. Thesis, This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-02132009-172459/.

10

Li, Chuhe. „A sliding window BIRCH algorithm with performance evaluations“. Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-32397.

Abstract:
An increasing number of applications across various fields generate transactional data or other time-stamped data, all of which belongs to time series data. Time series data mining is a popular topic in the data mining field, and it introduces challenges for improving the accuracy and efficiency of algorithms for time series data. Time series data are dynamic, large-scale and highly complex, which makes it difficult to discover patterns among time series data with common methods suitable for static data. One of the hierarchical clustering methods, called BIRCH, was proposed and employed for addressing the problems of large datasets. It minimizes I/O and time costs. A CF tree is generated during its working process and clusters are generated after the four phases of the whole BIRCH procedure. A drawback of BIRCH is that it is not very scalable. This thesis is devoted to improving the accuracy and efficiency of the BIRCH algorithm. A sliding window BIRCH algorithm is implemented on the basis of the BIRCH algorithm. At the end of the thesis, the accuracy and efficiency of sliding window BIRCH are evaluated. A performance comparison among SW BIRCH, BIRCH and K-means is also presented using the Silhouette Coefficient and the Calinski-Harabasz Index. The preliminary results indicate that SW BIRCH may achieve a better performance than BIRCH in some cases.
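As a rough illustration of the idea (not the thesis implementation), the sketch below runs scikit-learn's Birch over successive sliding windows of a synthetic stream and scores each window with the silhouette coefficient; the window size, step and thresholds are arbitrary assumptions.

```python
import numpy as np
from sklearn.cluster import Birch
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# synthetic time-stamped stream drifting between three centres
stream = np.concatenate([
    rng.normal(loc=c, scale=0.3, size=(400, 2)) for c in ([0, 0], [3, 3], [6, 0])
])

window, step = 300, 100
for start in range(0, len(stream) - window + 1, step):
    X = stream[start:start + window]                      # current sliding window
    labels = Birch(threshold=0.5, n_clusters=3).fit_predict(X)
    score = silhouette_score(X, labels)
    print(f"window [{start:4d}, {start + window:4d}) silhouette = {score:.2f}")
```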
11

Tambouris, Efthimios. „Performance and scalability analysis of parallel systems“. Thesis, Brunel University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341665.

12

Kang, Seunghwa. „On the design of architecture-aware algorithms for emerging applications“. Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39503.

Abstract:
This dissertation maps various kernels and applications to a spectrum of programming models and architectures and also presents architecture-aware algorithms for different systems. The kernels and applications discussed in this dissertation have widely varying computational characteristics. For example, we consider both dense numerical computations and sparse graph algorithms. This dissertation also covers emerging applications from image processing, complex network analysis, and computational biology. We map these problems to diverse multicore processors and manycore accelerators. We also use new programming models (such as Transactional Memory, MapReduce, and Intel TBB) to address the performance and productivity challenges in the problems. Our experiences highlight the importance of mapping applications to appropriate programming models and architectures. We also find several limitations of current system software and architectures and directions to improve those. The discussion focuses on system software and architectural support for nested irregular parallelism, Transactional Memory, and hybrid data transfer mechanisms. We believe that the complexity of parallel programming can be significantly reduced via collaborative efforts among researchers and practitioners from different domains. This dissertation participates in the efforts by providing benchmarks and suggestions to improve system software and architectures.
13

Dash, Sajal. „Exploring the Landscape of Big Data Analytics Through Domain-Aware Algorithm Design“. Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99798.

Abstract:
Experimental and observational data emerging from various scientific domains necessitate fast, accurate, and low-cost analysis of the data. While exploring the landscape of big data analytics, multiple challenges arise from three characteristics of big data: the volume, the variety, and the velocity. High volume and velocity of the data warrant a large amount of storage, memory, and compute power while a large variety of data demands cognition across domains. Addressing domain-intrinsic properties of data can help us analyze the data efficiently through the frugal use of high-performance computing (HPC) resources. In this thesis, we present our exploration of the data analytics landscape with domain-aware approximate and incremental algorithm design. We propose three guidelines targeting three properties of big data for domain-aware big data analytics: (1) explore geometric and domain-specific properties of high dimensional data for succinct representation, which addresses the volume property, (2) design domain-aware algorithms through mapping of domain problems to computational problems, which addresses the variety property, and (3) leverage incremental arrival of data through incremental analysis and invention of problem-specific merging methodologies, which addresses the velocity property. We demonstrate these three guidelines through the solution approaches of three representative domain problems. We present Claret, a fast and portable parallel weighted multi-dimensional scaling (WMDS) tool, to demonstrate the application of the first guideline. It combines algorithmic concepts extended from the stochastic force-based multi-dimensional scaling (SF-MDS) and Glimmer. Claret computes approximate weighted Euclidean distances by combining a novel data mapping called stretching and Johnson Lindestrauss' lemma to reduce the complexity of WMDS from O(f(n)d) to O(f(n) log d). In demonstrating the second guideline, we map the problem of identifying multi-hit combinations of genetic mutations responsible for cancers to weighted set cover (WSC) problem by leveraging the semantics of cancer genomic data obtained from cancer biology. Solving the mapped WSC with an approximate algorithm, we identified a set of multi-hit combinations that differentiate between tumor and normal tissue samples. To identify three- and four-hits, which require orders of magnitude larger computational power, we have scaled out the WSC algorithm on a hundred nodes of Summit supercomputer. In demonstrating the third guideline, we developed a tool iBLAST to perform an incremental sequence similarity search. Developing new statistics to combine search results over time makes incremental analysis feasible. iBLAST performs (1+δ)/δ times faster than NCBI BLAST, where δ represents the fraction of database growth. We also explored various approaches to mitigate catastrophic forgetting in incremental training of deep learning models.
Doctor of Philosophy
Experimental and observational data emerging from various scientific domains necessitate fast, accurate, and low-cost analysis of the data. While exploring the landscape of big data analytics, multiple challenges arise from three characteristics of big data: the volume, the variety, and the velocity. Here volume represents the data's size, variety represents various sources and formats of the data, and velocity represents the data arrival rate. High volume and velocity of the data warrant a large amount of storage, memory, and computational power. In contrast, a large variety of data demands cognition across domains. Addressing domain-intrinsic properties of data can help us analyze the data efficiently through the frugal use of high-performance computing (HPC) resources. This thesis presents our exploration of the data analytics landscape with domain-aware approximate and incremental algorithm design. We propose three guidelines targeting three properties of big data for domain-aware big data analytics: (1) explore geometric (pair-wise distance and distribution-related) and domain-specific properties of high dimensional data for succinct representation, which addresses the volume property, (2) design domain-aware algorithms through mapping of domain problems to computational problems, which addresses the variety property, and (3) leverage incremental data arrival through incremental analysis and invention of problem-specific merging methodologies, which addresses the velocity property. We demonstrate these three guidelines through the solution approaches of three representative domain problems. We demonstrate the application of the first guideline through the design and development of Claret. Claret is a fast and portable parallel weighted multi-dimensional scaling (WMDS) tool that can reduce the dimension of high-dimensional data points. In demonstrating the second guideline, we identify combinations of cancer-causing gene mutations by mapping the problem to a well known computational problem known as the weighted set cover (WSC) problem. We have scaled out the WSC algorithm on a hundred nodes of Summit supercomputer to solve the problem in less than two hours instead of an estimated hundred years. In demonstrating the third guideline, we developed a tool iBLAST to perform an incremental sequence similarity search. This analysis was made possible by developing new statistics to combine search results over time. We also explored various approaches to mitigate the catastrophic forgetting of deep learning models, where a model forgets to perform machine learning tasks efficiently on older data in a streaming setting.
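The claimed iBLAST speedup of (1+δ)/δ over re-running a full search is easy to tabulate; the snippet below simply evaluates that expression from the abstract for a few database growth fractions.

```python
# Speedup of incremental search over a full re-run, as a function of the
# database growth fraction delta (from the abstract: speedup = (1 + delta) / delta).
for delta in (0.01, 0.05, 0.10, 0.25, 0.50):
    print(f"database grew by {delta:4.0%} -> incremental search ~{(1 + delta) / delta:5.1f}x faster")
```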
14

Hörmann, Wolfgang. „A Note on the Performance of the "Ahrens Algorithm"“. Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 2001. http://epub.wu.ac.at/1698/1/document.pdf.

Abstract:
This short note discusses performance bounds for the "Ahrens" algorithm, which can generate random variates from continuous distributions with monotonically decreasing density. This rejection algorithm uses constant hat functions and constant squeezes over many small intervals. The choice of these intervals is important. Ahrens has demonstrated that the equal area rule, which uses strips of constant area, leads to a very simple algorithm. We present bounds on the rejection constant of this algorithm depending only on the number of intervals. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
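For readers who want to see the equal area rule in action, here is a small sketch of an Ahrens-style rejection sampler for a monotone decreasing density (a truncated exponential), with a constant hat and a constant squeeze on each equal-area strip. The target density, truncation point and number of strips are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
f = lambda x: np.exp(-x)                     # monotone decreasing target density

# Equal-area strips on [0, x_max]: each strip holds the same probability mass.
x_max, n_strips = 10.0, 32
mass = 1.0 - np.exp(-x_max)
cuts = -np.log(1.0 - mass * np.arange(n_strips + 1) / n_strips)   # inverse CDF of Exp(1)

lo, hi = cuts[:-1], cuts[1:]
hat = f(lo)                                  # constant hat per strip (max of a decreasing f)
squeeze = f(hi)                              # constant squeeze per strip (min of a decreasing f)
strip_prob = hat * (hi - lo)
strip_prob /= strip_prob.sum()

def sample(n):
    out, rejected = [], 0
    while len(out) < n:
        i = rng.choice(n_strips, p=strip_prob)        # pick a strip proportional to hat area
        x = rng.uniform(lo[i], hi[i])
        u = rng.uniform(0.0, hat[i])
        if u <= squeeze[i] or u <= f(x):              # squeeze avoids most f() evaluations
            out.append(x)
        else:
            rejected += 1
    print(f"rejection rate: {rejected / (rejected + n):.3%}")
    return np.array(out)

xs = sample(10_000)
print("sample mean (exact is ~1 for Exp(1)):", xs.mean().round(3))
```

With more strips the hat hugs the density more tightly, which is exactly what the note's bounds on the rejection constant quantify.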
15

Wang, David Tawei. „Modern DRAM memory systems performance analysis and scheduling algorithm /“. College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2432.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
16

Song, Shuo. „Performance analysis and algorithm design for distributed transmit beamforming“. Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5724.

Abstract:
Wireless sensor networks has been one of the major research topics in recent years because of its great potential for a wide range of applications. In some application scenarios, sensor nodes intend to report the sensing data to a far-field destination, which cannot be realized by traditional transmission techniques. Due to the energy limitations and the hardware constraints of sensor nodes, distributed transmit beamforming is considered as an attractive candidate for long-range communications in such scenarios as it can reduce energy requirement of each sensor node and extend the communication range. However, unlike conventional beamforming, which is performed by a centralized antenna array, distributed beamforming is performed by a virtual antenna array composed of randomly located sensor nodes, each of which has an independent oscillator. Sensor nodes have to coordinate with each other and adjust their transmitting signals to collaboratively act as a distributed beamformer. The most crucial problem of realizing distributed beamforming is to achieve carrier phase alignment at the destination. This thesis will investigate distributed beamforming from both theoretical and practical aspects. First, the bit error ratio performance of distributed beamforming with phase errors is analyzed, which is a key metric to measure the system performance in practice. We derive two distinct expressions to approximate the error probability over Rayleigh fading channels corresponding to small numbers of nodes and large numbers of nodes respectively. The accuracy of both expressions is demonstrated by simulation results. The impact of phase errors on the system performance is examined for various numbers of nodes and different levels of transmit power. Second, a novel iterative algorithm is proposed to achieve carrier phase alignment at the destination in static channels, which only requires one-bit feedback from the destination. This algorithm is obtained by combining two novel schemes, both of which can greatly improve the convergence speed of phase alignment. The advantages in the convergence speed are obtained by exploiting the feedback information more efficiently compared to existing solutions. Third, the proposed phase alignment algorithm is modified to track time-varying channels. The modified algorithm has the ability to detect channel amplitude and phase changes that arise over time due to motion of the sensors or the destination. The algorithm can adjust key parameters adaptively according to the changes, which makes it more robust in practical implementation.
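The thesis proposes its own, faster one-bit feedback scheme; as context, the sketch below simulates the classic random-perturbation variant of one-bit feedback phase alignment, in which each node keeps a trial phase perturbation only when the destination reports an improved received signal strength. The node count, step size and iteration budget are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_nodes, n_iters, step = 20, 3000, 0.15

channel = rng.uniform(0.0, 2 * np.pi, n_nodes)   # unknown channel phases to the destination
theta = np.zeros(n_nodes)                        # transmit phase offsets the nodes control

def rss(phases):
    """Received signal strength at the destination (unit transmit amplitudes)."""
    return np.abs(np.exp(1j * (phases + channel)).sum())

best = rss(theta)
for _ in range(n_iters):
    trial = theta + step * rng.uniform(-1.0, 1.0, n_nodes)   # random perturbation
    if rss(trial) > best:            # destination broadcasts a single feedback bit
        theta, best = trial, rss(trial)

print(f"coherent gain achieved: {best:.1f} of {n_nodes} (perfect alignment)")
```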
17

Zheng, C. „High performance adaptive MIMO detection : from algorithm to implementation“. Thesis, Queen's University Belfast, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.557888.

Abstract:
Adaptive Multiple-Input Multiple-Output (A-MIMO) techniques are regarded as one of the core techniques in 3G/4G wireless communication systems and beyond. One of the key challenges for A-MIMO systems is to retrieve the spatially mixed transmitted symbols at the receiver. However, only a few high performance detection algorithms have been successfully implemented to achieve real-time performance. This thesis concentrates on developing high performance adaptive modulation MIMO detection algorithms and Model Based Design (MBD) techniques that bridge the gap between detection algorithms and efficient embedded implementations. From the algorithm perspective, this work proposes a novel near-optimal low complexity detection algorithm, the Real-valued Fixed-complexity Sphere Decoder (RFSD). The RFSD is derived to achieve quasi-ML decoding performance like the FSD, which is the most promising low complexity high performance parallel detection algorithm in existence, but with over 70% complexity reduction. In addition, an adaptive detection algorithm is proposed. This detection algorithm alleviates the BER degradation that current high performance detection algorithms experience and provides up to 46% BER improvement for small-constellation-dominated hybrid modulated MIMO systems. It also balances detection performance and complexity for MIMO configurations under different environments. From the implementation perspective, a Regular Choice Petri Net (RCPN) is proposed to accurately model and rapidly implement the adaptive detection algorithms. The Texas Instruments TMS320C64x+ DSP-based realisations from the RCPN model demonstrate a 90% reduction in run-time overhead and a 10% reduction in code memory as compared to languages in existence. Furthermore, an MBD design approach is developed to convert RCPN models into embedded implementations, by creating an automated allocation method and introducing the kernel concept from streaming applications into the scheduling process. The resulting FPGA-based multi-SIMD implementation achieves real-time performance with at least 52.6% less hardware resource or over 65% reduction in mapping complexity as compared to conventional schemes.
18

Almerström, Przybyl Simon. „A Trade-based Inference Algorithm for Counterfactual Performance Estimation“. Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254453.

Abstract:
A methodology for increasing the success rate in debt collection by matching individual call center agents with optimal debtors is developed. This methodology, called the trade algorithm, consists of the following steps. The trade algorithm first identifies groups of debtors for which agent performance varies. Based on these differences in performance, agents are put into clusters. An optimal call allocation for the clusters is then decided. Two methods to estimate the performance of an optimal call allocation are suggested. These methods are combined with Monte Carlo cross-validation and an alternative time-consistent validation procedure. Tests of significance are applied to the results and the effect size is estimated. The trade algorithm is applied to a dataset from the credit management services company Intrum and is shown to enhance performance.
19

Gerlach, Adam R. „Performance Enhancements of the Spin-Image Pose Estimation Algorithm“. University of Cincinnati / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1267730727.

20

Arif, Annatoma Arif. „BLURRED FINGERPRINT IMAGE ENHANCEMENT: ALGORITHM ANALYSIS AND PERFORMANCE EVALUATION“. Miami University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=miami1473428137332997.

21

Kostina, Victoria. „Optimization and performance analysis of the V-BLAST algorithm“. Thesis, University of Ottawa (Canada), 2006. http://hdl.handle.net/10393/27382.

Abstract:
Complexity-performance tradeoff in different implementations of the V-BLAST algorithm is discussed. Low-complexity alternatives to the optimal ordering procedure, such as ordering at 1st step, adaptive power allocation and pre-set non-uniform power allocation are proposed. A unified analytical framework for the optimum power allocation in the V-BLAST algorithm is presented. Comparative performance analysis of the optimum power allocation based on various optimization criteria (average and instantaneous block and total error rates) is given. Uniqueness of the optimum power allocation is proven for several scenarios. Compact closed-form approximations for the optimum power allocation and for the optimized error rates are derived. The SNR gain of optimization is rigorously defined and analyzed using analytical tools, including lower and upper bounds, high and low SNR approximations. The gain is upper bounded by the number of transmitters, for any modulation format and type of fading channel. While the average optimization is less complex than the instantaneous one, its performance is almost as good at high SNR. A measure of robustness of the optimized algorithm is introduced and evaluated. The optimized algorithm is shown to be robust to perturbations in individual and total transmit powers.
22

Mulgrew, Bernard. „On adaptive filter structure and performance“. Thesis, University of Edinburgh, 1987. http://hdl.handle.net/1842/11865.

23

Olsson, Victor, and Viktor Eklund. „CPU Performance Evaluation for 2D Voronoi Tessellation“. Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18259.

Abstract:
Voronoi tessellation can be used within a couple of different fields. Some of these fields include healthcare, construction and urban planning. Since Voronoi tessellations are used in multiple fields, it is worthwhile to know the strengths and weaknesses of the algorithms used to generate them, in terms of their efficiency. The objectives of this thesis are to compare two CPU algorithm implementations for Voronoi tessellation with regard to execution time and to see which of the two is the most efficient. The algorithms compared are the Bowyer-Watson algorithm and Fortune's algorithm. The Fortune's algorithm used in the research is based upon a pre-existing implementation, while the Bowyer-Watson implementation was made specifically for this research. Their differences in efficiency were determined by measuring and comparing their execution times. This was done in an iterative manner where, for each iteration, the amount of data to be computed was continuously increased. The results show that Fortune's algorithm is more efficient on the CPU without using any acceleration techniques for either of the algorithms. It took 70 milliseconds for the Bowyer-Watson method to calculate 3000 input points while Fortune's method took 12 milliseconds under the same conditions. In conclusion, Fortune's algorithm was more efficient because the Bowyer-Watson algorithm performs unnecessary calculations, namely checking all the triangles for every new point added. A suggestion for improving the speed of this algorithm would be to use a nearest neighbour search technique when searching through triangles.
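The thesis benchmarks its own Bowyer-Watson and Fortune implementations, which are not available here; the sketch below therefore only shows the shape of such a timing harness, using SciPy's Qhull-based Voronoi construction as a stand-in and growing the input size on each iteration.

```python
import time
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
for n in (1000, 2000, 3000, 5000, 10000):
    pts = rng.random((n, 2))
    t0 = time.perf_counter()
    Voronoi(pts)                   # Qhull-based construction, stand-in for the thesis code
    dt = (time.perf_counter() - t0) * 1e3
    print(f"{n:6d} points: {dt:7.1f} ms")
```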
24

Zhou, Yufeng. „Performance Evaluation of a Weighted Clustering Algorithm in NSPS Scenarios“. Thesis, KTH, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-140427.

Abstract:
In national security and public safety (NSPS) scenarios, the concept of device-to-device (D2D) clustering allows user equipment (UEs) to dynamically form clusters and thereby allows for local communication with partial or no cellular network assistance. We propose and evaluate a clustering approach to solve this problem in this thesis report. One of the key components of clustering is the selection of so called cluster head (CH) nodes that are responsible for the formation of clusters and act as a synchronization and radio resource management information source. In this thesis work we propose a weighted CH selection algorithm that takes into account UE capability, mobility and other information and aims at balancing between energy efficiency, discovery rate and cluster formation time. Numerical results show that the clustering approach consumes more energy but it can achieve a much higher discovery rate and communication rate for the system. Simulation results indicate that the weighted clustering approach is a viable alternative in NSPS situations.
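A weighted cluster-head score of the kind described can be illustrated in a few lines. The attributes, weights and normalization below are hypothetical placeholders, not the metric evaluated in the thesis.

```python
from dataclasses import dataclass

@dataclass
class UE:
    ue_id: int
    battery: float      # 0..1, remaining energy
    capability: float   # 0..1, relay/compute capability class
    speed: float        # m/s; higher mobility is worse for a cluster head

# Hypothetical weights; the thesis tunes its own metric to balance energy
# efficiency, discovery rate and cluster formation time.
W_BATTERY, W_CAPABILITY, W_MOBILITY = 0.5, 0.3, 0.2

def ch_score(ue: UE, max_speed: float = 30.0) -> float:
    mobility_penalty = min(ue.speed / max_speed, 1.0)
    return (W_BATTERY * ue.battery
            + W_CAPABILITY * ue.capability
            + W_MOBILITY * (1.0 - mobility_penalty))

candidates = [UE(1, 0.9, 0.4, 1.0), UE(2, 0.5, 1.0, 3.0), UE(3, 0.8, 0.8, 12.0)]
head = max(candidates, key=ch_score)
print("elected cluster head:", head.ue_id)
```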
25

Burger, Christoph Hartfield Roy J. „Propeller performance analysis and multidisciplinary optimization using a genetic algorithm“. Auburn, Ala, 2007. http://repo.lib.auburn.edu/2007%20Fall%20Dissertations/Burger_Christoph_57.pdf.

26

Mackey, Carol Ann. „A performance-driven fuzzy algorithm for placement of macro cells“. Thesis, The University of Arizona, 1995. http://hdl.handle.net/10150/291564.

Abstract:
Macro cell placement is an integral part of VLSI design. Existing placement techniques do not use a realistic human-like intuitive process for making decisions and therefore, lack the ability to make decisions based on several factors at once. In this research a quad-partitioning algorithm with a tabu search and a fuzzy cost function is used for macro cell placement. This approach partitions the design into small pieces that can be easily placed. The algorithm is based on a method which tries to reduce the path lengths and reduce the number of edges which cross out of a partition. The fuzzy cost function adds the human reasoning missing from other algorithms. The algorithm allows I/O cells to be preplaced or it can optimize their placement. The results show that this algorithm produces higher quality placements than other macro cell algorithms such as Lim, Chee and Wu and Lin and Du.
27

Comte, Céline. „Resource management in computer clusters : algorithm design and performance analysis“. Thesis, Institut polytechnique de Paris, 2019. http://www.theses.fr/2019SACLT034/document.

Abstract:
The growing demand for cloud-based services encourages operators to maximize resource efficiency within computer clusters. This motivates the development of new technologies that make resource management more flexible. However, exploiting this flexibility to reduce the number of computers also requires efficient resource-management algorithms that have a predictable performance under stochastic demand. In this thesis, we design and analyze such algorithms using the framework of queueing theory.Our abstraction of the problem is a multi-server queue with several customer classes. Servers have heterogeneous capacities and the customers of each class enter the queue according to an independent Poisson process. Each customer can be processed in parallel by several servers, depending on compatibility constraints described by a bipartite graph between classes and servers, and each server applies first-come-first-served policy to its compatible customers. We first prove that, if the service requirements are independent and exponentially distributed with unit mean, this simple policy yields the same average performance as balanced fairness, an extension to processor-sharing known to be insensitive to the distribution of the service requirements. A more general form of this result, relating order-independent queues to Whittle networks, is also proved. Lastly, we derive new formulas to compute performance metrics.These theoretical results are then put into practice. We first propose a scheduling algorithm that extends the principle of round-robin to a cluster where each incoming job is assigned to a pool of computers by which it can subsequently be processed in parallel. Our second proposal is a load-balancing algorithm based on tokens for clusters where jobs have assignment constraints. Both algorithms are approximately insensitive to the job size distribution and adapt dynamically to demand. Their performance can be predicted by applying the formulas derived for the multi-server queue
28

Malladi, Subrahmanya Sastry Venkata. „Modeling and Algorithm Performance For Seismic Surface Wave Velocity Estimation“. University of Akron / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=akron1194630399.

29

Xu, Lin. „Performance modelling and automated algorithm design for NP-hard problems“. Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/51175.

Abstract:
In practical applications, some important classes of problems are NP-complete. Although no worst-case polynomial time algorithm exists for solving them, state-of-the-art algorithms can solve very large problem instances quickly, and algorithm performance varies significantly across instances. In addition, such algorithms are rather complex and have largely resisted theoretical average-case analysis. Empirical studies are often the only practical means for understanding algorithms' behavior and for comparing their performance. My thesis focuses on two types of research questions. On the science side, the thesis seeks a better understanding of relations among problem instances, algorithm performance, and algorithm design. I propose many instance features/characteristics based on instance formulation, instance graph representations, as well as progress statistics from running some solvers. With such informative features, I show that solvers' runtime can be predicted by predictive performance models with high accuracy. Perhaps more surprisingly, I demonstrate that the solution of NP-complete decision problems (e.g., whether a given propositional satisfiability problem instance is satisfiable) can also be predicted with high accuracy. On the engineering side, I propose three new automated techniques for achieving state-of-the-art performance in solving NP-complete problems. In particular, I construct portfolio-based algorithm selectors that outperform any single solver on heterogeneous benchmarks. By adopting automated algorithm configuration, our highly parameterized local search solver, SATenstein-LS, achieves state-of-the-art performance across many different types of SAT benchmarks. Finally, I show that portfolio-based algorithm selection and automated algorithm configuration could be combined into an automated portfolio construction procedure. It requires significantly less domain knowledge and achieves similar or better performance than portfolio-based selectors based on known high-performance candidate solvers. The experimental results on many solvers and benchmarks demonstrate that the proposed prediction methods achieve high predictive accuracy for predicting algorithm performance as well as predicting solutions, while our automatically constructed solvers are state of the art for solving the propositional satisfiability problem (SAT) and the mixed integer programming problem (MIP). Overall, my research results in more than 8 publications including the 2010 IJCAI/JAIR best paper award. The portfolio-based algorithm selector, SATzilla, won 17 medals in the international SAT solver competitions from 2007 to 2012.
Faculty of Science, Department of Computer Science, Graduate
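A portfolio-based algorithm selector of the general kind described (one runtime-prediction model per candidate solver, then pick the solver predicted to be fastest on each instance) can be sketched as follows. The data is synthetic and the random-forest regressors are stand-ins; this is not SATzilla itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Synthetic stand-in data: rows are instance feature vectors, and each column of
# `runtimes` holds the measured runtime of one candidate solver on that instance.
n_instances, n_features, n_solvers = 500, 12, 3
features = rng.random((n_instances, n_features))
runtimes = np.column_stack([
    np.exp(3 * features[:, k] + 0.3 * rng.standard_normal(n_instances))
    for k in range(n_solvers)
])

train, test = slice(0, 400), slice(400, None)

# One runtime-prediction model per solver, as in portfolio selectors like SATzilla.
models = [RandomForestRegressor(n_estimators=100, random_state=0)
          .fit(features[train], np.log(runtimes[train, k]))
          for k in range(n_solvers)]

predicted = np.column_stack([m.predict(features[test]) for m in models])
chosen = predicted.argmin(axis=1)               # solver predicted to be fastest per instance

portfolio_time = runtimes[test][np.arange(chosen.size), chosen].sum()
best_single = runtimes[test].sum(axis=0).min()
oracle = runtimes[test].min(axis=1).sum()
print(f"portfolio {portfolio_time:.0f}s vs best single solver {best_single:.0f}s vs oracle {oracle:.0f}s")
```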
30

Hyyrynen, Fredrik, and Marcus Lignercrona. „A performance study of an evolutionary algorithm for two point stock forecasting“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208376.

Abstract:
This study was conducted to conclude whether or not it was possible to accurately predict stock behavior by analyzing general patterns in historical stock data. This was done by creating an evolutionary algorithm that learned and weighted possible outcomes by studying the behaviour of the Nasdaq stock market between 2000 and 2016 and using the result from the training to make predictions. The result of testing with varied parameters concluded that clear patterns could not reliably be established with the suggested method as small adjustments to the measuring dates yielded wildly different results. The results also suggests that the amount of data is more relevant than how closely the stocks are related for the performance and that less precise predictions performs better than predicting multiple degrees of change. The performance of the seemingly better setting was shown to perform worse than random predictions but research with other settings might yield more accurate predictions.
31

Vidas, Dario. „Performance Evaluation of Stereo Reconstruction Algorithms on NIR Images“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191148.

Abstract:
Stereo vision is one of the most active research areas in computer vision. While hundreds of stereo reconstruction algorithms have been developed, little work has been done on the evaluation of such algorithms and almost none on evaluation on Near-Infrared (NIR) images. Of almost a hundred examined, we selected a set of 15 stereo algorithms, mostly with real-time performance, which were then categorized and evaluated on several NIR image datasets, including single stereo pair and stream datasets. The accuracy and run time of each algorithm are measured and compared, giving an insight into which categories of algorithms perform best on NIR images and which algorithms may be candidates for real-time applications. Our comparison indicates that adaptive support-weight and belief propagation algorithms have the highest accuracy of all fast methods, but also longer run times (2-3 seconds). On the other hand, faster algorithms (that achieve 30 or more fps on a single thread) usually perform an order of magnitude worse when measuring the percentage of incorrectly computed pixels.
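The evaluation metric mentioned at the end, the percentage of incorrectly computed pixels, is the standard bad-pixel rate; a minimal sketch is shown below, assuming a hypothetical 1-pixel error threshold and 0 marking pixels without ground truth.

```python
import numpy as np

def bad_pixel_rate(disparity, ground_truth, threshold=1.0):
    """Fraction of valid pixels whose disparity error exceeds `threshold`."""
    valid = ground_truth > 0                      # 0 marks pixels with no ground truth
    err = np.abs(disparity - ground_truth)
    return np.count_nonzero(err[valid] > threshold) / np.count_nonzero(valid)

# toy example with a 4x4 disparity map
gt = np.array([[10, 10, 0, 12], [11, 11, 11, 12], [10, 10, 10, 0], [9, 9, 9, 9]], float)
est = gt + np.array([[0.2, 1.5, 0, -0.4], [0, 0, 2.1, 0.3], [0.1, -1.2, 0, 0], [0, 0, 0, 4.0]])
print(f"bad-pixel rate: {bad_pixel_rate(est, gt):.1%}")
```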
32

Dornelles, Edelweis Helena Ache Garcez. „Análise da performance do algoritmo d“. Biblioteca Digital de Teses e Dissertações da UFRGS, 1993. http://hdl.handle.net/10183/26546.

Abstract:
The test generation for combinational circuits that contain reconvergence is a NP-complete problem. With the rapid increase in the complexity of the fabricated circuits, the generation of test patterns poses a serious problem to the IC industry. A number of existing ATPG algorithms based on the D algorithm use heuristics to guide the decision process in the D-propagation and justification to improve the efficiency. The heuristics used by ATPG algorithm are based on structural, functional and probabilistics measures. These measures are commonly referred to as line controllability and observability and they are combined under the , more general notion of testability. The measures used by ATPG algorithms can be computed only once, during a preprocessing stage (static testability measures - STM's) or can be calculated dinamically, updating the testability measures during the test generation process (dymanic testability measures - DTM's). For some circuits, replacing STM's by DTM's decreases the average number of backtrackings per generated vector. Despite these decrease, the total CPU time per generated vector is greater when using DTM's instead of STM's. So, DTM's only must be used if the STM's don't present a good performance. This can be done by STM's until a certain number of backtrackings. If a test pattern has still not been found, then DTM's are used. Therefore, it is yet necessary to search for ways to improve the dynamic process and decrease the CPU time requirements. In the original approach some techniques for reducing the computational overhead of DTM's based on the well-know technique of selective path tracing are presented. In this work, the combined use of heuristics are analised and alternative techniques — the heuristics of partial recalculus and not free lines recalculus — are proposed. These alternative techniques were developed in order to minimize the overhead of the DTM's calculus. It is yet proposed the pre-implication technique which transfers to memory the algorithm complexity. It includes a preprocessing stage which storages all necesary informations to the generation of all test vectors. So, these informations don't need be computed in the generation of each test vector. The implementation of the D-Algorithm with diferent heuristics has possibilited a practical experiment. It was possible to analise the performance of the D-Algorithm on diferent circuit types and to demonstrate the efficiency of one of the proposed heuristics.
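As an illustration of the STM-then-DTM strategy described above, a minimal sketch of the control flow; the solver interface (run_d_algorithm) and the backtrack limit are hypothetical, not taken from the thesis:

```python
def generate_test_vector(run_d_algorithm, fault, backtrack_limit=10):
    """Hypothetical wrapper illustrating the STM-then-DTM strategy.

    run_d_algorithm(fault, measures, max_backtracks) is assumed to return a test
    vector, or None if it gives up within the backtrack budget.
    """
    # First try the cheap static measures, bounded by a backtrack budget.
    vector = run_d_algorithm(fault, measures="static", max_backtracks=backtrack_limit)
    if vector is None:
        # Fall back to dynamic measures, recalculated during the search.
        vector = run_d_algorithm(fault, measures="dynamic", max_backtracks=None)
    return vector
```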
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Qu, Shaohong. „High Performance Algorithms for Structural Analysis of Grid Stiffened Panels“. Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/36990.

Der volle Inhalt der Quelle
Annotation:
In this research, we apply modern high performance computing techniques to solve an engineering problem: the structural analysis of grid stiffened panels. An existing engineering code, SPANDO, is studied and modified to execute more efficiently on high performance workstations and parallel computers. Two new SPANDO packages, a modified sequential SPANDO and a parallel SPANDO, are developed. In developing the new sequential SPANDO, we use two existing high performance numerical packages, LAPACK and ARPACK, to solve our linear algebra problems. Also, a new block-oriented algorithm for computing the matrix-vector product w=A^{-1}Bx is developed. The experimental results show that the new sequential SPANDO reduces memory usage by over 70% and is at least 10 times faster than the original SPANDO. In the parallel SPANDO, ScaLAPACK and BLACS are used. There are many factors that may affect the performance of parallel SPANDO. The parallel performance and the effects of these factors are discussed in this thesis.
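The block-oriented scheme itself is not given in the abstract; as a minimal sketch of the underlying idea, computing w = A^{-1}Bx by factoring and solving rather than forming the inverse, assuming NumPy/SciPy in place of the LAPACK routines used in the thesis:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def apply_A_inv_B(A, B, x):
    """Compute w = A^{-1} B x by factoring A once and solving, never forming A^{-1}."""
    y = B @ x                    # matrix-vector product first (cheap)
    lu, piv = lu_factor(A)       # O(n^3) factorization, reusable across many x
    return lu_solve((lu, piv), y)

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
B = rng.standard_normal((n, n))
x = rng.standard_normal(n)
w = apply_A_inv_B(A, B, x)
print(np.allclose(A @ w, B @ x))   # True
```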
Master of Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Farghally, Mohammed Fawzi Seddik. „Visualizing Algorithm Analysis Topics“. Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73539.

Der volle Inhalt der Quelle
Annotation:
Data Structures and Algorithms (DSA) courses are critical for any computer science curriculum. DSA courses emphasize concepts related to procedural dynamics and Algorithm Analysis (AA). These concepts are hard for students to grasp when conveyed using traditional textbook material relying on text and static images. Algorithm Visualizations (AVs) emerged as a technique for conveying DSA concepts using interactive visual representations. Historically, AVs have dealt with portraying algorithm dynamics, and the AV developer community has decades of successful experience with this. But there exist few visualizations to present algorithm analysis concepts. This content is typically still conveyed using text and static images. We have devised an approach that we term Algorithm Analysis Visualizations (AAVs), capable of conveying AA concepts visually. In AAVs, analysis is presented as a series of slides where each statement of the explanation is connected to visuals that support the sentence. We developed a pool of AAVs targeting the basic concepts of AA. We also developed AAVs for basic sorting algorithms, providing a concrete depiction about how the running time analysis of these algorithms can be calculated. To evaluate AAVs, we conducted a quasi-experiment across two offerings of CS3114 at Virginia Tech. By analyzing OpenDSA student interaction logs, we found that intervention group students spent significantly more time viewing the material as compared to control group students who used traditional textual content. Intervention group students gave positive feedback regarding the usefulness of AAVs to help them understand the AA concepts presented in the course. In addition, intervention group students demonstrated better performance than control group students on the AA part of the final exam. The final exam taken by both the control and intervention groups was based on a pilot version of the Algorithm Analysis Concept Inventory (AACI) that was developed to target fundamental AA concepts and probe students' misconceptions about these concepts. The pilot AACI was developed using a Delphi process involving a group of DSA instructors, and was shown to be a valid and reliable instrument to gauge students' understanding of the basic AA topics.
Ph. D.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Arslan, Omer Cagri. „Implementation And Performance Evaluation Of A Three Antenna Direction Finding System“. Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/2/12611215/index.pdf.

Der volle Inhalt der Quelle
Annotation:
State-of-the-art direction finding (DF) systems usually have several antennas in order to increase accuracy and robustness to certain factors. In this thesis, a three-antenna DF system is built and evaluated. While more antennas give better DF performance, a three-antenna system is useful for its simplicity, and many of the problems in DF systems can be observed and evaluated easily. The system can be used for both azimuth and elevation direction-of-arrival (DOA) estimation. It is composed of three monopole antennas, an RF front end, A/D converters, and digital signal processing (DSP) units. A number of algorithms are considered, such as the three-channel interferometer, the correlative interferometer, the LSE (least squares error) based correlative interferometer, and the MUSIC (multiple signal classification) algorithm. Different problems in DF systems are investigated: gain/phase mismatch of the receiver channels, mutual coupling between antennas, multipath signals, and multiple sources. The advantages and disadvantages of the different algorithms are outlined.
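As an illustration of one of the listed methods, here is a minimal MUSIC sketch for a three-element uniform linear array; the array geometry, half-wavelength spacing, and signal model are assumptions for illustration and are not taken from the thesis hardware:

```python
import numpy as np

def steering(theta_deg, m=3, d=0.5):
    """Steering vector of an m-element uniform linear array, spacing d in wavelengths."""
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d * np.arange(m) * np.sin(theta))

def music_spectrum(X, n_sources, scan_deg):
    """MUSIC pseudo-spectrum from a snapshot matrix X (elements x snapshots)."""
    m, n = X.shape
    R = X @ X.conj().T / n                       # sample covariance
    eigval, eigvec = np.linalg.eigh(R)           # eigenvalues in ascending order
    En = eigvec[:, :m - n_sources]               # noise subspace
    p = []
    for ang in scan_deg:
        a = steering(ang, m)
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)

# Simulate one narrowband source at +20 degrees with additive noise.
rng = np.random.default_rng(1)
true_doa, snapshots = 20.0, 200
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((3, snapshots)) + 1j * rng.standard_normal((3, snapshots)))
X = np.outer(steering(true_doa), s) + noise
scan = np.arange(-90, 90.5, 0.5)
est = scan[np.argmax(music_spectrum(X, n_sources=1, scan_deg=scan))]
print(est)  # close to 20.0
```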
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Fleischer, Mark Alan. „Assessing the performance of the simulated annealing algorithm using information theory“. Case Western Reserve University School of Graduate Studies / OhioLINK, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=case1057677595.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Lavesson, Niklas. „Evaluation of classifier performance and the impact of learning algorithm parameters“. Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4578.

Der volle Inhalt der Quelle
Annotation:
Much research has been done in the fields of classifier performance evaluation and optimization. This work summarizes that research and tries to answer the question of whether algorithm parameter tuning has more impact on performance than the choice of algorithm. An alternative way of evaluation, a measure function, is also demonstrated. This type of evaluation is compared with one of the most accepted methods, the cross-validation test. Experiments described in this work show that parameter tuning often has more impact on performance than the actual choice of algorithm, and that the measure function could be a complement or an alternative to standard cross-validation tests.
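The thesis experiments are not reproduced here; the following scikit-learn sketch merely illustrates the kind of comparison described, scoring the same learner under cross-validation with default parameters and with a tuned parameter grid (dataset, grid, and fold count are arbitrary choices):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Baseline: default hyperparameters, scored with 10-fold cross-validation.
default_score = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10).mean()

# Tuned: search a small parameter grid, evaluated with nested cross-validation.
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 4, 8, None], "min_samples_leaf": [1, 5, 20]},
    cv=10,
)
tuned_score = cross_val_score(grid, X, y, cv=10).mean()

print(f"default: {default_score:.3f}  tuned: {tuned_score:.3f}")
```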
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Sankaran, Sundar G. „On Ways to Improve Adaptive Filter Performance“. Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/30198.

Der volle Inhalt der Quelle
Annotation:
Adaptive filtering techniques are used in a wide range of applications, including echo cancellation, adaptive equalization, adaptive noise cancellation, and adaptive beamforming. The performance of an adaptive filtering algorithm is evaluated based on its convergence rate, misadjustment, computational requirements, and numerical robustness. We attempt to improve the performance by developing new adaptation algorithms and by using "unconventional" structures for adaptive filters. Part I of this dissertation presents a new adaptation algorithm, which we have termed the Normalized LMS algorithm with Orthogonal Correction Factors (NLMS-OCF). The NLMS-OCF algorithm updates the adaptive filter coefficients (weights) on the basis of multiple input signal vectors, while NLMS updates the weights on the basis of a single input vector. The well-known Affine Projection Algorithm (APA) is a special case of our NLMS-OCF algorithm. We derive convergence and tracking properties of NLMS-OCF using a simple model for the input vector. Our analysis shows that the convergence rate of NLMS-OCF (and also APA) is exponential and that it improves with an increase in the number of input signal vectors used for adaptation. While we show that, in theory, the misadjustment of the APA class is independent of the number of vectors used for adaptation, simulation results show a weak dependence. For white input the mean squared error drops by 20 dB in about 5N/(M+1) iterations, where N is the number of taps in the adaptive filter and (M+1) is the number of vectors used for adaptation. The dependence of the steady-state error and of the tracking properties on the three user-selectable parameters, namely step size, number of vectors used for adaptation (M+1), and input vector delay D used for adaptation, is discussed. While the lag error depends on all of the above parameters, the fluctuation error depends only on step size. Increasing D results in a linear increase in the lag error and hence the total steady-state mean-squared error. The optimum choices for step size and M are derived. Simulation results are provided to corroborate our analytical results. We also derive a fast version of our NLMS-OCF algorithm that has a complexity of O(NM). The fast version of the algorithm performs orthogonalization using a forward-backward prediction lattice. We demonstrate the advantages of using NLMS-OCF in a practical application, namely stereophonic acoustic echo cancellation. We find that NLMS-OCF can provide faster convergence, as well as better echo rejection, than the widely used APA. While the first part of this dissertation attempts to improve adaptive filter performance by refining the adaptation algorithm, the second part of this work looks at improving the convergence rate by using different structures. From an abstract viewpoint, the parameterization we decide to use has no special significance, other than serving as a vehicle to arrive at a good input-output description of the system. However, from a practical viewpoint, the parameterization decides how easy it is to numerically minimize the cost function that the adaptive filter is attempting to minimize. A balanced realization is known to minimize the parameter sensitivity as well as the condition number for Grammians. Furthermore, a balanced realization is useful in model order reduction. These properties of the balanced realization make it an attractive candidate as a structure for adaptive filtering. 
We propose an adaptive filtering algorithm based on balanced realizations. The third part of this dissertation proposes a unit-norm-constrained equation-error based adaptive IIR filtering algorithm. Minimizing the equation error subject to the unit-norm constraint yields an unbiased estimate for the parameters of a system, if the measurement noise is white. The proposed algorithm uses the hyper-spherical transformation to convert this constrained optimization problem into an unconstrained optimization problem. It is shown that the hyper-spherical transformation does not introduce any new minima in the equation error surface. Hence, simple gradient-based algorithms converge to the global minimum. Simulation results indicate that the proposed algorithm provides an unbiased estimate of the system parameters.
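The NLMS-OCF algorithm itself is not reproduced here; the sketch below shows the standard affine projection (APA) update that the abstract identifies as its special case, applied to a toy system-identification problem. Filter length, step size, and regularization are illustrative values:

```python
import numpy as np

def apa_filter(x, d, num_taps, order=4, mu=0.5, delta=1e-4):
    """Affine Projection Algorithm sketch; NLMS is the special case order=1.

    x: input signal, d: desired signal, order: number of input vectors per update.
    """
    n_samples = len(x)
    w = np.zeros(num_taps)
    for n in range(num_taps + order - 1, n_samples):
        # Columns are the most recent `order` input vectors (newest first).
        X = np.column_stack(
            [x[n - k - num_taps + 1:n - k + 1][::-1] for k in range(order)])
        dn = d[n - np.arange(order)]
        e = dn - X.T @ w                               # a-priori errors
        # Regularized projection update: w += mu * X (X^T X + delta I)^{-1} e
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(order), e)
    return w

# Identify a short FIR system from a white-noise input.
rng = np.random.default_rng(2)
h = np.array([0.6, -0.3, 0.1])
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = apa_filter(x, d, num_taps=3)
print(np.round(w, 2))  # approximately [0.6, -0.3, 0.1]
```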
Ph. D.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Siti, MW, DV Nicolae, AJ Jimoh und A. Ukil. „Reconfiguration and Load Balancing in the LV and MV Distribution Networks for Optimal Performance“. IEEE Transactions on Power Delivery, 2007. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1000807.

Der volle Inhalt der Quelle
Annotation:
To make the distribution network operate at its optimum performance in an automated distribution system, reconfiguration has been proposed and researched. Considering, however, that optimum performance implies minimum loss, no overloading of transformers and cables, a correct voltage profile, and the absence of phase voltage and current imbalances, network reconfiguration alone is insufficient. It has to be complemented with techniques for phase rearrangement between the distribution transformer banks and the specific primary feeder with a radial structure, and for dynamic phase and load balancing along a feeder with a radial structure. This paper contributes such a technique at the low-voltage and medium-voltage levels of a distribution network, simultaneously with reconfiguration at both levels. While the neural network is adopted for the network reconfiguration problem, this paper introduces a heuristic method for the phase balancing/loss minimization problem. A comparison of the heuristic algorithm with the neural network shows the former to be more robust. The approach proposed here for the combined problem therefore uses the neural network in conjunction with a heuristic method, which enables different reconfiguration switches to be turned on/off and connected consumers to be switched between phases to keep the phases balanced. An application example of the proposed method using real data is presented.
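The paper's neural-network formulation and its specific heuristic are not reproduced here; the following sketch only illustrates the general flavor of a greedy phase-balancing heuristic, assigning each single-phase load to the currently least-loaded phase (load values and phase labels are invented for the example):

```python
def assign_phases(load_kva):
    """Greedy phase balancing: place each load on the currently least-loaded phase.

    load_kva: list of single-phase load magnitudes (illustrative data).
    Returns the per-load assignment and the resulting per-phase totals.
    """
    totals = {"A": 0.0, "B": 0.0, "C": 0.0}
    assignment = []
    # Placing large loads first generally gives a tighter balance.
    for load in sorted(load_kva, reverse=True):
        phase = min(totals, key=totals.get)
        totals[phase] += load
        assignment.append((load, phase))
    return assignment, totals

assignment, totals = assign_phases([25, 40, 10, 30, 15, 35, 20])
print(totals)   # per-phase totals end up reasonably close to each other
```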
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Kilic, Varlik. „Performance Improvement Of A 3d Reconstruction Algorithm Using Single Camera Images“. Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606259/index.pdf.

Der volle Inhalt der Quelle
Annotation:
In this study, the aim is to improve a set of image processing techniques used in a previously developed method for reconstructing the 3D parameters of a secondary passive target from single camera images. This 3D reconstruction method was developed and implemented on a setup consisting of a digital camera, a computer, and a positioning unit. Some automatic target recognition techniques were also included in the method. The passive secondary target used is a circle with two internal spots. In order to achieve real-time target detection, the existing binarization, edge detection, and ellipse detection algorithms are debugged, modified, or replaced to increase speed, to eliminate run-time errors, and to become compatible with target tracking. An overall speed of 20 Hz is achieved for 640x480 pixel, 8-bit grayscale images on a 2.8 GHz computer. A novel target tracking method with various tracking strategies is introduced to reduce the search area for target detection and to achieve detection and reconstruction at the maximum frame rate of the hardware. Based on the previously suggested lens distortion model, methods for distortion measurement, distortion parameter determination, and distortion correction for both radial and tangential distortions are developed. The implementation of this distortion correction method enhances the accuracy of the 3D reconstruction. The overall 3D reconstruction method is implemented in an integrated software and hardware environment as a combination of the best-performing methods among their alternatives. This autonomous, real-time system is able to detect the secondary passive target and reconstruct its 3D configuration parameters at a rate of 25 Hz. Even under extreme conditions, in which it is difficult or impossible to detect the target, no runtime failures are observed.
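The thesis's calibration procedure is not given in the abstract; as a minimal sketch of the commonly used radial-plus-tangential (Brown-Conrady) distortion model and its iterative inversion, with arbitrary coefficient values:

```python
import numpy as np

def distort(xy, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to normalized coords."""
    x, y = xy
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.array([xd, yd])

def undistort(xy_d, k1, k2, p1, p2, iterations=10):
    """Invert the model by fixed-point iteration, starting from the distorted point."""
    xy = np.array(xy_d, dtype=float)
    for _ in range(iterations):
        xy = xy_d - (distort(xy, k1, k2, p1, p2) - xy)
    return xy

point = np.array([0.30, -0.20])
distorted = distort(point, k1=-0.25, k2=0.05, p1=0.001, p2=-0.002)
recovered = undistort(distorted, k1=-0.25, k2=0.05, p1=0.001, p2=-0.002)
print(np.allclose(recovered, point, atol=1e-6))  # True
```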
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Vemuri, Aditya. „A high performance non-blocking checkpointing/recovery algorithm for ring networks /“. Available to subscribers only, 2006. http://proquest.umi.com/pqdweb?did=1203587991&sid=8&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Hung, Ling-Chin, und 洪凌芹. „Evolving Music Performance using Genetic Algorithm“. Thesis, 2008. http://ndltd.ncl.edu.tw/handle/02975768677446286833.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
I-Shou University
Master's Program, Department of Information Engineering
Academic year 96 (ROC calendar)
An unedited MIDI (Musical Instrument Digital Interface) piece can only produce an even, mechanical-sounding performance. In this thesis, we design a performance profile through hierarchical pulse sets, using a Genetic Algorithm (GA) to improve the musical interpretation. When speaking, we usually emphasize key words in order to convey an idea clearly, and the same holds for musical expression: a piece is divided into many phrases, and some phrases are emphasized by extending specific notes. A genetic algorithm mimics natural selection: simulated individuals compete, and the survivors reproduce to form the next generation; this process is referred to as evolution. In our research, we take the piece's rhythm, melody, and harmony into consideration and use the Genetic Algorithm to evolve the best pulse set. According to this pulse set, we modify the amplitudes of the MIDI piece to make it a more tuneful performance.
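The thesis's pulse-set encoding and fitness criteria are not reproduced here; the sketch below shows a generic GA loop of the kind described, evolving per-beat emphasis values against a toy fitness function that merely stands in for the rhythm/melody/harmony criteria:

```python
import random

BEATS = 8  # one bar of eight pulses (illustrative)

def fitness(pulse):
    """Toy stand-in for the thesis's criteria: reward emphasis on metrically
    strong beats (0 and 4) and penalize extreme deviations from neutral loudness."""
    strong = pulse[0] + pulse[4]
    spread = sum(abs(v - 1.0) for v in pulse)
    return strong - 0.3 * spread

def evolve(pop_size=30, generations=100, mutation=0.2):
    pop = [[random.uniform(0.5, 1.5) for _ in range(BEATS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, BEATS)           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:             # mutation: perturb one pulse value
                child[random.randrange(BEATS)] += random.uniform(-0.1, 0.1)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([round(v, 2) for v in best])  # the strongest emphasis tends to land on beats 0 and 4
```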
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Antelman, Kristin, Nisa Bakkalbasi, David Goodman, Chawki Hajjem und Stevan Harnad. „Evaluation of Algorithm Performance on Identifying OA“. 2005. http://hdl.handle.net/10150/105417.

Der volle Inhalt der Quelle
Annotation:
This is a second signal-detection analysis of the accuracy of a robot in detecting open access (OA) articles (by checking by hand how many of the articles the robot tagged as OA were really OA, and vice versa). We found that the robot significantly overcodes for OA. In our Biology sample, 40% of identified OA was in fact OA. In our Sociology sample, only 18% of identified OA was in fact OA. Missed OA was lower: 12% in Biology and 14% in Sociology. The sources of the error are impossible to determine from the present data, since the algorithm did not capture URLs for documents identified as OA. In conclusion, the robot is not yet performing at a desirable level, and future work may be needed to determine the causes and improve the algorithm.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Liu, Hwei-Yu, und 劉蕙瑜. „Characterizing Cooperate Financial Performance with Char Algorithm“. Thesis, 2005. http://ndltd.ncl.edu.tw/handle/24184849283526756842.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Central University
Institute of Industrial Management
Academic year 93 (ROC calendar)
From an investment perspective, investors want to use as few financial variables as possible to obtain as much performance information as possible. Everyone is increasingly faced with unmanageable amounts of financial data; hence, data mining or knowledge discovery affects all of us. In this study of mining company performance, we attempt to summarize strong characteristic rules for fundamental analysis using 81 financial statement variables. To address this problem, we propose an effective method, the Char Algorithm, which automatically produces characteristic rules describing the major characteristics of the data in a table. To fit the data types required by the Char Algorithm, we apply several preprocessing steps to the source financial statement data from 2001-2003. In the first step, data compression, we use wavelet methods to preprocess the time series data of several attributes from the 2001-2003 financial statements. After the data have been compressed with the wavelet technique, the second step, a sliding window, is applied to increase the amount of virtual data. Third, we use a clustering method to discretize the data so that it fits the required discrete data types. Constructing a concept tree to describe financial statements is a difficult task. In contrast to traditional Attribute-Oriented Induction methods, the Char Algorithm does not need a concept tree and only requires setting a desired coverage threshold to generate a minimal set of characteristic rules describing the given dataset. We develop a formal framework for adapting financial data to the Char Algorithm and provide advice that helps investors extract characteristic rules rapidly. We also observe that the growth-rate dimension is significant when characterizing companies with good performance.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Cheng, Kuang-Hung, und 鄭光宏. „Improving Wafer Retesting Performance Using Greedy Algorithm“. Thesis, 2019. http://ndltd.ncl.edu.tw/handle/3rn6n8.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Chung Hsing University
Department of Computer Science and Engineering
Academic year 107 (ROC calendar)
Technological advances in integrated circuits (ICs) drive increases in wafer size as well as in the number of dies that can be placed on a wafer. As a result, the costs of testing and retesting wafers also increase. Wafer test is divided into a probing phase and a re-probing phase: the probing phase checks the quality of each die on the wafer, while the re-probing phase retests only the failed dies discovered during probing. To probe a wafer, the probes on the probe card must make contact with the wafer for electrical testing (in-circuit test); this process is called a touch-down. Re-probing is provided free of charge by an IC packaging/testing company, so reducing the number of touch-downs reduces the cost of wafer testing. This is the problem addressed in this thesis. By analyzing the distribution of failed dies on a wafer after the probing phase, this thesis proposes two methods for generating new site maps for re-probing that reduce the number of touch-downs: an improved traditional method and a greedy method. The experimental results show that, compared to the traditional method, the improved traditional method and the greedy method reduce the number of touch-downs by 8% and 15% on average, respectively, with best-case reductions of 16% and 23%. Reducing the number of touch-downs also lowers probe-card maintenance costs in addition to increasing productivity.
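The thesis's site-map generation is not reproduced here; the sketch below illustrates the underlying greedy idea: repeatedly place the probe-card window where it covers the most not-yet-covered failed dies. Window size and die coordinates are invented for the example:

```python
def greedy_touch_downs(failed_dies, rows, cols):
    """Cover all failed dies with few probe-card touch-downs (greedy set cover).

    failed_dies: set of (x, y) die coordinates that must be re-probed.
    rows, cols: size of the probe-card site window, in dies.
    """
    remaining = set(failed_dies)
    touch_downs = []
    while remaining:
        best_pos, best_cover = None, set()
        # Candidate window origins: align the window so it contains at least one failed die.
        for (fx, fy) in remaining:
            for ox in range(fx - rows + 1, fx + 1):
                for oy in range(fy - cols + 1, fy + 1):
                    cover = {(x, y) for (x, y) in remaining
                             if ox <= x < ox + rows and oy <= y < oy + cols}
                    if len(cover) > len(best_cover):
                        best_pos, best_cover = (ox, oy), cover
        touch_downs.append(best_pos)
        remaining -= best_cover
    return touch_downs

fails = {(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (9, 9)}
print(len(greedy_touch_downs(fails, rows=2, cols=2)))  # 3 touch-downs instead of 6
```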
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

白聖秋. „Performance improvment of dynamic tree splitting algorithm“. Thesis, 2007. http://ndltd.ncl.edu.tw/handle/66783640231047075394.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Wang, Yi Han, und 王怡涵. „An efficient algorithm for performance-driven clustering“. Thesis, 1995. http://ndltd.ncl.edu.tw/handle/37409266559851271912.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Lin, Whe Dar, und 林慧達. „A Performance Driven Placement Algorithm For FPGAs“. Thesis, 1993. http://ndltd.ncl.edu.tw/handle/64593975287687098014.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Tsing Hua University
Department of Computer Science
Academic year 81 (ROC calendar)
In this paper, we propose a performance-driven placement method for FPGAs. The proposed system first assigns the levels of the network using the as-soon-as-possible method and finds relative locations with a bipartite weighted matching algorithm. It then searches for a network shape that fits the given FPGA architecture. Lastly, bipartite weighted matching is performed again to assign cells into the new shape. Our method produces a shorter critical path delay than other placement methods. Experimental results on two sets of benchmarks show that the proposed system is indeed very effective in minimizing the real delay after routing.
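The full placement flow is not reproduced here; the sketch below isolates the bipartite weighted matching step, assigning cells to slots so as to minimize a wire-length-style cost with the Hungarian algorithm (coordinates and the cost model are invented for the example):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative data: desired (ideal) coordinates of 4 cells and 4 free slots on the FPGA.
cell_targets = np.array([[0.0, 0.0], [1.2, 0.3], [2.1, 1.9], [0.4, 2.2]])
slot_coords = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 2.0], [0.0, 2.0]])

# Cost of putting cell i into slot j: Manhattan distance from its ideal location.
cost = np.abs(cell_targets[:, None, :] - slot_coords[None, :, :]).sum(axis=2)

rows, cols = linear_sum_assignment(cost)       # optimal bipartite weighted matching
for cell, slot in zip(rows, cols):
    print(f"cell {cell} -> slot {slot}")
print("total displacement:", cost[rows, cols].sum())
```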
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

ZENG, ZHAO-DENG, und 曾兆登. „A performance-driven placement algorithm for module synthesis“. Thesis, 1990. http://ndltd.ncl.edu.tw/handle/42265859739507168115.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Peng, Chun-Fang, und 彭俊方. „A Proxy Cache Replacement Algorithm: A Performance Evaluation“. Thesis, 2002. http://ndltd.ncl.edu.tw/handle/06027818361582232673.

Der volle Inhalt der Quelle
Annotation:
Master's thesis
National Tsing Hua University
Department of Computer Science
Academic year 90 (ROC calendar)
World-Wide Web traffic keeps growing rapidly. The web proxy is a common solution for reducing web traffic, and the cache replacement policy is a key factor in the performance of a web proxy. Cache architecture has been extensively studied in many fields, and several cache replacement algorithms for web proxies have been developed. Web proxy caches, however, exhibit some phenomena that differ from those in other applications such as computer architecture and operating systems. There are four distinguishing characteristics: (1) size distribution, (2) content distribution, (3) concentration of references, and (4) one-time referencing. This study proposes a novel algorithm that takes advantage of these four characteristics. Based on (2), we categorize web pages into two categories, and the cache is likewise divided into two caches, one for each category. The replacement algorithm within each cache then considers (1), (3), and (4) as its main factors. The experimental results show that the new approach achieves clear improvements in the hit rate and byte hit rate criteria.
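The thesis's two-cache design is not reproduced here; the sketch below shows a replacement policy in a similar spirit, a GreedyDual-Size-Frequency-style priority that accounts for object size and reference concentration (capacity, cost, and URLs are invented for the example):

```python
import heapq

class GDSFCache:
    """GreedyDual-Size-Frequency-style web cache: priority = L + freq * cost / size."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.L = 0.0                       # inflation value, raised on each eviction
        self.entries = {}                  # url -> (priority, freq, size)
        self.heap = []                     # (priority, url) candidates; may hold stale items

    def _priority(self, freq, size, cost=1.0):
        return self.L + freq * cost / size

    def access(self, url, size):
        if url in self.entries:            # hit: bump frequency and re-prioritize
            _, freq, size = self.entries[url]
            freq += 1
        else:                              # miss: make room, then admit
            freq = 1
            while self.used + size > self.capacity and self.heap:
                prio, victim = heapq.heappop(self.heap)
                # Skip stale heap entries whose priority no longer matches.
                if victim in self.entries and self.entries[victim][0] == prio:
                    self.L = prio          # inflate L to the evicted priority
                    self.used -= self.entries.pop(victim)[2]
            self.used += size
        prio = self._priority(freq, size)
        self.entries[url] = (prio, freq, size)
        heapq.heappush(self.heap, (prio, url))

cache = GDSFCache(capacity_bytes=1000)
for url, size in [("/a", 400), ("/b", 500), ("/a", 400), ("/c", 300)]:
    cache.access(url, size)
print(sorted(cache.entries))               # ['/a', '/c']: the large one-time object was evicted
```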
APA, Harvard, Vancouver, ISO und andere Zitierweisen