Theses on the topic « Algoritmo PC »

To see other types of publications on this topic, follow the link: Algoritmo PC.

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles.

Choose a source:

Consult the 24 best theses for your research on the topic « Algoritmo PC ».

Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Coller, Emanuela. « Analysis of the PC algorithm as a tool for the inference of gene regulatory networks : evaluation of the performance, modification and application to selected case studies ». Doctoral thesis, Italy, 2013. http://hdl.handle.net/10449/23814.

Full text
Abstract:
The expansion of a Gene Regulatory Network (GRN) by finding additional causally related genes is of great importance for our knowledge of biological systems and therefore relevant for biomedical and biotechnological applications. The aim of this thesis work is the development and evaluation of a bioinformatic method for GRN expansion. The method, named PC-IM, is based on the PC algorithm, which discovers causal relationships starting from purely observational data. PC-IM adopts an iterative approach that overcomes the limitations of previous applications of PC to GRN discovery. PC-IM takes as input the prior knowledge of a GRN (represented by nodes and relationships) and gene expression data. The output is a list of genes which expands the known GRN. Each gene in the list is ranked by the frequency with which it appears causally relevant, normalized to the number of times it was possible to find it. Since each frequency value is associated with precision and sensitivity values calculated using the prior knowledge of the GRN, the method outputs those genes that are above the frequency value that optimizes precision and sensitivity (cut-off frequency). To investigate the characteristics and performance of PC-IM, several parameters have been evaluated in this thesis work, such as the influence of the type and size of the input gene expression data, of the number of iterations, and of the type of GRN. A comparative analysis of PC-IM versus another recent expansion method (GENIES) has also been performed. Finally, PC-IM has been applied to expand two real GRNs of the model plant Arabidopsis thaliana.
APA, Harvard, Vancouver, ISO, and other styles.
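
To make the ranking concrete, here is a minimal sketch of the frequency ranking and cut-off selection the abstract describes. The gene names, the counts, and the use of the F-measure to trade precision against sensitivity are illustrative assumptions, not the published PC-IM code.

    # Hypothetical sketch of PC-IM's frequency ranking and cut-off selection;
    # gene names and counts are invented for illustration.

    def frequencies(relevant_counts, findable_counts):
        """Frequency = times a gene appeared causally relevant, normalized
        by the number of times it was possible to find it."""
        return {g: relevant_counts[g] / findable_counts[g] for g in relevant_counts}

    def cutoff_frequency(freqs, known_genes):
        """Sweep candidate cut-offs; keep the one maximizing the harmonic
        mean of precision and sensitivity against the prior-known GRN."""
        best_score, best_cut = 0.0, 0.0
        for cut in sorted(set(freqs.values())):
            selected = {g for g, f in freqs.items() if f >= cut}
            tp = len(selected & known_genes)
            if tp == 0:
                continue
            precision, sensitivity = tp / len(selected), tp / len(known_genes)
            f1 = 2 * precision * sensitivity / (precision + sensitivity)
            if f1 > best_score:
                best_score, best_cut = f1, cut
        return best_cut

    relevant = {"G1": 8, "G2": 5, "G3": 1, "G4": 7}
    findable = {"G1": 10, "G2": 10, "G3": 9, "G4": 8}
    freqs = frequencies(relevant, findable)
    cut = cutoff_frequency(freqs, known_genes={"G1", "G4"})
    print([g for g, f in sorted(freqs.items(), key=lambda x: -x[1]) if f >= cut])

In the thesis, the prior-known part of the GRN plays the role of known_genes here, and the genes at or above the cut-off form the proposed expansion.
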
2

Hodáň, Ján. « Automatizace kontroly PC ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-413297.

Full text
Abstract:
This thesis describes the design and implementation of automatic control of computer visualization. At the beginning, computer control is introduced, followed by a familiarization with computer control procedures. Subsequently, other methods that can perform the control are introduced. The main part of the work is devoted to describing the basic skills related to computer control. The last chapter explains how the visualization is implemented and how its success was evaluated. The result of the work is an application that visualizes the process and thereby makes control easier, faster, improved, and automated.
APA, Harvard, Vancouver, ISO, and other styles.
3

Kalisch, Markus. « Estimating high-dimensional dependence structures with the PC-algorithm / ». Zürich : ETH, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17783.

Full text
APA, Harvard, Vancouver, ISO, and other styles.
4

Bernini, Matteo. « An efficient Hardware implementation of the Peak Cancellation Crest Factor Reduction Algorithm ». Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-206079.

Full text
Abstract:
An important component of the cost of a radio base station comes from the Power Amplifier driving the array of antennas. The cost can be split into capital and operational expenditure, due respectively to the high design and realization costs and to the low energy efficiency of the Power Amplifier. Both of these cost components are related to the Crest Factor of the input signal. To reduce both costs it would be possible to lower the average power level of the transmitted signal, whereas a more energetic signal makes transmission more efficient, since it allows the receiver to better distinguish the message from noise and interference. These opposing needs motivate the research and development of solutions that reduce the excursion of the signal without sacrificing its average power level. One of the algorithms addressing this problem is Peak Cancellation Crest Factor Reduction. This work documents the design of a hardware implementation of this method, targeting a possible future ASIC for Ericsson AB. SystemVerilog is the Hardware Description Language used for both the design and the verification of the project, together with a MATLAB model used both to explore design choices and to validate the design against the output of the simulation. The two main goals of the design have been efficient hardware exploitation, aiming at a smaller area footprint on the integrated circuit, and the adoption of innovative design solutions in the controlling part of the design, for example the management of the cancelling pulse coefficients and the use of a time-division multiplexing strategy to further save area on the chip. In the contexts where both solutions can compete, the proposed one shows better results in terms of area and delay compared to the current methods in use at Ericsson, and it also provides suggestions and ideas for further improvements.
APA, Harvard, Vancouver, ISO, and other styles.
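
As a rough functional sketch of the peak cancellation idea described above: samples whose magnitude exceeds a threshold are pulled down by subtracting a scaled, centered cancellation pulse. The thesis implements this in fixed-point SystemVerilog; the windowed-sinc pulse, the threshold, and the single-pass processing below are assumptions made for illustration.

    import numpy as np

    def peak_cancellation(signal, threshold, pulse):
        """Subtract a scaled copy of a low-pass cancellation pulse at every
        sample exceeding the threshold; the pulse keeps the spectrum
        contained. A real design iterates to handle peak regrowth."""
        out = signal.astype(complex)
        half = len(pulse) // 2
        for p in np.where(np.abs(out) > threshold)[0]:
            excess = out[p] - threshold * out[p] / np.abs(out[p])  # complex overshoot
            lo, hi = max(0, p - half), min(len(out), p + half + 1)
            out[lo:hi] -= excess * pulse[half - (p - lo): half + (hi - p)]
        return out

    # Toy example: the cancellation pulse is approximated by a windowed sinc
    # with unit gain at its center.
    n = np.arange(-16, 17)
    pulse = np.sinc(n / 4) * np.hamming(len(n))
    pulse /= pulse[16]
    x = np.exp(1j * 2 * np.pi * 0.05 * np.arange(256)) * (1 + 0.8 * np.random.randn(256))
    y = peak_cancellation(x, threshold=1.5, pulse=pulse)
    print(np.abs(x).max(), np.abs(y).max())  # peak magnitude before and after
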
5

McClenney, Walter O. « Analysis of the DES, LOKI, and IDEA algorithms for use in an encrypted voice PC network ». Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA297919.

Full text
APA, Harvard, Vancouver, ISO, and other styles.
6

Schwanke, Ullrich. « Trigger and reconstruction farms in the HERA-B experiment and algorithms for a Third Level Trigger ». Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2000. http://dx.doi.org/10.18452/14565.

Full text
Abstract:
The HERA-$B$ experiment at Deutsches Elektronen-Synchrotron (DESY) Hamburg aims at investigating the physics of particles containing $b$ quarks. The experiment focusses on measuring CP violation in the system of neutral $B$ mesons. It is expected that the precise determination of the CP asymmetry in the channel $B^0(\bar{B}^0)\to J/\psi K_S^0$ will have an impact on the further development of the Standard Model of Elementary Particle Physics and of cosmological theories. The HERA-$B$ experiment uses the proton beam of the HERA storage ring in fixed-target mode. $B$ hadrons are produced in pairs when protons from the beam halo interact with target nuclei. The interactions are recorded by a forward spectrometer with roughly 600,000 readout channels. At the HERA-$B$ centre-of-mass energy of 41.6 GeV, the $b\bar{b}$ cross section is only a tiny fraction of the total inelastic cross section. Only about one in 10$^6$ events contains $b$ quarks, which turns the selection of signal events into a particular challenge. The selection is accomplished by a four-stage data acquisition and trigger system reducing the event rate from 10 MHz to about 20 Hz. Besides custom-made electronics, several hundred PCs are used in the trigger system. The computers are arranged in two so-called PC farms with more than 200 processors each. The PC farms provide the computing capacity for trigger decisions and the prompt analysis of event data. One farm executes fast trigger programs with a computing time of 1-100 ms per event. The other farm performs online reconstruction of the events before the data are archived on tape; here the computing time per event is in the range of several seconds. This thesis covers two topics. In the beginning, the technical implementation of the trigger and reconstruction farms is described, with emphasis on the software systems which make calibration data available to the farms and which provide a centralised view of the results of the executing processes. The principal part of the thesis deals with algorithms for a Third Level Trigger, which is to come into operation on the trigger farm together with the existing programs. Processes of the type $B^0(\bar{B}^0)\to J/\psi X$ have a very clean signature when the $J/\psi$ decays to an $e^+e^-$ or $\mu^+\mu^-$ pair. The trigger system attempts to identify two unlike-sign leptons of the same flavour whose invariant mass matches the $J/\psi$. In later steps, the tracks are required to originate from a common vertex close to the target. It is assumed that these kinematic constraints are sufficient to pick out events of this type among the copious background processes. In contrast, the Third Level Trigger is to be applied to signal processes with fewer kinematic constraints. Such events occur, for example, when two $B$ mesons created in a proton-target collision decay semileptonically. The trigger system then selects merely the two leptons, which do not originate from a common vertex in this case. The Third Level Trigger has 100 ms at its disposal to extract further criteria from the data which can serve to distinguish between signal and background events. This thesis investigates, with the aid of Monte-Carlo simulations, how the data of the experiment's silicon vertex detector can contribute to the decisions of a Third Level Trigger. The trigger aims at reconstructing tracks from the decay cascade of $B$ mesons in addition to the leptons selected by the preceding trigger levels.
A fast pattern recognition for the vertex detector demonstrates that the reconstruction of all tracks and the application of trigger algorithms are possible within the given time slot of 100 ms. The determination of track parameters in the target region exploits the Kalman-filter method to account for the multiple scattering of particles in the detector material. The application of this method is, however, made difficult by two facts: first, the momentum of the reconstructed tracks is not known; second, the material distribution in the detector cannot be taken into consideration in detail due to timing limitations. Adequate approximations for the momentum and for the material traversed by a particle help to accomplish a sufficient accuracy of the track parameters. The reconstructed tracks constitute the starting point of several trigger algorithms, whose suitability for selecting signal events is investigated. Our studies indicate that the reconstruction of tracks with large impact parameters is a more promising approach than a search for secondary vertices.
APA, Harvard, Vancouver, ISO, and other styles.
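
The closing observation, that cutting on tracks with large impact parameters is more promising than a secondary-vertex search, can be illustrated with a toy 2-D geometry. The units, the cut value, the track format, and the acceptance rule are all invented; the real trigger works on 3-D tracks from the silicon vertex detector.

    import numpy as np

    def impact_parameter(point, direction, target=np.zeros(2)):
        """Perpendicular distance of a straight 2-D track (point + t*direction)
        from the target position, via the 2-D cross product."""
        d = direction / np.linalg.norm(direction)
        r = target - point
        return abs(r[0] * d[1] - r[1] * d[0])

    def trigger(tracks, cut_mm=0.5, min_tracks=2):
        """Accept the event if enough tracks miss the target by more than
        the cut: a hypothetical stand-in for a detached B-decay signature."""
        detached = [t for t in tracks if impact_parameter(*t) > cut_mm]
        return len(detached) >= min_tracks

    tracks = [(np.array([0.0, 1.2]), np.array([1.0, 0.1])),
              (np.array([0.0, 0.8]), np.array([1.0, -0.05])),
              (np.array([0.0, 0.01]), np.array([1.0, 0.0]))]
    print(trigger(tracks))  # -> True: two tracks are clearly detached
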
7

Hoffmann, Gustavo André. « Study of the audio coding algorithm of the MPEG-4 AAC standard and comparison among implementations of modules of the algorithm ». Biblioteca Digital de Teses e Dissertações da UFRGS, 2002. http://hdl.handle.net/10183/1697.

Full text
Abstract:
Audio coding is used to compress digital audio signals, thereby reducing the number of bits needed to transmit or store an audio signal. This is useful when network bandwidth or storage capacity is very limited. Audio compression algorithms are based on an encoding and decoding process. In the encoding step, the uncompressed audio signal is transformed into a coded representation, thereby compressing the audio signal. Thereafter, the coded audio signal eventually needs to be restored (e.g. for playing back) through decoding. The decoder receives the bitstream and reconverts it into an uncompressed signal. ISO-MPEG is a standard for high-quality, low bit-rate video and audio coding. The audio part of the standard is composed of algorithms for high-quality, low bit-rate audio coding, i.e. algorithms that reduce the original bit rate while guaranteeing high quality of the audio signal. The audio coding algorithms comprise MPEG-1 (with three different layers), MPEG-2, MPEG-2 AAC, and MPEG-4. This work presents a study of the MPEG-4 AAC audio coding algorithm. In addition, it presents implementations of the AAC algorithm on different platforms and comparisons among them. The implementations are in C, in Assembly for the Intel Pentium, in C on a DSP processor, and in HDL. Since each implementation has its own application niche, each one is valid as a final solution. Moreover, another purpose of this work is the comparison among these implementations, considering estimated costs, execution time, and the advantages and disadvantages of each one.
APA, Harvard, Vancouver, ISO, and other styles.
8

Saptari, Adi. « PC computer based algorithm for the selection of material handling equipment for a distribution warehouse based on least annual cost and operating parameters ». Ohio : Ohio University, 1990. http://www.ohiolink.edu/etd/view.cgi?ohiou1183473503.

Full text
APA, Harvard, Vancouver, ISO, and other styles.
9

Verbyla, Petras. « Network inference using independence criteria ». Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277912.

Full text
Abstract:
Biological systems are driven by complex regulatory processes. Graphical models play a crucial role in the analysis and reconstruction of such processes. It is possible to derive regulatory models using network inference algorithms from high-throughput data, for example from gene or protein expression data. A wide variety of network inference algorithms have been designed and implemented. Our aim is to explore the possibilities of using statistical independence criteria for biological network inference. The contributions of our work fall into four parts. First, we provide a detailed overview of some of the most popular general independence criteria: distance covariance (dCov), kernel canonical correlation (KCC), kernel generalized variance (KGV) and the Hilbert-Schmidt Independence Criterion (HSIC). We provide easy-to-understand geometrical interpretations for these criteria and explicitly show the equivalence of dCov, KGV and HSIC. Second, we introduce a new criterion for measuring dependence based on the signal-to-noise ratio (SNRIC). SNRIC is significantly faster to compute than other popular independence criteria; it is an approximate criterion but becomes exact under many popular modelling assumptions, for example for data from an additive noise model. Third, we compare the performance of the independence criteria on biological experimental data within the framework of the PC algorithm. Since not all criteria are available in a version that allows testing conditional independence, we propose and test an approach which relies on residuals and requires only an unconditional version of an independence criterion. Finally, we propose a novel method to infer networks with feedback loops. We use an MCMC sampler, which samples using a loss function based on an independence criterion. This allows us to find networks under very general assumptions, such as non-linear relationships, non-Gaussian noise distributions and feedback loops.
APA, Harvard, Vancouver, ISO, and other styles.
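
Of the criteria surveyed, HSIC has a particularly compact estimator. The sketch below uses the standard biased estimate trace(KHLH)/(n-1)^2 with Gaussian kernels and a median-heuristic bandwidth; the bandwidth rule and sample sizes are conventional choices, not the thesis' exact setup. The residual trick described in the abstract would first regress x and y on the conditioning set and then apply hsic to the residuals; that step is omitted here.

    import numpy as np

    def gaussian_gram(x, sigma=None):
        """Gram matrix of a Gaussian kernel; bandwidth from the median heuristic."""
        x = np.atleast_2d(x).reshape(len(x), -1)
        sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
        if sigma is None:
            sigma = np.sqrt(0.5 * np.median(sq[sq > 0]))
        return np.exp(-sq / (2 * sigma ** 2))

    def hsic(x, y):
        """Biased HSIC estimate trace(KHLH)/(n-1)^2; larger = more dependent."""
        n = len(x)
        H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
        K, L = gaussian_gram(x), gaussian_gram(y)
        return np.trace(K @ H @ L @ H) / (n - 1) ** 2

    rng = np.random.default_rng(0)
    a = rng.normal(size=300)
    print(hsic(a, a ** 2), hsic(a, rng.normal(size=300)))  # dependent >> independent
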
10

Alsawaf, Anas. « Development of a PC-based cost engineering algorithm for capital intensive industries based on the methodology of total absorption standard costing ». Ohio : Ohio University, 1992. http://www.ohiolink.edu/etd/view.cgi?ohiou1176492634.

Full text
APA, Harvard, Vancouver, ISO, and other styles.
11

Coller, Emanuela. « Analysis of the PC algorithm as a tool for the inference of gene regulatory networks : evaluation of the performance, modification and application to selected case studies ». Doctoral thesis, Università degli studi di Trento, 2013. https://hdl.handle.net/11572/368606.

Full text
Abstract:
The expansion of a Gene Regulatory Network (GRN) by finding additional causally related genes is of great importance for our knowledge of biological systems and therefore relevant for biomedical and biotechnological applications. The aim of this thesis work is the development and evaluation of a bioinformatic method for GRN expansion. The method, named PC-IM, is based on the PC algorithm, which discovers causal relationships starting from purely observational data. PC-IM adopts an iterative approach that overcomes the limitations of previous applications of PC to GRN discovery. PC-IM takes as input the prior knowledge of a GRN (represented by nodes and relationships) and gene expression data. The output is a list of genes which expands the known GRN. Each gene in the list is ranked by the frequency with which it appears causally relevant, normalized to the number of times it was possible to find it. Since each frequency value is associated with precision and sensitivity values calculated using the prior knowledge of the GRN, the method outputs those genes that are above the frequency value that optimizes precision and sensitivity (cut-off frequency). To investigate the characteristics and performance of PC-IM, several parameters have been evaluated in this thesis work, such as the influence of the type and size of the input gene expression data, of the number of iterations, and of the type of GRN. A comparative analysis of PC-IM versus another recent expansion method (GENIES) has also been performed. Finally, PC-IM has been applied to expand two real GRNs of the model plant Arabidopsis thaliana.
APA, Harvard, Vancouver, ISO, and other styles.
12

Coller, Emanuela. « Analysis of the PC algorithm as a tool for the inference of gene regulatory networks : evaluation of the performance, modification and application to selected case studies ». Doctoral thesis, University of Trento, 2013. http://eprints-phd.biblio.unitn.it/1315/1/Emanuela_Coller_phd-thesis.pdf.

Full text
Abstract:
The expansion of a Gene Regulatory Network (GRN) by finding additional causally related genes is of great importance for our knowledge of biological systems and therefore relevant for biomedical and biotechnological applications. The aim of this thesis work is the development and evaluation of a bioinformatic method for GRN expansion. The method, named PC-IM, is based on the PC algorithm, which discovers causal relationships starting from purely observational data. PC-IM adopts an iterative approach that overcomes the limitations of previous applications of PC to GRN discovery. PC-IM takes as input the prior knowledge of a GRN (represented by nodes and relationships) and gene expression data. The output is a list of genes which expands the known GRN. Each gene in the list is ranked by the frequency with which it appears causally relevant, normalized to the number of times it was possible to find it. Since each frequency value is associated with precision and sensitivity values calculated using the prior knowledge of the GRN, the method outputs those genes that are above the frequency value that optimizes precision and sensitivity (cut-off frequency). To investigate the characteristics and performance of PC-IM, several parameters have been evaluated in this thesis work, such as the influence of the type and size of the input gene expression data, of the number of iterations, and of the type of GRN. A comparative analysis of PC-IM versus another recent expansion method (GENIES) has also been performed. Finally, PC-IM has been applied to expand two real GRNs of the model plant Arabidopsis thaliana.
APA, Harvard, Vancouver, ISO, and other styles.
13

Huamán, Bustamante Jesús Omar. « Implementación de un controlador difuso de temperatura prototipo usando la inferencia difusa de Takagi Sugeno ». Universidad Nacional de Ingeniería. Programa Cybertesis PERÚ, 2007. http://cybertesis.uni.edu.pe/uni/2007/huaman_bj/html/index-frames.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles.
14

王盈欽. « Algorithms on PC-clusters for clustering gene chip data ». Thesis, 2004. http://ndltd.ncl.edu.tw/handle/31703531317606136100.

Full text
Abstract:
Master's thesis
Chung Hua University
Department of Computer Science and Information Engineering
Academic year 92
In biological research, gene chips have recently become popular and effective examination tools that can be applied to cell biochemistry and disease detection. Large amounts of gene expression data are obtained by scanning the hybridized chips and processing the images; these data are then clustered to analyze the relations among genes. Processing such large data sets consumes considerable computing time. In this thesis we present how to design a clustering algorithm on a PC cluster to speed up the computation, and we then test our algorithm and evaluate its performance on gene chip data.
APA, Harvard, Vancouver, ISO, and other styles.
15

Hsu, Jr-yu, and 許志宇. « Apply PC Cluster and Genetic Algorithm on Generation Unit Commitment ». Thesis, 2009. http://ndltd.ncl.edu.tw/handle/68555135992969757924.

Full text
Abstract:
Master's thesis
I-Shou University
Department of Electrical Engineering
Academic year 97
The objective of unit commitment is to schedule the status and the real power outputs of units so as to minimize the system production cost over the planning period while simultaneously satisfying the load demand, the spinning reserve, and the physical and operational constraints of each individual unit. Unit commitment is in essence a combinatorial optimization problem: obtaining the global optimal solution demands a great deal of computation. In this thesis, Parallel Genetic Algorithm approaches are presented to solve the unit commitment problem, and the results are compared with those obtained using a sequential Genetic Algorithm. These approaches are based on the principle of natural selection and survival of the fittest, and they require enormous computation for the huge number of crossover, mutation, and evaluation operations needed to approach the global optimum. Fortunately, thanks to highly efficient parallel computation on a PC cluster, this time-consuming problem can be solved.
APA, Harvard, Vancouver, ISO, and other styles.
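
A minimal sketch of the encoding such a parallel GA might use: a binary unit-by-hour commitment matrix, a cost-plus-penalty fitness, and a process pool standing in for the PC cluster. The demand curve, capacities, costs, proportional dispatch, and GA settings are all invented; the thesis' dispatch and constraint handling are far more detailed.

    import numpy as np
    from multiprocessing import Pool

    HOURS, UNITS = 24, 4
    DEMAND = 150 + 60 * np.sin(np.linspace(0, 2 * np.pi, HOURS))  # invented load (MW)
    CAP = np.array([80.0, 60.0, 50.0, 40.0])    # unit capacities (MW), invented
    COST = np.array([18.0, 22.0, 27.0, 33.0])   # production cost ($/MWh), invented

    def fitness(chromosome):
        """Production cost plus a heavy penalty whenever committed capacity
        cannot cover demand; dispatch is simplified to proportional loading."""
        schedule = chromosome.reshape(UNITS, HOURS)
        cap = schedule.T @ CAP
        short = np.maximum(DEMAND - cap, 0.0)
        load = np.minimum(DEMAND / np.maximum(cap, 1e-9), 1.0)
        cost = np.sum(schedule * CAP[:, None] * COST[:, None] * load[None, :])
        return cost + 1e4 * short.sum()

    def evolve(pop, rng, generations=200):
        """Rank by fitness, keep the better half, refill with one-point
        crossover and bit-flip mutation."""
        with Pool() as pool:  # parallel fitness evaluation: the cluster stand-in
            for _ in range(generations):
                scores = pool.map(fitness, list(pop))
                pop = pop[np.argsort(scores)]
                kids = pop[: len(pop) // 2].copy()
                cut = int(rng.integers(1, pop.shape[1] - 1))
                kids[::2, cut:], kids[1::2, cut:] = (
                    kids[1::2, cut:].copy(), kids[::2, cut:].copy())
                kids[rng.random(kids.shape) < 0.01] ^= 1  # mutation
                pop[len(pop) // 2:] = kids
        return pop[0]

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        pop = rng.integers(0, 2, size=(40, UNITS * HOURS), dtype=np.int8)
        print(fitness(evolve(pop, rng)))
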
16

Liang-Chia, Tseng, and 曾亮嘉. « A Genetic Algorithm on PC-Clusters for the Double Digest Problem ». Thesis, 2003. http://ndltd.ncl.edu.tw/handle/98524302053987794784.

Full text
Abstract:
Master's thesis
Chung Hua University
Department of Computer Science and Information Engineering
Academic year 91
Physical mapping, including hybridization mapping and restriction mapping, is an important topic in computational biology research. A critical problem in restriction mapping is the double digest problem (DDP). According to the literature, DDP is NP-hard, and no polynomial-time algorithm has yet been developed to solve it. A genetic algorithm was designed to solve the DDP, and a computer program was developed to examine its efficiency. The purpose of this study is to run the genetic algorithm as a distributed process on PC clusters and thereby shorten the time needed to solve the DDP.
APA, Harvard, Vancouver, ISO, and other styles.
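
The double digest problem can be stated compactly: given the fragment-length multisets produced by enzyme A, by enzyme B, and by both enzymes together, find orderings of the A and B fragments whose merged cut sites reproduce the double digest. Below is a sketch of the fitness a GA would minimize; the toy instance is invented.

    from itertools import accumulate
    from collections import Counter

    def double_digest_fragments(order_a, order_b):
        """Fragment lengths produced by cutting with both enzymes: merge the
        cut-site positions implied by each single-digest ordering."""
        sites = sorted(set(accumulate(order_a)) | set(accumulate(order_b)))
        return Counter(b - a for a, b in zip([0] + sites, sites))

    def fitness(order_a, order_b, observed_ab):
        """Number of mismatched fragments between predicted and observed
        double digest; 0 means the orderings explain the data."""
        predicted = double_digest_fragments(order_a, order_b)
        observed = Counter(observed_ab)
        return sum(((predicted - observed) + (observed - predicted)).values())

    # Toy 10 kb map: a GA would permute order_a and order_b to drive the
    # score to zero.
    print(fitness(order_a=[3, 7], order_b=[4, 6], observed_ab=[3, 1, 6]))  # -> 0
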
17

Horng, Shih-Cheng, and 洪士程. « Simulation Of Minimum Delay Distributed Routing Algorithm Using NetBIOS In PC Network ». Thesis, 1995. http://ndltd.ncl.edu.tw/handle/06965654080718713671.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Control Engineering
Academic year 83
The problem of obtaining efficient routing procedures for the fast delivery of messages to their destinations is of utmost importance in the design of modern computer networks. By routing we mean the decisions regarding the outgoing link to be used for transmitting messages at each node. A distributed routing algorithm offers reliability, robustness, and near-optimal efficiency, and is therefore more suitable than a centralized facility for routing in complex networks. The distributed routing algorithm assumes knowledge of the link delay gradients for all links, and the perturbation analysis (PA) technique can be used to estimate these gradients; PA was developed to provide on-line estimation of performance sensitivities or gradients of queueing systems. The purpose of this thesis is, first, to derive the infinitesimal perturbation analysis (IPA) estimator for the GI/G/1 queueing system and use it to estimate the link delay gradients. Second, we used NetBIOS to transmit the messages and implemented the minimum delay distributed routing algorithm on simulated networks, computing the link delay gradients both from analytical formulas and with the IPA approach.
APA, Harvard, Vancouver, ISO, and other styles.
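
The IPA estimator mentioned above has a classic textbook form for the waiting time in a GI/G/1 queue: run Lindley's recursion, accumulate dS/dθ along each busy period, and reset at idle periods. The sketch below assumes service times S_n = θ·X_n and checks the IPA estimate against a common-random-numbers finite difference; it illustrates the technique, not the thesis code.

    import numpy as np

    def mean_wait_and_ipa(theta, arrivals, base_services):
        """Lindley recursion W_{n+1} = max(0, W_n + S_n - A_n), S_n = theta*X_n.
        IPA: dW/dtheta accumulates dS/dtheta = X_n along a busy period and
        resets to 0 whenever the queue empties."""
        w = dw = 0.0
        waits, grads = [], []
        for a, x in zip(arrivals, base_services):
            waits.append(w)
            grads.append(dw)
            w_next = w + theta * x - a
            if w_next > 0:
                w, dw = w_next, dw + x
            else:
                w, dw = 0.0, 0.0
        return np.mean(waits), np.mean(grads)

    rng = np.random.default_rng(2)
    A = rng.exponential(1.0, 200_000)   # interarrival times
    X = rng.exponential(0.7, 200_000)   # base service times, scaled by theta
    m, g = mean_wait_and_ipa(1.0, A, X)
    m2, _ = mean_wait_and_ipa(1.01, A, X)
    print(g, (m2 - m) / 0.01)           # IPA estimate vs. finite difference
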
18

Lai, Jiun Wei, and 賴駿緯. « A fast and automatic AC-PC labeling algorithm for brain CT and MRI ». Thesis, 2008. http://ndltd.ncl.edu.tw/handle/7ngfbq.

Full text
Abstract:
Master's thesis
Chang Gung University
Graduate Institute of Electrical Engineering
Academic year 96
The Talairach coordinate system is commonly used to describe the location of brain structures across individuals. Mapping computed tomography (CT), magnetic resonance imaging (MRI), or a brain atlas to the Talairach coordinate system is defined by marking two anatomical points, the anterior commissure (AC) and the posterior commissure (PC). In this study, an automatic and efficient AC-PC labeling algorithm for MRI and CT images is proposed. First, an automatic algorithm extracts the Mid-Sagittal Plane (MSP), the plane that separates the brain into two symmetric hemispheres, from the MRI. Since AC and PC are located on the MSP, we then search for them on it with the help of elastically registering a pre-labeled MSP template. However, this approach does not work well on CT, since anatomical features on CT are not as conspicuous as on MRI. We therefore first label AC and PC on MRI by the previous approach and then rigidly map the labeled MRI to the corresponding CT to indicate the positions of AC and PC. Experimental results show that the errors of labeling AC and PC on MRI are less than 2 mm, while the errors of labeling AC and PC on CT are 1.48 mm and 2.49 mm, respectively.
APA, Harvard, Vancouver, ISO, and other styles.
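
The final MRI-to-CT step relies on a rigid registration. A common way to obtain such a transform from matched landmarks is the Kabsch (orthogonal Procrustes) algorithm sketched below; the landmark coordinates and AC/PC positions are invented, and the thesis registers full image volumes rather than point sets.

    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares rotation R and translation t with dst ≈ R @ src + t
        (Kabsch algorithm on paired landmarks, rows = points)."""
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cd - R @ cs

    # Hypothetical matched landmarks in MRI and CT coordinates (mm).
    mri_pts = np.array([[0, 0, 0], [80, 0, 0], [0, 90, 0], [0, 0, 70.]])
    true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.]])
    ct_pts = mri_pts @ true_R.T + np.array([10, -5, 2.])

    R, t = rigid_transform(mri_pts, ct_pts)
    ac_mri, pc_mri = np.array([42, 55, 36.]), np.array([42, 30, 36.])  # labeled on MRI
    print(R @ ac_mri + t, R @ pc_mri + t)  # AC/PC positions indicated on CT
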
19

Hamilton, Patrick Stewart. « Development and evaluation of a new QRS detection algorithm using the IBM PC ». 1985. http://catalog.hathitrust.org/api/volumes/oclc/12569915.html.

Full text
Abstract:
Thesis (M.S.)--University of Wisconsin--Madison, 1985.
Typescript. Description based on print version record. Includes bibliographical references (leaf 79).
APA, Harvard, Vancouver, ISO, and other styles.
20

Kang, Sung-Lien, and 康松連. « A Genetic Algorithm on PC-Clusters for the Fragment Assembly Problem with Unknown Orientation ». Thesis, 2004. http://ndltd.ncl.edu.tw/handle/79852103959940712479.

Full text
Abstract:
Master's thesis
Chung Hua University
Department of Computer Science and Information Engineering
Academic year 92
The sequencing of DNA generates a large number of DNA fragments, so fragment assembly has become a critical problem to be tackled. Due to the limitations of current experimental technologies, only DNA sequences shorter than about 1000 base pairs can be sequenced directly, yet most DNA sequences are much longer than this. Consequently, DNA sequences first have to be broken up using special technologies such as the shotgun method, and the resulting fragments are then assembled back into the original DNA sequence. This kind of problem is called "fragment assembly", and how to assemble the broken fragments into the correct DNA sequence is an important issue. Previous works have demonstrated that fragment assembly problems are NP-hard and have proposed data structures and algorithms for them, but still could not complete the task in an acceptable time. In this thesis, we propose a genetic algorithm running on a distributed infrastructure built with PC clusters to solve the fragment assembly problem. We build an experimental system to demonstrate its effectiveness, and the experimental results show that the proposed method raises the performance of fragment assembly as expected.
APA, Harvard, Vancouver, ISO, and other styles.
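
The core quantity inside a fragment-assembly GA of this kind is the overlap score of a candidate layout; because fragment orientation is unknown, each fragment also carries an orientation bit and may enter as its reverse complement. A toy sketch follows, with invented fragments and a deliberately simple exact-match overlap.

    def overlap(a, b, min_len=3):
        """Length of the longest suffix of a that equals a prefix of b."""
        for k in range(min(len(a), len(b)), min_len - 1, -1):
            if a[-k:] == b[:k]:
                return k
        return 0

    def revcomp(s):
        """Reverse complement of a DNA string."""
        return s[::-1].translate(str.maketrans("ACGT", "TGCA"))

    def layout_score(fragments, flips):
        """Total overlap of a candidate layout: the GA evolves the fragment
        permutation together with per-fragment orientation bits (flips)."""
        seqs = [revcomp(f) if o else f for f, o in zip(fragments, flips)]
        return sum(overlap(x, y) for x, y in zip(seqs, seqs[1:]))

    frags = ["ATGGCCT", "CCTTGAC", "GTCAAGG"]   # toy reads
    print(layout_score(frags, flips=[0, 0, 1])) # third read enters reversed

A GA maximizing this score over permutations and orientation bits recovers a layout from which the consensus sequence is built.
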
21

Yen, Hao-Che, and 嚴浩哲. « Implementation of a Parallel Genetic Algorithm on PC Cluster to Solve Groundwater Optimization Problems ». Thesis, 2002. http://ndltd.ncl.edu.tw/handle/93745849132944811544.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Environmental Engineering
Academic year 90
PC clusters are a recent high-speed computing technique with a very good cost/performance ratio, successfully applied in many fields such as astronomy, physics, hydrodynamics, electromagnetics, meteorology, and water resources planning and management. The main objective of this research is to develop a parallel genetic algorithm that can be executed on a PC cluster platform to find optimal solutions of groundwater remediation problems. The results show that the parallel genetic algorithm can solve complicated groundwater optimization problems effectively and more efficiently: compared with the sequential genetic algorithm models, the computational time of the parallel version is significantly reduced. Furthermore, as the number of CPUs increases, the model maintains good computational efficiency and speedup, so the computational time of large-scale, complicated problems can be remarkably decreased by using PC clusters containing more CPUs.
APA, Harvard, Vancouver, ISO, and other styles.
22

Hung, Yin-Chieh, and 洪英傑. « Solving the Product and Operator assignment problem for a PC assembly factory by Genetic Algorithms ». Thesis, 2002. http://ndltd.ncl.edu.tw/handle/72012969592617571083.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Manufacturing Engineering
Academic year 90
In recent years, demand for electronic products has grown rapidly, and PC assembly requires an immediate response to market needs, so fast and precise product assignment design is very important in production planning. The extremely short lead times of computer assembly factories make the assignment difficult to plan. In addition, because of the external form of the computer, automatic or semi-automatic assembly is not possible and manual assembly is the only method. Therefore, the learning curve should be taken into consideration in the assignment plan, and lot size influences it: for example, when the assembly operation assigned to an operator varies, the work can consume twice the usual time due to lack of adaptation. The main issue of a computer assembly plan is therefore choosing an adequate trade-off between labor cost and makespan. This research targets the development of a real-time model of computer assembly assignment. To serve as a reference for actual computer assembly factories, it considers the multi-objective trade-off between labor cost and makespan, the decision of whether to split the order, the number of assembly lines, the sequence of operations, and the assignment of operators. Finally, genetic algorithms are applied to obtain the result.
APA, Harvard, Vancouver, ISO, and other styles.
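
The learning-curve consideration can be made concrete with the standard Wright model, T_n = T_1 · n^(log2 r) for learning rate r, under which reassigning an operator to a different model restarts the curve. The 85% rate and the times below are invented for illustration; the thesis' actual learning model may differ.

    import math

    def unit_time(t1, n, rate=0.85):
        """Wright learning curve: time for the n-th repetition; with an 85%
        rate the unit time falls 15% each time cumulative output doubles."""
        return t1 * n ** math.log(rate, 2)

    def batch_time(t1, qty, rate=0.85):
        """Total time for qty consecutive repetitions of the same task."""
        return sum(unit_time(t1, n, rate) for n in range(1, qty + 1))

    # Keeping one operator on a 100-unit run preserves learning; splitting
    # the order into two 50-unit assignments restarts the curve and costs more.
    print(batch_time(10.0, 100))     # single uninterrupted run
    print(2 * batch_time(10.0, 50))  # split across two assignments
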
23

Shen, Kuo-Cheng, and 沈國丞. « Building a PC-Based Image Inspection System to detect the Blood Eggs with the K-Nearest Neighbor Algorithm ». Thesis, 2017. http://ndltd.ncl.edu.tw/handle/vynxa3.

Full text
Abstract:
Master's thesis
National Formosa University
Department of Electrical Engineering
Academic year 105
There are currently 1,300 poultry farms raising laying hens in Taiwan, yet no more than 25 of them produce eggs whose quality meets the CAS standards. At present, the equipment for grading and packaging eggs must be imported, which is very expensive; if this equipment could be developed within Taiwan, costs would fall and egg quality would rise. This paper presents a system to detect blood spots in eggs, together with a simple man-machine interface that lets users quickly adopt the approach. A non-destructive method based on image detection is proposed: a simple box with a light source is used to candle the eggs so that they become transparent, and an image is taken. The captured image is binarized; we then normalize the images, derive the size of the egg, perform median filtering, and convert the image into HSV color space for color analysis. We take the H component as a feature and use K-Nearest Neighbor classification for processing. Finally, the results of the analysis are shown on a PC screen, revealing whether the eggs have blood spots or not.
APA, Harvard, Vancouver, ISO, and other styles.
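
A sketch of the detection pipeline described in the abstract: median filtering, HSV conversion, an H-channel histogram as the feature vector, and a K-Nearest-Neighbor classifier. The OpenCV and scikit-learn calls are standard, but the bin count, kernel size, value of k, and the synthetic stand-in images are assumptions, not the thesis' settings.

    import cv2
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def hue_feature(bgr_image, bins=32):
        """Median-filter the candled-egg image, convert to HSV, and keep a
        normalized histogram of the H channel as the feature vector."""
        smooth = cv2.medianBlur(bgr_image, 5)
        hsv = cv2.cvtColor(smooth, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180]).ravel()
        return hist / (hist.sum() + 1e-9)

    def train(images, labels, k=3):
        """Fit a KNN classifier on hue features; label 1 = blood spot."""
        X = np.array([hue_feature(im) for im in images])
        return KNeighborsClassifier(n_neighbors=k).fit(X, labels)

    # Synthetic stand-ins for candled egg images (BGR): reddish = blood.
    red = np.full((64, 64, 3), (40, 40, 200), np.uint8)
    yellow = np.full((64, 64, 3), (40, 200, 230), np.uint8)
    clf = train([red, yellow, red, yellow], [1, 0, 1, 0], k=1)
    print(clf.predict([hue_feature(red)])[0])  # -> 1
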
24

Mathirajan, M. « Some PC-based Heuristics For Employee Pick-up Vehicle Routing Problem And Influence Of Spatial Demand Distribution ». Thesis, 1995. http://etd.iisc.ernet.in/handle/2005/1869.

Full text
APA, Harvard, Vancouver, ISO, and other styles.
