Scientific literature on the topic "Algoritmo PC"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Choose a source:

Consult the topical lists of journal articles, books, theses, conference reports, and other scholarly sources on the topic "Algoritmo PC".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online, whenever this information is included in the metadata.

Journal articles on the topic "Algoritmo PC"

1

Ospina R., Oscar, Héctor Anzola Vásquez, Olber Ayala Duarte, and Andrea Baracaldo Martínez. "Validación de un algoritmo de procesamiento de imágenes Red Green Blue (RGB), para la estimación de proteína cruda en gramíneas vs la tecnología de espectroscopía de infrarrojo cercano (NIRS)." Revista de Investigaciones Veterinarias del Perú 31, no. 2 (20 June 2020): e17940. http://dx.doi.org/10.15381/rivep.v31i2.17940.

Full text
Abstract:
This study was aimed at evaluating the accuracy of the Red, Green, Blue (RGB) image-analysis algorithm included in the TaurusWebs® software, which calculates the crude protein percentage of dry matter (%CP) in grasses from pasture images taken by a drone fitted with RGB cameras. The %CP measurements calculated by the algorithm were compared against a reference, Near Infrared Reflectance Spectroscopy (NIRS), from the Corpoica (Agrosavia) laboratory, calibrated for grasses. Forty-two samples were taken for NIRS: 18 from high-tropic grasses in Cundinamarca (kikuyo, Pennisetum clandestinum; falsa poa, Holcus lanatus; pasto brasilero, Phalaris arundinacea) and 24 from the low tropics in Tolima, Colombia (pangola, Digitaria decumbens; pará, Brachiaria mutica; bermuda, Cynodon dactylon; and colosuana, Bothriochloa pertusa). The NIRS results were compared against the algorithm's evaluations of images of grasses from the same paddocks where the samples were taken, using the non-parametric Kendall correlation (rho = 0.83) and Kruskal-Wallis tests. No differences were found between the %CP of the grasses measured by NIRS and the %CP measured by the RGB image-analysis algorithm. In conclusion, the information generated by the algorithm can be used for %CP analysis in grasses.
APA, Harvard, Vancouver, ISO, and other styles
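Entry 1 compares the two %CP measurement series with Kendall's rank correlation and the Kruskal-Wallis test. As a minimal illustration, Kendall's tau can be computed in a few lines of pure Python (the paired paddock values below are invented, not the study's data):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation between two paired samples.

    Counts concordant vs. discordant pairs; +1 means the two methods
    rank the samples identically, -1 means exactly opposite rankings.
    """
    assert len(x) == len(y)
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / n_pairs

# Hypothetical paired crude-protein estimates (%CP) for the same
# paddocks: one value from the RGB image algorithm, one from NIRS.
cp_rgb  = [12.1, 14.3, 9.8, 11.0, 16.2, 13.5]
cp_nirs = [12.4, 14.0, 10.1, 11.3, 15.8, 13.1]
print(kendall_tau(cp_rgb, cp_nirs))  # → 1.0 (identical rankings)
```

Agreement in ranking, rather than agreement in absolute value, is exactly what a non-parametric comparison of two instruments needs.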
2

Trevisan, André Luis, Carlos Augusto Luz, Giane Fernanda Schneider Gross, and Alessandra Dutra. "Pensamento computacional no novo Ensino Médio: atividades desplugadas envolvendo padrões e regularidades." Em Teia | Revista de Educação Matemática e Tecnológica Iberoamericana 13, no. 3 (24 October 2022): 178. http://dx.doi.org/10.51359/2177-9309.2022.254685.

Full text
Abstract:
Many strategies and methodologies have been employed to foster numerous skills in Basic Education students, such as promoting logical reasoning, critical thinking, and problem solving, and the acquisition of scientific content. One of them is the development of Computational Thinking (CT). This study therefore analyses the contributions of unplugged activities, applied to students in two high-school classes at a school in northwestern Paraná, for teaching the content of patterns and regularities through the four pillars of CT: decomposition, pattern recognition, abstraction, and algorithm. The research method employed was content analysis. The results showed that the students had no difficulties with decomposition, pattern recognition, or the design and application of algorithms. Abstraction, however, was the hardest category to grasp, revealing weaknesses in the students' reading and interpretation of data.
APA, Harvard, Vancouver, ISO, and other styles
3

Machado, Kheronn Khennedy, and Alessandra Dutra. "Para além da programação: desenvolvimento do pensamento computacional nos conteúdos escolares." Em Teia | Revista de Educação Matemática e Tecnológica Iberoamericana 13, no. 3 (24 October 2022): 310. http://dx.doi.org/10.51359/2177-9309.2022.254689.

Full text
Abstract:
The introduction of Computational Thinking (CT) into Basic Education, as a way of promoting skills related to the needs of today's society, has attracted interest in educational reforms and curriculum discussions. Application studies, however, focus on digital resources or unplugged activities, disregarding the school content through which CT-related skills are formed. The objective of this study is thus to present and discuss the results of applying the pillars of CT (abstraction, decomposition, and algorithm) to teach systems of first-degree linear equations to 31 first-year high-school students at a school in northwestern Paraná. Analysis of a diagnostic test applied at state level (Prova Paraná) showed that questions whose descriptors dealt with systems of equations had success rates of 3% and 14.8%. An intervention was then designed, using a problem-solving methodology tied to the fundamentals of CT. The results show understanding of the concepts in the students' solutions to the activities, making the fundamentals of CT explicit. The study's contribution thus indicates that drawing on the pillars can bring everyday problems into curricular content, not necessarily tied to an emphasis on teaching programming languages.
APA, Harvard, Vancouver, ISO, and other styles
4

Andrade Cedeño, Rogger. "Módulo didáctico para controlar nivel y caudal de agua, mediante sistema SCADA, PLC y algoritmo PID." Revista de Investigaciones en Energía, Medio Ambiente y Tecnología: RIEMAT ISSN: 2588-0721 4, no. 2 (10 January 2020): 50. http://dx.doi.org/10.33936/riemat.v4i2.2196.

Full text
Abstract:
In this project, an educational module has been designed and built that allows understanding, in a practical way, the concepts of control systems, instrumentation, actuators, programmable logic controllers, SCADA systems, and control algorithms. The module's principal components are a main tank, reserve tank, piping system, fittings, differential pressure transducer, turbine flowmeter, rotameter, control valve, solenoid valves, servo valve, centrifugal pump, PLC, and a personal computer (PC). With this, automatic flow and water-level control were achieved through the implementation of a PID control algorithm. Finally, operation tests were performed, generating set-point changes and disturbances to observe the response of the process and assess the control system. Keywords: control system, instrumentation, flow control, level control, PLC, SCADA, PID controller.
APA, Harvard, Vancouver, ISO, and other styles
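The PID control algorithm at the heart of entry 4 reduces, in discrete time, to a three-term update. A minimal sketch driving a toy first-order tank model (gains, time step, and plant are illustrative, not the module's actual tuning):

```python
def pid_step(setpoint, measurement, state, kp, ki, kd, dt):
    """One update of a discrete PID controller.

    state is (integral, previous_error); returns (output, new_state).
    """
    error = setpoint - measurement
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Toy first-order level model dh/dt = u - h driven by the controller,
# stepping the water level toward a set point of 1.0.
level, state = 0.0, (0.0, 0.0)
for _ in range(5000):
    u, state = pid_step(1.0, level, state, kp=2.0, ki=0.5, kd=0.1, dt=0.01)
    level += (u - level) * 0.01  # outflow proportional to level
```

After the loop the level has settled at the set point; the integral term is what removes the steady-state error a pure P controller would leave.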
5

Llugsi Cañar, Ricardo, and Renato Escandón. "Implementación de un prototipo para captura y procesamiento digital de imágenes térmicas adquiridas desde un UAV." Enfoque UTE 9, no. 1 (30 March 2018): 1–11. http://dx.doi.org/10.29019/enfoqueute.v9n1.204.

Full text
Abstract:
This work focuses on the development of a prototype for capturing and processing thermal images from a UAV (Unmanned Aerial Vehicle). The system consists of two parts: an "air" stage installed on a DJI Phantom 3 Standard drone, and a "ground" stage programmed on a receiving PC. The air system acquires images using three elements: a Flir Lepton thermal camera, a Raspberry Pi board, and a GPS module (for georeferencing), while an ad-hoc network is used to transmit the images to the ground. On the PC, the information is analysed using histograms and edge detection (the Canny algorithm), leading to a filter that determines in which photographs the location of heat points can be clearly defined, thereby avoiding wasted time and processing power on detecting false positives in the images. Finally, to verify correct operation of the system, the prototype was tested under adverse weather conditions (fog) in the area of the Pululahua volcano.
APA, Harvard, Vancouver, ISO, and other styles
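The filtering idea in entry 5 (keep only frames where hot pixels coincide with strong edges) can be sketched without a real Canny implementation by using a plain gradient magnitude as a stand-in; the thresholds and synthetic frames below are invented:

```python
import numpy as np

def has_clear_hot_spot(img, temp_thresh=0.8, grad_thresh=0.3):
    """Decide whether a thermal frame shows a well-defined heat point.

    Stands in for the paper's histogram + Canny stage: the histogram
    tail must contain hot pixels, and those pixels must sit on strong
    edges (a diffuse haze has hot pixels but weak gradients).
    """
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    hot = img > temp_thresh          # histogram-tail test
    if not hot.any():
        return False
    return grad[hot].mean() > grad_thresh   # edge-strength test

# Synthetic frames: one with a compact hot spot, one uniformly warm.
frame = np.zeros((64, 64))
frame[30:34, 30:34] = 1.0
print(has_clear_hot_spot(frame))                    # → True
print(has_clear_hot_spot(np.full((64, 64), 0.9)))   # → False (diffuse)
```

The second frame is rejected even though its pixels are "hot", which is exactly the false-positive case the paper's filter is meant to discard.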
6

Peixoto, Renata de Moraes, Rodolfo de Moraes Peixoto, Kárita Cláudia Freitas Lidani, and Mateus Matiuzzi da Costa. "Genotipificação de isolados de Staphylococcus epidermidis provenientes de casos de mastite caprina." Ciência Rural 43, no. 2 (February 2013): 322–25. http://dx.doi.org/10.1590/s0103-84782013000200021.

Full text
Abstract:
To examine the dynamics of antimicrobial resistance on a rural property in the municipality of Santa Maria da Boa Vista, PE, 14 Staphylococcus epidermidis isolates from goats with subclinical mastitis were evaluated. The antimicrobial resistance profile was determined by the disc diffusion test. Genotyping was performed using the REP (Repetitive Extragenic Palindromic)-PCR marker with the RW3A primer, while the degrees of similarity and the clustering phenogram were established using the Sorensen-Dice (SD) coefficient and the UPGMA algorithm in the NTSYS-pc program, which identified 4 patterns among the 14 S. epidermidis isolates: eight in profile A, four in profile B, one in profile C, and one in profile D. Resistance to penicillin was observed in all groups; in groups A and C it was associated with resistance to lincomycin, and in group B with resistance to tetracycline.
APA, Harvard, Vancouver, ISO, and other styles
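Entry 6's genotyping rests on the Sørensen-Dice coefficient. A toy sketch of the similarity measure, with a greedy grouping standing in for the UPGMA dendrogram cut (the band patterns and the 0.8 cutoff are invented):

```python
def dice(a, b):
    """Sørensen–Dice similarity between two binary band patterns."""
    shared = sum(x and y for x, y in zip(a, b))
    return 2 * shared / (sum(a) + sum(b))

def group_isolates(patterns, cutoff=0.8):
    """Greedy single-pass grouping: an isolate joins the first profile
    whose founding pattern it matches at >= cutoff, otherwise it founds
    a new profile. A simplified stand-in for cutting a UPGMA dendrogram.
    """
    profiles = []
    for p in patterns:
        for prof in profiles:
            if dice(p, prof[0]) >= cutoff:
                prof.append(p)
                break
        else:
            profiles.append([p])
    return profiles

# Hypothetical REP-PCR presence/absence band patterns for 5 isolates.
bands = [
    (1, 1, 0, 1, 0), (1, 1, 0, 1, 0), (1, 1, 0, 1, 1),  # near-identical
    (0, 0, 1, 0, 1),                                    # distinct
    (1, 0, 1, 1, 1),                                    # distinct
]
groups = group_isolates(bands)
print(len(groups))  # → 3 genotype profiles
```

The real analysis builds a full average-linkage (UPGMA) tree from all pairwise Dice similarities; the greedy pass above only conveys the idea of collapsing near-identical fingerprints into shared profiles.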
7

Rodríguez Bustinza, Ricardo, and Ewar Mamani Churayra. "CONTROL DE LA VELOCIDAD EN TIEMPO REAL DE UN MOTOR DC CONTROLADO POR LÓGICA DIFUSA TIPO PD+I USANDO LABVIEW." Revista Cientifica TECNIA 21, no. 1 (7 April 2017): 5. http://dx.doi.org/10.21754/tecnia.v21i1.92.

Full text
Abstract:
This article presents an artificial-intelligence-based method for controlling a DC motor plant from a personal computer (PC), in which interacting hardware and software achieve real-time speed control of the DC motor using a fuzzy PD+I control algorithm. Data acquisition and identification of the DC motor's parameters were required for speed control, using a PCI NIDAQ 6024E data acquisition card whose interface runs in real time under the Real-Time Workshop (RTW); the data file is processed with Matlab's identification tool, IDENT. The prototype computer-controller system is designed using LabVIEW graphical programming, in this case with the Fuzzy Logic Control and Simulation Module toolkits. Real-time control of the system is carried out in the laboratory using a digital-to-analog converter (DAC) and an incremental encoder formed by two Hall-effect sensors, whose signals are processed from the NIDAQ's analog inputs by means of a frequency-to-voltage converter. The computer simulation results are verified experimentally and demonstrate that the designed control signal can make the prototype system's output follow the imposed references efficiently, with minimal overshoot and zero steady-state error. Keywords: DC motor, data acquisition, parameter identification, controller design and implementation.
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Haifeng, Mo Hai, and Wenxun Tang. "Prior Knowledge-Based Causal Inference Algorithms and Their Applications for China COVID-19 Analysis." Mathematics 10, no. 19 (30 September 2022): 3568. http://dx.doi.org/10.3390/math10193568.

Full text
Abstract:
Causal inference has become an important research direction in computing. Traditional methods have mainly used Bayesian networks to discover the causal effects between variables, but they face a trade-off: achieving accurate results is computationally expensive, with cost growing exponentially in the number of variables, while reducing the computing cost sacrifices accuracy. In this study, we use prior-knowledge iteration and time-series trend fitting between causal variables to resolve these limitations and discover bidirectional causal edges between the variables. We thereby obtain real causal graphs, establishing a more accurate causal model for the evaluation and calculation of causal effects. We present two new algorithms, the PC+ algorithm and the DCM algorithm. The PC+ algorithm addresses the problem that the traditional PC algorithm must either enumerate all Markov equivalence classes at a high computational cost or immediately output non-directional causal edges. In the PC+ algorithm, the causal tendency among some variables is analyzed via partial exhaustive analysis; by fixing the relatively certain causality as prior knowledge, a causal graph of higher accuracy is output at a low running cost. The DCM algorithm uses the d-separation strategy to improve the traditional CCM algorithm, which can only handle the pairwise fitting of variables and thus identifies indirect causality as direct. By using d-separation, our DCM algorithm achieves higher accuracy while following the basic criteria of Bayesian networks. We evaluate the proposed algorithms on the COVID-19 pandemic with experimental and theoretical analysis. The experimental results show that our improved algorithms are effective and efficient: compared to the exponential cost of the PC algorithm, the time complexity of the PC+ algorithm is reduced to a linear level, and the accuracies of the PC+ and DCM algorithms improve to different degrees, with the PC+ algorithm reaching 91% accuracy, much higher than the 33% of the PC algorithm.
APA, Harvard, Vancouver, ISO, and other styles
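For readers unfamiliar with the baseline entry 8 improves on, the first (skeleton) phase of the classic PC algorithm can be sketched as follows, with a fixed partial-correlation threshold standing in for the usual Fisher-z significance test and conditioning sets limited to size one:

```python
import numpy as np
from itertools import combinations

def partial_corr(C, i, j, k=None):
    """(Partial) correlation of variables i, j, conditioning on k."""
    if k is None:
        return C[i, j]
    num = C[i, j] - C[i, k] * C[j, k]
    den = np.sqrt((1 - C[i, k] ** 2) * (1 - C[j, k] ** 2))
    return num / den

def pc_skeleton(data, alpha=0.1):
    """Order-0/1 skeleton phase of the PC algorithm: start from the
    complete graph and delete edge (i, j) whenever i and j look
    independent marginally or given any single other variable.
    A sketch of the first stage only, without edge orientation."""
    n_vars = data.shape[1]
    C = np.corrcoef(data, rowvar=False)
    edges = {frozenset(e) for e in combinations(range(n_vars), 2)}
    for i, j in combinations(range(n_vars), 2):
        others = [k for k in range(n_vars) if k not in (i, j)]
        tests = [partial_corr(C, i, j)]
        tests += [partial_corr(C, i, j, k) for k in others]
        if any(abs(t) < alpha for t in tests):
            edges.discard(frozenset((i, j)))
    return edges

# Chain X -> Y -> Z: X and Z are correlated but independent given Y,
# so the X–Z edge should be removed while X–Y and Y–Z survive.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + 0.5 * rng.normal(size=5000)
z = y + 0.5 * rng.normal(size=5000)
skel = pc_skeleton(np.column_stack([x, y, z]))
print(sorted(tuple(sorted(e)) for e in skel))  # → [(0, 1), (1, 2)]
```

The full algorithm repeats this pruning with ever-larger conditioning sets, which is exactly where the exponential cost criticized in the abstract comes from.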
9

Aydin, K., and J. Singh. "Cloud Ice Crystal Classification Using a 95-GHz Polarimetric Radar." Journal of Atmospheric and Oceanic Technology 21, no. 11 (1 November 2004): 1679–88. http://dx.doi.org/10.1175/jtech1671.1.

Full text
Abstract:
Two algorithms are presented for ice crystal classification using 95-GHz polarimetric radar observables and air temperature (T). Both are based on a fuzzy logic scheme. Ice crystals are classified as columnar crystals (CC), planar crystals (PC), mixtures of PC and small- to medium-sized aggregates and/or lightly to moderately rimed PC (PSAR), medium- to large-sized aggregates of PC, or densely rimed PC, or graupel-like snow or small lumpy graupel (PLARG), and graupel larger than about 2 mm (G). The 1D algorithm makes use of Zh, Zdr, LDRhv, and T, while the 2D algorithm incorporates the three radar observables in pairs, (Zdr, Zh), (LDRhv, Zh), and (Zdr, LDRhv), plus the temperature T. The range of values for each observable or pair of observables is derived from extensive modeling studies conducted earlier. The algorithms are tested using side-looking radar measurements from an aircraft, which was also equipped with particle probes producing simultaneous and nearly collocated shadow images of cloud ice crystals. The classification results from both algorithms agreed very well with the particle images. The two algorithms were in agreement by 89% in one case and 97% in the remaining three cases considered here. The most effective observable in the 1D algorithm was Zdr, and in the 2D algorithm the pair (Zdr, Zh). LDRhv had negligible effect in the 1D classification algorithm for the cases considered here. The temperature T was mainly effective in separating columnar crystals from the rest. The advantage of the 2D algorithm over the 1D algorithm was that it significantly reduced the dependence on T in two out of the four cases.
APA, Harvard, Vancouver, ISO, and other styles
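The fuzzy-logic scheme underlying entry 9's classifiers can be illustrated with trapezoidal membership functions: each class scores the minimum of its membership grades over the observables, and the highest-scoring class wins. The class set is trimmed and the membership ranges below are invented for illustration; they are not the paper's:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c],
    falls on [c, d], zero outside."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def classify(zdr_db, temp_c):
    """Toy fuzzy classifier over two observables (Zdr and temperature):
    min-combine the grades per class, then pick the best class."""
    classes = {
        "planar":   min(trapezoid(zdr_db, 1.0, 2.0, 6.0, 8.0),
                        trapezoid(temp_c, -25, -20, -8, -5)),
        "columnar": min(trapezoid(zdr_db, 0.5, 1.5, 4.0, 6.0),
                        trapezoid(temp_c, -10, -8, -4, 0)),
        "graupel":  min(trapezoid(zdr_db, -1.0, -0.5, 0.5, 1.0),
                        trapezoid(temp_c, -30, -25, 5, 10)),
    }
    return max(classes, key=classes.get)

print(classify(zdr_db=3.0, temp_c=-15))  # → planar
```

The min/max structure is what makes the scheme robust to one observable being borderline: a class only wins if all of its observables are at least somewhat consistent with it.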
10

Han, Qing Yu, and Dong Jin. "The Design and Implementation of a New S-Curve Acceleration/Deceleration Algorithm." Advanced Materials Research 898 (February 2014): 851–54. http://dx.doi.org/10.4028/www.scientific.net/amr.898.851.

Full text
Abstract:
The acceleration can be controlled with an S-curve acceleration/deceleration algorithm during motion, reducing the impact on the system in the start and stop stages, which is why S-curves are widely used in CNC and robot systems. At present, most acceleration/deceleration algorithms run on a PC, relying on the strong computing ability of the CPU, which increases the PC's delay time. This paper therefore proposes a simple and efficient S-curve acceleration/deceleration algorithm in an FPGA. The algorithm effectively offloads the computational burden from the PC and achieves flexible control of the motion control system.
APA, Harvard, Vancouver, ISO, and other styles
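A jerk-limited S-curve of the kind entry 10 moves into an FPGA can be sketched as a floating-point reference model (all parameters are illustrative; a real FPGA block would compute this in fixed point):

```python
def s_curve_velocity(v_max, a_max, jerk, dt=1e-3):
    """Jerk-limited S-curve velocity ramp from rest to v_max.

    Acceleration ramps up at +jerk, holds at a_max, then ramps down at
    -jerk so that velocity meets v_max smoothly (no acceleration step
    at either end, which is what removes the mechanical impact).
    """
    v, a, samples = 0.0, 0.0, []
    while v < v_max:
        # velocity still to be gained if acceleration ramped to zero now
        if v + a * a / (2 * jerk) >= v_max:
            a = max(a - jerk * dt, 0.0)       # ramp acceleration down
        elif a < a_max:
            a = min(a + jerk * dt, a_max)     # ramp acceleration up
        v = min(v + a * dt, v_max)
        samples.append(v)
        if a == 0.0 and v < v_max:
            v = v_max                          # guard against rounding
            samples.append(v)
    return samples

prof = s_curve_velocity(v_max=100.0, a_max=50.0, jerk=200.0)
```

The profile is monotone and lands exactly on the target velocity; plotting `prof` shows the characteristic S shape that gives the method its name.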

Theses on the topic "Algoritmo PC"

1

Coller, Emanuela. "Analysis of the PC algorithm as a tool for the inference of gene regulatory networks: evaluation of the performance, modification and application to selected case studies." Doctoral thesis, Italy, 2013. http://hdl.handle.net/10449/23814.

Full text
Abstract:
The expansion of a Gene Regulatory Network (GRN) by finding additional causally related genes is of great importance for our knowledge of biological systems and therefore relevant for biomedical and biotechnological applications. The aim of this thesis work is the development and evaluation of a bioinformatic method for GRN expansion. The method, named PC-IM, is based on the PC algorithm, which discovers causal relationships starting from purely observational data. PC-IM adopts an iterative approach that overcomes the limitations of previous applications of PC to GRN discovery. It takes as input the prior knowledge of a GRN (represented by nodes and relationships) and gene expression data; the output is a list of genes which expands the known GRN. Each gene in the list is ranked by the frequency with which it appears causally relevant, normalized to the number of times it was possible to find it. Since each frequency value is associated with precision and sensitivity values calculated using the prior knowledge of the GRN, the method outputs those genes above the frequency value that optimizes precision and sensitivity (the cut-off frequency). To investigate the characteristics and performance of PC-IM, this thesis evaluates several parameters, such as the influence of the type and size of the input gene expression data, of the number of iterations, and of the type of GRN. A comparative analysis of PC-IM versus another recent expansion method (GENIES) has also been performed. Finally, PC-IM has been applied to expand two real GRNs of the model plant Arabidopsis thaliana.
APA, Harvard, Vancouver, ISO, and other styles
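PC-IM's frequency ranking (count how often a candidate gene appears causally relevant across iterations) can be caricatured with a resampling loop in which a plain correlation test stands in for the PC causal-discovery step; all data and thresholds below are invented:

```python
import numpy as np

def expansion_frequencies(expr, grn_genes, n_iter=50, thresh=0.4, seed=1):
    """Frequency-of-detection ranking in the spirit of PC-IM:
    repeatedly resample the expression matrix and record how often each
    candidate gene shows a strong association with the known-GRN genes.
    A correlation threshold replaces the real causal-discovery step."""
    rng = np.random.default_rng(seed)
    n_samples, n_genes = expr.shape
    candidates = [g for g in range(n_genes) if g not in grn_genes]
    hits = {g: 0 for g in candidates}
    for _ in range(n_iter):
        rows = rng.choice(n_samples, size=n_samples // 2, replace=False)
        C = np.corrcoef(expr[rows], rowvar=False)
        for g in candidates:
            if max(abs(C[g, k]) for k in grn_genes) > thresh:
                hits[g] += 1
    return {g: hits[g] / n_iter for g in candidates}

# Toy data: gene 2 is driven by GRN gene 0; gene 3 is pure noise.
rng = np.random.default_rng(0)
g0 = rng.normal(size=200)
g1 = rng.normal(size=200)
g2 = g0 + 0.3 * rng.normal(size=200)   # regulated by g0
g3 = rng.normal(size=200)              # unrelated
freq = expansion_frequencies(np.column_stack([g0, g1, g2, g3]),
                             grn_genes=[0, 1])
```

Gene 2 is detected in essentially every iteration while gene 3 almost never is, which is the separation the cut-off frequency in the thesis exploits.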
2

Hodáň, Ján. "Automatizace kontroly PC." Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-413297.

Full text
Abstract:
This thesis describes the design and implementation of automated, visualization-based computer checking. It begins by introducing computer checking and familiarizing the reader with its usual procedures, then presents other methods that can be applied to it. The main part of the work describes the basic skills related to controlling a computer. The last chapter explains how the visualization is implemented and how its success was evaluated. The result of the work is an application that visualizes the checking process and thereby makes it easier, faster, and automated.
APA, Harvard, Vancouver, ISO, and other styles
3

Kalisch, Markus. "Estimating high-dimensional dependence structures with the PC-algorithm." Zürich: ETH, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17783.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bernini, Matteo. « An efficient Hardware implementation of the Peak Cancellation Crest Factor Reduction Algorithm ». Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-206079.

Full text
Abstract:
An important component of the cost of a radio base station comes from the Power Amplifier driving the array of antennas. The cost can be split into capital and operational expenditure, due respectively to the high design and realization costs and the low energy efficiency of the Power Amplifier. Both cost components are related to the Crest Factor of the input signal. Lowering the average power level of the transmitted signal would reduce both costs, whereas a more energetic signal makes transmission more efficient by letting the receiver better distinguish the message from noise and interference. These opposing needs motivate the research and development of solutions that reduce the excursion of the signal without sacrificing its average power level. One of the algorithms addressing this problem is Peak Cancellation Crest Factor Reduction. This work documents the design of a hardware implementation of that method, targeting a possible future ASIC for Ericsson AB. SystemVerilog is the Hardware Description Language used for both the design and its verification, together with a MATLAB model used both to explore design choices and to validate the design against the simulation output. The two main goals of the design have been efficient hardware exploitation, aiming at a smaller area footprint on the integrated circuit, and the adoption of innovative solutions in the controlling part of the design, for example the management of the cancelling-pulse coefficients and the use of a time-division multiplexing strategy to further save chip area. In the contexts where both solutions can compete, the proposed one shows better results in terms of area and delay than the methods currently in use at Ericsson, and also provides suggestions and ideas for further improvements.
APA, Harvard, Vancouver, ISO, and other styles
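The peak-cancellation idea itself is compact: locate samples whose magnitude exceeds a threshold and subtract a scaled, band-limited pulse centred on each. A sketch with an illustrative Hann-windowed sinc pulse (not Ericsson's design):

```python
import numpy as np

def peak_cancel(signal, threshold, pulse):
    """One pass of peak cancellation: wherever |signal| exceeds the
    threshold, subtract a scaled, centred copy of the cancelling pulse
    so the peak is pulled down toward the threshold. pulse should be
    an odd-length, unit-peak, band-limited pulse."""
    out = signal.astype(complex)
    half = len(pulse) // 2
    for p in np.flatnonzero(np.abs(signal) > threshold):
        # complex excess above the threshold, preserving the phase
        excess = out[p] - threshold * out[p] / np.abs(out[p])
        lo, hi = max(0, p - half), min(len(out), p + half + 1)
        out[lo:hi] -= excess * pulse[half - (p - lo): half + (hi - p)]
    return out

n = 63
t = np.arange(n) - n // 2
pulse = np.hanning(n) * np.sinc(t / 4)   # unit peak at the centre

# Reduce the crest factor of a noisy complex baseband signal: peaks
# above the threshold are pulled down, at the cost of small sidelobes.
rng = np.random.default_rng(0)
sig = rng.normal(size=512) + 1j * rng.normal(size=512)
reduced = peak_cancel(sig, threshold=2.5, pulse=pulse)
```

Using a band-limited pulse instead of hard clipping is the point of the method: the correction stays inside the transmission band instead of splattering energy into adjacent channels.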
5

McClenney, Walter O. "Analysis of the DES, LOKI, and IDEA algorithms for use in an encrypted voice PC network." Thesis, Monterey, Calif.: Naval Postgraduate School; Springfield, Va.: available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA297919.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Schwanke, Ullrich. « Trigger and reconstruction farms in the HERA-B experiment and algorithms for a Third Level Trigger ». Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2000. http://dx.doi.org/10.18452/14565.

Full text
Abstract:
The HERA-$B$ experiment at Deutsches Elektronen-Synchrotron (DESY) Hamburg aims at investigating the physics of particles containing $b$ quarks. The experiment focusses on measuring CP violation in the system of neutral $B$ mesons. It is expected that the precise determination of the CP asymmetry in the channel $B^0(\bar{B}^0)\to J/\psi K_S^0$ will have an impact on the further development of the Standard Model of Elementary Particle Physics and cosmological theories. The HERA-$B$ experiment uses the proton beam of the HERA storage ring in fixed-target mode. $B$ hadrons are produced in pairs when protons from the beam halo interact with target nuclei. The interactions are recorded by a forward-spectrometer with roughly 600.000 readout channels. At the HERA-$B$ centre-of-mass energy of 42.6\,GeV, the $b\bar{b}$ cross section is only a tiny fraction of the total inelastic cross section. Only one in about 10$^6$ events contains $b$ quarks, which turns the selection of signal events into a particular challenge. The selection is accomplished by a four-stage data acquisition and trigger system reducing the event rate from 10\,MHz to about 20\,Hz. Besides custom-made electronics, several hundreds of PCs are used in the trigger system. The computers are arranged in two so-called PC farms with more than 200 processors each. The PC farms provide the computing capacity for trigger decisions and the prompt analysis of event data. One farm executes fast trigger programs with a computing time of 1--100\,ms per event. The other farm performs online reconstruction of the events before data are archived on tape. The computing time per event is in the range of several seconds. This thesis covers two topics. In the beginning, the technical implementation of the trigger and the reconstruction farm are described. 
In doing so, emphasis is put on the software systems which make calibration data available to the farms and which provide a centralised view on the results of the executing processes. The principal part of this thesis deals with algorithms for a Third Level Trigger. This trigger is to come into operation on the trigger farm together with existing programs. Processes of the type $B^0(\bar{B}^0)\to J/\psi X$ have a very clean signature when the $J/\psi$ decays to a $e^+e^-$ or $\mu^+\mu^-$ pair. The trigger system attempts to identify two unlike-sign leptons of the same flavour whose invariant mass matches the $J/\psi$. In later steps, the tracks are required to originate from a common vertex close to the target. It is assumed that these kinematic constraints are sufficient to pick out events of this type among the copious background processes. In contrast, the Third Level Trigger is to be applied to signal processes with fewer kinematic constraints. Such events occur for example when two $B$ mesons, which were created in a proton-target collision, decay semileptonically. The trigger system selects merely the two leptons which do not originate from a common vertex in this case. The Third Level Trigger has 100\,ms at its disposal to extract further criteria from the data which can serve to distinguish between signal and background events. This thesis investigates with the aid of Monte-Carlo simulations how the data of the experiment's silicon vertex detector can contribute to the decisions of a Third Level Trigger. The trigger aims at reconstructing tracks from the decay cascade of $B$ mesons in addition to the leptons selected by the preceding trigger levels. A fast pattern recognition for the vertex detector demonstrates that the reconstruction of all tracks and the application of trigger algorithms are possible within the given time slot of 100\,ms. 
The determination of track parameters in the target region exploits the Kalman-filter method to account for the multiple scattering of particles in the detector material. The application of this method is, however, made difficult by two facts. First, the momentum of the reconstructed tracks is not known. Second, the material distribution in the detector cannot be taken into consideration in detail due to timing limitations. Adequate approximations for the momentum and the material traversed by a particle help to achieve sufficient accuracy of the track parameters. The reconstructed tracks constitute the starting point of several trigger algorithms, whose suitability to select signal events is investigated. Our studies indicate that the reconstruction of tracks with large impact parameters is a more promising approach than a search for secondary vertices.
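The approximation strategy described above can be illustrated with a minimal sketch: a straight-line track state (position, slope) is transported from detector plane to plane, and multiple scattering enters only as extra process noise on the slope, with a fixed RMS scattering angle standing in for the unknown momentum dependence. The function name, the 2-parameter track model and the fixed `theta_ms` are illustrative assumptions, not the thesis' actual implementation.

```python
import numpy as np

def kalman_track_fit(z_planes, hits, sigma_hit, theta_ms):
    """Fit a straight-line track (position, slope) through detector planes.

    Multiple scattering at each plane is modelled as extra process noise
    on the slope with RMS angle theta_ms. In practice this angle depends
    on the unknown momentum, so a fixed estimate is used here as an
    approximation, mirroring the approach sketched in the abstract.
    """
    x = np.array([hits[0], 0.0])           # initial state: position, slope
    P = np.diag([sigma_hit**2, 1.0])       # generous initial slope covariance
    H = np.array([[1.0, 0.0]])             # we measure position only
    R = np.array([[sigma_hit**2]])
    for k in range(1, len(z_planes)):
        dz = z_planes[k] - z_planes[k - 1]
        F = np.array([[1.0, dz], [0.0, 1.0]])            # straight-line transport
        Q = np.array([[0.0, 0.0], [0.0, theta_ms**2]])   # scattering noise on slope
        x = F @ x
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ (np.array([hits[k]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x, P
```

With exact hits on a straight track, the fitted slope settles on the true value after a few planes; a larger `theta_ms` inflates the covariance `P` rather than biasing the estimate.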
7

Hoffmann, Gustavo André. « Study of the audio coding algorithm of the MPEG-4 AAC standard and comparison among implementations of modules of the algorithm ». Biblioteca Digital de Teses e Dissertações da UFRGS, 2002. http://hdl.handle.net/10183/1697.

Abstract:
Audio coding is used to compress digital audio signals, thereby reducing the number of bits needed to transmit or store an audio signal. This is useful when network bandwidth or storage capacity is very limited. Audio compression algorithms are based on an encoding and decoding process. In the encoding step, the uncompressed audio signal is transformed into a coded representation, thereby compressing the audio signal. The coded audio signal eventually needs to be restored (e.g. for playback) through decoding: the decoder receives the bitstream and reconverts it into an uncompressed signal. ISO-MPEG is a standard for high-quality, low-bit-rate video and audio coding. The audio part of the standard is composed of algorithms for high-quality, low-bit-rate audio coding, i.e. algorithms that reduce the original bit rate while guaranteeing high quality of the audio signal. The audio coding algorithms consist of MPEG-1 (with three different layers), MPEG-2, MPEG-2 AAC, and MPEG-4. This work presents a study of the MPEG-4 AAC audio coding algorithm. It also presents implementations of the AAC algorithm on different platforms and comparisons among them. The implementations are in C, in Intel Pentium assembly, in C on a DSP processor, and in HDL. Since each implementation has its own application niche, each one is valid as a final solution. A further purpose of this work is the comparison among these implementations, considering estimated costs, execution time, and the advantages and disadvantages of each one.
8

Saptari, Adi. « PC computer based algorithm for the selection of material handling equipment for a distribution warehouse based on least annual cost and operating parameters ». Ohio : Ohio University, 1990. http://www.ohiolink.edu/etd/view.cgi?ohiou1183473503.

9

Verbyla, Petras. « Network inference using independence criteria ». Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277912.

Abstract:
Biological systems are driven by complex regulatory processes. Graphical models play a crucial role in the analysis and reconstruction of such processes. It is possible to derive regulatory models using network inference algorithms from high-throughput data, for example from gene or protein expression data. A wide variety of network inference algorithms have been designed and implemented. Our aim is to explore the possibilities of using statistical independence criteria for biological network inference. The contributions of our work can be categorized into four sections. First, we provide a detailed overview of some of the most popular general independence criteria: distance covariance (dCov), kernel canonical correlation (KCC), kernel generalized variance (KGV) and the Hilbert-Schmidt Independence Criterion (HSIC). We provide easy-to-understand geometrical interpretations for these criteria. We also explicitly show the equivalence of dCov, KGV and HSIC. Second, we introduce a new criterion for measuring dependence based on the signal-to-noise ratio (SNRIC). SNRIC is significantly faster to compute than other popular independence criteria. SNRIC is an approximate criterion but becomes exact under many popular modelling assumptions, for example for data from an additive noise model. Third, we compare the performance of the independence criteria on biological experimental data within the framework of the PC algorithm. Since not all criteria are available in a version that allows for testing conditional independence, we propose and test an approach which relies on residuals and requires only an unconditional version of an independence criterion. Finally, we propose a novel method to infer networks with feedback loops. We use an MCMC sampler, which samples using a loss function based on an independence criterion. This allows us to find networks under very general assumptions, such as non-linear relationships, non-Gaussian noise distributions and feedback loops.
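The residual idea mentioned in the abstract can be sketched in a few lines: regress both variables on the conditioning set and feed the residuals to an unconditional dependence measure. Plain Pearson correlation stands in here for richer criteria such as HSIC; the function name and the linear regression are illustrative assumptions.

```python
import numpy as np

def residual_ci_corr(x, y, z):
    """Correlation of residuals after linearly regressing x and y on z.

    This turns an *unconditional* dependence measure (here plain Pearson
    correlation, standing in for criteria such as HSIC) into an
    approximate test of X independent of Y given Z, in the spirit of the
    residual approach the abstract describes.
    """
    Z = np.column_stack([np.asarray(z), np.ones(len(z))])   # design matrix with intercept
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]       # residual of x | z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]       # residual of y | z
    return float(np.corrcoef(rx, ry)[0, 1])
```

For a chain X → Z → Y, the residual correlation of X and Y given Z is close to zero while their raw correlation is not, so a PC-style search would remove the edge X–Y.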
10

Alsawaf, Anas. « Development of a PC-based cost engineering algorithm for capital intensive industries based on the methodology of total absorption standard costing ». Ohio : Ohio University, 1992. http://www.ohiolink.edu/etd/view.cgi?ohiou1176492634.


Books on the topic "Algoritmo PC"

1

Mundel, Marvin Everett. Measuring the productivity of commercial banks : Algorithms and PC programs. White Plains, N.Y : UNIPUB/Kraus International Publications, 1987.

2

Measuring total productivity in manufacturing organizations : Algorithms and PC programs. White Plains, N.Y : UNIPUB/Kraus International Publications, 1987.

3

Mundel, Marvin Everett. Measuring total productivity of manufacturing organizations : Algorithms and PC programs. White Plains, N.Y : UNIPUB/Kraus International Publications, 1987.

4

Mundel, Marvin Everett. Measuring the productivity of commercial banks : Algorithms and PC programs. White Plains, N.Y : UNIPUB/Kraus International Publications, 1987.

5

Bennane, Aomar. A TMS320/IBM PC coprocessor system for digital signal processing algorithms. Birmingham : University of Birmingham, 1987.

6

Drew, Wells, ed. Ray Tracing Creations : Generate 3D Photorealistic Images on the PC. 2nd ed. Corte Madera, CA : Waite Group Press, 1994.

7

Young, Chris, 1955-, ed. Ray Tracing Creations : Generate 3-D Photorealistic Images on the PC. Corte Madera, CA : Waite Group Press, 1993.

8

Mundel, Marvin Everett. The white-collar knowledge worker : Measuring and improving productivity and effectiveness : algorithms and PC programs. [Tokyo, Japan] : Asian Productivity Organization, 1989.

9

Mundel, Marvin Everett. The white-collar knowledge worker : Measuring and improving productivity and effectiveness : algorithms and PC programs. [Tokyo, Japan] : Asian Productivity Organization, 1989.

10

Rommelfanger, Heinrich. PC software FULPAL 2.0 : An interactive algorithm for solving multicriteria fuzzy linear programs controlled by aspiration levels. Frankfurt/Main : Johann Wolfgang Goethe-Universität Frankfurt, Fachbereich Wirtschaftswissenschaften, 1995.


Book chapters on the topic "Algoritmo PC"

1

Madsen, Anders L., Frank Jensen, Antonio Salmerón, Helge Langseth and Thomas D. Nielsen. « Parallelisation of the PC Algorithm ». In Advances in Artificial Intelligence, 14–24. Cham : Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24598-0_2.

2

Näher, Stefan, and Kurt Mehlhorn. « LEDA — A Library of Efficient Data Types and Algorithms ». In PC-Einsatz in der Hochschulausbildung, 105–8. Berlin, Heidelberg : Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-84839-1_17.

3

Alsuwat, Emad, Hatim Alsuwat, Marco Valtorta and Csilla Farkas. « Cyber Attacks Against the PC Learning Algorithm ». In ECML PKDD 2018 Workshops, 159–76. Cham : Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13453-2_13.

4

Li, Jiuyong, Lin Liu and Thuc Duy Le. « Local Causal Discovery with a Simple PC Algorithm ». In Practical Approaches to Causal Relationship Exploration, 9–21. Cham : Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-14433-7_2.

5

Li, Yushi, and George Baciu. « PC-OPT : A SfM Point Cloud Denoising Algorithm ». In Lecture Notes in Computer Science, 280–91. Cham : Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62362-3_25.

6

Zhang, Jiawan, Jizhou Sun, Yi Zhang, Qianqian Han and Zhou Jin. « A Parallel Volume Splatting Algorithm Based on PC-Clusters ». In Computational Science and Its Applications – ICCSA 2004, 272–79. Berlin, Heidelberg : Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24709-8_29.

7

Cui, Ruifei, Perry Groot and Tom Heskes. « Copula PC Algorithm for Causal Discovery from Mixed Data ». In Machine Learning and Knowledge Discovery in Databases, 377–92. Cham : Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46227-1_24.

8

Denysiuk, Roman, Lino Costa, Isabel Espírito Santo and José C. Matos. « MOEA/PC : Multiobjective Evolutionary Algorithm Based on Polar Coordinates ». In Lecture Notes in Computer Science, 141–55. Cham : Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-15934-8_10.

9

Noor, Fazal, and Syed Misbahuddin. « Using MPI on PC Cluster to Compute Eigenvalues of Hermitian Toeplitz Matrices ». In Algorithms and Architectures for Parallel Processing, 313–23. Berlin, Heidelberg : Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13119-6_28.

10

Doan, Viet Hung, Nam Thoai and Nguyen Thanh Son. « An Adaptive Space-Sharing Scheduling Algorithm for PC-Based Clusters ». In Modeling, Simulation and Optimization of Complex Processes, 225–34. Berlin, Heidelberg : Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-79409-7_14.


Conference papers on the topic "Algoritmo PC"

1

Fontinele, Alexandre, and André Soares. « Um Novo Algoritmo IA-RSA Ciente de Imperfeições de Camada Física para Redes Ópticas Elásticas ». In XXX Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2017. http://dx.doi.org/10.5753/ctd.2017.3458.

Abstract:
This paper proposes a new impairment-aware routing and spectrum assignment (IA-RSA) algorithm for elastic optical networks. The goal of the proposed algorithm is to reduce the blocking probability caused by the degradation of transmission quality when new circuits are established. The proposed algorithm is compared with other IA-RSA algorithms: Modified Dijkstra Path Computation (MD-PC) and K-Shortest Path Computation (KS-PC). Simulation results show that the proposed algorithm outperforms MD-PC and KS-PC in terms of i) circuit blocking probability, ii) bandwidth blocking probability, iii) fairness in serving different source-destination node pairs, and iv) fairness in serving different bandwidths, for the EON and NSFNet topologies.
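For context, the spectrum-assignment step that RSA heuristics build on can be sketched as a first-fit search for contiguous free slots along a candidate route. This is a generic textbook baseline under assumed data structures, not the impairment-aware algorithm the paper proposes.

```python
def first_fit_spectrum(link_masks, demand_slots):
    """First-fit spectrum assignment for an elastic optical network.

    `link_masks[l][s]` is True when slot s is occupied on link l of the
    candidate route. A demand needs `demand_slots` *contiguous* slots
    that are free on *every* link (the contiguity and continuity
    constraints); the lowest-index block is chosen. Returns the start
    slot, or None if the demand would be blocked.
    """
    n_slots = len(link_masks[0])
    run = 0
    for s in range(n_slots):
        if any(mask[s] for mask in link_masks):   # slot busy on some link
            run = 0
            continue
        run += 1
        if run == demand_slots:
            return s - demand_slots + 1           # start of the free block
    return None                                   # demand is blocked
```

Blocking probability, the metric the abstract reports, is then simply the fraction of demands for which such a search fails over all candidate routes.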
2

Ferreira, Tamara, Joslaine Cristina Jeske de Freitas, Marcos Wagner Souza Ribeiro and Eliane Raimann. « Desenvolvimento do Pensamento Computacional na Ciência da Computação - uma Questão de Gênero ? » In Workshop de Informática na Escola. Sociedade Brasileira de Computação, 2019. http://dx.doi.org/10.5753/cbie.wie.2019.944.

Abstract:
Computational Thinking (CT) is understood as the thought process involved in formulating a problem and expressing its solution in such a way that it can be carried out effectively. The goal of this work is to analyse the level of CT development among men and women when they enter a computing degree programme, and also during the course, in order to determine whether CT development may influence entry into computing programmes and whether the CT index really grows during the years of study. To this end, a questionnaire was developed based on the five dimensions of CT (abstraction, generalisation, modularity, algorithms and decomposition). The results showed that performance is roughly equal between men and women.
3

Militky, Jiri, Milan Meloun and Karel Kupka. « Chemometrical package for PC ». In Proceedings of the First Scientific Meeting of the IASE. International Association for Statistical Education, 1993. http://dx.doi.org/10.52041/srap.93312.

Abstract:
Chemometrics is a relatively young discipline ranging over chemistry, mathematical statistics and informatics. An important part of it comprises methods for the extraction of relevant information from chemical experiments. These methods make use of selected computer algorithms of mathematical statistics, which are discussed, e.g., in Meloun et al. (1992) and Militky (1989). Some chemometrical computations can be performed by general statistical packages, namely SYSTAT, SAS and STATGRAPHICS. Extensive tests and studies of these and other packages have shown, however, that in many cases they neither support a researcher with suitable statistical methods nor are their numerical algorithms reliable enough (see Militky, 1990).
4

Varney, Doug. « Adequacy of checksum algorithms for computer virus detection ». In Proceedings of the 1990 ACM SIGSMALL/PC symposium. New York, New York, USA : ACM Press, 1990. http://dx.doi.org/10.1145/99412.99494.

5

Kulkarni, A. D., and G. M. Whitson. « Self organizing neural networks with a split/merge algorithm ». In Proceedings of the 1990 ACM SIGSMALL/PC symposium. New York, New York, USA : ACM Press, 1990. http://dx.doi.org/10.1145/99412.99485.

6

« Summarizing Genome-wide Phased Genotypes using Phased PC Plots ». In International Conference on Bioinformatics Models, Methods and Algorithms. SCITEPRESS - Science and Technology Publications, 2014. http://dx.doi.org/10.5220/0004793501300135.

7

Martin, William, and Cassandra Telenko. « Finding Causality in Socio-Technical Systems : A Comparison of Bayesian Network Structure Learning Algorithms ». In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-67414.

Abstract:
A wide number of Bayesian Network (BN) structure learning algorithms have been developed for a variety of applications. The purpose of this research is to shed light on which of these BN structure learning algorithms work best with small, amalgamated socio-technical datasets in an attempt to better understand such systems and improve their design. BN structure learning algorithms have not been widely used for socio-technical problems due to the small and disparate natures of the data describing such systems. This research tested four widely used learning algorithms given two test cases: a simulated ALARM network data set as a baseline and a novel socio-technical network data set combining Divvy bike’s bike share data and Chicago weather data as the more challenging design case. After testing the K2, PC, Sparse Candidate Algorithm (SCA), and Min-Max Hill Climbing (MMHC) algorithm, results indicate that all of the algorithms tested are capable of giving insight into the novel dataset’s most likely causal structure given the real socio-technical data. It should be noted that the convergence with the real world socio-technical data was significantly slower than with the simulated ALARM network dataset. The conditional independence (PC) algorithm also exhibited an interesting pattern in that it diverged farther away from the novel socio-technical network’s most likely structure when given very large datasets, preferring a denser network with more edges. The resulting network structures from all of the algorithms suggest that an opportunity exists to increase ridership by women during commuting hours in the Divvy bike program.
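As a rough illustration of the constraint-based family compared above, the skeleton phase of the PC algorithm can be sketched with Fisher-z partial-correlation tests on continuous data. This is a toy sketch (no edge orientation, fixed threshold, Gaussian assumptions), not any of the paper's tested implementations.

```python
import numpy as np
from itertools import combinations

def pc_skeleton(data, z_crit=2.58):
    """Toy skeleton phase of the PC algorithm on continuous data.

    Starts from a complete graph and deletes edge i-j as soon as a
    Fisher-z partial-correlation test accepts independence of i and j
    given some subset S of i's other neighbours, growing |S| level by
    level. Returns the surviving undirected edges as (i, j) pairs.
    """
    n, d = data.shape
    adj = {i: set(range(d)) - {i} for i in range(d)}
    corr = np.corrcoef(data, rowvar=False)

    def partial_corr(i, j, S):
        idx = [i, j] + list(S)
        P = np.linalg.pinv(corr[np.ix_(idx, idx)])   # precision of the submatrix
        return -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])

    level = 0
    while any(len(adj[i]) - 1 >= level for i in range(d)):
        for i in range(d):
            for j in sorted(adj[i]):
                if j < i:
                    continue
                for S in combinations(sorted(adj[i] - {j}), level):
                    r = np.clip(partial_corr(i, j, S), -0.9999, 0.9999)
                    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(S) - 3)
                    if abs(z) < z_crit:          # independence accepted: drop edge
                        adj[i].discard(j)
                        adj[j].discard(i)
                        break
        level += 1
    return {(i, j) for i in range(d) for j in adj[i] if i < j}
```

On data from a chain X0 → X1 → X2, the edge X0–X2 is removed once the test conditions on X1, leaving the two true adjacencies.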
8

Tassa, Tamir, Tal Grinshpoun and Avishai Yanay. « A Privacy Preserving Collusion Secure DCOP Algorithm ». In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California : International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/663.

Abstract:
In recent years, several studies proposed privacy-preserving algorithms for solving Distributed Constraint Optimization Problems (DCOPs). All of those studies assumed that agents do not collude. In this study we propose the first privacy-preserving DCOP algorithm that is immune against coalitions, under the assumption of honest majority. Our algorithm -- PC-SyncBB -- is based on the classical Branch and Bound DCOP algorithm. It offers constraint, topology and decision privacy. We evaluate its performance on different benchmarks, problem sizes, and constraint densities. We show that achieving security against coalitions is feasible. As all existing privacy-preserving DCOP algorithms base their security on assuming solitary conduct of the agents, we view this study as an essential first step towards lifting this potentially harmful assumption in all those algorithms.
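A centralised toy analogue of the Branch and Bound traversal underlying SyncBB can be sketched as follows; the distributed message passing and all of PC-SyncBB's privacy machinery are omitted, and the monotone `cost` interface is an illustrative assumption.

```python
def branch_and_bound(domains, cost):
    """Depth-first branch and bound over complete assignments.

    `domains[v]` lists the values of variable v; `cost(partial)` must
    return the total cost of all constraints whose variables are already
    assigned, so it never decreases as the assignment grows. That
    monotonicity is what makes pruning against the incumbent sound.
    """
    best = [float("inf"), None]

    def search(assign):
        c = cost(assign)
        if c >= best[0]:
            return                      # prune: cannot beat the incumbent
        if len(assign) == len(domains):
            best[0], best[1] = c, dict(assign)
            return
        v = len(assign)                 # fixed variable ordering, as in SyncBB
        for val in domains[v]:
            assign[v] = val
            search(assign)
            del assign[v]

    search({})
    return best[1], best[0]
```

In the distributed setting each agent owns one variable and the partial assignment travels between agents as a token; here the recursion plays that role.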
9

Martinec, Dan, and Marek Bundzel. « Evolutionary algorithms and reinforcement learning in experiments with slot cars ». In 2013 International Conference on Process Control (PC). IEEE, 2013. http://dx.doi.org/10.1109/pc.2013.6581401.

10

Wakatani, Akiyoshi. « PNN Algorithm for PC Grid System ». In International Conference on Software Engineering Advances (ICSEA 2007). IEEE, 2007. http://dx.doi.org/10.1109/icsea.2007.54.


Organization reports on the topic "Algoritmo PC"

1

Davidson, J. R. Monte Carlo tests of the ELIPGRID-PC algorithm. Office of Scientific and Technical Information (OSTI), April 1995. http://dx.doi.org/10.2172/52637.

2

Searcy, Stephen W., and Kalman Peleg. Adaptive Sorting of Fresh Produce. United States Department of Agriculture, August 1993. http://dx.doi.org/10.32747/1993.7568747.bard.

Abstract:
This project includes two main parts: development of a "Selective Wavelength Imaging Sensor" and an "Adaptive Classifier System" for adaptive imaging and sorting of agricultural products, respectively. Three different technologies were investigated for building a selectable wavelength imaging sensor: diffraction gratings, tunable filters and linear variable filters. Each technology was analyzed and evaluated as the basis for implementing the adaptive sensor. Acousto-optic tunable filters were found to be most suitable for the selective wavelength imaging sensor. Consequently, a selectable wavelength imaging sensor was constructed and tested using the selected technology. The sensor was tested and algorithms for multispectral image acquisition were developed. A high-speed inspection system for fresh-market carrots was built and tested. It was shown that a combination of efficient parallel processing of a DSP and a PC-based host CPU, in conjunction with a hierarchical classification system, yielded an inspection system capable of handling 2 carrots per second with a classification accuracy of more than 90%. The adaptive sorting technique was extensively investigated and conclusively demonstrated to reduce misclassification rates in comparison to conventional non-adaptive sorting. The adaptive classifier algorithm was modeled and reduced to a series of modules that can be added to any existing produce sorting machine. A simulation of the entire process was created in Matlab using a graphical user interface technique to promote the accessibility of the difficult theoretical subjects. Typical grade classifiers based on k-Nearest Neighbor techniques and linear discriminants were implemented. The sample histogram, estimating the cumulative distribution function (CDF), was chosen as a characterizing feature of prototype populations, whereby the Kolmogorov-Smirnov statistic was employed as a population classifier.
Simulations were run on artificial data with two dimensions, four populations and three classes. A quantitative analysis of the adaptive classifier's dependence on population separation, training set size, and stack length determined optimal values for the different parameters involved. The technique was also applied to a real produce sorting problem, e.g. an automatic machine for sorting dates by machine vision in an Israeli date packinghouse. Extensive simulations were run on actual sorting data of dates collected over a 4-month period. In all cases, the results showed a clear reduction in classification error by using the adaptive technique versus non-adaptive sorting.
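The population classifier described in the abstract, empirical CDFs compared via the Kolmogorov-Smirnov statistic, can be written in a few lines. The prototype names and one-dimensional features below are illustrative assumptions, not the project's actual feature set.

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical
    gap between the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])                 # all potential jump points
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

def classify_population(sample, prototypes):
    """Assign `sample` to the prototype population whose empirical CDF
    is closest in KS distance, echoing the population classifier the
    abstract describes."""
    return min(prototypes, key=lambda name: ks_distance(sample, prototypes[name]))
```

In an adaptive-sorting loop, the prototype samples would be refreshed from recently labelled items, which is what lets the classifier track drift between batches.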