Dissertations on the topic "Cluster monitoring"
Consult the top 32 dissertations for research on the topic "Cluster monitoring".
Worm, Stefan. „Monitoring of large-scale Cluster Computers“. Master's thesis, Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200700032.
Continuous monitoring of a computer is essential for staying informed about its current state. This is trivial when sitting directly in front of the machine, but observing a computer remotely is no longer as straightforward, and it becomes harder still when a large number of machines must be monitored. Since monitoring always generates some network load and some load on the monitored machine itself, it is important to keep these effects as small as possible. Especially when many computers are combined into a powerful cluster, the monitoring solution must work as efficiently as possible and must not disturb the actual work of the supercomputer. The main goals of this thesis are therefore analyses to ensure the scalability of the monitoring solution for a large computer cluster, as well as a practical proof of its functionality. First, monitoring is placed in the context of the overall operation of a large computer system. Then methods and solutions are presented that are suitable, in a general scenario, for carrying out the entire monitoring process as efficiently and scalably as possible. The thesis further discusses which lessons from operating an existing cluster can be applied to the operation of a new, more powerful system in order to guarantee its function as well as possible. Building on this, a selection is made of which application, out of a set of existing solutions, is particularly suitable for monitoring the new cluster, taking the specific situation into account, for example the use of InfiniBand as the interconnect. In the course of this, additional software was developed that can read and process a wide range of status information from InfiniBand ports, independent of the hardware vendor. This functionality, which was not previously available among free monitoring applications, was implemented exemplarily for the chosen monitoring software. Finally, the influence of the monitoring activities on the actual applications of the cluster was of interest. The self-developed plugin and a selection of typical monitoring metrics were used as examples to investigate the influence on the CPU and the network. It was shown that no impairment of the actual application is to be expected for typical monitoring intervals, and that only for atypically short intervals was a small influence observed at all.
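The vendor-independent InfiniBand readout described in this abstract is specific to the thesis, but the general idea can be illustrated with a minimal sketch that polls the generic port counters the Linux InfiniBand stack exposes under /sys/class/infiniband. This is not the thesis's plugin; the device name "mlx5_0", the port number and the polling interval are illustrative assumptions.

```python
import time
from pathlib import Path

SYSFS_IB = Path("/sys/class/infiniband")

def read_port_counters(device: str, port: int) -> dict:
    """Read the generic counters of one InfiniBand port from sysfs."""
    counter_dir = SYSFS_IB / device / "ports" / str(port) / "counters"
    counters = {}
    for entry in counter_dir.iterdir():
        try:
            counters[entry.name] = int(entry.read_text().strip())
        except (ValueError, OSError):
            pass  # skip counters that cannot be read as integers
    return counters

def poll(device: str = "mlx5_0", port: int = 1, interval: float = 60.0):
    """Print byte deltas per interval, e.g. as input for a monitoring plugin."""
    previous = read_port_counters(device, port)
    while True:
        time.sleep(interval)
        current = read_port_counters(device, port)
        delta = {k: current[k] - previous.get(k, 0) for k in current}
        # port_xmit_data / port_rcv_data count 32-bit words on most HCAs
        print(f"tx_bytes={delta.get('port_xmit_data', 0) * 4} "
              f"rx_bytes={delta.get('port_rcv_data', 0) * 4}")
        previous = current

if __name__ == "__main__":
    poll()
```

The long polling interval mirrors the abstract's observation that typical monitoring intervals leave the applications undisturbed.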
Worm, Stefan, and Torsten Mehlan. „Monitoring of large-scale Cluster Computers“. [S.l. : s.n.], 2007.
Bucciarelli, Mark. „Cluster sampling methods for monitoring route-level transit ridership“. Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/13485.
Bank, Mathias. „AIM - A Social Media Monitoring System for Quality Engineering“. Doctoral thesis, Universitätsbibliothek Leipzig, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-115894.
In recent years the World Wide Web has changed dramatically. A few years ago it was primarily an information source in which only a small fraction of users could publish content; it has since evolved into a communication platform in which every user can participate actively. The resulting volume of data touches every aspect of daily life, including quality topics. Analysing these data promises to improve quality assurance measures considerably, because it makes it possible to address topics that are hard to measure with classical sensors. The systematic and reproducible analysis of user-generated data, however, requires adapting existing tools and developing new algorithms specific to social media. This thesis therefore introduces a completely new social media monitoring system with which an analyst can examine thousands of user contributions with minimal time effort. Applying the system has revealed several advantages that make it possible to identify the customer-driven definition of "quality".
Neema, Isak. „Surveying and monitoring crimes in Namibia through the likelihood based cluster analysis“. Thesis, University of Reading, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.518226.
Živčák, Adam. „Správa Raspberry Pi 4 clusteru pomocí Nix“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445519.
Chen, Yajuan. „Cluster-Based Profile Monitoring in Phase I Analysis“. Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/46810.
Ph. D.
Chan, Sik-foon Joyce. „Application of cluster analysis to identify sources of particulate matter in Hong Kong“. Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B1470920X.
Tedder, O. W. S. „Monitoring the spin environment of coupled quantum dots : towards the deterministic generation of photonic cluster states“. Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10049622/.
Yang, Weishuai. „Scalable and effective clustering, scheduling and monitoring of self-organizing grids“. Diss., Online access via UMI:, 2008.
Somon, Bertille. „Corrélats neuro-fonctionnels du phénomène de sortie de boucle : impacts sur le monitoring des performances“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAS042/document.
The ongoing technological transformations occurring in aeronautics have profoundly changed the interactions between humans and machines. Systems are more and more complex, automated and opaque. Several tragedies have reminded us that the supervision of those systems by human operators is still a challenge. In particular, evidence has shown that automation has driven operators away from the control loop of the system, creating an out-of-the-loop phenomenon (OOL). This phenomenon is characterized by a decrease in situation awareness and vigilance, but also by complacency and over-reliance towards automated systems. These difficulties have been shown to result in a degradation of the operator's performance. Thus, the OOL phenomenon is a major issue for improving human-machine interactions. Even though it has been studied for several decades, the OOL is still difficult to characterize, and even more so to predict. The aim of this thesis is to define how cognitive neuroscience theories, such as performance monitoring, can be used to better characterize the OOL phenomenon and the operator's state, particularly through physiological measures. Consequently, we used electroencephalographic (EEG) activity to try to identify markers and/or precursors of the supervision activity during system monitoring. In a first step we evaluated the error-detection or performance-monitoring activity through standard laboratory tasks with varying levels of difficulty. We performed two EEG studies showing that: (i) the performance-monitoring activity emerges both for the detection of our own errors and during the supervision of another agent, be it a human or an automated system, and (ii) the performance-monitoring activity is significantly decreased by increasing task difficulty. These results led us to develop another experiment to assess the brain activity associated with system supervision in an ecological environment, resembling everyday aeronautical system monitoring. Thanks to adapted signal-processing techniques (e.g. trial-by-trial time-frequency decomposition), we were able to show that there is: (i) a fronto-central θ activity time-locked to the system's decision, similar to the one obtained in laboratory conditions, (ii) a decrease in overall supervision activity time-locked to the system's decision, and (iii) a specific decrease of monitoring activity for errors. In this thesis, several EEG measures have been used in order to adapt to the context at hand. As a perspective, we have developed a final study aiming at characterizing the evolution of the monitoring activity during the OOL. Finding markers of this degradation would allow its emergence to be monitored, or even predicted.
Komaragiri, Shalini Sushmitha. „A SAG monitoring device based on a cluster of code-based GPS receivers : a thesis presented to the faculty of the Graduate School, Tennessee Technological University /“. Click to access online, 2009. http://proquest.umi.com/pqdweb?index=0&did=2000377771&SrchMode=1&sid=2&Fmt=6&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1277472835&clientId=28564.
Boileau, Donald. „Modélisation spatio-temporelle pour la détection d’événements de sécurité publique à partir d’un flux Twitter“. Master's thesis, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/10241.
Abstract: Twitter is a social media platform that is very popular in North America, giving law enforcement agencies an opportunity to detect events of public interest. Twitter messages (tweets) tied to an event often contain street names indicating where the event takes place, which can be used to infer the event's geographical coordinates in real time. Many commercial software tools are available to monitor social media. The performance of these tools could be greatly improved with a larger sample of tweets, a sorting mechanism to identify pertinent events more quickly, and a measure of the reliability of the detected events. The goal of this master's thesis is to detect, from a public Twitter stream, events relevant to the public safety of a territory, automatically and with an acceptable level of reliability. To achieve this objective, a computer model based on four components has been developed: a) capture of public tweets based on keywords with the application of a geographic filter, b) natural language processing of the text of these tweets, use of a street gazetteer to identify tweets that can be localized, and geocoding of tweets based on street names and intersections, c) a spatio-temporal method to form tweet clusters, and d) event detection by isolating clusters containing at least two tweets treating the same subject. This research project differs from existing scientific research as it combines natural language processing, search and geocoding of toponyms based on a street gazetteer, the creation of clusters using geomatics, and the identification of event clusters based on common tweets to detect public safety events in a public Twitter stream. The application of the model to the 90,347 tweets collected for the Toronto-Niagara region in Ontario, Canada resulted in the identification and geocoding of 1,614 tweets and the creation of 172 clusters, of which 79 event clusters contain at least two tweets having the same subject, showing a reliability rate of 45.9%.
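As an illustration of the clustering and event-detection steps described in this abstract (group geocoded tweets in space, keep clusters where at least two tweets treat the same subject), here is a minimal sketch in which DBSCAN with a haversine metric stands in for the thesis's spatio-temporal method; the 500 m radius, the field names and the sample data are illustrative assumptions, not the author's parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

EARTH_RADIUS_M = 6_371_000.0

def detect_event_clusters(tweets, radius_m=500.0, min_tweets=2):
    """tweets: list of dicts with 'lat', 'lon' (degrees) and 'topic'.
    Return clusters containing at least `min_tweets` tweets sharing a topic."""
    coords = np.radians([[t["lat"], t["lon"]] for t in tweets])
    labels = DBSCAN(eps=radius_m / EARTH_RADIUS_M,
                    min_samples=min_tweets,
                    metric="haversine").fit_predict(coords)
    events = []
    for label in set(labels) - {-1}:          # -1 marks noise points
        members = [t for t, l in zip(tweets, labels) if l == label]
        topics = [t["topic"] for t in members]
        # keep the cluster only if one topic occurs at least `min_tweets` times
        if max(topics.count(topic) for topic in set(topics)) >= min_tweets:
            events.append(members)
    return events

if __name__ == "__main__":
    sample = [
        {"lat": 43.6532, "lon": -79.3832, "topic": "fire"},
        {"lat": 43.6535, "lon": -79.3830, "topic": "fire"},
        {"lat": 43.7000, "lon": -79.4000, "topic": "traffic"},
    ]
    print(len(detect_event_clusters(sample)))   # -> 1 detected event cluster
```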
Agne, Arvid. „Provisioning, Configuration and Monitoring of Single-board Computer Clusters“. Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97853.
Liv, Jakob, and Fredrik Nygren. „Lastbalanseringskluster : En studie om operativsystemets påverkan på lastbalanseraren“. Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-36574.
This report contains a study of an operating system's impact on the load balancer HAProxy. The study was performed in an experimental environment with four virtual clients for testing, one load balancer and three web server nodes connected to the load balancer. The operating system was the focus of the study: the load on the load balancer's hardware, the response time, the number of connections and the maximum number of connections per second were examined. The operating systems tested were Ubuntu 10.04, CentOS 6.5, FreeBSD 9.1 and OpenBSD 5.5. The results show that the load on the hardware and the response time are almost identical on all operating systems, with the exception of OpenBSD, where the conditions required to run the hardware tests could not be achieved. FreeBSD was the operating system able to manage the highest number of connections, along with CentOS; Ubuntu turned out to be more limited and OpenBSD was very limited. FreeBSD also managed the highest number of connections per second, followed by Ubuntu, CentOS and finally OpenBSD, which turned out to be the worst performer.
Palomino, Lizeth Vargas. „Técnicas de inteligência artificial aplicadas ao método de monitoramento de integridade estrutural baseado na impedância eletromecânica para monitoramento de danos em estruturas aeronáuticas“. Universidade Federal de Uberlândia, 2012. https://repositorio.ufu.br/handle/123456789/14726.
The basic concept of impedance-based structural health monitoring is measuring the variation of the electromechanical impedance of the structure caused by the presence of damage, by using patches of piezoelectric material bonded on the surface of the structure (or embedded into it). The measured electrical impedance of the PZT patch is directly related to the mechanical impedance of the structure, which is why the presence of damage can be detected by monitoring the variation of the impedance signal. In order to quantify damage, a metric is specially defined which allows a characteristic scalar value to be assigned to the fault. This study initially evaluates the influence of environmental conditions on the impedance measurement, such as temperature, magnetic fields and ionic environments. The results show that magnetic fields do not influence the impedance measurement and that the ionic environment does influence the results; however, when the sensor is shielded, the effect of the ionic environment is significantly reduced. The influence of the sensor geometry has also been studied. It has been established that the shape of the PZT patch (rectangular or circular) has no influence on the impedance measurement; however, the position of the sensor is an important issue for correctly detecting damage. This work also presents the development of a low-cost portable impedance-measuring system that automatically measures and stores data from 16 PZT patches without human intervention. One fundamental aspect of this work is the characterization of the damage type from the various impedance signals collected. In this sense, the artificial intelligence techniques known as neural networks and fuzzy cluster analysis were tested for classifying damage in aircraft structures, obtaining satisfactory results. A final contribution of the present work is the study of the performance of the electromechanical impedance-based structural health monitoring technique for detecting damage in structures under dynamic loading, for which encouraging results were obtained.
Doctorate in Mechanical Engineering
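Impedance-based structural health monitoring, as summarized in the preceding abstract, compares a measured impedance signature against a healthy baseline and condenses the deviation into a scalar damage metric. The thesis does not prescribe the particular metric used below; the root-mean-square deviation (RMSD) shown here is simply one commonly used choice, applied to made-up synthetic signatures.

```python
import numpy as np

def rmsd_damage_metric(baseline: np.ndarray, measured: np.ndarray) -> float:
    """Root-mean-square deviation (in percent) between two impedance
    signatures sampled on the same frequency grid."""
    baseline = np.asarray(baseline, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return 100.0 * np.sqrt(np.sum((measured - baseline) ** 2)
                           / np.sum(baseline ** 2))

if __name__ == "__main__":
    freqs = np.linspace(30e3, 40e3, 400)               # 30-40 kHz sweep (synthetic)
    healthy = np.sin(freqs / 1e3) + 2.0                # baseline signature
    damaged = healthy + 0.05 * np.random.default_rng(0).standard_normal(400)
    print(f"RMSD = {rmsd_damage_metric(healthy, damaged):.2f} %")
```

A larger RMSD indicates a larger deviation from the baseline and, in this kind of scheme, a higher likelihood of damage near the sensor.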
Petersson, Andreas. „A tool for monitoring resource usage in large scale supercomputing clusters“. Thesis, Linköpings universitet, PELAB - Laboratoriet för programmeringsomgivningar, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-75435.
Terrell, Thomas. „Structural health monitoring for damage detection using wired and wireless sensor clusters“. Master's thesis, University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5055.
ID: 029810361; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (M.S.C.E.)--University of Central Florida, 2011; Includes bibliographical references (p. 102-114).
M.S.C.E.
Masters
Civil, Environmental and Construction Engineering
Engineering and Computer Science
Civil Engineering
Marshall, J. Brooke. „Prospective Spatio-Temporal Surveillance Methods for the Detection of Disease Clusters“. Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/29639.
Ph. D.
Ivars Camañez, Vicente-José. „TDP-Shell: Entorno para acoplar gestores de colas y herramientas de monitorización“. Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/96251.
Nowadays distributed applications are executed on computer clusters managed by Batch Queue Systems. Users take advantage of Monitoring Tools to detect run-time problems in their applications running in a distributed environment. But it is a challenge to use Monitoring Tools on a cluster controlled by a Batch Queue System, because Batch Queue Systems and Monitoring Tools do not coordinate the management of the resources they share when executing a distributed application. We call this problem "lack of interoperability", and to solve it we have developed a framework called TDP-Shell. This framework supports different Batch Queue Systems, such as Condor and SGE, and different Monitoring Tools, such as Paradyn, Gdb and Totalview, without any changes to their source code. This thesis describes the development of the TDP-Shell framework, which allows monitoring both sequential and distributed applications executed on a cluster controlled by a Batch Queue System, as well as a new type of monitoring called "delayed".
Skinner, Michael A. „Hapsite® gas chromatograph-mass spectrometer (GC/MS) variability assessment“. Download the thesis in PDF, 2005. http://www.lrc.usuhs.mil/dissertations/pdf/Skinner2005.pdf.
Hsu, Ming-Wei (徐銘蔚). „Cluster Analysis of River Water Quality Monitoring Data“. Thesis, 2015. http://ndltd.ncl.edu.tw/handle/7kbnex.
National Sun Yat-sen University, Department of Applied Mathematics; ROC academic year 103 (2014).
Water is a major constituent of our bodies and vital organs. Safe drinking water is essential to humans and other lifeforms, so environmental water quality monitoring is very important. The Environmental Protection Administration (EPA) of Taiwan began monitoring environmental water quality and publishing the data on the Internet in 2001. In this study, we analyze river monitoring data on the general pollutant items at the sixteen monitoring stations of the Gaoping River from 2005 to 2013. The water monitoring data, however, have complex patterns: there are missing values, outliers and values below the detection limit, the so-called left-censored data. Before the data analysis, we replace missing values with the median and handle left-censored data using a censored time-series model (Park et al., 2007). We then fit a linear regression model to find seasonal and trend patterns, and use the estimated coefficients of the fitted regression models for cluster analysis. Finally, we discuss the differences in pollution levels between the clusters of monitoring stations, which may be useful to the EPA as a reference for river water quality management.
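The pipeline this abstract describes (fit a per-station linear model with trend and seasonal terms, then cluster stations on the estimated coefficients) can be outlined as follows. This is only a sketch: monthly sampling, a single annual harmonic and three clusters are illustrative assumptions, not the values used in the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans

def station_coefficients(y: np.ndarray) -> np.ndarray:
    """Fit y_t = b0 + b1*t + b2*sin(2*pi*t/12) + b3*cos(2*pi*t/12) + e_t
    by ordinary least squares and return (b0, b1, b2, b3)."""
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / 12),
                         np.cos(2 * np.pi * t / 12)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def cluster_stations(series_by_station: dict, n_clusters: int = 3) -> dict:
    """Cluster stations on their fitted trend/seasonal coefficients."""
    names = list(series_by_station)
    coefs = np.array([station_coefficients(series_by_station[n]) for n in names])
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(coefs)
    return dict(zip(names, labels))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(108)                     # 9 years of monthly observations
    demo = {f"station_{i}": 5 + 0.01 * i * t + np.sin(2 * np.pi * t / 12)
            + rng.normal(0, 0.3, t.size) for i in range(6)}
    print(cluster_stations(demo))
```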
Su, Ying-Yuan (蘇膺元). „Spatial Cluster Detection for the Fishing Vessel Monitoring Systems“. Thesis, 2008. http://ndltd.ncl.edu.tw/handle/07964793676065392045.
National Taiwan Ocean University, Department of Communications and Navigation Engineering; ROC academic year 96 (2007).
The Fishing Vessel Monitoring System (VMS) is an effective tool for fisheries monitoring, control and surveillance measures to counter over-fishing. It can also help the coast guard safeguard vessels more efficiently. As VMS is widely implemented, more and more efforts focus on mining the VMS database to discover knowledge and clues that would further enhance its benefits. This thesis focuses on mining the VMS database with clustering technology developed for and implemented in the VMS of Taiwan. The initial request from the Fisheries Administration was to continuously identify locations where there are at least three fishing vessels within a range of 3 nautical miles. The proposed solution was based on the DBSCAN [1] clustering algorithm. The accuracy and run-time performance were evaluated and improved with vessel position prediction, partitioning of datasets, and data structure and algorithm design. With these promising results, the solution has been recognized by fisheries management and VMS operation experts as having many extended uses in VMS. Finally, this density area detection system was applied to the detection of at-sea transshipment and parallel-track vessels, and its accuracy and practicability are discussed.
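The abstract states the operational requirement directly: flag any group of at least three vessels within 3 nautical miles, detected with a DBSCAN-based approach. A minimal sketch of that density check, simplified to a single snapshot of reported positions and without the position-prediction and dataset-partitioning optimizations the thesis adds, could look like this (the sample coordinates are made up):

```python
import numpy as np
from sklearn.cluster import DBSCAN

EARTH_RADIUS_M = 6_371_000.0
NAUTICAL_MILE_M = 1852.0

def dense_vessel_groups(positions_deg, radius_nm=3.0, min_vessels=3):
    """positions_deg: (n, 2) array of [lat, lon] in degrees for one snapshot.
    Returns a cluster label per vessel; -1 means the vessel is in no dense group."""
    coords = np.radians(np.asarray(positions_deg, dtype=float))
    eps = radius_nm * NAUTICAL_MILE_M / EARTH_RADIUS_M   # radius as an angle in radians
    return DBSCAN(eps=eps, min_samples=min_vessels,
                  metric="haversine").fit_predict(coords)

if __name__ == "__main__":
    snapshot = [[25.00, 122.00], [25.01, 122.01], [25.02, 122.00],  # close group
                [24.00, 121.00]]                                    # lone vessel
    print(dense_vessel_groups(snapshot))   # e.g. [0 0 0 -1]
```

DBSCAN's min_samples counts the point itself, so min_samples=3 corresponds to "at least three vessels" in the requirement.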
Lu, Wen-Jui (呂文瑞). „THE MONITORING SYSTEM FOR A CLUSTER TOOL IN SEMICONDUCTOR“. Thesis, 2001. http://ndltd.ncl.edu.tw/handle/74076600363505632589.
National Taiwan University, Graduate Institute of Mechanical Engineering; ROC academic year 89 (2000).
The ultimate goal of semiconductor manufacturing automation is to achieve unmanned fabs, and the equipment control interface is one of the important issues. Integrated processing technology is being applied to the cluster tool not only to increase yield but also to reduce cost. The purpose of this thesis is to analyze and design a monitoring module controller for the cluster tool using an object-oriented method, focusing on the equipment controller, the function of each module and the implementation method. The functional requirements of the monitoring system and the related problems are described first. The modules are developed using the Unified Modeling Language (UML), and a real-time monitoring module controller is developed according to the users' requirements. In addition, a Petri net model is proposed to capture the complex dynamic behavior within the system. Finally, the system is implemented using object-oriented programming in MS Visual Basic to demonstrate the proposed model via Ethernet in an MS Windows 98/NT environment. A SECS II (Semiconductor Equipment Communication Standard II) communication interface between the equipment controller and an upper-level controller is used, and an experimental cluster tool platform serves as a test example.
Li, Yi-ting (李怡庭). „Cluster Analysis of River Water Quality of Heavy Metal Monitoring Data“. Thesis, 2015. http://ndltd.ncl.edu.tw/handle/v5r3n3.
National Sun Yat-sen University, Department of Applied Mathematics; ROC academic year 103 (2014).
Water plays a very important role in our lives. Both the water quantity and the water quality in each river basin are important parameters for evaluating water usage efficiency in the corresponding areas, and systematic monitoring of both is necessary for efficient control and management of water usage. The Environmental Protection Administration (EPA) in Taiwan has set up monitoring stations in river basins all over the country to monitor water quantity and quality. In this work we are interested in understanding the status of heavy metal pollution in a river basin through these monitoring data. We consider the sixteen stations of the Gaoping river basin and use the river water heavy metal concentrations from 2005 to 2013 to evaluate the heavy metal pollution levels in the basin. We first establish a regression model based on the characteristics of the water heavy metal data. The data provided by the EPA present several situations that need to be taken care of before performing the analysis, such as missing values, outliers and values lower than the detection limit (left-censored data). We therefore impute some missing values using the method established by Park et al. (2007) for left-censored data, and we use the interquartile method to determine whether some data points are outliers. The outliers are then considered in the regression model together with the time trend and seasonal effects. Finally, we use cluster analysis to identify the commonalities and differences among the sixteen stations, which may be useful as a reference for the EPA in understanding the status of the water quality for future management.
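Two pre-processing steps named in this abstract can be illustrated concretely: handling concentrations reported below the detection limit (left-censored values) and flagging outliers with the interquartile-range rule. The sketch below is illustrative only; substituting half of the detection limit is a common simple convention used here as a stand-in and is not the censored-data estimator of Park et al. (2007) applied in the thesis, and the sample values are made up.

```python
import numpy as np

def substitute_left_censored(values, detection_limit, censored_mask):
    """Replace observations flagged as '< detection limit' by DL/2
    (a simple stand-in for a proper censored-data estimator)."""
    values = np.asarray(values, dtype=float).copy()
    values[np.asarray(censored_mask, dtype=bool)] = detection_limit / 2.0
    return values

def iqr_outlier_mask(values, k=1.5):
    """Tukey's rule: flag points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - k * iqr) | (values > q3 + k * iqr)

if __name__ == "__main__":
    raw = [0.8, 1.1, 0.9, 1.0, 9.5, 1.2, 0.7, 1.0]   # mg/L, made-up readings
    censored = [False, False, False, False, False, False, True, False]
    clean = substitute_left_censored(raw, detection_limit=0.7, censored_mask=censored)
    print(clean)
    print(iqr_outlier_mask(clean))   # flags the 9.5 reading and the substituted 0.35 value
```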
Lu, Hsueh-Chih (呂學治). „Experimental Platform for Remote Monitoring and Diagnosing of Cluster-Tools Equipment“. Thesis, 2003. http://ndltd.ncl.edu.tw/handle/4k9jj2.
Chung Yuan Christian University, Graduate Institute of Mechanical Engineering; ROC academic year 91 (2002).
The objective of this thesis is to build an experimental workshop that simulates the processing modules of cluster tools in the semiconductor industry. To verify its reliability, the experimental workshop is finally combined with a three-tier remote monitoring and diagnosing system. The workshop is constructed with reference to cluster-tool equipment widely used in the semiconductor industry in order to reproduce its most important features. It is equipped with a SCARA robot to simulate the wafer transfer procedure between process modules, and a programmable logic controller (PLC) is used to collect all equipment signals and states and feed useful data back to the server end. To verify the reliability of the workshop, a human-machine interface is developed to make graphical control possible: users can control the workshop equipment directly and obtain workshop status such as robot status, gate open/close and wafer conditions. Instead of traditional monitoring and control, a three-tier remote monitoring and control architecture is used to extend the workshop to the Internet, so users can also control the workshop over an Internet connection through the client-end interface of the three-tier remote monitoring and diagnosing system.
Lin, Sih-yuan (林思遠). „A research of managing and monitoring server farm cluster using IPMI“. Thesis, 2009. http://ndltd.ncl.edu.tw/handle/17860598550981081013.
Shih Hsin University, Graduate Institute of Information Management; ROC academic year 97 (2008).
Once enterprises have built their server clusters, their operation becomes highly dependent on those clusters. If a server breaks down, the enterprise cannot operate normally and suffers losses. Managing server clusters is therefore necessary and plays an important role in maintaining the regular operation of large numbers of servers. IPMI (Intelligent Platform Management Interface) is a standard for intelligent platform management. The cross-platform standard interface provided by IPMI makes it possible to monitor the condition of the system, to reduce the cost of server management, and to effectively solve a variety of compatibility problems between a server and its peripheral devices. This research focuses primarily on building IPMI firmware conforming to the IPMI v2.0 specification on embedded Linux; this firmware handles any standard IPMI command received remotely. In addition, commands can be transmitted from a management server over IPMI-over-LAN to provide remote server cluster management, helping enterprises achieve better operating performance. As a result, server cluster management gains the ability to remotely monitor server hardware information and to power on remote servers, overcoming the limitation of traditional servers, which cannot monitor their hardware or be powered on remotely when the system malfunctions.
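The thesis implements the IPMI v2.0 firmware on the managed-node side; on the management-server side, the remote monitoring over IPMI-over-LAN that the abstract describes is conventionally exercised with the standard ipmitool client. The wrapper below is only an illustration of that management-side view and is not part of the thesis's firmware; the host address and credentials are placeholders.

```python
import subprocess

def ipmi(host: str, user: str, password: str, *args: str) -> str:
    """Run an ipmitool command against a remote BMC over IPMI-over-LAN (lanplus)."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host,
           "-U", user, "-P", password, *args]
    return subprocess.run(cmd, check=True, capture_output=True,
                          text=True).stdout

if __name__ == "__main__":
    host, user, password = "10.0.0.42", "admin", "secret"   # placeholders
    print(ipmi(host, user, password, "chassis", "power", "status"))
    print(ipmi(host, user, password, "sensor"))   # list hardware sensor readings
    # Remote power-on of a node, as described in the abstract:
    # ipmi(host, user, password, "chassis", "power", "on")
```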
Lu, Kuo-Chang (呂國璋). „Design and Development of Remote Monitoring and Diagnosing System for Cluster-Tools Equipment“. Thesis, 2003. http://ndltd.ncl.edu.tw/handle/49r7gu.
Chung Yuan Christian University, Graduate Institute of Mechanical Engineering; ROC academic year 91 (2002).
Cluster-tool equipment has become the mainstream of front-end equipment in semiconductor manufacturing because of its lower pollution, small footprint and low cost of ownership. Moreover, equipment automation is a key level of wafer transport automation for cluster tools in a semiconductor factory. Monitoring equipment production status effectively, using message data from the equipment and its sensors, helps prevent equipment damage and enhances manufacturing stability. With remote monitoring techniques, users can control and monitor the equipment through the Internet anywhere at any time; when the equipment has a problem, engineers can resolve it over the Internet and avoid possible further damage. The design and development of a remote monitoring and diagnosing system for semiconductor cluster-tool equipment are proposed in this thesis. The architecture of the remote monitoring and diagnosing system is first discussed, based on the International SEMATECH e-Diagnostics concept and the three-tier application model. The Microsoft .NET Framework is then used for system development, and the cluster-tool experimental platform constructed in this study is used to verify and realize the remote monitoring and diagnostics capabilities.
„A study of two problems in data mining: anomaly monitoring and privacy preservation“. 2008. http://library.cuhk.edu.hk/record=b5893636.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (leaves 89-94).
Abstracts in English and Chinese.
Contents: Abstract; Acknowledgement; Chapter 1 Introduction (1.1 Anomaly Monitoring; 1.2 Privacy Preservation: 1.2.1 Motivation, 1.2.2 Contribution); Chapter 2 Anomaly Monitoring (2.1 Problem Statement; 2.2 A Preliminary Solution: Simple Pruning; 2.3 Efficient Monitoring by Local Clusters: 2.3.1 Incremental Local Clustering, 2.3.2 Batch Monitoring by Cluster Join, 2.3.3 Cost Analysis and Optimization; 2.4 Piecewise Index and Query Reschedule: 2.4.1 Piecewise VP-trees, 2.4.2 Candidate Rescheduling, 2.4.3 Cost Analysis; 2.5 Upper Bound Lemma for Dynamic Time Warping Distance; 2.6 Experimental Evaluations: 2.6.1 Effectiveness, 2.6.2 Efficiency; 2.7 Related Work); Chapter 3 Privacy Preservation (3.1 Problem Definition; 3.2 HD-Composition: 3.2.1 Role-based Partition, 3.2.2 Cohort-based Partition, 3.2.3 Privacy Guarantee, 3.2.4 Refinement of HD-composition, 3.2.5 Anonymization Algorithm; 3.3 Experiments: 3.3.1 Failures of Conventional Generalizations, 3.3.2 Evaluations of HD-Composition; 3.4 Related Work); Chapter 4 Conclusions; Bibliography.
Lin, Tzu-Wei (林子維). „The Study of Remote Performance Monitoring and Automatic Reporting for Semiconductor Cluster-Tools Equipment“. Thesis, 2005. http://ndltd.ncl.edu.tw/handle/ger4g2.
Chung Yuan Christian University, Graduate Institute of Mechanical Engineering; ROC academic year 93 (2004).
With the rapid growth of the semiconductor equipment industry, competition among companies is intense. Each company must make the best use of its own resources and reduce mistakes made by tired or unskilled operators; lowering costs and effectively reducing the error ratio is the goal every company wants to reach. A web service is a server-side component that supplies services over the Internet, providing calculation and query functions. Both the request and the response of a web service use SOAP as the transmission standard, which allows clients written in most languages to call the web service. Engineers can turn common program logic into web services, which simplifies the programs on each server and lightens the load of the other servers. Equipment develops metal fatigue from continuous operation, and a harsh environment brings further problems. If the program can automatically warn engineers of unusual equipment conditions, they can inspect and fix the equipment in advance, which decreases equipment problems and increases efficiency for the company. After combining the two systems (SQL Server and a web service) in this study, we found that the web service can handle more work than the SQL Server system and helps the web server with its calculations; the web service therefore contributes greatly to improving the system architecture. The program recognizes the image and calculates the height of the front arm, then uses statistical process control to obtain the process capability of the equipment and shows the operators the equipment status. When the index shows a decreasing tendency, the operators can call the engineers to inspect and fix the equipment in advance and avoid an accident, achieving the prediction capability of e-Diagnostics.
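The abstract mentions deriving a process-capability index from the measured arm height with statistical process control and warning when it trends downward. The Cp/Cpk computation itself is standard; the sketch below uses made-up specification limits, measurements and an illustrative alarm threshold, none of which come from the thesis.

```python
import numpy as np

def process_capability(samples, lsl: float, usl: float):
    """Return (Cp, Cpk) for measurements against lower/upper specification limits."""
    x = np.asarray(samples, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
    return cp, cpk

if __name__ == "__main__":
    heights = [10.02, 9.98, 10.01, 10.00, 9.97, 10.03, 9.99, 10.02]  # mm, made-up
    cp, cpk = process_capability(heights, lsl=9.90, usl=10.10)
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
    if cpk < 1.33:   # a commonly used alarm threshold; illustrative, not from the thesis
        print("capability degrading - schedule maintenance")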
Háva, Jakub. „Monitorovací nástroj pro distribuované aplikace v jazyce Java“. Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-357041.
Anwar, K. M. Mostafa. „Multivariate data analysis for monitoring the quality of the commercialized bottled water in Bangladesh“. Master's thesis, 2018. http://hdl.handle.net/10362/40334.
Several multivariate statistical (chemometric, pattern recognition) techniques, e.g. principal component analysis, factor analysis, and hierarchical and non-hierarchical k-means cluster analysis, have been applied to gain understanding of the quality of packaged bottled drinking water on the market of Bangladesh. Twenty-three (23) physico-chemical properties of a total of 51 water samples have been investigated. The data set consists of 49 individuals from 11 brands and 2 deionized ASTM Type I water samples produced in the laboratory to be technically pure water with an electrical conductivity of ~0.056 μS cm-1. Descriptive statistics, analysis of variance and non-parametric Kruskal-Wallis tests have been conducted to detect statistical differences between the water types and the different brands. The 23 attributes cover the major ion contents (sodium, potassium, calcium, magnesium, iron, manganese, chloride, fluoride, sulphate, bicarbonate and nitrate) and other features (pH, temperature, total dissolved solids, electrical conductivity, hardness, ammonium, nitrite, free cyanogen, chemical oxygen demand, total cation sum and total anion sum). Both the principal component analysis and the factor analysis revealed that the differences between water individuals are best characterized by four principal components or factors indicating material loading, hardness or softness, aesthetic acceptability and lightness/suitability for human consumption. Hierarchical and non-hierarchical k-means cluster analysis clearly identified four distinct clusters, A, B, C and D, among the bottled water products on the market of Bangladesh. The profile features of each cluster have been defined so that the classification provides an improved and detailed understanding of the general properties of the products under study. We observed that HCA using the Ward algorithm provided a more realistic classification than non-hierarchical k-means, as the cluster members truly reflect their group pattern in line with their chemical compositions. HCA using Ward showed that BRAND05 and BRAND11, belonging to Cluster A, are excessively loaded with materials and are considered hard waters. BRAND09 and BRAND10, grouped with DEIONIZEDWATER in Cluster B, are completely devoid of essential minerals and thus appear to be ultra-low-mineral-content, overly soft water. The remaining brands, BRAND03, BRAND04, BRAND06, BRAND07 and BRAND08, also lack sufficient mineral content and are very soft waters. Hence, waters belonging to Clusters A, B and C are not suitable for human consumption. Only the two brands BRAND01 and BRAND02, in Cluster D, appeared to be suitable for human consumption in every respect; in fact, BRAND01 is produced by a foreign manufacturer, which means that all other local brands except BRAND02 essentially do not have the appropriate quality to serve as drinking water. Both PCA and FA explain these two brands, BRAND01 and BRAND02, very well. These are the major outcomes of this study, which are not immediately apparent from a univariate approach or from inspecting the data set with the naked eye.
The study shows that multivariate data analysis techniques have the potential to be useful complements to existing univariate practices for industrial quality assurance and quality control, market surveillance, standardization and regulatory purposes, and should also be of interest to academic and scientific communities seeking advanced knowledge.
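The multivariate workflow this abstract describes (standardize the physico-chemical variables, extract principal components, then compare Ward hierarchical clustering with k-means) maps directly onto standard scientific-Python tooling. The sketch below keeps four components and four clusters to mirror the abstract, but the data are random stand-ins, so the output labels are not the study's results.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def analyse(samples: np.ndarray, n_components: int = 4, n_clusters: int = 4):
    """Standardize, project onto principal components, then cluster
    with Ward hierarchical clustering and with k-means for comparison."""
    z = StandardScaler().fit_transform(samples)
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(z)
    ward_labels = fcluster(linkage(scores, method="ward"),
                           t=n_clusters, criterion="maxclust")
    kmeans_labels = KMeans(n_clusters=n_clusters, n_init=10,
                           random_state=0).fit_predict(scores)
    return pca.explained_variance_ratio_, ward_labels, kmeans_labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(51, 23))          # 51 samples x 23 measured attributes
    var_ratio, ward, km = analyse(data)
    print("explained variance:", np.round(var_ratio, 2))
    print("Ward labels:   ", ward)
    print("k-means labels:", km)
```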