
Dissertations / Theses on the topic 'Cluster monitoring'



Consult the top 32 dissertations / theses for your research on the topic 'Cluster monitoring.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Worm, Stefan. "Monitoring of large-scale Cluster Computers." Master's thesis, Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200700032.

Full text
Abstract:
The constant monitoring of a computer is essential to staying up to date about its state. This may seem trivial if one is sitting right in front of it, but monitoring a computer from a distance is not as simple anymore. It gets even more difficult if a large number of computers need to be monitored. Because the process of monitoring always causes some load on the network and on the monitored computer itself, it is important to keep these influences as low as possible. Especially for a high-performance cluster built from many computers, the monitoring approach must work as efficiently as possible and must not interfere with the actual operation of the supercomputer. Thus, the main goals of this work were analyses to ensure the scalability of the monitoring solution for a large computer cluster, as well as to prove its functionality in practice. To achieve this, monitoring activities were first classified in terms of the overall operation of a large computer system. Thereafter, methods and solutions were presented which are suitable in a general scenario for executing the monitoring process as efficiently and scalably as possible. During the course of this work, conclusions were drawn from the operation of an existing cluster for the operation of a new, more powerful system, to ensure its functionality as well as possible. Consequently, a selection was made from an existing pool of applications to find the one most suitable for monitoring the new cluster. The selection took into account the particular characteristics of the system, such as the use of InfiniBand as the network interconnect. Furthermore, additional software was developed which can read and process the various status information of InfiniBand ports, regardless of the hardware vendor.
This functionality, which so far had not been available in free monitoring applications, was implemented as an example for the chosen monitoring software. Finally, the influence of monitoring activities on the actual tasks of the cluster was of interest. To examine the influence on the CPU and the network, the newly developed plugin as well as a selection of typical monitoring values were used. It was shown that no impact on the productive application is to be expected for typical monitoring intervals, and that only atypically short intervals produce a minor influence.
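The thesis' finding that monitoring overhead depends on the polling interval can be illustrated with a back-of-the-envelope sketch. All figures below (node count, metric count, payload size) are hypothetical and not taken from the thesis:

```python
def monitoring_load_bps(nodes, metrics_per_node, bytes_per_metric, interval_s):
    """Approximate network load (bits/s) generated by polling every node
    for its metrics once per interval."""
    return nodes * metrics_per_node * bytes_per_metric * 8 / interval_s

# Hypothetical 500-node cluster, 30 metrics of ~64 bytes each:
typical = monitoring_load_bps(500, 30, 64, 60)    # 60 s polling interval
aggressive = monitoring_load_bps(500, 30, 64, 1)  # atypically short 1 s interval
```

Shrinking the interval from 60 s to 1 s multiplies the load sixty-fold, which matches the abstract's observation that only atypically short intervals produce a measurable influence.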
APA, Harvard, Vancouver, ISO, and other styles
2

Worm, Stefan, and Torsten Mehlan. "Monitoring of large-scale Cluster Computers." [S.l. : s.n.], 2007.

Find full text
3

Bucciarelli, Mark. "Cluster sampling methods for monitoring route-level transit ridership." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/13485.

Full text
4

Bank, Mathias. "AIM - A Social Media Monitoring System for Quality Engineering." Doctoral thesis, Universitätsbibliothek Leipzig, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-115894.

Full text
Abstract:
In the last few years the World Wide Web has dramatically changed the way people communicate with each other. The growing availability of Social Media systems like Internet fora, weblogs and social networks ensures that the Internet is today what it was originally designed for: a technical platform in which all users are able to interact with each other. Nowadays, there are billions of user comments available discussing all aspects of life, and the data source is still growing. This thesis investigates whether it is possible to use this growing amount of freely provided user comments to extract quality-related information. The concept is based on the observation that customers are not only posting marketing-relevant information. They also publish product-oriented content including positive and negative experiences. It is assumed that this information represents a valuable data source for quality analyses: the original voices of the customers promise a more exact and more concrete definition of "quality" than the one available to manufacturers or market researchers today. However, the huge amount of unstructured user comments makes their evaluation very complex. It is impossible for an analyst to manually investigate the provided customer feedback. Therefore, Social Media specific algorithms have to be developed to collect, pre-process and finally analyze the data. This is done by the Social Media monitoring system AIM (Automotive Internet Mining), the subject of this thesis. It investigates how manufacturers, products, product features and related opinions are discussed in order to estimate the overall product quality from the customers' point of view. AIM is able to track different types of data sources using a flexible multi-agent based crawler architecture.
In contrast to classical web crawlers, the multi-agent based crawler supports individual crawling policies to minimize the download of irrelevant web pages. In addition, an unsupervised wrapper induction algorithm is introduced to automatically generate content extraction parameters specific to the crawled Social Media systems. The extracted user comments are analyzed by different content analysis algorithms to gain a deeper insight into the discussed topics and opinions. Three different topic types are supported, depending on the analysis needs. * Highly reliable analysis results are produced by a special context-aware, taxonomy-based classification system. * Fast ad-hoc analyses are applied on top of classical fulltext search capabilities. * Finally, AIM supports the detection of blind spots by using a new fuzzified hierarchical clustering algorithm, which generates topical clusters while supporting multiple topics within each user comment. All three topic types are treated in a unified way, enabling an analyst to apply all methods simultaneously and interchangeably. The systematically processed user comments are visualized within an easy and flexible interactive analysis frontend. Special abstraction techniques support the investigation of thousands of user comments with minimal time effort; purpose-built indices show the relevance and customer satisfaction of a given topic.
In the last few years the World Wide Web has changed dramatically. While a few years ago it was still primarily an information source in which only a small share of users could publish content, it has since evolved into a communication platform in which every user can participate actively. The resulting volume of data covers every aspect of daily life, including quality topics. Analysing this data promises to improve quality assurance measures considerably, since it makes it possible to address topics that are difficult to measure with classical sensors. The systematic and reproducible analysis of user-generated data, however, requires adapting existing tools and developing new Social Media specific algorithms. This thesis therefore creates an entirely new Social Media monitoring system with whose help an analyst can investigate thousands of user comments with minimal time effort. Applying the system has revealed several advantages that make it possible to identify the customer-driven definition of "quality".
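The fuzzified clustering idea described above, in which a single user comment may belong to several topical clusters at once, can be sketched with a toy word-overlap similarity. The keyword lists and threshold are invented for illustration; the thesis' actual algorithm is hierarchical and far more elaborate:

```python
def jaccard(a, b):
    """Word-overlap similarity between two short texts."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def fuzzy_assign(comment, topic_keywords, threshold=0.2):
    """Return every topic whose keyword overlap passes the threshold;
    unlike hard clustering, a comment may land in several topics."""
    return [t for t, kw in topic_keywords.items()
            if jaccard(comment, kw) >= threshold]

# Hypothetical topics for an automotive forum:
topics = {"engine": "engine noise oil", "brakes": "brake squeal pads"}
```

A comment such as "engine oil leak and brake squeal" then lands in both the "engine" and "brakes" clusters, mirroring the multi-topic membership the abstract describes.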
5

Neema, Isak. "Surveying and monitoring crimes in Namibia through the likelihood based cluster analysis." Thesis, University of Reading, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.518226.

Full text
6

Živčák, Adam. "Správa Raspberry Pi 4 clusteru pomocí Nix." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445519.

Full text
Abstract:
The scope of this thesis is to design and implement a system for deploying, managing and monitoring a Raspberry Pi cluster using Nix technologies. The thesis describes the benefits of the functional approach of Nix and the subsystems that are based on it. The thesis also results in a supporting web application, providing an intuitive environment for working with cluster configuration deployments and clearly displaying information about the utilization of individual nodes using dashboards. The final part of the thesis is devoted to testing cluster performance using sample distributed computing jobs.
7

Chen, Yajuan. "Cluster-Based Profile Monitoring in Phase I Analysis." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/46810.

Full text
Abstract:
Profile monitoring is a well-known approach used in statistical process control where the quality of the product or process is characterized by a profile, or a relationship between a response variable and one or more explanatory variables. Profile monitoring is conducted over two phases, labeled as Phase I and Phase II. In Phase I profile monitoring, regression methods are used to model each profile and to detect the possible presence of out-of-control profiles in the historical data set (HDS). The out-of-control profiles can be detected by using the statistic. However, previous methods of calculating the statistic are based on using all the data in the HDS, including the data from the out-of-control process. Consequently, this method can be distorted if the HDS contains data from the out-of-control process. This work provides a new profile monitoring methodology for Phase I analysis. The proposed method, referred to as the cluster-based profile monitoring method, incorporates a cluster analysis phase before calculating the statistic. Before introducing the proposed cluster-based method in profile monitoring, this cluster-based method is demonstrated to work efficiently in robust regression, referred to as cluster-based bounded influence regression, or CBI. It is demonstrated that the CBI method provides a robust, efficient and high-breakdown regression parameter estimator. The CBI method first represents the data space via a special set of points, referred to as anchor points. Then a collection of single-point-added ordinary least squares regression estimators forms the basis of a metric used in defining the similarity between any two observations. Cluster analysis then yields a main cluster containing at least half the observations, with the remaining observations comprising one or more minor clusters.
An initial regression estimator arises from the main cluster, with a group-additive DFFITS argument used to carefully activate the minor clusters through a bounded influence regression framework. CBI achieves a 50% breakdown point, is regression equivariant, scale and affine equivariant, and distributionally is asymptotically normal. Case studies and Monte Carlo results demonstrate the performance advantage of CBI over other popular robust regression procedures regarding coefficient stability, scale estimation and standard errors. The cluster-based method in Phase I profile monitoring first replaces the data from each sampled unit with an estimated profile, using some appropriate regression method. The estimated parameters for the parametric profiles are obtained from parametric models while the estimated parameters for the nonparametric profiles are obtained from the p-spline model. The cluster phase clusters the profiles based on their estimated parameters and this yields an initial main cluster which contains at least half the profiles. The initial estimated parameters for the population average (PA) profile are obtained by fitting a mixed model (parametric or nonparametric) to those profiles in the main cluster. Profiles that are not contained in the initial main cluster are iteratively added to the main cluster provided their statistics are "small", and the mixed model (parametric or nonparametric) is used to update the estimated parameters for the PA profile. Those profiles contained in the final main cluster are considered as resulting from the in-control process while those not included are considered as resulting from an out-of-control process. This cluster-based method has been applied to monitor both parametric and nonparametric profiles.
A simulated example, a Monte Carlo study and an application to a real data set illustrate the details of the algorithm, and the performance advantage of this proposed method over a non-cluster-based method is demonstrated with respect to more accurate estimates of the PA parameters and improved classification performance criteria. When the profiles can be represented by vectors, the profile monitoring process is equivalent to the detection of multivariate outliers. For this reason, we also compared our proposed method to a popular method used to identify outliers when dealing with a multivariate response. Our study demonstrated that when the out-of-control process corresponds to a sustained shift, the cluster-based method using the successive difference estimator is clearly the superior method, among those methods we considered, based on all performance criteria. In addition, the influence of accurate Phase I estimates on the performance of Phase II control charts is presented to show the further advantage of the proposed method. A simple example and Monte Carlo results show that more accurate estimates from Phase I provide more efficient Phase II control charts.
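The outline of the cluster-based Phase I idea — fit each profile, then keep a "main cluster" of at least half the profiles whose estimated parameters agree — can be sketched as follows. This is a deliberately crude stand-in: componentwise medians and a fixed tolerance replace the dissertation's actual cluster analysis and statistics, and all data are invented:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for one sampled linear profile."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

def main_cluster(profiles, tol=0.5):
    """Crude stand-in for the clustering phase: estimate each profile's
    parameters, then keep the profiles whose parameters sit near the
    componentwise median (the presumed in-control group)."""
    params = [fit_line(xs, ys) for xs, ys in profiles]
    med = tuple(sorted(p[i] for p in params)[len(params) // 2] for i in range(2))
    return [i for i, (b, a) in enumerate(params)
            if abs(b - med[0]) <= tol and abs(a - med[1]) <= tol]

# Three in-control profiles near y = 1 + 2x and one shifted out-of-control profile:
xs = [0, 1, 2, 3]
profiles = [(xs, [1, 3, 5, 7]), (xs, [1.1, 3.1, 5.1, 7.1]),
            (xs, [0.9, 2.9, 4.9, 6.9]), (xs, [5, 7, 9, 11])]
```

Running `main_cluster(profiles)` keeps the three unshifted profiles and excludes the shifted one, mirroring the in-control/out-of-control split the abstract describes.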
Ph. D.
8

Chan, Sik-foon Joyce. "Application of cluster analysis to identify sources of particulate matter in Hong Kong /." Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B1470920X.

Full text
9

Tedder, O. W. S. "Monitoring the spin environment of coupled quantum dots : towards the deterministic generation of photonic cluster states." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10049622/.

Full text
Abstract:
Indium gallium arsenide self-assembled quantum dots have attracted a lot of attention due to their ability to trap single electrons and holes whose spin can be manipulated optically. This makes them attractive as qubits and light sources in various quantum computing and communication schemes. However, the spin of electrons and holes rapidly decoheres due to the hyperfine interaction with the atomic nuclei of the dot. The theme of this thesis is to find ways of overcoming this decoherence, in particular to allow generation of photonic cluster states from quantum dots. This was first approached by designing theoretical schemes to measure and compensate for the source of the decoherence, which were experimentally tested. Two new systems were then theoretically designed in which the effects of decoherence could be mitigated. It is shown theoretically that, by exciting a quantum dot with a laser of well-defined polarisation and monitoring the polarisation of the emitted photons, it is possible to determine the vector polarisation of the nuclear spin ensemble. It is shown through simulation that this measurement can be performed on, and possibly faster than, the time-scale of nuclear fluctuations. The fundamental concept behind the measurement procedure is proved in an experiment using coupled quantum dots. During the course of the experiment, anomalous behaviour of the dots was discovered. A second theoretical proposal is made for a system allowing the fast application of an effective field to compensate for the decoherence mechanism. It is then shown by simulation that a coupled dot system with a prepared in-plane nuclear spin polarisation can allow optical spin rotation and entanglement generation. A different system is then theoretically proposed in which the electron spin in a quantum dot can be replaced with another qubit, such as embedded manganese atoms. It is shown through simulation that this system also allows the generation of photonic cluster states.
10

Yang, Weishuai. "Scalable and effective clustering, scheduling and monitoring of self-organizing grids." Diss., Online access via UMI:, 2008.

Find full text
11

Somon, Bertille. "Corrélats neuro-fonctionnels du phénomène de sortie de boucle : impacts sur le monitoring des performances." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAS042/document.

Full text
Abstract:
The ongoing technological mutations occurring in aeronautics have profoundly changed the interactions between humans and machines. Over the course of this evolution, operators have been confronted with systems that are increasingly complex, automated and opaque. Several tragedies have reminded us that the supervision of these systems by human operators remains a sensitive problem. In particular, there is ample evidence that automation has tended to drive operators away from the control loop of the system, creating a so-called out-of-the-loop phenomenon (OOL). This phenomenon is characterized by a decrease in the operator's situation awareness and vigilance, as well as complacency and over-reliance towards automated systems. These difficulties degrade the operator's performance: the operator is no longer able to detect system errors and take over control when necessary. Characterizing the OOL is thus a major issue for human-system interactions in our constantly evolving society. Despite several decades of research, the OOL remains difficult to characterize, and even more so to anticipate. This thesis draws on theories from the neurosciences, notably on the error detection process, to advance our understanding of this phenomenon, with the aim of developing physiological measurement tools able to characterize the out-of-the-loop state during interactions with ecological systems. Specifically, the objective was to characterize the OOL through electroencephalographic (EEG) activity in order to identify markers and/or precursors of the degradation of the system supervision process. In a first step, this error detection process was evaluated in standard laboratory conditions of varying complexity.
Two EEG studies first showed: (i) that cerebral activity associated with this cognitive process emerges in fronto-central regions both when detecting one's own errors (ERN-Pe and FRN-P300 components) and when detecting the errors of a supervised agent (N2-P3 complex), and (ii) that the complexity of the task can degrade this cerebral activity. A further study then addressed a more ecological task, closer to the everyday supervision conditions of operators in aeronautics. Using dedicated EEG signal-processing techniques (e.g., trial-by-trial time-frequency analysis), this study revealed: (i) a fronto-central θ spectral activity comparable to the activities measured under laboratory conditions, (ii) a decrease over the course of the task in the cerebral activity associated with the detection of the system's decisions, and (iii) a specific decrease of this activity for errors. Throughout this thesis, several measures and statistical analyses of EEG activity were adapted to the constraints of ecological tasks. As a perspective, a study is under way that aims to demonstrate the degradation of the system supervision activity during the out-of-the-loop state; identifying precise markers of this phenomenon would make it possible to detect, or even anticipate, it.
12

Komaragiri, Shalini Sushmitha. "A SAG monitoring device based on a cluster of code-based GPS receivers : a thesis presented to the faculty of the Graduate School, Tennessee Technological University /." Click to access online, 2009. http://proquest.umi.com/pqdweb?index=0&did=2000377771&SrchMode=1&sid=2&Fmt=6&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1277472835&clientId=28564.

Full text
13

Boileau, Donald. "Modélisation spatio-temporelle pour la détection d’événements de sécurité publique à partir d’un flux Twitter." Mémoire, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/10241.

Full text
Abstract:
Twitter is a social media platform that is very popular in North America, giving law enforcement agencies an opportunity to detect events of public interest. Twitter messages (tweets) tied to an event often contain street names indicating where the event takes place, which can be used to infer the event's geographical coordinates in real time. Many commercial software tools are available to monitor social media. The performance of these tools could be greatly improved with a larger sample of tweets, a sorting mechanism to identify pertinent events more quickly, and a measure of the reliability of the detected events. The goal of this master's thesis is to detect, from a public Twitter stream, events relative to the public safety of a territory, automatically and with an acceptable level of reliability. To achieve this objective, a computer model based on four components has been developed: a) capture of public tweets based on keywords, with the application of a geographic filter, b) natural language processing of the text of these tweets, use of a street gazetteer to identify tweets that can be localized, and geocoding of tweets based on street names and intersections, c) a spatio-temporal method to form tweet clusters, and d) event detection by isolating clusters containing at least two tweets treating the same subject. This research project differs from existing scientific research as it combines natural language processing, search and geocoding of toponyms based on a street gazetteer, the creation of clusters using geomatics, and the identification of event clusters based on common tweets to detect public safety events in a public Twitter stream. The application of the model to the 90,347 tweets collected for the Toronto-Niagara region in Ontario, Canada resulted in the identification and geocoding of 1,614 tweets and the creation of 172 clusters, of which 79 event clusters contain at least two tweets on the same subject, a reliability rate of 45.9%.
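The spatio-temporal clustering step (component c above) can be sketched as a greedy grouping in which tweets join the same cluster when they are close in both space and time. The distance and time thresholds, and the flat-earth distance approximation, are hypothetical simplifications, not the thesis' actual method:

```python
import math

def st_clusters(tweets, max_km=1.0, max_min=30):
    """Greedy single-linkage grouping of (lat, lon, minutes) tuples:
    two tweets belong together when they are within max_km kilometres
    and max_min minutes of each other."""
    def near(a, b):
        dx = (a[0] - b[0]) * 111  # rough km per degree of latitude
        dy = (a[1] - b[1]) * 111 * math.cos(math.radians(a[0]))
        return math.hypot(dx, dy) <= max_km and abs(a[2] - b[2]) <= max_min
    clusters = []
    for t in tweets:
        merged = [c for c in clusters if any(near(t, u) for u in c)]
        for c in merged:
            clusters.remove(c)
        clusters.append(sum(merged, []) + [t])
    return clusters

# Two tweets near the same Toronto intersection ten minutes apart,
# plus one distant tweet (coordinates invented):
tweets = [(43.65, -79.38, 0), (43.651, -79.38, 10), (43.9, -79.38, 5)]
```

With an event-detection rule of "at least two tweets per cluster", only the first group would count as a detected event.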
14

Agne, Arvid. "Provisioning, Configuration and Monitoring of Single-board Computer Clusters." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97853.

Full text
Abstract:
Single-board computers as hardware for container orchestration have been a growing subject. Previous studies have investigated their potential for running production-grade technologies in various environments where low-resource, cheap, and flexible clusters may be of use. This report investigates the application of methods and processes prevalent in cluster, container orchestration, and cloud-native environments. The motivation is that if single-board computers are able to run clusters to a satisfactory degree, they should also be able to support the methods and processes that permeate the same cloud-native technologies. The subject is investigated by creating criteria for each method and process, which then act as the evaluation basis for an experiment in which a single-board computer cluster is built, provisioned, configured, and monitored. In summary, the investigation was successful, instilling more confidence in single-board computer clusters and their ability to implement cluster-related methodologies and processes.
APA, Harvard, Vancouver, ISO, and other styles
15

Liv, Jakob, and Fredrik Nygren. "Lastbalanseringskluster : En studie om operativsystemets påverkan på lastbalanseraren." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-36574.

Full text
Abstract:
This report contains a study of an operating system's impact on the load balancer HAproxy. The study was performed in an experimental environment with four virtual clients for testing, one load balancer and three web server nodes connected to the load balancer. The operating system was the main point of the study, where the load on the load balancer's hardware, the response time, the number of connections and the maximum number of connections per second were examined. The operating systems tested were Ubuntu 10.04, CentOS 6.5, FreeBSD 9.1 and OpenBSD 5.5. The results from the tests show that the load on the hardware and the response time are almost identical on all operating systems, with the exception of OpenBSD, where the conditions to be able to run the hardware tests could not be achieved. FreeBSD was the operating system that was able to manage the highest number of connections, along with CentOS. Ubuntu turned out to be more limited and OpenBSD was very limited. FreeBSD also managed the highest number of connections per second, followed by Ubuntu, CentOS and finally OpenBSD, which turned out to be the worst performer.
APA, Harvard, Vancouver, ISO, and other styles
16

Palomino, Lizeth Vargas. "Técnicas de inteligência artificial aplicadas ao método de monitoramento de integridade estrutural baseado na impedância eletromecânica para monitoramento de danos em estruturas aeronáuticas." Universidade Federal de Uberlândia, 2012. https://repositorio.ufu.br/handle/123456789/14726.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
The basic concept of impedance-based structural health monitoring is to measure the variation of the electromechanical impedance of the structure caused by the presence of damage, using patches of piezoelectric material bonded to the surface of the structure (or embedded into it). The measured electrical impedance of the PZT patch is directly related to the mechanical impedance of the structure, which is why the presence of damage can be detected by monitoring the variation of the impedance signal. To quantify damage, a metric is specially defined, which allows a characteristic scalar value to be assigned to the fault. This study initially evaluates the influence of environmental conditions on the impedance measurement, such as temperature, magnetic fields and ionic environment. The results show that magnetic fields do not influence the impedance measurement and that the ionic environment does influence the results; however, when the sensor is shielded, the effect of the ionic environment is significantly reduced. The influence of the sensor geometry has also been studied: the shape of the PZT patch (rectangular or circular) has no influence on the impedance measurement, but the position of the sensor is important for correctly detecting damage. This work also presents the development of a low-cost portable impedance measurement system that automatically measures and stores data from 16 PZT patches without human intervention. One fundamental aspect in the context of this work is characterizing the damage type from the various impedance signals collected; to this end, the artificial intelligence techniques known as neural networks and fuzzy cluster analysis were tested for classifying damage in aircraft structures, obtaining satisfactory results.
One last contribution of the present work is the study of the performance of the electromechanical impedance-based structural health monitoring technique in detecting damage in structures under dynamic loading. Encouraging results were obtained for this aim.
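The scalar damage metric mentioned above is commonly computed in impedance-based SHM as the root-mean-square deviation (RMSD) between a baseline and a measured impedance signature; the sketch below uses that conventional definition, which is an assumption rather than necessarily the exact metric of the thesis:

```python
import math

def rmsd_damage_metric(baseline, measured):
    """RMSD between a baseline impedance signature and a new measurement.

    Both arguments are equal-length sequences of impedance magnitudes
    sampled over the same frequency band; larger values suggest damage.
    """
    if len(baseline) != len(measured):
        raise ValueError("signatures must cover the same frequency points")
    num = sum((m - b) ** 2 for b, m in zip(baseline, measured))
    den = sum(b ** 2 for b in baseline)
    return math.sqrt(num / den)
```

An undamaged structure re-measured under identical conditions yields a value near zero; a threshold on this scalar then acts as the damage indicator.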
Doctorate in Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles
17

Petersson, Andreas. "A tool for monitoring resource usage in large scale supercomputing clusters." Thesis, Linköpings universitet, PELAB - Laboratoriet för programmeringsomgivningar, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-75435.

Full text
Abstract:
Large-scale computer clusters have in recent years become dominant for computations in applications where extremely high computation capacity is required. A cluster consists of a large set of ordinary servers interconnected with a fast network. As each node runs its own instance of the operating system and works, in that sense, autonomously, supervising the whole cluster is a challenge. To get an overview of the efficiency and utilization of the system, one cannot look at just one computer; it is necessary to monitor all nodes to get a good view of how the cluster behaves. Monitoring performance counters in a large-scale computation cluster raises many difficulties: How can samples of performance metrics be made available to an operator? How can samples of performance metrics be stored? How can a large set of samples of performance metrics be visualized in a meaningful way? This thesis discusses how such a monitoring system can be implemented, what problems one may encounter and possible solutions.
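One way to turn per-node counter samples into a cluster-wide view is to aggregate them by timestamp; this minimal sketch (the sample layout and field names are assumptions) illustrates the idea:

```python
from collections import defaultdict
from statistics import mean

def aggregate_samples(samples):
    """Collapse per-node counter samples into a cluster-wide view.

    `samples` is a list of (timestamp, node, value) tuples; the result maps
    each timestamp to the mean and max over all nodes that reported, plus
    how many nodes reported at that instant.
    """
    by_ts = defaultdict(list)
    for ts, _node, value in samples:
        by_ts[ts].append(value)
    return {ts: {"mean": mean(vs), "max": max(vs), "nodes": len(vs)}
            for ts, vs in by_ts.items()}
```

A production system would also have to address the storage and visualization questions the abstract raises; this only shows the reduction step.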
APA, Harvard, Vancouver, ISO, and other styles
18

Terrell, Thomas. "Structural health monitoring for damage detection using wired and wireless sensor clusters." Master's thesis, University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5055.

Full text
Abstract:
Sensing and analysis of a structure for the purpose of detecting, tracking, and evaluating damage and deterioration, during both regular operation and extreme events, is referred to as Structural Health Monitoring (SHM). SHM is a multi-disciplinary field, with a complete system incorporating sensing technology, hardware, signal processing, networking, data analysis, and management for interpretation and decision making. However, many of these processes and their integration into a practical SHM framework are in need of development. In this study, various components of an SHM system are investigated. Particular focus is placed on a previously developed damage detection methodology for global condition assessment of a laboratory structure with a decking system. First, a review of current SHM applications that relate to an ongoing UCF Structures SHM study monitoring a full-scale movable bridge is presented, together with a summary of the critical components of that project. Studies for structural condition assessment of a 4-span bridge-type steel structure using SHM data collected from laboratory-based experiments are then presented. For this purpose, a time series analysis method using ARX models (Auto-Regressive models with eXogenous input) for damage detection with free response vibration data is expanded upon using both wired and wireless acceleration data. Analysis using wireless accelerometers implements a sensor roaming technique to maintain a dense sensor field while requiring fewer sensors. Using both data types, this ARX-based time series analysis method is shown to be effective for damage detection and localization in this relatively complex laboratory structure. Finally, application of the proposed methodologies to a real-life structure is discussed, along with conclusions and recommendations for future work.
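A deliberately simplified version of the time-series damage detection idea, using a plain AR model fitted to a healthy reference signal and the growth of one-step prediction residuals as the damage indicator (a reduction of the full ARX approach, with illustrative model order), might look like:

```python
import numpy as np

def fit_ar(signal, order=4):
    # least-squares AR(order) coefficients from a reference (healthy) signal
    x = np.asarray(signal, dtype=float)
    rows = [x[i - order:i][::-1] for i in range(order, len(x))]
    A = np.vstack(rows)
    b = x[order:]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

def residual_std(signal, coeffs):
    # std of one-step-ahead prediction errors under the reference model;
    # a marked increase relative to the healthy baseline suggests damage
    x = np.asarray(signal, dtype=float)
    order = len(coeffs)
    preds = [np.dot(coeffs, x[i - order:i][::-1]) for i in range(order, len(x))]
    return float(np.std(x[order:] - np.array(preds)))
```

Localization, as in the thesis, would compare this indicator across sensors; here only the per-signal statistic is shown.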
ID: 029810361; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (M.S.C.E.)--University of Central Florida, 2011; Includes bibliographical references (p. 102-114).
M.S.C.E.
Masters
Civil, Environmental and Construction Engineering
Engineering and Computer Science
Civil Engineering
APA, Harvard, Vancouver, ISO, and other styles
19

Marshall, J. Brooke. "Prospective Spatio-Temporal Surveillance Methods for the Detection of Disease Clusters." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/29639.

Full text
Abstract:
In epidemiology it is often useful to monitor disease occurrences prospectively to determine the location and time when clusters of disease are forming. This aids in the prevention of illness and injury of the public and is the reason spatio-temporal disease surveillance methods are implemented. Care must be taken in the design and implementation of these types of surveillance methods so that the methods provide accurate information on the development of clusters. Here two spatio-temporal methods for prospective disease surveillance are considered. These include the local Knox monitoring method and a new wavelet-based prospective monitoring method. The local Knox surveillance method uses a cumulative sum (CUSUM) control chart for monitoring the local Knox statistic, which tests for space-time clustering each time there is an incoming observation. The detection of clusters of events occurring close together both temporally and spatially is important in finding outbreaks of disease within a specified geographic region. The local Knox surveillance method is based on the Knox statistic, which is often used in epidemiology to test for space-time clustering retrospectively. In this method, a local Knox statistic is developed for use with the CUSUM chart for prospective monitoring so that epidemics can be detected more quickly. The design of the CUSUM chart used in this method is considered by determining the in-control average run length (ARL) performance for different space and time closeness thresholds as well as for different control limit values. The effect of nonuniform population density and region shape on the in-control ARL is explained and some issues that should be considered when implementing this method are also discussed. In the wavelet-based prospective monitoring method, a surface of incidence counts is modeled over time in the geographical region of interest. 
This surface is modeled using Poisson regression where the regressors are wavelet functions from the Haar wavelet basis. The surface is estimated each time new incidence data are obtained, using both past and current observations but weighting current observations more heavily. The flexibility of this method allows for the detection of changes in the incidence surface, increases in the overall mean incidence count, and clusters of disease occurrences within individual areas of the region, through the use of control charts. This method is also able to incorporate information on population size and other covariates as they change in the geographical region over time. The control charts developed for use in this method are evaluated based on their in-control and out-of-control ARL performance, and recommendations on the most appropriate control chart to use for different monitoring scenarios are provided.
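A one-sided upper CUSUM chart of the kind used to monitor such a statistic can be sketched as follows; the reference value k and control limit h are design parameters chosen to give the desired in-control ARL:

```python
def cusum(observations, target, k, h):
    """One-sided upper CUSUM chart.

    Accumulates C[t] = max(0, C[t-1] + x[t] - target - k) and signals when
    C[t] exceeds the control limit h; returns the index of the first signal,
    or None if the sequence never signals.
    """
    c = 0.0
    for i, x in enumerate(observations):
        c = max(0.0, c + x - target - k)
        if c > h:
            return i
    return None
```

In the local Knox setting, `observations` would be the stream of local Knox statistics rather than raw counts; the chart mechanics are the same.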
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
20

Ivars, Camañez Vicente-José. "TDP-Shell: Entorno para acoplar gestores de colas y herramientas de monitorizaci on." Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/96251.

Full text
Abstract:
Nowadays distributed applications are executed on computer clusters managed by a Batch Queue Systems. Users take advantage of Monitoring Tools to detect run-time problems on their applications running on a distributed environment. But it is a challenge to use Monitoring Tools on a cluster controlled by a Batch Queue System. This is due to the fact that Batch Queue Systems and Monitoring Tools do not coordinate the management of the resources they share, when executing a distributed application. We call this problem "lack of interoperability" and to solve it we have developed a framework called TDP-Shell. This framework supports different Batch Queue Systems such as Condor and SGE, and different Monitoring Tools such as Paradyn, Gdb and Totalview, without any changes on their source code. This thesis describes the development of the TDP-Shell framework, which allows monitoring both sequential and distributed applications that are executed on a cluster controlled by a Batch Queue System, as well as a new type of monitoring called "delayed".
APA, Harvard, Vancouver, ISO, and other styles
21

Skinner, Michael A. "Hapsite® gas chromatograph-mass spectrometer (GC/MS) variability assessment /." Download the thesis in PDF, 2005. http://www.lrc.usuhs.mil/dissertations/pdf/Skinner2005.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Hsu, Ming-Wei, and 徐銘蔚. "Cluster Analysis of River Water Quality Monitoring Data." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/7kbnex.

Full text
Abstract:
Master's
National Sun Yat-sen University
Department of Applied Mathematics
103
Water is a major constituent of our bodies and vital organs. Safe drinking water is essential to humans and other life forms, so environmental water quality monitoring is very important. The Environmental Protection Administration (EPA) of Taiwan began monitoring environmental water quality and publishing the data on the Internet in 2001. In this study, we analyze river monitoring data on the general pollutant items at the sixteen monitoring stations of the Gaoping river from 2005 to 2013. However, the water monitoring data have complex patterns: there are missing values, outliers and values below the detection limit, known as left-censored data. Before the data analysis, we replace missing values with the median and handle left-censored data using a censored time series model (Park et al., 2007). We then fit a linear regression model to find seasonal and trend patterns and use the estimated coefficients of the fitted regression models for cluster analysis. Finally, we discuss the differences in pollution levels between different clusters of monitoring stations, which may be useful to the EPA as a reference for river water quality management.
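The preprocessing and regression steps can be illustrated with a much simplified sketch: below-detection-limit values are substituted with half the detection limit (a crude stand-in for the censored time series model of Park et al., 2007), and a trend-plus-seasonal regression yields coefficients that could feed the cluster analysis. The function names and the substitution rule are assumptions:

```python
import numpy as np

def substitute_censored(values, lod):
    # crude stand-in for a censored-data model: replace "below detection
    # limit" entries (None) with half the limit of detection
    return [lod / 2 if v is None else v for v in values]

def trend_seasonal_coeffs(values, period=12):
    # OLS fit of y = a + b*t + c*sin(2*pi*t/period) + d*cos(2*pi*t/period);
    # the fitted coefficients can serve as features for clustering stations
    y = np.asarray(values, dtype=float)
    t = np.arange(len(y))
    X = np.column_stack([np.ones_like(t, dtype=float), t,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # [intercept, trend, sin amplitude, cos amplitude]
```

Clustering the per-station coefficient vectors (e.g. with k-means) then groups stations with similar trend and seasonality, as the study does.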
APA, Harvard, Vancouver, ISO, and other styles
23

Su, Ying-Yuan, and 蘇膺元. "Spatial Cluster Detection for the Fishing Vessel Monitoring Systems." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/07964793676065392045.

Full text
Abstract:
Master's
National Taiwan Ocean University
Department of Communications and Navigation Engineering
96
The Fishing Vessel Monitoring System (VMS) is an effective tool for fisheries monitoring, control and surveillance measures to counter over-fishing. It can also help the coast guard safeguard vessels more efficiently. As VMS is widely implemented, more and more efforts focus on mining the VMS database to discover knowledge and clues that would further enhance its benefits. This thesis focuses on mining the VMS database with clustering technology developed for and implemented into the VMS of Taiwan. The initial request from the Fisheries Administration was to constantly identify wherever there are at least three fishing vessels within a range of 3 nautical miles. The proposed solution is based on the DBSCAN [1] clustering algorithm. Accuracy and run-time performance were evaluated and improved with vessel position prediction, partitioning of datasets, and data structure and algorithm design. With the promising results, this solution has been recognized by fisheries management and VMS operation experts as having many extended uses in VMS. Finally, this Density Area Detection System was applied to the detection of at-sea transshipment and parallel-track vessels, and its accuracy and practicability are discussed.
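A minimal DBSCAN over vessel positions, flagging groups of at least three vessels within 3 nautical miles as in the Fisheries Administration's request, can be sketched as follows (a plain-Python illustration, not the optimized implementation of the thesis):

```python
import math

def dist_nmi(a, b):
    # great-circle distance in nautical miles between (lat, lon) pairs
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3440.065 * math.asin(math.sqrt(h))  # Earth radius in nmi

def dbscan(points, eps=3.0, min_pts=3):
    # minimal DBSCAN: returns a list of clusters (lists of point indices);
    # noise points belong to no cluster
    labels = {}
    clusters = []
    for i in range(len(points)):
        if i in labels:
            continue
        neigh = [j for j in range(len(points)) if dist_nmi(points[i], points[j]) <= eps]
        if len(neigh) < min_pts:
            continue  # not a core point (for now); may later join as a border point
        cluster = []
        frontier = [i]
        labels[i] = len(clusters)
        while frontier:
            p = frontier.pop()
            cluster.append(p)
            pn = [j for j in range(len(points)) if dist_nmi(points[p], points[j]) <= eps]
            if len(pn) >= min_pts:  # p is a core point: expand through it
                for j in pn:
                    if j not in labels:
                        labels[j] = len(clusters)
                        frontier.append(j)
        clusters.append(cluster)
    return clusters
```

The thesis accelerates exactly this kind of neighbourhood search with dataset partitioning and position prediction; the brute-force neighbour scan above is O(n²) and only meant to show the clustering rule.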
APA, Harvard, Vancouver, ISO, and other styles
24

Lu, Wen-Jui, and 呂文瑞. "THE MONITORING SYSTEM FOR A CLUSTER TOOL IN SEMICONDUCTOR." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/74076600363505632589.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Mechanical Engineering
89
In recent years, the ultimate goal of semiconductor manufacturing automation systems has been to achieve unmanned fabs, and the equipment control interface is one of the important issues. Integrated processing technology is being applied to the cluster tool not only to increase yield but also to reduce cost. The purpose of this thesis is to analyze and design a monitoring module controller for the cluster tool using an object-oriented method, focusing on the equipment controller, the function of each module and the implementation method. The functional requirements of the monitoring system and the related problems are first described. The modules are developed using the Unified Modeling Language (UML). According to users' requirements, a real-time monitoring module controller is developed. In addition, a Petri net model is proposed to model the complex dynamic behavior within the system. Finally, the system is implemented in the object-oriented language MS Visual Basic to demonstrate the proposed model via Ethernet in an MS Windows 98/NT environment. The SECS-II (Semiconductor Equipment Communication Standard II) communication interface between the equipment controller and an upper controller is used, and an experimental cluster tool platform serves as a test example.
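The kind of place/transition Petri net used to model such dynamic behavior can be illustrated with a minimal sketch; the wafer-flow places and transition names below are hypothetical:

```python
class PetriNet:
    # minimal place/transition net: a transition consumes one token from each
    # input place and produces one token in each output place
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
```

For a cluster tool one would model wafers, the robot and process chambers as places, and handoffs as transitions; firing sequences then expose deadlocks and resource conflicts before implementation.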
APA, Harvard, Vancouver, ISO, and other styles
25

Li, Yi-ting, and 李怡庭. "Cluster Analysis of River Water Quality of Heavy Metal Monitoring Data." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/v5r3n3.

Full text
Abstract:
Master's
National Sun Yat-sen University
Department of Applied Mathematics
103
Water plays a very important role in our lives. Both the water quantity and its quality in each river basin are important parameters for evaluating the efficiency of water usage in the corresponding areas, and systematic monitoring of both is necessary for efficient control and management. The Environmental Protection Administration (EPA) in Taiwan has set up monitoring stations in different river basins all over the country to monitor water quantity and quality. In this work we are interested in understanding the status of heavy metal pollution in a river basin through these monitoring data. We consider the sixteen stations of the Gaoping river basin and use the river water heavy metal concentrations from 2005 to 2013 to evaluate the heavy metal pollution levels in the basin. We first establish a regression model based on the characteristics of the water heavy metal data. The data provided by the EPA sometimes have situations that need to be taken care of before performing the data analysis, such as missing values, outliers and values lower than the monitoring limit (left-censored). Thus, we impute missing values using the method established by Park et al. (2007) for left-censored data to give estimated values. Moreover, we use the interquartile method to determine whether some data are outliers; the outliers are then considered in the regression model together with time trend and seasonal effects. Finally, we use cluster analysis to identify the commonalities and differences among the sixteen stations, which may be useful as a reference for the EPA in understanding the status of the water quality for future management.
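The interquartile-range outlier rule mentioned above can be sketched as the usual Tukey fence; the linear-interpolation quantile scheme is an assumption:

```python
def iqr_outliers(values, k=1.5):
    # flags points outside [Q1 - k*IQR, Q3 + k*IQR], the usual Tukey fence
    xs = sorted(values)

    def quantile(q):
        # linear interpolation between adjacent order statistics
        pos = q * (len(xs) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(xs) - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lower or v > upper]
```

Flagged points would then enter the regression as indicator terms rather than being discarded, as the abstract describes.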
APA, Harvard, Vancouver, ISO, and other styles
26

Lu, Hsueh-Chih, and 呂學治. "Experimental Platform for Remote Monitoring and Diagnosing of Cluster-Tools Equipment." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/4k9jj2.

Full text
Abstract:
Master's
Chung Yuan Christian University
Graduate Institute of Mechanical Engineering
91
The objective of this work is to build an experimental workshop that simulates the processing modules of cluster tools in the semiconductor industry. To verify its reliability, the experimental workshop is finally integrated with a three-tier remote monitoring and diagnosing system. In constructing the workshop, the cluster-tools equipment widely used in the semiconductor industry is taken as a reference for its most important features. The workshop is equipped with a SCARA robot to simulate the wafer transfer procedure between process modules, and a programmable logic controller (PLC) collects all signals and equipment states and feeds useful data back to the server end. To verify the reliability of the workshop, a human-machine interface is developed to make graphical control possible. Users can control the equipment of the workshop directly and obtain the status of the workshop, such as robot status, gate open/close, wafer conditions, etc. Beyond traditional monitoring and control, the three-tier remote monitoring and control architecture extends the workshop to the Internet, so users can also control the workshop over an Internet connection through the client-end interface of the three-tier remote monitoring and diagnosing system.
APA, Harvard, Vancouver, ISO, and other styles
27

Lin, Sih-yuan, and 林思遠. "A research of managing and monitoring server farm cluster using IPMI." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/17860598550981081013.

Full text
Abstract:
Master's
Shih Hsin University
Graduate Institute of Information Management (including the in-service master's program)
97
After enterprises build their server clusters, the operation of the enterprise becomes highly dependent on that of the clusters. Once a server breaks down, the enterprise cannot operate regularly and suffers losses. Therefore, the management of server clusters is necessary and plays an important role in maintaining the regular operation of numerous servers. IPMI (Intelligent Platform Management Interface) is a standard for intelligent platform management. Through the cross-platform standard interface provided by IPMI, we can monitor the condition of a system, reduce the cost of server management and effectively solve the various compatibility problems between a server and its accessory appliances. This research focuses primarily on implementing IPMI firmware conforming to the IPMI v2.0 specification on Embedded Linux. The firmware handles any standard IPMI command received remotely. In addition, through IPMI over LAN, commands can be transmitted from a management server to provide remote server cluster management, helping enterprises achieve better operating performance. As a result, server cluster management gains the ability to remotely monitor hardware information in a server and to power on a remote server, overcoming the difficulty that a traditional server cannot monitor its hardware or be powered on remotely when the system malfunctions.
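Sensor readings retrieved over IPMI (for example via `ipmitool sensor` over LAN) are typically reported as pipe-separated rows; a sketch of parsing such output on the management side and flagging non-ok sensors follows. The exact column layout varies by BMC, so the format assumed here is illustrative:

```python
def parse_sensor_lines(output):
    # parse pipe-separated sensor rows: name | value | unit | status | ...
    sensors = {}
    for line in output.strip().splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) < 4:
            continue
        name, value, unit, status = fields[:4]
        try:
            value = float(value)
        except ValueError:
            value = None  # e.g. "na" for absent readings
        sensors[name] = {"value": value, "unit": unit, "status": status}
    return sensors

def unhealthy(sensors):
    # anything whose status is neither "ok" nor "na" deserves an alert
    return [n for n, s in sensors.items() if s["status"] not in ("ok", "na")]
```

In a real deployment the raw text would come from an IPMI-over-LAN session rather than a string literal; only the parsing and alerting logic is shown.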
APA, Harvard, Vancouver, ISO, and other styles
28

Lu, Kuo-Chang, and 呂國璋. "Design and Development of Remote Monitoring and Diagnosing System for Cluster-Tools Equipment." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/49r7gu.

Full text
Abstract:
Master's
Chung Yuan Christian University
Graduate Institute of Mechanical Engineering
91
Cluster-tools equipment has become the mainstream of front-end equipment in semiconductor manufacturing for its low pollution, small footprint and low cost of ownership. Moreover, equipment automation is a key level of wafer transport automation for cluster tools in a semiconductor factory. To monitor equipment production status effectively, the message data from equipment and sensors can be used to prevent equipment damage and enhance manufacturing stability. With remote monitoring techniques, users can control and monitor the equipment through the Internet anywhere, at any time; when the equipment has a problem, engineers can resolve it through the Internet and avoid possible further damage. The design and development of a remote monitoring and diagnosing system for semiconductor cluster-tools equipment are proposed in this article. First, the architecture of the remote monitoring and diagnosing system is discussed based on the International SEMATECH e-Diagnostics and three-tier application concepts. Then, the Microsoft .NET Framework platform is used for system development, and the cluster-tools experimental platform constructed in this study is used to verify and realize the remote monitoring and diagnostics capabilities.
APA, Harvard, Vancouver, ISO, and other styles
29

"A study of two problems in data mining: anomaly monitoring and privacy preservation." 2008. http://library.cuhk.edu.hk/record=b5893636.

Full text
Abstract:
Bu, Yingyi.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (leaves 89-94).
Abstracts in English and Chinese.
Abstract --- p.i
Acknowledgement --- p.v
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Anomaly Monitoring --- p.1
Chapter 1.2 --- Privacy Preservation --- p.5
Chapter 1.2.1 --- Motivation --- p.7
Chapter 1.2.2 --- Contribution --- p.12
Chapter 2 --- Anomaly Monitoring --- p.16
Chapter 2.1 --- Problem Statement --- p.16
Chapter 2.2 --- A Preliminary Solution: Simple Pruning --- p.19
Chapter 2.3 --- Efficient Monitoring by Local Clusters --- p.21
Chapter 2.3.1 --- Incremental Local Clustering --- p.22
Chapter 2.3.2 --- Batch Monitoring by Cluster Join --- p.24
Chapter 2.3.3 --- Cost Analysis and Optimization --- p.28
Chapter 2.4 --- Piecewise Index and Query Reschedule --- p.31
Chapter 2.4.1 --- Piecewise VP-trees --- p.32
Chapter 2.4.2 --- Candidate Rescheduling --- p.35
Chapter 2.4.3 --- Cost Analysis --- p.36
Chapter 2.5 --- Upper Bound Lemma: For Dynamic Time Warping Distance --- p.37
Chapter 2.6 --- Experimental Evaluations --- p.39
Chapter 2.6.1 --- Effectiveness --- p.40
Chapter 2.6.2 --- Efficiency --- p.46
Chapter 2.7 --- Related Work --- p.49
Chapter 3 --- Privacy Preservation --- p.52
Chapter 3.1 --- Problem Definition --- p.52
Chapter 3.2 --- HD-Composition --- p.58
Chapter 3.2.1 --- Role-based Partition --- p.59
Chapter 3.2.2 --- Cohort-based Partition --- p.61
Chapter 3.2.3 --- Privacy Guarantee --- p.70
Chapter 3.2.4 --- Refinement of HD-composition --- p.75
Chapter 3.2.5 --- Anonymization Algorithm --- p.76
Chapter 3.3 --- Experiments --- p.77
Chapter 3.3.1 --- Failures of Conventional Generalizations --- p.78
Chapter 3.3.2 --- Evaluations of HD-Composition --- p.79
Chapter 3.4 --- Related Work --- p.85
Chapter 4 --- Conclusions --- p.87
Bibliography --- p.89
APA, Harvard, Vancouver, ISO, and other styles
30

Lin, Tzu-Wei, and 林子維. "The Study of Remote Performance Monitoring and Automatic Reporting for Semiconductor Cluster-Tools Equipment." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/ger4g2.

Full text
Abstract:
Master's
Chung Yuan Christian University
Graduate Institute of Mechanical Engineering
93
With the rapid growth of the semiconductor equipment industry, competition among companies is intense. Each company must make the best use of its resources and reduce the mistakes made by tired or unskilled operators; effectively lowering costs and reducing the error ratio is the goal every company wants to reach. A Web Service is a service supplied over the Internet, here providing calculation and query functions. Both the request and the response of a Web Service use SOAP as the transmission standard, which allows the Web Service to be invoked from most languages. Engineers can turn common programs into Web Services, simplifying the programs on each server and lightening the load on other servers. Equipment suffers metal fatigue from continuous operation, and a harsh environment brings further problems. If a program can automatically warn engineers of unusual equipment conditions, they can check and fix the equipment in advance, decreasing equipment problems and increasing efficiency for the company. After combining the two systems (SQL Server and Web Service), this study shows that the Web Service can take over much of the computation from the SQL Server system and assist the web server, so Web Services have a great influence on improving the system structure. The program recognizes images and calculates the height of the front arm, then uses statistical process control to obtain the process capability of the equipment and show operators its status. When the capability index displays a decreasing tendency, operators can call the engineers to check and fix the equipment in advance and avoid an accident, achieving the capability of e-Diagnostics prediction.
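The process capability evaluation mentioned above can be sketched with the conventional Cpk index; the specification limits and sample values below are illustrative, not taken from the thesis:

```python
from statistics import mean, stdev

def cpk(samples, lsl, usl):
    """Process capability index Cpk from measured samples.

    Cpk = min(USL - mean, mean - LSL) / (3 * sigma). Values above roughly
    1.33 are conventionally read as a capable process; a downward trend in
    the index over time is the warning signal described in the abstract.
    """
    mu = mean(samples)
    sigma = stdev(samples)
    return min(usl - mu, mu - lsl) / (3 * sigma)
```

A monitoring service would recompute this on each new batch of arm-height measurements and alert when the trend turns downward.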
APA, Harvard, Vancouver, ISO, and other styles
31

Háva, Jakub. "Monitorovací nástroj pro distribuované aplikace v jazyce Java [A Monitoring Tool for Distributed Java Applications]." Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-357041.

Full text
Abstract:
The main goal of this thesis is to create a monitoring platform and library that can be used to monitor distributed Java-based applications. This work is inspired by Google Dapper and shares a concept called "Span" with the aforementioned project. Spans represent a small, specific part of the computation and are used to capture state among multiple communicating nodes. In order to collect spans without recompiling the original application's code, instrumentation techniques are used heavily in the thesis. The monitoring tool, called Distrace, consists of two parts: the native agent and the instrumentation server. Users of the Distrace tool are expected to extend the instrumentation server and specify the points in their application's code where new spans should be created and closed. In order to achieve high performance and affect the running application as little as possible, the instrumentation server is used for instrumenting the code. The Distrace tool aims to have a small footprint on the monitored applications, should be easy to deploy, and is transparent to target applications from the point of view of the final user.
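The Dapper-style "Span" concept the thesis builds on can be sketched in a few lines: a span marks the start and end of one unit of work and carries a shared trace id so that spans produced on different nodes can be stitched back into one call tree. Distrace itself injects these points into Java bytecode via instrumentation; the minimal API below is a language-agnostic illustration, not the tool's actual interface.

```python
import time
import uuid


class Span:
    """One timed unit of work within a distributed trace."""

    def __init__(self, name, trace_id=None, parent_id=None):
        self.name = name
        # All spans of one request share the same trace id.
        self.trace_id = trace_id or uuid.uuid4().hex
        self.parent_id = parent_id
        self.span_id = uuid.uuid4().hex
        self.start = self.end = None

    def __enter__(self):
        self.start = time.monotonic()
        return self

    def __exit__(self, *exc):
        self.end = time.monotonic()
        return False

    def child(self, name):
        # A child span inherits the trace id so a collector can
        # reassemble the call tree across communicating nodes.
        return Span(name, trace_id=self.trace_id, parent_id=self.span_id)
```

Closed spans would then be shipped to a collector, which groups them by `trace_id` and orders them by `parent_id` and timestamps.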
APA, Harvard, Vancouver, ISO, and other styles
32

Anwar, K. M. Mostafa. "Multivariate data analysis for monitoring the quality of the commercialized bottled water in Bangladesh." Master's thesis, 2018. http://hdl.handle.net/10362/40334.

Full text
Abstract:
Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Information Analysis and Management
Several multivariate statistical (chemometrics, or pattern recognition) techniques, e.g. Principal Component Analysis, Factor Analysis, and Hierarchical and Non-Hierarchical k-Means Cluster Analysis, have been applied to gain understanding of the quality of packaged bottled drinking water in the market of Bangladesh. Twenty-three (23) physico-chemical properties of a total of 51 water samples have been investigated. The data set consists of 49 individuals from 11 brands and 2 deionized ASTM Type I water samples produced in the laboratory as technically pure water with an electrical conductivity of ~0.056 μS·cm-1. Descriptive statistics, analysis of variance, and non-parametric Kruskal-Wallis tests have been conducted to detect statistical differences between the water types and different brands. The 23 attributes of water cover the major ion contents (sodium, potassium, calcium, magnesium, iron, manganese, chloride, fluoride, sulphate, bicarbonate and nitrate) and other features (pH, temperature, total dissolved solids, electrical conductivity, hardness, ammonium, nitrite, free cyanogen, chemical oxygen demand, total cation sum and total anion sum). Both the Principal Component Analysis and the Factor Analysis revealed that the differences between water individuals are best characterized by four principal components or factors, indicating material loading, hardness or softness, aesthetic acceptability, and lightness/suitability for human consumption. Hierarchical and non-hierarchical k-means cluster analysis clearly identified the presence of four distinct clusters, A, B, C and D, among the bottled water products in the market of Bangladesh. The profile features for each cluster have been defined so that the classification yields an improved and detailed understanding of the general properties of the products under study.
We have observed that HCA using the Ward algorithm provided a more realistic classification solution than non-hierarchical k-means, as the cluster members truly reflect their group pattern in line with their chemical compositions. HCA using Ward showed that BRAND05 and BRAND11, belonging to Cluster A, are excessively loaded with materials and are considered hard waters. BRAND09 and BRAND10, grouped with DEIONIZEDWATER in Cluster B, are completely devoid of essential minerals and thus appear to be ultra-low mineral content waters, too soft in nature. The other brands, BRAND03, BRAND04, BRAND06, BRAND07 and BRAND08, also lack sufficient mineral content and are therefore very soft waters. Hence, waters belonging to Clusters A, B and C are not suitable for human consumption. Only the two brands in Cluster D, BRAND01 and BRAND02, appeared to be suitable for human consumption in every respect. Notably, BRAND01 is produced by a foreign manufacturer; that means all other local brands except BRAND02 essentially lack the appropriate quality to serve as drinking water. Both PCA and FA explained these two brands, BRAND01 and BRAND02, very well. These are the major outcomes of this study, not immediately apparent from a univariate approach or from visual inspection of the data set. It is revealed that multivariate data analytical techniques have the potential to be useful complementary techniques supporting existing univariate practices for industrial quality assurance/quality control, market surveillance, standardization and regulatory purposes, and they should also be of interest to academic and scientific communities seeking advanced knowledge.
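The Ward-linkage agglomerative clustering the study applies can be sketched in pure Python: repeatedly merge the pair of clusters whose merge causes the smallest increase in total within-cluster variance. Each sample here is a small vector of standardized physico-chemical attributes; the data and function names are illustrative, not the thesis data.

```python
def ward_cost(a, b):
    """Increase in within-cluster variance caused by merging clusters a and b.

    For clusters of sizes na, nb with centroids ca, cb this is
    na*nb/(na+nb) * ||ca - cb||^2 (the Ward criterion).
    """
    na, nb = len(a), len(b)
    dims = range(len(a[0]))
    ca = [sum(x[d] for x in a) / na for d in dims]
    cb = [sum(x[d] for x in b) / nb for d in dims]
    dist2 = sum((p - q) ** 2 for p, q in zip(ca, cb))
    return na * nb / (na + nb) * dist2


def ward_cluster(points, k):
    """Agglomerate points (tuples) into k clusters using the Ward criterion."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        # Find the cheapest pair of clusters to merge.
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: ward_cost(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

On two well-separated groups, e.g. mineral-rich "hard" waters versus near-deionized "soft" ones, the cheap within-group merges happen first, so the final two clusters recover the groups, mirroring the Cluster A versus Cluster B separation described above.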
APA, Harvard, Vancouver, ISO, and other styles
