Dissertations / Theses on the topic 'Data surveillance'

Consult the top 50 dissertations / theses below for your research on the topic 'Data surveillance'.

1

Wang, Simi. "Surveillance video data fusion." Thesis, Kingston University, 2016. http://eprints.kingston.ac.uk/35593/.

Abstract:
The overall objective under consideration is the design of a system capable of automatic inference about events occurring in the scene under surveillance. Using established video processing techniques, low-level inferences are relatively straightforward to establish, as they only determine activities of some description. The challenge is to design a system that is capable of higher-level inference, which can be used to notify stakeholders about events having semantic importance. It is argued that re-identification of the entities present in the scene (such as vehicles and pedestrians) is an important intermediate objective, supporting many of the types of higher-level inference required. The input video can be processed in a number of ways to obtain estimates of the attributes of the objects and events in the scene. These attributes can then be analysed, or 'fused', to enable the high-level inference. One particular challenge is the management of the uncertainties, which are associated with the estimates, and hence with the overall inferences. Another challenge is obtaining accurate estimates of prior probabilities, which can have a significant impact on the final inferences. This thesis makes the following contributions. Firstly, a review of the nature of the uncertainties present in a visual surveillance system and a quantification of the uncertainties associated with current techniques. Secondly, an investigation into the benefits of using a new high-resolution dataset for the problem of pedestrian re-identification under various scenarios, including occlusion, by combining state-of-the-art techniques with low-level fusion techniques. Thirdly, a multi-class classification approach to the classification of vehicle manufacturer logos, using a Fisher discriminative classifier and decision fusion techniques to identify and assign logos to their correct categories. Fourthly, two probabilistic fusion frameworks, developed using Bayesian and evidential Dempster-Shafer methodologies respectively, to allow inferences about multiple objectives and to reduce uncertainty by combining multiple information sources. Fifthly, an evaluation framework based on the Kelly betting strategy, developed to accommodate the additional information offered by the Dempster-Shafer approach and hence allow comparisons with the single probabilistic output provided by a Bayesian analysis.
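To make the fusion machinery concrete, here is a minimal sketch of Dempster's rule of combination, the core operation of the evidential framework mentioned above. The frame of discernment, mass values and variable names are illustrative assumptions, not taken from the thesis.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets of hypotheses
    to masses) with Dempster's rule, normalising out the conflict mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Hypothetical example: two sources classifying a vehicle logo.
CAR, VAN = frozenset({"car"}), frozenset({"van"})
ANY = CAR | VAN
m1 = {CAR: 0.6, ANY: 0.4}            # source 1 favours 'car'
m2 = {CAR: 0.5, VAN: 0.3, ANY: 0.2}  # source 2 is less committed
print(dempster_combine(m1, m2))
```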
2

Clarke, Roger Anthony. "Data Surveillance: Theory, Practice & Policy." The Australian National University. Faculty of Engineering and Information Technology, 1997. http://thesis.anu.edu.au./public/adt-ANU20031112.124602.

Abstract:
Data surveillance is the systematic use of personal data systems in the investigation or monitoring of the actions or communications of one or more persons. This collection of papers was the basis for a supplication under Rule 28 of the ANU's Degree of Doctor of Philosophy Rules. The papers develop a body of theory that explains the nature, applications and impacts of the data processing technologies that support the investigation or monitoring of individuals and populations. Literature review and analysis are supplemented by reports of field work undertaken in both the United States and Australia, which tested the body of theory and enabled it to be articulated. The research programme established a firm theoretical foundation for further work. It provided insights into appropriate research methods, and delivered not only empirically based descriptive and explanatory data, but also evaluative information relevant to policy decisions. The body of work as a whole provides a basis on which more mature research work is able to build.
3

Meeyai, Aronrag. "The analysis of influenza surveillance data." Thesis, Imperial College London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.501077.

4

Adams, Andrew J. "Multispectral persistent surveillance." Online version of thesis, 2008. http://hdl.handle.net/1850/7070.

5

Flach, James D. "River basin surveillance using remotely sensed data." Thesis, Aston University, 1989. http://publications.aston.ac.uk/14296/.

Abstract:
This thesis describes the development of an operational river basin water resources information management system. The river or drainage basin is the fundamental unit of the system, both in the modelling and prediction of hydrological processes and in the monitoring of the effect of catchment management policies. A primary concern of the study is the collection of sufficient, and sufficiently accurate, information to model hydrological processes. Remote sensing, in combination with conventional point-source measurement, can be a valuable source of information, but is often overlooked by hydrologists due to the cost of acquisition and processing. This thesis describes a number of cost-effective methods of acquiring remotely sensed imagery, from airborne video survey to real-time ingestion of meteorological satellite data. Inexpensive micro-computer systems and peripherals are used throughout to process and manipulate the data. Spatial information systems provide a means of integrating these data with topographic and thematic cartographic data, and historical records. For the system to have any real potential the data must be stored in a readily accessible format and be easily manipulated within the database. The design of efficient man-machine interfaces and the use of software engineering methodologies are therefore included in this thesis as a major part of the design of the system. The use of low-cost technologies, from micro-computers to video cameras, enables the introduction of water resources information management systems into developing countries, where the potential benefits are greatest.
6

Laxhammar, Rikard. "Anomaly detection in trajectory data for surveillance applications." Licentiate thesis, Örebro universitet, Akademin för naturvetenskap och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-17235.

Abstract:
Abnormal behaviour may indicate important objects and events in a wide variety of domains. One such domain is intelligence and surveillance, where there is a clear trend towards more and more advanced sensor systems producing huge amounts of trajectory data from moving objects, such as people, vehicles, vessels and aircraft. In the maritime domain, for example, abnormal vessel behaviour, such as unexpected stops, deviations from standard routes, speeding, traffic direction violations etc., may indicate threats and dangers related to smuggling, sea drunkenness, collisions, grounding, hijacking, piracy etc. Timely detection of these relatively infrequent events, which is critical for enabling proactive measures, requires constant analysis of all trajectories; this is typically a great challenge to human analysts due to information overload, fatigue and inattention. In the Baltic Sea, for example, there are typically 3000–4000 commercial vessels present that are monitored by only a few human analysts. Thus, there is a need for automated detection of abnormal trajectory patterns. In this thesis, we investigate algorithms appropriate for automated detection of anomalous trajectories in surveillance applications. We identify and discuss some key theoretical properties of such algorithms, which have not been fully addressed in previous work: sequential anomaly detection in incomplete trajectories, continuous learning based on new data requiring no or limited human feedback, a minimum of parameters and a low and well-calibrated false alarm rate. A number of algorithms based on statistical methods and nearest neighbour methods are proposed that address some or all of these key properties. In particular, a novel algorithm known as the Similarity-based Nearest Neighbour Conformal Anomaly Detector (SNN-CAD) is proposed. This algorithm is based on the theory of Conformal prediction and is unique in the sense that it addresses all of the key properties above. The proposed algorithms are evaluated on real world trajectory data sets, including vessel traffic data, which have been complemented with simulated anomalous data. The experiments demonstrate the type of anomalous behaviour that can be detected at a low overall alarm rate. Quantitative results for learning and classification performance of the algorithms are compared. In particular, results from reproduced experiments on public data sets show that SNN-CAD, combined with Hausdorff distance  for measuring dissimilarity between trajectories, achieves excellent classification performance without any parameter tuning. It is concluded that SNN-CAD, due to its general and parameter-light design, is applicable in virtually any anomaly detection application. Directions for future work include investigating sensitivity to noisy data, and investigating long-term learning strategies, which address issues related to changing behaviour patterns and increasing size and complexity of training data.
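As an illustration of the conformal anomaly detection idea behind SNN-CAD, the sketch below uses the symmetric Hausdorff distance to the nearest training trajectory as the nonconformity measure and flags a trajectory when its conformal p-value falls below a threshold. The simulated trajectories and the 0.05 threshold are invented for illustration; the thesis specifies the actual algorithm and its evaluation.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(t1, t2):
    """Symmetric Hausdorff distance between two (n x 2) trajectories."""
    return max(directed_hausdorff(t1, t2)[0], directed_hausdorff(t2, t1)[0])

def nonconformity(traj, others):
    """Nearest-neighbour nonconformity: distance to the closest trajectory."""
    return min(hausdorff(traj, o) for o in others)

def conformal_p_value(new_traj, training):
    """Fraction of trajectories at least as nonconforming as the new one."""
    scores = [nonconformity(t, [u for u in training if u is not t])
              for t in training]
    new_score = nonconformity(new_traj, training)
    return (sum(s >= new_score for s in scores) + 1) / (len(training) + 1)

rng = np.random.default_rng(0)
# Hypothetical shipping lane: noisy straight-line trajectories.
training = [np.column_stack([np.linspace(0, 10, 50),
                             rng.normal(0, 0.1, 50)]) for _ in range(30)]
detour = np.column_stack([np.linspace(0, 10, 50),
                          3 * np.sin(np.linspace(0, np.pi, 50))])  # deviation
p = conformal_p_value(detour, training)
print("anomaly" if p < 0.05 else "normal", round(p, 3))
```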
7

Hu, Jun. "Privacy-Preserving Data Integration in Public Health Surveillance." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19994.

Abstract:
With widespread use of the Internet, data is often shared between organizations in B2B health care networks. Integrating data across all sources in a health care network would be useful for public health surveillance and provide a complete view of how the overall network is performing. Because of the lack of standardization around a common data model across organizations, matching identities between different locations in order to link and aggregate records is difficult. Moreover, privacy legislation controls the use of personal information, and health care data is very sensitive in nature, so the protection of data privacy and prevention of personal health information leaks is more important than ever. Throughout the process of integrating data sets from different organizations, consent (explicit or implicit) and/or permission to use must be in place, data sets must be de-identified, and identity must be protected. Furthermore, one must ensure that combining data sets from different data sources into a single consolidated data set does not create data that may be potentially re-identified, even when only summary data records are created. In this thesis, we propose new privacy-preserving data integration protocols for public health surveillance, identify a set of privacy-preserving data integration patterns, and propose a supporting framework that combines a methodology and architecture with which to implement these protocols in practice. Our work is validated with two real-world case studies that were developed in partnership with two different public health surveillance organizations.
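As a generic illustration of one building block of such protocols (not the thesis's own design), the sketch below links records across organizations by exchanging keyed hashes of normalized quasi-identifiers rather than the identifiers themselves; the field names and shared key are hypothetical.

```python
import hmac
import hashlib

SHARED_KEY = b"agreed-out-of-band"  # hypothetical key held by both custodians

def link_token(name, dob):
    """Keyed hash of normalized quasi-identifiers; only these tokens are
    exchanged, never the raw name or date of birth."""
    normalized = f"{name.strip().lower()}|{dob}".encode()
    return hmac.new(SHARED_KEY, normalized, hashlib.sha256).hexdigest()

# Each organization tokenizes its own records locally...
clinic = {link_token("Alice Smith", "1980-01-02"): "clinic-rec-17"}
lab = {link_token("alice smith ", "1980-01-02"): "lab-rec-403"}

# ...and matching tokens link the same person without revealing identity.
print(set(clinic) & set(lab))  # one shared token -> one linked record
```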
8

Teo, Wan Ching. "Privacy in the European Union data surveillance context." Thesis, University of Reading, 2018. http://centaur.reading.ac.uk/77849/.

Abstract:
This study asks: how has European Union (EU) law protected privacy in the context of data surveillance used by the EU to address terrorist threats? It argues that the manner in which privacy is protected has a profound impact on the interests that privacy serves to protect: importantly, having a private sphere free from intrusion where the individual can think, express, explore and act; interests that are crucial for human dignity, autonomy and the maintenance of a democratic society. It conceptualises the different notions of privacy derived from legal and philosophical theorists and underscores that privacy harm can result from intrusion into the private sphere. The study adopts De Hert and Gutwirth's view that privacy is an opacity tool, acting to shield the private sphere by prohibiting interference. Privacy is thus distinguished from data protection, the latter serving as a transparency tool to regulate power and manage interferences. The thesis outlines the legal framework for the protection of privacy in the EU, critically evaluating the influence of data protection frameworks. The study asserts that data surveillance threatens privacy because it intrudes into the private sphere of the individual, monitoring and tracking activities and communications in a systematic, indiscriminate manner. It uses two case studies, of air passenger profiling and of data retention, to study how privacy has been protected. The central thesis is that the approach taken to protecting privacy in the EU data surveillance context has largely been framed through the prism of data protection. In view of the different goals that data protection serves, the study finds that a data protection approach to privacy's protection has led to the proceduralisation of privacy, legitimising and bureaucratising intrusions into the private sphere so long as the data collected are adequately secured. The implications are that interferences with the private sphere are de facto legitimised and data surveillance accommodated.
9

Abghari, Shahrooz, and Samira Kazemi. "Open Data for Anomaly Detection in Maritime Surveillance." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4807.

Abstract:
Context: Maritime Surveillance (MS) has received increased attention from a civilian perspective in recent years. Anomaly detection (AD) is one of the many techniques available for improving safety and security in the MS domain. Maritime authorities utilize various confidential data sources for monitoring maritime activities; however, a paradigm shift on the Internet has created new sources of data for MS. These newly identified data sources, which provide publicly accessible data, are the open data sources. Taking advantage of the open data sources in addition to the traditional sources of data in the AD process will increase the accuracy of MS systems. Objectives: The goal is to investigate the potential of open data as a complementary resource for AD in the MS domain. To achieve this goal, the first step is to identify the applicable open data sources for AD. Then, a framework for AD based on the integration of open and closed data sources is proposed. Finally, according to the proposed framework, an AD system with the ability to use open data sources is developed, and the accuracy of the system and the validity of its results are evaluated. Methods: In order to measure the system accuracy, an experiment is performed by means of two-stage random sampling on the vessel traffic data, and the number of true/false positive and negative alarms in the system is verified. To evaluate the validity of the system results, the system is used for a period of time by subject matter experts from the Swedish Coastguard. The experts check the detected anomalies against the available data at the Coastguard in order to obtain the number of true and false alarms. Results: The experimental outcomes indicate that the accuracy of the system is 99%. In addition, the Coastguard validation results show that among the evaluated anomalies, 64.47% are true alarms, 26.32% are false, and 9.21% belong to vessels that remained unchecked due to the lack of corresponding data in the Coastguard data sources. Conclusions: This thesis concludes that using open data as a complementary resource for detecting anomalous behavior in the MS domain is not only feasible but will also improve the efficiency of surveillance systems by increasing accuracy and covering otherwise unseen aspects of maritime activities.
This thesis investigated the potential of open data as a complementary resource for Anomaly Detection (AD) in the Maritime Surveillance (MS) domain. A framework for AD was proposed based on the usage of open data sources along with other traditional sources of data. According to the proposed AD framework and the algorithms implementing the expert rules, the Open Data Anomaly Detection System (ODADS) was developed. To evaluate the accuracy of the system, an experiment on the vessel traffic data was conducted, and an accuracy of 99% was obtained. There was a false negative case in the system results that decreased the accuracy; it was due to incorrect AIS data in a special situation that could not be handled by the detection rules within the scope of this thesis. The validity of the results was investigated by subject matter experts from the Swedish Coastguard. The validation results showed that the majority of the ODADS-evaluated anomalies were true alarms. Moreover, a potential information gap in the closed data sources was observed during the validation process. Despite the high number of true alarms, the number of false alarms was also considerable, mainly because of inaccurate open data. This thesis provided insights into open data as a complement to the common data sources in the MS domain, and it is concluded that using open data will improve the efficiency of surveillance systems by increasing accuracy and covering otherwise unseen aspects of maritime activities.
10

Eriksson, Pontus, Carl Nordström, and Alexander Troshin. "Surveillance Using Facial Recognition and Social Media Data." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385696.

Abstract:
People share more and more on social media, aware that they are being surveilled but unaware of the scope and the ways that their data is processed. Large amounts of resources are dedicated to performing the surveillance, both in terms of labor and computation. This project explores the scope of data collection and processing by showing that it is possible to gather, process, and store data from the social media platforms Twitter and Reddit in real-time using only a personal computer. The focus was to use facial recognition to find specific individuals in the stream of data, but the data collected can be used for other purposes. We have also explored the ethical concerns regarding both the collection and processing of such data.
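The thesis does not name a specific library here, but the matching step can be sketched with the open-source face_recognition package; the image paths and tolerance below are assumptions.

```python
import face_recognition  # open-source wrapper around dlib's face models

# Encode the person of interest once (hypothetical reference photo).
target = face_recognition.load_image_file("target_person.jpg")
target_encoding = face_recognition.face_encodings(target)[0]

def contains_target(image_path, tolerance=0.6):
    """True if any face found in the image matches the target encoding."""
    image = face_recognition.load_image_file(image_path)
    encodings = face_recognition.face_encodings(image)  # 128-d vector per face
    matches = face_recognition.compare_faces(encodings, target_encoding,
                                             tolerance=tolerance)
    return any(matches)

# Hypothetical usage on an image collected from a social media stream:
print(contains_target("reddit_post_1234.jpg"))
```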
11

Phillips, Kirk Tollef. "Statewide surveillance of asthma hospitalizations with secondary data." Thesis, University of Iowa, 2002. https://ir.uiowa.edu/etd/5901.

12

Dutrisac, James George. "Counter-Surveillance in an Algorithmic World." Thesis, Kingston, Ont. : [s.n.], 2007. http://hdl.handle.net/1974/711.

13

Lai, Hon-seng (賴翰笙). "An effective methodology for visual traffic surveillance." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B30456708.

14

Wong, Yuen-ting (黃婉婷). "Inferring influenza epidemic attack rates from serological surveillance data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45174696.

15

Павлова, Д. Б., Г. Е. Заволодько, and І. І. Обод. "Merging primary data of joint air space surveillance systems." Thesis, НТУ «ХПІ», 2019. http://openarchive.nure.ua/handle/document/10371.

16

Gurrapu, Chaitanya. "Human Action Recognition In Video Data For Surveillance Applications." Thesis, Queensland University of Technology, 2004. https://eprints.qut.edu.au/15878/1/Chaitanya_Gurrapu_Thesis.pdf.

Abstract:
Detecting human actions using a camera has many possible applications in the security industry. When a human performs an action, his/her body goes through a signature sequence of poses. To detect these pose changes and hence the activities performed, a pattern recogniser needs to be built into the video system. Due to the temporal nature of the patterns, Hidden Markov Models (HMM), used extensively in speech recognition, were investigated. Initially a gesture recognition system was built using novel features. These features were obtained by approximating the contour of the foreground object with a polygon and extracting the polygon's vertices. A Gaussian Mixture Model (GMM) was fit to the vertices obtained from a few frames and the parameters of the GMM itself were used as features for the HMM. A more practical activity detection system using a more sophisticated foreground segmentation algorithm immune to varying lighting conditions and permanent changes to the foreground was then built. The foreground segmentation algorithm models each of the pixel values using clusters and continually uses incoming pixels to update the cluster parameters. Cast shadows were identified and removed by assuming that shadow regions were less likely to produce strong edges in the image than real objects and that this likelihood further decreases after colour segmentation. Colour segmentation itself was performed by clustering together pixel values in the feature space using a gradient ascent algorithm called mean shift. More robust features in the form of mesh features were also obtained by dividing the bounding box of the binarised object into grid elements and calculating the ratio of foreground to background pixels in each of the grid elements. These features were vector quantized to reduce their dimensionality and the resulting symbols presented as features to the HMM to achieve a recognition rate of 62% for an event involving a person writing on a white board. The recognition rate increased to 80% for the "seen" person sequences, i.e. the sequences of the person used to train the models. With a fixed lighting position, the lack of a shadow removal subsystem improved the detection rate. This is because of the consistent profile of the shadows in both the training and testing sequences due to the fixed lighting positions. Even with a lower recognition rate, the shadow removal subsystem was considered an indispensable part of a practical, generic surveillance system.
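The mesh features described above are straightforward to reproduce; as a rough sketch, the function below computes the foreground ratio in each grid cell of a binarised object's bounding box (the 4x4 grid and toy mask are illustrative choices, not the thesis's exact settings).

```python
import numpy as np

def mesh_features(mask, grid=(4, 4)):
    """Foreground/background pixel ratio per grid cell of the bounding box
    of a binary segmentation mask (H x W array of 0s and 1s)."""
    ys, xs = np.nonzero(mask)
    box = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    rows = np.array_split(box, grid[0], axis=0)
    cells = [cell for row in rows
             for cell in np.array_split(row, grid[1], axis=1)]
    return np.array([cell.mean() for cell in cells])  # one ratio per cell

# Toy mask: a rectangular 'person' blob in an otherwise empty frame.
mask = np.zeros((60, 40), dtype=np.uint8)
mask[10:50, 15:25] = 1
print(mesh_features(mask).round(2))  # 16-dimensional feature vector
```

In the thesis, such per-cell ratios were vector quantized into symbols before being presented to the HMM.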
17

Gurrapu, Chaitanya. "Human Action Recognition In Video Data For Surveillance Applications." Queensland University of Technology, 2004. http://eprints.qut.edu.au/15878/.

18

King, Chris. "A Foucauldian Analysis of NCLB: Student Data as Panoptic Surveillance." Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/eps_diss/102.

Abstract:
The No Child Left Behind Act of 2001 (NCLB; Public Law 107-110) reauthorizes and expands the Elementary and Secondary Education Act of 1965 to require large amounts of student data for the purpose of academic surveillance. This study investigates the historical and philosophical components of Jeremy Bentham's Panopticon as a model of surveillance to identify similarities between panopticism and the rubric of collecting student data required by NCLB. All public school districts are evaluated annually for adequate yearly progress (AYP). Under the auspices of this evaluation, all students must be tested, and all results must be included in each district's AYP calculation. All African American, Hispanic, White, economically disadvantaged, special education, and limited English proficient (LEP) students must meet the same performance and participation standards. States individually develop minimum size criteria for evaluation of student groups. High schools must meet a graduation rate standard set by the state. NCLB's comprehensive data compilation and student tracking initiatives are consistent with previous federal education policies to conduct data surveillance on students and teachers. Similar to Jeremy Bentham's 18th-century Panopticon model of penal supervision and rehabilitation, NCLB is transforming the schoolhouse into a correction house by unveiling technologies of surveillance and power. Using Benthamian and Foucauldian philosophical analyses, this dissertation examines NCLB's worldview of student data and tracking, specifically for student subgroups, and the effects of panoptic surveillance. This dissertation proceeds with a review of the historical context of Jeremy Bentham's Panopticon and Michel Foucault's panopticism. The study reviews American educational reform movements from 1776 to 2002, identifying the following panoptic disciplines: constant surveillance, hierarchical observation and categorization, and panoptic power. It considers the NCLB doctrine of data collection for student and teacher tracking purposes and presents an anticolonial analysis of NCLB's methods of compiling and tracking student subgroup data using the works of anticolonial scholars Frantz Fanon, Sylvia Wynter, and Carter Woodson. The dissertation concludes with a synthesis of the questions and problems presented by NCLB and the implications of this analysis for students and teachers.
19

Su, Ting-Li. "Application of spatial statistics to space-time disease surveillance data." Thesis, Lancaster University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.441128.

20

Czerwinski, David (David E.). "Quality of care and drug surveillance : a data-driven perspective." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45170.

Abstract:
In this thesis, we describe the use of medical insurance claims data in three important areas of medicine. First, we develop expert-trained statistical models of quality of care based on variables derived from insurance claims. Such models can be used to identify patients who are receiving poor care so that interventions can be arranged to improve their care. Second, we develop an algorithm that utilizes claims data to perform post-marketing surveillance of drugs to detect previously unknown side effects. The algorithm performed strongly in several realistic simulation tests, detecting side effects a large fraction of the time while controlling the false detection rate. Lastly, we use insurance claims data to improve our understanding of the costs of care for patients who suffer from depression and a chronic disease.
21

Stokes, Grant H., Herbert E. M. Viggh, and J. Kent Pollock. "SPACE-BASED VISIBLE (SBV) SURVEILLANCE DATA VERIFICATION AND TELEMETRY PROCESSING." International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/608395.

Abstract:
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California
This paper discusses the telemetry processing and data verification performed by the SBV Processing, Operations and Control Center (SPOCC) located at MIT Lincoln Laboratory (MIT LL). The SPOCC is unique among the Midcourse Space Experiment (MSX) Data Processing Centers because it supports operational demonstrations of the SBV sensor for Space-Based Space Surveillance applications. The surveillance experiment objectives focus on tracking of resident space objects (RSOs), including acquisition of newly launched satellites. Since Space Surveillance operations have fundamentally short timelines, the SPOCC must be deeply involved in the mission planning for the series of observations and must receive and process the resulting data quickly. In order to achieve these objectives, the MSX Concept of Operations (CONOPS) has been developed to include the SPOCC in the operations planning process. The SPOCC is responsible for generating all MSX spacecraft command information required to execute space surveillance events using the MSX. This operating agreement and a highly automated planning system at the SPOCC allow the planning timeline objectives to be met. In addition, the Space Surveillance experiment scenarios call for active use of the 1 Mbps real-time link to transmit processed target tracks from the SBV to the SPOCC, and for a short-timeline response by the SPOCC to process the track of the new object and produce new commands for the MSX spacecraft, or other space surveillance sensors, to re-acquire the object. To accomplish this, surveillance data processed and stored onboard the SBV is transmitted to the APL Mission Processing Center via 1 Mbps contacts with the dedicated Applied Physics Laboratory (APL) station, or via one of the AFSCN RTS locations, which forwards the telemetry in real time to APL. The Mission Processing facility at APL automatically processes the MSX telemetry to extract the SBV allocation and forwards the data via file transfer over a dedicated fractional T1 link to the SPOCC. The data arriving at the SPOCC is automatically identified and processed to yield calibrated metric observations of RSOs. These results are then fed forward into the mission planning process for follow-up observations. In addition to the experiment support discussed above, the SPOCC monitors and stores SBV housekeeping data, monitors payload health and status, and supports diagnosis and correction. There are also software tools that support the assessment of the results of surveillance experiments and produce a number of products used by the SBV instrument team to assess the overall performance characteristics of the SBV instrument.
22

Bashir, Muzammil. "Deep Learning Approach to Trespass Detection using Video Surveillance Data." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-theses/1292.

Abstract:
While railroad trespassing is a dangerous activity with significant security and safety risks, regular patrolling of potential trespassing sites is infeasible due to exceedingly high resource demands and personnel costs. There is thus a need to design an automated trespass detection and early warning prediction tool leveraging state-of-the-art machine learning techniques. Leveraging video surveillance through security cameras, this thesis designs a novel approach called ARTS (Automated Railway Trespassing detection System) that tackles the problem of detecting trespassing activity. In particular, we adopt a CNN-based deep learning architecture (Faster R-CNN) as the core component of our solution. However, these deep learning-based methods, while effective, are known to be computationally expensive and time consuming, especially when applied to a large amount of surveillance data. Given the sparsity of railroad trespassing activity, we design a dual-stage deep learning architecture composed of an inexpensive prefiltering stage for activity detection followed by a high-fidelity trespass detection stage for robust classification. The former is responsible for filtering out frames that show little to no activity, thereby reducing the amount of data to be processed by the latter, more compute-intensive stage, which adopts the state-of-the-art Faster R-CNN to ensure effective classification of trespassing activity. The resulting dual-stage architecture ARTS represents a flexible solution capable of trading off performance and computational time. We demonstrate the efficacy of our approach on a public domain surveillance dataset.
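A minimal sketch of the dual-stage idea follows: a cheap frame-differencing prefilter discards near-static frames, and only the surviving frames reach a Faster R-CNN detector. Here torchvision's stock model stands in for ARTS's trained network, and the video path and activity threshold are assumptions.

```python
import cv2
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def detect_people(frame_bgr, min_score=0.8):
    """Stage 2: run Faster R-CNN on one frame; COCO label 1 is 'person'."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = detector([tensor])[0]
    keep = (out["labels"] == 1) & (out["scores"] >= min_score)
    return out["boxes"][keep]

cap = cv2.VideoCapture("trackside_camera.mp4")  # hypothetical footage
ok, prev = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    # Stage 1: mean absolute pixel change between consecutive grey frames.
    activity = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                           cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)).mean()
    if activity > 5.0:  # assumed threshold; tune per camera
        boxes = detect_people(frame)  # expensive stage, active frames only
        if len(boxes):
            print("possible trespasser:", boxes.tolist())
    prev = frame
```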
23

Savadatti-Kamath, Sanmati S. "Video analysis and compression for surveillance applications." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26602.

24

Shaffer, Loren E. "Using pre-diagnostic data from veterinary laboratories to detect disease outbreaks in companion animals." The Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=osu1176376010.

25

Linder, Martin, and Tobias Nylin. "Pricing of radar data." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-104020.

Abstract:
In this thesis we examine the pricing of radar data and surveillance services for the operators of air navigation services (ANS) at Swedish aerodromes. We consider who should be responsible for providing radar data to the operators: LFV, as is the case today, the government, or another authority. The question arises because LFV lost its monopoly position in the Swedish terminal area in 2010. LFV still holds a monopoly on the en route segment and finances the radar data for all operators in Sweden through its en route income. Air traffic service units (ATS) currently receive the radar data without compensating LFV; this needs to be regulated, and conditions and prerequisites need to be put in place. Our supervisor at LFV, Anders Andersson, has been the primary source of information regarding the current situation and the background of the problem, and has also provided relevant documents. Laws and regulations have been accessed via the Swedish Transport Agency's website, and scientific articles on monopolies and pricing in aviation and other markets have been used to compare earlier issues similar to ours. The literature studies, combined with interviews with Anders Andersson, form the foundation for the development of the pricing schemes. The result of the thesis is presented as three different pricing schemes, each of which is presented in tables and analysed for how it will affect the ATS. In the first pricing scheme, the cost for maintenance is divided equally between all ATS; every ATS pays the same cost regardless of airport size, number of movements and net sales. The second pricing scheme is based on the number of landings per year and divides the ATS into three categories; the cost increases with the number of landings, so larger ATS are charged more than smaller ones. The final pricing scheme is divided into four categories based on terminal control area (TMA) and surveillance service requirements; the categories combine the median distance flown in the TMA with the different requirements under which the ATS must provide surveillance service. This pricing scheme is a disadvantage for the military airports and the ATS with associated TMAs. The conclusion is that the Swedish Transport Agency needs to implement distinct guidelines and regulations for how the pricing should be done; the pricing schemes and analysis in this thesis could form the basis for future investigations.
26

Hund, Lauren Brooke. "Survey Designs and Spatio-Temporal Methods for Disease Surveillance." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10346.

Abstract:
By improving the precision and accuracy of public health surveillance tools, we can improve cost-efficacy and obtain meaningful information to act upon. In this dissertation, we propose statistical methods for improving public health surveillance research. In Chapter 1, we introduce a pooled testing option for HIV prevalence estimation surveys to increase testing consent rates and subsequently decrease non-response bias. Pooled testing is less certain than individual testing, but if it leads more people to submit to testing, it should reduce the potential for non-response bias. In Chapter 2, we illustrate technical issues in the design of neonatal tetanus elimination surveys. We address identifying the target population; using binary classification via lot quality assurance sampling (LQAS); and adjusting the design for the sensitivity of the survey instrument. In Chapter 3, we extend LQAS survey designs for monitoring malnutrition to longitudinal surveillance programs; by combining historical information with data from previous surveys, we detect spikes in malnutrition rates, as illustrated in the sketch after this paragraph. Using this framework, we detect rises in malnutrition prevalence in longitudinal programs in Kenya and the Sudan. In Chapter 4, we develop a computationally efficient geostatistical disease mapping model that naturally handles model fitting issues due to temporal boundary misalignment by assuming that an underlying continuous risk surface induces spatial correlation between areas. We apply our method to assess socioeconomic trends in breast cancer incidence in Los Angeles between 1990 and 2000. In Chapter 5, we develop a statistical framework for addressing statistical uncertainty associated with denominator interpolation and with temporal misalignment in disease mapping studies. We propose methods for assessing the impact of the uncertainty in these predictions on health effects analyses. Then, we construct a general framework for spatial misalignment in regression.
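The LQAS classification mentioned above rests on a simple binomial decision rule, sketched below; the sample size, decision threshold and prevalence pair are invented for illustration.

```python
from scipy.stats import binom

def lqas_risks(n, d, p_high, p_low):
    """For the rule 'flag the lot if more than d of n sampled cases are
    positive': alpha = P(a truly high-prevalence lot passes unflagged),
    beta = P(a truly low-prevalence lot is falsely flagged)."""
    alpha = binom.cdf(d, n, p_high)
    beta = 1 - binom.cdf(d, n, p_low)
    return alpha, beta

# Hypothetical malnutrition survey: sample 50 children, flag if > 7 positive,
# aiming to separate 20% ('take action') from 10% ('acceptable') prevalence.
alpha, beta = lqas_risks(n=50, d=7, p_high=0.20, p_low=0.10)
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```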
27

Karlsson, David. "Electronic Data Capture for Injury and Illness Surveillance : A usability study." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-102737.

Abstract:
Despite the development of injury surveillance systems for use at large multi-sport events (Junge 2008), their implementation is still methodologically and practically challenging. Edouard (2013) and Engebretsen (2013) have pointed out that the context of athletics championships features unique constraints, such as a limited data-collection window and large amounts of data to be recorded and rapidly validated. To manage these logistical issues, Electronic Data Capture (EDC) methods have been proposed (Bjorneboe 2009, Alonso 2012, Edouard 2013). EDC systems have successfully been used for surveillance during multi-sport events (Derman et al. 2013), and their potential for surveillance studies during athletics championships is therefore interesting. The focus for surveillance during athletics championships has thus far been on injury and illness data collected from team medical staff in direct association with the competitions. But the most common injury and illness problems in athletics are overuse syndromes (Alonso 2009, Edouard 2012, Jacobsson 2013), and knowledge of risk factors associated with these problems is also relevant in association with championships. A desirable next step to extend the surveillance routines is therefore to also include pre-participation risk factors. For surveillance of overuse syndromes, online systems for athlete self-report of data on pain and other symptoms have been reported superior to reports from coaches (Shiff 2010). EDC systems have also been applied for athlete self-report of exposure and injury data in athletics and other individual sports, and have been found to be well accepted, with good efficiency (Jacobsson 2013, Clarsen 2013). There are thus reasons for investigating EDC system use by both athletes and team medical staff during athletics championships. This thesis used a cross-sectional design to collect qualitative data from athletes and team medical staff using interviews and "think-aloud" usability evaluation methods (Ericsson 1993; Kuusela 2000). It was performed over 3 days during the 2013 European Athletics Indoor Championships in Gothenburg, Sweden. Online EDC systems for collection of data from athletes and team medical staff, respectively, were prepared for the study. The system for use by team medical staff was intended to collect data on injuries and illnesses sustained during the championship, and the system for athletes to collect data on risk factors. This study does not provide a solution for how an EDC effort should be implemented during athletics championships. It does, however, point towards usability factors that need to be taken into consideration when taking such an approach.
28

Stykow, Henriette. "Small data on a large scale : Torn between convenience and surveillance." Thesis, Umeå universitet, Designhögskolan vid Umeå universitet, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-110630.

Abstract:
Technology has become an inherent part of our daily lives. If we don't want to abstain from the benefits technology brings, we have to acknowledge the fact that tech generates data and adjust our norms and habits to it. This thesis critiques how corporations and governmental institutions collect, store and analyze data about individuals. It discusses the economic and technological forces that stand behind the collection and usage of data in the past, today, and the near future. Beyond that, it alludes to political implications. The overarching goal is to stimulate reflection about culture and the future. To achieve that, the design of an interactive educational web story within the browser is proposed. A curated personal data platform in combination with interactive web stories makes data collection, data usage, and the risks of data aggregation visible. Business practices and interests are rendered transparent on the basis of users' actual online behavior and exposure. The web stories allow users to understand the meaning and value of the data traces they leave online. In five chapters, users experience the basic technologies of the Internet, business motivations, and surveillance practices in the context of their individual web browsing behavior. Each chapter invites them to explore details of the topic to accommodate individual needs and interest in the matter. A critical reflection on the future of data collection is encouraged, and tools and settings within the browser help users protect their digital identities.
29

Matusek, F. (Florian). "Selective privacy protection for video surveillance." Doctoral thesis, Oulun yliopisto, 2014. http://urn.fi/urn:isbn:9789526204154.

Abstract:
An unparalleled surge in video surveillance has occurred in recent years, due to tragic events such as terror attacks, bank robberies and the activities of organized crime. Video surveillance technology has advanced significantly, even enabling the automatic tracking of individuals. In the opinion of the public, however, the increase in security has brought about a decrease in personal privacy. Through video surveillance, citizens can be monitored more easily than ever before, considerably intruding into their personal privacy. It was assumed that security and privacy in video surveillance were a zero-sum game in which citizens were forced to choose one over the other. This study was based on the belief that this notion is false: it was assumed that it is possible to keep personal privacy while guaranteeing the utmost security. A solution to this issue was sought using Hevner's design science research guidelines and design science research cycles. A video surveillance system was designed and constructed that would protect the personal privacy of uninvolved individuals under surveillance while still providing a high level of security, namely the Privacy Enhancing Video Surveillance system (PEVS). PEVS protected the privacy of individuals by automatically scrambling the image regions where people were present in video streams. If a criminal act should take place, it was possible, with the proper authorization, to selectively unscramble the data of individuals of interest to analyze the situation. This made it possible to analyze the situation without intruding into the privacy of uninvolved people on the one hand, while on the other hand using the data as evidence of possible criminal activity. Hence, the privacy of individuals was protected while maintaining the same level of security. PEVS provided the first technology-based video surveillance solution which showed only relevant individuals in the image while leaving the identity of everyone else unrevealed. Therefore, the main contribution of this thesis was the construction of a novel approach to video surveillance systems, capable of selectively protecting the privacy of individuals. This included introducing an architecture for a privacy-preserving video surveillance system, consisting of several sub-constructs, including storage techniques for privacy data and shadow detection and segmentation methods, which increased the accuracy and speed of previous methods. Further, novel security and privacy metrics for video surveillance were introduced. The overall system was a significant improvement over the existing knowledge base, which has thus far seen only first steps towards selective privacy protection but has failed to provide a complete system.
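The selective-scrambling idea can be illustrated with a key-seeded, exactly invertible pixel permutation applied only to detected person regions; PEVS's actual scheme and key management are more sophisticated, and the frame, bounding box and key below are illustrative.

```python
import numpy as np

def scramble_region(frame, box, key):
    """Permute the pixels inside one bounding box (x0, y0, x1, y1) with a
    key-seeded permutation; only the key holder can invert it."""
    x0, y0, x1, y1 = box
    region = frame[y0:y1, x0:x1].reshape(-1, frame.shape[2])
    perm = np.random.default_rng(key).permutation(len(region))
    frame[y0:y1, x0:x1] = region[perm].reshape(y1 - y0, x1 - x0, -1)

def unscramble_region(frame, box, key):
    """Invert scramble_region given the proper authorization key."""
    x0, y0, x1, y1 = box
    region = frame[y0:y1, x0:x1].reshape(-1, frame.shape[2])
    perm = np.random.default_rng(key).permutation(len(region))
    frame[y0:y1, x0:x1] = region[np.argsort(perm)].reshape(y1 - y0, x1 - x0, -1)

frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
original = frame.copy()
scramble_region(frame, (40, 20, 80, 100), key=1234)    # person detected here
unscramble_region(frame, (40, 20, 80, 100), key=1234)  # authorized reversal
assert np.array_equal(frame, original)
```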
30

Kim, Kihwan. "Spatio-temporal data interpolation for dynamic scene analysis." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/47729.

Abstract:
Analysis and visualization of dynamic scenes is often constrained by the amount of spatio-temporal information available from the environment. In most scenarios, we have to account for incomplete information and sparse motion data, requiring us to employ interpolation and approximation methods to fill for the missing information. Scattered data interpolation and approximation techniques have been widely used for solving the problem of completing surfaces and images with incomplete input data. We introduce approaches for such data interpolation and approximation from limited sensors, into the domain of analyzing and visualizing dynamic scenes. Data from dynamic scenes is subject to constraints due to the spatial layout of the scene and/or the configurations of video cameras in use. Such constraints include: (1) sparsely available cameras observing the scene, (2) limited field of view provided by the cameras in use, (3) incomplete motion at a specific moment, and (4) varying frame rates due to different exposures and resolutions. In this thesis, we establish these forms of incompleteness in the scene, as spatio-temporal uncertainties, and propose solutions for resolving the uncertainties by applying scattered data approximation into a spatio-temporal domain. The main contributions of this research are as follows: First, we provide an efficient framework to visualize large-scale dynamic scenes from distributed static videos. Second, we adopt Radial Basis Function (RBF) interpolation to the spatio-temporal domain to generate global motion tendency. The tendency, represented by a dense flow field, is used to optimally pan and tilt a video camera. Third, we propose a method to represent motion trajectories using stochastic vector fields. Gaussian Process Regression (GPR) is used to generate a dense vector field and the certainty of each vector in the field. The generated stochastic fields are used for recognizing motion patterns under varying frame-rate and incompleteness of the input videos. Fourth, we also show that the stochastic representation of vector field can also be used for modeling global tendency to detect the region of interests in dynamic scenes with camera motion. We evaluate and demonstrate our approaches in several applications for visualizing virtual cities, automating sports broadcasting, and recognizing traffic patterns in surveillance videos.
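The RBF step, turning sparse motion samples into a dense flow field that represents the global motion tendency, can be sketched with SciPy; the sample positions, vectors and grid resolution below are invented for illustration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sparse motion samples: (x, y) positions and their 2-D motion vectors.
positions = np.array([[10, 10], [50, 15], [30, 40], [70, 60], [20, 70]], float)
vectors = np.array([[1.0, 0.2], [0.8, 0.1], [0.5, 0.9], [0.1, 1.1], [0.3, 0.7]])

# Thin-plate-spline RBF interpolation onto a dense grid.
rbf = RBFInterpolator(positions, vectors, kernel="thin_plate_spline")
xs, ys = np.meshgrid(np.arange(0, 80, 4), np.arange(0, 80, 4))
grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
dense_flow = rbf(grid).reshape(*xs.shape, 2)

print(dense_flow.shape)  # (20, 20, 2): a dense vector field over the scene
```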
31

Boying, Lu, Zhang Jun, Nie Shuhui, and Huang Xinjian. "AUTOMATIC DEPENDENT SURVEILLANCE (ADS) SYSTEM RESEARCH AND DEVELOPMENT." International Foundation for Telemetering, 2002. http://hdl.handle.net/10150/607495.

Abstract:
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California
This paper presents the basic concept, construction principle and implementation work for the Automatic Dependent Surveillance (ADS) system. As part of the ADS system, particular attention is given to the PC-based ADS message processing system. Furthermore, the paper introduces the status of ADS trials and points out that ADS implementation will bring tremendous economic and social benefits.
32

Minaeian, Sara. "Dynamic Data-Driven Visual Surveillance of Human Crowds via Cooperative Unmanned Vehicles." Diss., The University of Arizona, 2017. http://hdl.handle.net/10150/625649.

Abstract:
Visual surveillance of human crowds in a dynamic environment has attracted a great amount of computer vision research effort in recent years. Moving object detection, which conventionally includes motion segmentation and, optionally, object classification, is the first major task for any visual surveillance application. After detecting the targets, estimation of their geo-locations is needed to place them in a common reference coordinate system for higher-level decision-making. Depending on the required fidelity of decision, multi-target data association may also be needed at higher levels to differentiate multiple targets in a series of frames. Applying all these vision-based algorithms to a crowd surveillance system (a major application studied in this dissertation) using a team of cooperative unmanned vehicles (UVs) introduces new challenges: since the visual sensors move with the UVs, and thus the targets and the environment are dynamic, the complexity and uncertainty of the video processing increase, and the limited onboard computation resources require more efficient algorithms. Responding to these challenges, the goal of this dissertation is to design and develop an effective and efficient visual surveillance system, based on the dynamic data-driven application system (DDDAS) paradigm, to be used by cooperative UVs for autonomous crowd control and border patrol. The proposed visual surveillance system includes different modules: 1) a motion detection module, in which a new sliding-window-based method for detecting multiple moving objects is proposed to segment the moving foreground using the moving camera onboard the unmanned aerial vehicle (UAV); 2) a target recognition module, in which a customized method based on histograms of oriented gradients is applied to classify the human targets using the onboard camera of the unmanned ground vehicle (UGV); 3) a target geo-localization module, in which a new moving-landmark-based method is proposed for estimating the geo-location of the detected crowd from the UAV, while a heuristic method based on triangulation is applied for geo-locating the detected individuals via the UGV; and 4) a multi-target data association module, in which the affinity score is dynamically adjusted to comply with the changing dispersion of the detected targets over successive frames. In this dissertation, a cooperative team of one UAV and multiple UGVs with onboard visual sensors is used to take advantage of the complementary characteristics (e.g. different fidelities and view perspectives) of these UVs for crowd surveillance. The DDDAS paradigm is also applied to these vision-based modules, where the computational and instrumentation aspects of the application system are unified for more accurate or efficient analysis according to the scenario. To illustrate and demonstrate the proposed visual surveillance system, aerial and ground video sequences from the UVs, as well as simulation models, are developed, and experiments are conducted using them. The experimental results on both the developed videos and literature datasets reveal the effectiveness and efficiency of the proposed modules and their promising performance in the considered crowd surveillance application.
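The HOG-based human classification on the UGV can be approximated with OpenCV's stock HOG pedestrian detector; the frame path and detection parameters below are assumptions rather than the dissertation's customized method.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("ugv_frame.jpg")  # hypothetical ground-vehicle frame
# Sliding-window HOG + linear SVM; returns one box per detected person.
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)
for (x, y, w, h), score in zip(boxes, weights.ravel()):
    print(f"person at ({x}, {y}, {w}, {h}) with score {score:.2f}")
```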
APA, Harvard, Vancouver, ISO, and other styles
33

Mallya, Shruti. "Modelling Human Risk of West Nile Virus Using Surveillance and Environmental Data." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35734.

Full text
Abstract:
Limited research has been performed in Ontario to ascertain risk factors for West Nile Virus (WNV) and to develop a unified risk prediction strategy. The aim of the current body of work was to use spatio-temporal modelling in conjunction with surveillance and environmental data to determine which pre-season factors could forecast a high-risk WNV season, and to explore how well mosquito surveillance data could predict human cases in space and time during the WNV season. Generalized linear mixed modelling found that mean minimum monthly temperature variables and annual WNV-positive mosquito pools were the most significant predictors of the number of human WNV cases (p<0.001). Spatio-temporal cluster analysis found that positive mosquito pool clusters could predict human case clusters up to one month in advance. These results demonstrate the usefulness of mosquito surveillance data, as well as publicly available climate data, for assessing risk and informing public health practice.
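As a sketch of the kind of count regression this abstract describes, the snippet below fits a Poisson GLM of monthly human WNV cases on mean minimum monthly temperature and positive mosquito pools. The data are simulated, and the random effects of the full generalized linear mixed model are omitted.

```python
# Minimal count-regression sketch: monthly human WNV cases modelled on
# pre-season minimum temperature and positive mosquito pools.
# Data and variable names are illustrative, not the thesis's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cases": rng.poisson(3, 60),        # monthly human case counts
    "min_temp": rng.normal(5, 4, 60),   # mean minimum monthly temperature
    "pos_pools": rng.poisson(10, 60),   # WNV-positive mosquito pools
})

# Poisson GLM as a stand-in for the generalized linear mixed model;
# a random effect for health unit would be added in the full analysis.
model = smf.glm("cases ~ min_temp + pos_pools", data=df,
                family=sm.families.Poisson()).fit()
print(model.summary())
```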
APA, Harvard, Vancouver, ISO, and other styles
34

Vrbova, Linda. "Use of animal data in public health surveillance for emerging zoonotic diseases." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44254.

Full text
Abstract:
Infectious agents transmitted between animals and humans (zoonoses) are important causes of emerging infectious diseases with major societal, economic, and public health implications. In order to prevent and control emerging zoonotic diseases (EZDs), they should ideally be identified in animals before they affect the human population. The utility of animal data for public health EZD surveillance was investigated in this thesis in four studies: a systematic literature review of current EZD surveillance systems and three critical examinations of pilot agricultural animal health surveillance systems. The first critical examination used expert-elicited criteria of EZD surveillance needs to evaluate a sentinel clinical pre-diagnostic system. The other two studies used statistical modeling to assess the ability of a laboratory-based system and an integrated system with both human and animal data to detect known patterns and outbreaks. The systematic review identified few evaluated surveillance systems, hence an evidence base for successful systems could not be obtained. Experts identified diagnostic data from laboratories and information on potential human exposures as important for public health action. While the sentinel animal surveillance system was not deemed useful on its own, identified gaps and biases in laboratory submissions suggest that sentinel veterinarians could inform animal laboratory surveillance. Seasonal trends and expected events of public health importance were identified in animal diagnostic laboratory data, however, statistical surveillance in either pre-diagnostic or diagnostic data streams did not provide adequate early warning signals for action. While the integrated surveillance for Salmonella bacteria allowed for the examination of the relationship between human and animal data, statistical alerts did not correlate with expert-identified investigations. Laboratory surveillance is likely the best candidate for EZD surveillance in animals, however, this information needs to be supplemented with potential human exposure information, as well as knowledge of data gaps and biases inherent in the data. Without this additional risk information to convert the animal data into risk for humans, the best use of animal laboratory data at this time is to help generate hypotheses in epidemiological investigations and in helping evaluate programs by examining longer-term trends.
APA, Harvard, Vancouver, ISO, and other styles
35

Findlater, Aidan. "Climate variability and leishmaniasis in Peru: an exploratory analysis of surveillance data." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106303.

Full text
Abstract:
Introduction: Mean global temperature is expected to increase by 0.2°C in the next decade. The human health impacts of this increase are uncertain, but may include increases in host or vector populations of vector-borne diseases. This study was undertaken to evaluate the effect of sea surface temperature (SST) variability on cases of leishmaniasis, a vector-borne disease, in Peru, and to investigate ways of modelling the relationship. Methods: Monthly incidence data for cutaneous and mucocutaneous leishmaniasis, by region, for 2002-2008 were combined with SST anomaly data from either El Niño index regions 1 and 2 (ENI1+2) or from fixed coastal monitoring stations. Several Bayesian models were compared, using the deviance information criterion (DIC) as a measure of model fit. The lag between an increase in SST and the resulting increase or decrease in leishmaniasis incidence was chosen by minimizing the DIC. Results: The model that gave the best fit was a hierarchical negative binomial model that used a linear trend term and a cosine function to model the disease incidence. In this model, each degree Celsius above the expected temperature at the SST monitoring stations led to an increase in the national incidence of cutaneous leishmaniasis of 5% five months later (risk ratio 1.05; 95% CI 0.93 to 1.18); each degree Celsius above the expected temperature of ENI1+2 led to an increase in the national incidence of cutaneous leishmaniasis of 26% six months later (risk ratio 1.26; 95% CI 1.18 to 1.35). Department-specific estimates of effect varied much more in the SST station model than in the ENI1+2 model, with departments in the north-east showing generally higher risk ratios than those in the south. Results for mucocutaneous leishmaniasis had much wider confidence intervals, but followed the same overall patterns as cutaneous leishmaniasis. Conclusions: Hierarchical negative binomial models that model seasonality with a cosine function provide a good fit for leishmaniasis data. Our results indicate that regional warming may contribute to increased transmission of leishmaniasis in Peru.
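A minimal sketch of the best-fitting model structure described above (linear trend, 12-month cosine seasonality, and a lagged SST anomaly), using a frequentist negative binomial GLM as a stand-in for the thesis's Bayesian hierarchical fit. The data, the six-month lag handling, and the variable names are illustrative.

```python
# Seasonal count-model sketch: negative binomial regression with a
# linear trend, a 12-month cosine/sine pair, and a 6-month-lagged SST
# anomaly; exp(coefficient) is read as a risk ratio per degree Celsius.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 84  # seven years of monthly counts
t = np.arange(n)
df = pd.DataFrame({
    "cases": rng.poisson(20 + 5 * np.cos(2 * np.pi * t / 12), n),
    "t": t,
    "sst_anom": rng.normal(0, 1, n),   # SST anomaly (e.g. ENI1+2)
})
df["cos12"] = np.cos(2 * np.pi * df["t"] / 12)
df["sin12"] = np.sin(2 * np.pi * df["t"] / 12)
df["sst_lag6"] = df["sst_anom"].shift(6)  # lag chosen by minimizing DIC

fit = smf.glm("cases ~ t + cos12 + sin12 + sst_lag6",
              data=df.dropna(),
              family=sm.families.NegativeBinomial()).fit()
print(np.exp(fit.params["sst_lag6"]))  # risk ratio per degree Celsius
```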
APA, Harvard, Vancouver, ISO, and other styles
36

Cadieux, Geneviève. "Assessing and improving the accuracy of surveillance case definitions using administrative data." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103456.

Full text
Abstract:
BACKGROUND Keeping pace with the rapidly evolving demands of infectious disease monitoring requires constant advances in surveillance methodology and infrastructure. A promising new method is syndromic surveillance, where health department staff, assisted by automated data acquisition and statistical alerts, monitor health indicators in near real-time. Several syndromic surveillance systems use diagnoses in administrative databases. However, physician claim diagnoses are not audited, and the effect of diagnostic coding variation on surveillance case definitions is not known. Furthermore, syndromic surveillance systems are limited by high false-positive (FP) rates. Almost no effort has been made to reduce FP rates by improving the positive predictive value (PPV) of surveilled data. OBJECTIVES 1) To evaluate the feasibility of identifying syndrome cases using diagnoses in physician claims. 2) To assess the accuracy of syndrome definitions based on diagnoses in physician claims. 3) To identify physician, patient, encounter and billing characteristics associated with the PPV of syndrome definitions. METHODS & RESULTS STUDY 1: We focused on a subset of diagnoses from a single syndrome (respiratory). We compared cases and non-cases identified from physician claims to medical charts. A convenience sample of 9 Montreal-area family physicians participated. 3,526 visits among 729 patients were abstracted from medical charts and linked to physician claims. The sensitivity and PPV of physician claims for identifying respiratory infections were 0.49, 95%CI (0.45, 0.53) and 0.93, 95%CI (0.91, 0.94). This pilot work demonstrated the feasibility of the proposed method and contributed to planning a full-scale validation of several syndrome definitions. STUDY 2: We focused on 5 syndromes: fever, gastrointestinal, neurological, rash, and respiratory. We selected a random sample of 3,600 physicians practicing in the province of Quebec in 2005-2007, then a stratified random sample of 10 visits per physician from their claims. We obtained chart diagnoses for all sampled visits through double-blinded chart reviews. Sensitivity, specificity, PPV, and negative predictive value (NPV) of syndrome definitions based on diagnoses in physician claims were estimated by comparison to chart review. 1,098 (30.5%) physicians completed the chart review and 10,529 visits were validated. The sensitivity of syndrome definitions ranged from 0.11, 95%CI (0.10, 0.13) for fever to 0.44, 95%CI (0.41, 0.47) for respiratory syndrome. The specificity and NPV were high for all syndromes. The PPV ranged from 0.59, 95%CI (0.55, 0.64) for fever to 0.85, 95%CI (0.83, 0.88) for respiratory syndrome. STUDY 3: We focused on the 4,330 syndrome cases identified from the claims of the 1,098 physicians who participated in study 2. We estimated the association between claim-chart agreement and physician, patient, encounter and billing characteristics using multivariate logistic regression. The likelihood of the medical chart agreeing with the physician claim about the presence of a syndrome was higher when the physician had billed many visits for the same syndrome recently (RR per 10 visits, 1.05; 95%CI, 1.01-1.08), had a lower workload (RR per 10 claims, 0.93; 95%CI, 0.90-0.97), and when the patient was younger (RR per 5 years, 0.96; 95%CI, 0.94-0.97) and less socially deprived (RR most vs least deprived, 0.76; 95%CI, 0.60-0.95). 
CONCLUSIONS This was the first population-based validation of syndromic surveillance case definitions based on diagnoses in physician claims. We found that the sensitivity of syndrome definitions was low, the PPV was moderate to high, and the specificity and NPV were high. We identified several physician, patient, encounter and billing characteristics associated with the PPV of syndrome definitions, many of which are readily accessible to public health departments and could be used to reduce the FP rate of syndromic surveillance systems.
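The accuracy measures reported in the studies above all derive from a 2x2 table of claim diagnoses against chart diagnoses; a minimal sketch follows, with illustrative counts only.

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 validation table
# of physician-claim diagnoses against medical-chart diagnoses.
def accuracy_measures(tp, fp, fn, tn):
    """Return the four accuracy measures from a 2x2 validation table."""
    return {
        "sensitivity": tp / (tp + fn),  # chart cases the claims find
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # claim positives the chart confirms
        "npv": tn / (tn + fn),
    }

# Illustrative counts only, not the study's data.
print(accuracy_measures(tp=480, fp=85, fn=610, tn=9354))
```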
APA, Harvard, Vancouver, ISO, and other styles
37

Herrmann, Christian [Verfasser]. "Video-to-Video Face Recognition for Low-Quality Surveillance Data / Christian Herrmann." Karlsruhe : KIT Scientific Publishing, 2018. http://www.ksp.kit.edu.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Matzopoulos, Richard. "The body count : using routine mortality surveillance data to drive violence prevention." Doctoral thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/12645.

Full text
Abstract:
Includes bibliographical references.
This thesis describes the conceptualisation, development and implementation of a mortuary-based system for the routine collection of information about homicide. It traces the evolution of the system from its conceptualisation in 1994, through various iterations as a city-level research tool, to a national sentinel system pilot, as a multicity all-injury surveillance system, and finally its institutionalisation as a provincial injury mortality surveillance system in the Western Cape. In so doing, it demonstrates that the data arising from medico-legal post-mortem investigations described in this thesis were an important source of descriptive epidemiological information on homicide. The 37,037 homicide records described in the thesis were drawn from Cape Town, Durban, Johannesburg, Port Elizabeth and Pretoria, for which the surveillance system maintained full coverage from 2001 to 2005. The aim was to apply more complex statistical analysis and modelling than had been applied previously.
APA, Harvard, Vancouver, ISO, and other styles
39

Quarranttey, George K. "Falls and Related Injuries Based on Surveillance Data: U.S. Hospital Emergency Departments." ScholarWorks, 2016. https://scholarworks.waldenu.edu/dissertations/2011.

Full text
Abstract:
Falls can lead to unintentional injuries and possibly death, making falls an important public health problem in terms of related health care costs, incurred disabilities, and years of life lost. Approximately 1 in every 3 Americans ages 65 years and older is at risk of falling at least once every year. Children, young adults, and middle-aged adults are also vulnerable to falls. The purpose of this study was to examine the epidemiology of falls and fall-related injuries using surveillance data from nationally representative samples of hospital emergency departments in the United States. The study was guided by a social-ecological model on the premise that multiple levels of risk factors affect health. Using a cross-sectional design and archival data from NEISS-AIP between 2009 and 2011, the results of multiple logistic regression indicated that age, gender, race, and body part affected were significantly associated with hospitalization due to falls (p < .001), and that incident locale independently predicted hospitalization due to falls, where hospitalization was considered a proxy measure of fall severity. The odds ratios for fall injuries were (a) older adults versus children, 1.07 (95% CI: 1.05-1.08); (b) males versus females, 1.23 (95% CI: 1.21-1.26); (c) Blacks versus Whites, 2.12 (95% CI: 2.11-2.13); (d) body part extremities versus head area, 0.98 (95% CI: 0.97-0.99); and (e) outside home versus inside home, 1.14 (95% CI: 1.13-1.15). The results of this study may be important in forming and implementing age-specific prevention strategies and specialized safety training programs for all age groups, thereby reducing deaths, disabilities, and the considerable health care costs associated with hospitalization due to fall-related injuries.
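A sketch of the analysis pattern described above: a logistic regression for hospitalization whose exponentiated coefficients are read as odds ratios with 95% confidence intervals. The data frame and predictor names are simulated stand-ins, not NEISS-AIP variables.

```python
# Logistic regression producing odds ratios with 95% CIs,
# on simulated binary predictors of hospitalization.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "hospitalized": rng.integers(0, 2, 2000),
    "older_adult": rng.integers(0, 2, 2000),
    "male": rng.integers(0, 2, 2000),
    "outside_home": rng.integers(0, 2, 2000),
})

fit = smf.logit("hospitalized ~ older_adult + male + outside_home",
                data=df).fit(disp=False)
odds_ratios = np.exp(fit.params)     # exponentiate to the OR scale
ci = np.exp(fit.conf_int())          # 95% confidence intervals on the ORs
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```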
APA, Harvard, Vancouver, ISO, and other styles
40

Kim, Youngho. "A surveillance modeling and ecological analysis of urban residential crimes in Columbus, Ohio, using Bayesian Hierarchical data analysis and new space-time surveillance methodology." The Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=osu1186607028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Wiliem, Arnold. "Robust suspicious behaviour detection for smart surveillance systems." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/41567/1/Arnold_Wiliem_Thesis.pdf.

Full text
Abstract:
Video surveillance technology, based on Closed Circuit Television (CCTV) cameras, is one of the fastest growing markets in the field of security technologies. However, existing video surveillance systems are still not at a stage where they can be used for crime prevention. The systems rely heavily on human observers and are therefore limited by factors such as fatigue and monitoring capabilities over long periods of time. To overcome this limitation, it is necessary to have “intelligent” processes which are able to highlight the salient data and filter out normal conditions that do not pose a threat to security. In order to create such intelligent systems, an understanding of human behaviour, specifically suspicious behaviour, is required. One of the challenges in achieving this is that human behaviour can only be understood correctly in the context in which it appears. Although context has been exploited in the general computer vision domain, it has not been widely used in the automatic suspicious behaviour detection domain. It is therefore essential that context be formulated, stored and used by the system in order to understand human behaviour. Finally, since surveillance systems can be modelled as large-scale data stream systems, it is difficult to have a complete knowledge base. In this case, the systems need to not only continuously update their knowledge but also be able to retrieve the extracted information related to the given context. To address these issues, a context-based approach for detecting suspicious behaviour is proposed, in which contextual information is exploited in order to make a better detection. The proposed approach utilises a data stream clustering algorithm to discover the behaviour classes and their frequencies of occurrence from the incoming behaviour instances. Contextual information is then used in addition to the above information to detect suspicious behaviour. The proposed approach is able to detect observed, unobserved and contextual suspicious behaviour. Two case studies using video feeds taken from the CAVIAR dataset and the Z-block building, Queensland University of Technology, are presented in order to test the proposed approach. These experiments show that, by using information about context, the proposed system is able to make a more accurate detection, especially of those behaviours which are suspicious only in some contexts while being normal in others. Moreover, this information gives critical feedback to system designers to refine the system. Finally, the proposed modified Clustream algorithm enables the system both to continuously update its knowledge and to effectively retrieve the information learned in a given context. The outcomes from this research are: (a) a context-based framework for automatically detecting suspicious behaviour which can be used by an intelligent video surveillance system in making decisions; (b) a modified Clustream data stream clustering algorithm which continuously updates the system knowledge and is able to retrieve contextually related information effectively; and (c) an update-describe approach which extends the capability of existing human local motion features, called interest-point-based features, to the data stream environment.
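As a rough sketch of the stream-clustering idea behind the modified Clustream algorithm mentioned above (heavily simplified: no temporal decay, merging, or context store), micro-clusters keep running sums so behaviour classes and their frequencies of occurrence update with each incoming instance.

```python
# Micro-cluster stream clustering sketch: each cluster keeps a count,
# a linear sum and a square sum, so centroids update incrementally
# as behaviour-feature vectors arrive from the video stream.
import numpy as np

class MicroCluster:
    def __init__(self, x):
        self.n, self.ls, self.ss = 1, x.copy(), x * x

    def add(self, x):
        self.n += 1
        self.ls += x
        self.ss += x * x

    @property
    def centroid(self):
        return self.ls / self.n

def update(clusters, x, radius=1.0):
    """Assign feature vector x to the nearest micro-cluster, or open a
    new cluster if none is close enough (radius is illustrative)."""
    if clusters:
        d = [np.linalg.norm(c.centroid - x) for c in clusters]
        i = int(np.argmin(d))
        if d[i] <= radius:
            clusters[i].add(x)
            return clusters
    clusters.append(MicroCluster(x))
    return clusters

clusters = []
for x in np.random.default_rng(3).normal(size=(200, 4)):  # simulated features
    clusters = update(clusters, x)
print(len(clusters), [c.n for c in clusters][:5])  # behaviour-class counts
```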
APA, Harvard, Vancouver, ISO, and other styles
42

Szarka, John Louis III. "Surveillance of Negative Binomial and Bernoulli Processes." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/26617.

Full text
Abstract:
The evaluation of discrete processes is performed for industrial and healthcare applications. Count data may be used to measure the number of defective items in industrial applications or the incidence of a certain disease at a health facility. Another classification of a discrete random variable is binary data, where information on an item can be classified as conforming or nonconforming in a manufacturing context, or a patient's status of having a disease in health-related applications. The first phase of this research uses discrete count data modeled from the Poisson and negative binomial distributions in a healthcare setting. Syndromic counts are currently monitored by the BioSense program within the Centers for Disease Control and Prevention (CDC) to provide real-time biosurveillance. The Early Aberration Reporting System (EARS) uses recent baseline information comparatively with a current day's syndromic count to determine if outbreaks may be present. An adaptive threshold method is proposed based on fitting baseline data to a parametric distribution, then calculating an upper-tailed p-value. These statistics are then converted to approximately standard normal random variables. Monitoring is examined for independent and identically distributed data as well as data following several seasonal patterns. An exponentially weighted moving average (EWMA) chart is also used for these methods. The effectiveness of these methods in detecting simulated outbreaks is evaluated in several sensitivity analyses. The second phase of research explored in this dissertation considers information that can be classified as a binary event. In industry, it is desirable to have the probability of a nonconforming item, p, be extremely small. Traditional Shewhart charts, such as the p-chart, are not reliable for monitoring this type of process. A comprehensive literature review of control chart procedures for this type of process is given. The equivalence between two cumulative sum (CUSUM) charts, based on geometric and Bernoulli random variables, is explored. An evaluation of the unit and group-runs (UGR) chart is performed, where it is shown that the in-control behavior of this chart is quite misleading, and the chart should not be recommended to practitioners.
Ph. D.
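A minimal sketch of the adaptive threshold idea described in the abstract, under the assumption of Poisson-distributed baseline counts: today's count becomes an upper-tailed p-value, which is mapped to an approximately standard normal score and tracked with an EWMA. All counts and the smoothing constant are illustrative.

```python
# Adaptive threshold sketch: fit baseline counts to a Poisson
# distribution, convert each new count to an upper-tailed p-value,
# map it to a standard normal score, and smooth the scores with an EWMA.
import numpy as np
from scipy import stats

baseline = np.random.default_rng(4).poisson(12, 28)  # recent daily counts
lam = baseline.mean()                                # Poisson fit (MLE)

def z_score(count, lam):
    p_upper = stats.poisson.sf(count - 1, lam)  # P(X >= count)
    return stats.norm.ppf(1.0 - p_upper)        # standard normal transform

def ewma(z_values, weight=0.2):
    s, out = 0.0, []
    for z in z_values:
        s = weight * z + (1.0 - weight) * s
        out.append(s)
    return out

todays = [11, 14, 19, 25, 13]                    # incoming daily counts
scores = [z_score(c, lam) for c in todays]
print(ewma(scores))                              # signal if this crosses a limit
```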
APA, Harvard, Vancouver, ISO, and other styles
43

Cakici, Baki. "Disease surveillance systems." Licentiate thesis, KTH, Programvaru- och datorsystem, SCS, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-33661.

Full text
Abstract:
Recent advances in information and communication technologies have made the development and operation of complex disease surveillance systems technically feasible, and many systems have been proposed to interpret diverse data sources for health-related signals. Implementing these systems for daily use and efficiently interpreting their output, however, remains a technical challenge. This thesis presents a method for understanding disease surveillance systems structurally, examines four existing systems, and discusses the implications of developing such systems. The discussion is followed by two papers. The first paper describes the design of a national outbreak detection system for daily disease surveillance. It is currently in use at the Swedish Institute for Communicable Disease Control. The source code has been licensed under GNU v3 and is freely available. The second paper discusses methodological issues in computational epidemiology, and presents the lessons learned from a software development project in which a spatially explicit micro-meso-macro model for the entire Swedish population was built based on registry data.
APA, Harvard, Vancouver, ISO, and other styles
44

Beyan, Cigdem. "Object Tracking For Surveillance Applications Using Thermal And Visible Band Video Data Fusion." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612743/index.pdf.

Full text
Abstract:
Individual tracking of objects in video, such as people and the luggage they carry, is important for surveillance applications, as it would enable deduction of higher-level information and timely detection of potential threats. However, this is a challenging problem, and many studies in the literature track people and their belongings as a single object. In this thesis, we propose using thermal band video data in addition to visible band video data for tracking people and their belongings separately in indoor applications, using their heat signatures. For the object tracking step, an adaptive, fully automatic multi-object tracking system based on the mean-shift tracking method is proposed. Trackers are refreshed using foreground information to overcome possible problems which may occur due to changes in an object's size and shape, to handle occlusions and splits, and to detect newly emerging objects as well as objects that leave the scene. By using the trajectories of objects, the owners of objects are found and abandoned objects are detected to generate an alarm. Better tracking performance is also achieved compared to a single modality, as the thermal reflection and halo effect, which adversely affect tracking, are eliminated by the complementary visible band data.
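For reference, a single-modality sketch of the mean-shift tracking step named above, using OpenCV's backprojection-based tracker; the thesis's contribution (thermal/visible fusion and tracker refreshing) sits on top of this and is not shown. The file name and initial window are hypothetical.

```python
# Mean-shift tracking sketch: a hue histogram of the object's initial
# window is back-projected into each new frame, and cv2.meanShift
# relocates the window toward the densest back-projection region.
import cv2

cap = cv2.VideoCapture("indoor_feed.mp4")       # hypothetical file name
ok, frame = cap.read()
assert ok, "could not read the first frame"
x, y, w, h = 300, 200, 60, 120                  # illustrative initial window
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, (x, y, w, h) = cv2.meanShift(backproj, (x, y, w, h), term)
cap.release()
```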
APA, Harvard, Vancouver, ISO, and other styles
45

Tam, Yat-hung, and 譚一鴻. "Can automated alerts generated from influenza surveillance data reduce institutional outbreaks in Hong Kong." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B3972492X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Meyer, Mark A. (Mark Aaron). "Use of location data for the surveillance, analysis, and optimization of clinical processes." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35517.

Full text
Abstract:
Thesis (S.M.)--Harvard-MIT Division of Health Sciences and Technology, 2006.
Includes bibliographical references (leaves 33-35).
Location tracking systems in healthcare produce a wealth of data applicable across many aspects of care and management. However, since dedicated location tracking systems, such as the oft-mentioned RFID tracking systems, are still sparsely deployed, a number of other data sources may be utilized as proxies for physical location, such as barcodes and manual timestamp entry, and may be better suited to indicate progress through clinical workflows. INCOMING!, a web-based platform that monitors and tracks patient progress from the operating room to the post-anesthesia care unit (PACU), is one such system; it utilizes manual timestamps routinely entered as part of the standard process of care in the operating room in order to track a patient's progress through the post-operative period. This integrated real-time system facilitates patient flow between the PACU and the surgical ward and eases PACU workload by reducing the effort of discharging patients.
We have also developed a larger-scale integrated system for perioperative processes that integrates perioperative data from anesthesia and surgical devices and operating room (OR) / hospital information systems, and projects the real-time integrated data as a single, unified, easy-to-visualize display. The need to optimize perioperative throughput creates a demand for integration of the data streams and for timely data presentation. The system provides improved context-sensitive information display, improved real-time monitoring of physiological data, real-time access to readiness information, and improved workflow management. These systems provide improved data access and utilization, supporting context-aware applications in healthcare that are aware of a user's location, environment, needs, and goals.
by Mark A. Meyer.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
47

Marzuki, Marza Ihsan. "VMS data analyses and modeling for the monitoring and surveillance of Indonesian fisheries." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0012/document.

Full text
Abstract:
Monitoring, control and surveillance (MCS) of marine fisheries are critical issues for the sustainable management of fishery resources. In this thesis we investigate the space-based monitoring of fishing vessel activities using Vessel Monitoring System (VMS) trajectory data in the context of the INDESO project (2013-2017). Our general objective is to develop a processing chain for VMS data in order to: i) perform a follow-up of the fishing effort of the Indonesian longline fleets, and ii) detect illegal fishing activities and assess their importance. The proposed approach relies on classical latent class models, namely Gaussian Mixture Models (GMM) and Hidden Markov Models (HMM), with a view to identifying elementary fishing vessel behaviours, such as travelling, searching and fishing activities, in an unsupervised framework. Following state-of-the-art approaches, we consider different parameterizations of these models with a specific focus on Indonesian longliners, for which we can benefit from at-sea observers' data to proceed to a quantitative evaluation. We then exploit these statistical models for two different objectives: a) the discrimination of different fishing fleets from fishing vessel trajectories, with application to the detection and assessment of illegal fishing activities, and b) the assessment of a spatialized fishing effort from VMS data. We report a good recognition rate (about 97%) for the former task, and our experiments support the potential for an operational exploration of the proposed approach. Due to limited at-sea observers' data, only preliminary analyses could be carried out for the proposed VMS-derived fishing effort. Beyond potential methodological developments, this thesis emphasizes the importance of high-quality and representative at-sea observer data for further developing the exploitation of VMS data for both research and operational issues.
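A compact sketch of the latent-class idea described above: an unsupervised Gaussian HMM over VMS-derived speed and turning features, whose hidden states can be read as travelling / searching / fishing. The features are simulated, and in practice the mapping from states to behaviours would rely on at-sea observer data.

```python
# Gaussian HMM over per-ping (speed, turning angle) features; the
# hidden states are interpreted as elementary vessel behaviours.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(5)
# Simulated (speed in knots, turning angle in radians) per VMS ping.
speeds = np.concatenate([rng.normal(9, 1, 300),      # travelling
                         rng.normal(4, 1, 300),      # searching
                         rng.normal(1.5, 0.5, 300)]) # fishing (set/haul)
turns = rng.normal(0, 0.5, 900)
X = np.column_stack([speeds, turns])

model = hmm.GaussianHMM(n_components=3, covariance_type="full",
                        n_iter=100, random_state=0)
model.fit(X)
states = model.predict(X)     # most likely behaviour label per ping
print(model.means_[:, 0])     # mean speed of each hidden state
```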
APA, Harvard, Vancouver, ISO, and other styles
48

Fraker, Shannon E. "Evaluation of Scan Methods Used in the Monitoring of Public Health Surveillance Data." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/29511.

Full text
Abstract:
With the recent increase in the threat of biological terrorism, as well as the continual risk of other diseases, research in public health surveillance and disease monitoring has grown tremendously. There is an abundance of data available in all sorts of forms. Hospitals, federal and local governments, and industries are all collecting data and developing new methods to be used in the detection of anomalies. Many of these methods are developed, applied to a real data set, and incorporated into software. This research, however, takes a different view of the evaluation of these methods. We feel that there needs to be a solid statistical evaluation of proposed methods, no matter the intended area of application. Proof-by-example does not seem reasonable as the sole evaluation criterion, especially concerning methods that have the potential to have a great impact on our lives. For this reason, this research focuses on determining the properties of some of the most common anomaly detection methods. A distinction is made between metrics used for retrospective historical monitoring and those used for prospective on-going monitoring, with the focus on the latter situation. Metrics such as the recurrence interval and time-to-signal measures are therefore the most applicable. These metrics, in conjunction with control charts such as exponentially weighted moving average (EWMA) charts and cumulative sum (CUSUM) charts, are examined. Two new time-to-signal measures, the average time between signal events and the average signal event length, are introduced to better compare the recurrence interval with the time-to-signal properties of surveillance schemes. The relationship commonly thought to exist between the recurrence interval and the average time to signal is shown not to hold once autocorrelation is present in the statistics used for monitoring. This means that closer consideration needs to be paid to the selection of which of these metrics to report. The properties of a commonly applied scan method are also studied carefully in the strictly temporal setting. The counts of incidences are assumed to occur independently over time and to follow a Poisson distribution. Simulations are used to evaluate the method under changes in various parameters. In addition, there are two methods proposed in the literature for the calculation of the p-value: an adjustment based on the tests for previous time periods, and the use of the recurrence interval with no adjustment for previous tests. The difference between these two methods is also considered. The quickness of the scan method in detecting an increase in the incidence rate, the number of false alarm events that occur, and how long the method continues to signal after the increased threat has passed are all of interest. These estimates from the scan method are compared to those of other attribute monitoring methods, mainly the Poisson CUSUM chart. It is shown that the Poisson CUSUM chart is typically faster in detecting the increased incidence rate.
Ph. D.
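For the Poisson CUSUM chart used as a comparator above, a minimal sketch follows; the in-control and shifted rates and the decision limit h are illustrative.

```python
# Upper Poisson CUSUM sketch: counts accumulate against a reference
# value k tuned between the in-control rate lam0 and the shifted rate
# lam1 that the chart should detect quickly; it signals when the
# cumulative statistic exceeds the decision limit h.
import numpy as np

def poisson_cusum(counts, lam0, lam1, h):
    """Return the index of the first signal, or None if none occurs."""
    k = (lam1 - lam0) / np.log(lam1 / lam0)  # reference value
    s = 0.0
    for t, x in enumerate(counts):
        s = max(0.0, s + x - k)
        if s > h:
            return t
    return None

rng = np.random.default_rng(6)
counts = np.concatenate([rng.poisson(5, 50),   # in-control incidence
                         rng.poisson(9, 50)])  # rate increase at t = 50
print(poisson_cusum(counts, lam0=5, lam1=9, h=4))
```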
APA, Harvard, Vancouver, ISO, and other styles
49

Zhang, Yuzhou. "Using big data to enhance pertussis surveillance and response in Shandong Province, China." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/206172/1/Yuzhou_Zhang_Thesis.pdf.

Full text
Abstract:
Pertussis imposes a substantial global health burden and has reportedly resurged in many countries in recent years. This thesis used big data to predict pertussis infection in Shandong Province, China. The research quantified the associations of internet query data and socio-environmental factors with pertussis infection, and developed spatial and temporal predictive models based on big data. The findings of this thesis may enhance traditional pertussis surveillance and response via the development of an early warning system based on big data.
APA, Harvard, Vancouver, ISO, and other styles
50

Tierney, Nicholas J. "Statistical approaches to revealing structure in complex health data." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/115536/1/Nicholas_Tierney_Thesis.pdf.

Full text
Abstract:
This thesis develops approaches for the modelling and analysis of structured data, considering two case-study datasets with features such as missing data, irregular time periods, workplace grouping, and spatial observations at different spatial scales. The models developed helped to create individual health risk profiles over time, obtain ideal locations of health facilities to maximise their coverage, and evaluate the impact of health facility access. Overall, this thesis makes substantive contributions that extend models to account for data structures and provides corresponding new software tools, improving health surveillance and health resource usage, in the hope of improving the health of the public.
APA, Harvard, Vancouver, ISO, and other styles