Dissertations / Theses on the topic 'Plant maintenance Data processing'

To see the other types of publications on this topic, follow the link: Plant maintenance Data processing.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Plant maintenance Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Liu, Bin. "Optimization strategies for data warehouse maintenance in distributed environments." Link to electronic thesis, 2002. http://www.wpi.edu/Pubs/ETD/Available/etd-0430102-133814.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Numanovic, Kerim. "Advanced Clinical Data Processing: A Predictive Maintenance Model for Anesthesia Machines." Thesis, KTH, Tillämpad fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-283323.

Full text
Abstract:
The maintenance of medical devices is of great importance to ensure that the devices are stable, well-functioning, and safe to use. The current method of maintenance, called preventive maintenance, has its advantages but can be problematic from both an operator's and a manufacturer's side. Developing a model that will predict failure in anesthesia machines can be of great use for the manufacturer, the customers, and the patients. This thesis sets out to examine the possibility of creating a predictive maintenance model for anesthesia machines by utilizing device data and machine learning. It also investigates the influence of the data on model performance and compares different lag sizes and future horizons against model performance. The time-series data collected came from 87 unique devices, and a specific test was chosen as the output variable of the model. A whole pipeline was created, which included pre-processing of the data, feature engineering, and model development. Feature extraction was done on the time-series data with the help of a library called tsfresh, which transformed time-series characteristics into features that would enable supervised learning. Two models were developed: logistic regression and XGBoost. The logistic regression model acted as a baseline and its performance was, as expected, quite poor. XGBoost yielded an AUCPR score of 0.21 on the full dataset and 0.32 on a downsampled dataset. Although quite low, this score was surprisingly high considering the extreme class imbalance in the dataset. No clear pattern was found between the lag sizes and future horizons and the model performance. What could be seen was that the data imbalance had a great impact on model performance, which was discovered when the downsampled dataset with less class imbalance yielded a higher AUCPR score.
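The AUCPR score the abstract reports is the area under the precision-recall curve, commonly computed as average precision. A minimal sketch (toy data, not the thesis's device dataset) shows why this metric stays informative under heavy class imbalance: it averages precision only at the ranks where a positive appears.

```python
def average_precision(labels, scores):
    """Average precision (area under the precision-recall curve):
    for each positive, take precision at that positive's rank in the
    score-sorted list, then average over all positives."""
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    n_pos = sum(labels)
    tp, ap = 0, 0.0
    for rank, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            tp += 1
            ap += tp / rank          # precision at this recall point
    return ap / n_pos

# Heavily imbalanced toy data: 2 positives among 10 samples.
labels = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
scores = [0.1, 0.2, 0.9, 0.3, 0.1, 0.2, 0.4, 0.35, 0.05, 0.15]
print(round(average_precision(labels, scores), 3))   # 0.833
```

A classifier scoring randomly on such data would get an average precision near the positive rate (0.2 here), which is the sense in which the thesis's 0.21 is close to chance on the full dataset.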
APA, Harvard, Vancouver, ISO, and other styles
3

Antonovsky, Ari David. "The relationship between human factors and plant maintenance reliability in a petroleum processing organisation." Thesis, Curtin University, 2010. http://hdl.handle.net/20.500.11937/336.

Full text
Abstract:
Despite the considerable emphasis on improving maintenance reliability in the petroleum industry by adopting an engineering approach (International Standards Organization, 2006b), production losses, ineffective maintenance, and major disasters continue to occur (Urbina, 2010; Pidgeon, 2000). Analyses of these events have indicated that a failure to consider the human factors in the design (Taylor, 2007), operation (Øien, 2001a), or maintenance (Bea, 1998) of hazardous process technologies is often an important contributor. Based on research to evaluate the influence of these human factors on organisational performance, various models (Rasmussen, 1982; Dekker, 2005) and taxonomies (Reason, 1998) for analysing organisational processes at the individual, group and organisational levels have been developed. By using these models, the current research was designed to determine the influence of human factors on maintenance reliability in petroleum operations. Three studies were conducted in petroleum operations, with the objective in the first two studies of identifying the most frequent contributors to maintenance-related failures, and in the third study, determining whether higher and lower reliability work areas could be differentiated on the basis of these human factors. In Study 1, the First Priority incident database of the target organisation was used to determine the most frequently reported human factors in maintenance-related, lost-production failures. The most frequent factors in the incidents (N=194) were found to be Violations, Design & Maintenance, Detection, and Decision-making. These results accorded with earlier studies in the field of human factors (Hobbs & Williamson, 2003; Lawton, 1998), which frequently identified human error and violations as the causes of failures. Study 2 provided a more rigorous investigation of the organisational contributors to failures through structured interviews with maintenance personnel. 
The results of these interviews (N=38) using the Human Factors Investigation Tool (HFIT) (Gordon, 2005) demonstrated that Assumption, Design & Maintenance, and Communication were the most frequent contributors to maintenance-related failures. Based on the predominant factors identified in Study 2, a survey of the perceptions of maintenance personnel (N=178) was conducted for Study 3. Scales measuring Problem-solving (Morgeson & Humphrey, 2006) and Vigilance (Mann, Burnett, Radford, & Ford, 1997) were used to measure the processes that provoke assumptions. Design & Maintenance items from HFIT (Gordon, 2001), and scales from Wiio's (1978a, b) Organisational Communication Development questionnaire (OCD/2), were used to test the factors identified in Study 2. Exploratory Factor Analysis indicated that the responses to the Design & Maintenance items loaded onto a single variable, while the Communication items loaded onto two variables, which were named Job-related feedback and Information about change. The perceptions of personnel in lower and higher reliability work areas across the target organisation were compared using these scales, with reliability level ranked according to the monthly Mean Time Between Deferments of petroleum production. Significant between-group differences were found between work areas on Design & Maintenance and Problem-solving. These results suggest that better maintainability in the design of plant is predictive of a higher reliability level. In addition, greater requirements for Problem-solving were associated with a lower reliability level. There were no significant effects of reliability on Vigilance or either communication measure. The quantitative data were triangulated with comments in response to an open-ended question asking about factors that help or hinder maintenance activities. Respondents' comments indicated that Communication was not significantly associated with reliability at the group level. 
The reason appeared to be that Communication was an organisation-level property of the employing company. Many comments indicated that access to information was difficult, explaining the high occurrence of assumptions reported in Study 2. In addition, although maintenance personnel generally agreed in the survey that they were vigilant in decision-making, personnel in lower reliability facilities provided a higher proportion of comments indicating that the decision-making of supervisors and management had a negative impact on their work. The results of the three studies support past research demonstrating that problem-solving skills (Tucker, 2002) and the design of socio-technical facilities (Reiman, Oedewald & Rollenhagen, 2005) have an important influence on organisational performance. The findings further extend research in the field of human factors by demonstrating a significant relationship between these two factors and group-level performance. The findings also demonstrated the importance of organisational communication, but as an organisational-level dimension that might not influence group-level measures. This research has implications for organisations that operate complex, hazardous technologies and are attempting to improve organisational processes by utilising a human factors approach.
APA, Harvard, Vancouver, ISO, and other styles
4

Johansson, Peter. "Plant Condition Measurement from Spectral Reflectance Data." Thesis, Linköping University, Computer Vision, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-59286.

Full text
Abstract:

The thesis presents an investigation of the potential of measuring plant condition from hyperspectral reflectance data. To do this, some linear methods for embedding the high-dimensional hyperspectral data and for performing regression to a plant condition space have been compared. A preprocessing step that aims at normalizing the illumination intensity in the hyperspectral images has been conducted, and several methods for this purpose have also been compared. A large-scale experiment has been conducted in which tobacco plants were grown and treated differently with respect to watering and nutrition. The treatment of the plants has served as ground truth for the plant condition. Four sets of plants were grown one week apart, and the plants were measured at different ages up to the age of about five weeks. The thesis concludes that there is a relationship between plant treatment and the leaves' spectral reflectance, but the treatment has to be somewhat extreme to enable a useful treatment approximation from the spectrum. CCA is the proposed method for calculating the hyperspectral basis that is used to embed the hyperspectral data into the plant condition (treatment) space. A preprocessing method that uses a weighted normalization of the spectra for illumination intensity normalization is concluded to be the most powerful of the compared methods.
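The illumination normalization idea can be sketched minimally: scale each spectrum so its (weighted) mean intensity is 1, which cancels the overall illumination level while preserving spectral shape. The uniform default weighting below is an illustrative assumption; the thesis compares several weighting schemes.

```python
def normalize_spectrum(spectrum, weights=None):
    """Scale a reflectance spectrum so its weighted mean intensity is 1,
    removing the overall illumination level while keeping the spectral
    shape. Uniform weights are an assumed placeholder for the weighted
    schemes compared in the thesis."""
    if weights is None:
        weights = [1.0] * len(spectrum)
    scale = sum(w * s for w, s in zip(weights, spectrum)) / sum(weights)
    return [s / scale for s in spectrum]

# Two measurements of the same leaf under different illumination levels
# differ only by a scale factor; after normalization they coincide.
bright = [0.2, 0.4, 0.8, 0.6]
dim = [s * 0.5 for s in bright]
print(normalize_spectrum(bright) == normalize_spectrum(dim))   # True
```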

APA, Harvard, Vancouver, ISO, and other styles
5

Kaidis, Christos. "Wind Turbine Reliability Prediction : A Scada Data Processing & Reliability Estimation Tool." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-221135.

Full text
Abstract:
This research project discusses the life-cycle analysis of wind turbines through the processing of operational data from two modern European wind farms. A methodology for SCADA data processing has been developed, combining previous research findings and in-house experience, followed by statistical analysis of the results. The analysis was performed by dividing the wind turbine into assemblies and the failure events into severity categories. Depending on the failure severity category, a different statistical methodology was applied, examining the reliability growth and the applicability of the “bathtub curve” concept for wind turbine reliability analysis. Finally, a methodology for adapting the results of the statistical analysis to site-specific environmental conditions is proposed.
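The “bathtub curve” mentioned above is commonly modelled with the Weibull hazard function, whose shape parameter selects the curve's three regions. A small sketch (standard reliability-engineering formula, not code from the thesis):

```python
def weibull_hazard(t, beta, eta):
    """Instantaneous failure rate h(t) = (beta/eta) * (t/eta)**(beta-1)
    of a Weibull distribution with shape beta and scale eta.
    beta < 1: decreasing rate (early/infant failures),
    beta = 1: constant rate (useful-life region),
    beta > 1: increasing rate (wear-out) -- the bathtub's three regions."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# With beta = 1 the rate reduces to a constant 1/eta.
print(weibull_hazard(5.0, 1.0, 100.0))   # 0.01
```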
APA, Harvard, Vancouver, ISO, and other styles
6

李淑儀 and Shuk-yee Wendy Lee. "Computer aided facilities design." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1987. http://hub.hku.hk/bib/B31208277.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Samson, Margaret Kingman 1950. "COMPUTER AIDS FOR FACILITY LAYOUT." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276400.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mendez, Ronald Osiris. "The building information model in facilities management." Link to electronic thesis, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-050406-153423/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Martello, Rosanna. "Cloud storage and processing of automotive Lithium-ion batteries data for RUL prediction." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
Lithium-ion batteries are the ideal choice for electric and hybrid vehicles, but their high cost and relatively short life represent an open issue for automotive industries. For this reason, the estimation of a battery's Remaining Useful Life (RUL) and State of Health (SoH) is a primary goal in the automotive sector. Cloud computing provides all the resources necessary to store, process and analyze the sensor data coming from connected vehicles in order to develop Predictive Maintenance tasks. This project describes the work done during my internship at FEV Italia s.r.l. The aims were to design an architecture for managing the data coming from a vehicle fleet and to develop algorithms able to predict the SoH and the RUL of Lithium-ion batteries. The designed architecture is based on three Amazon Web Services: Amazon Elastic Compute Cloud, Amazon Simple Storage Service and Amazon Relational Database Service. After data processing and feature extraction, the RUL and SoH estimations are performed by training two Neural Networks.
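The two quantities being predicted have simple conventional definitions, which a sketch can make concrete. SoH is the ratio of current to nominal capacity; a crude RUL baseline extrapolates a linear capacity fade to an 80% end-of-life threshold. Both the linear-fade assumption and the 80% threshold are common simplifications, not the thesis's trained neural networks.

```python
def state_of_health(capacity_now, capacity_nominal):
    """SoH as the ratio of current usable capacity to nominal capacity."""
    return capacity_now / capacity_nominal

def remaining_useful_life(soh, fade_per_cycle, eol_soh=0.8):
    """Cycles left until SoH crosses the end-of-life threshold, assuming
    the current linear fade rate continues (illustrative baseline only)."""
    if soh <= eol_soh:
        return 0.0
    return (soh - eol_soh) / fade_per_cycle

# A cell at 90% SoH, fading 0.01 percentage points of SoH per cycle,
# has about 1000 cycles left before reaching the 80% threshold.
soh = state_of_health(45.0, 50.0)                    # 0.9
print(round(remaining_useful_life(soh, 0.0001), 1))  # 1000.0
```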
APA, Harvard, Vancouver, ISO, and other styles
10

Evans, Roy F. "Industrial maintenance data collection and application developing an information strategy for an industrial site /." Access electronically, 2008. http://ro.uow.edu.au/theses/92.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Obiora, Obinna Chukwuemeka. "Wireless condition monitoring to reduce maintenance resources in the Escravos–Gas–To–Liquids plant, Nigeria / Obiora, O.C." Thesis, North-West University, 2011. http://hdl.handle.net/10394/7040.

Full text
Abstract:
The purpose of this research is to reduce maintenance resources and improve the availability of the Escravos Gas-to-Liquids (EGTL) plant in Escravos, Nigeria, using wireless condition monitoring. Secondary to the above is to justify the use of this technology over other conventional condition monitoring methods in petrochemical plants, with specific reference to the cost, reliability and security of the system. Wireless and continuous condition monitoring provides the means to evaluate the current condition of equipment and detect abnormalities. It allows corrective measures to be taken to prevent upcoming failures. Continuous monitoring and event recording provide information on the energized equipment's response to normal and emergency conditions. Wireless/remote monitoring helps to coordinate equipment specifications and ratings, determine the real limits of the monitored equipment and optimize facility operations (Bentley, 2005). Using wireless techniques eliminates the need for special cables and wires, with lower installation costs compared to other types of condition monitoring systems. In addition, wireless condition monitoring works well under difficult conditions in strategically important locations. The Escravos gas-to-liquids plant in Nigeria, located in a remote and offshore area where accommodation and office space limit the monitoring of plant conditions from every office, is a typical example. Wireless technology for condition monitoring of energized equipment is applicable to both standalone and remote systems. Meyer and Brambley (2002) characterized the current problem with regard to the cost effectiveness and availability of wireless condition monitoring. 
Maintenance of rotating equipment provides probability estimates of the total impact of the problem and the cost implications of plant equipment maintenance, and describes a generic system in which these developing technologies are used to provide real-time wireless/remote condition monitoring for rotating main air compressor (MAC) units and their components as a case study. Costs with today's technology are provided and future costs are estimated, showing that benefits will greatly exceed costs in many cases, particularly if low-cost wireless monitoring is used. With management trends such as “re-engineering” and “downsizing” of the available workforce, wireless condition monitoring of critical machines has been given more importance as a way to ensure quality production with fewer personnel. Wireless condition monitoring using inexpensive wireless communication technology frees up existing plant maintenance personnel to work on machines that are signaling problems, focusing the maintenance effort away from a large population of machines towards only those requiring immediate attention. Lloyd and Buddy (200) suggested that point-to-point wireless data transmission systems, an excellent example of recent technological advances in communication systems, are now practical and cost-effective for industrial use. While both complex infrastructures and complex protocols are required for cellular communications, non-cellular communication systems, such as the point-to-point wireless data transmission system example, require no elaborate infrastructure. Limited research has been done on the immediate benefits of implementing wireless condition monitoring systems in plants. All papers on the subject have been drawn up by manufacturers of such equipment. This research will thus also deliver a “third-party” perspective on the effectiveness of such devices, justifying their impact on data-gathering security, cost and reliability.
Thesis (M.Ing. (Development and Management Engineering))--North-West University, Potchefstroom Campus, 2012.
APA, Harvard, Vancouver, ISO, and other styles
12

Foltman, Mary Ann. "AMUSED : a multi-user software environment diagnostic /." Online version of thesis, 1989. http://hdl.handle.net/1850/10495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Minarini, Francesco. "Anomaly detection prototype for log-based predictive maintenance at INFN-CNAF tier-1." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19304/.

Full text
Abstract:
The evolution of HEP can no longer be separated from that of the computational resources needed to perform analyses. Each year, in fact, the LHC produces dozens of petabytes of data (e.g. collision data, particle simulation, metadata etc.) that need orchestrated computing resources for storage, computational power and high-throughput networks to connect centers. As a consequence of the LHC upgrade, the luminosity of the experiment will increase by a factor of 10 over its originally designed value, entailing a non-negligible technical challenge at computing centers: an increase in the amount of data produced and processed by the experiment is, in fact, expected. With this in mind, the HEP Software Foundation took action and released a road-map document describing the actions needed to prepare the computational infrastructure to support the upgrade. As a part of this collective effort, involving all computing centers of the Grid, INFN-CNAF has set up a preliminary study towards the development of an AI-driven maintenance paradigm. As a contribution to this preparatory study, this master's thesis presents an original software prototype developed to handle the task of identifying critical activity time windows of a specific service (StoRM). Moreover, the prototype explores the viability of content extraction via Text Processing techniques, applying such strategies to messages belonging to anomalous time windows.
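The idea of flagging anomalous time windows in a service's logs can be sketched with a deliberately simple rule: count log messages per window and flag windows whose count deviates from the mean by more than k standard deviations. This threshold rule is an illustrative stand-in, not the prototype's actual method.

```python
from statistics import mean, stdev

def anomalous_windows(counts, k=2.0):
    """Return indices of time windows whose log-message count deviates
    from the mean by more than k standard deviations -- a toy stand-in
    for log-based anomaly detection over fixed time windows."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - mu) > k * sigma]

# Window 4 contains a burst of messages and is flagged.
print(anomalous_windows([10, 12, 11, 9, 120, 10, 11]))   # [4]
```

In practice the flagged windows would then be handed to the content-extraction stage, which inspects the messages inside them.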
APA, Harvard, Vancouver, ISO, and other styles
14

Lundgren, Andreas. "Data-Driven Engine Fault Classification and Severity Estimation Using Residuals and Data." Thesis, Linköpings universitet, Fordonssystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165736.

Full text
Abstract:
Recent technological advances in the automotive industry have made vehicular systems increasingly complex in terms of both hardware and software. As the complexity of the systems increases, so does the complexity of efficiently monitoring these systems. With increasing computational power, the field of diagnostics is becoming ever more focused on software solutions for detecting and classifying anomalies in the supervised systems. Model-based methods utilize knowledge about the physical system to devise nominal models of the system to detect deviations, while data-driven methods use historical data to come to conclusions about the present state of the system in question. This study proposes a combined model-based and data-driven diagnostic framework for fault classification, severity estimation and novelty detection. An algorithm is presented which uses a system model to generate a candidate set of residuals for the system. A subset of the residuals is then selected for each fault using L1-regularized logistic regression. The time-series training data from the selected residuals is labelled with fault and severity. It is then compressed using a Gaussian parametric representation, and data from different fault modes are modelled using 1-class support vector machines. The classification of data is performed by utilizing the support vector machine description of the data in the residual space, and the fault severity is estimated as a convex optimization problem of minimizing the Kullback-Leibler divergence (KLD) between the new data and training data of different fault modes and severities. The algorithm is tested with data collected from a commercial Volvo car engine in an engine test cell and the results are presented in this report. Initial tests indicate the potential of the KLD for fault severity estimation and that novelty detection performance is closely tied to the residual selection process.
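Since the residual data are compressed into Gaussian parametric form, the KLD between two such representations has a closed form. A sketch for the univariate case (the thesis works with its own residual-space representations; this is just the standard formula):

```python
import math

def kl_gaussian(mu0, sigma0, mu1, sigma1):
    """Closed-form KL divergence KL(N(mu0, sigma0^2) || N(mu1, sigma1^2))
    between two univariate Gaussians -- the kind of distance minimized
    when matching new data against stored fault-mode/severity models."""
    return (math.log(sigma1 / sigma0)
            + (sigma0**2 + (mu0 - mu1)**2) / (2 * sigma1**2)
            - 0.5)

# Identical distributions have zero divergence; it grows as means separate.
print(kl_gaussian(0.0, 1.0, 0.0, 1.0))   # 0.0
print(kl_gaussian(0.0, 1.0, 2.0, 1.0))   # 2.0
```

Severity estimation then amounts to finding the stored severity whose Gaussian parameters minimize this divergence to the new data's parameters.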
APA, Harvard, Vancouver, ISO, and other styles
15

Moore, Thomas P. "Optimal design, procurement and support of multiple repairable equipment and logistic systems." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/71158.

Full text
Abstract:
A concept for the mathematical modeling of multiple repairable equipment and logistic systems (MREAL systems) is developed. These systems consist of multiple populations of repairable equipment and their associated design, procurement, maintenance, and supply support. MREAL systems present management and design problems which parallel the management and design of multiple, consumable-item inventory systems. However, the MREAL system is more complex since it has a repair component. The MREAL system concept is described in a classification hierarchy which attempts to categorize the components of such systems. A specific mathematical model (MREAL1) is developed for a subset of these components. Included in MREAL1 are representations of the equipment reliability and maintainability design problem, the maintenance capacity problem, the retirement age problem, and the population size problem, for each of the multiple populations. MREAL1 models the steady state stochastic behavior of the equipment repair facilities using an approximation which is based upon the finite-source, multiple-server queuing system. System performance measures included in MREAL1 are: the expected MREAL total system life cycle cost (including a shortage cost penalty); the steady state expected number of shortages; the probability of catastrophic failure in each equipment population; and two budget-based measures of effectiveness. Two optimization methods are described for a test problem developed for MREAL1. The first method computes values of the objective function and the constraints for a specified subset of the solution space. The best feasible solution found is recorded. This method can also examine all possible solutions, or can be used in a manual search. The second optimization method performs an exhaustive enumeration of the combinatorial programming portion of MREAL1, which represents equipment design. 
For each enumerated design combination, an attempt is made to find the optimal solution to the remaining nonlinear discrete programming problem. A sequential unconstrained minimization technique is used which is based on an augmented Lagrangian penalty function adapted to the integer nature of MREAL1. The unconstrained minimization is performed by a combination of Rosenbrock's search technique, the steepest descent method, and Fibonacci line searches, adapted to the integer nature of the search. Since the model contains many discrete local minima, the sequential unconstrained minimization is repeated from different starting solutions, based upon a heuristic selection procedure. A gradient projection method provides the termination criteria for each unconstrained minimization.
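The finite-source, multiple-server queuing system the abstract refers to is the classic machine-repairman model, whose steady-state distribution follows from a birth-death chain. A sketch (parameter values are illustrative, not from the dissertation):

```python
def repairman_probs(N, c, lam, mu):
    """Steady-state distribution of the number of failed machines in a
    finite-source, multi-server (machine-repairman) queue: N machines,
    each failing at rate lam while running, and c repair servers each
    completing repairs at rate mu. Built from birth-death balance:
    p[n+1]/p[n] = (machines still up)*lam / (active servers)*mu."""
    weights = [1.0]
    for n in range(1, N + 1):
        up = N - (n - 1)                 # machines running before this failure
        service = min(n, c) * mu         # repair capacity with n machines down
        weights.append(weights[-1] * up * lam / service)
    total = sum(weights)
    return [w / total for w in weights]

# 4 machines, 2 repairmen, failure rate 0.1, repair rate 1.0:
# probability that no machine is down.
probs = repairman_probs(4, 2, 0.1, 1.0)
print(round(probs[0], 3))   # 0.682
```

From this distribution one can read off the expected number of machines down (a shortage measure) and the repair-facility utilization, which is how such a model feeds life-cycle cost terms.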
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
16

Loharjun, Pasu. "A decision theoretic approach to the general layout problem." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/49824.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Hutton, Alistair James. "An empirical investigation of issues relating to software immigrants." Thesis, Connect to e-thesis, 2008. http://theses.gla.ac.uk/136/.

Full text
Abstract:
Thesis (Ph.D.) - University of Glasgow, 2008.
Ph.D. thesis submitted to the Department of Computing Science, Faculty of Information and Mathematical Sciences, University of Glasgow, 2008. Includes bibliographical references. Print version also available.
APA, Harvard, Vancouver, ISO, and other styles
18

Louw, Andries Barnabas. "Tools for the revision of a maintenance strategy for an explosives manufacturing plant, using asset management principles / A.B. Louw." Thesis, North-West University, 2009. http://hdl.handle.net/10394/3835.

Full text
Abstract:
The research topic is: Tools for the revision of a maintenance strategy for an explosives manufacturing plant, using asset management principles. This research has specific reference to the SASOL Prillan plant based in SASOL, Sasolburg. The purpose of this research is to identify tools for the revision of a maintenance strategy for an explosives manufacturing plant, using asset management principles. These tools are aimed at increasing the proactive work capacity index (Figure 3) and at identifying and/or developing tools that can be used by the engineering team of this explosives manufacturing plant to increase equipment reliability and performance. In this research, assets include people. The meaning and application of asset management principles were researched, and the tools needed to combine existing efforts and future needs are discussed. The human element needed to ensure the successful implementation of an asset management culture was researched, and attributes of leaders and a change model are presented. This research covered the wider engineering management discipline, not only maintenance. The method used to gather data was interviews with a sample group within the organization. As this manufacturing unit makes use of subject matter experts, the support functions and plant personnel that were not interviewed were issued with questionnaires to ensure that the sample group is a fair representation of the total manufacturing facility. To obtain a holistic view of potential shortcomings within the current maintenance strategy, all disciplines and levels within this operation were interviewed, and commonalities of various asset management models were determined and used to define existing problem areas. These data were used to determine statistical correlations. The case study presented in Chapter 1 indicates that there is a case for change that can improve the proactive work capacity index of the engineering team. 
The results of this research confirm that there is in fact a real requirement to increase spares accuracy, improve technical training, and establish visual performance indicators (a dashboard) to measure overall equipment efficiency, with the goal to increase equipment reliability and performance. The technical training referred to in this research reflects training of people on equipment after investment in new technology. The current spares holding strategy is lacking equipment description accuracy. Furthermore, it is recommended that career paths and development plans for individuals be developed to create an environment of learning. The use of user status information captured on the computerized maintenance management system (SAP R/3) can add to the management of work orders and indicate where the focus must be to complete overdue work orders. Open work orders should be used to manage expenditure, to measure planning efficiency and to manage the cash flow of the business. The use of overall equipment efficiency and engineering efficiency measures is recommended, and these must be visually displayed on a “dashboard”. It was recommended that the engineering and operations personnel of this manufacturing plant be trained in asset management principles and that balanced scorecards be developed to ensure that the strategies of the various departments are aligned with the business strategy. Diagram 1 best illustrates the thinking and process flow of this research. The flow diagram shows five distinct stages and the appropriate objectives and/or elements that were considered. The dissertation is also structured in this manner. All abbreviations, acronyms and definitions used in this document are listed in Appendix B.
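The overall equipment efficiency measure recommended for the dashboard has a standard definition as the product of three rates. A minimal sketch (the 90/95/99% figures are illustrative, not plant data):

```python
def oee(availability, performance, quality):
    """Overall Equipment Efficiency: the product of the availability
    rate (uptime / planned time), performance rate (actual / ideal
    speed) and quality rate (good units / total units)."""
    return availability * performance * quality

# 90% uptime, 95% of ideal speed, 99% good output.
print(round(oee(0.90, 0.95, 0.99), 3))   # 0.846
```

Because the three factors multiply, a modest loss in each compounds: three seemingly healthy rates above still leave overall efficiency below 85%.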
Thesis (M.Ing. (Development and Management Engineering))--North-West University, Potchefstroom Campus, 2009.
APA, Harvard, Vancouver, ISO, and other styles
19

Liu, Bin. "Scalable integration view computation and maintenance with parallel, adaptive and grouping techniques." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-081905-093754/unrestricted/bliu.pdf.

Full text
Abstract:
Dissertation (Ph.D.) -- Worcester Polytechnic Institute.
Keywords: parallel multi-join computation; state level adaptation; materialized view maintenance; grouping maintenance; cyclic join views; distributed data sources. Includes bibliographical references (p.245-255).
APA, Harvard, Vancouver, ISO, and other styles
20

Shaw, Paul David. "Visualizing genetic transmission patterns in plant pedigrees." Thesis, Edinburgh Napier University, 2016. http://researchrepository.napier.ac.uk/Output/463271.

Full text
Abstract:
Ensuring food security in a world with an increasing population and demand on natural resources is becoming ever more pertinent. Plant breeders are using an increasingly diverse range of data types such as phenotypic and genotypic data to identify plant lines with desirable characteristics suitable to be taken forward in plant breeding programmes. These characteristics include a number of key morphological and physiological traits, such as disease resistance and yield, that need to be maintained and improved upon if a commercial plant variety is to be successful. The ability to predict and understand the inheritance of alleles that facilitate resistance to pathogens or any other commercially important characteristic is crucially important to experimental plant genetics and commercial plant breeding programmes. However, derivation of the inheritance of such traits by traditional molecular techniques is expensive and time consuming, even with recent developments in high-throughput technologies. This is especially true in industrial settings where, due to time constraints relating to growing seasons, many thousands of plant lines may need to be screened quickly, efficiently and economically every year. Thus, computational tools that provide the ability to integrate and visualize diverse data types with an associated plant pedigree structure will enable breeders to make more informed and subsequently better decisions on the plant lines that are used in crossings. This will help meet both the demand for increased yield and production and the need to adapt to climate change. Traditional family tree style layouts are commonly used and simple to understand but are unsuitable for the data densities that are now commonplace in large breeding programmes. 
The size and complexity of plant pedigrees means that there is a cognitive limitation in conceptualising large plant pedigree structures, therefore novel techniques and tools are required by geneticists and plant breeders to improve pedigree comprehension. Taking a user-centred, iterative approach to design, a pedigree visualization system was developed for exploring a large and unique set of experimental barley (H. vulgare) data. This work progressed from the development of a static pedigree visualization to interactive prototypes and finally the Helium pedigree visualization software. At each stage of the development process, user feedback in the form of informal and more structured user evaluation from domain experts guided the development lifecycle with users' concerns addressed and additional functionality added. Plant pedigrees are very different to those from humans and farmed animals and consequently the development of the pedigree visualizations described in this work focussed on implementing currently accepted techniques used in pedigree visualization and adapting them to meet the specific demands of plant pedigrees. Helium includes techniques to aid problems with user understanding identified through user testing; examples of these include difficulties where crosses between varieties are situated in different regions of the pedigree layout. There are good biological reasons why this happens but it has been shown, through testing, that it leads to problems with users' comprehension of the relatedness of individuals in the pedigree. The inclusion of visual cues and the use of localised layouts have allowed complications like these to be reduced. Other examples include the use of sizing of nodes to show the frequency of usage of specific plant lines which have been shown to act as positional reference points to users, and subsequently bringing a secondary level of structure to the pedigree layout. 
The use of these novel techniques has allowed the classification of three main types of plant line, which have been coined: principal, flanking and terminal plant lines. This technique has also shown visually the most frequently used plant lines, which, while previously known from text records, were never quantified. Helium's main contributions are two-fold. Firstly, it has taken visualization techniques used in traditional pedigrees and applied them to the domain of plant pedigrees; this has addressed problems with handling large experimental plant pedigrees. The scale, complexity and diversity of data and the number of plant lines that Helium can handle exceed those of other currently available plant pedigree visualization tools. These techniques (including layout, phenotypic and genotypic encoding) have been improved to deal with the differences that exist between human/mammalian and plant pedigrees, taking account of problems such as the complexity of crosses and routine inbreeding. Secondly, the effectiveness of the visualizations has been verified by performing user testing on a group of 28 domain experts. The improvements have advanced both user understanding of pedigrees and allowed a much greater density and scale of data to be visualized. User testing has shown that the implementation of and extensions to visualization techniques have improved user comprehension of plant pedigrees when users are asked to perform real-life tasks with barley datasets. Results have shown an increase in correct responses between the prototype interface and Helium. A SUS analysis has shown a high acceptance rate for Helium.
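The node-sizing idea described above, scaling a plant line's glyph by how often it is used in crosses, reduces to a frequency count over the pedigree's cross records. A small sketch with invented records (the data, names and the degree-based reading of "terminal" lines are illustrative assumptions, not Helium's actual definitions):

```python
from collections import Counter

# Each cross records (child, parent1, parent2); data is invented.
crosses = [
    ("LineC", "LineA", "LineB"),
    ("LineD", "LineA", "LineC"),
    ("LineE", "LineA", "LineD"),
]

# How often each line appears as a parent; this count would drive
# the size of the line's node in a pedigree drawing.
usage = Counter(p for _, p1, p2 in crosses for p in (p1, p2))

# One plausible degree-based reading: "terminal" lines are children
# that are never reused as parents in any later cross.
children = {c for c, _, _ in crosses}
terminal = children - set(usage)
```

Here LineA would be drawn as the largest node, acting as the kind of positional reference point the abstract describes.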
APA, Harvard, Vancouver, ISO, and other styles
21

Cunningham, Jock Bernard. "Bulk analysis of lead smelter sinter plant products for sulphur (and Pb, Fe, Zn) using neutron inelastic scatter gamma-rays." Thesis, Queensland University of Technology, 1987.

Find full text
Abstract:
A potential for increasing the production in the lead sinter plant at Mount Isa exists if the sulphur and moisture concentration in the feed material could be controlled accurately. This thesis reports on a project to develop an on-line instrument to measure sulphur and moisture in sinter feed for process control. The feasibility of measuring the concentrations of lead, iron and zinc is also examined. The instrument uses the technique of fast neutron inelastic scatter gamma-ray measurement for the element determinations and neutron moderation for moisture measurement. Four instrument geometries were examined under laboratory conditions: (a) annular, (b) semi-annular, (c) backscatter (main layer), (d) backscatter (ignitor layer). An instrument based on geometry (c) above was designed, constructed and tested in the laboratory. The instrument was shown to measure sulphur, lead, iron, zinc and moisture to within (one standard deviation) 0.67, 1.2, 0.49, 0.34 and 0.49 weight % respectively. The repeatability for a measurement time of 25 minutes was 0.40, 0.70, 0.20, 0.17 and 0.3 weight % respectively. This performance is adequate for process control.
APA, Harvard, Vancouver, ISO, and other styles
22

Guillen, Rosaperez Diego Alonso. "Self-Learning Methodology for Failure Detection in an Oil- Hydraulic Press : Predictive maintenance." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289371.

Full text
Abstract:
Deep Learning methods have dramatically improved the state-of-the-art across multiple fields, such as speech recognition, object detection, among others. Nevertheless, its application on signal processing, where data is frequently unlabelled, has received relatively little attention. In this field, nowadays, a set of sub-optimal techniques are often used. They usually require an expert to manually extract features to analyse, which is a knowledge and labour intensive process. Thus, a self-learning technique could improve current methods. Moreover, certain machines in a factory are particularly complex, such as an oil-hydraulic press. Here, its sensors can only identify few failures by setting up some thresholds, but they commonly cannot detect wear on its internal components. So, a self-learning technique would be required to detect anomalies related to deterioration. The concept is to determine the condition of a machine and to predict breakdowns by analysing patterns in the measurements from their sensors. This document proposes a self-learning methodology that uses a deep learning model to predict failures in such a machine. The core idea is to train an algorithm that can identify by itself the relevant features to extract on a work cycle, and to relate them to a part which will breakdown. The conducted evaluation focuses on an example case where a hydraulic accumulator fails. As result, it was possible to forecast its breakdown two weeks in advance. Finally, the proposed method provides explanations at every step, after acknowledging their importance in industrial applications. Also, some considerations and limitations of this technique are stated to support guiding the expectation of some stakeholders in a factory, i.e. a (Global) Process Owner.
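The core idea above (learn what healthy work cycles look like, then flag deviations) can be caricatured without any deep learning. The following z-score baseline is a deliberately simple stand-in for the thesis's self-learning deep model, with invented per-cycle feature data:

```python
from statistics import mean, stdev

# Per-cycle feature vectors (e.g. peak pressure, cycle duration) from a
# healthy reference period; all values here are invented for illustration.
healthy = [(200.1, 4.9), (199.8, 5.1), (200.3, 5.0), (199.9, 5.0)]

# Per-feature baseline statistics learned from the healthy cycles.
mu = [mean(f) for f in zip(*healthy)]
sd = [stdev(f) for f in zip(*healthy)]

def is_anomalous(cycle, k=4.0):
    """Flag a work cycle whose features drift more than k sigma from baseline."""
    return any(abs(x - m) / s > k for x, m, s in zip(cycle, mu, sd))
```

A rising rate of flagged cycles over days or weeks would then serve as the deterioration signal the thesis seeks, though the deep model additionally learns *which* features to extract rather than having them hand-picked.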
APA, Harvard, Vancouver, ISO, and other styles
23

Lembke, Benjamin. "Bearing Diagnosis Using Fault Signal Enhancing Teqniques and Data-driven Classification." Thesis, Linköpings universitet, Fordonssystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158240.

Full text
Abstract:
Rolling element bearings are a vital part of much rotating machinery, including vehicles. A defective bearing can be a symptom of other problems in the machinery and is subject to a high failure rate. Early detection of bearing defects can therefore help to prevent malfunction which ultimately could lead to a total collapse. The thesis is done in collaboration with Scania, which wants a better understanding of how external sensors, such as accelerometers, can be used for condition monitoring in their gearboxes. Defective bearings create vibrations with specific frequencies, known as Bearing Characteristic Frequencies, BCF [23]. A key component in the proposed method is based on identification and extraction of these frequencies from vibration signals from accelerometers mounted near the monitored bearing. Three solutions are proposed for automatic bearing fault detection. Two are based on data-driven classification using a set of machine learning methods called Support Vector Machines, and one method uses only the computed characteristic frequencies from the considered bearing faults. Two types of features are developed as inputs to the data-driven classifiers. One is based on the extracted amplitudes of the BCF and the other on statistical properties from Intrinsic Mode Functions generated by an improved Empirical Mode Decomposition algorithm. In order to enhance the diagnostic information in the vibration signals, two pre-processing steps are proposed. Separation of the bearing signal from masking noise is done with the Cepstral Editing Procedure, which removes discrete frequencies from the raw vibration signal. Enhancement of the bearing signal is achieved by band pass filtering and amplitude demodulation. The frequency band is produced by the band selection algorithms Kurtogram and Autogram. 
The proposed methods are evaluated on two large public data sets considering bearing fault classification using accelerometer data, and a smaller data set collected from a Scania gearbox. The produced features achieved significant separation on the public and collected data. Manual detection of the induced defect on the outer race of the bearing from the gearbox was achieved. Due to the small amount of training data, the automatic solutions were only tested on the public data sets. Isolation performance of correct bearing and fault mode among multiple bearings was investigated. One of the best trade-offs achieved was 76.39 % fault detection rate with 8.33 % false alarm rate. Another was 54.86 % fault detection rate with 0 % false alarm rate.
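The bearing characteristic frequencies central to the method above follow standard kinematic formulas for a rolling-element bearing (textbook relations, not specific to this thesis), given the ball count, ball and pitch diameters, contact angle and shaft speed. A sketch with an invented geometry:

```python
from math import cos, radians

def bearing_frequencies(n_balls, ball_d, pitch_d, contact_deg, shaft_hz):
    """Textbook bearing characteristic frequencies (Hz) from the geometry."""
    r = (ball_d / pitch_d) * cos(radians(contact_deg))
    return {
        "BPFO": n_balls * shaft_hz / 2 * (1 - r),               # outer-race pass
        "BPFI": n_balls * shaft_hz / 2 * (1 + r),               # inner-race pass
        "FTF": shaft_hz / 2 * (1 - r),                          # cage frequency
        "BSF": pitch_d * shaft_hz / (2 * ball_d) * (1 - r**2),  # ball spin
    }

# Invented geometry: 9 balls, 7.9 mm ball diameter, 38.5 mm pitch diameter,
# 0 degree contact angle, 30 Hz shaft speed.
f = bearing_frequencies(9, 7.9, 38.5, 0, 30)
```

A peak in the demodulated envelope spectrum at or near one of these frequencies then points to the corresponding fault location, which is what both the frequency-only detector and the BCF-amplitude features exploit.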
APA, Harvard, Vancouver, ISO, and other styles
24

Lin, TsungPo. "An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24819.

Full text
Abstract:
Thesis (Ph.D.)--Aerospace Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Dimitri Mavris; Committee Member: Erwing Calleros; Committee Member: Hongmei Chen; Committee Member: Mark Waters; Committee Member: Vitali Volovoi.
APA, Harvard, Vancouver, ISO, and other styles
25

Storoshchuk, Orest Lev Poehlman William Frederick Skipper. "Model based synchronization of monitoring and control systems /." *McMaster only, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
26

Diallo, Ousmane Nasr. "A data analytics approach to gas turbine prognostics and health management." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/42845.

Full text
Abstract:
As a consequence of the recent deregulation in the electrical power production industry, there has been a shift in the traditional ownership of power plants and the way they are operated. To hedge their business risks, the many new private entrepreneurs enter into long-term service agreements (LTSA) with third parties for their operation and maintenance activities. As the major LTSA providers, original equipment manufacturers have invested huge amounts of money to develop preventive maintenance strategies to minimize the occurrence of costly unplanned outages resulting from failures of the equipment covered under LTSA contracts. As a matter of fact, a recent study by the Electric Power Research Institute estimates the cost benefit of preventing a failure of a General Electric 7FA or 9FA technology compressor at $10 to $20 million. Therefore, in this dissertation, a two-phase data analytics approach is proposed to use the existing gas path and vibration monitoring sensor data to first develop a proactive strategy that systematically detects and validates catastrophic failure precursors so as to avoid the failure; and secondly to estimate the residual time to failure of the unhealthy items. For the first part of this work, the time-frequency technique of the wavelet packet transform is used to de-noise the noisy sensor data. Next, the time-series signal of each sensor is decomposed to perform a multi-resolution analysis to extract its features. After that, probabilistic principal component analysis is applied as a data fusion technique to reduce the potentially correlated multi-sensor measurements to a few uncorrelated principal components. The last step of the failure precursor detection methodology, the anomaly detection decision, is in itself a multi-stage process. 
The obtained principal components from the data fusion step are first combined into a one-dimensional reconstructed signal representing the overall health assessment of the monitored systems. Then, two damage indicators of the reconstructed signal are defined and monitored for defects using a statistical process control approach. Finally, the Bayesian evaluation method for hypothesis testing is applied to a computed threshold to test for deviations from the healthy band. To model the residual time to failure, the anomaly severity index and the anomaly duration index are defined as defect characteristics. Two modeling techniques are investigated for the prognostication of the survival time after an anomaly is detected: the deterministic regression approach, and parametric approximation of the non-parametric Kaplan-Meier plot estimator. It is established that the deterministic regression provides poor prediction estimation. The non-parametric survival data analysis technique of the Kaplan-Meier estimator provides the empirical survivor function of the data set comprised of both non-censored and right-censored data. Though powerful because no a-priori predefined lifetime distribution is assumed, the Kaplan-Meier result lacks the flexibility to be transplanted to other units of a given fleet. The parametric analysis of survival data is performed with two popular failure analysis distributions: the exponential distribution and the Weibull distribution. The conclusion from the parametric analysis of the Kaplan-Meier plot is that the larger the data set, the more accurate is the prognostication ability of the residual time to failure model.
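The Kaplan-Meier estimator used above has a compact product-limit form, S(t) = prod over failure times t_i <= t of (1 - d_i / n_i), where d_i failures occur among n_i units still at risk. A minimal sketch with invented survival data (not the dissertation's fleet data):

```python
def kaplan_meier(times, events):
    """Product-limit estimator; events[i] is 1 for an observed failure and
    0 for right-censoring. Returns [(t, S(t))] at each failure time."""
    data = sorted(zip(times, events))
    at_risk, s, curve = len(data), 1.0, []
    for t in sorted(set(times)):
        deaths = sum(1 for tt, e in data if tt == t and e)
        n_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            s *= 1 - deaths / at_risk
            curve.append((t, s))
        at_risk -= n_t  # failures and censorings both leave the risk set
    return curve

# Invented data: failures at t = 1, 3, 4 and one unit censored at t = 2.
curve = kaplan_meier([1, 2, 3, 4], [1, 0, 1, 1])
```

Note how the censored unit at t = 2 leaves the risk set without dropping S(t), which is exactly what lets the estimator use incomplete (still-running) units alongside observed failures.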
APA, Harvard, Vancouver, ISO, and other styles
27

Trümper, Jonas. "Visualization techniques for the analysis of software behavior and related structures." Phd thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/7214/.

Full text
Abstract:
Software maintenance encompasses any changes made to a software system after its initial deployment and is thereby one of the key phases in the typical software-engineering lifecycle. In software maintenance, we primarily need to understand structural and behavioral aspects, which are difficult to obtain, e.g., by code reading. Software analysis is therefore a vital tool for maintaining these systems: It provides - the preferably automated - means to extract and evaluate information from their artifacts such as software structure, runtime behavior, and related processes. However, such analysis typically results in massive raw data, so that even experienced engineers face difficulties directly examining, assessing, and understanding these data. Among other things, they require tools with which to explore the data if no clear question can be formulated beforehand. For this, software analysis and visualization provide its users with powerful interactive means. These enable the automation of tasks and, particularly, the acquisition of valuable and actionable insights into the raw data. For instance, one means for exploring runtime behavior is trace visualization. This thesis aims at extending and improving the tool set for visual software analysis by concentrating on several open challenges in the fields of dynamic and static analysis of software systems. This work develops a series of concepts and tools for the exploratory visualization of the respective data to support users in finding and retrieving information on the system artifacts concerned. This is a difficult task, due to the lack of appropriate visualization metaphors; in particular, the visualization of complex runtime behavior poses various questions and challenges of both a technical and conceptual nature. 
This work focuses on a set of visualization techniques for visually representing control-flow related aspects of software traces from shared-memory software systems: A trace-visualization concept based on icicle plots aids in understanding both single-threaded as well as multi-threaded runtime behavior on the function level. The concept’s extensibility further allows the visualization and analysis of specific aspects of multi-threading such as synchronization, the correlation of such traces with data from static software analysis, and a comparison between traces. Moreover, complementary techniques for simultaneously analyzing system structures and the evolution of related attributes are proposed. These aim at facilitating long-term planning of software architecture and supporting management decisions in software projects by extensions to the circular-bundle-view technique: An extension to 3-dimensional space allows for the use of additional variables simultaneously; interaction techniques allow for the modification of structures in a visual manner. The concepts and techniques presented here are generic and, as such, can be applied beyond software analysis for the visualization of similarly structured data. The techniques' practicability is demonstrated by several qualitative studies using subject data from industry-scale software systems. The studies provide initial evidence that the techniques' application yields useful insights into the subject data and its interrelationships in several scenarios.
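An icicle plot, the core metaphor of the trace-visualization concept above, lays out each call as a rectangle whose width is proportional to its inclusive cost and whose row is its nesting depth. A minimal layout sketch (the trace and the cost model are invented, and this is not the thesis's implementation):

```python
def total_cost(node):
    """node = (name, self_cost, children); inclusive cost of the subtree."""
    return node[1] + sum(total_cost(c) for c in node[2])

def icicle_layout(node, x0=0.0, x1=1.0, depth=0, out=None):
    """Assign each call a rectangle (name, x0, x1, depth); children share
    the parent's horizontal extent in proportion to their inclusive cost."""
    if out is None:
        out = []
    out.append((node[0], x0, x1, depth))
    x, total = x0, total_cost(node)
    for child in node[2]:
        w = (x1 - x0) * total_cost(child) / total
        icicle_layout(child, x, x + w, depth + 1, out)
        x += w
    return out

# A tiny invented trace: main spends 2 units itself, f and g 4 units each.
trace = ("main", 2, [("f", 4, []), ("g", 4, [])])
rects = icicle_layout(trace)
```

The horizontal gap left of each parent's right edge corresponds to its self time; stacking rows by depth gives the icicle shape, and per-thread copies of this layout yield the multi-threaded view.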
APA, Harvard, Vancouver, ISO, and other styles
28

Nguyen, Hoang-Phuong. "Model-based and data-driven prediction methods for prognostics." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASC021.

Full text
Abstract:
Degradation is an unavoidable phenomenon that affects engineering components and systems, and which may lead to their failures with potentially catastrophic consequences depending on the application. The motivation of this Thesis is trying to model, analyze and predict failures with prognostic methods that can enable a predictive management of asset maintenance. This would allow decision makers to improve maintenance planning, thus increasing system availability and safety by minimizing unexpected shutdowns. To this aim, research during the Thesis has been devoted to the tailoring and use of both model-based and data-driven approaches to treat the degradation processes that can lead to different failure modes in industrial components, making use of different information and data sources for performing predictions on the degradation evolution and estimating the Remaining Useful Life (RUL).The Ph.D. work has addressed two specific prognostic applications: model-based prognostics for fatigue crack growth prediction and data-driven prognostics for multi-step ahead predictions of time series data of Nuclear Power Plant (NPP) components.Model-based prognostics relies on the choice of the adopted Physics-of-Failure (PoF) models. However, each degradation model is appropriate only to certain degradation process under certain operating conditions, which are often not precisely known. To generalize this, ensembles of multiple degradation models have been embedded in the model-based prognostic method in order to take advantage of the different accuracies of the models specific to different degradations and conditions. The main contributions of the proposed ensemble of models-based prognostic approaches are the integration of filtering approaches, including recursive Bayesian filtering and Particle Filtering (PF), and novel weighted ensemble strategies considering the accuracies of the individual models in the ensemble at the previous time steps of prediction. 
The proposed methods have been validated by case studies of fatigue crack growth simulated with time-varying operating conditions. As for multi-step ahead prediction, it remains a difficult task of Prognostics and Health Management (PHM) because prediction uncertainty tends to increase with the time horizon of the prediction. Large prediction uncertainty has limited the development of multi-step ahead prognostics in applications. To address the problem, novel multi-step ahead prediction models based on Long Short-Term Memory (LSTM), a deep neural network developed for dealing with long-term dependencies in time series data, have been developed in this Thesis. For realistic practical applications, the proposed methods also address the additional issues of anomaly detection, automatic hyperparameter optimization and prediction uncertainty quantification. Practical case studies have been considered, concerning time series data collected from Steam Generators (SGs) and Reactor Coolant Pumps (RCPs) of NPPs.
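The accuracy-weighted ensemble idea above, giving each degradation model influence in proportion to how well it predicted recent observations, can be sketched generically. The two candidate models and the history below are invented, and this inverse-squared-error weighting is a simple stand-in, not the thesis's exact scheme:

```python
from math import exp

def ensemble_predict(models, history, t_next, eps=1e-6):
    """Combine model predictions, weighting each model by its inverse
    squared error on the already-observed degradation history."""
    sq_errors = [sum((m(t) - y) ** 2 for t, y in history) for m in models]
    weights = [1.0 / (e + eps) for e in sq_errors]
    total = sum(weights)
    return sum(w / total * m(t_next) for w, m in zip(weights, models))

# Two invented candidate degradation models; the observed history happens
# to follow the linear one, so it should dominate the ensemble.
m_lin = lambda t: 2.0 * t
m_exp = lambda t: exp(0.1 * t)
history = [(1, 2.0), (2, 4.0), (3, 6.0)]
prediction = ensemble_predict([m_lin, m_exp], history, 4)
```

In the thesis this weighting is embedded in a filtering loop (recursive Bayesian and particle filtering), so the weights evolve as each new degradation measurement arrives rather than being fixed once.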
APA, Harvard, Vancouver, ISO, and other styles
29

Ramalingam, Nagarajan. "Non-contact multispectral and thermal sensing techniques for detecting leaf surface wetness." Connect to this title online, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1104392582.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xxii, 271 p.; also includes graphics (some col.) Includes bibliographical references (p. 206-214).
APA, Harvard, Vancouver, ISO, and other styles
30

Surajbali, Bholanathsingh, Paul Grace, and Geoff Coulson. "Preserving dynamic reconfiguration consistency in aspect oriented middleware." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4137/.

Full text
Abstract:
Aspect-oriented middleware is a promising technology for the realisation of dynamic reconfiguration in heterogeneous distributed systems. However, like other dynamic reconfiguration approaches, AO-middleware-based reconfiguration requires that the consistency of the system is maintained across reconfigurations. AO-middleware-based reconfiguration is an ongoing research topic and several consistency approaches have been proposed. However, most of these approaches tend to be targeted at specific contexts, whereas for distributed systems it is crucial to cover a wide range of operating conditions. In this paper we propose an approach that offers distributed, dynamic reconfiguration in a consistent manner, and features a flexible framework-based consistency management approach to cover a wide range of operating conditions. We evaluate our approach by investigating its configurability and transparency, and we also quantify the performance overheads of the associated consistency mechanisms.
APA, Harvard, Vancouver, ISO, and other styles
31

Hobson, Alan George Cawood. "Optimising the renewal of natural gas reticulation pipes using GIS." Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/52980.

Full text
Abstract:
Thesis (MA)--University of Stellenbosch, 2002.
ENGLISH ABSTRACT: A major concern for Energex, Australia's largest energy utility in South East Queensland, is the escape of natural gas out of their reticulation systems. Within many of the older areas in Brisbane, these networks operate primarily at low and medium pressure with a significant percentage of mains being cast iron or steel. Over many years pipes in these networks have been replaced, yet reports show that unaccounted for gas from the same networks remains high. Furthermore, operation and maintenance budgets for these networks are high, with many of these pipes close to the end of their economic life. When operation and maintenance costs exceed the costs of replacement, the Energex gas utility initiates projects to renew reticulation networks with polyethylene pipes. Making decisions about pipe renewal requires an evaluation of historical records from a number of sources, namely: • gas consumption figures, • history of leaks, • maintenance and other related cost, and • the loss of revenue contributed by unaccounted for gas. Financial justification of capital expenditure has always been a requirement for renewal projects at the Energex gas utility; however, the impact of deregulation in the energy utility market has necessitated a review of their financial assessment for capital projects. The Energex gas utility has developed an application that evaluates the financial viability of renewal projects. This research will demonstrate the role of GIS integration with the Energex financial application. The results of this study showed that a GIS integrated renewal planning approach incorporates significant benefits including: • Efficient selection of a sub-network based on pipe connectivity, • Discovery of hidden relationships between spatially enabled alphanumeric data and environmental information that improves decision making, and • Enhanced testing of proposed renewal design options by scrutinizing the attributes of spatial data.
AFRIKAANSE OPSOMMING (translated from Afrikaans): A major concern for Energex, Australia's largest energy supplier in South East Queensland, is the loss of natural gas from their gas distribution networks. In much of older Brisbane these networks operate mainly at low and medium pressure, with a considerable percentage of mains consisting of cast iron or steel. Although some pipes in these networks have been replaced over time, reports make it clear that a large portion of the gas in these networks is still lost along the way. The operational and maintenance budgets for these networks are moreover high, with a large percentage of the pipes soon reaching the end of their economic life. When operational and maintenance costs exceed the cost of replacement, Energex's gas supply division plans projects to renew distribution networks with polyethylene pipes. To make sensible decisions during pipe renewals, various historical records are consulted, including: gas consumption levels, leak history records, maintenance and other related costs, as well as the loss of revenue due to lost gas. Although financial justification of capital expenditure has always been a prerequisite for renewal projects at Energex, the impact of privatisation of the energy supply market has made it necessary to revise their financial approval process for capital projects. Energex has therefore developed a software application that evaluates the financial viability of renewal projects. This research will demonstrate the possible integration of geographic information systems (GIS) with Energex's financial evaluation package.
The results of this study show that the integration of GIS into the renewal process holds considerable benefits, including: • the effective selection of sub-networks, based on pipe connectivity, • the discovery of hidden relationships between geospatial alphanumeric data and environmental information, which facilitates decision making, and • improved testing of proposed renewal options through in-depth examination of geospatial elements.
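The renewal trigger described in this entry (renew when operation and maintenance costs exceed the cost of replacement) is at heart a cost comparison; it can be sketched numerically as below. This is a hypothetical illustration with assumed function names, discount rate, and asset life, not Energex's actual financial application.

```python
def annualized_replacement_cost(capital_cost, discount_rate=0.08, life_years=50):
    """Spread a one-off replacement cost over the pipe's life using the
    capital recovery factor (assumed rate and life are illustrative)."""
    r, n = discount_rate, life_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)
    return capital_cost * crf

def should_renew(annual_maintenance, annual_lost_gas_revenue, replacement_cost,
                 discount_rate=0.08, life_years=50):
    """Renew when ongoing yearly costs (maintenance plus revenue lost to
    unaccounted-for gas) exceed the annualized cost of replacing the main."""
    ongoing = annual_maintenance + annual_lost_gas_revenue
    return ongoing > annualized_replacement_cost(replacement_cost,
                                                 discount_rate, life_years)
```

The GIS contribution described in the thesis would then feed this rule with per-sub-network figures selected by pipe connectivity, rather than evaluating pipes in isolation.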
APA, Harvard, Vancouver, ISO, and other styles
32

Mehta, Alok. "Evolving legacy system's features into fine-grained components using regression test-cases." Link to electronic thesis, 2002. http://www.wpi.edu/Pubs/ETD/Available/etd-1211102-163800.

Full text
Abstract:
Dissertation (Ph. D.)--Worcester Polytechnic Institute.
Keywords: software maintenance; software evolution; regression test-cases; components; legacy system; incremental software evolution methodology; fine-grained components. Includes bibliographical references (p. 283-294).
APA, Harvard, Vancouver, ISO, and other styles
33

Hassan, Muhammad. "Production 4.0 of Ring Mill 4 Ovako AB." Thesis, Högskolan i Gävle, Elektronik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-33405.

Full text
Abstract:
Cyber-Physical System (CPS), or Digital-Twin, approaches are becoming popular in the Industry 4.0 revolution. A CPS not only allows the online status of equipment to be viewed, but also allows the health of a tool to be predicted. Based on real-time sensor data, it aims to detect anomalies in industrial operation and to anticipate future failures, which leads towards smart maintenance. CPS can contribute to a sustainable environment as well as sustainable production, thanks to its real-time analysis of production. In this thesis, we analyzed the behavior of a tool of Ringvalsverk 4 at Ovako against its twin model (known as a Digital-Twin) over a series of data. Initially, the data contained unwanted signals, which were cleaned in the data processing phase, and only the signal recorded before production is used to identify the tool's model. Matlab's System Identification Toolbox is used to identify the system model; the identified model is also validated and analyzed in terms of stability, and is then used in the CPS. The Digital-Twin model's output is then analyzed together with the tool's output to detect when the tool starts to deviate from normal behavior.
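A residual check of the kind described above, comparing the twin's output against the measured output and flagging sustained deviation, might be sketched as follows; the function name, window size, and threshold are illustrative assumptions, not the thesis' implementation.

```python
def detect_deviation(twin_output, measured_output, threshold=3.0, window=5):
    """Flag sample indices where the rolling mean of the absolute residual
    between measured signal and digital-twin prediction exceeds a threshold."""
    residuals = [abs(m - t) for m, t in zip(measured_output, twin_output)]
    flags = []
    for i in range(len(residuals)):
        lo = max(0, i - window + 1)                 # rolling window start
        if sum(residuals[lo:i + 1]) / (i - lo + 1) > threshold:
            flags.append(i)
    return flags
```

Averaging over a window rather than thresholding single samples makes the check robust to momentary sensor noise, at the cost of a short detection delay.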
APA, Harvard, Vancouver, ISO, and other styles
34

Popovic, Dragan. "Heat transfer and flow distribution in an aircooler of a large steam surface condenser." Thesis, Queensland University of Technology, 1999. https://eprints.qut.edu.au/36097/1/36097_Popovic_1999.pdf.

Full text
Abstract:
The temperature at which steam condenses stays more or less constant throughout the aircooler while the temperature of the cooling water increases, a temperature rise of 12 °C being a typical figure for cooling tower operation. Those parts of the aircooler where the cooling water enters therefore condense more steam than equal areas where the cooling water leaves. The orifice plates used to feed the separate compartments of the aircooler therefore need to be sized to cater for different mass flows. Up until now aircooler designs have used the same diameter for the orifices in each compartment. This project describes the generation of a computer program which, by combining steam-air mixture mass flow and pressure drop computation with the condensation heat transfer in the corresponding compartment of an aircooler with up to 20 compartments, enables engineers to specify orifice plate sizes which will optimise mass and heat flow to produce the best venting of the condenser. The program was written and compared with field test results. The computer results gave valid explanations for corrosion found in one major condenser and suspected backflow from the collecting duct to the compartments. They also validated an earlier design decision to apply variable aircooler inlet orifice cross sections in order to achieve equal pressure in the aircooler compartments along the tube bundle. The program was applied to a 300 MWel condenser to verify the results of a new design method. Design modifications to the condenser were then introduced to allow for more detailed future comparison between computer results and field test results. The program was written in Fortran after an investigation of several major existing software packages found them inappropriate for this application to steam power station condensers.
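One concrete piece of the sizing problem described above is relating orifice diameter to the mass flow a compartment must pass. A generic sketch using the standard incompressible orifice relation follows; the function name and the typical discharge coefficient are assumptions, and the thesis' Fortran program handles compressible steam-air mixtures, which this simple form does not.

```python
import math

def orifice_diameter(mass_flow, pressure_drop, density, discharge_coeff=0.61):
    """Diameter (m) of a sharp-edged orifice passing `mass_flow` (kg/s) at
    `pressure_drop` (Pa) for a fluid of `density` (kg/m^3), from
    m_dot = Cd * A * sqrt(2 * rho * dP)."""
    area = mass_flow / (discharge_coeff * math.sqrt(2.0 * density * pressure_drop))
    return math.sqrt(4.0 * area / math.pi)
```

Since the required area scales linearly with mass flow, the diameter scales with its square root; this is why compartments condensing more steam need visibly larger orifices than a one-size design provides.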
APA, Harvard, Vancouver, ISO, and other styles
35

Artchounin, Daniel. "Tuning of machine learning algorithms for automatic bug assignment." Thesis, Linköpings universitet, Programvara och system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139230.

Full text
Abstract:
In software development projects, bug triage consists mainly of assigning bug reports to software developers or teams (depending on the project). The partial or total automation of this task would have a positive economic impact on many software projects. This thesis introduces a systematic four-step method to find some of the best configurations of several machine learning algorithms intending to solve the automatic bug assignment problem. These four steps are respectively used to select a combination of pre-processing techniques, a bug report representation, a potential feature selection technique and to tune several classifiers. The aforementioned method has been applied on three software projects: 66 066 bug reports of a proprietary project, 24 450 bug reports of Eclipse JDT and 30 358 bug reports of Mozilla Firefox. 619 configurations have been applied and compared on each of these three projects. In production, using the approach introduced in this work on the bug reports of the proprietary project would have increased the accuracy by up to 16.64 percentage points.
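A bug-assignment classifier of the general kind tuned in this thesis can be illustrated with a tiny from-scratch multinomial Naive Bayes over bag-of-words features. The class name and the toy bug reports are assumptions for the example; the thesis itself compares 619 configurations of several pre-processing techniques, representations, and classifiers rather than this single simplistic model.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTriage:
    """Tiny multinomial Naive Bayes assigning bug reports to teams
    (illustrative only: whitespace tokenization, Laplace smoothing)."""

    def fit(self, reports, teams):
        self.team_counts = Counter(teams)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, team in zip(reports, teams):
            words = text.lower().split()
            self.word_counts[team].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        words = text.lower().split()
        n = sum(self.team_counts.values())
        best_team, best_score = None, float("-inf")
        for team, count in self.team_counts.items():
            score = math.log(count / n)                     # class prior
            total = sum(self.word_counts[team].values())
            for w in words:
                # Laplace-smoothed word likelihood over the shared vocabulary
                score += math.log((self.word_counts[team][w] + 1)
                                  / (total + len(self.vocab)))
            if score > best_score:
                best_team, best_score = team, score
        return best_team
```

In the thesis' pipeline, this classification step would come after the selected pre-processing, representation, and feature-selection stages, and its hyperparameters would be tuned as the fourth step of the method.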
APA, Harvard, Vancouver, ISO, and other styles
36

SHIH, YI-JU, and 石意如. "Identification of cross-platform data query system:Example In depot maintenance plant." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/4668eb.

Full text
Abstract:
Master's thesis
Fo Guang University
Department of Information Application
104
In the past, car repair shop managers had to fill in forms manually to record customer and vehicle registration information. In order to reduce this waste of human resources, we develop a web system in this study to replace the manual process. In particular, the system allows the user to input the relevant information into a database. Moreover, managers can access, delete, modify and query data through the cross-platform system. In addition, managers can use the data to analyze customer needs, including the average amount of each customer's consumption and the typical interval between maintenance visits, thus helping managers reduce the risk of hoarding goods and achieve the best cost strategy. Through the bar codes (QR-Codes) used by this system, managers can quickly query vehicle registration data and historical maintenance records.
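The two customer analytics mentioned in the abstract (average consumption and typical interval between maintenance visits) are straightforward to compute. A sketch with a hypothetical record structure, not the thesis' actual schema:

```python
from datetime import date
from statistics import mean

def customer_stats(records):
    """records: list of (visit_date, amount) tuples for one customer.
    Returns (average spend per visit, average days between visits)."""
    records = sorted(records)                      # chronological order
    avg_spend = mean(amount for _, amount in records)
    gaps = [(b[0] - a[0]).days for a, b in zip(records, records[1:])]
    avg_gap = mean(gaps) if gaps else None         # None if only one visit
    return avg_spend, avg_gap
```

Aggregating these per-customer figures is what lets the managers in the study anticipate demand and avoid over-stocking parts.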
APA, Harvard, Vancouver, ISO, and other styles
37

Santiago, Ana Rita Antunes. "Predictive maintenance mechanisms for heating equipment." Master's thesis, 2019. http://hdl.handle.net/10773/29723.

Full text
Abstract:
Heating appliances such as HVAC systems are susceptible to failures that may result in disruption of important operations. With this in mind, it is relevant to increase the efficiency of those solutions and diminish the number of detected faults. Moreover, understanding why these failures occur may be relevant for future devices. Thus, there is a need to develop methods that allow the identification of eventual failures before they occur. This is only achievable when solutions capable of analyzing data, interpreting it and obtaining knowledge from it are created. This dissertation presents an infrastructure that supports the inspection of failure detection in boilers, making it viable to forecast faults and errors. A major part of the work is data analysis and the creation of procedures that can process it. The main goal is creating an efficient system able to identify, predict and notify the occurrence of failure events.
(Translated from Portuguese:) Heating equipment, such as boilers and air conditioners, is susceptible to failures that can result in the interruption of important operations. It is therefore relevant to increase the efficiency of these solutions and reduce the number of detected faults. Furthermore, understanding why these failures occur becomes important for the creation of future equipment. There is thus a need to develop methods that allow the identification of eventual failures before they occur. This is only possible when solutions capable of analyzing data, interpreting it and obtaining knowledge from it are created. This dissertation presents an infrastructure that supports the inspection of failure detection in boilers, making the prediction of faults and errors viable. An important part of the work is data analysis and the creation of procedures that can process it. The main goal is to create an efficient system capable of identifying, predicting and notifying the occurrence of failure events.
Master's in Computer and Telematics Engineering
APA, Harvard, Vancouver, ISO, and other styles
38

Hood, Shannon. "Action learning in engineering management : using financial data to optimise plant maintenance strategies." 2000. http://arrow.unisa.edu.au/vital/access/manager/Repository/unisa:36823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Sibanda, Ray Takudwanashe Jones. "Reliability improvement of the boiler-coal processing plant in eskom using reliability centred maintenance principles." Thesis, 2016. https://hdl.handle.net/10539/25784.

Full text
Abstract:
A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, in partial fulfillment of the requirements for the degree of Master of Science in Engineering in Mechanical Engineering by course work and research report. 25 October 2016
The objective of this report is to compare the existing maintenance methods in Eskom to RCM and then test the applicability of RCM in improving boiler reliability. Firstly, a comparative study was conducted on different RCM methods, and the RCM method to be applied was selected and compared to Eskom's initiatives. The RCM method was piloted on a sample system and the results were compared with those from the current Eskom initiatives. The biggest change due to the RCM analysis was the way the results of the RCM process are documented, as compared to current practice. It was also found that the intervals between maintenance tasks proposed by the RCM study differ from the intervals currently used. In conclusion, the report recommends that the RCM results be used as a guide for continuous improvement in order to fill in gaps that are crucial in determining reliability goals.
MT 2018
APA, Harvard, Vancouver, ISO, and other styles
40

Coelho, Daniel Filipe Silveira. "A predictive maintenance approach based on time series segmentation." Master's thesis, 2021. http://hdl.handle.net/10773/32120.

Full text
Abstract:
The increase in automation brought by Industry 4.0, combined with growing competitiveness in the market, highlights the importance of intelligent maintenance. Companies must rethink current maintenance strategies in order to detect failures before they occur. This is the motto of predictive maintenance: through the analysis of data from equipment it is possible to predict when failures will occur and act in accordance with the forecast. This project, in addition to developing a platform capable of receiving and processing data in real time from different equipment, also proposes a predictive maintenance approach based on time series segmentation. This new predictive maintenance approach was applied to data from a mechanical press located at Bosch Thermotechnology, S.A., having achieved an efficiency of 90.91%. Throughout the document, all elements of the developed system are discussed in detail, from the data acquisition systems to the sending of forecasts on the condition of the equipment to a visualization platform.
(Translated from Portuguese:) The increase in automation provided by Industry 4.0, together with growing competitiveness in the market, highlights the importance of intelligent maintenance. Companies must rethink current maintenance strategies in order to detect failures in advance. This is the motto of predictive maintenance: through the analysis of equipment data it is possible to predict when failures will occur and act in accordance with the forecast. This project, in addition to developing a platform capable of receiving and processing data in real time from various pieces of equipment, also proposes a predictive maintenance approach based on the segmentation of time series. This new approach was applied to data from a mechanical press at Bosch Thermotechnology, S.A., achieving an efficiency of 90.91%. Throughout the document, all elements of the developed system are covered in detail, from the data acquisition systems to the sending of forecasts about the condition of the equipment to a visualization platform.
Master's in Mechanical Engineering
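A minimal form of the time series segmentation idea underlying this entry (splitting a signal wherever its running mean shifts) might look as follows; the function name and the mean-shift rule are assumptions for illustration, not the approach actually implemented at Bosch.

```python
def segment_by_mean_shift(series, threshold):
    """Split `series` into contiguous segments, starting a new segment
    whenever a sample deviates from the current segment's mean by more
    than `threshold` (a crude online change-point heuristic)."""
    segments, current = [], [series[0]]
    for x in series[1:]:
        seg_mean = sum(current) / len(current)
        if abs(x - seg_mean) > threshold:
            segments.append(current)   # close the current segment
            current = [x]              # start a new one at the change point
        else:
            current.append(x)
    segments.append(current)
    return segments
```

In a predictive maintenance setting, per-segment features (mean, variance, duration) would then be compared against reference segments from healthy press cycles.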
APA, Harvard, Vancouver, ISO, and other styles
41

Lin, Chih-Hsuan, and 林芷璿. "Furniture Data Processing of Building Information Modeling Usage for the Facility Management and Maintenance Phase." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/resvub.

Full text
Abstract:
Master's thesis
National Central University
Department of Civil Engineering
105
In today's life, people often face many challenges when designing their homes. Professional staff usually use various types of professional software for design, but each piece of software belongs to its own professional field: there is architectural software for drawing building models, and there is computer animation software. Using 3D modeling can speed up the design process and eliminate errors. However, furniture is not placed before the building has been built; once the building is complete, furniture is placed during the facility management and maintenance phase. The furniture assets therefore cannot be edited in the architectural or computer animation software, and there is no research on combining furniture with BIM for the facility management and maintenance phase. When transferring a 3D model between different software packages, model data may be lost. Most 3D software uses the FBX (FilmBoX) format to transfer model files. The FBX format is widely used in the 3D modeling field, but it was developed as a proprietary commercial format and cannot easily be read and analyzed. This study used the COLLADA (COLLAborative Design Activity) format to replace the FBX format; COLLADA has passed ISO certification. Based on an analysis of how the COLLADA format processes data, the study presents an interactive home design application in a 3D game engine. In the game engine, users can perform furniture facility management and maintenance in a building, and can simulate purchasing, replacing, moving, and rotating furniture in the system.
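Because COLLADA is plain XML, it can be read and analyzed with standard tools, which is one reason the thesis prefers it over the proprietary FBX format. A minimal sketch, assuming a hand-written fragment and a hypothetical helper name (real .dae scenes nest sources, meshes, and transforms far more deeply):

```python
import xml.etree.ElementTree as ET

# Minimal hand-written COLLADA 1.4.1 fragment for illustration only.
DAE = """<?xml version="1.0"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1">
  <library_geometries>
    <geometry id="chair-mesh" name="Chair"/>
    <geometry id="table-mesh" name="Table"/>
  </library_geometries>
</COLLADA>"""

def geometry_names(dae_text):
    """List the names of all geometries (e.g. furniture meshes) in a
    COLLADA document, handling the COLLADA XML namespace."""
    ns = {"c": "http://www.collada.org/2005/11/COLLADASchema"}
    root = ET.fromstring(dae_text)
    return [g.get("name") for g in root.findall(".//c:geometry", ns)]

print(geometry_names(DAE))
```

A game-engine importer of the kind built in the study would walk the same tree further, pulling vertex sources and materials for each geometry it finds.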
APA, Harvard, Vancouver, ISO, and other styles
42

Silva, Antonio Jose da. "A computer program for assessing the hourly and peak refrigeration loads of an airconditioning constant volume flow plant." Thesis, 2015. http://hdl.handle.net/10539/17018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Blackmore, Andrew Craig. "The variation of ecophysiological trains of Savanna plants, in relation to indices of plant available moisture and nutrients." Thesis, 2016. http://hdl.handle.net/10539/20834.

Full text
Abstract:
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements of the Degree of Magister Scientiae. June 1992
The present study was undertaken within the South African savannas to provide insight into a functional classification of savanna plants using ecophysiological characters. The primary objective of this study was to investigate the variation of these traits throughout the savanna, and to relate this variation to plant available moisture and nutrients. It was concluded that: 1) no formal or specialized strategies have evolved within a number of the study sites, 2) unlike the woody component, neither divergence nor convergence was demonstrated within the grass layer, 3) plant available nutrients did not appear to be a major determinant of either component; although plant available moisture proved to be unimportant in the woody layer, it did play a role as a determinant of the grass layer, and 4) constancy of the plant traits was not demonstrated to occur over the growing season. A successful classification would require the components to be separated, specific determinants to be identified for each component, and an element of time to be included into both edaphic and biotic measurements.
APA, Harvard, Vancouver, ISO, and other styles
44

"Materialized Views over Heterogeneous Structured Data Sources in a Distributed Event Stream Processing Environment." Doctoral diss., 2011. http://hdl.handle.net/2286/R.I.8991.

Full text
Abstract:
Data-driven applications are becoming increasingly complex with support for processing events and data streams in a loosely-coupled distributed environment, providing integrated access to heterogeneous data sources such as relational databases and XML documents. This dissertation explores the use of materialized views over structured heterogeneous data sources to support multiple query optimization in a distributed event stream processing framework that supports such applications involving various query expressions for detecting events, monitoring conditions, handling data streams, and querying data. Materialized views store the results of the computed view so that subsequent access to the view retrieves the materialized results, avoiding the cost of recomputing the entire view from base data sources. Using a service-based metadata repository that provides metadata level access to the various language components in the system, a heuristics-based algorithm detects the common subexpressions from the queries represented in a mixed multigraph model over relational and structured XML data sources. These common subexpressions can be relational, XML or a hybrid join over the heterogeneous data sources. This research examines the challenges in the definition and materialization of views when the heterogeneous data sources are retained in their native format, instead of converting the data to a common model. LINQ serves as the materialized view definition language for creating the view definitions. An algorithm is introduced that uses LINQ to create a data structure for the persistence of these hybrid views. Any changes to base data sources used to materialize views are captured and mapped to a delta structure. The deltas are then streamed within the framework for use in the incremental update of the materialized view.
Algorithms are presented that use the magic sets query optimization approach to both efficiently materialize the views and to propagate the relevant changes to the views for incremental maintenance. Using representative scenarios over structured heterogeneous data sources, an evaluation of the framework demonstrates an improvement in performance. Thus, defining the LINQ-based materialized views over heterogeneous structured data sources using the detected common subexpressions and incrementally maintaining the views by using magic sets enhances the efficiency of the distributed event stream processing environment.
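The dissertation defines its views in LINQ over heterogeneous sources; the underlying idea of incremental maintenance via deltas can nevertheless be sketched in a few lines. The helper names, the simple key-equality join, and the restriction to insertions are all assumptions for this illustration, which does not capture the magic-sets optimization or the hybrid relational/XML joins of the actual work.

```python
def materialize_join(r, s):
    """Full recomputation of the join view R |><| S on the first field (the key)."""
    return {(k, rv, sv) for (k, rv) in r for (k2, sv) in s if k == k2}

def apply_insert_delta(view, delta_r, s):
    """Incremental maintenance for tuples inserted into R:
    V := V  union  (delta-R |><| S), avoiding a full recomputation."""
    return view | {(k, rv, sv) for (k, rv) in delta_r for (k2, sv) in s if k == k2}
```

The correctness property being relied on is that the incrementally maintained view equals the view recomputed from scratch over the updated base relations; the delta join touches only the new tuples, which is where the performance gain reported in the evaluation comes from.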
Dissertation/Thesis
Ph.D. Computer Science 2011
APA, Harvard, Vancouver, ISO, and other styles
45

Kautz, Stefanie [Verfasser]. "Acacia inhabiting Pseudomyrmex ants - integrating physiological, behavioral, chemical and genetic data to understand the maintenance of ant-plant mutualismus / vorgelegt von Stefanie Kautz." 2009. http://d-nb.info/1001988647/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Chang, Kai-Wen, and 張凱雯. "Minimizing Query Processing and View Maintenance Cost with Stochastic Query and Update under a Response Time Constraints in a Data Warehouse System." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/75335680028471707010.

Full text
Abstract:
Master's thesis
National Central University
Institute of Industrial Management
94
A data warehouse is built to answer queries efficiently. View selection chooses a set of views to materialize under constraints, while minimizing the total of query processing cost and view maintenance cost. The update policy decides when to refresh the data in a data warehouse. Previous research dealt with these two problems independently; under real conditions, however, they are correlated with each other. Therefore, simultaneously determining the view selection and the update policy when designing a data warehouse is important. Besides, previous research assumes that query arrival rates and update frequencies are deterministic, which cannot reflect the uncertain query demand of real situations and can lead to incorrect outcomes. Therefore, stochastic arrivals should be considered. In this research, we propose a mathematical model to minimize the total cost when the set of materialized views is known. In the model, we adopt a stochastic view maintenance frequency, which was not considered in previous research. Our model also incorporates stochastic behavior to reflect uncertain queries and uncertain updates with a Poisson process, which is common in real life. The mean system response time is formulated with an M/G/1 model and constrained to lie within a given threshold with a desired probability. As to application, we consider different special cases to implement the mathematical model and a greedy algorithm. A computational analysis is conducted to explore the impact of different constraints and system parameters on view selection. In addition, we also design experiments to evaluate the differences in view selection and its solutions. Finally, we verify through these experiments that the mathematical model and algorithm proposed here are correct and reliable.
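The M/G/1 response-time formulation mentioned in this abstract presumably builds on the standard Pollaczek-Khinchine result for the mean response time of an M/G/1 queue; the thesis' exact probabilistic threshold formulation may differ, so the following is only the textbook form.

```latex
% Mean response time of an M/G/1 queue (Pollaczek-Khinchine formula).
% \lambda : Poisson arrival rate of queries,  S : (general) service time,
% \rho = \lambda E[S] < 1 : server utilization.
E[T] \;=\; E[S] \;+\; \frac{\lambda\, E[S^2]}{2\,(1-\rho)},
\qquad \rho = \lambda E[S].
```

A response-time constraint of the kind described then bounds a quantity of this form (or a tail probability derived from the response-time distribution) while the view-selection objective minimizes total query processing and maintenance cost.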
APA, Harvard, Vancouver, ISO, and other styles
47

"Definition of soil water dynamics by combining hydrometry and geophysics in a hillslope transect in the KNP." Thesis, 2006. http://hdl.handle.net/10413/3441.

Full text
Abstract:
The budgeting of water fluxes in the soil is an extremely complex problem, and is compounded by subsurface controls and environmental forces which modify the soil water dynamics. Of the controlling factors, the underlying geology and the soil media are vital components and are often misinterpreted. The geology and soil media components have been neglected mostly because of the difficulty in monitoring the dominant processes that are linked to the water balance in the subsurface. Until recently, hydrometry has been the dominant method of measuring and monitoring the subsurface water balance. Hydrometric measurements have included water content measurement by Time Domain Reflectometry (TDR), soil water potential measurements through tensiometry and groundwater water level monitoring. Hydrometry is still the preferred method of monitoring soil water dynamics, but measurements are generally localised and lateral accumulations and fluxes of water are difficult to interpret. Using geophysical methods and instrumentation to define soil water dynamics could have numerous advantages over conventional hydrometric methods. Among the geophysical techniques dedicated to image the near surface, Electrical Resistivity Tomography (ERT) surveying has been increasingly used for environmental, engineering and geological purposes during the last decade. The aim of this study is to determine if ERT observations could yield the accuracy required to define vertical and lateral soil water dynamics. The ERT instrumentation uses an electrical current that is inserted into the subsurface through various electrode arrangements and a resulting resistance is determined at the take-out electrodes. With the aid of a modelling package these resistance values are reproduced into a pseudosection of underlying resistivity distribution which is influenced by the moisture conditions of the subsurface medium. 
This geophysical method is primarily used for geological studies, but by doing repeated surveys with the same electrode positioning, moisture fluctuation monitoring can be realised. Use of the ERT technique is at the forefront of soil water dynamics monitoring. The main objective of this study is to propose that ERT instrumentation could be a more efficient and more informative method of studying soil water dynamics than the traditional soil water dynamics monitoring equipment, particularly to define lateral fluxes and accumulation of subsurface water. The study site is a well instrumented transect in the Nkuhlu Exclosures in the Kruger National Park, South Africa, where ongoing soil water dynamics are monitored. The project aims to compare the ERT data to TDR data on a daily basis, over a period of three weeks during the rain season, monitoring event-based wetting and the subsequent drying phases of the soils in a 2-dimensional section. The project and its findings are shown to be valuable to the hydrological interpretation of the subsurface water balance. The application is shown to be particularly important to ecohydrology, in the monitoring of soil water dynamics in a 2-dimensional transect and in understanding how the natural cycles of water distribution and plant uptake are linked together. The study demonstrates that ERT can be used to observe changes in the water storage and lateral fluxes within a transect which supports varying vegetation and ecologies. The linking of water fluxes in the hydrology cycle to uptakes and controls in the ecosystem has developed into the research focus known as ecohydrology, and the use of the ERT instrument can only benefit this research focus in the future. The study demonstrates that ERT instrumentation can be used to provide valuable understanding of subsurface water dynamics and, in turn, the effects on ecohydrology.
Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2006.
APA, Harvard, Vancouver, ISO, and other styles
48

Khesa, Neo. "Exergy analysis and heat integration of a pulverized coal oxy combustion power plant using ASPEN plus." Thesis, 2017. http://hdl.handle.net/10539/22961.

Full text
Abstract:
A dissertation submitted to the faculty of Engineering and the Built Environment, University of the Witwatersrand, in fulfillment of the requirements for the degree of Master of Science in Engineering. 21 November 2016
In this work a comprehensive exergy analysis and heat integration study was carried out on a coal-based oxy-combustion power plant simulated using ASPEN Plus. This is an extension of the work of Fu and Gundersen (2013). Several of the assumptions made in their work have been relaxed here. Their impact was found to be negligible, with the results here matching closely those in the original work. The thermal efficiency penalty was found to be 9.24%, whilst that in the original work was 9.4%. The theoretical minimum efficiency penalty was determined to be 3%, whilst that in the original work was 3.4%. Integrating the compression processes and the steam cycle was determined to have the potential to increase net thermal efficiency by 0.679%. This was close to the 0.72% potential reported in the original work for the same action.
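The thermal efficiency penalty quoted above is simply the drop in net efficiency once the oxy plant's auxiliary loads (air separation unit, CO2 compression) are charged against its output. A minimal sketch with hypothetical plant numbers, chosen only for illustration and not taken from the dissertation:

```python
def net_thermal_efficiency(gross_power_mw, auxiliary_load_mw, fuel_heat_mw):
    """Net thermal efficiency = (gross output - auxiliary loads) / fuel heat input."""
    return (gross_power_mw - auxiliary_load_mw) / fuel_heat_mw

# Hypothetical figures: the same boiler/turbine island, with the oxy case
# carrying a much larger auxiliary load for the ASU and CO2 compressors.
eta_air = net_thermal_efficiency(600.0, 40.0, 1400.0)   # reference air-fired case
eta_oxy = net_thermal_efficiency(600.0, 170.0, 1400.0)  # oxy-combustion case
penalty = eta_air - eta_oxy  # efficiency penalty, in fractional terms
```

Heat integration between the compression trains and the steam cycle recovers some of that auxiliary burden as useful heat, which is the mechanism behind the 0.679% net efficiency gain reported.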
MT2017
APA, Harvard, Vancouver, ISO, and other styles
49

Dlamini, Wisdom Mdumiseni Dabulizwe. "Spatial analysis of invasive alien plant distribution patterns and processes using Bayesian network-based data mining techniques." Thesis, 2016. http://hdl.handle.net/10500/20692.

Full text
Abstract:
Invasive alien plants have widespread ecological and socioeconomic impacts throughout many parts of the world, including Swaziland, where the government declared them a national disaster. Control of these species requires knowledge of the invasion ecology of each species, including how they interact with the invaded environment. Species distribution models are vital for providing solutions to such problems, including the prediction of their niche and distribution. Various modelling approaches are used for species distribution modelling, albeit with limitations resulting from statistical assumptions, implementation and interpretation of outputs. This study explores the usefulness of Bayesian networks (BNs) due to their ability to model stochastic, nonlinear inter-causal relationships and uncertainty. Data-driven BNs were used to explore patterns and processes influencing the spatial distribution of 16 priority invasive alien plants in Swaziland. Various BN structure learning algorithms were applied within the Weka software to build models from a set of 170 variables incorporating climatic, anthropogenic, topo-edaphic and landscape factors. While all the BN models produced accurate predictions of alien plant invasion, the globally scored networks, particularly the hill climbing algorithms, performed relatively well. However, when considering the probabilistic outputs, the constraint-based Inferred Causation algorithm, which attempts to generate a causal BN structure, performed relatively better. The learned BNs reveal that the main pathways of alien plants into new areas are ruderal areas such as road verges and riverbanks, whilst humans and human activity are key driving factors and the main dispersal mechanism. However, the distribution of most of the species is constrained by climate, particularly tolerance to very low temperatures and precipitation seasonality. Biotic interactions and/or associations among the species are also prevalent.
The findings suggest that most of the species will proliferate by extending their range resulting in the whole country being at risk of further invasion. The ability of BNs to express uncertain, rather complex conditional and probabilistic dependencies and to combine multisource data makes them an attractive technique for species distribution modeling, especially as joint invasive species distribution models (JiSDM). Suggestions for further research are provided including the need for rigorous invasive species monitoring, data stewardship and testing more BN learning algorithms.
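The score-based ("globally scored") structure learning the abstract contrasts with constraint-based methods can be sketched in miniature: greedily add the parent that most improves a BIC score until no addition helps. This is a toy stand-alone sketch, not the Weka implementation used in the study; the data and variable names are hypothetical:

```python
import math
from collections import Counter

def bic_score(data, child, parents):
    """BIC score of one discrete node given its parents.

    data -- list of rows, each a dict mapping variable name -> discrete value.
    """
    n = len(data)
    joint, marg, child_vals = Counter(), Counter(), set()
    for row in data:
        pcfg = tuple(row[p] for p in parents)  # parent configuration
        joint[(pcfg, row[child])] += 1
        marg[pcfg] += 1
        child_vals.add(row[child])
    # Maximum-likelihood log-likelihood of the child given its parents.
    ll = sum(c * math.log(c / marg[pcfg]) for (pcfg, _), c in joint.items())
    # Penalty: (|child states| - 1) free parameters per parent configuration.
    k = (len(child_vals) - 1) * max(len(marg), 1)
    return ll - 0.5 * k * math.log(n)

def greedy_parents(data, child, candidates):
    """Hill climbing: repeatedly add the candidate parent with the
    largest BIC gain; stop when no addition improves the score."""
    parents = []
    while True:
        base = bic_score(data, child, parents)
        best, best_gain = None, 0.0
        for c in candidates:
            if c in parents:
                continue
            gain = bic_score(data, child, parents + [c]) - base
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:
            return parents
        parents.append(best)
```

On data where a response `Y` depends on `X1` but not `X2`, the search picks up the real parent and the BIC penalty rejects the spurious one, which is the mechanism that lets the scored networks stay compact across the study's 170 candidate variables.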
Environmental Sciences
D. Phil. (Environmental Science)
APA, Harvard, Vancouver, ISO, and other styles
50

Buckle, Warren Dean. "Renewal of a linear electrical network simulator into Ada." Thesis, 1993. http://hdl.handle.net/10539/21780.

Full text
Abstract:
A dissertation submitted to the Faculty of Engineering, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering. Johannesburg, 1993
Renewal is the extraction of the intellectual content (algorithms, data structures) from an existing program, and the building of a new, more maintainable program using more modern programming methods and languages. A survey of software structure and its effect on maintenance highlighted the different hierarchies produced by functional and object-oriented design methods. Elecsim, a linear circuit simulator written in Pascal, was chosen as the existing program to be renewed. The new version follows the approach of decoupling the user interface and introducing an explicit scheduler. The object-oriented design technique is used extensively. Other issues addressed include online help and documentation for the program. Conclusions that are generally applicable are drawn from the specific lessons learnt in the Elecsim/Elector case study.
MT2017
APA, Harvard, Vancouver, ISO, and other styles