Dissertations / Theses on the topic 'Real time prediction'

Consult the top 50 dissertations / theses for your research on the topic 'Real time prediction.'

1

Neikter, Carl-Fredrik. "Cache Prediction and Execution Time Analysis on Real-Time MPSoC." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15394.

Abstract:

Real-time systems require not only that the logical operations be correct; it is equally important that the specified timing constraints always be met. This has been studied successfully for mono-processor systems. However, as the hardware in such systems grows more complex, the previous approaches become invalid. For example, multi-processor systems-on-chip (MPSoC) are increasingly common, and with a shared memory the bus access time is unpredictable in nature. That problem has recently been resolved, but a safe and not overly pessimistic cache analysis approach for MPSoC had not been investigated before. This thesis has resulted in designed and implemented algorithms for cache analysis on real-time MPSoC with a shared communication infrastructure. An additional advantage is that the algorithms improve on previous approaches for mono-processor systems. The algorithms were verified with the help of data-flow analysis theory. Furthermore, it is not known how different cache-miss characteristics of a task influence the worst-case execution time on MPSoC. Therefore, a program was constructed that generates randomized tasks according to parameters which can, for example, influence the complexity of the control-flow graph and the average distance between cache misses.
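The data-flow flavor of such a cache analysis can be illustrated with a minimal fixpoint iteration over a control-flow graph. The sketch below is invented for illustration, not the thesis's MPSoC algorithm: a fully associative two-line LRU cache, abstract "must" states joined by intersection at merge points, and a tiny diamond-shaped CFG.

```python
CACHE_LINES = 2  # fully associative cache with 2 lines, LRU replacement

def lru_update(state, block):
    # abstract LRU update: the accessed block becomes the youngest line
    state = tuple(b for b in state if b != block)
    return ((block,) + state)[:CACHE_LINES]

def must_join(s1, s2):
    # "must" join at CFG merge points: keep a block only if it is guaranteed
    # to be cached on every incoming path, at its worst (oldest) age
    if s1 is None:
        return s2
    if s2 is None:
        return s1
    common = [b for b in s1 if b in s2]
    common.sort(key=lambda b: max(s1.index(b), s2.index(b)))
    return tuple(common)

def analyze(cfg, access, entry):
    # fixpoint iteration of abstract cache states over the CFG
    instate = {n: None for n in cfg}
    instate[entry] = ()  # nothing is guaranteed to be cached at entry
    changed = True
    while changed:
        changed = False
        for node in cfg:
            if instate[node] is None:
                continue
            out = lru_update(instate[node], access[node])
            for succ in cfg[node]:
                new = must_join(instate[succ], out)
                if new != instate[succ]:
                    instate[succ] = new
                    changed = True
    return instate

# invented diamond CFG; each node accesses one memory block
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
access = {"A": "a", "B": "b", "C": "a", "D": "a"}
instate = analyze(cfg, access, "A")
# an access is an "always hit" if its block is in the must-state on entry
hits = {n for n in cfg if instate[n] and access[n] in instate[n]}
```

Here block "a" survives the join at D because it is cached on both incoming paths, so the accesses at C and D are classified as guaranteed hits.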

2

Chen, Hao. "Real-time Traffic State Prediction: Modeling and Applications." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/64292.

Abstract:
Travel-time information is essential in Advanced Traveler Information Systems (ATISs) and Advanced Traffic Management Systems (ATMSs). A key component of these systems is the prediction of the spatiotemporal evolution of roadway traffic state and travel time. From the perspective of travelers, such information can result in better route choice and departure time decisions. From the transportation agency perspective, such data provide enhanced information with which to better manage and control the transportation system to reduce congestion, enhance safety, and reduce the system's carbon footprint. The objective of the research presented in this dissertation is to develop a framework that includes three major categories of methodologies to predict the spatiotemporal evolution of the traffic state. The proposed methodologies include macroscopic traffic modeling, computer vision, and recursive probabilistic algorithms. Each developed method attempts to predict traffic state, including roadway travel times, for different prediction horizons. In total, the developed multi-tool framework produces traffic state prediction algorithms ranging from short-term (0-5 minutes) to medium-term (1-4 hours), considering departure times up to an hour into the future. The dissertation first develops a particle filter approach for use in short-term traffic state prediction. The flow continuity equation is combined with the Van Aerde fundamental diagram to derive a time series model that can accurately describe the spatiotemporal evolution of traffic state. The developed model is applied within a particle filter approach to provide multi-step traffic state prediction. Testing of the algorithm on a simulated section of I-66 demonstrates that the proposed algorithm can accurately predict the propagation of shockwaves up to five minutes into the future.
The developed algorithm is further improved by incorporating on- and off-ramp effects and more realistic boundary conditions. Furthermore, the case study demonstrates that the improved algorithm produces a 50 percent reduction in the prediction error compared to the classic LWR density formulation. Considering the fact that the prediction accuracy deteriorates significantly for longer prediction horizons, historical data are integrated and considered in the measurement update in the developed particle filter approach to extend the prediction horizon up to half an hour into the future. The dissertation then develops a travel time prediction framework using pattern recognition techniques to match historical data with real-time traffic conditions. The Euclidean distance is initially used as the measure of similarity between current and historical traffic patterns. This method is further improved using a dynamic template matching technique developed as part of this research effort. Unlike previous approaches, which use fixed template sizes, the proposed method uses a dynamic template size that is updated each time interval based on the spatiotemporal shape of the congestion upstream of a bottleneck. In addition, the computational cost is reduced using a Fast Fourier Transform instead of a Euclidean distance measure. Subsequently, the historical candidates that are similar to the current conditions are used to predict the experienced travel times. Test results demonstrate that the proposed dynamic template matching method produces significantly better and more stable prediction results for prediction horizons up to 30 minutes into the future for a two hour trip (prediction horizon of two and a half hours) compared to other state-of-the-practice and state-of-the-art methods. Finally, the dissertation develops recursive probabilistic approaches including particle filtering and agent-based modeling methods to predict travel times further into the future. 
Given the challenges in defining the particle filter time update process, the proposed particle filtering algorithm selects particles from a historical dataset and propagates particles using data trends of past experiences as opposed to using a state-transition model. A partial resampling strategy is then developed to address the degeneracy problem in the particle filtering process. INRIX probe data along I-64 and I-264 from Richmond to Virginia Beach are used to test the proposed algorithm. The results demonstrate that the particle filtering approach produces less than a 10 percent prediction error for trip departures up to one hour into the future for a two hour trip. Furthermore, the dissertation develops an agent-based modeling approach to predict travel times using real-time and historical spatiotemporal traffic data. At the microscopic level, each agent represents an expert in the decision making system, which predicts the travel time for each time interval according to past experiences from a historical dataset. A set of agent interactions are developed to preserve agents that correspond to traffic patterns similar to the real-time measurements and replace invalid agents or agents with negligible weights with new agents. Consequently, the aggregation of each agent's recommendation (predicted travel time with associated weight) provides a macroscopic level of output – predicted travel time distribution. The case study demonstrated that the agent-based model produces less than a 9 percent prediction error for prediction horizons up to one hour into the future.
Ph.D.
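The predict/weight/resample cycle at the core of a particle filter like the one described can be illustrated with a deliberately simplified scalar state. The dissertation's actual state model couples the flow continuity equation with the Van Aerde fundamental diagram; the toy dynamics, noise levels, and detector readings below are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition(x):
    # toy stand-in for the traffic state dynamics (time update)
    return 0.95 * x + rng.normal(0.0, 1.0, size=x.shape)

def likelihood(x, z, sigma=2.0):
    # measurement model: detector reading z observes the state with noise
    return np.exp(-0.5 * ((z - x) / sigma) ** 2)

def particle_filter_step(particles, weights, z):
    particles = transition(particles)              # time update
    weights = weights * likelihood(particles, z)   # measurement update
    weights /= weights.sum()
    # systematic resampling to counter weight degeneracy
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)

particles = rng.normal(50.0, 10.0, 500)   # e.g. density in veh/km
weights = np.full(500, 1.0 / 500)
for z in [48.0, 47.0, 45.5]:              # simulated detector readings
    particles, weights = particle_filter_step(particles, weights, z)
estimate = particles.mean()               # point prediction of the state
```

Multi-step prediction then amounts to applying the time update repeatedly without measurement updates, which is why accuracy degrades with the prediction horizon.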
3

Gross, Hans-Gerhard. "Measuring evolutionary testability of real-time software." Thesis, University of South Wales, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365087.

4

Brune, Sascha. "Landslide generated tsunamis : numerical modeling and real-time prediction." PhD thesis, Universität Potsdam, 2009. http://opus.kobv.de/ubp/volltexte/2009/3298/.

Abstract:
Submarine landslides can generate local tsunamis posing a hazard to human lives and coastal facilities. Two major related problems are: (i) quantitative estimation of tsunami hazard and (ii) early detection of the most dangerous landslides. This thesis focuses on both those issues by providing numerical modeling of landslide-induced tsunamis and by suggesting and justifying a new method for fast detection of tsunamigenic landslides by means of tiltmeters. Due to the proximity of the Sunda subduction zone, Indonesian coasts are prone not only to earthquake tsunamis but also to landslide tsunamis. The aim of the GITEWS project (German-Indonesian Tsunami Early Warning System) is to provide fast and reliable tsunami warnings, but also to deepen the knowledge about tsunami hazards. New bathymetric data at the Sunda Arc provide the opportunity to evaluate the hazard potential of landslide tsunamis for the adjacent Indonesian islands. I present nine large mass movements in proximity to Sumatra, Java, Sumbawa and Sumba, the largest of which displaced 20 km³ of sediments. Using numerical modeling, I compute the generated tsunami of each event, its propagation and runup at the coast. Moreover, I investigate the age of the largest slope failures by relating them to the Great 1977 Sumba earthquake. Continental slopes off northwest Europe are well known for their history of huge underwater landslides. The current geological situation west of Spitsbergen is comparable to the continental margin off Norway after the last glaciation, when the large tsunamigenic Storegga slide took place. The influence of Arctic warming on the stability of the Svalbard glacial margin is discussed. Based on new geophysical data, I present four possible landslide scenarios and compute the generated tsunamis. Waves of 6 m height would be capable of reaching northwest Europe, threatening coastal areas.
I present a novel technique to detect large submarine landslides using an array of tiltmeters, as a possible tool in future tsunami early warning systems. The dislocation of a large amount of sediment during a landslide produces a permanent elastic response of the earth. I analyze this response with a mathematical model and calculate the theoretical tilt signal. Applications to the hypothetical Spitsbergen event and the historical Storegga slide show tilt signals exceeding 1000 nrad. The amplitude of landslide tsunamis is controlled by the product of slide volume and maximal velocity (the slide's tsunamigenic potential). I introduce an inversion routine that provides slide location and tsunamigenic potential based on tiltmeter measurements. The accuracy of the inversion, and of the estimated tsunami height near the coast, depends on the noise level of the tiltmeter measurements, the distance of the tiltmeters from the slide, and the slide's tsunamigenic potential. Finally, I estimate the applicability scope of this method by applying it to known landslide events worldwide.
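The inversion idea can be illustrated with a toy forward model. Here the tilt amplitude is assumed to scale linearly with the tsunamigenic potential (slide volume times maximal velocity) and to decay with the cube of the distance to the slide; this decay law, the station geometry, and all numbers are invented for illustration and are not the elastic dislocation model of the thesis.

```python
import numpy as np

def tilt_forward(stations, slide_xy, potential):
    # toy model: tilt ~ potential / distance^3 (units arbitrary)
    d = np.linalg.norm(stations - slide_xy, axis=1)
    return potential / d ** 3

# four hypothetical tiltmeter stations at the corners of a 100 x 100 area
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_xy, true_p = np.array([40.0, 60.0]), 5.0
obs = tilt_forward(stations, true_xy, true_p)   # synthetic "measurements"

# grid search over slide location; the potential enters linearly, so it is
# recovered in closed form (least squares) at each candidate location
best = None
for x in np.linspace(5, 95, 91):
    for y in np.linspace(5, 95, 91):
        g = tilt_forward(stations, np.array([x, y]), 1.0)
        p = obs @ g / (g @ g)                   # best-fit potential
        err = np.sum((obs - p * g) ** 2)
        if best is None or err < best[0]:
            best = (err, x, y, p)
_, x_hat, y_hat, p_hat = best
```

With noise-free synthetic data the search recovers the true location and potential exactly; with noisy data the recovery error grows with station noise and distance, mirroring the accuracy dependence described in the abstract.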
5

Raykhel, Ilya. "Real-time automatic price prediction for eBay online trading." Diss., Brigham Young University, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2697.pdf.

6

Cosma, Andrei Claudiu. "Real-Time Individual Thermal Preferences Prediction Using Visual Sensors." Thesis, The George Washington University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=13422566.

Abstract:

The thermal comfort of a building's occupants is an important aspect of building design. Providing an increased level of thermal comfort is critical given that humans spend the majority of the day indoors, and that their well-being, productivity, and comfort depend on the quality of these environments. Today, Heating, Ventilation, and Air Conditioning (HVAC) systems deliver heated or cooled air based on a fixed operating point or target temperature; individuals or building managers can adjust this operating point only by communicating dissatisfaction. What is currently lacking is the automatic, real-time detection of an individual's thermal preferences and the integration of those measurements into an HVAC system controller.

To achieve this, a non-invasive approach to automatically predict personal thermal comfort and the mean time to discomfort in real-time is proposed and studied in this thesis. The goal of this research is to explore the consequences of human body thermoregulation on skin temperature and tone as a means to predict thermal comfort. For this reason, the temperature information extracted from multiple local body parts, and the skin tone information extracted from the face will be investigated as a means to model individual thermal preferences.

In a first study, we proposed a real-time system for individual thermal preferences prediction in transient conditions using temperature values from multiple local body parts. The proposed solution consists of a novel visual sensing platform, which we called RGB-DT, that fused information from three sensors: a color camera, a depth sensor, and a thermographic camera. This platform was used to extract skin and clothing temperature from multiple local body parts in real-time. Using this method, personal thermal comfort was predicted with more than 80% accuracy, while mean time to warm discomfort was predicted with more than 85% accuracy.

In a second study, we introduced a new visual sensing platform and method that uses a single thermal image of the occupant to predict personal thermal comfort. We focused on close-up images of the occupant’s face to extract fine-grained details of the skin temperature. We extracted manually selected features, as well as a set of automated features. Results showed that the automated features outperformed the manual features in all the tests that were run, and that these features predicted personal thermal comfort with more than 76% accuracy.

The last proposed study analyzed the thermoregulation activity at the face level to predict skin temperature in the context of thermal comfort assessment. This solution uses a single color camera to model thermoregulation based on the side effects of the vasodilatation and vasoconstriction. To achieve this, new methods to isolate skin tone response to an individual’s thermal regulation were explored. The relation between the extracted skin tone measurement and the skin temperature was analyzed using a regression model.

Our experiments showed that a thermal model generated using noninvasive and contactless visual sensors could be used to accurately predict individual thermal preferences in real-time. Therefore, instantaneous feedback with respect to the occupants' thermal comfort can be provided to the HVAC system controller to adjust the room temperature.

7

Raykhel, Ilya Igorevitch. "Real-Time Automatic Price Prediction for eBay Online Trading." BYU ScholarsArchive, 2008. https://scholarsarchive.byu.edu/etd/1631.

Abstract:
While Machine Learning is one of the most popular research areas in Computer Science, there are still only a few deployed applications intended for use by the general public. We have developed an exemplary application that can be directly applied to eBay trading. Our system predicts how much an item would sell for on eBay based on that item's attributes. We ran our experiments on the eBay laptop category, with prior trades used as training data. The system implements a feature-weighted k-Nearest Neighbor algorithm, using genetic algorithms to determine feature weights. Our results demonstrate an average prediction error of 16%; we have also shown that this application greatly reduces the time a reseller would need to spend on trading activities, since the bulk of market research is now done automatically with the help of the learned model.
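The core of the described system can be sketched in a few lines. The listing below is an invented miniature: three made-up laptop features and five made-up prior trades stand in for the real eBay data, and the genetically evolved feature weights are simply assumed.

```python
import numpy as np

def weighted_knn_price(train_X, train_y, weights, query, k=3):
    # feature-weighted Euclidean distance, then average the k nearest prices
    d = np.sqrt((((train_X - query) * weights) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return train_y[nearest].mean()

# hypothetical laptop listings: [RAM (GB), screen size (in), CPU clock (GHz)]
train_X = np.array([[2.0, 13.0, 1.6], [4.0, 15.0, 2.0], [8.0, 15.0, 2.4],
                    [4.0, 13.0, 1.8], [8.0, 17.0, 2.6]])
train_y = np.array([250.0, 400.0, 650.0, 350.0, 800.0])  # sale prices (USD)
weights = np.array([2.0, 0.5, 1.5])  # stand-in for GA-evolved feature weights
price = weighted_knn_price(train_X, train_y, weights, np.array([8.0, 15.0, 2.2]))
```

In the thesis, a genetic algorithm searches for the weight vector that minimizes prediction error on held-out trades; here the weights are fixed to keep the sketch self-contained.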
8

Su, Yibing. "Real-time prediction of stream water temperature for Iowa." Thesis, University of Iowa, 2017. https://ir.uiowa.edu/etd/5653.

Abstract:
In the agricultural state of Iowa, water quality research is of great importance for monitoring and managing the health of aquatic systems. Among the many water quality parameters, water temperature is a critical variable that governs the rates of the chemical and biological processes which affect river health. The main objective of this thesis is to develop a real-time, high-resolution predictive stream temperature model for the entire state of Iowa. A statistical model based solely on the water-air temperature relationship was developed using a logistic regression approach. With hourly High Resolution Rapid Refresh (HRRR) air temperature estimates, the implemented stream temperature model produces current state-wide estimates. The results are updated hourly in real time and presented on a web-based visualization platform: the Iowa Water Quality Information System, Beta version (IWQIS Beta). Streams of 4th order and up are color-coded according to the estimated temperatures. Hourly forecasts for lead times of up to 18 hours are also available. Separate models were developed for the spring (March to May), summer (June to August), and autumn (September to November) seasons. Model estimates for 2016 yield an average RMSE of less than 3 °C for the three seasons, with a summer RMSE below 2 °C. The model is transferable to basins of different catchment sizes within the state of Iowa and requires hourly air temperature as the only input variable. The product will assist Iowa water quality research and provide information to support public management decisions.
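A logistic air-water temperature relation of the kind described can be sketched as below, in the spirit of the widely used Mohseni-style S-curve. Whether the thesis uses exactly this parameterization is an assumption, and the parameter values are invented for illustration.

```python
import numpy as np

def logistic_stream_temp(t_air, alpha=30.0, beta=12.0, gamma=0.2):
    # S-shaped water-air relation: bounded near 0 degC in cold air and by the
    # asymptote alpha in hot air; alpha, beta, gamma are illustrative values
    return alpha / (1.0 + np.exp(gamma * (beta - t_air)))

t_air = np.array([-5.0, 5.0, 15.0, 25.0, 35.0])   # hourly air temps (degC)
t_water = logistic_stream_temp(t_air)             # predicted stream temps
```

The bounded S-shape is what makes this form preferable to a straight line: water temperature cannot drop far below freezing in winter or track extreme air temperatures in summer.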
9

Naye, Edouard. "Real-time arrival prediction models for light rail train systems." Thesis, KTH, Systemanalys och ekonomi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-170645.

Abstract:
One of the main objectives of public transport operators is to adhere to the planned timetable and to provide accurate information to passengers in order to improve actual and perceived service reliability. The aim of this thesis is to address the following question: how can the accuracy of a prediction system for light rail systems be measured and improved? The real-time prediction is an output of an Automatic Vehicle Location system, which computes the predictions. In order to improve a system, it is first important to understand how it works. The mechanism of the prediction computation is analyzed, and each part of the process is studied in search of potential improvements. The first part of the prediction scheme development consists of a statistical analysis of historical data to provide the reference travel times and dwell times and their variations over a day or a week. Then, two models (the designed-speed model and the speed/position model) are studied to estimate the remaining time to reach the downstream stop. This estimation is mainly based on current data (vehicle position and speed). The proposed prediction schemes were implemented and applied to a case study light rail line: Bybanen, the light rail system in Bergen. Real-time information displays are available at all platforms and show the waiting times for the next two light rail trains. This study focuses on improving the accuracy of these waiting time predictions. In order to establish and analyze the performance of the current prediction scheme, a model for reproducing its computations was developed. The possible improvements were then implemented in the model, and the accuracy of the new predictions was compared to the base case. The assessment and comparison of prediction systems are not trivial tasks. Which predictions should be taken into account?
How does the model identify inconsistency in the data? How could the perception of passengers be taken into account? A set of measures has been used to evaluate the alternative prediction schemes. The comparison of the different models shows that it is possible to improve the accuracy of short-term predictions, but it is more difficult to improve long-term predictions, because the uncertainty of small changes has more impact over longer horizons. This thesis shows that the reference travel times and dwell times should be set to the most common value rather than the average, which is too sensitive to high values. Moreover, the dwell time variations are related to the passenger flow. The most accurate and efficient model is the designed-speed model. The speed/position model is slightly less accurate, except in the case of disturbances along the line, but its modularity makes further improvements easier. Finally, this thesis highlights the time-dependent variations of dwell time in a light rail system. It could be interesting to analyze the behavior of variations between two consecutive dwell times and to implement a forgetting factor. Moreover, the speed/position model shows very good results, and a better understanding of driver behavior is key to improving it. The differences between the models would probably be larger for a medium-distance train system, which could be an interesting extension of this thesis.
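The designed-speed model reduces to running the remaining distance at a fixed design speed and adding reference dwell times at intermediate stops. A minimal sketch with invented distances, speed, and dwell times (not Bybanen data):

```python
def designed_speed_prediction(dist_to_stops_m, design_speed_ms, dwell_times_s):
    """Remaining time to each downstream stop under a designed-speed model:
    run the remaining distance at a fixed design speed and add the reference
    dwell time at every intermediate stop. All numbers are illustrative."""
    eta, t, prev = [], 0.0, 0.0
    for dist, dwell in zip(dist_to_stops_m, dwell_times_s):
        t += (dist - prev) / design_speed_ms  # running time on this link
        eta.append(t)                         # arrival at this stop
        t += dwell                            # dwell counts toward later stops
        prev = dist
    return eta

# vehicle is 300 m before stop 1; stop 2 is a further 900 m; speed 10 m/s
etas = designed_speed_prediction([300.0, 1200.0], 10.0, [20.0, 20.0])
# etas[0] = 30.0 s to stop 1; etas[1] = 30 + 20 + 90 = 140.0 s to stop 2
```

The speed/position model differs by replacing the fixed design speed with the vehicle's current speed and position, which is why it responds better to disturbances.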
10

Bataineh, Mohammad Hindi. "New neural network for real-time human dynamic motion prediction." Thesis, The University of Iowa, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3711174.

Abstract:

Artificial neural networks (ANNs) have been used successfully in various practical problems. Though extensive improvements have been made to different types of ANNs, each design still has its own limitations. Existing digital human models are mature enough to provide accurate and useful results for different tasks and scenarios under various conditions. There is, however, a critical need for these models to run in real time, especially for large-scale problems like motion prediction, which can be computationally demanding. For even small changes to the task conditions, the motion simulation needs to run for a relatively long time (minutes to tens of minutes). Thus, the number of training cases can be limited by the computational time and cost associated with collecting training data. In addition, the motion problem is relatively large with respect to the number of outputs, with hundreds of outputs (500-700) to predict for a single problem. These necessities of motion problems motivate the use of tools like the ANN in this work.

This work introduces new algorithms for the design of the radial-basis network (RBN) for problems with minimal available training data. The new RBN design incorporates new training stages with approaches to facilitate proper setting of necessary network parameters. The use of training algorithms with minimal heuristics allows the new RBN design to produce results with quality that none of the competing methods have achieved. The new RBN design, called Opt_RBN, is tested on experimental and practical problems, and the results outperform those produced from standard regression and ANN models. In general, the Opt_RBN shows stable and robust performance for a given set of training cases.

When the Opt_RBN is applied on the large-scale motion prediction application, the network experiences a CPU memory issue when performing the optimization step in the training process. Therefore, new algorithms are introduced to modify some steps of the new Opt_RBN training process to address the memory issue. The modified steps should only be used for large-scale applications similar to the motion problem. The new RBN design proposes an ANN that is capable of improved learning without needing more training data. Although the new design is driven by its use with motion prediction problems, the consequent ANN design can be used with a broad range of large-scale problems in various engineering and industrial fields that experience delay issues when running computational tools that require a massive number of procedures and a great deal of CPU memory.

The results of evaluating the modified Opt_RBN design on two motion problems are promising, with relatively small errors obtained when predicting approximately 500-700 outputs. In addition, new methods for constraint implementation within the new RBN design are introduced. Moreover, the new RBN design and its associated parameters are used as a tool for simulated task analysis. This work initiates the idea that output weights (W) can be used to determine the most critical basis functions that cause the greatest reduction in the network test error. Then, the critical basis functions can specify the most significant training cases that are responsible for the proper performance achieved by the network. The inputs with the most change in value can be extracted from the basis function centers (U) in order to determine the dominant inputs. The outputs with the most change in value and their corresponding key body degrees-of-freedom for a motion task can also be specified using the training cases that are used to create the network's basis functions.
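The basic structure being tuned — Gaussian basis functions with centers U and a linear output layer with weights W — can be sketched as follows. The toy 1-D fitting problem, the fixed widths, and the plain least-squares solve for W are illustrative stand-ins for the Opt_RBN training stages described above.

```python
import numpy as np

def rbn_predict(X, centers, widths, W):
    # Gaussian hidden layer from distances to the basis-function centers U,
    # followed by a linear output layer with weights W
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    H = np.exp(-d2 / (2.0 * widths ** 2))   # (n_samples, n_basis)
    return H @ W                            # (n_samples, n_outputs)

# toy 1-D problem: fit sin(2*pi*x) with 8 fixed Gaussian basis functions
X = np.linspace(0.0, 1.0, 40)[:, None]
y = np.sin(2.0 * np.pi * X)
centers = np.linspace(0.0, 1.0, 8)[:, None]
widths = np.full(8, 0.15)

# solve only the output weights by linear least squares
d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
H = np.exp(-d2 / (2.0 * widths ** 2))
W, *_ = np.linalg.lstsq(H, y, rcond=None)
err = np.abs(rbn_predict(X, centers, widths, W) - y).max()
```

Because the output layer is linear in W, inspecting the magnitudes of W is a natural way to rank basis functions by influence, which is the idea behind the task-analysis use of the output weights described above.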

11

Loutos, Gerasimos. "Development of prediction schemes for real-time bus arrival information." Thesis, KTH, Transportvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-145939.

Abstract:
Intelligent Transport Systems (ITS) are increasingly used in public transport in order to provide real-time information (RTI) to passengers and operators. In particular, the RTI related to predicting the remaining time until the arrival of the next vehicle is the most commonly provided information and the main focus of research. A number of prediction methods have been proposed without clear evidence of their real-world applicability, mainly because of their high computational complexity. Moreover, new sources of information that could be used in RTI generators are becoming available but have not yet been utilized. This thesis formulates a widely used real-world RTI generation method, which is based on the scheduled travel time. Then, the potential contribution of real-time public transport data to RTI generation is investigated. Furthermore, a method is proposed that considers both recent downstream running time information and anticipated headways, together with their impact on downstream dwell times. The generated predictions are compared against empirical bus arrival data in order to analyse the performance of the different schemes. Automatic Vehicle Location (AVL) data from the trunk bus network in Stockholm were used for the evaluation of the proposed prediction schemes. The results illustrate the successful introduction of a robust methodology for bus arrival predictions, which outperforms the currently applied RTI generator. By integrating real-time public transport data, this methodology is expected to significantly reduce passengers' waiting times. In addition, the second proposed method provides a milestone for incorporating the dwell time component in the computation of RTI.
12

Bernhardsson, Viktor, and Rasmus Ringdahl. "Real time highway traffic prediction based on dynamic demand modeling." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-112094.

Abstract:
Traffic problems caused by congestion are increasing in cities all over the world. As a traffic management tool, traffic predictions can be used to take preventive action against congestion. Mobile Millennium Stockholm (MMS) is a software system for traffic state estimation that is part of a project for providing real-time traffic information. In this thesis, a framework for running traffic predictions in the MMS software has been implemented and tested on a stretch north of Stockholm. The thesis focuses on the implementation and evaluation of traffic prediction by running a cell transmission model (CTM) forward in time. This method gives reliable predictions for a prediction horizon of up to 5 minutes. In order to improve the results, a framework for dynamic inputs of demand and sink capacity has been implemented in the MMS system. The third part of the thesis presents a model which adjusts the split ratios in a macroscopic traffic model based on driver behavior during congestion.
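Running a cell transmission model forward in time amounts to repeatedly computing sending/receiving flows between cells and updating densities. A minimal sketch with a triangular fundamental diagram and invented parameter values (not the MMS calibration):

```python
def ctm_step(density, demand_in, dt=5.0, dx=200.0,
             v=25.0, w=6.0, rho_jam=0.15, q_max=0.5):
    # one update of a cell transmission model with a triangular fundamental
    # diagram; units: veh/m, veh/s, m, s (all values invented for illustration)
    n = len(density)
    sending = [min(v * rho, q_max) for rho in density]
    receiving = [min(w * (rho_jam - rho), q_max) for rho in density]
    flows = [min(demand_in, receiving[0])]          # inflow at the boundary
    for i in range(1, n):
        flows.append(min(sending[i - 1], receiving[i]))
    flows.append(sending[-1])                       # free outflow at the exit
    return [rho + dt / dx * (flows[i] - flows[i + 1])
            for i, rho in enumerate(density)]

density = [0.02, 0.03, 0.10]     # initial densities of three cells (veh/m)
for _ in range(12):              # predict one minute ahead in 5 s steps
    density = ctm_step(density, demand_in=0.4)
```

The boundary inflow and the exit behavior are exactly the dynamic demand and sink-capacity inputs the thesis makes time-varying; here they are held constant for brevity. The time step must satisfy the CFL condition (v*dt <= dx) for the update to stay stable.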
13

Tong, Xianqiao. "Real-time Prediction of Dynamic Systems Based on Computer Modeling." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/47361.

Full text
Abstract:
This dissertation proposes a novel computer modeling technique (DTFLOP modeling) to predict the real-time behavior of dynamic systems. The proposed DTFLOP modeling classifies the computation into sequential computation, which is conducted on the CPU, and parallel computation, which is performed on the GPU. It formulates the data transmission between the CPU and the GPU using memory access speed parameters, and relates the floating-point operations to be carried out on the CPU and the GPU to their respective calculation rates. With the help of the proposed DTFLOP modeling, it is possible to estimate the time cost of computing the model that represents a dynamic system on a given computer. The proposed DTFLOP modeling can be utilized as a general method to analyze the computation of a model of a dynamic system, and two real-life systems are selected to demonstrate its performance: a cooperative autonomous vehicle system and a full-field measurement system. For the cooperative autonomous vehicle system, a novel parallel grid-based RBE technique is first proposed. The formulations are derived by identifying the parallel computation in the prediction and correction processes of the RBE. A belief fusion technique, which fuses not only the observation information but also the target motion information, has then been proposed. The proposed DTFLOP modeling is validated using the parallel grid-based RBE technique with the GPU implementation by comparing the estimated time cost with the actual time cost of the parallel grid-based RBE. The superiority of the proposed parallel grid-based RBE technique over the conventional grid-based RBE technique is investigated through a number of numerical examples.
The belief fusion technique is examined in a simulated target search-and-rescue test; it is observed to retain more information about the target than the conventional observation fusion technique, and eventually leads to better search-and-rescue performance. For the full-field measurement system, a novel parallel DCT full-field measurement technique for measuring the displacement and strain fields on the deformed surface of a structure is proposed. The technique measures the displacement and strain fields by tracking the centroids of dots marked on the deformed surface. It identifies and develops the parallel computation in the image analysis and field estimation processes, which is then implemented on the GPU to accelerate the conventional full-field measurement techniques. The detailed strategy of the GPU implementation is also developed and presented. The corresponding software package, which includes a graphical user interface, and the hardware system, which consists of two digital cameras, LED lights and adjustable support legs to accommodate indoor or outdoor experimental environments, are proposed. The proposed DTFLOP modeling is applied to the parallel DCT full-field measurement technique to estimate its performance, and the close match with the actual performance demonstrates the validity of the DTFLOP modeling. A number of simulated and real experiments, including tensile, compressive and bending experiments in laboratory and outdoor environments, are performed to validate and demonstrate the proposed parallel DCT full-field measurement technique.
Ph. D.
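The cost decomposition described in the abstract (sequential FLOPs on the CPU, parallel FLOPs on the GPU, plus CPU-GPU transfers) can be sketched as a simple additive time estimate. The function name and all rates below are illustrative assumptions, not the dissertation's calibrated DTFLOP model.

```python
# Hypothetical sketch of a DTFLOP-style time-cost estimate: total time =
# sequential FLOPs on the CPU + parallel FLOPs on the GPU + data
# transfer between them. All rates are made-up, not calibrated values.

def dtflop_time(seq_flops, par_flops, bytes_moved,
                cpu_rate=5e9,       # CPU floating-point rate (FLOP/s)
                gpu_rate=500e9,     # GPU floating-point rate (FLOP/s)
                bandwidth=8e9):     # CPU<->GPU memory bandwidth (B/s)
    """Estimated wall-clock time (s) for one model evaluation."""
    return (seq_flops / cpu_rate
            + par_flops / gpu_rate
            + bytes_moved / bandwidth)

# The GPU only pays off once the parallel work dominates the transfer cost:
t_small = dtflop_time(seq_flops=1e6, par_flops=1e7, bytes_moved=1e8)
t_large = dtflop_time(seq_flops=1e6, par_flops=1e12, bytes_moved=1e8)
```

With the small workload the transfer term dominates the estimate; with the large one the GPU compute term does, which is the kind of trade-off such a model makes visible before implementation.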
14

Ahmed, Safayet N. "Adaptive CPU-budget allocation for soft-real-time applications." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52215.

Full text
Abstract:
The focus of this dissertation is adaptive CPU-budget allocation for periodic soft-real-time applications. The presented algorithms are developed in the context of a power-management framework. First, the prediction-based bandwidth scheduler (PBS) is developed. This algorithm is designed to adapt CPU-budget allocations at a faster rate than previous adaptive algorithms. Simulation results are presented to demonstrate that this approach allows for a faster response to under-allocations than previous algorithms. A second algorithm called Two-Stage Prediction (TSP) is presented that improves on the PBS algorithm. Specifically, a more sophisticated algorithm is used to predict execution times, and a stronger guarantee is provided on the timeliness of jobs. Implementation details and experimental results are presented for both the PBS and TSP algorithms. An abstraction called virtual instruction count (VIC) is presented to allow for more efficient budget allocation in power-managed systems. Power-management decisions affect job-execution times; VIC is an abstract measure of computation that allows budget allocations to be made independent of power-management decisions. Implementation details and experimental results are presented for a VIC-based budget mechanism. Finally, a power-management framework called the linear adaptive models based system (LAMbS) is presented. LAMbS is designed to minimize power consumption while honoring budget allocations specified in terms of VIC.
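The general flavor of prediction-based budget allocation can be sketched with an exponentially weighted execution-time predictor plus a safety margin. This is an illustrative stand-in for the idea, not Ahmed's actual PBS or TSP algorithms; the class name, smoothing factor, and margin are all assumptions.

```python
# Illustrative prediction-based CPU-budget allocator: predict the next
# job's execution time with an exponentially weighted moving average
# (EWMA) and grant that prediction plus a safety margin. A stand-in for
# the flavor of PBS/TSP, not the dissertation's algorithms.

class BudgetAllocator:
    def __init__(self, alpha=0.3, margin=1.2, initial=10.0):
        self.alpha = alpha       # EWMA smoothing factor
        self.margin = margin     # over-allocation factor for safety
        self.estimate = initial  # current execution-time estimate (ms)

    def next_budget(self):
        """CPU budget (ms) to grant the next job."""
        return self.estimate * self.margin

    def observe(self, actual):
        """Fold the job's measured execution time into the estimate."""
        self.estimate += self.alpha * (actual - self.estimate)

alloc = BudgetAllocator()
overruns = 0
for actual in [8.0, 9.5, 12.0, 11.0, 10.5]:  # measured job times (ms)
    if actual > alloc.next_budget():
        overruns += 1                         # budget was under-allocated
    alloc.observe(actual)
```

The tension the dissertation addresses is visible even here: a slow predictor under-allocates when execution times jump (the 12.0 ms job overruns), which is why faster adaptation and stronger timeliness guarantees matter.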
15

Moshgbar, Mojgan. "Prediction and real-time compensation of liner wear in cone crushers." Thesis, Loughborough University, 1996. https://dspace.lboro.ac.uk/2134/27362.

Full text
Abstract:
In the comminution industry, cone crushers are widely used for secondary and subsequent stages of size reduction. For a given crusher, the achieved size reduction is governed by the closed-side setting. Hadfield steel is commonly used to line the crushing members to minimize wear. Yet liner wear caused by some rock types can still be excessive. Enlargement of the discharge opening induced by liner wear produces a drift in product size which, if unchecked, can lead to high volumes of re-circulating load. Alteration of the closed-side setting is now commonly achieved via hydraulic means. However, compensation of liner wear still involves plant shutdown and loss of production.
16

Jewell, Chris. "Real-time inference and risk-prediction for notifiable diseases of animals." Thesis, Lancaster University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.536005.

Full text
17

Qaddoum, Kefaya. "Intelligent real-time decision support systems for tomato yield prediction management." Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/58333/.

Full text
Abstract:
This thesis describes the research and development of a decision support system for tomato yield prediction. Greenhouse horticulture, such as tomato growing, offers an interesting test bed for comparing and refining different predictive modelling techniques. The ability to accurately predict future yields, even for as little as days ahead, has considerable commercial value to growers. There are several (measurable) causal variables. Some, such as temperature, are under the grower's control, while others are not. Modern predictive techniques, based on data mining and self-calibrating models, may be able to forecast future yields per unit area of greenhouse better than the biological causal models growers implicitly use now. Over the past few decades, it has been possible to use the recorded daily environmental conditions in a greenhouse to predict future crop yields. Existing models fail to accurately predict the weekly fluctuations of yield, yet predicting future yields is becoming urgently necessary, especially with a changing climate. This research project used data collected during the seasonal tomato life cycle to develop a decision support system that would assist growers to adjust crops to meet demand and to alter marketing strategies. The three main objectives are: firstly, to research and utilize intelligent systems techniques for analysing greenhouse environmental variables in order to identify the variable or variables that most affect yield fluctuations; secondly, to research the use of these techniques for predicting tomato yields and to produce practical rules for growers to use in decision-making; and finally, to combine some existing techniques into a hybrid technique that achieves lower prediction errors and more confident results. A range of intelligent systems (IS) are used to process environmental data, including artificial neural networks (ANNs), genetic algorithms (GAs) and fuzzy logic (FL).
A model providing more accurate yield prediction was developed and tested using industrial data from growers. The author develops and investigates the application of an intelligent decision support system for yield management, and provides an improved prediction model using intelligent systems. Using real-world data, the intelligent system employs a combination of FL, NN and GA. The thesis presents a modified hybrid adaptive neural network with revised adaptive error smoothing, based on a genetic algorithm, to build a learning system for complex problem solving in yield prediction. This system can closely predict the weekly yield values of a tomato crop. The proposed learning system is constructed as an intelligent technique and then further optimized. The method is evaluated using real-world data, and the results show comparatively good accuracy. Use was made of existing algorithms, such as self-organizing maps (SOMs) and principal component analysis (PCA), to analyse the datasets and identify the critical input variables. The primary conclusion of this thesis is that intelligent systems, such as artificial neural networks, genetic algorithms and fuzzy inference systems, can be successfully applied to tomato yield prediction; the resulting predictions were better and hence support growers' decisions. All of these techniques are benchmarked against published existing models, such as GNMM and RBF.
18

Teal, Paul D., and p. teal@irl cri nz. "Real Time Characterisation of the Mobile Multipath Channel." The Australian National University. Research School of Information Sciences and Engineering, 2002. http://thesis.anu.edu.au./public/adt-ANU20020722.085502.

Full text
Abstract:
In this thesis a new approach for characterisation of digital mobile radio channels is investigated. The new approach is based on recognition of the fact that while the fading which is characteristic of the mobile radio channel is very rapid, the processes underlying this fading may vary much more slowly. The comparative stability of these underlying processes has not been exploited in system designs to date. Channel models are proposed which take account of the stability of the channel. Estimators for the parameters of the models are proposed, and their performance is analysed theoretically and by simulation and measurement. Bounds are derived for the extent to which the mobile channel can be predicted, and the critical factors which define these bounds are identified. Two main applications arise for these channel models. The first is the possibility of prediction of the overall system performance. This may be used to avoid channel fading (for instance by change of frequency), or compensate for it (by change of the signal rate or by power control). The second application is in channel equalisation. An equaliser based on a model which has parameters varying only very slowly can offer improved performance especially in the case of channels which appear to be varying so rapidly that the convergence rate of an equaliser based on the conventional model is not adequate. The first of these applications is explored, and a relationship is derived between the channel impulse response and the performance of a broadband system.
19

Kolhatkar, Dhanvin. "Real-Time Instance and Semantic Segmentation Using Deep Learning." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40616.

Full text
Abstract:
In this thesis, we explore the use of Convolutional Neural Networks for semantic and instance segmentation, with a focus on studying the application of existing methods with cheaper neural networks. We modify a fast object detection architecture for the instance segmentation task, and study the concepts behind these modifications both in the simpler context of semantic segmentation and the more difficult context of instance segmentation. Various instance segmentation branch architectures are implemented in parallel with a box prediction branch, using its results to crop each instance's features. We negate the imprecision of the final box predictions and eliminate the need for bounding box alignment by using an enlarged bounding box for cropping. We report and study the performance, advantages, and disadvantages of each. We achieve fast speeds with all of our methods.
20

Liu, Jia. "Rainfall-runoff modelling and numerical weather prediction for real-time flood forecasting." Thesis, University of Bristol, 2011. http://hdl.handle.net/1983/87375e5e-4186-4707-b7c6-465617dc1ac1.

Full text
Abstract:
This thesis focuses on integrating rainfall-runoff modelling with a mesoscale numerical weather prediction (NWP) model to make real-time flood forecasts at the catchment scale. The studies are based on catchments in Southwest England, with a main focus on the Brue catchment, which has an area of 135 km² and is covered by a dense network of 49 rain gauges and a C-band weather radar. The studies comprise three main parts. Firstly, two data mining issues are investigated to enable a better calibrated rainfall-runoff model for flood forecasting. The Probability Distributed Model (PDM), which is widely used in the UK, is chosen. One issue is the selection of appropriate data for model calibration with regard to data length and duration. It is found that the information quality of the calibration data is more important than the data length in determining the model performance after calibration. An index named the Information Cost Function (ICF), developed on the discrete wavelet decomposition, is found to be efficient in identifying the most appropriate calibration data scenario. The other issue is the impact of the temporal resolution of the model input data when using the rainfall-runoff model for real-time forecasting. Through case studies and spectral analyses, the optimal choice of the data time interval is found to have a positive relation with the forecast lead time: the longer the lead time, the larger the time interval should be. This positive relation is also found to be more obvious in catchments with a longer concentration time. A hypothetical curve is finally derived to describe the general impact of the data time interval in real-time forecasting. The development of NWP models, together with weather radar, allows rainfall forecasts to be made at high resolution in time and space.
In the second part, numerical experiments for improving the NWP rainfall forecasts are carried out based on the latest-generation mesoscale NWP model, the Weather Research & Forecasting (WRF) model. The sensitivity of the WRF performance is first investigated for different domain configurations and various storm types, with regard to the evenness of the rainfall distribution in time and space. Meanwhile, a two-dimensional verification scheme is developed to quantitatively evaluate the WRF performance in the temporal and spatial dimensions. Following that, the WRF model is run in cycling mode in tandem with the three-dimensional variational assimilation technique for continuous assimilation of radar reflectivity and traditional surface/upper-air observations. The WRF model shows its best performance, in producing both rainfall simulations and rainfall forecasts improved through data assimilation, for storm events with two-dimensional evenness of the rainfall distribution; for highly convective storms with rainfall concentrated in a small area and a short time period, the results are not ideal and much work remains to be done. Finally, the rainfall-runoff model PDM and the rainfall forecasting results from WRF are integrated, together with a real-time updating scheme, the Auto-Regressive and Moving Average (ARMA) model, to constitute a flood forecasting system. The system is tested to be reliable in a small catchment such as the Brue, and the use of the NWP rainfall products shows its advantages for long lead-time forecasting beyond the catchment concentration time. Keywords: rainfall-runoff modelling, numerical weather prediction, flood forecasting, real-time updating, spectral analysis, data assimilation, weather radar.
21

Ma, Rui. "Solid oxide fuel cell modeling and lifetime prediction for real-time simulations." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCA018.

Full text
Abstract:
This thesis first presents a multi-physical model of a 2D reversible tubular solid oxide cell. The developed model can represent both solid oxide electrolysis cell (SOEC) and solid oxide fuel cell (SOFC) operation. By taking into account the electrochemical, fluidic and thermal physical phenomena, the model can accurately describe the multi-physical effects inside a cell, in both fuel cell and electrolysis operation, over the entire working range of cell current and temperature. In addition, an iterative solver is proposed to solve the 2D distribution of physical quantities along the tubular cell. The reversible solid oxide cell model is then validated experimentally in both SOEC and SOFC configurations under different species partial pressures, operating temperatures and current densities. Meanwhile, a control-oriented syngas fuel cell model that includes both hydrogen and carbon monoxide co-oxidation phenomena is also proposed. The developed syngas model is validated experimentally under different operating conditions, covering different reaction temperatures, species partial pressures and the entire working range of current densities. The developed models can be used in embedded applications such as real-time simulation, which can help to design and test control and online diagnostic strategies for fuel cell power generation systems in industrial applications. Real-time simulation is important for fuel cell online diagnostics and hardware-in-the-loop (HIL) tests before industrial application. However, it is hard to implement multi-dimensional, multi-physical fuel cell models in real time due to the numerical stiffness of the models. Thus, the numerical stiffness of the tubular solid oxide fuel cell (SOFC) real-time model is analyzed to identify the perturbation ranges related to the fuel cell electrochemical, fluidic and thermal domains.
Some commonly used ordinary differential equation (ODE) solvers are then tested for real-time simulation. Finally, a novel stiff ODE solver is proposed to improve stability and reduce the execution time of the multi-dimensional real-time fuel cell model. To verify the proposed model and ODE solver, real-time simulation experiments are carried out on a common embedded real-time platform. The experimental results show that the execution speed satisfies the requirements of real-time simulation; the stability of the solver under strong stiffness and the high accuracy of the model are also validated. Fuel cells are vulnerable to impurities in the hydrogen and to operating conditions, which can cause degradation of the output performance over time during operation. The prediction of this performance degradation has therefore drawn attention lately and is critical for the reliability of the fuel cell system. Thus, an innovative degradation prediction method using a Grid Long Short-Term Memory (G-LSTM) recurrent neural network (RNN) is proposed. LSTM can effectively avoid the gradient exploding and vanishing problems of conventional RNN architectures, which makes it suitable for prediction over long time periods. By paralleling and combining LSTM cells, the G-LSTM architecture can further optimize the prediction accuracy for PEMFC performance degradation. The proposed prediction model is experimentally validated on three different types of PEMFC: 1.2 kW NEXA Ballard fuel cells, 1 kW Proton Motor PM200 fuel cells and 25 kW Proton Motor PM200 fuel cells. The results indicate that the proposed G-LSTM network can predict fuel cell degradation in a precise way and can be efficiently applied to predict and optimize the lifetime of fuel cells in transportation applications.
22

Wu, Wenda. "Machine Learning Based Fault Prediction for Real-time Scheduling on Shop Floor." Thesis, KTH, Industriell produktion, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-245221.

Full text
Abstract:
Nowadays, scheduling on a shop floor is only focused on the availability of resources, and potential faults cannot be predicted. A big-data-analytics-based fault prediction was proposed for use in scheduling, which requires real-time decision making. To select a proper machine learning algorithm for real-time scheduling, this paper first proposes a data generation method in terms of pattern complexity and scale. Three levels of depth, an index of data complexity, and three levels of data attributes, an index of data scale, are used to obtain the data sets. Based on those data sets, ten commonly used machine learning algorithms are trained, with their parameters adjusted to achieve high accuracy. The testing results, covering three indexes (training time, testing time and prediction accuracy), are used to evaluate the algorithms. The results of the tests show that, when working with data of simple structure and small scale, typical machine learning methods like the Naive Bayes classifier and SVM are good enough, with fast training and high accuracy. When dealing with complex data on a large scale, deep learning methods like CNN and DBN outperform all other methods.
23

Kommaraju, Mallik. "Predictor development for controlling real-time applications over the Internet." Texas A&M University, 2005. http://hdl.handle.net/1969.1/4813.

Full text
Abstract:
Over the past decade there has been a growing demand for interactive multimedia applications deployed over public IP networks. To achieve acceptable Quality of Service (QoS) without significantly modifying the existing infrastructure, end-to-end applications need to optimize their behavior and adapt according to network characteristics. Most existing application optimization techniques are based on reactive strategies, i.e. reacting to occurrences of congestion. We propose the use of predictive control to address the problem in an anticipatory manner. This research deals with developing models to predict end-to-end single-flow characteristics of Wide Area Networks (WANs). A novel signal, in the form of single-flow packet accumulation, is proposed for feedback purposes. This thesis presents a variety of effective predictors for the above signal using Auto-Regressive (AR) models, Radial Basis Functions (RBF) and Sparse Basis Functions (SBF). The study consists of three sections. We first develop time-series models to predict the accumulation signal. Since encoder bit-rate is the most logical and generic control input, a statistical analysis is conducted to analyze the effect of input bit-rate on end-to-end delay and the accumulation signal. Finally, models are developed using this bit-rate as an input to predict the resulting accumulation signal. The predictors are evaluated based on Noise-to-Signal Ratio (NSR) along with their accuracy at increasing accumulation levels. Among the time-series models, RBF gave the best NSR, closely followed by the AR models. Analysis based on accuracy at increasing accumulation levels showed AR to be better in some cases. The study of the effect of bit-rate revealed that bit-rate may not be a good control input on all paths. Models such as Auto-Regressive with Exogenous input (ARX) and RBF were used to predict the accumulation signal with bit-rate as a modeling input. ARX and RBF models were found to give comparable accuracy, with RBF being slightly better.
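An auto-regressive predictor of the kind compared in the thesis can be sketched in a few lines: fit AR(2) coefficients by least squares over past samples of a signal (such as the packet-accumulation signal), then predict one step ahead. This is an illustrative sketch of the technique, not the thesis's tuned models.

```python
# Illustrative AR(2) one-step-ahead predictor, fit by least squares over
# past samples of a signal (e.g. a packet-accumulation signal).

def fit_ar2(x):
    """Least-squares AR(2) coefficients (a1, a2) so that
    x[t] ~ a1*x[t-1] + a2*x[t-2]."""
    # Accumulate the 2x2 normal equations A @ [a1, a2] = b
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        s11 += x[t - 1] * x[t - 1]
        s12 += x[t - 1] * x[t - 2]
        s22 += x[t - 2] * x[t - 2]
        b1 += x[t] * x[t - 1]
        b2 += x[t] * x[t - 2]
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det,
            (b2 * s11 - b1 * s12) / det)

def predict_next(x, a1, a2):
    """One-step-ahead prediction from the two most recent samples."""
    return a1 * x[-1] + a2 * x[-2]

# A noiseless AR(2) signal is recovered exactly by the fit:
x = [1.0, 0.5]
for _ in range(30):
    x.append(1.2 * x[-1] - 0.4 * x[-2])   # true coefficients (1.2, -0.4)
a1, a2 = fit_ar2(x)
```

On real traffic the fit would be recomputed over a sliding window, and the predicted accumulation fed back to the encoder as the anticipatory control signal the thesis describes.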
24

Han, Mei. "Studies of Dynamic Bandwidth Allocation for Real-Time VBR Video Applications." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/32027.

Full text
Abstract:
Variable bit rate (VBR) compressed video traffic, such as live video news, is expected to account for a large portion of traffic in future integrated networks. This real-time video traffic has strict delay and loss requirements and exhibits burstiness over multiple time scales, thus posing a challenge for network resource allocation and management. The renegotiated VBR (R-VBR) scheme, which dynamically allocates resources to capture the burstiness of VBR traffic, substantially increases network utilization while satisfying any desired quality of service (QoS) requirements. This thesis focuses on the performance evaluation of R-VBR in the context of different R-VBR approaches. The renegotiated deterministic VBR (RED-VBR) scheme, proposed by Dr. H. Zhang et al., is thoroughly investigated in this research using a variety of real-world videos of both high and low quality. A new Virtual-Queue-Based RED-VBR is then developed to reduce the implementation complexity of RED-VBR. Simulation results show that this approach obtains network performance comparable to RED-VBR: relatively high network utilization and a very low drop rate. A Prediction-Based R-VBR based on a multiresolution-learning neural-network traffic predictor, developed by Dr. Y. Liang, is studied, and the binary exponential backoff (BEB) algorithm is introduced to efficiently decrease the renegotiation frequency. Compared with RED-VBR, Prediction-Based R-VBR obtains significantly improved network utilization at a small expense in drop rate. This work evaluates the advantages and disadvantages of several R-VBR approaches and thus provides a clearer overall picture of their performance, which can be used as the basis for choosing an appropriate R-VBR scheme to optimize network utilization while enabling QoS for application tasks.
Master of Science
25

Roach, Jeffrey Wayne. "Predicting Realistic Standing Postures in a Real-Time Environment." NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/291.

Full text
Abstract:
Procedural human motion generation is still an open area of research. Most research into procedural human motion focuses on two problem areas: the realism of the generated motion and the computation time required to generate the motion. Realism is a problem because humans are very adept at spotting the subtle nuances of human motion, so computer-generated motion tends to look mechanical. Computation time is a problem because the complexity of the motion generation algorithms results in lengthy processing times for greater levels of realism. The balancing human problem poses the question of how to procedurally generate, in real time, realistic standing poses of an articulated human body. This report presents the balancing human algorithm, which addresses both concerns: realism and computation time. Realism was addressed by integrating two existing algorithms: one addressed the physics of the human motion, and the second addressed the prediction of the next pose in the animation sequence. Computation time was addressed by identifying techniques to simplify or constrain the algorithms so that the real-time goal can be met. The research methodology involved three tasks: developing and implementing the balancing human algorithm, devising a real-time simulation graphics engine, and then evaluating the algorithm with the engine. An object-oriented approach was used to model the balancing human as an articulated body consisting of systems of rigid bodies connected together with joints. The attributes and operations of the object-oriented model were derived from existing published algorithms.
APA, Harvard, Vancouver, ISO, and other styles
26

Fan, Zheyu Jerry. "Kalman Filter Based Approach : Real-time Control-based Human Motion Prediction in Teleoperation." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189210.

Full text
Abstract:
This work investigates the performance of two Kalman Filter algorithms, namely the Linear Kalman Filter and the Extended Kalman Filter, for control-based human motion prediction in real-time teleoperation. The Kalman Filter algorithm has been widely used in research areas such as motion tracking and GPS navigation; however, its potential for human motion prediction is rarely mentioned. Combined with a known issue, the delay in today's teleoperation services, this motivated the author to build a prototype of a simple teleoperation model based on the Kalman Filter algorithm, with the aim of eliminating the desynchronization between the user's inputs and the visual frames, where all data were transferred over the network. In the first part of the thesis, the two types of Kalman Filter algorithm are applied to the prototype to predict the movement of the robotic arm based on the user's motion applied to a haptic device. Comparisons of the performance of the two Kalman Filters are also presented. In the second part, the thesis focuses on optimizing the motion prediction based on the results of Kalman filtering, using a smoothing algorithm. The last part of the thesis examines the limitations of the prototype, such as how much delay is acceptable and how fast the Phantom haptic device can move, while still obtaining reasonable predictions with an acceptable error rate. The results show that the Extended Kalman Filter achieved better motion prediction than the Linear Kalman Filter in the experiments. The desynchronization issue was effectively mitigated by applying the Kalman Filter algorithm to both the state and measurement models when the latency was set below 200 milliseconds. The additional smoothing algorithm further increases the accuracy. More importantly, it also solves the shaking issue in the visual frames of the robotic arm, which is caused by the oscillatory behaviour of the Kalman Filter algorithm.
Furthermore, the optimization method effectively synchronizes the timing of when the robotic arm touches the interactable object in the prediction. The method used in this research can serve as a good reference for future research in control-based human motion tracking and prediction.
This work focuses on investigating the performance of two Kalman Filter algorithms, namely the Linear Kalman Filter and the Extended Kalman Filter, used for real-time prediction of control-based human motion in teleoperation. These Kalman Filter algorithms have been widely used in the research areas of motion tracking and GPS navigation; however, their potential for predicting human motion is rarely mentioned. Combining this with the known problem of delay in today's teleoperation services, the author decided to build a prototype of a simple teleoperation model based on the Kalman Filter algorithm, with the aim of eliminating the desynchronization between the user's input signals and the visual information, where all data were transferred over the network. In the first part of the thesis, both Kalman Filter algorithms are applied to the prototype to predict the movement of the robotic arm based on the user's motion applied to a haptic device. Comparisons of the performance of the Kalman Filter algorithms are also presented. In the second part, the thesis focuses on optimizing the motion predictions, based on the results of Kalman filtering, using a smoothing algorithm. The last part of the thesis examines the limitations of the prototype, such as how large the delays can be and how fast the haptic device can move while still obtaining reasonable predictions with an acceptable error rate. The results show that the Extended Kalman Filter performed better in motion prediction than the Linear Kalman Filter during the experiments. The desynchronization problem was effectively improved by applying the Kalman Filter algorithms to both the state and measurement models when the latency was set below 200 milliseconds. The additional smoothing algorithm further increases the accuracy.
This algorithm also solves the shaking problem in the visual frames of the robotic arm caused by the oscillatory property of the Kalman Filter algorithm. Furthermore, the optimization method effectively synchronizes the timing of when the robotic arm touches the object in the predictions. The method used in this research can serve as a good reference for future research in control-based motion tracking and prediction.
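As a rough illustration of the approach the abstract describes (filter the noisy input stream, then extrapolate the state across the network delay), here is a minimal 1-D constant-velocity linear Kalman filter. The function names and noise parameters are hypothetical; the thesis's actual state and measurement models are richer:

```python
# Minimal 1-D constant-velocity linear Kalman filter (a sketch, not the thesis
# implementation): the state is [position, velocity]. To mask a network delay
# of d steps, the current estimate is propagated d steps ahead with the motion
# model before rendering.

def kf_step(x, P, z, dt=1.0, q=1e-3, r=0.1):
    """One predict/update cycle; x = [pos, vel], P = 2x2 covariance (row-major)."""
    # Predict: x' = F x with F = [[1, dt], [0, 1]]; P' = F P F^T + Q
    xp = [x[0] + dt * x[1], x[1]]
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    # Update with position measurement z (H = [1, 0])
    s = p00 + r
    k0, k1 = p00 / s, p10 / s
    y = z - xp[0]
    x_new = [xp[0] + k0 * y, xp[1] + k1 * y]
    P_new = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
    return x_new, P_new

def predict_ahead(x, steps, dt=1.0):
    """Extrapolate the filtered state `steps` intervals into the future."""
    return x[0] + steps * dt * x[1]
```

After filtering a few position samples, `predict_ahead(x, d)` gives a pose estimate d intervals into the future, which is what lets the rendered frame stay synchronized with delayed input.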
APA, Harvard, Vancouver, ISO, and other styles
27

Shahidi, Zandi Ali. "Scalp EEG quantitative analysis : automated real-time detection and prediction of epileptic seizures." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42748.

Full text
Abstract:
As a chronic neurological disorder, epilepsy is associated with recurrent, unprovoked epileptic seizures resulting from a sudden disturbance of brain function. Long-term monitoring of epileptic patients' Electroencephalogram (EEG) is often needed for diagnosis of seizures, which is tedious, expensive, and time-consuming. Also, clinical staff may not identify the seizure early enough to determine the semiology at the onset. This motivates EEG-based automated real-time detection of seizures. Apart from their possible severe side effects, common treatments for epilepsy (medication and surgery) fail to satisfactorily control seizures in ~25% of patients. EEG-based seizure prediction systems would significantly enhance the chance of controlling/aborting seizures and improve safety and quality of life for patients. This thesis proposes novel EEG-based patient-specific techniques for real-time detection and prediction of epileptic seizures and also presents a pilot study of scalp EEGs acquired in a unique low-noise underground environment. The proposed detection method is based on the wavelet packet analysis of EEG. A novel index, termed the combined seizure index, is introduced which is sensitive to both the rhythmicity and relative energy of the EEG in a given channel and considers the consistency among different channels at the same time. This index is monitored by a cumulative sum procedure in each channel. This channel-based information is then used to generate the final seizure alarm. In this thesis, a prediction method based on a variational Bayesian Gaussian mixture model of the EEG positive zero-crossing intervals is proposed. Novel indices of similarity and dissimilarity are introduced to compare current observations with the preictal and interictal references and monitor the changes for each channel. Information from individual channels is finally combined to trigger an alarm for upcoming seizures. These methods are evaluated using scalp EEG data. 
The prediction method is also tested against a random predictor. Finally, this thesis investigates the capability of an ultra-shielded underground capsule for acquiring clean EEG. Results demonstrate the potential of the capsule for novel EEG studies, including establishing novel low-noise EEG benchmarks, which could be helpful in better understanding brain function and the mechanisms driving various brain disorders, such as epilepsy.
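The per-channel monitoring step can be illustrated with a standard one-sided cumulative sum (CUSUM) detector. This is a generic sketch: the actual index, drift `k`, and threshold `h` in the thesis are patient-specific and tuned:

```python
# Generic one-sided CUSUM change detector of the kind applied per channel to a
# seizure index. The drift k and threshold h here are hypothetical values.

def cusum_alarm(samples, k=0.5, h=5.0):
    """Return the index of the first alarm, or None.

    g accumulates positive deviations above the allowance k and resets at
    zero, so a sustained rise in the index triggers an alarm while brief
    fluctuations are ignored."""
    g = 0.0
    for i, s in enumerate(samples):
        g = max(0.0, g + s - k)
        if g > h:
            return i
    return None
```

When the accumulated positive deviation of a channel's index exceeds `h`, that channel raises a flag; the thesis then combines such channel-based information to generate the final seizure alarm.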
APA, Harvard, Vancouver, ISO, and other styles
28

Gwatiringa, Tinashe G. "Sea state estimation from inertial platform data for real-time ocean wave prediction." Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29496.

Full text
Abstract:
Ocean observation is vital in understanding how the oceans contribute toward climate change and other effects. This is one of many undertakings requiring a persistent presence in the oceans. These maritime activities are mainly carried out on large research vessels chartered for weeks at a time, which can be extremely costly. In addition, the data obtained when using these vessels are only short snapshots of the continual processes that occur. Recently, there has been a drive toward using Unmanned Surface Vehicles (USVs) and Unmanned Underwater Vehicles (UUVs), which can be deployed at a fraction of the cost, and provide greatly improved spatio-temporal data. The wave glider (WG) is one such autonomous marine robot used for persistent ocean research and other maritime activities, and forms the focus of this study. The WG is a low power USV/UUV hybrid that harnesses wave energy for propulsion, and has a small solar- and battery-powered thruster, and a rudder for steering. Due to effects of waves, currents, and other disturbances, the platform tends to veer off its desired path. Additionally, local sea state information is not taken into consideration while manoeuvring, hence energy extraction from ocean waves is not optimal. More sophisticated navigation algorithms operating on a per-wave strategy may improve accuracy along a specified path and maximise the energy uptake from the waves. To realise these improvements requires prediction of local wave behaviour. If one can predict what the wave field will be a short time in the future, then possible control action can be taken to efficiently navigate in the environment. Inertial measurements and wave modelling have been used to improve localisation of the WG platform directly, and predict the platform’s velocity. However there is limited work in the context of WG navigation. Hence the problem this dissertation aims to solve is the estimation and subsequent prediction of local wave behaviour. 
This work proposes a novel approach to estimate the sea state and hence predict short-term, local wave behaviour from inertial measurements on a slow-moving marine platform such as the WG. A Kalman filtering strategy, consisting of a phase-locked loop and a filter-based sea state estimator, is used to generate local height and angle-of-arrival estimates. This method offers an improvement over existing Fast Fourier Transform methods, as it does not require long time-series data to produce results, and it enables the prediction of wave behaviour a short time into the future. The ideas are tested in simulation by generating wind waves using ocean wave models, such as the Pierson-Moskowitz model, and a dynamic model of the WG platform. In addition, a small-scale lab experiment is carried out to verify the performance of the sea-state estimator developed. Preliminary results indicate that relative wave height can be estimated on board a marine platform using only inertial sensors.
APA, Harvard, Vancouver, ISO, and other styles
29

Darbyshire, Karl James. "Real-time pump scheduling through model identification, utilising neural and hybrid prediction techniques." Thesis, Leeds Beckett University, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.665992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Jansson, Daniel, and Rasmus Blomstrand. "REAL-TIME PREDICTION OF SHIMS DIMENSIONS IN POWER TRANSFER UNITS USING MACHINE LEARNING." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-45615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Wei, Zhengzhe. "H.264 Baseline Real-time High Definition Encoder on CELL." Thesis, Linköping University, Computer Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-53678.

Full text
Abstract:
In this thesis, an H.264 baseline high-definition encoder is implemented on the CELL processor. The target video sequence for the encoder is YUV420 1080p at 30 frames per second. To meet real-time requirements, a system architecture that reduces DMA requests is designed for large memory accesses. Several key computing kernels (intra-frame encoding, motion-estimation search, and entropy coding) are designed and ported to the CELL's processing units. A main challenge is to find a good tradeoff between DMA latency and processing time. The SPE's limited 256 KB of on-chip memory has to be organized efficiently for SIMD processing. CAVLC is performed in non-real-time on the PPE.

The experimental results show that our encoder is able to encode I frames in high quality and encode common 1080p video sequences in real time. Using five SPEs and a 63 KB executable code size, 20.72M cycles are needed per SPE to encode one P-frame partition. The average PSNR of P frames increases by a maximum of 1.52%. For fast-motion video sequences, a 64x64 search range achieves better frame quality than a 16x16 search range while requiring less than twice its computing cycles. Our results also demonstrate that more of the CELL processor's potential can be utilized in multimedia computing.

The H.264 main profile will be implemented in future phases of this encoder project. Since the platform we use is the IBM Full-System Simulator, DMA performance on a real CELL processor is an interesting issue. Real-time entropy coding is another challenge for the CELL.
APA, Harvard, Vancouver, ISO, and other styles
32

Ghafir, Ibrahim. "A machine-learning-based system for real-time advanced persistent threat detection and prediction." Thesis, Manchester Metropolitan University, 2017. http://e-space.mmu.ac.uk/618896/.

Full text
Abstract:
It is widely cited that cyber attacks have become more prevalent on a global scale. In light of this, the cybercrime industry has been established for various purposes such as political, economic and socio-cultural aims. Such attacks can be used as a harmful weapon and cyberspace is often cited as a battlefield. One of the most serious types of cyber attacks is the Advanced Persistent Threat (APT), which is a new and more complex version of multi-step attack. The main aim of the APT attack is espionage and data exfiltration, which has the potential to cause significant damage and substantial financial loss. This research aims to develop a novel system to detect and predict APT attacks. A Machine-Learning-based APT detection system, called MLAPT, is proposed. MLAPT runs through three main phases: (1) Threat detection, in which eight methods are developed to detect different techniques used during the various APT steps. The implementation and validation of these methods with real traffic is a significant contribution to the current body of research; (2) Alert correlation, in which a correlation framework is designed to link the outputs of the detection methods, aiming to find alerts that could be related and belong to one APT scenario; and (3) Attack prediction, in which a machine-learning-based prediction module is proposed based on the correlation framework output, to be used by the network security team to determine the probability of the early alerts to develop a complete APT attack. The correlation framework and prediction module are two other major contributions in this work. MLAPT is experimentally evaluated and the presented system is able to predict APT in its early steps with a prediction accuracy of 84.8%.
APA, Harvard, Vancouver, ISO, and other styles
33

Lauer, Michelle(Michelle F. ). "Real-time household energy prediction : approaches and applications for a blockchain-backed smart grid." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121676.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 91-94).
In the current era of Internet of Things (IoT) devices, household solar panels, and increasingly affordable local energy storage, energy grid systems are facing a new set of challenges that they were not originally designed to support. Energy systems of the near future must be capable of supporting these new technologies, but new technology can also be leveraged to improve reliability and efficiency overall. A major source of potential improvements comes from the increase of connected devices that are capable of dynamically adjusting their behavior, and offer new data that can be used for optimization and prediction. Energy predictions are used today at the bulk power system level to ensure demand is met through appropriate resource allocation. As energy systems become more responsive, prediction will be important at more granular system levels and timescales.
Enabled by the rise in available data, existing research has shown some machine learning models to be superior to traditional statistical models in predicting long-term aggregate usage. However, these models tend to be computationally expensive; if machine learning prediction models are to be used at short timescales and performed close to the end nodes, there is a need for more efficient models. Additionally, most machine learning models today do not take advantage of the known and studied properties of the underlying energy data. This thesis explores the circumstances under which machine learning can be used to make predictions more accurately than existing methods, and how machine learning and statistical methods can serve to complement each other (specifically for short timescales at the household level).
We find that basic machine learning models outperform other baseline and statistical models by using energy usage trends observed from statistical methods to better engineer the input features. For the increasingly distributed energy systems that these predictive models aim to support, the distributed nature of blockchain technology has been proposed as a good match for managing such systems. As an example of one possible distributed management implementation, this thesis presents a novel blockchain-enabled architecture that provides privacy for users, information security through improved household-level prediction, and takes into consideration the security vulnerabilities and computational constraints of the participants.
by Michelle Lauer.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
34

Adams, Kevin Page. "An Approach to Real Time Adaptive Decision Making in Dynamic Distributed Systems." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/25943.

Full text
Abstract:
Efficient operation of a dynamic system requires (near) optimal real-time control decisions. Those decisions depend on a set of control parameters that change over time. Very often, the optimal decision can be made only with the knowledge of future values of control parameters. As a consequence, the decision process is heuristic in nature. The optimal decision can be determined only after the fact, once the uncertainty is removed. For some types of dynamic systems, the heuristic approach can be very effective. The basic premise is that the future values of control parameters can be predicted with sufficient accuracy. We can either predict those values based on a good model of the system or based on historical data. In many cases, the good model is not available. In that case, prediction using historical data is the only option. It is necessary to detect similarities with the current situation and extrapolate future values. In other words, we need to (quickly) identify patterns in historical data that match the current data pattern. The low sensitivity of the optimal solution is critical. Small variations in data patterns should minimally affect the optimal solution. Resource allocation problems and other 'discrete decision systems' are good examples of such systems. The main contribution of this work is a novel heuristic methodology that uses neural networks for classifying, learning and detecting changing patterns, as well as making (near) real-time decisions. We improve on existing approaches by providing a real-time adaptive approach that takes into account changes in system behavior with minimal operational delay without the need for an accurate model. The methodology is validated by extensive simulation and practical measurements. Two metrics are proposed to quantify the quality of control decisions as well as a comparison to the optimal solution.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
35

Qin, Xiao. "Traffic flow modeling with real-time data for on-line network traffic estimation and prediction." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3628.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Civil Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
36

Park, YongWoo. "Observation of phytoplankton multiplication processes and real-time prediction of its blooming in Tanabe Bay." 京都大学 (Kyoto University), 2003. http://hdl.handle.net/2433/148523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Suryo, Eko Andi. "Real - time prediction of rainfall induced instability of residual soil slopes associated with deep cracks." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/63775/1/Eko_Andi_Suryo_Thesis.pdf.

Full text
Abstract:
The early warning based on real-time prediction of rain-induced instability of natural residual slopes helps to minimise human casualties due to such slope failures. Slope instability prediction is complicated, as it is influenced by many factors, including soil properties, soil behaviour, slope geometry, and the location and size of deep cracks in the slope. These deep cracks can facilitate rainwater infiltration into the deep soil layers and reduce the unsaturated shear strength of residual soil. Subsequently, it can form a slip surface, triggering a landslide even in partially saturated soil slopes. Although past research has shown the effects of surface cracks on soil stability, research examining the influence of deep cracks on soil stability is very limited. This study aimed to develop methodologies for predicting the real-time rain-induced instability of natural residual soil slopes with deep cracks. The results can be used to warn against potential rain-induced slope failures. The literature review conducted on rain-induced slope instability of unsaturated residual soil associated with soil cracks reveals that only limited studies have been done in the following areas related to this topic:
- Methods for detecting deep cracks in residual soil slopes.
- Practical application of unsaturated soil theory in slope stability analysis.
- Mechanistic methods for real-time prediction of rain-induced residual soil slope instability in critical slopes with deep cracks.
Two natural residual soil slopes at Jombok Village, Ngantang City, Indonesia, which are located near a residential area, were investigated to obtain the parameters required for the stability analysis of the slope. A survey first identified all related field geometrical information, including slope, roads, rivers, buildings, and boundaries of the slope. Second, the electrical resistivity tomography (ERT) method was used on the slope to identify the location and geometrical characteristics of deep cracks.
The two ERT array models employed in this research are Dipole-dipole and Azimuthal. Next, bore-hole tests were conducted at different locations in the slope to identify soil layers and to collect undisturbed soil samples for laboratory measurement of the soil parameters required for the stability analysis. At the same bore-hole locations, the Standard Penetration Test (SPT) was undertaken. Undisturbed soil samples taken from the bore-holes were tested in a laboratory to determine the variation of the following soil properties with depth:
- Classification and physical properties such as grain size distribution, Atterberg limits, water content, dry density, and specific gravity.
- Saturated and unsaturated shear strength properties, using a direct shear apparatus.
- Soil water characteristic curves (SWCC), using the filter paper method.
- Saturated hydraulic conductivity.
The following three methods were used to detect and simulate the location and orientation of cracks in the investigated slope:
(1) The electrical resistivity distribution of the sub-soil obtained from ERT.
(2) The profile of classification and physical properties of the soil, based on laboratory testing of soil samples collected from bore-holes and visual observations of the cracks on the slope surface.
(3) The results of stress distribution obtained from 2D dynamic analysis of the slope using QUAKE/W software, together with the laboratory-measured soil parameters and earthquake records of the area. It was assumed that the deep crack in the slope under investigation was generated by earthquakes.
A good agreement was obtained when comparing the location and the orientation of the cracks detected by Method-1 and Method-2. However, the simulated cracks in Method-3 were not in good agreement with the output of Method-1 and Method-2. This may have been due to the material properties used and the assumptions made for the analysis.
From Method-1 and Method-2, it can be concluded that the ERT method can be used to detect the location and orientation of a crack in a soil slope, when the ERT is conducted in very dry or very wet soil conditions. In this study, the cracks detected by the ERT were used for stability analysis of the slope. The stability of the slope was determined using the factor of safety (FOS) of a critical slip surface obtained by SLOPE/W using the limit equilibrium method. Pore-water pressure values for the stability analysis were obtained by coupling the transient seepage analysis of the slope using finite-element-based software called SEEP/W. A parametric study conducted on the stability of the investigated slope revealed that the existence of deep cracks and their location in the soil slope are critical for its stability. The following two steps are proposed to predict the rain-induced instability of a residual soil slope with cracks:
(a) Step-1: The transient stability analysis of the slope is conducted from the date of the investigation (initial conditions are based on the investigation) to the preferred (current) date, using measured rainfall data. Then, the stability analyses are continued for the next 12 months using predicted annual rainfall based on the previous five years' rainfall data for the area.
(b) Step-2: The stability of the slope is calculated in real time using real-time measured rainfall. In this calculation, rainfall is predicted for the next hour or 24 hours, and the stability of the slope is calculated one hour or 24 hours in advance using real-time rainfall data.
If the Step-1 analysis shows critical stability for the forthcoming year, it is recommended that Step-2 be used for more accurate warning against the future failure of the slope.
In this research, the application of Step-1 to the investigated slope (Slope-1) showed that its stability was not approaching a critical value during 2012 (up to 31st December 2012); therefore, the application of Step-2 was not necessary for that year. A case study (Slope-2) was used to verify the applicability of the complete proposed predictive method. A landslide event at Slope-2 occurred on 31st October 2010. Following Step-1, the transient seepage and stability analyses of the slope, using data obtained from field tests such as bore-holes, SPT, ERT, and laboratory tests, were conducted on 12th June 2010 and found the slope to be in a critical condition on that date. This showed that the application of Step-2 could have predicted this failure by giving sufficient warning time.
APA, Harvard, Vancouver, ISO, and other styles
38

Dilmore, Jeremy Harvey. "IMPLEMENTATION STRATEGIES FOR REAL-TIME TRAFFIC SAFETY IMPROVEMENTS ON URBAN FREEWAYS." Master's thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3254.

Full text
Abstract:
This research evaluates Intelligent Transportation System (ITS) implementation strategies to improve the safety of a freeway once the potential for a crash is detected. Among these strategies are Variable Speed Limits (VSL) and ramp metering. VSL are ITS devices commonly used to calm traffic in an attempt to relieve congestion and enhance throughput; with proper use, VSL can be more cost-effective than adding lanes. Beyond maximizing the capacity of a roadway, VSL also have the potential to improve traffic safety. Through multiple microscopic traffic simulations, best practices can be determined and a final recommendation made. Ramp metering controls the amount of traffic entering from on-ramps to achieve better freeway efficiency, and it too has a potential safety benefit. This thesis pursues the goal of a best-case implementation of VSL. Two loading scenarios, a fully loaded case (90% of ramp maximums) and an off-peak loading case (60% of ramp maximums), at multiple stations with multiple implementation methods are strategically attempted until a best-case implementation is found. The final recommendation for the off-peak loading is a 15 mph speed reduction for 2 miles upstream and a 15 mph speed increase for 2 miles downstream of the detector that shows a high crash potential, with the speed change implemented in 5 mph increments every 10 minutes. The recommended case is found to reduce relative crash potential from .065 to -.292, as measured by a high-speed crash prediction algorithm (Abdel-Aty et al. 2005). A possibility of crash migration to downstream and upstream locations was observed; however, the safety and efficiency benefits far outweigh the crash migration potential.
No final recommendation is made for the use of VSL in the fully loaded case (low-speed case); however, ramp metering showed promising potential for safety improvement.
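The stepped speed-change schedule recommended above (a 15 mph change applied in 5 mph increments every 10 minutes) can be sketched as a small scheduling helper. The function name and starting limit below are illustrative, not taken from the thesis.

```python
def vsl_schedule(current_limit, target_change, step=5, interval_min=10):
    """Return (minute, limit) pairs stepping toward the target change
    in `step`-mph increments every `interval_min` minutes."""
    n_steps = abs(target_change) // step
    sign = 1 if target_change > 0 else -1
    return [(i * interval_min, current_limit + sign * step * (i + 1))
            for i in range(n_steps)]

# Upstream: reduce a hypothetical 55 mph limit by 15 mph in three steps.
upstream = vsl_schedule(55, -15)    # [(0, 50), (10, 45), (20, 40)]
# Downstream: raise the limit by 15 mph over the same period.
downstream = vsl_schedule(55, +15)  # [(0, 60), (10, 65), (20, 70)]
```

The gradual steps avoid the abrupt speed differentials that sudden limit changes would create at the VSL boundary.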
M.S.C.E.
Department of Civil and Environmental Engineering
Engineering and Computer Science
Civil Engineering
APA, Harvard, Vancouver, ISO, and other styles
39

Baroya, Sydney. "Real-time Body Tracking and Projection Mapping in the Interactive Arts." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2250.

Full text
Abstract:
Projection mapping, a subtopic of augmented reality, displays computer-generated light visualizations from projectors onto the real environment. A challenge for projection mapping in the performing interactive arts is dynamic body movement: accuracy and speed are key components of an immersive body projection mapping application, and both depend on scanning and processing time. This thesis presents a novel technique to achieve real-time body projection mapping using a state-of-the-art body tracking device, Microsoft's Azure Kinect DK, with an array of trackers for error minimization and movement prediction. The device's Sensor and Body Tracking SDKs allow multiple-device synchronization. We combine the tracking results from this feature with motion prediction to provide an accurate approximation for body joint tracking. Using the new joint approximations and the depth information from the Kinect, we create a silhouette, map textures and animations to it, and project it back onto the user. Our implementation of gesture detection provides interaction between the user and the projected images. Our implementation reduced the lag introduced by the devices, code, and projector, producing realistic real-time body projection mapping. Our end goal was to display the work in an art show; this thesis was presented at Burning Man 2019 and Delfines de San Carlos 2020 as interactive art installations.
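The fusion-plus-prediction idea described above, averaging one joint's readings across several trackers and then extrapolating its motion to cover pipeline latency, might be sketched roughly as follows. The function, the frame timing, and the coordinates are hypothetical, not the thesis's actual implementation.

```python
import numpy as np

def fuse_and_predict(joint_samples, prev_fused, dt, latency):
    """Average one joint's 3-D position across trackers, then linearly
    extrapolate by the pipeline latency to offset projection lag.
    joint_samples: list of (x, y, z) readings, one per device.
    prev_fused:    fused position from the previous frame.
    dt:            time between frames (s); latency: lag to cover (s)."""
    fused = np.mean(np.asarray(joint_samples, dtype=float), axis=0)
    velocity = (fused - np.asarray(prev_fused, dtype=float)) / dt
    return fused + velocity * latency

# Two trackers report slightly different positions for the same wrist joint.
pred = fuse_and_predict([(1.0, 2.0, 0.9), (1.2, 2.0, 1.1)],
                        prev_fused=(1.0, 1.9, 1.0), dt=0.033, latency=0.033)
```

Averaging damps per-device noise, while the velocity term shifts the projected silhouette to where the joint will be once the frame reaches the projector.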
APA, Harvard, Vancouver, ISO, and other styles
40

Zhang, Yizhou. "Real Time Crowding Information (RTCI) Provision : Impacts and Proposed Technical Solution." Thesis, KTH, Industriell ekologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-174113.

Full text
Abstract:
The increasing population leads to higher passenger travel demand in Stockholm, and public transport becomes more and more crowded during rush hours. However, passengers usually make decisions based on limited traffic information and their own travel experience; they cannot take the initiative to avoid crowding based on existing SL traffic information. Real Time Crowding Information (RTCI) research aims to help passengers plan their travel in the metro system more proactively, and to help the operator achieve higher space utilization efficiency. The RTCI system contains 4 subsystems: a projection system, a communication system, a speaker system and a recording system. A practical test was carried out at Tekniska Högskolan metro station for two weeks in May 2015 with the permission of SL. A triangle analysis was applied to assess the impacts of RTCI, combining three methods: passenger load data analysis, video record analysis and interview analysis. The interview results show that RTCI increased the satisfaction of around nine tenths of passengers, and 43% of interviewees thought it was very useful. Calculations based on the video records and interviews show that 25% of passengers consulted the information and changed their behaviour on the platform. According to the video records, the walking path became wider and passenger flow became smoother while the RTCI system was active, and the passenger load data show that passengers were distributed more evenly through the train. The number of passengers boarding the last unit of the train increased by 8%, while the number in the first and second units decreased by 4% during the RTCI test. The thesis focuses mainly on analysing the impacts of RTCI rather than on solving technical challenges, but a technical solution for the RTCI system is also proposed. The concept of "Smart Travel", which treats travel time, crowding information and travel cost as the most important factors for passengers, is discussed in Chapter 11.
APA, Harvard, Vancouver, ISO, and other styles
41

Roshandel, Saman. "Impact of real-time traffic characteristics on freeway crash occurrence : systematic review and meta-analysis." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/79151/2/Saman_Roshandel_Thesis.pdf.

Full text
Abstract:
A systematic literature review and a comprehensive meta-analysis combining the findings of existing studies were conducted in this thesis to analyse the impact of traffic characteristics on crash occurrence. Sensitivity analyses were conducted to investigate study quality, publication bias and outlier bias, and the time intervals used to measure traffic characteristics were considered. Based on this comprehensive and systematic review, and on the results of the subsequent meta-analysis, major issues in study design, traffic and crash data, and model development and evaluation are discussed.
APA, Harvard, Vancouver, ISO, and other styles
42

Oluyemi, Gbenga Folorunso. "Intelligent grain size profiling using neural network and application to sanding potential prediction in real time." Thesis, Robert Gordon University, 2007. http://hdl.handle.net/10059/1258.

Full text
Abstract:
Production of hydrocarbons from both consolidated and unconsolidated clastic reservoir rocks poses a risk of sand production, especially if a well-articulated programme of sand management is not put in place to deal with the problem at the onset of field development. Such a programme should include sand production potential prediction in real time if it is to be effective in dealing with likely sand problems. Real-time sanding potential prediction is an element of sand management strategy that involves evaluating the risk of sand failure/production and predicting the likely sand rate and volume, to facilitate optimum design of both downhole and surface equipment, especially as related to sand control. Sanding potential prediction is therefore crucial to reducing field development costs, making hitherto unattractive development environments profitable and thereby supporting the present drive to increase worldwide production of hydrocarbons. Specifically, real-time sanding potential prediction enables timely reservoir management decisions relating to the choice, design and installation of sand control methods; it is also an important input to sand monitoring and topside management. The current sanding potential prediction models in the industry lack the robustness to predict sanding potential in real time, and they are unable to track the grain size distributions of the sand-producing formation and of the produced sand, a functionality that would be useful in applying grain size distribution to sanding potential prediction. The scope of this work therefore covers the development of coupled models for grain size distribution and sanding potential predictions in real time. A previous work introduced the use of a commercial neural network technique for grain size distribution prediction.
This work builds upon it by using a purposely coded neural network, in conjunction with statistical techniques, to develop a model for grain size distribution prediction in both horizontal and vertical directions, and by extending the application to failure analysis and prediction of strength and sanding potential in formation rocks. The theoretical basis for this work lies in the cross-relationships between formation petrophysical properties and grain size distribution parameters on one hand, and between grain size distribution parameters and formation strength parameters on the other. The Hoek–Brown failure criterion, through an analytical treatment, serves as the platform for the development of the failure model, which is coupled to the grain size distribution and Unconfined Compressive Strength (UCS) models. The results obtained further demonstrate the application of neural networks to grain size distribution prediction. They also demonstrate that grain size distribution information can be used to monitor changes in formation strength and, by extension, the formation's movement within the failure envelope space, especially during production from a reservoir formation.
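The generalized Hoek–Brown criterion that underpins the failure model gives the major principal stress at failure as sigma1 = sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a. A minimal numerical sketch, using illustrative parameter values rather than the thesis's calibrated ones:

```python
def hoek_brown_sigma1(sigma3, sigma_ci, mb, s, a=0.5):
    """Generalized Hoek–Brown criterion: major principal stress at
    failure (MPa) given confinement sigma3, intact strength sigma_ci,
    and the rock-mass constants mb, s, a."""
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

# Unconfined case (sigma3 = 0) reduces to sigma_ci * s**a: for intact
# rock (s = 1) this recovers the unconfined compressive strength.
ucs = hoek_brown_sigma1(0.0, sigma_ci=50.0, mb=10.0, s=1.0)
```

In the coupled scheme the abstract describes, mb and s would be updated as grain size and UCS predictions change, shifting the failure envelope during production.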
APA, Harvard, Vancouver, ISO, and other styles
43

Tom, Tracey Hiroto Alena. "Development of Wave Prediction and Virtual Buoy Systems." 京都大学 (Kyoto University), 2010. http://hdl.handle.net/2433/120845.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Berrebi, Simon Jonas Youna. "A real-time bus dispatching policy to minimize headway variance." Thesis, Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51899.

Full text
Abstract:
Transit agencies include buffer time in their schedules to maintain stable headways and avoid bus bunching. In this work, a real-time holding mechanism is proposed to dispatch buses on a loop-shaped route, solely based on operating conditions in real-time. Holds are applied at the terminal station to minimize the expected variance of bus headways at departure. The bus-dispatching problem is formulated as a stochastic decision process. The optimality equations are derived and structural properties of the optimal policy are inferred by backward induction. The exact optimal holding policy is then found in closed form, as a function of the expected travel time of buses currently running. A simulation assuming stochastic operating conditions and unstable headway dynamics is performed to assess the expected average waiting time of passengers at stations. The proposed control strategy is found to provide lower passenger waiting time and better resiliency than methods recommended in the literature and used in practice.
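The thesis derives the exact optimal holding policy in closed form; as a simplified illustration of terminal holding, even a naive rule that spaces departures at least a target headway apart already damps headway variance. The arrival times and target headway below are hypothetical.

```python
def dispatch_with_holding(arrivals, target_headway):
    """Hold each bus at the terminal so that consecutive departures are
    spaced at least `target_headway` apart; a bus is never dispatched
    before it has actually arrived."""
    departures = []
    for arrival in arrivals:
        earliest = departures[-1] + target_headway if departures else arrival
        departures.append(max(arrival, earliest))
    return departures

# Bunched terminal arrivals (minutes) smoothed toward a 10-minute headway.
arrivals = [0, 4, 21, 23, 40]
deps = dispatch_with_holding(arrivals, target_headway=10)
headways = [b - a for a, b in zip(deps, deps[1:])]
# Arrival headways [4, 17, 2, 17] become the far more even [10, 11, 10, 10].
```

The optimal policy in the thesis goes further by conditioning the hold on the expected travel times of buses currently running, rather than on a fixed target.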
APA, Harvard, Vancouver, ISO, and other styles
45

Devarasetty, Ravi Kiran. "Heuristic Algorithms for Adaptive Resource Management of Periodic Tasks in Soft Real-Time Distributed Systems." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/31219.

Full text
Abstract:
Dynamic real-time distributed systems are characterized by significant run-time uncertainties at the mission and system levels. Typically, processing and communication latencies in such systems do not have known upper bounds, and event arrivals, task arrivals, and failure occurrences are non-deterministically distributed. This thesis proposes adaptive resource management heuristics for periodic tasks in dynamic real-time distributed systems with the (soft real-time) objective of minimizing missed-deadline ratios. The proposed techniques continuously monitor the application tasks at run-time for adherence to the desired real-time requirements, detect timing failures or trends indicating impending failures (due to workload fluctuations), and dynamically allocate resources by replicating subtasks of application tasks for load sharing. We present "predictive" resource allocation algorithms that use statistical regression theory to determine the number of subtask replicas required to adapt the application to a given workload situation. The algorithms use regression equations that forecast subtask timeliness as a function of external load parameters, such as the number of sensor reports, and internal resource load parameters, such as CPU utilization. The regression equations are determined off-line and on-line from application profiles that are collected off-line and on-line, respectively. To evaluate the performance of the predictive algorithms, we compare against algorithms that determine the number of subtask replicas using empirically determined functions, which compute the number of replicas as a function of the rate of change in the application workload during a "window" of past task periods. We implemented the resource management algorithms as part of a middleware infrastructure and measured their performance using a real-time benchmark.
The experimental results indicate that the predictive, regression theory-based algorithms generally produce lower missed deadline ratios than the empirical strategies under the workload conditions that were studied.
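A rough sketch of the predictive idea: regress subtask latency on external and internal load parameters from profile data, then pick the smallest replica count whose forecast per-replica latency meets the deadline. The profile numbers, function names, and the even load-splitting assumption are all hypothetical, not the thesis's middleware.

```python
import numpy as np

# Hypothetical offline profile: (sensor_reports, cpu_util) -> latency (ms).
X = np.array([[100, 0.2], [200, 0.4], [300, 0.6], [400, 0.8]], dtype=float)
y = np.array([12.0, 22.0, 32.0, 42.0])

# Least-squares regression with an intercept column appended.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_latency(reports, cpu):
    """Forecast subtask latency from the fitted regression equation."""
    return float(np.array([reports, cpu, 1.0]) @ coef)

def replicas_needed(reports, cpu, deadline_ms):
    """Smallest replica count whose per-replica share of the workload is
    forecast to finish within the deadline (load assumed split evenly)."""
    n = 1
    while predicted_latency(reports / n, cpu / n) > deadline_ms and n < 64:
        n += 1
    return n
```

Refitting the regression on-line from freshly collected profiles would mirror the off-line/on-line split described in the abstract.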
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
46

Jiang, Yu. "Inference and prediction in a multiple structural break model of economic time series." Diss., University of Iowa, 2009. https://ir.uiowa.edu/etd/244.

Full text
Abstract:
This thesis develops a new Bayesian approach to structural break modeling, focusing on modeling in-sample structural breaks and on forecasting time series while allowing for out-of-sample breaks. Our model has several desirable features. First, the number of regimes is not fixed but treated as a random variable. Second, the model adopts a hierarchical prior for regime coefficients, which allows the coefficients of one regime to carry information about those of other regimes; the regime coefficients can nevertheless be analytically integrated out of the posterior distribution, so we only need to deal with one level of the hierarchy. Third, the model is simple to implement and its computational cost is low. The model is applied to two different time series, S&P 500 monthly returns and U.S. real GDP quarterly growth rates, and the breaks it detects are linked to certain historical events.
APA, Harvard, Vancouver, ISO, and other styles
47

王雅芬. "A Real-time Vehicle Speed Prediction Method." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/61350853356938522013.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Information Management
100
For the past few years, with the advance of technology and economic growth, the quality of traditional transport systems has improved significantly, and Intelligent Transportation Systems (ITS) have become more and more popular. So far, there are two ways to collect real-time traffic information: (1) stationary Vehicle Detectors (VD) and (2) reports from Global Positioning System (GPS)-equipped probe cars. However, VD devices are expensive to build and maintain. Therefore, we propose a linear regression model to infer the relationship between vehicle speed and traffic flow, so that traffic flow can be estimated from the speeds reported by GPS-equipped probe cars. For vehicle speed prediction, we propose regression-based methods that predict future vehicle speed from real-time vehicle speed. In the experiments, the accuracy of vehicle speed prediction is 98.24%, and the Speed Error Ratio and Flow Error Ratio of the linear regression model are 4.46% and 33.75%, respectively. The speed and traffic flow estimated by the linear regression model are better than those of the other models tested, so the linear regression model can be used to estimate traffic flow for ITS. This approach is feasible for estimating future vehicle speed and real-time traffic flow for ITS improvement.
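A minimal sketch of the two-step idea: fit a linear speed-flow relation from paired detector observations, then estimate flow from a GPS-probe speed. The observations below are invented for illustration, not the thesis's data.

```python
import numpy as np

# Hypothetical paired VD observations: speed (km/h) and flow (veh/h).
speed = np.array([90.0, 80.0, 70.0, 60.0, 50.0])
flow = np.array([1100.0, 1400.0, 1700.0, 2000.0, 2300.0])

# Fit flow = a * speed + b by least squares.
a, b = np.polyfit(speed, flow, 1)

def estimate_flow(probe_speed):
    """Estimate traffic flow from a GPS-probe speed via the fitted line."""
    return a * probe_speed + b

est = estimate_flow(75.0)  # a probe speed between two calibration points
```

In practice a single linear fit only holds within one traffic regime (here, the uncongested branch of the speed-flow curve), which is consistent with the thesis calibrating the relation from real-time data.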
APA, Harvard, Vancouver, ISO, and other styles
48

"Mason: Real-time NBA Matches Outcome Prediction." Master's thesis, 2017. http://hdl.handle.net/2286/R.I.43914.

Full text
Abstract:
The National Basketball Association (NBA) is the most popular basketball league in the world, and its worldwide popularity gives rise to a large number of interesting and challenging research problems. Among them, predicting the outcome of an upcoming NBA match between two specific teams from their historical data is especially attractive. The rapid development of machine learning techniques opens the door to examining the correlation between statistical data and match outcomes. However, existing methods typically make predictions before the game starts; in-game prediction, or real-time prediction, has not yet been sufficiently studied. During a match, data are generated cumulatively, and with this accumulation the data become more comprehensive and potentially embrace more predictive power, so that prediction accuracy may increase dynamically as a match goes on. In this study, I design game-level and player-level features based on real-time data of NBA matches and apply a machine learning model to investigate the possibility and characteristics of real-time prediction in NBA matches.
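The notion of cumulative in-game features, which grow more informative as the match progresses, might be sketched as follows. The event schema and the particular features (score margin, home field-goal attempts, home rebounds) are hypothetical, not the thesis's feature set.

```python
def cumulative_features(events, minute):
    """Aggregate game events up to `minute` into a feature vector:
    [home score margin, home field-goal attempts, home rebounds].
    `events` is a list of (minute, team, stat, value) tuples."""
    margin = fga = reb = 0
    for t, team, stat, value in events:
        if t > minute:
            continue
        sign = 1 if team == "home" else -1
        if stat == "points":
            margin += sign * value
        elif stat == "fga" and team == "home":
            fga += value
        elif stat == "reb" and team == "home":
            reb += value
    return [margin, fga, reb]

events = [(1, "home", "points", 2), (2, "away", "points", 3),
          (3, "home", "fga", 1), (5, "home", "points", 3),
          (6, "home", "reb", 1)]
early = cumulative_features(events, minute=2)   # sparse early-game view
later = cumulative_features(events, minute=6)   # richer mid-game view
```

A classifier retrained or re-queried on such vectors at each minute yields a prediction whose inputs, and potentially its accuracy, grow as the game unfolds.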
Dissertation/Thesis
Masters Thesis Computer Science 2017
APA, Harvard, Vancouver, ISO, and other styles
49

Sun, Hongyu. "Adaptive short-term traffic prediction in real-time application." 2005. http://catalog.hathitrust.org/api/volumes/oclc/62409118.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Leitão, Bruno Miguel Direito Pereira. "Development of classification methods for real-time seizure prediction." Doctoral thesis, 2013. http://hdl.handle.net/10316/23583.

Full text
Abstract:
Doctoral thesis in the Doctoral Programme in Information Sciences and Technologies, presented to the Faculty of Sciences and Technology of the University of Coimbra.
In the last decades, the scientific community has made enormous efforts to understand the basic mechanisms underlying the generation of epileptic seizures. The analysis of pre-ictal dynamics among different brain regions has been shown to be an important source of information towards understanding the spatio-temporal mechanisms. This study, partially a contribution to the EPILEPSIAE project, aims at the prediction of unforeseeable and uncontrollable epileptic seizures. Ultimately, the successful development of seizure prediction algorithms represents a fundamental step towards closed-loop intervention systems, which would improve the quality of life of epileptic patients. The first part of this study concerns the development of a patient-specific seizure prediction algorithm, based on machine learning, with high sensitivity and a low false-positive rate. The dynamical changes of brain activity are analyzed using a high-dimensional feature set obtained from both scalp and intracranial multichannel electroencephalogram (EEG). The features are low-complexity measures, implementable in real-time scenarios, and classification was performed using cost-sensitive support vector machines (SVM). The proposed method was tested on 216 patients from the multicenter EPILEPSIAE database and presented statistically significant results for a small group of patients. We also analyzed different optimization strategies, such as feature selection, feature reduction (classical multidimensional scaling) and post-processing (moving-average filter and Kalman filter), in order to improve the results. We addressed the characterization of EEG spatio-temporal patterns and the classification of specific brain states: the method proposed, based on the segmentation of topographic maps and on a statistical framework (hidden Markov models), shows promising results for the identification of a pre-ictal stage.
Lastly, we present a novel approach to characterize the pre-ictal period using multi-way models. Using the PARAFAC model, the EEG data are decomposed into rank-one tensors, and it is hypothesized that one of the components represents variations related to the pre-ictal period. Using this high-order data representation, we also propose a method to detect the variability of the data using incremental tensor analysis. The conclusions of this study sustain the hypothesis that the epileptic seizures of a group of patients are predictable. Concerning the methodologies proposed to analyze the space-time-frequency domain, we hope that the suggested approaches point towards new directions in the field of seizure prediction research.
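The moving-average post-processing mentioned among the optimization strategies can be sketched as a simple alarm filter over per-epoch classifier scores: an alarm is raised only when the windowed mean crosses a threshold, suppressing isolated spikes that would otherwise count as false alarms. The window, threshold, and score values are illustrative only.

```python
from collections import deque

def smoothed_alarms(scores, window, threshold):
    """Moving-average post-processing of per-epoch classifier scores.
    Returns the indices at which the windowed mean reaches `threshold`;
    isolated high scores inside a low-scoring stretch are suppressed."""
    buf = deque(maxlen=window)
    alarms = []
    for i, s in enumerate(scores):
        buf.append(s)
        if len(buf) == window and sum(buf) / window >= threshold:
            alarms.append(i)
    return alarms

# An isolated spike at index 2 is suppressed; the sustained run fires.
scores = [0.1, 0.2, 0.9, 0.1, 0.8, 0.9, 0.85, 0.9]
alarms = smoothed_alarms(scores, window=3, threshold=0.7)
```

Requiring a sustained run of pre-ictal-looking epochs is a common way to trade a little sensitivity for a much lower false-alarm rate; the Kalman filter mentioned in the abstract plays a similar smoothing role.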
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography