Dissertations / Theses on the topic 'Real data model'

To see the other types of publications on this topic, follow the link: Real data model.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Real data model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Frafjord, Christine. "Friction Factor Model and Interpretation of Real Time Data." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for petroleumsteknologi og anvendt geofysikk, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-21772.

Full text
Abstract:
Interest in torque and drag issues has increased in recent years as the complexity of the wells being drilled has grown. The petroleum industry continually expands the extended-reach drilling envelope, which forces it to find new methods and tools to better interpret friction because of the central role friction plays in achieving successful wells. The importance of modelling friction is further elaborated in chapter 2. The present report contains an investigation of the theory behind Aadnoy's friction model in terms of its possibilities and limitations. The theory has been used as the foundation for an Excel calculation tool to model friction, and three field cases have been analysed with the friction model in order to study it. The Excel program applied in this work can be provided by the author of the present report to readers on request. Even though the limitations listed in chapter 6 do exist, the model has been shown to be applicable for obtaining an indication of the downhole situation. By comparing the results from two different well sections in the same well, it was possible to attribute high borehole friction to hole-cleaning issues. However, one important issue detected during the present work is that the modelled friction factor is highly sensitive to changes in hook load measurements in shallow sections with small inclination and little drillpipe downhole. This demands awareness during application of the model because it risks producing misleading results. For future work, the model should be implemented in a more powerful tool than Excel, with added features to reduce the constraints of the model mentioned in chapter 6. As a continuation of the present work, the analytical model should also be investigated further using real wells with more quantitative and qualitative measured data.
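As an illustration of the kind of calculation such a friction model performs, the sketch below balances the hook load over straight inclined segments of a soft-string model and back-calculates a friction factor from a measured pick-up load. All segment data, the measured load and the bisection scheme are invented for illustration; this is not the thesis' Aadnoy-based Excel tool.

```python
# Minimal sketch of a soft-string hook-load calculation and friction-factor
# back-calculation, assuming straight inclined segments only (no dog-leg terms).
# Segment data and measured loads are hypothetical, not taken from the thesis.
import math

def hook_load(segments, mu, hoisting=True):
    """Sum axial force from the bit upward; segments = [(length_m, incl_deg, w_N_per_m)]."""
    force = 0.0
    for length, incl_deg, w in segments:
        theta = math.radians(incl_deg)
        sign = 1.0 if hoisting else -1.0
        # weight component along the hole +/- sliding friction opposing the motion
        force += w * length * (math.cos(theta) + sign * mu * math.sin(theta))
    return force

def back_calculate_mu(segments, measured_hook_load, hoisting=True):
    """Bisection on mu so that the modelled hook load matches the measurement."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        model = hook_load(segments, mid, hoisting)
        if (model < measured_hook_load) == hoisting:
            lo = mid          # pick-up load grows with friction, slack-off load shrinks
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical well: vertical, build and tangent sections (length m, inclination deg, unit weight N/m)
well = [(1000.0, 0.0, 300.0), (500.0, 30.0, 300.0), (1500.0, 60.0, 300.0)]
print("pick-up load at mu=0.25:", hook_load(well, 0.25, hoisting=True))
print("back-calculated mu:", back_calculate_mu(well, measured_hook_load=8.0e5, hoisting=True))
```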
APA, Harvard, Vancouver, ISO, and other styles
2

Granholm, George Richard 1976. "Near-real time atmospheric density model correction using space catalog data." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/44899.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2000.
Includes bibliographical references (p. 179-184).
Several theories have been presented regarding the creation of a neutral density model that is corrected or calibrated in near-real time using data from space catalogs. These theories are usually limited to a small number of frequently tracked "calibration satellites" about which information such as mass and cross-sectional area is known very accurately. This work, however, attempts to validate a methodology by which drag information from all available low-altitude space objects is used to update any given density model on a comprehensive basis. The basic update and prediction algorithms and a technique to estimate true ballistic factors are derived in detail. A full simulation capability is independently verified. The process is initially demonstrated using simulated range, azimuth, and elevation observations so that issues such as the required number and types of calibration satellites, the density of observations, and susceptibility to atmospheric conditions can be examined. Methods of forecasting the density correction models are also validated under different atmospheric conditions.
by George Richard Granholm.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
3

Bloodsworth, Peter Charles. "A generic model for real-time scheduling based on dynamic heterogeneous data." Thesis, Oxford Brookes University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.432716.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Larsson, Daniel. "ARAVQ for discretization of radar data : An experimental study on real world sensor data." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11114.

Full text
Abstract:
The aim of this work was to investigate if interesting patterns could be found in time series radar data that had been discretized by the algorithm ARAVQ into symbolic representations and if the ARAVQ thus might be suitable for use in the radar domain. An experimental study was performed where the ARAVQ was used to create symbolic representations of data sets with radar data. Two experiments were carried out that used a Markov model to calculate probabilities used for discovering potentially interesting patterns. Some of the most interesting patterns were then investigated further. Results have shown that the ARAVQ was able to create accurate representations for several time series and that it was possible to discover patterns that were interesting and represented higher level concepts. However, the results also showed that the ARAVQ was not able to create accurate representations for some of the time series.
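The pattern-scoring step described above can be pictured with a small sketch: estimate first-order Markov transition probabilities from a discretized symbol stream and rank n-grams by how unlikely they are. The symbol sequence below is invented, and no ARAVQ implementation is included.

```python
# Minimal sketch of the Markov-model scoring idea: fit transition probabilities to an
# ARAVQ-style symbol sequence and flag low-probability patterns as potentially interesting.
from collections import Counter, defaultdict

symbols = list("AAABBBCCCABBBCCCAAABBBDCCC")  # discretized sensor stream (illustrative)

# First-order transition probabilities P(next | current)
pair_counts = Counter(zip(symbols, symbols[1:]))
state_counts = Counter(symbols[:-1])
P = defaultdict(dict)
for (a, b), c in pair_counts.items():
    P[a][b] = c / state_counts[a]

def pattern_probability(pattern):
    """Probability of a pattern under the fitted Markov chain (product of transitions)."""
    prob = 1.0
    for a, b in zip(pattern, pattern[1:]):
        prob *= P.get(a, {}).get(b, 0.0)
    return prob

# Low-probability trigrams are candidates for "interesting" higher-level events
trigrams = {tuple(symbols[i:i + 3]) for i in range(len(symbols) - 2)}
for g in sorted(trigrams, key=pattern_probability)[:3]:
    print("".join(g), round(pattern_probability(g), 4))
```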
APA, Harvard, Vancouver, ISO, and other styles
5

Robidoux, Jeff. "Real-Time Spatial Monitoring of Vehicle Vibration Data as a Model for TeleGeoMonitoring Systems." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/32426.

Full text
Abstract:
This research presents the development and proof of concept of a TeleGeoMonitoring (TGM) system for spatially monitoring and analyzing, in real-time, data derived from vehicle-mounted sensors. In response to the concern for vibration related injuries experienced by equipment operators in surface mining and construction operations, the prototype TGM system focuses on spatially monitoring vehicle vibration in real-time. The TGM vibration system consists of 3 components: (1) Data Acquisition Component, (2) Data Transfer Component, and (3) Data Analysis Component. A GPS receiver, laptop PC, data acquisition hardware, triaxial accelerometer, and client software make up the Data Acquisition Component. The Data Transfer Component consists of a wireless data network and a data server. The Data Analysis Component provides tools to the end user for spatially monitoring and analyzing vehicle vibration data in real-time via the web or GIS workstations. Functionality of the prototype TGM system was successfully demonstrated in both lab and field tests. The TGM vibration system presented in this research demonstrates the potential for TGM systems as a tool for research and management projects, which aim to spatially monitor and analyze data derived from mobile sensors in real-time.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
6

Hu, Xiaoxiang. "Analysis of Time-related Properties in Real-time Data Aggregation Design." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39046.

Full text
Abstract:
Data aggregation is extensively used in modern data management systems. Based on a data aggregation taxonomy named DAGGTAX, we propose an analytic process to evaluate the run-time platform and time-related parameters of Data Aggregation Processes (DAP) in a real-time system design, which can help designers eliminate infeasible design decisions at an early stage. The process for data aggregation design and analysis includes the following steps. Firstly, the user specifies the variations of the platform and the DAP by identifying the features of the system and the time-related parameters, respectively. Then, the user chooses one combination of platform and DAP variations, which forms the initial design of the system. Finally, the analytic method is applied for feasibility analysis using schedulability analysis techniques. If no infeasibilities are found in the process, the design can be finished. Otherwise, the design must be revised starting from the run-time platform and DAP design stage, and the schedulability analysis is applied again to the revised design until all infeasibilities are fixed. To help designers understand and describe the system and perform feasibility analysis, we propose a new UML (Unified Modeling Language) profile that extends UML with concepts related to real-time data aggregation design. These extensions aim to support the conceptual modeling of real-time data aggregation. In addition, a transformation method based on the UML profile to transfer the data aggregation design into a task model is proposed as well. At the end of the thesis, a case study that applies the analytic process to analyze the architecture design of an environmental monitoring sensor network is presented as a demonstration of our research.
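The feasibility step relies on standard schedulability analysis; a minimal sketch of fixed-priority response-time analysis for a hypothetical task set (two sensor-sampling tasks plus one aggregation task, not taken from the thesis or from DAGGTAX) is shown below.

```python
# Minimal sketch of a schedulability check: classic response-time analysis for
# fixed-priority periodic tasks. Task names and timing values are invented.
import math

tasks = [  # (name, worst-case execution time, period == deadline), highest priority first
    ("sample_sensor_A", 1.0, 10.0),
    ("sample_sensor_B", 2.0, 20.0),
    ("aggregate_DAP",   5.0, 50.0),
]

def response_time(index):
    """Iterate R = C_i + sum_j ceil(R / T_j) * C_j over all higher-priority tasks j."""
    name, c_i, t_i = tasks[index]
    r = c_i
    while True:
        interference = sum(math.ceil(r / t_j) * c_j for _, c_j, t_j in tasks[:index])
        r_next = c_i + interference
        if r_next == r:
            return r
        if r_next > t_i:        # deadline miss: this design variant is infeasible
            return float("inf")
        r = r_next

for i, (name, _, period) in enumerate(tasks):
    r = response_time(i)
    print(f"{name}: response time {r} vs deadline {period} ->",
          "schedulable" if r <= period else "infeasible")
```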
APA, Harvard, Vancouver, ISO, and other styles
7

Hwang, Yuan-Chun. "Local and personalised models for prediction, classification and knowledge discovery on real world data modelling problems." Click here to access this resource online, 2009. http://hdl.handle.net/10292/776.

Full text
Abstract:
This thesis presents several novel methods to address some real world data modelling issues through the use of local and individualised modelling approaches. A set of real world data modelling issues, such as modelling evolving processes, defining unique problem subspaces, and identifying and dealing with noise, outliers, missing values, imbalanced data and irrelevant features, is reviewed and their impact on models is analysed. The thesis makes nine major contributions to information science, including four generic modelling methods, three real world application systems that apply these methods, a comprehensive review of real world data modelling problems, and a data analysis and modelling software package. Four novel methods have been developed and published in the course of this study: (1) DyNFIS – Dynamic Neuro-Fuzzy Inference System, (2) MUFIS – A Fuzzy Inference System That Uses Multiple Types of Fuzzy Rules, (3) Integrated Temporal and Spatial Multi-Model System, and (4) Personalised Regression Model. DyNFIS addresses the issue of unique problem subspaces by identifying them through a clustering process, creating a fuzzy inference system based on the clusters, and applying supervised learning to update the fuzzy rules, both the antecedent and the consequent parts. This puts strong emphasis on the unique problem subspaces and allows easy-to-understand rules to be extracted from the model, which adds knowledge to the problem. MUFIS takes DyNFIS a step further by integrating a mixture of different types of fuzzy rules in a single fuzzy inference system. In many real world problems, some problem subspaces were found to be more suitable for one type of fuzzy rule than others and, therefore, by integrating multiple types of fuzzy rules together, a better prediction can be made. The type of fuzzy rule assigned to each unique problem subspace also provides additional understanding of its characteristics. The Integrated Temporal and Spatial Multi-Model System is a different approach that integrates two contrasting views of the problem for better results: the temporal model uses recent data and the spatial model uses historical data to make the prediction. By combining the two through a dynamic contribution adjustment function, the system is able to provide stable yet accurate predictions on real world data modelling problems that have intermittently changing patterns. The personalised regression model is designed for classification problems. Real world data modelling problems often involve noisy or irrelevant variables, and the number of input vectors in each class may be highly imbalanced; these issues make the definition of unique problem subspaces less accurate. The proposed method uses a model selection system based on an incremental feature selection method to select the best set of features. A global model is then created based on this set of features and optimised using training input vectors in the test input vector's vicinity. This approach focuses on the definition of the problem space and puts emphasis on the problem subspace in which the test input vector resides. The novel generic prediction methods listed above have been applied to the following three real world data modelling problems: (1) renal function evaluation, which achieved higher accuracy than all other existing methods while allowing easy-to-understand rules to be extracted from the model for future studies; (2) a milk volume prediction system for Fonterra, which achieved a 20% improvement over the method currently used by Fonterra; and (3) a prognosis system for pregnancy outcome prediction (SCOPE), which achieved more stable and slightly better accuracy than traditional statistical methods. These solutions constitute a contribution to the area of applied information science. In addition to the above contributions, a data analysis software package, NeuCom, was developed primarily by the author prior to and during the PhD study to facilitate some of the standard experiments and analyses on various case studies. This is a full-featured data analysis and modelling software package that is freely available for non-commercial purposes (see Appendix A for more details). In summary, many real world problems consist of many smaller problems. It was found beneficial to acknowledge the existence of these sub-problems and address them through the use of local or personalised models. The rules extracted from the local models also made new knowledge available to the researchers and allow more in-depth study of the sub-problems in future research.
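A minimal sketch of the "personalised" idea, assuming a plain k-nearest-neighbour neighbourhood and a local linear fit on synthetic data; the incremental feature selection and optimisation steps described in the abstract are omitted.

```python
# Minimal sketch of personalised/local modelling: for each test vector, select its
# nearest training vectors and fit a local model on that neighbourhood only.
# Data are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(400, 2))
y_train = np.sin(X_train[:, 0]) + 0.5 * X_train[:, 1] ** 2 + rng.normal(0, 0.1, 400)

def personalised_predict(x_query, k=30):
    """Fit a local linear model on the k nearest neighbours of the query vector."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]
    A = np.hstack([X_train[idx], np.ones((k, 1))])      # local design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, y_train[idx], rcond=None)
    return np.append(x_query, 1.0) @ coef

x_new = np.array([0.5, -1.0])
print("personalised prediction:", personalised_predict(x_new))
print("true value (noise-free):", np.sin(0.5) + 0.5 * 1.0)
```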
APA, Harvard, Vancouver, ISO, and other styles
8

Bergstrom, Sarah Elizabeth 1979. "An algorithm for reducing atmospheric density model errors using satellite observation data in real-time." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/17537.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2002.
Vita.
Includes bibliographical references (p. 233-240).
Atmospheric density mismodeling is a large source of error in satellite orbit determination and prediction in the 200-600 kilometer range. Algorithms for correcting or "calibrating" an existing atmospheric density model to improve accuracy have been seen as a major way to reduce these errors. This thesis examines one particular algorithm, which does not require launching special "calibration satellites" or new sensor platforms. It relies solely on the large quantity of observations of existing satellites which are already being made for space catalog maintenance. By processing these satellite observations in near real-time, a linear correction factor can be determined and forecast into the near future. As a side benefit, improved estimates of the ballistic coefficients of some satellites are also produced, and statistics concerning the accuracy of the underlying density model can be extracted from the correction. This algorithm had previously been implemented and the implementation had been partially validated using simulated data. This thesis describes the completion of the validation process using simulated data and the beginning of the real-data validation process. It is also intended to serve as a manual for using and modifying the implementation of the algorithm.
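A minimal sketch of the correction idea, assuming a simple linear relation between modelled density and drag-derived density and synthetic stand-in values; the actual algorithm works on orbital observations from the space catalog rather than on densities directly.

```python
# Minimal sketch: fit a linear correction rho_corrected = a + b * rho_model by least
# squares, given baseline-model densities and (synthetic) drag-derived densities.
import numpy as np

rng = np.random.default_rng(1)
rho_model = rng.uniform(1e-13, 5e-12, size=500)                        # kg/m^3 from the baseline model
rho_observed = 0.85 * rho_model + 2e-14 + rng.normal(0, 5e-14, 500)    # drag-derived (simulated)

A = np.column_stack([np.ones_like(rho_model), rho_model])
(a, b), *_ = np.linalg.lstsq(A, rho_observed, rcond=None)
print(f"fitted correction: rho_corrected = {a:.2e} + {b:.3f} * rho_model")

# The correction is then forecast a short time ahead and applied when propagating orbits
rho_new_model = 3e-12
print("corrected density for a new prediction:", a + b * rho_new_model)
```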
by Sarah Elizabeth Bergstrom.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
9

Byrnes, Denise Dianne. "Static scheduling of hard real-time control software using an asynchronous data-driven execution model /." The Ohio State University, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=osu14877799148243.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hajjam, Sohrab. "Real-time flood forecasting model intercomparison and parameter updating using rain gauge and weather radar data." Thesis, University of Salford, 1997. http://usir.salford.ac.uk/43019/.

Full text
Abstract:
This thesis describes the development of real-time flood forecasting models at selected catchments in three countries, using rain gauge and radar-derived rainfall estimates and time-series analysis. An extended inter-comparison of real-time flood forecasting models has been carried out and an attempt has been made to rank the flood forecasting models. It was found that an increase in model complexity does not necessarily lead to an increase in forecast accuracy. An extensive analysis of group-calibrated transfer function (TF) models, on the basis of antecedent catchment conditions and storm characteristics, revealed that the use of group models resulted in a significant improvement in the quality of the forecast. A simple model to calculate the average pulse response has also been developed. The development of a hybrid genetic algorithm (HGA), applied to a physically realisable transfer function model, is described. The techniques of interview selection and fitness scaling, as well as random bit mutation and multiple crossover, have been included, and both binary and real-number encoding techniques have been assessed. The HGA has been successfully applied to the identification and simulation of the dynamic TF model. Four software packages have been developed, and extensive development and testing has proved the viability of the approach. Extensive research has been conducted to find the most important adjustment factor of the dynamic TF model. The impact of volume, shape and time adjustment factors on forecast quality has been evaluated, and it has been concluded that the volume adjustment factor is the most important of the three. Furthermore, several attempts have been made to relate the adjustment factors to different elements, and the interaction of the adjustment factors has also been investigated. An autoregressive model has been used to develop a new updating technique for the dynamic TF model by updating the B parameters through the prediction of future volume adjustment factors over the forecast lead-time. An autoregressive error prediction model has also been combined with a static TF model. Testing has shown that the performance of both new TF models is superior to conventional procedures.
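A minimal sketch of a discrete transfer-function forecast with a scalar volume adjustment factor applied to the B parameters, in the spirit of the updating scheme described above; parameter values and data are illustrative and not calibrated to any catchment.

```python
# Minimal sketch of a transfer-function rainfall-runoff forecast with real-time
# volume-factor updating. All coefficients and observations are invented.
a = [0.6]                 # autoregressive parameters on past flows
b = [0.3, 0.2, 0.1]       # pulse-response parameters on past rainfall

def ar_part(flow_history):
    return sum(aj * flow_history[-(j + 1)] for j, aj in enumerate(a))

def pulse_part(rain_history):
    return sum(bi * rain_history[-(i + 1)] for i, bi in enumerate(b))

def tf_forecast(flow_history, rain_history, volume_factor=1.0):
    """One-step-ahead flow: q_{t+1} = sum(a_j * q_{t-j}) + v * sum(b_i * r_{t-i})."""
    return ar_part(flow_history) + volume_factor * pulse_part(rain_history)

flows = [5.0, 6.2, 7.8]            # observed flows (m^3/s), most recent last
rains = [0.0, 4.0, 10.0, 6.0]      # rainfall (mm per step), most recent last

# Real-time updating: pick the volume factor that reproduces the latest observed flow,
# which was generated from the histories one step earlier.
pulse_prev = pulse_part(rains[:-1])
v = (flows[-1] - ar_part(flows[:-1])) / pulse_prev if pulse_prev else 1.0
print("estimated volume adjustment factor:", round(v, 3))
print("updated one-step-ahead forecast:", round(tf_forecast(flows, rains, v), 2))
```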
APA, Harvard, Vancouver, ISO, and other styles
11

Selicati, Valeria. "Innovative thermodynamic hybrid model-based and data-driven techniques for real time manufacturing sustainability assessment." Doctoral thesis, Università degli studi della Basilicata, 2022. http://hdl.handle.net/11563/157566.

Full text
Abstract:
This doctoral thesis is the result of the supervision and collaboration of the University of Basilicata, the Polytechnic of Bari, and the enterprise Master Italy s.r.l. The main research lines explored and discussed in the thesis are: sustainability in general and, more specifically, manufacturing sustainability; the Industry 4.0 paradigm linked to smart (green) manufacturing; model-based assessment techniques for manufacturing processes; and data-driven analysis methodologies. These seemingly unrelated topics are handled throughout the thesis in a way that reveals how strongly interwoven and transversal they are. The goal of the PhD programme was to design and validate innovative assessment models in order to investigate the nature of manufacturing processes and rationalize the relationships and correlations between the different stages of the process. This composite model may be utilized as a tool in political decision-making about the long-term development of industrial processes and the continuous improvement of manufacturing processes. The overarching goal of this research is to provide strategies for real-time monitoring of manufacturing performance and sustainability based on hybrid thermodynamic models of the first and second order, as well as models based on data and machine learning. The proposed model is tested on a real industrial case study using a systemic approach: the phases of requirements identification, data inventory (material, energetic, geometric, physical, economic, social, qualitative, quantitative), modelling, analysis, ad hoc algorithm adjustment (tuning), implementation, and validation are developed for the aluminium alloy die-casting processes of Master Italy s.r.l., a southern Italian SME which has designed and produced accessories and metal components for windows since 1986. The thesis digs into the topic of the sustainability of smart industrial processes from every perspective, including both the quantity and quality of resources used throughout the manufacturing process's life cycle. Traditional sustainability analysis models (such as life cycle analysis, LCA) are combined with approaches based on the second law of thermodynamics (exergetic analysis); they are then complemented by models based on information technology (big-data analysis). A full analysis of the potential of each strategy, whether executed alone or in combination, is provided. Following a summary of the metrics relevant for determining the degree of sustainability of industrial processes, the case study is demonstrated through modelling and extensive analysis of the processes, namely aluminium alloy die casting. After assessing the sustainability of production processes using a model-based approach, the thesis moves on to the real-time application of machine learning analyses with the goal of identifying downtime and failures during the production cycle and predicting their occurrence well in advance, using real-time process thermodynamic parameter values and automatic learning. Finally, the thesis suggests the use of integrated models on various case studies, such as laser deposition processes and the renovation of existing buildings, to demonstrate the multidisciplinarity and transversality of these issues. The thesis reveals interesting findings derived from the use of a hybrid method for assessing the sustainability of manufacturing processes, combining exergetic analysis with life cycle assessment. The proposed theme is current and relevant to the most recent developments in the field of industrial sustainability, combining traditional model-based approaches with innovative approaches based on the collection of big data and its analysis using the most appropriate machine learning methodologies. Furthermore, the thesis demonstrates a highly promising application of machine learning approaches to real-time data in order to identify fault sources in the manufacturing line, starting from sustainability measures generated by exergetic analysis and life cycle analysis. As such, it represents an advance over the initial state of the art. Indeed, manufacturing companies that implement business strategies based on smart models and key enabling technologies today achieve higher market value in terms of quality, customisation, flexibility, and sustainability.
APA, Harvard, Vancouver, ISO, and other styles
12

Lee, Kelvin Kai-wing. "A delay model approach to analysing the performance of wireless communications /." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?COMP%202005%20LEE.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Shay, Nathan Michael. "Investigating Real-Time Employer-Based Ridesharing Preferences Based on Stated Preference Survey Data." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1471587439.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Zhang, Yingying. "Algorithms and Data Structures for Efficient Timing Analysis of Asynchronous Real-time Systems." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4622.

Full text
Abstract:
This thesis presents a framework to verify asynchronous real-time systems based on model checking. These systems are modeled using a common modeling formalism named Labeled Petri nets (LPNs). In order to verify the real-time systems algorithmically, the zone-based timing analysis method is used for LPNs. It searches the state space with timing information (represented by zones). When there is a high degree of concurrency in the model, firing concurrently enabled transitions in different orders may result in different zones, and these zones may be combined without affecting the verification result. Since the zone-based method cannot deal with this problem efficiently, the POSET timing analysis method is adopted for LPNs. It separates concurrency from causality and generates exactly one zone for a single state, but it needs to maintain an extra POSET matrix for each state. In order to save time and memory, an improved zone-based timing analysis method is introduced by integrating the two methods above. It searches the state space with zones but eliminates the use of the POSET matrix, and it generates the same result as the POSET method. To illustrate these methods, a circuit example is used throughout the thesis. Since the state space generated is usually very large, a graph data structure named multi-valued decision diagrams (MDDs) is implemented to store the zones compactly. In order to share common clock values of different zones, two zone encoding methods are described: direct encoding and minimal constraint encoding. They ignore unnecessary information in zones and thus reduce the length of the integer tuples. The effectiveness of these two encoding methods is demonstrated by experimental results for the circuit example.
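Zones of this kind are commonly stored as difference bound matrices; the generic sketch below canonicalises one with all-pairs shortest paths so that equal zones can be recognised and merged. It illustrates the data structure only, not the LPN-specific algorithms or the MDD encoding of the thesis.

```python
# Minimal sketch of a zone as a difference bound matrix (DBM): D[i][j] bounds
# clock_i - clock_j <= D[i][j], with clock 0 as the constant-zero reference.
INF = float("inf")

def canonicalize(D):
    """Floyd-Warshall tightening of a DBM (in place); returns None if the zone is empty."""
    n = len(D)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    if any(D[i][i] < 0 for i in range(n)):
        return None          # inconsistent constraints: empty zone
    return D

# Zone over clocks x1, x2 (index 0 is the zero clock): x1 <= 5, x2 <= 7, x2 - x1 <= 3
D = [[0,   0,   0],
     [5,   0, INF],
     [7,   3,   0]]
print(canonicalize(D))       # the canonical form also derives x1 - x2 <= 5
```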
APA, Harvard, Vancouver, ISO, and other styles
15

Reynolds, Curt Andrew 1960. "Estimating crop yields by integrating the FAO crop specific water balance model with real-time satellite data and ground-based ancillary data." Thesis, The University of Arizona, 1998. http://hdl.handle.net/10150/192102.

Full text
Abstract:
The broad objective of this research was to develop a spatial model which provides both timely and quantitative regional maize yield estimates for real-time Early Warning Systems (EWS) by integrating satellite data with ground-based ancillary data. The Food and Agriculture Organization (FAO) Crop Specific Water Balance (CSWB) model was modified by using real-time spatial data that include: dekad (ten-day) estimated rainfall (RFE) and Normalized Difference Vegetation Index (NDVI) composites derived from the METEOSAT and NOAA-AVHRR satellites, respectively; ground-based dekad potential evapotranspiration (PET) data; and seasonal estimated area-planted data provided by the Government of Kenya (GoK). Geographical Information System (GIS) software was utilized to: drive the crop yield model; manage the spatial and temporal variability of the satellite images; interpolate between ground-based potential evapotranspiration and rainfall measurements; and import ancillary data such as soil maps, administrative boundaries, etc. In addition, agro-ecological zones, length of growing season, and crop production functions, as defined by the FAO, were utilized to estimate quantitative maize yields. The GIS-based CSWB model was developed for three different resolutions: agro-ecological zone (AEZ) polygons; 7.6-kilometer pixels; and 1.1-kilometer pixels. The model was validated by comparing model production estimates from archived satellite and agro-meteorological data to historical district maize production reports from two Kenya government agencies, the Ministry of Agriculture (MoA) and the Department of Resource Surveys and Remote Sensing (DRSRS). For the AEZ analysis, comparison of model district maize production results and district maize production estimates from the MoA (1989-1997) and the DRSRS (1989-1993) revealed correlation coefficients of 0.94 and 0.93, respectively. The comparison for the 7.6-kilometer analysis showed correlation coefficients of 0.95 and 0.94, respectively. Comparison of results from the 1.1-kilometer model with district maize production data from the MoA (1993-1997) gave a correlation coefficient of 0.94. These results indicate the 7.6-kilometer pixel-by-pixel analysis is the most favorable method. Recommendations to improve the model include finer-resolution images for area planted, soil moisture storage, and RFE maps, as well as measuring the actual length of the growing season from a satellite-derived Growing Degree Day product.
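A minimal sketch of a dekadal crop-specific water balance of the kind the CSWB model computes: rainfall is balanced against the crop water requirement (Kc x PET) and the seasonal index is reduced by any unmet requirement. All numbers are illustrative; the zoning, NDVI input and production functions of the thesis are omitted.

```python
# Minimal sketch of a dekadal water balance index (WRSI-style); values are invented.
def water_balance_index(rfe, pet, kc, soil_capacity_mm=100.0):
    soil = soil_capacity_mm
    total_req, total_deficit = 0.0, 0.0
    for rain, pet_d, kc_d in zip(rfe, pet, kc):
        requirement = kc_d * pet_d
        soil = min(soil + rain - requirement, soil_capacity_mm)
        total_req += requirement
        if soil < 0.0:
            total_deficit += -soil      # unmet crop water requirement this dekad
            soil = 0.0
    return 100.0 * (1.0 - total_deficit / total_req)   # 100 = no water stress

# One growing season of dekadal values (mm): satellite rainfall, ground PET, crop coefficient
rfe = [30, 45, 10, 5, 60, 40, 20, 0, 15, 35]
pet = [40, 42, 45, 44, 43, 41, 40, 42, 44, 45]
kc  = [0.3, 0.3, 0.5, 0.8, 1.1, 1.2, 1.2, 1.0, 0.7, 0.5]
print("water requirement satisfaction index:", round(water_balance_index(rfe, pet, kc), 1))
```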
APA, Harvard, Vancouver, ISO, and other styles
16

Schirber, Sebastian, Daniel Klocke, Robert Pincus, Johannes Quaas, and Jeffrey L. Anderson. "Parameter estimation using data assimilation in an atmospheric general circulation model: from a perfect toward the real world." American Geophysical Union (AGU), 2013. https://ul.qucosa.de/id/qucosa%3A13463.

Full text
Abstract:
This study explores the viability of parameter estimation in the comprehensive general circulation model ECHAM6 using ensemble Kalman filter data assimilation techniques. Four closure parameters of the cumulus-convection scheme are estimated using increasingly less idealized scenarios ranging from perfect-model experiments to the assimilation of conventional observations. Updated parameter values from experiments with real observations are used to assess the error of the model state on short 6 h forecasts and on climatological timescales. All parameters converge to their default values in single parameter perfect-model experiments. Estimating parameters simultaneously has a neutral effect on the success of the parameter estimation, but applying an imperfect model deteriorates the assimilation performance. With real observations, single parameter estimation generates the default parameter value in one case, converges to different parameter values in two cases, and diverges in the fourth case. The implementation of the two converging parameters influences the model state: Although the estimated parameter values lead to an overall error reduction on short timescales, the error of the model state increases on climatological timescales.
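A minimal sketch of ensemble-based parameter estimation by state augmentation, with a toy scalar model standing in for ECHAM6 and an invented "true" parameter; it shows only the Kalman-gain update that pulls the parameter ensemble toward values consistent with the observations.

```python
# Minimal sketch of ensemble Kalman filter parameter estimation on a toy model:
# each member carries a closure parameter, corrected through its sampled covariance
# with the observed variable. Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_ens, true_param, obs_err = 50, 2.0, 0.1

params = rng.normal(1.0, 0.5, n_ens)          # prior parameter ensemble (default value 1.0)
for step in range(20):
    forcing = rng.uniform(0.5, 1.5)
    state = params * forcing                  # toy forecast: observable depends on the parameter
    obs = true_param * forcing + rng.normal(0, obs_err)
    cov_py = np.cov(params, state)[0, 1]      # parameter-observable covariance from the ensemble
    var_yy = np.var(state, ddof=1) + obs_err ** 2
    gain = cov_py / var_yy
    perturbed_obs = obs + rng.normal(0, obs_err, n_ens)
    params = params + gain * (perturbed_obs - state)

print("estimated parameter:", params.mean().round(3), "+/-", params.std().round(3))
```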
APA, Harvard, Vancouver, ISO, and other styles
17

Robinson-Mallett, Christopher. "Modellbasierte Modulprüfung für die Entwicklung technischer, softwareintensiver Systeme mit Real-Time Object-Oriented Modeling." PhD thesis, Universität Potsdam, 2005. http://opus.kobv.de/ubp/volltexte/2005/604/.

Full text
Abstract:
As a consequence of the increasing complexity of technical software systems, the demand for highly productive methods and tools is increasing even in the field of safety-critical systems. In particular, object-oriented and model-based approaches to software development provide excellent abilities to develop large and highly complex systems. Therefore, it can be expected that in the near future these methods will find application even in the safety-critical area. The Unified Modeling Language Real-Time (UML-RT) is a software development method for technical systems promoted by the Object Management Group (OMG). For the practical application of this method in the field of technical and safety-critical systems it has to provide certain technical qualities, e.g. applicability of temporal analyses. Furthermore, it needs to be integrated into the existing quality assurance process. An important aspect of the integration of UML-RT into a quality-oriented process model, e.g. the V-Model, is the availability of sophisticated concepts and methods for systematic unit testing.

Unit testing is the first quality assurance phase after implementation; it reveals faults and demonstrates the quality of each independently testable software component. During this phase the systematic execution of test cases is the most important quality assurance task. Despite the fact that many sophisticated, commercial methods and tools for model-based software development are available today, no convincing solutions exist for systematic model-based unit testing.

The use of executable models and automatic code generation are important concepts of model-based software development, which enable the constructive reduction of faults through automation of error-prone tasks. Consequently, these concepts should be transferred into the testing phases by a model-based quality assurance approach. Therefore, a major requirement of a model-based unit-testing method is a high degree of automation. In the best case, this should result in fully automatic test-case generation.

Model checking has already proven to be an efficient and flexible method for the automated generation of test cases from specifications in the form of finite state machines. The model checking approach was developed for the verification of communication protocols and has been applied successfully to a wide range of problems in the field of technical software modelling. The application of model checking demands a formal, state-based representation of the system. Therefore, the use of model checking for the generation of test cases is a beneficial approach to improving quality in model-based software development with executable, state-based models.

Although, in its current state, the specification of UML-RT provides little information on the semantics of the formalism that has to be used to specify a component's behaviour, it can be assumed that it will be compatible with Real-Time Object-Oriented Modeling. Therefore, all methods and results presented in this dissertation are transferable to UML-RT.

For these reasons, this dissertation aims at improving analytical quality assurance in a model-based software development process. To achieve this goal, a new model-based approach to automated unit testing based on state-based behavioural models and CTL model checking is presented. The presented method for test-case generation can be automated to avoid faults caused by error-prone human activities. Furthermore, it can be integrated into the technical concepts of the Model Driven Architecture and ROOM or UML-RT, respectively, and into a quality-oriented process model such as the V-Model.
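A minimal sketch of the test-generation idea, assuming a toy state machine and a plain reachability search in place of a CTL model checker: the witness path for a coverage goal is read back as a test case (an event sequence). The machine below is invented and is not ROOM/UML-RT notation.

```python
# Minimal sketch: state the test goal as reachability, search the state machine for a
# witness path, and use that path as a test case (inputs plus expected target state).
from collections import deque

# state -> {input_event: next_state}
machine = {
    "Idle":    {"start": "Running"},
    "Running": {"pause": "Paused", "stop": "Idle", "fail": "Error"},
    "Paused":  {"resume": "Running", "stop": "Idle"},
    "Error":   {"reset": "Idle"},
}

def witness_path(initial, goal):
    """Breadth-first search for a shortest event sequence reaching the goal state."""
    queue = deque([(initial, [])])
    seen = {initial}
    while queue:
        state, events = queue.popleft()
        if state == goal:
            return events
        for event, nxt in machine[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, events + [event]))
    return None

# The coverage goal "the Error state is reachable" becomes the test case below
print("test case (event sequence):", witness_path("Idle", "Error"))
```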
APA, Harvard, Vancouver, ISO, and other styles
18

Quaranta, Giacomo. "Efficient simulation tools for real-time monitoring and control using model order reduction and data-driven techniques." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/667474.

Full text
Abstract:
Numerical simulation, the use of computers to run a program which implements a mathematical model of a physical system, is an important part of today's technological world. It is required in many scientific and engineering fields to study the behaviour of systems whose mathematical models are too complex to provide analytical solutions, and it makes the virtual evaluation of system responses possible (virtual twins). This drastically reduces the number of experimental tests needed for accurate designs of the real system that the numerical model represents. However, these virtual twins, based on classical methods which make use of a rich representation of the system (e.g. the finite element method), rarely allow real-time feedback, even when considering high performance computing operating on powerful platforms. In these circumstances, the real-time performance required in some applications is compromised. Indeed, the virtual twins are static: they are used in the design of complex systems and their components, but they are not expected to accommodate or assimilate data so as to define dynamic data-driven application systems. Moreover, significant deviations between the observed response and the one predicted by the model are usually noticed due to inaccuracy in the employed models, in the determination of the model parameters or in their time evolution. In this thesis we propose different methods to overcome these limitations in order to perform real-time monitoring and control. In the first part, Model Order Reduction (MOR) techniques are used to accommodate real-time constraints; they compute a good approximation of the solution by simplifying the solution procedure instead of the model. The accuracy of the predicted solution is not compromised and efficient simulations can be performed (digital twins). In the second part, data-driven modelling is employed to fill the gap between the parametric solution, computed using non-intrusive MOR techniques, and the measured fields, in order to make dynamic data-driven application systems (DDDAS) possible (hybrid twins).
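A minimal sketch of the offline/online split behind many model order reduction approaches, assuming a toy parametric field in place of a real solver: snapshots are compressed into a POD basis by SVD and a new solution is approximated in that basis.

```python
# Minimal sketch of snapshot-based model order reduction (POD via SVD) on a toy
# parametric field; the "full solver" is a stand-in, not a problem from the thesis.
import numpy as np

x = np.linspace(0.0, 1.0, 200)
def full_solve(mu):                       # stand-in for an expensive simulation
    return np.sin(np.pi * x * mu) * np.exp(-mu * x)

# Offline: snapshots over the parameter range and a truncated SVD basis
snapshots = np.column_stack([full_solve(mu) for mu in np.linspace(0.5, 3.0, 25)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
rank = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999)) + 1
basis = U[:, :rank]                       # a handful of modes instead of 200 unknowns
print("retained modes:", rank)

# Online: a cheap approximation of an unseen case (here simply a projection, for brevity)
u_new = full_solve(1.73)
u_rb = basis @ (basis.T @ u_new)
print("relative projection error:", np.linalg.norm(u_new - u_rb) / np.linalg.norm(u_new))
```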
APA, Harvard, Vancouver, ISO, and other styles
19

Storoshchuk, Orest Lev Poehlman William Frederick Skipper. "Model based synchronization of monitoring and control systems /." *McMaster only, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
20

Singh, Rahul. "A model to integrate Data Mining and On-line Analytical Processing: with application to Real Time Process Control." VCU Scholars Compass, 1999. https://scholarscompass.vcu.edu/etd/5521.

Full text
Abstract:
Since computers came into widespread use in business and industry, a great deal of research has been done on the design of computer systems to support the decision-making task. Decision support systems support decision makers in solving unstructured decision problems by providing tools that help them understand and analyze decision problems and make better decisions. Artificial intelligence is concerned with creating computer systems that perform tasks that would require intelligence if performed by humans. Much research has focused on using artificial intelligence to develop decision support systems that provide intelligent decision support. Knowledge discovery from databases centers around data mining algorithms that discover novel and potentially useful information contained in the large volumes of data that are ubiquitous in contemporary business organizations. Data mining deals with large volumes of data and tries to develop multiple views that the decision maker can use to study this multi-dimensional data. On-line analytical processing (OLAP) provides a mechanism that supports multiple views of multi-dimensional data to facilitate efficient analysis. Together, these two techniques can provide a powerful mechanism for the analysis of large quantities of data to aid the task of making decisions. This research develops a model for the real-time process control of a large manufacturing process using an integrated approach of data mining and on-line analytical processing. Data mining is used to develop models of the process based on the large volumes of process data. The purpose is to provide prediction and explanatory capability based on the models of the data and to allow for efficient generation of multiple views of the data so as to support analysis on multiple levels. Artificial neural networks provide a mechanism for predicting the behavior of nonlinear systems, while decision trees provide a mechanism for explaining the states of systems given a set of inputs and outputs. OLAP is used to generate multidimensional views of the data and support analysis based on models developed by data mining. The architecture and implementation of the model for real-time process control based on the integration of data mining and OLAP are presented in detail. The model is validated by comparing results obtained from the integrated system, an OLAP-only system, and expert opinion. The system is validated using actual process data, and the results of this verification are presented. A discussion of the validation results, some limitations of this research, and possible future research directions is provided.
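A minimal sketch of the two model roles described above, using scikit-learn on synthetic process data: a neural network for prediction and a shallow decision tree for a readable explanation of process states. It is not the integrated OLAP architecture of the thesis, and the variable names are invented.

```python
# Minimal sketch: neural network for prediction, decision tree for explanation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(800, 3))                            # e.g. temperature, speed, pressure (scaled)
y = 5 * X[:, 0] - 3 * X[:, 1] ** 2 + rng.normal(0, 0.2, 800)    # process output

predictor = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
print("predicted output for a new setting:", predictor.predict([[0.7, 0.2, 0.5]])[0])

acceptable = (y > 2.0).astype(int)                              # label states for the explanatory model
explainer = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, acceptable)
print(export_text(explainer, feature_names=["temperature", "speed", "pressure"]))
```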
APA, Harvard, Vancouver, ISO, and other styles
21

Ferreira, E. (Eija). "Model selection in time series machine learning applications." Doctoral thesis, Oulun yliopisto, 2015. http://urn.fi/urn:isbn:9789526209012.

Full text
Abstract:
Model selection is a necessary step for any practical modeling task. Since the true model behind a real-world process cannot be known, the goal of model selection is to find the best approximation among a set of candidate models. In this thesis, we discuss model selection in the context of time series machine learning applications. We cover four steps of the commonly followed machine learning process: data preparation, algorithm choice, feature selection and validation. We consider how the characteristics and the amount of data available should guide the selection of algorithms to be used, and how the data set at hand should be divided for model training, selection and validation to optimize the generalizability and future performance of the model. We also consider the special restrictions and requirements that need to be taken into account when applying regular machine learning algorithms to time series data. We especially aim to bring forth problems relating to model over-fitting and over-selection that might occur due to careless or uninformed application of model selection methods. We present our results in three different time series machine learning application areas: resistance spot welding, exercise energy expenditure estimation and cognitive load modeling. Based on our findings in these studies, we draw general guidelines on which points to consider when starting to solve a new machine learning problem, from the point of view of data characteristics, amount of data, computational resources and the possible time series nature of the problem. We also discuss how the practical aspects and requirements set by the environment where the final model will be implemented affect the choice of algorithms to use.
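A minimal sketch of the data-division point made above for time series problems: a rolling-origin split keeps every validation block strictly after the data used for training, unlike a random split.

```python
# Minimal sketch of rolling-origin (time-ordered) train/validation splitting.
def rolling_origin_splits(n_samples, initial_train, horizon):
    """Yield (train_indices, validation_indices) pairs that respect time order."""
    origin = initial_train
    while origin + horizon <= n_samples:
        yield list(range(origin)), list(range(origin, origin + horizon))
        origin += horizon

for train_idx, val_idx in rolling_origin_splits(n_samples=20, initial_train=8, horizon=4):
    print(f"train on 0..{train_idx[-1]:2d}  validate on {val_idx[0]}..{val_idx[-1]}")
```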
APA, Harvard, Vancouver, ISO, and other styles
22

Garbarino, Davide. "Acknowledging the structured nature of real-world data with graphs embeddings and probabilistic inference methods." Doctoral thesis, Università degli studi di Genova, 2022. http://hdl.handle.net/11567/1092453.

Full text
Abstract:
In the artificial intelligence community there is a growing consensus that real world data is naturally represented as graphs, because they can easily incorporate complexity at several levels, e.g. hierarchies or time dependencies. In this context, this thesis studies two main branches for structured data. In the first part we explore how state-of-the-art machine learning methods can be extended to graph modeled data, provided that one is able to represent graphs in vector spaces. Such extensions can be applied to analyze several kinds of real-world data and tackle different problems. Here we study the following problems: a) understanding the relational nature and evolution of websites which belong to different categories (e-commerce, academic (p.a.) and encyclopedic (forum)); b) modeling tennis players' scores based on different game surfaces and tournaments in order to predict match results; c) analyzing preterm infants' motion patterns to characterize possible neurodegenerative disorders; and d) building an academic collaboration recommender system able to model academic groups and individual research interests while suggesting possible researchers to connect with, topics of interest and representative publications to external users. In the second part we focus on graph inference methods from data which present two main challenges: missing data and non-stationary time dependency. In particular, we study the problem of inferring Gaussian Graphical Models in the following settings: a) inference of Gaussian Graphical Models when data are missing or latent in the context of multiclass or temporal network inference, and b) inference of time-varying Gaussian Graphical Models when data are multivariate and non-stationary. Such methods have a natural application in the composition of an optimized stock market portfolio. Overall this work sheds light on how to acknowledge the intrinsic structure of data with the aim of building statistical models that are able to capture the actual complexity of the real world.
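A minimal sketch of sparse Gaussian Graphical Model inference on complete, stationary synthetic data using scikit-learn's graphical lasso; the missing-data and time-varying settings studied in the thesis go beyond this.

```python
# Minimal sketch: graphical lasso recovers the conditional-independence structure
# of synthetic data with one built-in dependency.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(4)
n = 500
a = rng.normal(size=n)
b = 0.8 * a + 0.3 * rng.normal(size=n)     # b depends on a
c = rng.normal(size=n)                     # c independent of both
X = np.column_stack([a, b, c])

model = GraphicalLasso(alpha=0.05).fit(X)
precision = np.round(model.precision_, 2)
print(precision)                            # near-zero off-diagonal entries = missing edges
edges = [(i, j) for i in range(3) for j in range(i + 1, 3) if abs(precision[i, j]) > 0.05]
print("inferred edges:", edges)
```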
APA, Harvard, Vancouver, ISO, and other styles
23

Breithaupt, Krista J. "A comparison of the sample invariance of item statistics from the classical test model, item response model, and structural equation model, a case study of real response data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ58266.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Ruchirat, Piyapathu. "State detection mechanism, productivity bias and a medium open economy real business cycle model for Thailand (1993-2005 data)." Thesis, Cardiff University, 2005. http://orca.cf.ac.uk/55568/.

Full text
Abstract:
The first section is a study of the Thai Baht using an advanced regime-identification tool due to Hamilton (1989), applied to 1998-2005 monthly data and nesting the special case of a monetary model with emphasis on the real interest rate differential. Three post-crisis scenarios for the Thai Baht were identified by the nonlinear detection mechanism, in which the unobservable states are assumed to follow a Markov chain, so that past history does not matter. The states identified are described as panic, calm, and favourable markets for the currency. Furthermore, using the Markov-switching software developed by Krolzig (1998), it is possible to provide weak evidence that the nominal exchange rate moved to restore equilibrium in the fundamentals but not vice versa. The second section tested the productivity bias on Thai quarterly data from 1993-2005 using the Johansen cointegration method and found no evidence of it. The final section specifies the whole Thai economy and applies exogenous shocks to examine their effects on the real exchange rate. In particular, a surge in productivity in the fully specified economy causes the real exchange rate to appreciate, confirming the evidence for the productivity bias for the Thai Baht.
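A minimal sketch of the Hamilton (1989) filtering recursion behind such regime-identification tools, with an invented three-regime specification and made-up exchange-rate changes; it is not the Krolzig software used in the thesis.

```python
# Minimal sketch of a Hamilton filter: update regime probabilities as observations
# arrive, given regime-dependent densities and a Markov transition matrix.
import numpy as np
from scipy.stats import norm

P = np.array([[0.95, 0.04, 0.01],    # transition matrix: panic, calm, favourable (invented)
              [0.03, 0.94, 0.03],
              [0.01, 0.04, 0.95]])
means, sigmas = np.array([-2.0, 0.0, 1.0]), np.array([3.0, 0.5, 1.0])

def hamilton_filter(y):
    prob = np.full(3, 1 / 3)                       # initial regime probabilities
    path = []
    for obs in y:
        predicted = P.T @ prob                     # regime probabilities before seeing obs
        likelihood = norm.pdf(obs, means, sigmas)
        prob = predicted * likelihood
        prob /= prob.sum()
        path.append(prob.copy())
    return np.array(path)

returns = [0.1, -0.3, 0.2, -5.0, -4.2, 0.4, 1.1, 0.9]   # monthly % changes (made up)
filtered = hamilton_filter(returns)
print(np.round(filtered, 2))   # columns: P(panic), P(calm), P(favourable) at each step
```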
APA, Harvard, Vancouver, ISO, and other styles
25

Yoshida, Jiro 1970. "Effects of uncertainty on the investment decision : an examination of the option-based investment model using Japanese real estate data." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/70726.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 1999.
Includes bibliographical references (leaves 49-52).
This paper examines the validity of the option-based investment model as opposed to the neoclassical investment model in the decision-making of commercial real estate development, using aggregate real estate data from Japan. I particularly focus on the effect of uncertainty because it is the central difference between the two models. I specify a structural model in order to incorporate the interactions between supply and demand in the real estate asset market. In order to conduct detailed empirical tests for a long period of time, I set three data series. The Long Series uses quarterly data of 25 years and Short Series 1 and Short Series 2 use monthly data of about 15 years. I find strong evidence that supports the option-based investment model. Especially in the supply equation, total uncertainty has significant effects on the investment decision. A lag structure is found in the effect of total uncertainty. The parameters for other variables also generally favor the option-based model. In the demand equation, too, the results strongly support the option-based investment model. It should be concluded from these results that various kinds of real options must be incorporated in investment and economic models.
by Jiro Yoshida.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
26

Kasinathan, Gokulnath. "Data Transformation Trajectories in Embedded Systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205276.

Full text
Abstract:
Mobile phone tracking is the ascertaining of the position or location of a mobile phone as it moves from one place to another. Location-based services solutions include mobile positioning systems that can be used for a wide array of consumer-demand services such as search, mapping, navigation, road transport traffic management and emergency-call positioning. The Mobile Positioning System (MPS) supports complementary positioning methods for 2G, 3G and 4G/LTE (Long Term Evolution) networks. A mobile phone is known as a UE (User Equipment) in LTE. A prototype method for live trajectory estimation of massive numbers of UEs in an LTE network has been proposed in this thesis work. RSRP (Reference Signal Received Power) values and TA (Timing Advance) values are part of the LTE events for a UE. These specific LTE events can be streamed to a system from the eNodeB of LTE in real time by activating measurements on UEs in the network. AoA (Angle of Arrival) and TA values are used to estimate the UE position. The AoA calculation is performed using RSRP values. The calculated UE positions are filtered using a particle filter (PF) to estimate the trajectory. To obtain live trajectory estimation for massive numbers of UEs, the LTE event streamer is modelled to produce several task units with event data for the UEs. The task-level modelled data structures are scheduled across an Arm Cortex-A15-based MPCore with multiple threads. Finally, for massive UE live trajectory estimation, the IMSI (International Mobile Subscriber Identity) is used to maintain the hidden Markov requirements of the particle filter functionality while maintaining load balance across the four Arm A15 cores. This is demonstrated by serial and parallel performance engineering. Future work is proposed on decentralized task-level scheduling with a hash function for the IMSI, extension to more cores, and a concentric-circles method for AoA accuracy.
Mobiltelefoners positionering är välfungerande för positionslokalisering av mobiltelefoner när de rör sig från en plats till en annan. Lokaliseringstjänsterna inkluderar mobil positionering system som kan användas till en mängd olika kundbehovs tjänster som sökning av position, position i kartor, navigering, vägtransporters trafik managering och nödsituationssamtal med positionering. Mobil positions system (MPS) stödjer komplementär positions metoder för 2G, 3G och 4G/LTE (Long Term Evolution) nätverk. Mobiltelefoner är populärt känd som UE (User Equipment) inom LTE. En prototypmetod med verkliga rörelsers estimering för massiv UE i LTE nätverk har blivit föreslagen för detta examens arbete. RSRP (Reference Signal Received Power) värden och TA (Timing Advance) värden är del av LTE händelser för UE. Dessa specifika LTE event kan strömmas till ett system från eNodeB del av LTE, i realtid genom aktivering av mätningar på UEar i nätverk. AoA (Angel of Arrival) och TA värden är använt för att beräkna UEs position. AoA beräkningar är genomförda genom användandet av RSRP värden. Den kalkylerade UE positionen är filtrerad genom användande av Particle Filter (PF) för att estimera rörelsen. För att identifiera verkliga rörelser, beräkningar för massiva UEs, LTE event streamer är modulerad att producera flera uppgifts enheter med event data från massiva UEar. De tasks modulerade data strukturerna är planerade över Arm Cortex A15 baserade MPcore, med multipla trådar. Slutligen, med massiva UE verkliga rörelser, beräkningar med IMSI(International mobile subscriber identity) är använt av den Hidden Markov kraven i Particle Filter’s funktionalitet medans kravet att underhålla last balansen för 4 Arm A15 kärnor. Detta är utfört genom seriell och parallell prestanda teknik. Framtida arbeten för decentraliserade task nivå skedulering med hash funktion för IMSI med utökning av kärnor och Concentric circles metod för AoA noggrannhet.
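The trajectory-estimation step described in this abstract rests on a particle filter applied to noisy position fixes. A generic bootstrap particle filter for that task can be sketched as below; the motion and measurement models and all noise levels are invented placeholders and do not reproduce the AoA/TA processing of the thesis.

    # Minimal bootstrap particle filter for smoothing noisy 2D position fixes,
    # roughly the role the particle filter plays after AoA/TA positioning.
    import numpy as np

    rng = np.random.default_rng(2)
    n_particles = 1000
    meas_std = 25.0      # metres, assumed noise of the raw position fixes
    proc_std = 5.0       # metres, assumed per-step motion noise

    # Noisy position fixes of a UE moving along a straight line (simulated).
    true_path = np.stack([np.linspace(0, 500, 60), np.linspace(0, 300, 60)], axis=1)
    fixes = true_path + rng.normal(0, meas_std, true_path.shape)

    particles = rng.normal(fixes[0], meas_std, (n_particles, 2))
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []

    for z in fixes:
        particles += rng.normal(0, proc_std, particles.shape)      # predict
        d2 = np.sum((particles - z) ** 2, axis=1)
        weights = np.exp(-0.5 * d2 / meas_std ** 2)                 # update
        weights /= weights.sum()
        estimates.append(weights @ particles)                       # weighted mean estimate
        idx = rng.choice(n_particles, n_particles, p=weights)       # resample
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

    print("final position estimate:", estimates[-1])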
APA, Harvard, Vancouver, ISO, and other styles
27

Rodittis, Katherine, and Patrick Mattingly. "USING MICROSOFT’S COMPONENT OBJECT MODEL (COM) TO IMPROVE REAL-TIME DISPLAY DEVELOPMENT FOR THE ADVANCED DATA ACQUISITION AND PROCESSING SYSTEM (ADAPS)." International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/606801.

Full text
Abstract:
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
Microsoft’s Component Object Model (COM) allows us to rapidly develop display and analysis features for the Advanced Data Acquisition and Processing System (ADAPS).
APA, Harvard, Vancouver, ISO, and other styles
28

Zámečníková, Eva. "FORMÁLNÍ MODEL ROZHODOVACÍHO PROCESU PRO ZPRACOVÁNÍ VYSOKOFREKVENČNÍCH DAT." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-412586.

Full text
Abstract:
This doctoral thesis deals with the processing of high-frequency time series. It focuses on the design of algorithms and methods to support the prediction of such data. The result is a model for supporting the control of the decision-making process, implemented in a platform for complex data processing. The model proposes a way of formalizing the set of business rules that describe the decision-making process. The proposed model must satisfy requirements for robustness, extensibility and real-time processing, as well as the requirements of econometrics. The thesis summarizes current knowledge and methodologies for processing high-frequency financial data, whose most common source is stock exchanges. The first part of the thesis describes the basic principles and approaches currently used for processing high-frequency time data. The next part describes business rules, the decision-making process and a complex platform for processing high-frequency data, as well as the data processing itself using the chosen platform. Emphasis is placed on the selection and adjustment of the set of rules that govern the decision-making process. The proposed model describes the rule set by means of a matrix grammar. This grammar belongs to the field of grammars with controlled rewriting and, through the defined matrices, makes it possible to influence the data processing.
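The abstract above states that the rule set is described by a matrix grammar, i.e. a grammar with controlled rewriting in which each matrix of productions is applied as a unit. A toy interpreter, with a grammar invented purely to illustrate the mechanism (it is not taken from the thesis), might look as follows.

    # Toy matrix grammar: each matrix is a sequence of rewriting rules that must
    # all be applied together, in order. Generates strings of the form a^n b^n.
    matrices = [
        [("S", "AB")],                   # m1: S -> AB
        [("A", "aA"), ("B", "bB")],      # m2: A -> aA and B -> bB, applied together
        [("A", "a"), ("B", "b")],        # m3: A -> a  and B -> b,  applied together
    ]

    def apply_matrix(form, matrix):
        """Apply every rule of the matrix once (left-most); return None if any rule cannot fire."""
        for lhs, rhs in matrix:
            if lhs not in form:
                return None
            form = form.replace(lhs, rhs, 1)
        return form

    form = "S"
    for m in [matrices[0], matrices[1], matrices[1], matrices[2]]:
        form = apply_matrix(form, m)
    print(form)   # -> "aaabbb": joint application keeps the a's and b's balanced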
APA, Harvard, Vancouver, ISO, and other styles
29

Brandão, Jesse Wayde. "Analysis of the truncated response model for fixed priority on HMPSoCs." Master's thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/14836.

Full text
Abstract:
Mestrado em Engenharia Electrónica e Telecomunicações
With the ever more ubiquitous nature of embedded systems and their increasingly demanding applications, such as audio/video decoding and networking, the popularity of MultiProcessor Systems-on-Chip (MPSoCs) continues to increase. As such, their modern uses often involve the execution of multiple applications on the same system. Embedded systems often run applications that are faced with timing restrictions, some of which are deadlines, throughput and latency. The resources available to the applications running on these systems are finite and, therefore, applications need to share the available resources while guaranteeing that their timing requirements are met. These guarantees are established via schedulers, which may employ some of the many techniques devised for the arbitration of resource usage among applications. The main technique considered in this dissertation is the Preemptive Fixed Priority (PFP) scheduling technique. Also, there is a growing trend in the usage of the data flow computational model for the analysis of applications on MultiProcessor Systems-on-Chip (MPSoCs). Data flow graphs are functionally intuitive, and have interesting and useful analytical properties. This dissertation intends to further previous work done in temporal analysis of PFP scheduling of Real-Time applications on MPSoCs by implementing the truncated response model for PFP scheduling and analyzing its results. This response model promises tighter bounds for the worst-case response times of the actors in a low-priority data flow graph by considering the worst-case response times over consecutive firings of an actor rather than just a single firing. As a follow-up to this work, we also introduce in this dissertation a burst analysis technique for actors in a data flow graph.
Com a natureza cada vez mais ubíqua de sistemas embutidos e as suas aplicações cada vez mais exigentes, como a decodificação de áudio/video e rede, a popularidade de MultiProcessor Systems-on-Chip (MPSoCs) continua a aumentar. Como tal, os seus usos modernos muitas vezes envolvem a execução de várias aplicações no mesmo sistema. Sistemas embutidos frequentemente correm aplicações que são confrontadas com restrições temporais, algumas das quais são prazos, taxa de transferência e latência. Os recursos disponíveis para as aplicações que estão a correr nestes sistemas são finitos e, portanto, as aplicações necessitam de partilhar os recursos disponíveis, garantindo simultaneamente que os seus requisitos temporais sejam satisfeitos. Estas garantias são estabelecidas por meio de escalonadores que podem empregar algumas das muitas técnicas elaboradas para a arbitragem de uso de recursos entre as aplicações. A técnica principal considerada nesta dissertação é Preemptive Fixed Priority (PFP). Além disso existe uma tendência crescente no uso do modelo computacional data flow para a análise de aplicações a correr em MPSoCs. Grafos data flow são funcionalmente intuitivos e possuem propriedades interessantes e úteis. Esta dissertação pretende avançar trabalho prévio na área de escalonamento PFP de aplicações ao implementar o modelo de resposta truncado para escalonamento PFP e analisar os seus resultados. Este modelo de resposta promete limites mais estritos para os tempos de resposta de pior caso para atores num grafo de baixa prioridade ao considerar os tempos de resposta de pior caso ao longo de várias execuções consecutivas de um actor em vez de uma só. Como seguimento a este trabalho, também introduzimos nesta dissertação uma técnica para a análise de execuções em rajada de atores num grafo data flow.
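As background to the truncated response model discussed in this abstract: the classical worst-case response-time analysis for preemptive fixed-priority scheduling iterates R_i = C_i plus the interference ceil(R_i / T_j) * C_j summed over all higher-priority tasks j. The sketch below implements only this standard single-firing recurrence on made-up task parameters; the thesis' truncated model, which analyses consecutive firings jointly, is not reproduced here.

    # Classical response-time recurrence for preemptive fixed-priority scheduling.
    from math import ceil

    # (worst-case execution time C, period T), ordered from highest to lowest priority
    tasks = [(1.0, 5.0), (2.0, 12.0), (4.0, 30.0)]

    def response_time(i, tasks, limit=1000.0):
        C, _ = tasks[i]
        R = C
        while True:
            interference = sum(ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
            R_next = C + interference
            if R_next == R:
                return R             # fixed point reached
            if R_next > limit:
                return float("inf")  # no convergence within the bound
            R = R_next

    for i, (C, T) in enumerate(tasks):
        R = response_time(i, tasks)
        print(f"task {i}: worst-case response time R = {R}, deadline met: {R <= T}")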
APA, Harvard, Vancouver, ISO, and other styles
30

Gonella, Philippe. "Business Process Management and Process Mining within a Real Business Environment: An Empirical Analysis of Event Logs Data in a Consulting Project." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/11799/.

Full text
Abstract:
This work explores organizations' attitude towards the business processes that sustain them: from the near absence of structure, to the functional organization, up to the advent of Business Process Reengineering and Business Process Management, born to overcome the limits and problems of the previous model. Within the BPM life cycle, the process mining methodology finds its place; it enables a level of process analysis that starts from event data logs, i.e. the event records referring to all the activities supported by a company information system. Process mining can be seen as a natural bridge connecting the process-based (but not data-driven) management disciplines and the new developments of business intelligence, capable of handling and manipulating the enormous amount of data available to companies (but which are not process-driven). The thesis describes the requirements and technologies that enable the use of the discipline, as well as the three techniques it enables: process discovery, conformance checking and process enhancement. Process mining was used as the main tool in a consulting project by HSPI S.p.A. on behalf of an important Italian client, a provider of IT platforms and solutions. The project I took part in, described in this work, aims to support the organization in its plan for improving internal performance, and it made it possible to verify the applicability and limits of process mining techniques. Finally, the appendix contains a paper I wrote that collects the applications of the discipline in real business contexts, drawing data and information from working papers, business cases and direct channels. For its validity and completeness, this document was published on the website of the IEEE Task Force on Process Mining.
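As a small illustration of the process-discovery step mentioned in this abstract, the directly-follows relation can be extracted from an event log with a few lines of pandas. The log below is invented, and the consulting project relied on dedicated process-mining tooling rather than this sketch; real discovery and conformance-checking algorithms build on this relation.

    # Build the directly-follows relation from an event log (case id, activity, timestamp).
    from collections import Counter
    import pandas as pd

    log = pd.DataFrame({
        "case":      ["c1", "c1", "c1", "c2", "c2", "c2", "c2"],
        "activity":  ["register", "check", "approve", "register", "check", "rework", "approve"],
        "timestamp": pd.to_datetime([
            "2016-01-04 09:00", "2016-01-04 10:30", "2016-01-05 08:15",
            "2016-01-04 11:00", "2016-01-04 12:45", "2016-01-05 09:30", "2016-01-06 14:00",
        ]),
    })

    dfg = Counter()
    for _, trace in log.sort_values("timestamp").groupby("case"):
        acts = trace["activity"].tolist()
        dfg.update(zip(acts, acts[1:]))   # consecutive activity pairs within each case

    for (a, b), freq in dfg.items():
        print(f"{a} -> {b}: {freq}")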
APA, Harvard, Vancouver, ISO, and other styles
31

Cusinato, Rafael Tiecher. "Ensaios sobre previsão de inflação e análise de dados em tempo real no Brasil." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2009. http://hdl.handle.net/10183/22654.

Full text
Abstract:
Esta tese apresenta três ensaios sobre previsão de inflação e análise de dados em tempo real no Brasil. Utilizando uma curva de Phillips, o primeiro ensaio propõe um “modelo evolucionário” para prever inflação no Brasil. O modelo evolucionário consiste em uma combinação de um modelo não-linear (que é formado pela combinação de três redes neurais artificiais – RNAs) e de um modelo linear (que também é a referência para propósitos de comparação). Alguns parâmetros do modelo evolucionário, incluindo os pesos das combinações, evoluem ao longo do tempo segundo ajustes definidos por três algoritmos que avaliam os erros fora-da-amostra. As RNAs foram estimadas através de uma abordagem híbrida baseada em um algoritmo genético (AG) e em um algoritmo simplex de Nelder-Mead. Em um experimento de previsão fora-da-amostra para 3, 6, 9 e 12 passos à frente, o desempenho do modelo evolucionário foi comparado ao do modelo linear de referência, segundo os critérios de raiz do erro quadrático médio (REQM) e de erro absoluto médio (EAM). O desempenho do modelo evolucionário foi superior ao desempenho do modelo linear para todos os passos de previsão analisados, segundo ambos os critérios. O segundo ensaio é motivado pela recente literatura sobre análise de dados em tempo real, que tem mostrado que diversas medidas de atividade econômica passam por importantes revisões de dados ao longo do tempo, implicando importantes limitações para o uso dessas medidas. Elaboramos um conjunto de dados de PIB em tempo real para o Brasil e avaliamos a extensão na qual as séries de crescimento do PIB e de hiato do produto são revisadas ao longo do tempo. Mostramos que as revisões de crescimento do PIB (trimestre/trimestre anterior) são economicamente relevantes, embora as revisões de crescimento do PIB percam parte da importância à medida que o período de agregação aumenta (por exemplo, crescimento em quatro trimestres). Para analisar as revisões do hiato do produto, utilizamos quatro métodos de extração de tendência: o filtro de Hodrick-Prescott, a tendência linear, a tendência quadrática, e o modelo de Harvey-Clark de componentes não-observáveis. Todos os métodos apresentaram revisões de magnitudes economicamente relevantes. Em geral, tanto a revisão de dados do PIB como a baixa precisão das estimativas de final-de-amostra da tendência do produto mostraram-se fontes relevantes das revisões de hiato do produto. O terceiro ensaio é também um estudo de dados em tempo real, mas que analisa os dados de produção industrial (PI) e as estimativas de hiato da produção industrial. Mostramos que as revisões de crescimento da PI (mês/mês anterior) e da média móvel trimestral são economicamente relevantes, embora as revisões de crescimento da PI tornem-se menos importantes à medida que o período de agregação aumenta (por exemplo, crescimento em doze meses). Para analisar as revisões do hiato da PI, utilizamos três métodos de extração de tendência: o filtro de Hodrick-Prescott, a tendência linear e a tendência quadrática. Todos os métodos apresentaram revisões de magnitudes economicamente relevantes. Em geral, tanto a revisão de dados da PI como a baixa precisão das estimativas de final-de-amostra da tendência da PI mostraram-se fontes relevantes das revisões de hiato da PI, embora os resultados sugiram certa predominância das revisões provenientes da baixa precisão de final-de-amostra.
This thesis presents three essays on inflation forecasting and real-time data analysis in Brazil. Using a Phillips curve, the first essay presents an "evolutionary model" to forecast Brazilian inflation. The evolutionary model consists of a combination of a non-linear model (formed by a combination of three artificial neural networks - ANNs) and a linear model (which is also the benchmark for comparison purposes). Some parameters of the evolutionary model, including the combination weights, evolve over time according to adjustments defined by three algorithms that evaluate the out-of-sample errors. The ANNs were estimated using a hybrid approach based on a genetic algorithm (GA) and on a Nelder-Mead simplex algorithm. In a 3-, 6-, 9- and 12-step-ahead out-of-sample forecasting experiment, the performance of the evolutionary model was compared to the performance of the benchmark linear model, according to the root mean squared error (RMSE) and mean absolute error (MAE) criteria. The evolutionary model performed better than the linear model for all forecasting steps that were analyzed, according to both criteria. The second essay is motivated by recent literature on real-time data analysis, which has shown that several measures of economic activity go through important data revisions over time, implying important limitations to the use of these measures. We developed a real-time GDP data set for the Brazilian economy and analyzed the extent to which GDP growth and output gap series are revised over time. We showed that revisions to GDP growth (quarter-on-quarter) are economically relevant, although the GDP growth revisions lose part of their importance as the aggregation period increases (for example, four-quarter growth). To analyze the output gap revisions, we applied four detrending methods: the Hodrick-Prescott filter, the linear trend, the quadratic trend, and the Harvey-Clark model of unobservable components. It was shown that all methods had revisions of economically relevant magnitude. In general, both GDP data revisions and the low accuracy of end-of-sample output trend estimates were relevant sources of output gap revisions. The third essay is also a study of real-time data, but focused on industrial production (IP) data and on industrial production gap estimates. We showed that revisions to IP growth (month-on-month) and to IP quarterly moving average growth are economically relevant, although the IP growth revisions become less important as the aggregation period increases (for example, twelve-month growth). To analyze the IP gap revisions, we applied three detrending methods: the Hodrick-Prescott filter, the linear trend, and the quadratic trend. It was shown that all methods had revisions of economically relevant magnitude. In general, both IP data revisions and the low accuracy of end-of-sample IP trend estimates were relevant sources of IP gap revisions, although the results suggest some prevalence of revisions originating from the low accuracy of end-of-sample estimates.
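One of the detrending methods used in the second and third essays, the Hodrick-Prescott filter, can be sketched with statsmodels as below. The quarterly series is simulated, and lamb=1600 is the conventional quarterly smoothing value; re-running the filter on expanding samples would illustrate the end-of-sample imprecision discussed above.

    # Hodrick-Prescott detrending of a simulated quarterly log-GDP series;
    # the cycle component is the output-gap estimate.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.filters.hp_filter import hpfilter

    rng = np.random.default_rng(3)
    quarters = pd.period_range("1991Q1", periods=100, freq="Q")
    log_gdp = pd.Series(4.0 + 0.008 * np.arange(100) + 0.02 * rng.standard_normal(100).cumsum(),
                        index=quarters)

    cycle, trend = hpfilter(log_gdp, lamb=1600)   # lamb=1600 is standard for quarterly data
    output_gap = 100 * cycle                      # approximately percent of trend
    print(output_gap.tail())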
APA, Harvard, Vancouver, ISO, and other styles
32

Pai, Yu-Jou. "Risks in Financial Markets." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1584003500272517.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Cattenoz, Mathieu. "MIMO Radar Processing Methods for Anticipating and Preventing Real World Imperfections." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112077/document.

Full text
Abstract:
Le concept du radar MIMO est prometteur en raison des nombreux avantages qu'il apporte par rapport aux architectures radars actuelles : flexibilité pour la formation de faisceau à l'émission - large illumination de la scène et résolution fine après traitement - et allègement de la complexité des systèmes, via la réduction du nombre d'antennes et la possibilité de transférer des fonctions de contrôle et d'étalonnage du système dans le domaine numérique. Cependant, le radar MIMO reste au stade du concept théorique, avec une prise en compte insuffisante des impacts du manque d'orthogonalité des formes d'onde et des défauts matériels.Ce travail de thèse, dans son ambition de contribuer à ouvrir la voie vers le radar MIMO opérationnel, consiste à anticiper et compenser les défauts du monde réel par des traitements numériques. La première partie traite de l'élaboration des formes d'onde MIMO. Nous montrons que les codes de phase sont optimaux en termes de résolution spatiale. Nous présentons également leurs limites en termes d'apparition de lobes secondaires en sortie de filtre adapté. La seconde partie consiste à accepter les défauts intrinsèques des formes d'onde et proposer des traitements adaptés au modèle de signal permettant d'éliminer les lobes secondaires résiduels induits. Nous développons une extension de l'Orthogonal Matching Pursuit (OMP) qui satisfait les conditions opérationnelles, notamment par sa robustesse aux erreurs de localisation, sa faible complexité calculatoire et la non nécessité de données d'apprentissage. La troisième partie traite de la robustesse des traitements vis-à-vis des écarts au modèle de signal, et particulièrement la prévention et l'anticipation de ces phénomènes afin d'éviter des dégradations de performance. En particulier, nous proposons une méthode numérique d'étalonnage des phases des émetteurs. La dernière partie consiste à mener des expérimentations en conditions réelles avec la plateforme radar MIMO Hycam. Nous montrons que certaines distorsions subies non anticipées, même limitées en sortie de filtre adapté, peuvent impacter fortement les performances en détection des traitements dépendant du modèle de signal
The MIMO radar concept promises numerous advantages compared to today's radar architectures: flexibility in the transmit beampattern design - including wide scene illumination and fine resolution after processing - and system complexity reduction, through the use of fewer antennas and the possibility of transferring system control and calibration to the digital domain. However, the MIMO radar is still at the stage of a theoretical concept, with insufficient consideration of the impact of the waveforms' lack of orthogonality and of system hardware imperfections. The ambition of this thesis is to contribute to paving the way to the operational MIMO radar. With this in mind, this thesis work consists of anticipating and compensating for the imperfections of the real world with processing techniques. The first part deals with MIMO waveform design, and we show that phase code waveforms are optimal in terms of spatial resolution. We also exhibit their limits in terms of sidelobe appearance at the matched filter output. The second part consists of accepting the waveforms' intrinsic imperfections and proposing data-dependent processing schemes for the rejection of the induced residual sidelobes. We develop an extension of Orthogonal Matching Pursuit (OMP) that satisfies operational requirements, especially robustness to localization errors, low computational complexity, and no need for training data. The third part deals with processing robustness to signal model mismatch, especially how it can be prevented or anticipated to avoid performance degradation. In particular, we propose a digital method of transmitter phase calibration. The last part consists of carrying out experiments in real conditions with the Hycam MIMO radar testbed. We show that some of the unanticipated distortions encountered, even when limited at the matched filter output, can greatly impact the detection performance of the data-dependent processing methods.
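The processing extension mentioned in this abstract builds on Orthogonal Matching Pursuit. The baseline greedy OMP (not the thesis' extension) can be sketched in plain numpy on a synthetic sparse-recovery problem; dictionary, noise level and sparsity are arbitrary choices for illustration.

    # Baseline Orthogonal Matching Pursuit on a synthetic sparse-recovery problem.
    import numpy as np

    rng = np.random.default_rng(4)
    n_atoms, dim, sparsity = 60, 40, 3

    D = rng.normal(size=(dim, n_atoms))
    D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary atoms
    true_support = rng.choice(n_atoms, sparsity, replace=False)
    x_true = np.zeros(n_atoms)
    x_true[true_support] = rng.normal(size=sparsity)
    y = D @ x_true + 0.01 * rng.normal(size=dim)

    residual, support = y.copy(), []
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(D.T @ residual)))       # atom most correlated with the residual
        support.append(k)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef               # orthogonalised residual

    print("true support:     ", sorted(true_support.tolist()))
    print("estimated support:", sorted(support))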
APA, Harvard, Vancouver, ISO, and other styles
34

Trapp, Matthias. "Analysis and exploration of virtual 3D city models using 3D information lenses." Master's thesis, Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2008/1393/.

Full text
Abstract:
This thesis addresses real-time rendering techniques for 3D information lenses based on the focus & context metaphor. It analyzes, conceives, implements, and reviews their applicability to objects and structures of virtual 3D city models. In contrast to digital terrain models, the application of focus & context visualization to virtual 3D city models is barely researched. However, the purposeful visualization of contextual data is of extreme importance for the interactive exploration and analysis in this field. Programmable hardware enables the implementation of new lens techniques that allow augmenting the perceptive and cognitive quality of the visualization compared to classical perspective projections. A set of 3D information lenses is integrated into a 3D scene-graph system: • Occlusion lenses modify the appearance of virtual 3D city model objects to resolve their occlusion and consequently facilitate navigation. • Best-view lenses display city model objects in a priority-based manner and mediate their meta information; thus, they support exploration and navigation of virtual 3D city models. • Color and deformation lenses modify the appearance and geometry of 3D city models to facilitate their perception. The presented techniques for 3D information lenses and their application to virtual 3D city models clarify their potential for interactive visualization and form a base for further development.
Diese Diplomarbeit behandelt echtzeitfähige Renderingverfahren für 3D Informationslinsen, die auf der Fokus-&-Kontext-Metapher basieren. Im folgenden werden ihre Anwendbarkeit auf Objekte und Strukturen von virtuellen 3D-Stadtmodellen analysiert, konzipiert, implementiert und bewertet. Die Focus-&-Kontext-Visualisierung für virtuelle 3D-Stadtmodelle ist im Gegensatz zum Anwendungsbereich der 3D Geländemodelle kaum untersucht. Hier jedoch ist eine gezielte Visualisierung von kontextbezogenen Daten zu Objekten von großer Bedeutung für die interaktive Exploration und Analyse. Programmierbare Computerhardware erlaubt die Umsetzung neuer Linsen-Techniken, welche die Steigerung der perzeptorischen und kognitiven Qualität der Visualisierung im Vergleich zu klassischen perspektivischen Projektionen zum Ziel hat. Für eine Auswahl von 3D-Informationslinsen wird die Integration in ein 3D-Szenengraph-System durchgeführt: • Verdeckungslinsen modifizieren die Gestaltung von virtuellen 3D-Stadtmodell- Objekten, um deren Verdeckungen aufzulösen und somit die Navigation zu erleichtern. • Best-View Linsen zeigen Stadtmodell-Objekte in einer prioritätsdefinierten Weise und vermitteln Meta-Informationen virtueller 3D-Stadtmodelle. Sie unterstützen dadurch deren Exploration und Navigation. • Farb- und Deformationslinsen modifizieren die Gestaltung und die Geometrie von 3D-Stadtmodell-Bereichen, um deren Wahrnehmung zu steigern. Die in dieser Arbeit präsentierten Techniken für 3D Informationslinsen und die Anwendung auf virtuelle 3D Stadt-Modelle verdeutlichen deren Potenzial in der interaktiven Visualisierung und bilden eine Basis für Weiterentwicklungen.
APA, Harvard, Vancouver, ISO, and other styles
35

Kalapati, Raga S. "Analysis of Ozone Data Trends as an Effect of Meteorology and Development of Forecasting Models for Predicting Hourly Ozone Concentrations and Exceedances for Dayton, OH, Using MM5 Real-Time Forecasts." University of Toledo / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1091216133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

V´ásquez, Huanchi Miriam Elizabeth. "El impacto del tipo de cambio real y su volatilidad en el desempeño de las exportaciones de América Latina durante el periodo 1989-2018." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2020. http://hdl.handle.net/10757/653817.

Full text
Abstract:
Este trabajo de investigación examina el impacto de la volatilidad del tipo de cambio real, como proxy de la incertidumbre cambiaria, en el desempeño de las exportaciones totales para un panel de países de América Latina en el periodo 1989-2018. Se utilizan las variables como brecha de las exportaciones, brecha del tipo de cambio real, brecha del producto bruto interno, la brecha de la demanda mundial y la brecha de los término de intercambio. Asimismo, se estima el comportamiento de la volatilidad del tipo de cambio real modelizándola a través de modelos GARCH. Se estima un modelo panel de Vectores Autorregresivos para una muestra equilibrada de cinco países de América Latina (Argentina, Brasil, Chile, México y Perú) para el periodo 1989-2018. Los resultados sugieren que la volatilidad del tipo de cambio real tiene un efecto negativo en las exportaciones de los países seleccionados. Adicionalmente, esta investigación es relevante porque proporciona evidencia empírica de países con diferentes características económicas para comprender el efecto de las variaciones del tipo de cambio real en el desempeño de las exportaciones y, por ende, en la estabilidad del crecimiento económico.
This research work examines the impact of real exchange rate volatility, as a proxy for exchange rate uncertainty, on the performance of total exports for a panel of Latin American countries in the period 1989-2018. Variables such as the export gap, the real exchange rate gap, the gross domestic product gap, the world demand gap, and the terms-of-trade gap are used. Likewise, the behavior of the volatility of the real exchange rate is estimated by modeling it through GARCH models. A panel Vector Autoregression model is estimated for a balanced sample of five Latin American countries (Argentina, Brazil, Chile, Mexico, and Peru) for the period 1989-2018. The results suggest that the volatility of the real exchange rate has a negative effect on the exports of the selected countries. Additionally, this research is relevant because it provides empirical evidence from countries with different economic characteristics to understand the effect of variations in the real exchange rate on export performance and, therefore, on the stability of economic growth.
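The volatility proxy described in this abstract comes from a GARCH model. A sketch of that step, assuming the Python `arch` package is available and using simulated returns in place of the real-exchange-rate series, could read:

    # GARCH(1,1) conditional volatility as an uncertainty proxy (placeholder data).
    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(5)
    returns = rng.standard_t(df=8, size=360)     # ~360 monthly observations, percent scale

    am = arch_model(returns, mean="Constant", vol="Garch", p=1, q=1)
    res = am.fit(disp="off")

    volatility_proxy = res.conditional_volatility   # this series would enter the panel VAR
    print(res.params)
    print(volatility_proxy[-5:])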
Trabajo de investigación
APA, Harvard, Vancouver, ISO, and other styles
37

Wahlberg, Fredrik. "Parallel algorithms for target tracking on multi-coreplatform with mobile LEGO robots." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-155537.

Full text
Abstract:
The aim of this master thesis was to develop a versatile and reliable experimental platform of mobile robots, solving tracking problems, for education and research. Evaluation of parallel bearings-only tracking and control algorithms on a multi-core architecture has been performed. The platform was implemented as a mobile wireless sensor network using multiple mobile robots, each using a mounted camera for data acquisition. Data processing was performed on the mobile robots and on a server, which also played the role of network communication hub. A major focus was to implement this platform in a flexible manner to allow for education and future research in the fields of signal processing, wireless sensor networks and automatic control. The implemented platform was intended to act as a bridge between the ideal world of simulation and the non-ideal real world of full-scale prototypes. The implemented algorithms estimated the positions of the robots, estimated a non-cooperating target's position and regulated the positions of the robots. The tracking algorithms implemented were the Gaussian particle filter, the globally distributed particle filter and the locally distributed particle filter. The regulator tried to move the robots to give the highest possible sensor information under given constraints. The regulators implemented used model predictive control algorithms. Code for communicating with filters in external processes was implemented together with tools for data extraction and statistical analysis. Both implementation details and evaluation of different tracking algorithms are presented. Some algorithms have been tested as examples of the platform's capabilities, among them the scalability and accuracy of some particle filtering techniques. The filters performed with sufficient accuracy and showed a close to linear speedup using up to 12 processor cores. Performance of parallel particle filtering with constraints on network bandwidth was also studied, measuring breakpoints on filter communication to avoid weight starvation. Quality of the sensor readings, network latency and hardware performance are discussed. Experiments showed that the platform was a viable alternative for data acquisition in algorithm development and for benchmarking on multi-core architectures. The platform was shown to be flexible enough to be used as a framework for future algorithm development and education in automatic control.
APA, Harvard, Vancouver, ISO, and other styles
38

Allen, Brett. "Learning body shape models from real-world data /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/6969.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Truzzi, Stefano. "Event classification in MAGIC through Convolutional Neural Networks." Doctoral thesis, Università di Siena, 2022. http://hdl.handle.net/11365/1216295.

Full text
Abstract:
The Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescopes are able to detect gamma rays from the ground with energies beyond several tens of GeV emitted by the most energetic known objects, including Pulsar Wind Nebulae, Active Galactic Nuclei, and Gamma-Ray Bursts. Gamma rays and cosmic rays are detected by imaging the Cherenkov light produced by the charged superluminal leptons in the extended air shower originated when the primary particle interacts with the atmosphere. These Cherenkov flashes brighten the night sky for short times on the nanosecond scale. From the image topology and other observables, gamma rays can be separated from the unwanted cosmic rays, and thereafter the incoming direction and energy of the primary gamma rays can be reconstructed. The standard algorithm in MAGIC data analysis for the gamma/hadron separation is the so-called Random Forest, which works on a parametrization of the stereo events based on the shower image parameters. Until a few years ago, these algorithms were limited by the available computational resources, but modern devices, such as GPUs, make it possible to work efficiently on the pixel-map information. Most neural network applications in the field perform the training on Monte Carlo simulated data for the gamma-ray sample. This choice is prone to systematics arising from discrepancies between observational data and simulations. Instead, in this thesis I trained a known neural network scheme with observational data from a giant flare of the bright TeV blazar Mrk421 observed by MAGIC in 2013. With this method for gamma/hadron separation, the preliminary results compete with the standard MAGIC analysis based on Random Forest classification, which also shows the potential of this approach for further improvement. In this thesis, an introduction to high-energy astrophysics and astroparticle physics is given first. The cosmic messengers are briefly reviewed, with a focus on photons; then astronomical sources of γ rays are described, followed by a description of the detection techniques. In the second chapter the MAGIC analysis pipeline, from the low-level data acquisition to the high-level data, is described. The MAGIC Instrument Response Functions are detailed. Finally, the most important astronomical sources used in the standard MAGIC analysis are listed. The third chapter is devoted to Deep Neural Network techniques, starting from a historical Artificial Intelligence excursus followed by a Machine Learning description. The basic principles behind an Artificial Neural Network and the Convolutional Neural Network used for this work are explained. The last chapter describes my original work, showing in detail the data selection/manipulation for training the Inception Resnet V2 Convolutional Neural Network and the preliminary results obtained from four test sources.
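For orientation, a heavily scaled-down stand-in for the CNN classifier described above is sketched below with Keras: a few convolutional layers over two-channel "camera images" with a sigmoid gamma/hadron output. The arrays are random placeholders; the actual work uses interpolated MAGIC pixel maps and an Inception ResNet V2 architecture.

    # Tiny binary CNN classifier standing in for the gamma/hadron separation network.
    import numpy as np
    import tensorflow as tf

    x = np.random.rand(256, 64, 64, 2).astype("float32")   # 2 channels: one per telescope (placeholder)
    y = np.random.randint(0, 2, size=(256, 1))              # 1 = gamma, 0 = hadron (placeholder labels)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 2)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),      # P(gamma)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=2, batch_size=32, verbose=0)
    print(model.predict(x[:5], verbose=0).ravel())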
APA, Harvard, Vancouver, ISO, and other styles
40

Pettersson, Emma, and Johanna Karlsson. "Design för översikt av kontinuerliga dataflöden : En studie om informationsgränssnitt för energimätning till hjälp för fastighetsbolag." Thesis, Linnéuniversitetet, Institutionen för informatik (IK), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78510.

Full text
Abstract:
Programvaror och gränssnitt är idag en naturlig del av vår vardag. Att ta fram användbara och framgångsrika gränssnitt är i företagens intresse då det kan leda till nöjdare och fler kunder. Problemformulering i den här rapporten bygger på användarundersökningar som genomförts på ett energipresenterade informationsgränssnitt som används av personer i fastighetsbranschen. Företaget som äger programvaran genomförde en enkätundersökning, i den indikerades att programvarans användbarhet behövde utvecklas och detta gavs i uppgift till projektgruppen att vidareutveckla. Vidareutvecklingen baseras på Delone och McLeans (2003) Information system success model samt begreppen informationsdesign, användbarhet och featuritis. Utifrån dessa skapades den teoretiska bakgrund som låg till grund för de kvalitativa intervjuerna och frågeformulär som togs fram. Den teoretiska bakgrunden låg dessutom till grund för de gränssnittsförslag som slutligen togs fram i projektet (Se figur 4). Resultatet av undersökningen visade att användare och supportpersonal hade förhållandevis olika upplevelser av Programvaran. Andra slutsatser som kunde dras om hur ett informationsgränssnitt ska designas för att fungera som stöd för användaren var följande. Det ska följa konventionella designmönster som ska vara konsekvent genom hela programvaran. De ska använda ett anpassat och tydligt språk och antingen vara så tydlig och intuitiv att alla verkligen kan förstå programvaran eller ha en bra och tydlig manual.
Software and interfaces are today a natural part of our everyday lives. Developing useful and successful interfaces is in companies' interest, as it can lead to more satisfied customers. The problem statement in this report is based on user surveys conducted on an information interface presenting energy data, used by people in the real estate industry. The company that owns the software conducted a survey which indicated that the software's usability needed to be improved, and the project team was assigned to develop it further. The further development is based on DeLone and McLean's (2003) Information System Success Model as well as the concepts of information design, usability and featuritis. Based on these, the theoretical background was created that formed the basis for the qualitative interviews and questionnaires that were developed. The theoretical background also provided the basis for the interface proposals that were finally presented in the project (see Figure 6). The results of the survey showed that users and support staff had relatively different experiences of the software. The other conclusions that could be drawn about how an information interface should be designed to serve as support for the user were the following: it should follow conventional design patterns, the design should be consistent throughout the software, it should use an adapted and clear language, and it should either be so clear and intuitive that anyone can understand the software or offer a clear manual.
APA, Harvard, Vancouver, ISO, and other styles
41

Chen, Kui. "Modeling and estimation of degradation for PEM fuel cells in real conditions of use for mobile applications." Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCA022.

Full text
Abstract:
La PEMFC sont une source d'énergie propre en raison de ses avantages comme une efficacité énergétique élevée, un faible bruit, une température de fonctionnement faible et zéro polluant. Cependant, la courte durée de vie causée par la dégradation a un grand impact sur l'intégration de la PEMFC dans les systèmes de transport. Le pronostic et la gestion de la santé sont un moyen important pour améliorer les performances et la durée de vie de la PEMFC. Cette thèse présente cinq méthodes de pronostic de dégradation pour la PEMFC. Elle examine l'influence des principales conditions de fonctionnement, incluant le courant de charge, la température, la pression d'hydrogène et l'humidité relative, sur la dégradation de la PEMFC. La dégradation globale et les phénomènes réversibles sont analysés en se basant sur les données numériques issues de trois expériences de PEMFC menées dans des conditions différentes d'usage (une flotte de véhicules à PEMFC et deux bancs de test de type laboratoire). Un premier modèle basé sur l'algorithme de filtre de Kalman UKF (Unscented Kalman Filter) et le modèle de dégradation de la tension est proposé pour prédire la dégradation de la PEMFC dans les véhicules électriques à pile à combustible. Puis, la méthode hybride basée sur l'analyse des ondelettes, la machine d'apprentissage extrême et l'algorithme génétique est proposée pour construire un deuxième modèle de dégradation de la PEMFC. Pour prévoir la dégradation du PEMFC avec des données de expérimentales limitées, la méthode améliorée basée sur sur la combinaison du modèle des réseaux de neurones gris, l'optimisation de l'essaim de particules et les méthodes de fenêtre mobile, est utilisée pour développer le troisième modèle. La quatrième contribution est un modèle de pronostic de vieillissement de la PEMFC fonctionnant dans différentes conditions, en utilisant le réseau neuronal de rétro-propagation et l'algorithme évolutif. Enfin, un pronostic de dégradation de la PEMFC basé sur le réseau neuronal en ondelettes et l'algorithme de recherche du coucou est proposé pour prédire la durée de vie restante de la PEMFC
The Proton Exchange Membrane Fuel Cell (PEMFC) is a clean energy source because of merits such as high energy efficiency, low noise, low operating temperature, and zero pollutants. However, the short lifetime caused by degradation has a great impact on the integration of PEMFCs in transportation systems. Prognostics and health management is an important way to improve the performance and remaining useful life of the PEMFC. This thesis proposes five degradation prognosis methods for the PEMFC. The thesis considers the influence of the main operating conditions, including the load current, temperature, hydrogen pressure, and relative humidity, on the degradation of the PEMFC. The global degradation trend and reversible phenomena are analyzed on the basis of data from three PEMFC experiments conducted under different conditions of use (a fleet of 10 PEMFC vehicles and two laboratory test benches). First, a model-driven method based on the unscented Kalman filter algorithm and a voltage degradation model is presented to predict the degradation of the PEMFC in fuel cell electric vehicles. Then, a hybrid method based on wavelet analysis, an extreme learning machine and a genetic algorithm is proposed to build the degradation model of the PEMFC. To forecast the degradation of the PEMFC with limited experimental data, an improved data-driven method based on the combination of a grey neural network model, particle swarm optimization and moving window methods is used to develop the third model. The fourth contribution is an aging prognosis model of the PEMFC operating in different conditions, using a backpropagation neural network and an evolutionary algorithm. Finally, a degradation prognosis of the PEMFC based on a wavelet neural network and the cuckoo search algorithm is proposed to predict the remaining useful life of the PEMFC.
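A deliberately simplified version of the prognosis idea in this abstract, fitting a linear degradation trend to the stack voltage and extrapolating to an assumed end-of-life threshold, is sketched below. The thesis' actual models (UKF, extreme learning machine, grey and wavelet neural networks) are far richer; the data, degradation rate and threshold here are invented.

    # Crude remaining-useful-life (RUL) estimate from a linear voltage-degradation trend.
    import numpy as np

    rng = np.random.default_rng(6)
    hours = np.arange(0, 600, 5.0)                            # operating time [h]
    voltage = 3.30 - 1e-4 * hours + 0.003 * rng.standard_normal(hours.size)

    threshold = 3.30 * 0.96                                   # assumed end-of-life: -4% of initial voltage
    slope, intercept = np.polyfit(hours, voltage, deg=1)      # linear trend fit

    t_fail = (threshold - intercept) / slope                  # time at which the trend crosses the threshold
    rul = t_fail - hours[-1]
    print(f"estimated degradation rate: {slope * 1000:.3f} mV/h, RUL ≈ {rul:.0f} h")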
APA, Harvard, Vancouver, ISO, and other styles
42

Hartmann, Daniel. "Stock markets and real-time macroeconomic data /." Hamburg : Kovač, 2007. http://d-nb.info/985325682/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Rao, Ashwani Pratap. "Statistical information retrieval models| Experiments, evaluation on real time data." Thesis, University of Delaware, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1567821.

Full text
Abstract:

We are all aware of the rise of the information age: heterogeneous sources of information and the ability to publish rapidly and indiscriminately are responsible for information chaos. In this work, we are interested in a system which can separate the "wheat" of vital information from the chaff within this information chaos. An efficient filtering system can accelerate meaningful utilization of knowledge. Consider Wikipedia, an example of community-driven knowledge synthesis. Facts about topics on Wikipedia are continuously being updated by users interested in a particular topic. Consider an automatic system (or an invisible robot) to which a topic such as "President of the United States" can be fed. This system will work ceaselessly, filtering new information created on the web in order to provide the small set of documents about the "President of the United States" that are vital to keeping the Wikipedia page relevant and up-to-date. In this work, we present an automatic information filtering system for this task. While building such a system, we have encountered issues related to scalability, retrieval algorithms, and system evaluation; we describe our efforts to understand and overcome these issues.
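As a generic illustration of the filtering task described above (and not of the retrieval models developed in the thesis), incoming documents can be scored against a topic profile with TF-IDF cosine similarity and kept only when they exceed a threshold:

    # Toy topic filter: keep documents whose TF-IDF similarity to the topic exceeds a cut-off.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    topic = "President of the United States executive branch White House"
    stream = [
        "The President of the United States signed an executive order today.",
        "A new species of frog was discovered in the Amazon rainforest.",
        "The White House announced the nominee for the Supreme Court.",
    ]

    vectorizer = TfidfVectorizer().fit([topic] + stream)
    topic_vec = vectorizer.transform([topic])
    doc_vecs = vectorizer.transform(stream)

    scores = cosine_similarity(topic_vec, doc_vecs).ravel()
    threshold = 0.1                      # arbitrary cut-off for this illustration
    for doc, score in zip(stream, scores):
        print(f"{score:.2f}  {'KEEP' if score >= threshold else 'skip'}  {doc}")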

APA, Harvard, Vancouver, ISO, and other styles
44

pande, anurag. "ESTIMATION OF HYBRID MODELS FOR REAL-TIME CRASH RISK ASSESSMENT ON FREEWAYS." Doctoral diss., University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3016.

Full text
Abstract:
The relevance of reactive traffic management strategies such as freeway incident detection has been diminishing with advancements in mobile phone usage and video surveillance technology. On the other hand, the capacity to collect, store, and analyze traffic data from underground loop detectors has witnessed enormous growth in the recent past. These two facts together provide us with the motivation as well as the means to shift the focus of freeway traffic management toward proactive strategies that would involve anticipating incidents such as crashes. The primary element of a proactive traffic management strategy would be model(s) that can separate 'crash prone' conditions from 'normal' traffic conditions in real time. The aim in this research is to establish relationship(s) between historical crashes of specific types and corresponding loop detector data, which may be used as the basis for classifying real-time traffic conditions into 'normal' or 'crash prone' in the future. In this regard, traffic data in this study were also collected for cases which did not lead to crashes (non-crash cases) so that the problem may be set up as a binary classification. A thorough review of the literature suggested that existing real-time crash 'prediction' models (classification or otherwise) are generic in nature, i.e., a single model has been used to identify all crashes (such as rear-end, sideswipe, or angle), even though traffic conditions preceding crashes are known to differ by type of crash. Moreover, a generic model would yield no information about the collision most likely to occur. To be able to analyze different groups of crashes independently, a large database of crashes reported during the 5-year period from 1999 through 2003 on the Interstate-4 corridor in Orlando was collected. The 36.25-mile instrumented corridor is equipped with 69 dual loop detector stations in each direction (eastbound and westbound) located approximately every ½ mile. These stations report speed, volume, and occupancy data every 30 seconds from the three through lanes of the corridor. Geometric design parameters for the freeway were also collected and collated with historical crash and corresponding loop detector data. The first group of crashes to be analyzed were the rear-end crashes, which account for about 51% of the total crashes. Based on preliminary explorations of average traffic speeds, rear-end crashes were grouped into two mutually exclusive groups: first, those occurring under extended congestion (referred to as regime 1 traffic conditions), and the other which occurred with relatively free-flow conditions (referred to as regime 2 traffic conditions) prevailing 5-10 minutes before the crash. Simple rules to separate these two groups of rear-end crashes were formulated based on the classification tree methodology. It was found that the first group of rear-end crashes can be attributed to parameters measurable through loop detectors such as the coefficient of variation in speed and average occupancy at stations in the vicinity of the crash location. For the second group of rear-end crashes (referred to as regime 2), traffic parameters such as average speed and occupancy at stations downstream of the crash location were significant, along with off-line factors such as the time of day and the presence of an on-ramp in the downstream direction. It was found that regime 1 traffic conditions make up only about 6% of the traffic conditions on the freeway.
Almost half of the rear-end crashes occurred under the regime 1 traffic regime even with such little exposure. This observation led to the conclusion that freeway locations operating under regime 1 traffic may be flagged for (rear-end) crashes without any further investigation. MLP (multilayer perceptron) and NRBF (normalized radial basis function) neural network architectures were explored to identify regime 2 rear-end crashes. The performance of individual neural network models was improved by hybridizing their outputs. Individual and hybrid PNN (probabilistic neural network) models were also explored along with matched case-control logistic regression. The stepwise selection procedure yielded the matched logistic regression model indicating the difference between average speeds upstream and downstream as significant. Even though the model provided good interpretation, its classification accuracy over the validation dataset was far inferior to that of the hybrid MLP/NRBF and PNN models. Hybrid neural network models, along with the classification tree model (developed to identify the traffic regimes), were able to identify about 60% of the regime 2 rear-end crashes in addition to all regime 1 rear-end crashes with a reasonable number of positive decisions (warnings). This translates into identification of more than ¾ (77%) of all rear-end crashes. Classification models were then developed for the next most frequent type, i.e., lane-change related crashes. Based on preliminary analysis, it was concluded that location-specific characteristics, such as presence of ramps, mile-post location, etc., were not significantly associated with these crashes. The average difference between occupancies of adjacent lanes and the average speeds upstream and downstream of the crash location were found significant. The significant variables were then subjected as inputs to MLP and NRBF based classifiers. The best models in each category were hybridized by averaging their respective outputs. The hybrid model significantly improved on the crash identification achieved through individual models, and 57% of the crashes in the validation dataset could be identified with 30% warnings. Although the hybrid models in this research were developed with corresponding data for rear-end and lane-change related crashes only, it was observed that about 60% of the historical single-vehicle crashes (other than rollovers) could also be identified using these models. The majority of the identified single-vehicle crashes, according to the crash reports, were caused by evasive actions by the drivers in order to avoid another vehicle in front or in the other lane. Vehicle rollover crashes were found to be associated with speeding and curvature of the freeway section; the established relationship, however, was not sufficient to identify the occurrence of these crashes in real time. Based on the results from the modeling procedure, a framework for parallel real-time application of these two sets of models (rear-end and lane-change) in the form of a system was proposed. To identify rear-end crashes, the data are first subjected to classification tree based rules to identify the traffic regimes. If traffic patterns belong to regime 1, a rear-end crash warning is issued for the location. If the patterns are identified to be regime 2, then they are subjected to the hybrid MLP/NRBF model employing traffic data from five surrounding traffic stations.
If the model identifies the patterns as crash prone, then the location may be flagged for a rear-end crash; otherwise, a final check for a regime 2 rear-end crash is applied to the data through the hybrid PNN model. If data from five stations are not available due to intermittent loop failures, the system is provided with the flexibility to switch to models with more tolerant data requirements (i.e., models using traffic data from only one station or three stations). To assess the risk of a lane-change related crash, if all three lanes at the immediate upstream station are functioning, the hybrid of the two best individual neural network models (NRBF with three hidden neurons and MLP with four hidden neurons) is applied to the input data. A warning for a lane-change related crash may be issued based on its output. The proposed strategy is demonstrated over a complete day of loop data in a virtual real-time application. It was shown that the system of models may be used to continuously assess and update the risk for rear-end and lane-change related crashes. The system developed in this research should be perceived as the primary component of a proactive traffic management strategy. The output of the system, along with the knowledge of variables critically associated with specific types of crashes identified in this research, can be used to formulate ways for avoiding impending crashes. However, specific crash prevention strategies, e.g., variable speed limits and warnings to commuters, demand separate attention and should be addressed through thorough future research.
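The hybridisation idea described in this abstract, averaging the outputs of differently configured neural networks, can be sketched with scikit-learn as below. The features, data and warning threshold are synthetic stand-ins for the loop-detector measures, and the NRBF, PNN and regime-classification components of the actual system are not reproduced.

    # Averaging the crash-risk probabilities of two differently sized MLP classifiers.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    n = 2000
    X = rng.normal(size=(n, 5))      # stand-ins for speeds/occupancies up- and downstream, CV of speed
    risk = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 3] + 0.5 * X[:, 2])))
    y = (rng.uniform(size=n) < risk).astype(int)          # 1 = crash-prone traffic pattern

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    scaler = StandardScaler().fit(X_tr)

    m1 = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0).fit(scaler.transform(X_tr), y_tr)
    m2 = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1).fit(scaler.transform(X_tr), y_tr)

    p_hybrid = 0.5 * (m1.predict_proba(scaler.transform(X_te))[:, 1] +
                      m2.predict_proba(scaler.transform(X_te))[:, 1])
    warnings = p_hybrid > 0.5        # threshold trades detections against false alarms
    print("fraction of patterns flagged as crash prone:", warnings.mean())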
Ph.D.
Department of Civil and Environmental Engineering
Engineering and Computer Science
Civil Engineering
APA, Harvard, Vancouver, ISO, and other styles
45

Buchholz, Henrik. "Real-time visualization of 3D city models." Phd thesis, Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2007/1333/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Cheng, Andersson Penny Peng. "Yield curve estimation models with real market data implementation and performance observation." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-52399.

Full text
Abstract:
Different methods/models always exist to build a yield curve from a set of observed market rates, even when the curve completely reproduces the prices of the given instruments. Creating an accurate and smooth interest rate curve has always been a challenge. The purpose of this thesis is to use real market data to construct yield curves with the bootstrapping method and the Smith Wilson model, in order to observe and compare the performance of the two models. Furthermore, the extended Nelson Siegel model is introduced without implementation. Instead of implementing it, I compare the ENS model and the traditional bootstrapping method from a more theoretical perspective in order to assess their performance capabilities.
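For reference, the bootstrapping method named in this abstract can be sketched for the simplest case of annual-payment par swap rates; the quotes below are placeholders, and the Smith Wilson and extended Nelson Siegel models are not implemented here.

    # Bootstrapping discount factors and zero rates from annual-payment par swap rates.
    # Par condition at maturity n: s_n * (annuity + P_n) + P_n = 1, so
    # P_n = (1 - s_n * annuity) / (1 + s_n), where annuity = sum of earlier discount factors.
    par_rates = {1: 0.010, 2: 0.012, 3: 0.015, 4: 0.017, 5: 0.018}   # illustrative quotes

    discount = {}
    annuity = 0.0
    for n in sorted(par_rates):
        s = par_rates[n]
        P_n = (1.0 - s * annuity) / (1.0 + s)
        discount[n] = P_n
        annuity += P_n

    zero_rates = {n: discount[n] ** (-1.0 / n) - 1.0 for n in discount}   # annually compounded
    for n in sorted(zero_rates):
        print(f"{n}y: DF = {discount[n]:.6f}, zero = {zero_rates[n] * 100:.3f}%")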
APA, Harvard, Vancouver, ISO, and other styles
47

Colombelli, Simona <1986&gt. "Early Warning For Large Earthquakes: Observations, Models and Real-Time Data Analysis." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6339/1/colombelli_simona_tesi.pdf.

Full text
Abstract:
This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects which have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are first discussed. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. Limitations of the standard approaches for large events arise in this chapter. The difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the single Tohoku-Oki earthquake. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation to justify the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested. The first one is a threshold-based method which uses traditional seismic data. Then an innovative approach using continuous GPS data is explored. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
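For the threshold-based damage-zone approach mentioned at the end of the abstract, a generic sketch might look like the following: stations whose early peak displacement exceeds an alert threshold are flagged as lying inside the potentially damaged zone. The threshold value and station readings below are hypothetical; the actual method in the thesis is calibrated on real strong-motion data and is not reproduced here.

PD_THRESHOLD_CM = 0.5  # assumed alert level, in centimetres (illustrative only)

def flag_potentially_damaged(stations):
    """stations: iterable of (station_id, peak_displacement_cm) pairs."""
    return [sid for sid, pd_cm in stations if pd_cm >= PD_THRESHOLD_CM]

# Hypothetical real-time snapshot a few seconds after the P-wave arrival.
snapshot = [("ST01", 1.8), ("ST02", 0.6), ("ST03", 0.05), ("ST04", 0.4)]
print(flag_potentially_damaged(snapshot))  # -> ['ST01', 'ST02']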
APA, Harvard, Vancouver, ISO, and other styles
49

Lauzeral, Nathan. "Reduced order and sparse representations for patient-specific modeling in computational surgery." Thesis, Ecole centrale de Nantes, 2019. http://www.theses.fr/2019ECDN0062.

Full text
Abstract:
This thesis investigates the use of model order reduction methods based on sparsity-related techniques for the development of real-time biophysical modeling. In particular, it focuses on the embedding of interactive biophysical simulation into patient-specific models of tissues and organs to enhance medical images and assist the clinician in the process of informed decision making. In this context, three fundamental bottlenecks arise. The first lies in the embedding of the shape parametrization into the parametric reduced order model to faithfully represent the patient’s anatomy. A non-intrusive approach relying on a sparse sampling of the space of anatomical features is introduced and validated. Then, we tackle the problem of data completion and image reconstruction from partial or incomplete datasets based on physical priors. The proposed solution has the potential to perform scene registration in the context of augmented reality for laparoscopy. Quasi-real-time computations are reached by using a new hyper-reduction approach based on a sparsity-promoting technique. Finally, the third challenge concerns the representation of biophysical systems under uncertainty of the underlying parameters. It is shown that traditional model order reduction approaches are not always successful in producing a low dimensional representation of a model, in particular in the case of electrosurgery simulation. An alternative is proposed using a metamodeling approach. To this end, we successfully extend the use of sparse regression methods to the case of systems with stochastic parameters.
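As an illustration of the third point, sparse regression can build a low-cost metamodel of a system with stochastic parameters. The sketch below fits a LASSO-penalized polynomial surrogate to samples of a toy quantity of interest; the function simulate and all sampling choices are stand-ins, not the electrosurgery model or the specific sparse-regression scheme used in the thesis.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def simulate(theta):
    # Hypothetical quantity of interest depending on two uncertain parameters.
    return np.sin(theta[0]) + 0.1 * theta[1] ** 2

# Sample the stochastic parameters and evaluate the expensive model offline.
thetas = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.array([simulate(t) for t in thetas])

# Fit a sparse polynomial surrogate that can then be evaluated in real time.
surrogate = make_pipeline(PolynomialFeatures(degree=4),
                          Lasso(alpha=1e-3, max_iter=50000))
surrogate.fit(thetas, y)
print(surrogate.predict(np.array([[0.3, -0.5]])))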
APA, Harvard, Vancouver, ISO, and other styles
50

De, Wulf Martin. "From timed models to timed implementations." Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210797.

Full text
Abstract:

Computer Science is currently facing a grand challenge: finding good design practices for embedded systems. Embedded systems are essentially computers interacting with some physical process. You could find one in a braking system or in a nuclear power plant, for example. They present several design difficulties: first, they are reactive systems, interacting indefinitely with their environment. Second, they must satisfy real-time constraints specifying when they should respond, and not only how. Finally, their environment is often deeply continuous, presenting complex dynamics. The formal models of choice for specifying such systems are timed and hybrid automata, for which model checking is well studied.

In the first part of this thesis, we study a complete design approach, including verification and code generation, for timed automata. We define a new semantics for timed automata, the AASAP semantics, which preserves the decidability properties for model checking and at the same time is implementable. Our notion of implementability is completely novel and relies on the simulation of a semantics that is obviously implementable on a real platform. We wrote tools for analysis and code generation and exemplify them on a case study of the well-known Philips Audio Control Protocol.

In the second part of this thesis, we study the problem of controller synthesis for an environment specified as a hybrid automaton. We give a new solution for discrete controllers having only imperfect information about the state of the system. In the process, we define a new algorithm, based on the monotonicity of the controllable predecessors operator, for efficiently finding a controller, and we show some promising applications to a classical problem: the universality test for finite automata.
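To give a flavour of the controllable-predecessors iteration mentioned in this last paragraph, the sketch below solves a safety game on a finite, fully observable graph by iterating a monotone CPre operator to a greatest fixpoint. The game graph is invented, and the sketch deliberately ignores the timed/hybrid dynamics and the imperfect information that the thesis actually addresses.

def cpre(moves, target):
    """Controllable predecessors: states with some controller action whose
    possible successors (environment nondeterminism) all lie in `target`."""
    return {s for s, actions in moves.items()
            if any(all(t in target for t in succs) for succs in actions.values())}

def safety_region(moves, safe):
    """Greatest fixpoint of X -> safe ∩ CPre(X): states from which the
    controller can keep the system inside `safe` forever."""
    region = set(safe)
    while True:
        nxt = set(safe) & cpre(moves, region)
        if nxt == region:
            return region
        region = nxt

# Hypothetical game graph: moves[state][action] lists the states the
# environment may reach after the controller picks that action.
moves = {
    "s0": {"a": ["s0", "s1"], "b": ["s0"]},
    "s1": {"a": ["bad"]},
    "bad": {"a": ["bad"]},
}
print(safety_region(moves, safe={"s0", "s1"}))  # -> {'s0'} (play "b" forever)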
Doctorate in Sciences, Computer Science specialization

APA, Harvard, Vancouver, ISO, and other styles
