Dissertations / Theses on the topic 'Integration dynamic'


Consult the top 50 dissertations / theses for your research on the topic 'Integration dynamic.'


1

Viljoen, Dewald. "Dynamic building model integration." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/20257.

Abstract:
Thesis (MScEng)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: The number and complexity of software applications for the building industry are constantly increasing. It has been a long-term goal of the software industry to support integration of the various models and applications. This is a difficult task due to the complexity of the models and the diversity of the fields that they model. As a result, only large software houses have the ability to provide integrated solutions on the basis of a common information model. Such a model can be established more easily because the different software is developed within the same group. Other software suppliers usually have to resort to importing and exporting data to establish some form of integration. Even large software houses still sometimes use this technique between their different packages. To obtain a fully integrated solution, clients have to acquire complex and expensive software, even if only a small percentage of its functionality is actually required. A different approach to integration is proposed here, based on an integration framework that links different existing software models. The framework must be customisable for each individual's unique requirements as well as for the software already used by the individual. For the framework to be customisable, it must either encompass the information requirements of all existing software models from the outset, or be flexible and adaptable for each user. Developing an all-encompassing software model is difficult and expensive, so the latter approach is followed here. The result is a model that is less general than BIM-style models, but more focussed and less complex. The elements of this flexible model do not have predetermined properties; instead, properties can be added and removed at runtime. Furthermore, derived properties are not stored as values, but rather as the methods by which their values are obtained.
These methods can also be added, removed and modified at runtime. Together, these two concepts allow both the structure and the functionality of the model to be changed at runtime. An added advantage is that a knowledgeable user can make such changes himself. Changes to the linked models can easily be incorporated in the integration framework, so their future development is not limited. This has the advantage that the information content of the various applications does not have to be predetermined. It is acknowledged that a specific solution is required for each integration model; however, the user retains full control to expand his model to the complexity of BIM-type models. Furthermore, if new software models are developed to incorporate the proposed structures, even more seamless and flexible integration will be possible. The proposed framework is demonstrated by linking a CAD application to a cost-estimation application for buildings. A prototype implementation demonstrates full integration by synchronising selection between the different applications.
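The two concepts at the core of this flexible model (properties added at runtime, and derived properties stored as the methods that compute them) can be sketched in a few lines. This is an illustrative Python sketch, not the author's implementation; the `Element` class, its method names and the unit cost rate are assumptions made here for the example:

```python
class Element:
    """A model element with no predetermined properties.

    Plain properties are stored as values; derived properties are stored
    as callables and evaluated on access, so both the data and the
    behaviour of the model can be changed while the program runs.
    """

    def __init__(self):
        self._values = {}
        self._derived = {}

    def set_property(self, name, value):
        self._values[name] = value

    def set_derived(self, name, func):
        # Store the method, not the value: the value is computed on demand,
        # and the method itself can be replaced at runtime.
        self._derived[name] = func

    def remove_property(self, name):
        self._values.pop(name, None)
        self._derived.pop(name, None)

    def get(self, name):
        if name in self._derived:
            return self._derived[name](self)
        return self._values[name]


# A wall element gains geometric properties and a derived cost at runtime.
wall = Element()
wall.set_property("length_m", 4.0)
wall.set_property("height_m", 2.5)
wall.set_derived("area_m2", lambda e: e.get("length_m") * e.get("height_m"))
wall.set_derived("cost", lambda e: e.get("area_m2") * 120.0)  # assumed unit rate
```

Because `area_m2` and `cost` are stored as methods, changing `length_m` later immediately changes both derived values, which is the behaviour the abstract describes for linking a CAD model to cost estimation.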
2

Chen, Owen Jianwen 1968. "Integration of dynamic traffic control and assignment." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/10135.

3

Rogers, Christopher Reed. "A Comprehensive Integration and Analysis of Dynamic Load Balancing Architectures within Molecular Dynamics." DigitalCommons@USU, 2009. https://digitalcommons.usu.edu/etd/412.

Abstract:
The world of nano-science is an ever-changing field. Molecular Dynamics (MD) is a computational suite of tools useful for analyzing and predicting the behavior of substances at the molecular level. The nature of MD is such that only a few types of computations are repeated thousands or sometimes millions of times over. Even a small increase in the speed or efficiency of an MD simulator can compound itself over the life of the simulation and have a positive, observable effect. This thesis is the end result of an attempted speedup of the MD problem. Two types of MD architectures are developed: a dynamic architecture that is able to change along with the computational demands of the system, and a static architecture whose processing elements are configured to best suit a variety of computational demands. The efficiency, throughput, area, and speed of the dynamic and static architectures are presented, highlighting the improvement the dynamic architecture offers through its ability to provide load balancing.
4

Chen, Yi, and Kajsa Olsson. "Dynamic integration in SCM- the role of TPL." Thesis, Jönköping University, JIBS, Centre of Logistics and Supply Chain Management, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-9371.

Abstract:
Introduction:

Companies are facing an environment of fierce competition; responding to customers' needs and delivering on time at a competitive cost is therefore becoming more and more important. Integration between the actors in the SC is increasing in importance and is seen as a core competitive strategy to respond to customers' demands. SCI can be achieved through efficient linkages among various supply chain activities; however, internal excellence is not enough, and SCM seeks to integrate internal functions with the external operations of suppliers, customers and other SC members. In SCI, TPL firms are said to play an important role because of their expertise and knowledge.

Problem:

Previous researchers have identified gaps in the SCI literature, which does not consider the role of the TPL firm. Similar gaps have been found in the TPL literature, which does not put emphasis on SCI. Nevertheless, the importance of TPL firms in SCI has been pointed out as significant. This thesis therefore studies the role of the TPL firm in SCI to improve the knowledge and create a better understanding.

Purpose:

The purpose of this thesis is to study and uncover the role of the TPL firm Schenker Logistics AB Nässjö in supporting SCI with its customer Relacom and its supplier Nexans, in order to gain a deeper understanding of the phenomenon. By analyzing the drivers, barriers and outcomes of SCI for each firm, the paper pursues the notion that SCI is a dynamic process in which the TPL firm plays an important role.

Method:

This thesis is based on a qualitative approach in which interviews with key persons are the main means of gathering information. The strength of the qualitative approach is its ability to capture rich nuances in the information, which fits our purpose of going deeper into the phenomenon.

Conclusions:

By analyzing the drivers, barriers and outcomes of SCI, we have reached the conclusion that the role of the TPL firm is to achieve benefits through the three C's (the company, its customers and its competitors). The TPL firm also smooths out friction between other members of the SC and helps to create a better, faster, cheaper, smarter and greener SCI. Since the factors influencing SCI are constantly changing, all actors continuously have to keep updated in order to react to pressures from the market.

5

Ocampo, Quintero Manuel Antonio. "Business process based integration of dynamic collaborative organizations." Monterrey : Tecnológico de Monterrey, 2006. http://biblioteca.itesm.mx/cgi-bin/doctec/listdocs?co_recurso=doctec:133300.

Abstract:
Thesis (Master of Science in Information Technology) -- Tecnológico de Monterrey, Campus Monterrey.
Title taken from the presentation screen [as viewed on 30 August 2006]. Includes bibliographical references. Also available in printed format.
6

Adourian, Chahe. "Bidirectional integration of geometric and dynamic simulation tools." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=96756.

Abstract:
Mechanisms to share information from Mechanical Computer Assisted Design (MCAD) tools to simulation models have been demonstrated using various approaches. However, in all cases the information sharing is unidirectional - from the MCAD to the Multi-Body Systems (MBS) simulation - and lacks the bidirectional mapping required in a concurrent engineering context, where both models need to develop in parallel while remaining consistent. We present a modelling library and a model mapping that permit and encourage parallel development of the mechanical assembly in both the MBS simulation and MCAD environments, supporting both an initial full bidirectional transfer and incremental updates. Furthermore, with the adopted approach and a careful selection of the simulation language, MCAD parts can be extended with non-mechanical behaviour in the simulation tool.
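The bidirectional, incremental sharing described above can be reduced to a small sketch. This is a hypothetical Python illustration of the idea, not the thesis's modelling library; the model dictionaries, key names and the "MCAD edits win" conflict rule are assumptions made for the example:

```python
def sync(model_a, model_b, edits_a, edits_b):
    """Propagate both sides' edits to both models so they stay consistent.

    On a conflicting key, edits_a wins; a real tool would need an explicit
    conflict-resolution policy agreed between the two environments.
    """
    merged = {**edits_b, **edits_a}  # edits_a overrides edits_b on conflict
    model_a.update(merged)
    model_b.update(merged)


# The MCAD side resizes a part while the simulation side adds a mass
# property; after one incremental sync both views hold both changes.
mcad = {"bracket.width_mm": 10.0}
mbs = {"bracket.width_mm": 10.0}
sync(mcad, mbs, {"bracket.width_mm": 12.0}, {"bracket.mass_kg": 0.4})
```

Only the edit sets cross between the tools, which is what makes the updates incremental rather than a full re-transfer of either model.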
7

Weng, Bin. "Dynamic integration of evolving distributed databases using services." Thesis, Durham University, 2010. http://etheses.dur.ac.uk/322/.

Abstract:
This thesis investigates the integration of many separate, existing, heterogeneous and distributed databases which, due to organizational changes, must be merged and appear as one database. A solution to some database evolution problems is presented. It presents an Evolution Adaptive Service-Oriented Data Integration Architecture (EA-SODIA) to dynamically integrate heterogeneous and distributed source databases, aiming to minimize the maintenance cost caused by database evolution. An algorithm, named Relational Schema Mapping by Views (RSMV), is designed to integrate source databases that are exposed as services into a pre-designed global schema residing in a data integrator service. Instead of producing hard-coded programs, views are built using relational algebra operations to eliminate the heterogeneities among the source databases. More importantly, the definitions of those views are represented and stored in the meta-database, with constraints to test their validity. Consequently, the method, called Evolution Detection, is able to identify in the meta-database the views affected by evolutions and then modify them automatically. An evaluation is presented using case studies. Firstly, it is shown that most types of heterogeneity defined in this thesis can be eliminated by RSMV, except semantic conflict. Secondly, it shows that little manual modification of the system is required as long as the evolutions follow the rules; human intervention is required, and some existing views discarded, for only three types of database evolution. Thirdly, the computational cost of the automatic modification shows slow linear growth in the number of source databases. Other characteristics addressed include EA-SODIA's scalability, domain independence, autonomy of source databases, and the potential to involve other data sources (e.g. XML). Finally, a descriptive comparison with other data integration approaches is presented. It shows that although other approaches may provide better query-processing performance in some circumstances, the service-oriented architecture provides better autonomy, flexibility and capability of evolution.
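The key mechanism here (view definitions held as data in a meta-database, so that Evolution Detection can find the views a schema change invalidates) can be sketched briefly. This is a hypothetical Python illustration with made-up view and table names; the actual RSMV views are relational-algebra expressions, which are reduced here to a bare source list:

```python
# Each integration view's definition is stored as data, not hard-coded,
# so the views affected by a source-database evolution can be located
# automatically instead of by inspecting program code.
views = {
    "global_customer": {"sources": [("db1", "clients"), ("db2", "customers")]},
    "global_order": {"sources": [("db1", "orders")]},
}


def affected_views(evolved_table):
    """Return the views whose definition references the evolved source
    table, i.e. the views that must be modified (or discarded)."""
    return sorted(name for name, defn in views.items()
                  if evolved_table in defn["sources"])
```

If `db1` changes its `clients` table, only `global_customer` needs rework and `global_order` is untouched, which is what keeps the maintenance cost of evolution low.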
8

Christofi, Stelios. "Dynamic application integration using peer to peer technology." Thesis, City University London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.397928.

9

Cerqueira, Renato Fontoura de Gusmao. "A Dynamic Integration Model for Software Component Systems." Pontifícia Universidade Católica do Rio de Janeiro, 2000. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=2792@1.

Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
Different component systems, such as CORBA, COM, and Java, have different object models and type systems. Such differences make the interoperability between components of distinct systems more difficult, and thus are an obstacle for component reuse. In this dissertation, we argue that an interpreted language with a specific set of reflexive mechanisms, together with a type system with structural compatibility, offers a composition mechanism suitable for dynamic component connection and for interoperability between different component systems. This composition mechanism performs at runtime the tasks of verifying types, connecting, adapting and implementing components, and handles components of different systems in a uniform way, allowing them to be connected transparently. The proposed composition mechanism is based on a model that favors flexibility at runtime. This composition model is composed of two major elements. The first one is an object model, defined in order to represent components of the different systems addressed in this dissertation. Thus, this object model performs the role of a unifying model, that is, a model in which objects from different systems can interact and be represented transparently. The second element of our composition model is a design pattern to implement bindings between interpreted languages and component systems. This design pattern, named Dynamic Language Binding, does not use the traditional stubs technique. Instead of this, it uses reflection and dynamic typing to implement generic proxies, which can represent any component of a specific system, and generic adapters, which allow component implementations using the composition language itself. In order to validate our proposal, we describe the LuaOrb system, which is an implementation of our composition model. LuaOrb uses the interpreted language Lua as its dynamic composition language, and integrates the systems CORBA, COM and Java.
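The Dynamic Language Binding idea (a single generic proxy that resolves method names at call time via reflection, instead of per-interface generated stubs) can be sketched in Python, since Lua-style dynamic dispatch maps naturally onto `__getattr__`. The `GenericProxy` class and the dictionary-based "component system" below are illustrative assumptions, not LuaOrb's API:

```python
class GenericProxy:
    """One proxy class that can represent any component of a system:
    method names are resolved dynamically at call time, so no per-interface
    stub code is ever generated."""

    def __init__(self, invoke):
        # 'invoke' abstracts the underlying component system (CORBA, COM, ...).
        self._invoke = invoke

    def __getattr__(self, method_name):
        def call(*args):
            return self._invoke(method_name, args)
        return call


# A stand-in component system for illustration: a table of operations.
operations = {"add": lambda a, b: a + b, "upper": lambda s: s.upper()}
proxy = GenericProxy(lambda name, args: operations[name](*args))
```

Any call such as `proxy.add(2, 3)` is trapped by `__getattr__` and forwarded generically; one proxy implementation can therefore front components from different systems, which is the dissertation's argument for reflection over stubs.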
10

Andō, Hiroshi. "Dynamic reconstruction and integration of 3D structure information." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/12360.

11

Gresham, Lori J. "Toddlers' Problem Solving: The Importance of Dynamic Integration." University of Cincinnati / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1281459838.

12

Hellström, Johan. "Dynamic Interactions : National Political Parties, Voters and European Integration." Doctoral thesis, Umeå universitet, Statsvetenskap, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-25925.

Abstract:
This thesis consists of an introduction and four self-contained papers, designated I-IV, which extend previous research on national political parties and voters in Western Europe. More specifically, the issues addressed are parties' positions and voters' opinions on European integration and their dynamic interactions, i.e. the extent to which parties influence voters' opinions, voters influence parties, and the conditions under which they influence each other. All four papers make contributions to both the content of the research field and the methodology (statistical techniques) applied. Paper I re-examines and evaluates several hypotheses regarding the way national political parties position themselves with respect to European integration. Based on analysis of panel data on references to Europe in the election manifestos of political parties in 16 West European countries between 1970 and 2003, I present evidence that their stances on European integration have been largely determined by their ideology, here measured by the locations of the parties within party families and their general orientation along the left/right ideological continuum. The results indicate that the influence of ideology has diminished over time and parties have adopted more favourable positions towards the European project, but it is too early to ignore the connection between left/right and pro/anti integration, since many marginal parties are still taking oppositional stances that are strongly related to their ideological commitments. In Paper II, I discuss how configurational comparative methods (i.e. Qualitative Comparative Analysis, QCA) and statistical methods can be combined to provide tests for the sufficiency of any given combination of causal conditions. The potential utility of the mixed-method approach for analyzing political phenomena is demonstrated by applying it to cross-national data regarding party-based Euroscepticism in Western Europe.
The findings show that oppositional stances to European integration are mainly restricted to non-governmental ideological fringe parties on both the left and right. Further, radical left parties with Eurosceptical positions are largely restricted to countries with social democratic (i.e. Nordic) welfare state regimes. The empirical example presented in this paper demonstrates that configurational methods can be successfully combined with related statistical methods. Paper III examines and evaluates the link between electorates’ opinions and national political parties’ positions on European integration, i.e. the extent to which political parties lead and/or follow public opinion on this issue. Applying a method for causal modelling to panel data concerning political parties’ positions and voters’ opinions in 15 countries from 1973 to 2003, I find (contrary to previous investigations of this relationship) that there is little empirical support for an electoral connection or reciprocal causation between party positions and electorates’ opinion regarding European integration. Parties have an influence on voter opinions, but they are largely unresponsive to changes in voter opinion. In Paper IV, I examine when parties do (and do not) influence voters’ opinions about EU policy issues. According to previous research, whether parties are able to persuade their constituents to adopt their standpoints depends on several conditions: characteristics and preferences of individual voters, intra-party factors, inter-party factors and several factors that affect the salience of EU issues at the domestic level. Applying hierarchical linear models to data concerning voters’ opinions and political parties’ positions in 14 West European countries, I present findings regarding the conditions under which parties are actually able to influence voters’ opinions concerning European integration.
13

Pan, Min-Cheng. "Frame to frame integration and recognition for dynamic imagery." Thesis, University of Reading, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286022.

14

Badiola, Sergio Mateo. "Support Integration of Dynamic Workload Generation to SAMBA Framework." Pontifícia Universidade Católica do Rio de Janeiro, 2005. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=7349@1.

Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
FURNAS CENTRAIS ELÉTRICAS S.A
Alexandre Plastino's thesis presents a framework for the development of SPMD parallel applications, named SAMBA, that enables the generation of different versions of a parallel application by incorporating different load balancing algorithms from an internal library. This dissertation presents a dynamic workload generation tool, integrated into SAMBA, that makes it possible to create, at execution time, different external workload profiles to be applied to a parallel application under study. The objective is to enable a parallel application developer to select the most appropriate load balancing algorithm based on its performance under variable external workload conditions. In order to validate this integration, two SPMD applications were implemented.
15

Fernández, Ruiz Bruno Miguel 1973. "Architecture for the integration of dynamic traffic management systems." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/80949.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2000.
Includes bibliographical references (p. 104-106).
by Bruno Miguel Fernández Ruiz.
S.M.
16

Heikkilä, V. (Ville). "Optimizing continuous integration testing using dynamic source code analysis." Master's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201802131229.

Abstract:
The number of different tools and methods available for software testing is massive. Exhaustive testing of a project can easily take days, weeks or even months. Therefore, it is generally advisable to prioritize and optimize the tests performed. The optimization method chosen for study in this thesis was Dynamic Source Code Analysis (DSCA). In DSCA, a piece of target software is monitored during testing to find out which parts of the target code get executed. By storing this information, later code changes can trigger the stored test cases that caused execution in the modified parts of the code. The test setup for this project consisted of three open-source software targets, three fuzz-testing suites, and the DSCA software. Test runs of different lengths were triggered by code changes of varying size; the durations of these runs and the sizes of the code changes were stored for further analysis. The purpose of this thesis was to create a method for the fuzz-testing suite to communicate reliably with the DSCA software, in order to find out how much time can be saved if CI testing is optimized by scanning every source-code change to obtain a targeted test set, as opposed to running a comprehensive set of tests after each change. The data analysis demonstrates with certainty that using DSCA reduces the average run-time of a test by up to 50%.
17

Wray, Thomas. "Developments in dynamic field gradient focusing : microfluidics and integration." Thesis, University of Liverpool, 2012. http://livrepository.liverpool.ac.uk/7973/.

Abstract:
Advances in modern science require the development of more robust and improved systems for electroseparations in chromatography. In response, the development of a new analytical platform is discussed. DFGF (Dynamic Field Gradient Focusing) is a separation technique, first described in 1998, which exploits differences in the electrophoretic mobility and hydrodynamic area of analytes to achieve separation. This is done by applying a hydrodynamic flow along a channel in one direction and a counteracting electric field gradient in the opposite direction, so that analytes reach a focal point according to their electrophoretic mobility. This project has produced innovations that improve existing DFGF devices, including the design and manufacture of a novel packing material, while developing the latest DFGF system. This system incorporates a microfluidic separation channel, eliminating the need for packing material or a monolith. The new microfluidic device also features whole-on-column UV detection. Improvements arising from the development of this device are discussed, most notably the use of a new rapid prototyping technique. Example applications undertaken with the new device are demonstrated, including novel samples and integration with mass spectrometry and 2D-HPLC.
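The focusing principle can be stated as a small balance equation: an analyte with electrophoretic mobility mu comes to rest where its electrophoretic velocity against the field gradient cancels the bulk flow velocity, i.e. where mu * E(x) = v_flow. Below is a hypothetical numerical sketch with a linear field gradient; all values are illustrative and none are taken from the thesis:

```python
def focal_position(mu, v_flow, e0, gradient):
    """Position x where mu * (e0 + gradient * x) = v_flow, i.e. where the
    electrophoretic velocity cancels the hydrodynamic flow (SI units)."""
    return (v_flow / mu - e0) / gradient


# Two analytes with different mobilities settle at different positions
# along the channel, which is the separation mechanism of DFGF.
x1 = focal_position(mu=1e-8, v_flow=1e-4, e0=0.0, gradient=2e5)
x2 = focal_position(mu=2e-8, v_flow=1e-4, e0=0.0, gradient=2e5)
```

The lower-mobility analyte needs a stronger field to stop, so it focuses further down the gradient (about 0.05 m versus 0.025 m with these numbers).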
APA, Harvard, Vancouver, ISO, and other styles
18

Hua, Tuan Cuong. "Financial integration and dynamic linkage in the ASEAN-5." Thesis, Bangor University, 2007. https://research.bangor.ac.uk/portal/en/theses/financial-integration-and-dynamic-linkage-in-the-asean5(6a088f97-7b4f-401e-b439-3300ccbe5f4c).html.

Full text
Abstract:
This PhD dissertation examines the state of financial integration in Indonesia, Malaysia, the Philippines, Singapore and Thailand, grouped together under the shorthand description "ASEAN-5". The United States (US) and Japan are considered reference markets. Two main financial markets - credit and stock markets - are investigated based on data availability and reliability. For credit market integration, data are monthly observations of three-month real money market rates covering January 1981 to August 2006. Empirical tests of long-term integration are conducted on the basis of the Johansen cointegration tests. Unrestricted VAR and generalised impulse response function (IRF) forecasts are used to assess short-run dynamics. We find that the ASEAN markets tend to integrate with and interact with each other rather than with the reference markets, the US and Japan. Indeed, we find evidence of full integration, or a common trend of real interest rates, within the ASEAN. Singapore, the only developed financial market in the group, exerts considerable influence on the other regional markets, while the impact of the US and Japan on the region has declined over time, especially in the post-crisis period. For stock market integration, data are weekly and daily price returns covering January 1991 to December 2006. The CAPM-TGARCH model is employed to examine the degree of market integration. The ADCC-MVGARCH model is used to estimate time-varying conditional correlations in the US, Japan and the ASEAN. In addition, news impact curves and news impact surfaces are applied to detect asymmetric effects of news on conditional volatilities and correlations. In line with the findings from credit market integration, the results of stock market integration indicate that the tendency towards regionalisation has strengthened over time, especially after the Asian crisis.
The ASEAN stock markets have not only increased their regional impact but also, to some extent, influence the most developed financial markets in the world, such as the US and Japan. This research contributes to the existing literature on financial integration; helps regional as well as global investors manage their investment portfolios more efficiently; and, to some extent, supports policy makers in negotiating and advancing the agenda of reform and cooperation in the region.
APA, Harvard, Vancouver, ISO, and other styles
19

Frazier, William. "Application of Symplectic Integration on a Dynamical System." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3213.

Full text
Abstract:
Molecular Dynamics (MD) is the numerical simulation of a large system of interacting molecules, and one of the key components of an MD simulation is the numerical estimation of the solutions to a system of nonlinear differential equations. Such systems are very sensitive to discretization and round-off error, and correspondingly, standard techniques such as Runge-Kutta methods can lead to poor results. However, MD systems are conservative, which means that we can use Hamiltonian mechanics and symplectic transformations (also known as canonical transformations) in analyzing and approximating solutions. This is standard in MD applications, leading to numerical techniques known as symplectic integrators; often, these techniques are developed for well-understood Hamiltonian systems such as Hill's lunar equation. This thesis explores how well symplectic techniques developed for such well-understood systems (specifically, Hill's lunar equation) address discretization errors in MD systems for which standard techniques fail.
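As a generic illustration of the point above (a minimal sketch, not code from the thesis): on the harmonic oscillator with Hamiltonian H = (p^2 + q^2)/2, explicit Euler injects energy at every step, while the symplectic (semi-implicit) Euler variant keeps the energy bounded:

```python
def energy(q, p):
    # Hamiltonian of the unit harmonic oscillator, H = (p^2 + q^2) / 2
    return 0.5 * (p * p + q * q)

def euler_step(q, p, dt):
    # explicit Euler: both updates use the old state, so energy grows each step
    return q + dt * p, p - dt * q

def symplectic_euler_step(q, p, dt):
    # semi-implicit (symplectic) Euler: "kick" the momentum, then "drift"
    p_new = p - dt * q
    return q + dt * p_new, p_new

def run(step, n=10_000, dt=0.01):
    q, p = 1.0, 0.0
    for _ in range(n):
        q, p = step(q, p, dt)
    return energy(q, p)

e_euler = run(euler_step)                   # drifts far above the initial 0.5
e_symplectic = run(symplectic_euler_step)   # stays close to 0.5
```

The symplectic map preserves phase-space volume and conserves a nearby "shadow" Hamiltonian, which is why such schemes remain stable over the very long runs typical of MD.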
APA, Harvard, Vancouver, ISO, and other styles
20

Acosta, Serafini Pablo M. (Pablo Manuel) 1971. "Predictive multiple sampling algorithm with overlapping integration intervals for linear wide dynamic range integrating image sensors." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/16612.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 163-170).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Machine vision systems are used in a wide range of applications such as security, automated quality control and intelligent transportation systems. Several of these systems need to extract information from natural scenes in the section of the electromagnetic spectrum visible to humans. These scenes can easily have intra-frame illumination ratios in excess of 10⁶ : 1. Solid-state image sensors that can correctly process wide illumination dynamic range scenes are therefore required to ensure correct reliability and performance. This thesis describes a new algorithm to linearly increase the illumination dynamic range of integrating-type image sensors. A user-defined integration time is taken as a reference to create a potentially large set of integration intervals of different duration (the selected integration time being the longest) but with a common end. The light intensity received by each pixel in the sensing array is used to choose the optimal integration interval from the set, while a pixel saturation predictive decision is used to overlap the integration intervals within the given integration time such that only one frame using the optimal integration interval for each pixel is produced. The total integration time is never exceeded. Benefits from this approach are motion minimization, real-time operation, reduced memory requirements, programmable light intensity dynamic range increase and access to incremental light intensity information during the integration time.
The algorithm is fully described, with special attention to the resulting sensor transfer function, the signal-to-noise ratio, characterization of the types and effects of errors in the predictive decision, calculation of the optimal integration intervals set given a certain set size, calculation of the optimal number of integration intervals, and the impact of the new algorithm on image data compression. An efficient mapping of this algorithm to a CMOS process was done by designing a proof-of-concept integrated circuit in a 0.18 μm 1.8 V 5-metal layer process. The major components of the chip are a 1/3" VGA (640 x 480) pixel array, a 4-bit per pixel memory array, an integration controller array and an analog-to-digital converter/correlated double sampling (ADC/CDS) array. Supporting components include pixel and memory row decoders, memory and converter output digital multiplexers, a pixel-to-ADC/CDS analog multiplexer and test structures. The pixels have a fill factor of nearly 50%, as most of the needed system additions and complexity were taken off-pixel. The prototype is fully functional and linearly expands the dynamic range by more than 60 dB.
by Pablo M. Acosta-Serafini.
Ph.D.
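The interval-selection idea in the abstract above can be caricatured in a few lines. This is an idealized sketch: the interval ratios, unit full-well capacity and the perfect saturation predictor are assumptions for illustration, not the chip's actual design:

```python
def choose_interval(intensity, t_total=1.0, ratios=(1, 4, 16, 64), full_well=1.0):
    """Pick the longest interval from {t_total / r} that the pixel can use
    without saturating, given an idealized saturation prediction."""
    for r in ratios:
        t = t_total / r
        if intensity * t < full_well:
            return t
    return t_total / ratios[-1]   # brightest pixels: shortest interval, may clip

def read_pixel(intensity):
    t = choose_interval(intensity)
    charge = min(intensity * t, 1.0)   # accumulated charge, clipped at full well
    return charge / t                  # rescaling makes the response linear

# a dim pixel and a 100x brighter pixel both read out proportionally to intensity
low, high = read_pixel(0.5), read_pixel(50.0)
```

Because each pixel's output is rescaled by its own integration time, the response stays linear over a range extended by the largest ratio, while all intervals share the same end point within one frame time.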
APA, Harvard, Vancouver, ISO, and other styles
21

Grooms, Daniel Douglas. "Optimization of hybrid dynamic/steady-state processes using process integration." College Station, Tex.: Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

He, Zhong. "Integration of dynamic data into reservoir description using streamline approaches." Texas A&M University, 2003. http://hdl.handle.net/1969.1/1188.

Full text
Abstract:
Integration of dynamic data is critical for reliable reservoir description and has been an outstanding challenge for the petroleum industry. This work develops practical dynamic data integration techniques using streamline approaches to condition static geological models to various kinds of dynamic data, including two-phase production history, interference pressure observations and primary production data. The proposed techniques are computationally efficient and robust, and thus well-suited for large-scale field applications. We can account for realistic field conditions, such as gravity, and changing field conditions, arising from infill drilling, pattern conversion, and recompletion, etc., during the integration of two-phase production data. Our approach is fast and exhibits rapid convergence even when the initial model is far from the solution. The power and practical applicability of the proposed techniques are demonstrated with a variety of field examples. To integrate two-phase production data, a travel-time inversion analogous to seismic inversion is adopted. We extend the method via a 'generalized travel-time' inversion to ensure matching of the entire production response rather than just a single time point while retaining most of the quasi-linear property of travel-time inversion. To integrate the interference pressure data, we propose an alternating procedure of travel-time inversion and peak amplitude inversion or pressure inversion to improve the overall matching of the pressure response. A key component of the proposed techniques is the efficient computation of the sensitivities of dynamic responses with respect to reservoir parameters. These sensitivities are calculated analytically using a single forward simulation. Thus, our methods can be orders of magnitude faster than finite-difference based numerical approaches that require multiple forward simulations. 
The streamline approach has also been extended to identify reservoir compartmentalization and flow barriers using primary production data in conjunction with decline type-curve analysis. The streamline 'diffusive' time of flight provides an effective way to calculate the drainage volume in 3D heterogeneous reservoirs. Flow barriers and reservoir compartmentalization are inferred by matching the drainage volumes from the streamline-based calculation and decline type-curve analysis. The proposed approach is well-suited for application in the early stages of field development with limited well data and has been illustrated using a field example from the Gulf of Mexico.
APA, Harvard, Vancouver, ISO, and other styles
23

Bellström, Peter. "Schema Integration : How to Integrate Static and Dynamic Database Schemata." Doctoral thesis, Karlstads universitet, Avdelningen för informatik och projektledning, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-5541.

Full text
Abstract:
Schema integration is the task of integrating several local schemata into one global database schema. It is a complex, error-prone and time consuming task. Problems arise in recognizing and resolving problems, such as differences and similarities, between two schemata. Problems also arise in integrating static and dynamic schemata. In this thesis, three research topics are addressed: Maintaining Vocabulary in Schema Integration, Integration of Static Schemata and Integration of Static and Dynamic Schemata, while applying the notation in the Enterprise Modeling approach. In Maintaining Vocabulary in Schema Integration an analysis of what semantic loss is and why it occurs in schema integration is conducted. Semantic loss is a problem that should be avoided because both concepts and dependencies might be lost. In the thesis, it is argued that concepts and dependencies should be retained as long as possible in the schemata. This should facilitate user involvement since the users’ vocabulary is retained even after resolving similarities and differences between two schemata. In Integration of Static Schemata two methods are developed. These methods facilitate recognition and resolution of similarities and differences between two conceptual database schemata.  By applying the first method, problems between two schemata can be recognized that otherwise could pass unnoticed; by applying the second method, problems can be resolved without causing semantic loss by retaining concepts and dependencies in the schemata. In Integration of Static and Dynamic Schemata a method on how to integrate static and dynamic schemata is developed. In the method, focus is put on pre- and post-conditions and how to map these to states and state changes in the database. By applying the method, states that are important for the database can be designed and integrated into the conceptual database schema. 
Also, by applying the method, active database rules can be designed and integrated into the conceptual database schema.
APA, Harvard, Vancouver, ISO, and other styles
24

Piyasinghe, Lakshan Prageeth. "Dynamic Phasor Based Analysis and Control in Renewable Energy Integration." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/6015.

Full text
Abstract:
The objective of this dissertation is to carry out dynamic modeling, analysis and control of power systems with Renewable Energy Sources (RES) such as photovoltaic (PV) power sources and wind farms. The dissertation work is mainly focused on microgrids, since they play a major role in modern power systems and tend to have higher renewable power penetration. Two main theoretical concepts, dynamic phasors and impedance modeling, have been adopted to model and analyze power systems/microgrids with RES. The initial state calculation, which is essential for small signal analysis of a system, is carried out as the first step of the dissertation work. Dynamic phasor and impedance modeling techniques are utilized to model and analyze power systems/microgrids as the second phase of the work. This part consists of two main studies. The first case investigates the impedance modeling of a Thyristor Controlled Series Capacitor (TCSC) for sub-synchronous resonance (SSR) analysis, where a wind farm is connected to a power system through a series compensated line. The second case utilizes the dynamic phasor concept to model a microgrid in unbalanced conditions, where the unbalance is caused by a single-phase PV system connected to the microgrid. The third phase of the dissertation work covers upper-level control of the microgrid: prediction and optimization control for a microgrid with a wind farm, a PV system, an energy storage system and loads is evaluated. The last part of the dissertation work focuses on real-time modeling and a hardware-in-the-loop simulation test bed for microgrid applications. This dissertation has led to four journal papers (three accepted, one submitted) and five conference papers.
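The dynamic phasor concept the dissertation builds on can be illustrated with its textbook definition, the sliding one-cycle average Fourier coefficient (a generic sketch, not the dissertation's model):

```python
import numpy as np

def dynamic_phasor(x_win, dt, f0=50.0, k=1, t0=0.0):
    """k-th dynamic phasor of one window of samples: the window-averaged
    Fourier coefficient X_k = (1/T) * integral of x(t) * exp(-j*k*w0*t) dt,
    with T = 1/f0 the fundamental period."""
    T = 1.0 / f0
    t = t0 + dt * np.arange(len(x_win))
    return np.sum(x_win * np.exp(-2j * np.pi * k * f0 * t)) * dt / T

# for x(t) = A*cos(w0*t + phi), the fundamental phasor is (A/2)*exp(j*phi)
f0, n = 50.0, 200
dt = 1.0 / (f0 * n)                    # n samples over exactly one cycle
t = dt * np.arange(n)
x = 2.0 * np.cos(2 * np.pi * f0 * t + 0.3)
X1 = dynamic_phasor(x, dt, f0=f0)      # magnitude 1.0, angle 0.3 rad
```

Sliding the window turns the fast 50/60 Hz waveform into slowly varying phasor envelopes; in unbalanced cases such as the single-phase PV above, a separate phasor per phase or sequence component becomes the state variable.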
APA, Harvard, Vancouver, ISO, and other styles
25

Staats, Richard C. (Richard Charles). "Integration of predictive routing information with dynamic traffic signal control." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/35433.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (leaves 306-310).
by Richard C. Staats.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
26

Sanchez, Jennifer D. (Jennifer D'Metria). "Evaluation of a direct time integration scheme for dynamic analysis." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40479.

Full text
Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2007.
Includes bibliographical references (p. 43).
Direct integration schemes are important tools used in the dynamic analysis of many structures. It is critical that the solutions obtained from these schemes be accurate. Currently, one of the most widely used direct integration schemes is the trapezoidal rule. It is favored because it is a single-step method and its results are second-order accurate. However, in cases with large deformations and longer integration times, the trapezoidal rule fails. A new composite scheme shows promise in maintaining stability where the trapezoidal rule fails. It is a two-step method that makes use of the trapezoidal rule and the three-point Euler backward method. The purpose of this study is to compare the trapezoidal rule and the new composite method on two nonlinear problems in order to determine whether the composite method generates more accurate results than the trapezoidal rule.
by Jennifer D. Sanchez.
S.B.
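The two-substep composite idea can be sketched on a linear test problem. The half-step split and matrix form below are standard textbook choices, assumed for illustration rather than taken from the thesis:

```python
import numpy as np

def composite_step(y, A, h):
    """One step of the two-substep composite scheme for y' = A @ y:
    trapezoidal rule on [t, t + h/2], then the three-point Euler
    backward formula on [t + h/2, t + h]."""
    I = np.eye(A.shape[0])
    # substep 1: trapezoidal rule over the first half step
    y_half = np.linalg.solve(I - (h / 4) * A, (I + (h / 4) * A) @ y)
    # substep 2: backward difference through y(t), y(t+h/2), y(t+h):
    # (3*y_new - 4*y_half + y) / h = A @ y_new
    return np.linalg.solve(3 * I - h * A, 4 * y_half - y)

# undamped oscillator q'' = -q written as y = [q, q'], y' = A @ y
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
y = np.array([1.0, 0.0])
h = 0.1
for _ in range(1000):
    y = composite_step(y, A, h)
energy = 0.5 * float(y @ y)   # bounded by the initial 0.5: no blow-up
```

The trapezoidal substep alone would conserve energy exactly on this problem; the backward-Euler substep adds slight numerical damping, which is what stabilizes the composite scheme where the pure trapezoidal rule fails.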
APA, Harvard, Vancouver, ISO, and other styles
27

Damle, Pushkar Hari. "A system dynamics model of the integration of new technologies for ship systems." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/35216.

Full text
Abstract:
System dynamics has been used to better understand the dynamics within complex natural and social systems. This understanding enables us to make decisions and define strategies that help resolve the problematic behaviors associated with these systems. For example, within an operating environment such as the US Navy, decisions taken today can have a long-lasting impact on system performance. The Navy has experienced large cost overruns during the process of implementing new technology on ship systems, which can also affect total life-cycle performance. The integration phase accounts for most of the cost overruns experienced over the new technology life cycle (development, integration, and operation/support/disposal). We have observed a general concern that ship-builders and planners lack an understanding of the dynamic behavior of the processes that comprise the integration phase. One goal of our research effort has been to better understand the dynamic behavior of the new technology integration processes, using the dynamic modeling technique known as System Dynamics. Our approach has also been to provide a comprehensive knowledge elicitation process in which members from the shipbuilding industry, the US Navy, and the Virginia Tech System Performance Laboratory take part in group model-building exercises. The system dynamics model developed in this manner is based on data obtained from the experts. An investigation of these dynamics yields a dominant cost behavior, S-shaped growth, that characterizes the technology integration processes.
The following two dynamic hypotheses relative to the life-cycle cost and performance of the inserted new technology were confirmed: (1) for the current structure of the model, the more complex the new technology, the less affordable it becomes; (2) integration of immature (less developed) technologies is associated with higher costs. Another interesting insight is that cost is very sensitive to material procurement. Future research can address a more detailed level of abstraction for the various activities included in the technology integration phase, such as testing and evaluation, the cost of rework, and the risks associated with inadequate testing. This will add to our evolving understanding of the behavior of individual activities in the technology integration process.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
28

Schäfer, Andreas. "Economic Development and Economic Integration." Doctoral thesis, Universitätsbibliothek Leipzig, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-128100.

Full text
Abstract:
Macroeconomists have dedicated substantial effort to clarifying the puzzle of growing incomes in some regions of the world and rising differences in standards of living across the globe. Although the question of why economies perform differently is as old as economic thought itself, it is only recently that economists have integrated development patterns over the very long run into formal dynamic general equilibrium models. The models presented here consider development patterns observed in advanced economies since the Industrial Revolution. The objective of this study is to shed light on the mechanics of economic development within the framework of (dynamic) general equilibrium models. Since this requires solving the multi-dimensional and non-linear systems of difference or differential equations that govern the evolution of the model economy over time (in some cases with heterogeneous agents), analytical solutions are in general not obtainable. Therefore, this work relies largely on numerical and computational methods in order to visualize the development path of economies over time.
APA, Harvard, Vancouver, ISO, and other styles
29

Huang, Zhen. "Dynamic Emission Prediction Platform and It's Integration with Microscopic Traffic Simulation." Thesis, Uppsala University, Department of Information Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-108012.

Full text
Abstract:

With the increase of traffic congestion and vehicle emissions, environmental pollution becomes an important concern for traffic policy makers and traffic planners in their decision-making process. In order to study and reduce road transport emissions, an accurate estimation of the emission amount is crucial for traffic planning and management purposes.

The emission value from the traffic on a given road section depends strongly on the state of the vehicles. The basis for a detailed estimation is therefore the emission rate as a function of the instantaneous vehicle state, such as speed and acceleration.

In this thesis, an application is built by integrating emission simulation with the traffic simulator at KTH-TPMA, a real-time application for imitating real traffic situations, to predict emission values. The approach adopted is based on vehicle data from traffic simulations, which serve as the real-world traffic data provider. With this application, traffic simulation and emission simulation can be executed with a distributed computing approach. The thesis investigates how these two simulations are implemented in a computer simulation system, and their performance and accuracy.

The major contribution of this thesis is integrating traffic simulation with emission simulation to estimate reasonable emission values. It illustrates how these two simulation applications can be integrated to provide a tool for policy making and planning.

Key Words: Emission Model Simulation, Traffic Simulator (KTH-TPMA), Distributed Computing, CORBA and Web Service.
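A minimal stand-in for the per-vehicle emission rate discussed above: an illustrative power-demand model whose coefficients are made up for the sketch, not the emission model used in the thesis:

```python
def emission_rate(v, a, idle_rate=0.2):
    """Illustrative instantaneous emission rate [g/s] as a function of speed
    v [m/s] and acceleration a [m/s^2]: an idle floor plus a term proportional
    to tractive power demand (inertia + rolling resistance + aerodynamic drag).
    All coefficients are invented for the sketch."""
    mass, c_roll, c_drag, g = 1500.0, 0.012, 0.4, 9.81
    power = v * (mass * a + c_roll * mass * g + c_drag * v ** 2)   # watts
    return idle_rate + 2e-5 * max(power, 0.0)

cruise = emission_rate(15.0, 0.0)   # steady cruising
accel = emission_rate(15.0, 1.5)    # same speed, hard acceleration emits more
```

In the integrated system, the traffic simulator supplies (speed, acceleration) per vehicle per time step; summing such rates over vehicles and steps yields the link-level emission estimates.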

APA, Harvard, Vancouver, ISO, and other styles
30

Calargun, Canku Alp. "Dynamic Model Integration And 3d Graphical Interface For A Virtual Ship." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609309/index.pdf.

Full text
Abstract:
This thesis addresses the improvement of the Naval Surface Tactical Maneuvering Simulation System (NSTMSS), a physically based modeling simulator that combines different simulators in a distributed environment with the help of High Level Architecture (HLA), to be used in naval tactical training systems. The objective is to upgrade the simulation program so that its improved physical models achieve a more realistic movement of a ship in a virtual environment. The simulator is also able to model ocean waves and ship wakes for a more realistic view. The new naval model includes a 4 degrees of freedom (DOF) maneuvering model and a wave model. Numerical results from real life are used for modeling purposes to increase the realism of the simulator. Since the product of the thesis work needs to be running computer code that can be integrated into the NSTMSS system, the code implementation and algorithm details are also covered. Comparisons between the wave models and physical models are evaluated for better real-time performance. The results of this thesis show that the integration of a realistic 4-DOF ship model into the system improved the capability of NSTMSS to give more data to student officers while making maneuvers. The results also indicate that the use of waves and ship wakes takes the simulator to the next level in environment perception.
APA, Harvard, Vancouver, ISO, and other styles
31

Morozovska, Kateryna. "Dynamic Rating of Power Lines and Transformers for Wind Energy Integration." Licentiate thesis, KTH, Elektroteknisk teori och konstruktion, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-226564.

Full text
Abstract:
Dynamic Rating (DR) is usually associated with unlocking the capacity of power lines and transformers using available information on weather conditions. Our studies show that Dynamic Rating is a broad concept that requires further study and development. The capacity of most power devices is highly dependent on the heat transfer properties of the materials of which the devices are made. To ensure correct power limits of the equipment, one must take into consideration not only the power load, but also ambient conditions, such as temperature, wind speed, wind direction, solar irradiation, humidity, pressure, radiation into the atmosphere and magnetic losses. Dynamic rating is an alternative to standard constant rating, which is designed with reference to extreme weather and load conditions. Some areas are more likely than others to experience extreme weather conditions, which may occur only a few days per year for short periods of time. Such a distribution of weather parameters gives an opportunity to exploit the existing material properties of the power equipment and achieve a better utilization of the grid. The thesis is divided into two parallel topics, dynamic line rating and dynamic transformer rating; this division is motivated by the importance of analysing the operation of these parts of the power network in greater detail. Power lines and transformers play a significant part in grid planning and have the potential to yield economic benefits when used with DR. The main focus of the doctoral project "Dynamic rating of power lines and transformers for wind energy integration" is on exploring potential ways to connect wind power to the grid with the help of dynamic rating technologies. A major focus of the work therefore lies on the analysis of DR-based connection of variable energy sources such as wind farms.
The thesis presents a comparison of different line rating methods and proposes a new way of classifying them. Evaluation of the dynamic line rating application has shown that the power grid can accommodate additional capacity from wind power generation. Literature analysis and detailed evaluation of conductor heat balance models have led to an experimental evaluation of the convective cooling effect. The dynamic transformer rating application has shown that the size of the power transformer can be decreased without compromising component availability.
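The conductor heat balance behind dynamic line rating can be sketched as follows. The convection coefficient and other constants are illustrative assumptions, not the IEEE 738 / Cigré correlations a full analysis would use:

```python
import math

def ampacity(t_cond=80.0, t_amb=25.0, wind=0.6, r_ac=8e-5,
             q_solar=10.0, diameter=0.028, emissivity=0.8):
    """Steady-state current limit [A] from a simplified per-metre heat balance
    at the conductor temperature limit t_cond [deg C]:
        I^2 * R_ac + q_solar = q_convective + q_radiative
    All coefficients below are illustrative, not standard correlations."""
    sigma = 5.67e-8                          # Stefan-Boltzmann constant [W/m^2K^4]
    area = math.pi * diameter                # conductor surface area per metre [m^2]
    h_conv = 10.0 + 8.0 * math.sqrt(wind)    # crude wind-dependent film coefficient
    q_conv = h_conv * area * (t_cond - t_amb)
    q_rad = emissivity * sigma * area * ((t_cond + 273.15) ** 4
                                         - (t_amb + 273.15) ** 4)
    return math.sqrt(max(q_conv + q_rad - q_solar, 0.0) / r_ac)

# more wind -> more convective cooling -> higher dynamic rating
static_rating = ampacity(wind=0.6)
windy_rating = ampacity(wind=5.0)
```

The monotone dependence on wind speed is exactly what couples DLR to wind farms: the hours of high wind generation tend to coincide with the hours of highest line capacity.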



Dynamic Rating for Wind Power
APA, Harvard, Vancouver, ISO, and other styles
32

Schnatter, Sylvia. "Integration-based Kalman-filtering for a Dynamic Generalized Linear Trend Model." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/424/1/document.pdf.

Full text
Abstract:
The topic of the paper is filtering for non-Gaussian dynamic (state space) models by approximate computation of posterior moments using numerical integration. A Gauss-Hermite procedure is implemented based on the approximate posterior mode estimator and curvature recently proposed in [2]. This integration-based filtering method is illustrated by a dynamic trend model for non-Gaussian time series. Comparison of the proposed method with other approximations ([15], [2]) is carried out by simulation experiments for time series from Poisson, exponential and Gamma distributions. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
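The integration-based idea, computing posterior moments of a non-Gaussian model by Gauss-Hermite quadrature, can be sketched for a single Poisson observation with a Gaussian prior (a generic sketch of the quadrature step, not the paper's full filter):

```python
import numpy as np

def posterior_mean(y, m=0.0, s=1.0, n_nodes=30):
    """Approximate E[theta | y] for prior theta ~ N(m, s^2) and observation
    y ~ Poisson(exp(theta)), via Gauss-Hermite quadrature after the change
    of variables theta = m + sqrt(2) * s * x (weight exp(-x^2) absorbed)."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    theta = m + np.sqrt(2.0) * s * x
    log_lik = y * theta - np.exp(theta)        # Poisson log-likelihood, y! dropped
    lik = np.exp(log_lik - log_lik.max())      # rescale for numerical stability
    return float(np.sum(w * theta * lik) / np.sum(w * lik))
```

The normalising constants of the Gaussian and the quadrature cancel in the ratio; in a filter, this step replaces the closed-form Kalman update, with the prediction step supplying (m, s^2) at each time point.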
APA, Harvard, Vancouver, ISO, and other styles
33

Nie, Yisu. "Integration of Scheduling and Dynamic Optimization: Computational Strategies and Industrial Applications." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/380.

Full text
Abstract:
This thesis focuses on the development of model-based optimization strategies for the integration of process scheduling and dynamic optimization, and on applications of the integrated approaches to industrial polymerization processes. The integrated decision-making approaches seek to exploit the synergy between production schedule design and process unit control to improve process performance. The integration problem has received much attention from both academia and industry over the past decade. For scheduling, we adopt two formulation approaches based on the state equipment network and the resource task network, respectively. For dynamic optimization, we rely on the simultaneous collocation strategy to discretize the differential-algebraic equations. Two integrated formulations are proposed that result in mixed discrete/dynamic models, and solution methods based on decomposition approaches are addressed. A class of ring-opening polymerization processes is used for our industrial case studies. We develop rigorous dynamic reactor models for both semi-batch homopolymerization and copolymerization operations. The reactor models are based on first principles such as mass and heat balances, reaction kinetics and vapor-liquid equilibria. We derive reactor models with both the population balance method and the method of moments. The obtained reactor models are validated using historical plant data. Polymerization recipes are optimized with dynamic optimization algorithms to reduce polymerization times by modifying operating conditions such as the reactor temperature and monomer feed rates over time. Next, we study scheduling methods that involve multiple process units and products. The resource task network scheduling model is reformulated into state space form, which offers a good platform for incorporating dynamic models. Lastly, for the integration study, we investigate a process with two parallel polymerization reactors and downstream storage and purification units.
The dynamic behaviors of the two reactors are coupled through shared cooling resources. We formulate the integration problem by combining the state space resource task network model with the moment-based reactor model. The case study results indicate promising improvements in process performance from applying dynamic optimization and scheduling optimization separately and, more importantly, from the integration of the two.
APA, Harvard, Vancouver, ISO, and other styles
34

Marazzato, Frédéric. "Discrete element and time-integration methods forelasto-plasticity and dynamic cracking." Thesis, Paris Est, 2020. http://www.theses.fr/2020PESC1001.

Full text
Abstract:
Cette thèse propose des contributions aux méthodes éléments discrets (MED) et à l’intégration temporelle explicite avec pour objectif applicatif les calculs de fissuration et de fragmentation pour des matériaux métalliques soumis à des chargements dynamiques. Les MED, qui sont traditionnellement utilisées pour simuler le comportement de matériaux granulaires, sont ré-interprétées à la lumière des méthodes de discrétisation de gradient afin d’être appliquées à la simulation de matériaux continus. Les maillages utilisables avec la MED proposée ont été étendus des maillages de Voronoi à des maillages polyédriques généraux. Les comportements simulables par la méthode ont été étendus de l’élasto-dynamique à l’élasto-plasticité dynamique par l’ajout d’un degré de liberté tensoriel par cellule. De plus, la méthode est robuste par rap-port à la limite incompressible et ses paramètres ne dépendent que des paramètres matériau. Une méthode d’intégration temporelle explicite conservant une pseudo-énergie, même pour des comportements non-linéaires et des pas de temps variables, a également été développée afin d’éviter une dissipation numérique de l’énergie disponible pour la dissipation plastique et la fissuration. Cette méthode a été couplée avec la MED précédente. Enfin, la propagation de fissures de Griffith à travers les facettes du maillage a été intégrée à la MED pour des comportements élastiques linéaires en deux dimensions d’espace. Le taux de restitution d’énergie est calculé pour chaque mode de fissuration à partir des facteurs d’intensités des contraintes qui sont approchés près de la fissure. Enfin, un critère de maximisation de la densité d’énergie élastique sur les facettes liée à la pointe de fissure permet de simuler l’orientation de la propagation
The present Ph.D. dissertation proposes contributions to discrete element methods (DEM) and explicit time integration schemes with a view towards dynamic cracking for metallic materials under dynamic loading. DEM, which are usually used to simulate granular materials, are understood through the prism of gradient discretization methods in order to simulate continuous materials. The method has been extended from previous Voronoi meshes to support general polyhedral meshes. Material behaviours have been extended from elasto-dynamics to dynamic elasto-plasticity through the addition of a tensorial degree of freedom per mesh cell. The method is robust with respect to the incompressible limit and its parameters only depend on material parameters. Moreover, an explicit pseudo-energy conserving time integration method has been developed, even for nonlinear behaviours and variable time steps, so as to avoid the dissipation of energy available for plastic dissipation and cracking. The method has been coupled to the proposed DEM. Finally, Griffith crack propagation through the mesh facets has been adapted to the present DEM for linear elastic behaviours in two space dimensions. The energy release rate is computed for every cracking mode using the stress intensity factors approximated close to the crack. A criterion of maximization of elastic energy density is used to simulate kinking.
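The abstract above centres on an explicit, pseudo-energy conserving time integrator. The thesis scheme itself is not reproduced here, but the property it targets, no numerical dissipation of the energy that should remain available for plasticity and cracking, can be illustrated with a standard central-difference (leapfrog) sketch for a linear oscillator. All values are illustrative; this is not the author's method.

```python
import math

def leapfrog(m, k, x0, v0, dt, steps):
    """Central-difference (leapfrog / velocity-Verlet) integration of m*x'' = -k*x.

    Returns the final position and the discrete energy sampled each step.
    For dt below the stability limit 2/omega, the discrete energy stays
    bounded instead of decaying -- the property energy-conserving explicit
    schemes are designed to preserve.
    """
    x, v = x0, v0
    a = -k * x / m
    energies = []
    for _ in range(steps):
        v_half = v + 0.5 * dt * a      # half kick
        x = x + dt * v_half            # drift
        a = -k * x / m
        v = v_half + 0.5 * dt * a      # half kick
        energies.append(0.5 * m * v * v + 0.5 * k * x * x)
    return x, energies

omega = 2.0
m, k = 1.0, omega ** 2
_, E = leapfrog(m, k, x0=1.0, v0=0.0, dt=0.01, steps=10000)
drift = max(E) - min(E)
print(f"relative energy drift over 10000 steps: {drift / E[0]:.2e}")
```

The drift stays a small bounded oscillation rather than a monotone loss, which is why such schemes do not artificially remove energy from the plasticity and cracking processes.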
35

Cirimele, Vincenzo. "Design and Integration of a Dynamic IPT System for Automotive Applications." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS032/document.

Full text
Abstract:
Inductive power transmission (IPT) for electric vehicles (EVs) is a promising emerging technology that could improve the acceptance of electric mobility. In the last two decades many researchers have proved its feasibility and the possibility of using it to replace the common conductive systems for charging the on-board battery. Many efforts are currently aimed at extending IPT technology towards charging during vehicle motion. This application, commonly called dynamic IPT, aims to overcome the limit represented by the long stops needed for recharging, while also introducing the possibility of reducing the battery capacity installed on the vehicle. An IPT system is essentially based on the resonance of two magnetically coupled inductors: the transmitter, placed on or under the ground, and the receiver, placed under the vehicle floor. The typical operating frequency range for EV applications goes from 20 kHz to approximately 100 kHz. The coupling between the two inductors takes place through a large air gap, usually about 10-30 cm. This thesis presents the results of research activities aimed at the creation of a prototype for dynamic IPT oriented to private transport. Starting from an analysis of the state of the art and of current research projects in this domain, this work presents the development of a circuit model able to describe the electromagnetic phenomena at the base of the power transfer and the interface with the power electronics. This model provides the basis for the design and implementation of a dedicated low-cost, high-efficiency H-bridge converter supplying the transmitter side. A general architecture of the power electronics managing the receiver side is proposed, together with the additional protection circuits.
A methodology for the integrated design of the magnetic structure is illustrated, covering the matching with the power electronics, the integration on an existing vehicle and the installation on the road infrastructure. A series of activities aimed at the implementation of a dedicated test site are presented and discussed; in particular, the creation of the electrical infrastructure and the issues and methods for embedding the transmitters in the road pavement. The final goal is the creation of a dedicated IPT charging line one hundred metres long. Finally, a methodology for the assessment of human exposure is presented and applied to the developed solution.
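As a back-of-the-envelope companion to the abstract above, the resonance of two magnetically coupled inductors can be sketched by sizing the series compensation capacitors and computing the coupling coefficient. The coil values and the 85 kHz operating point below are illustrative assumptions inside the 20-100 kHz range the abstract cites, not the thesis prototype's parameters.

```python
import math

def series_compensation_cap(L, f):
    """Capacitance that resonates an inductance L at frequency f: C = 1/((2*pi*f)^2 * L)."""
    w = 2 * math.pi * f
    return 1.0 / (w * w * L)

def coupling_coefficient(M, L1, L2):
    """k = M / sqrt(L1*L2); large air gaps (10-30 cm) give k well below 1."""
    return M / math.sqrt(L1 * L2)

# Illustrative values (not from the thesis): 120 uH coils, 20 uH mutual inductance
f = 85e3                       # Hz
L1 = L2 = 120e-6               # H
M = 20e-6                      # H
C = series_compensation_cap(L1, f)
k = coupling_coefficient(M, L1, L2)
print(f"series compensation C = {C * 1e9:.1f} nF, coupling k = {k:.3f}")
```

The small coupling coefficient is exactly why resonant compensation is needed: at resonance the leakage inductance is cancelled and usable power can be transferred across the air gap.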
36

Garcia, Javier. "Integration of Static and Dynamic Middleware-based Subsystems Using an Intermediate Gateway." Thesis, KTH, Maskinkonstruktion (Inst.), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-100711.

Full text
Abstract:
This thesis is a part of KTH's participation in the DySCAS European project. Its purpose is to explore the possibility of integrating different middleware-based embedded electronic subsystems by connecting them with an intermediate gateway. In particular, this thesis work centres on achieving interoperability between a pair of middlewares where one is statically and the other dynamically reconfigurable. The automotive industry, among others, is starting to face the problem of integrating very different kinds of elements into the same system. In the case of a car, the traditional electronic control systems involved in driving are meeting a whole new domain of multimedia devices such as GPS, DVD players or cell phones, which are also integrated as a part of the system. While the former require a very safe, fault-tolerant, static environment, the latter require flexible and adaptive support, so no single middleware can provide all the features demanded. This thesis applies the concept of a multi-middleware system to solve the problem. Each subsystem is built over the middleware that best suits its needs and a central gateway allows all of them to interoperate. The approach is validated by means of a case study in which a subsystem using SHAPE, a dynamically reconfigurable middleware developed under the DySCAS framework, is added to an existing automotive platform based on SAINT, a statically reconfigurable middleware developed in the Machine Design department at KTH. The report contains a study of the different middlewares selected for the test case, focused on the interactions and communication protocols between applications. The results are used to evaluate different design approaches for the gateway and select the most suitable one. The design chosen for implementation is a modular design with three main blocks.
One is connected to the SAINT subsystem through a CAN interface and implements an adapted version of the SAINT middleware which presents the gateway as an additional node in the network. A second block does the equivalent for the SHAPE subsystem, while a third is in charge of communication between both modules. To allow communication between the two different technologies, a translation process was designed during this thesis. It is done by defining an abstract metalanguage of middleware transactions. Each technology implements its own translation to this intermediate language, independent of the other subsystems, which gives the solution better scalability.
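The translation through an abstract metalanguage described above can be sketched as follows. The adapter names and native message formats are hypothetical, chosen only to show why each middleware needs just one translator to and from the intermediate language rather than one per peer technology.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    """Hypothetical intermediate metalanguage: a middleware-neutral message."""
    sender: str
    receiver: str
    payload: bytes

class SaintAdapter:
    # Hypothetical CAN-style text frame: "src|dst|hex-payload"
    def to_intermediate(self, frame: str) -> Transaction:
        src, dst, data = frame.split("|")
        return Transaction(src, dst, bytes.fromhex(data))

    def from_intermediate(self, t: Transaction) -> str:
        return f"{t.sender}|{t.receiver}|{t.payload.hex()}"

class ShapeAdapter:
    # Hypothetical dict-based message format
    def to_intermediate(self, msg: dict) -> Transaction:
        return Transaction(msg["from"], msg["to"], msg["data"])

    def from_intermediate(self, t: Transaction) -> dict:
        return {"from": t.sender, "to": t.receiver, "data": t.payload}

# Gateway: each adapter only knows the intermediate language, so adding a
# third middleware needs one new adapter, not a translator per pair.
saint, shape = SaintAdapter(), ShapeAdapter()
msg = shape.from_intermediate(saint.to_intermediate("node3|gps|48656c6c6f"))
print(msg)  # {'from': 'node3', 'to': 'gps', 'data': b'Hello'}
```

This is the scalability argument from the abstract: with N middlewares, the intermediate language needs N adapters instead of N*(N-1) pairwise translators.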
37

Amaral, Marcelo Francisco. "Integration of Dynamic Line Rating within a Risk-Based Security Assessment Framework." Thesis, Uppsala universitet, Industriell teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-337880.

Full text
Abstract:
The following work applies Dynamic Line Rating (DLR) to a Risk-Based Security Assessment (RBSA) methodology with the ultimate goal of performing day-ahead forecast analysis. DLR quantifies the transmission capacity of overhead lines as a function of ambient conditions. Because it takes variations in the line's surroundings into account, DLR is less conservative than the traditional worst-case-scenario oriented Static Line Rating (SLR). Applying DLR to overhead transmission lines (OHLs) offers a range of potential benefits for solving grid exhaustion problems caused by growing electrical consumption. The RBSA methodology allows the security of power system operations to be assessed in a future state under uncertainties that arise from contingencies and forecast errors, e.g., loads or renewable infeeds. The methodology models input uncertainty with a copula integrated into a Monte-Carlo (MC) sampling framework. Several case studies are executed to assess the impacts of DLR, especially in comparison with SLR through the evaluation of system risk. Two case studies are performed for a specific time of day, one analysing risk and ratings and the other concerning system impacts as system forecast uncertainty (SFU) grows beyond the intervals usual in day-ahead analysis. The following two cases extend these to 24-point analyses with the same, though more generalized, goal. The final case study analyses the economic impact of DLR applications and assesses the behaviour of an alternative system configuration. This alternative setup is compared throughout with the original configuration used in the project, and the advantages and drawbacks of both configurations are discussed. Finally, the success of this integration and the assessment of impact and methodology are addressed.
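The contrast between static and dynamic ratings evaluated in the abstract above can be sketched with a deliberately simplified thermal model and Monte-Carlo sampling of ambient conditions. The model and numbers below are illustrative only; real DLR computations follow standards such as IEEE 738 or CIGRE TB 601, and the thesis models uncertainty with copulas rather than independent uniforms.

```python
import random

def ampacity(wind, ambient, t_max=80.0):
    """Very simplified conductor rating: convective cooling grows with wind
    speed, and the allowable temperature rise shrinks with ambient
    temperature. Illustrative only, not a standards-grade thermal model."""
    cooling = 20.0 + 15.0 * wind                        # W/m, linearized convection
    return 10.0 * ((t_max - ambient) * cooling) ** 0.5  # A

random.seed(1)

# Static rating: fixed at the worst-case scenario (no wind, hottest day)
slr = ampacity(wind=0.0, ambient=45.0)

# Dynamic rating: sample plausible ambient conditions
samples = [ampacity(random.uniform(0.0, 10.0), random.uniform(-5.0, 45.0))
           for _ in range(10_000)]
above = sum(s > slr for s in samples) / len(samples)
print(f"SLR = {slr:.0f} A; sampled dynamic rating exceeds it in {above:.0%} of conditions")
```

Because the static rating is pinned to the worst case, almost every sampled condition allows more current, which is the headroom that integrating DLR into a risk-based assessment tries to exploit safely.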
38

Wackler, Lisa A. "Auditory spectral integration effects in dynamic consonant-vowel /da/-/ga/ F3 transitions." Connect to resource, 2007. http://hdl.handle.net/1811/28358.

Full text
Abstract:
Thesis (Honors)--Ohio State University, 2007.
Title from first page of PDF file. Document formatted into pages: contains 38 p.; also includes graphics. Includes bibliographical references (p. 35-37). Available online via Ohio State University's Knowledge Bank.
39

Huang, Yi. "Integration of well data into dynamic reservoir interpretation using multiple seismic surveys." Thesis, Heriot-Watt University, 2011. http://hdl.handle.net/10399/2527.

Full text
40

Singh, Adisha. "Leveraging dynamic capabilities in the post-acquisition integration phase of an acquisition." Diss., University of Pretoria, 2014. http://hdl.handle.net/2263/80521.

Full text
Abstract:
LRP is a research journal in the field of strategic management, with a research focus on strategy. The article, titled “Leveraging dynamic capabilities in the post-acquisition integration phase of an acquisition”, supports the call by LRP for research in the field of strategy. In addition, the data collection method of case study methodology is supported by the journal. The journal welcomes research from all parts of the world, and thus a South African setting is suitable for the journal's requirements. The journal has published recent articles (2019) in the fields of both post-acquisition integration and dynamic capabilities, making this research study a suitable fit.
Mini Dissertation (MBA)--University of Pretoria, 2020.
Gordon Institute of Business Science (GIBS)
MBA
Unrestricted
41

Wahlstedt, Linnéa. "Dynamic Knowledge Integration : A field study of an Information Systems Development Project." Doctoral thesis, Linköpings universitet, Företagsekonomi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-102595.

Full text
Abstract:
Current research on knowledge integration offers valuable structural analyses of factors that influence knowledge integration, performance outcomes, and knowledge integration mechanisms. Less attention has been paid to how knowledge integration is carried out over time in cross-functional development projects. This thesis is based on a year-long field study of an Information Systems Development Project. The study shows how the knowledge integration process was repeatedly interrupted by different problems that could not be resolved by merely relying on integration mechanisms that were imposed by the top management. Instead, a bottom-up dynamic evolved where the project members and participating project managers managed to reestablish coordination and knowledge integration through the invention of different ‘collective heuristics’. A novel model of Dynamic Knowledge Integration is presented which claims that knowledge integration contains two interplaying processes; one consisting of different knowledge integration mechanisms and activities, and one consisting of the collective heuristics that were invented and employed when unexpected problems emerged. In general, this research argues that knowledge integration can be understood as a dynamic process, of which both knowledge integration mechanisms and collective heuristics constitute core elements.
42

Breidenassel, Andreas. "A high dynamic range CMOS image sensor with adaptive integration time control." [S.l. : s.n.], 2005. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB11811225.

Full text
43

Phan, Leon L. "A methodology for the efficient integration of transient constraints in the design of aircraft dynamic systems." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34750.

Full text
Abstract:
Transient regimes experienced by dynamic systems may have severe impacts on the operation of the aircraft. They are often regulated by dynamic constraints, requiring the dynamic signals to remain within bounds whose values vary with time. The verification of these peculiar types of constraints, which generally requires high-fidelity time-domain simulation, intervenes late in the system development process, thus potentially causing costly design iterations. The research objective of this thesis is to develop a methodology that integrates the verification of dynamic constraints in the early specification of dynamic systems. In order to circumvent the inefficiencies of time-domain simulation, multivariate dynamic surrogate models of the original time-domain simulation models are generated using wavelet neural networks (or wavenets). Concurrently, an alternate approach is formulated, in which the envelope of the dynamic response, extracted via a wavelet-based multiresolution analysis scheme, is subject to transient constraints. Dynamic surrogate models using sigmoid-based neural networks are generated to emulate the transient behavior of the envelope of the time-domain response. The run-time efficiency of the resulting dynamic surrogate models enables the implementation of a data farming approach, in which the full design space is sampled through a Monte-Carlo Simulation. An interactive visualization environment, enabling what-if analyses, is developed; the user can thereby instantaneously comprehend the transient response of the system (or its envelope) and its sensitivities to design and operation variables, as well as filter the design space to have it exhibit only the design scenarios verifying the dynamic constraints. 
The proposed methodology, along with its foundational hypotheses, is tested on the design and optimization of a 350VDC network, where a generator and its control system are concurrently designed in order to minimize the electrical losses, while ensuring that the transient undervoltage induced by peak demands in the consumption of a motor does not violate transient power quality constraints.
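The envelope-based transient constraint checking described in the abstract above can be illustrated with a minimal sketch. A sliding maximum stands in here for the wavelet multiresolution envelope extraction used in the thesis, and the signal and the time-varying bound are invented for the example.

```python
import math

def envelope(signal, window):
    """Crude amplitude envelope via a sliding maximum of |x|.
    (The thesis extracts envelopes with wavelet-based multiresolution
    analysis; a sliding max is a stand-in to show constraining the
    envelope rather than the raw oscillating signal.)"""
    n = len(signal)
    return [max(abs(signal[j])
                for j in range(max(0, i - window), min(n, i + window + 1)))
            for i in range(n)]

def violates(env, bound):
    """True if the envelope ever exceeds the time-varying bound."""
    return any(e > b for e, b in zip(env, bound))

# Damped 50 Hz oscillation standing in for a transient voltage deviation
dt = 0.001
sig = [math.exp(-5 * t) * math.cos(2 * math.pi * 50 * t)
       for t in (i * dt for i in range(1000))]
env = envelope(sig, window=10)   # window ~ one oscillation period

# Transient constraint: deviation allowed up to 1.1 early on,
# but it must have decayed below 0.5 after 0.2 s
bound = [1.1 if i * dt < 0.2 else 0.5 for i in range(1000)]
print("transient constraint violated:", violates(env, bound))
```

Checking the envelope instead of the raw signal is what makes a cheap surrogate feasible: the surrogate only has to reproduce the slowly varying bound of the response, not every oscillation.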
44

Moreno, Noguer Francesc. "Multiple cue integration for robust tracking in dynamic environments: application to video relighting." Doctoral thesis, Universitat Politècnica de Catalunya, 2005. http://hdl.handle.net/10803/6191.

Full text
Abstract:
Motion analysis and object tracking have been among the principal focuses of attention within the computer vision community over the past two decades. The interest in this research area lies in its wide range of applicability, extending from autonomous vehicle and robot navigation tasks to entertainment and virtual reality applications.

Even though impressive results have been obtained on specific problems, object tracking is still an open problem, since available methods tend to be sensitive to several artifacts and non-stationary environment conditions, such as unpredictable target movements, gradual or abrupt changes of illumination, proximity of similar objects or cluttered backgrounds. Multiple-cue integration has been shown to enhance the robustness of tracking algorithms in the face of such disturbances. In recent years, due to the increasing power of computers, there has been significant interest in building complex tracking systems which simultaneously consider multiple cues. However, most of these algorithms are based on heuristics and ad-hoc rules formulated for specific applications, making it impossible to extrapolate them to new environment conditions.

In this dissertation we propose a general probabilistic framework to integrate as many object features as necessary, permitting them to mutually interact in order to obtain a precise estimation of the target's state, and thus a precise estimate of the target position. This framework is used to design a tracking algorithm, which is validated on several video sequences involving abrupt position and illumination changes, target camouflaging and non-rigid deformations. Among the features used to represent the target, it is worth pointing out the robust parameterization of the target color in an object-dependent colorspace, which allows the object to be distinguished from the background more clearly than with the colorspaces commonly used in the literature.

In the last part of the dissertation, we design an approach for relighting static and moving scenes with unknown geometry. The relighting is performed through an 'image-based' methodology, where the rendering under new lighting conditions is achieved by linear combinations of a set of pre-acquired reference images of the scene illuminated by known light patterns. Since the placement and brightness of the light sources composing such light patterns can be controlled, it is natural to ask: what is the optimal way to illuminate the scene so as to reduce the number of reference images that are needed? We show that the best way to light the scene (i.e., the way that minimizes the number of reference images) is not to use a sequence of single, compact light sources, as is most commonly done, but rather a sequence of lighting patterns given by an object-dependent lighting basis. It is important to note that when relighting video sequences, consecutive images need to be aligned with respect to a common coordinate frame. However, since each frame is generated by a different light pattern illuminating the scene, abrupt illumination changes between consecutive reference images are produced. Under these circumstances, the tracking framework designed in this dissertation plays a central role. Finally, we present several relighting results on real video sequences of moving objects, moving faces, and scenes containing both. In each case, although a single video clip was captured, we are able to relight it again and again, controlling the lighting direction, extent, and color.
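The linearity argument underlying the image-based relighting above can be sketched in a few lines: because light transport is linear, the image of the scene under any lighting expressed in the basis is the same linear combination of the pre-acquired reference images. The tiny 4-pixel "images" and their values below are made up for illustration.

```python
# If I_k is the image of the scene under basis lighting pattern L_k, then the
# image under the lighting sum_k w_k * L_k is sum_k w_k * I_k (linearity of
# light transport). Reference images here are invented 4-pixel examples.
ref_images = [
    [0.2, 0.8, 0.1, 0.0],   # scene lit by pattern 0
    [0.5, 0.1, 0.6, 0.3],   # scene lit by pattern 1
    [0.0, 0.3, 0.2, 0.9],   # scene lit by pattern 2
]

def relight(weights, images):
    """Render a new lighting condition as a linear combination of the
    pre-acquired reference images."""
    return [sum(w * img[p] for w, img in zip(weights, images))
            for p in range(len(images[0]))]

# New lighting condition: mostly pattern 1, with a little of patterns 0 and 2
out = relight([0.2, 1.0, 0.5], ref_images)
print([round(v, 2) for v in out])  # [0.54, 0.41, 0.72, 0.75]
```

The design question the dissertation answers is then which basis patterns to capture so that few reference images span the lighting conditions of interest, which is where the object-dependent lighting basis comes in.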
45

Fahlgren, Maria. "Designing for the integration of dynamic software environments in the teaching of mathematics." Doctoral thesis, Karlstads universitet, Institutionen för matematik och datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-38213.

Full text
Abstract:
This thesis concerns the challenge of integrating dynamic software environments into the teaching of mathematics. It investigates particular aspects of the design of tasks which employ this type of computer-based system, with a focus on improvement, both of the tasks themselves and of the design process through which they are developed and refined. The thesis reports two research projects: a small initial one preceding a larger main project. The initial case study, involving two graduate students in mathematics, develops a task design model for geometrical locus problems. The main study constitutes the first iteration of a design-based study, conducted in collaboration with four upper-secondary school teachers and their classes. It seeks to identify task design characteristics that foster students’ mathematical reasoning and proficient use of software tools, and examines teachers’ organisation of ‘follow-up’ lessons. The findings concern three particular aspects: features of tasks and task environment relevant to developing a specific plan of action for a lesson; orchestration of a particular task environment to support the instrumental genesis of specific dynamic software tools; how to follow up students’ work on computer-based tasks in a whole-class discussion.
46

Kontoe, Stavroula. "Development of time integration schemes and advanced boundary conditions for dynamic geotechnical analysis." Thesis, Imperial College London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.435799.

Full text
47

Bornemann, Paul Burkhard. "Time integration algorithms for the steady states of dissipative non-linear dynamic systems." Thesis, Imperial College London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.416119.

Full text
48

Sackey, Esther Ewurafuah. "Strengthening Organizational Performance through Integration of Systems Leadership, Participatory Communication, and Dynamic Capabilities." Antioch University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=antioch1630883134200904.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Ivanov, André. "Dynamic testability measures and their use in ATPG." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Meléndez i Frigola, Joaquim. "Integration of knowledge-based, qualitative and numeric tools for real time dynamic systems supervision." Doctoral thesis, Universitat de Girona, 1998. http://hdl.handle.net/10803/7708.

Full text
Abstract:
The proposal presented in this thesis is to provide designers of knowledge-based supervisory systems for dynamic systems with a framework that facilitates their tasks by avoiding interface problems among tools and in data flow and management.
The approach is intended to assist both control and process engineers in their tasks. The use of AI technologies to diagnose and supervise control loops and, of course, to assist process supervisory tasks such as fault detection and diagnosis is within the scope of this work. Special effort has been put into the integration of tools that assist the design of expert supervisory systems. To this end, experience with Computer Aided Control Systems Design (CACSD) frameworks has been analysed and used to design a Computer Aided Supervisory Systems Design (CASSD) framework. Some basic facilities are required of this proposed framework:
· Abstraction tools, for signal processing, representation and analysis, to obtain significant information.
· Means of dealing with process variables (measurements or numerical estimations) and with expert observations, including uncertainty and imprecision.
· Expert knowledge representation at different levels, by using a rule-based system or simple qualitative relations.
· Modularity and encapsulation of data and knowledge, useful for structuring information.
· A graphical user interface that manages all these facilities in the same environment, as in current CACSD packages.
Several tools from the AI domain have been added as Simulink toolboxes to deal with abstracted information, qualitative relationships and a rule-based ES. Simple and intuitive qualitative relationships can be implemented by means of a block-based qualitative representation language called ALCMEN. An ES shell, called CEES, has also been embedded into MATLAB/Simulink as a block, to allow modularisation and partition of large expert KBs. Finally, the numeric-to-qualitative interfacing is performed by a set of algorithms, called abstraction tools, also encapsulated in Simulink blocks. The functionality of the whole framework is enabled by the use of an object-oriented approach in the development and implementation of these tools.
This thesis takes steps towards the integration of tools for expert supervision, including ones for qualitative and symbolic data representation and management and for symbolic knowledge processing. The main research objectives of this work include the following points:
1. Incorporation of object-variables into classical numerical data processing systems. The aim is to allow structured qualitative and symbolic knowledge representation. Complex information is encapsulated in a single source/sink structure, called an object-variable, which provides methods for knowledge access and processing.
2. Implementation of selected tools for qualitative and symbolic knowledge representation and interfacing, supporting higher-level abstract information processing based on the introduced object-variables.
3. Embedding an object-oriented rule-based expert system into a classical CACSD framework, in order to provide high-level knowledge processing facilities based on expert domain knowledge, heuristics, and logic.
The object approach forces engineers to structure knowledge so that it becomes highly locatable, modular and encapsulated. These features are very important for bringing expert supervisory system design closer to the process. The objective is to bring design tools closer to process engineers, avoiding extra time spent learning application functionality and interfacing process variables with design tools. Thus, objects are used in process variable descriptions as sources of information, encapsulating tools that provide significant (qualitative or numerical) information. Object-oriented features permit large KBs to be divided into smaller ones, so that complex systems can be handled through distributed solutions. Consequently, the ES becomes more specialised, maintainable, and easier to validate.
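As an illustration only (the thesis implements these ideas as MATLAB/Simulink blocks; all names below are hypothetical, not taken from the work), the object-variable idea described in the abstract can be sketched as a single structure that encapsulates raw process samples together with abstraction methods returning numerical or qualitative views of the signal:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ObjectVariable:
    """Hypothetical sketch of an 'object-variable': one source/sink structure
    holding raw process samples plus methods for knowledge access."""
    name: str
    samples: list = field(default_factory=list)

    def push(self, value: float) -> None:
        """Sink side: record a new numeric measurement."""
        self.samples.append(value)

    def numeric(self, window: int = 5) -> float:
        """Source side: a numerical abstraction (moving average)."""
        return mean(self.samples[-window:])

    def qualitative(self, low: float, high: float) -> str:
        """Source side: a qualitative abstraction a rule-based ES could consume."""
        v = self.numeric()
        if v < low:
            return "low"
        if v > high:
            return "high"
        return "normal"

# A supervisory rule can then be written against the qualitative view:
temp = ObjectVariable("reactor_temperature")
for sample in [70.0, 71.5, 73.0, 74.5, 76.0]:
    temp.push(sample)
state = temp.qualitative(low=60.0, high=75.0)  # mean of last 5 is 73.0 -> "normal"
```

The point of the sketch is the encapsulation: the rule base never touches raw samples, only the abstracted views, which is what lets the numeric-to-qualitative interfacing stay modular.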
APA, Harvard, Vancouver, ISO, and other styles
