
Dissertations / Theses on the topic 'Optimal matching'



Consult the top 50 dissertations / theses for your research on the topic 'Optimal matching.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1. Abrahamson, Jeff, and Ali Shokoufandeh. "Optimal matching and deterministic sampling." Philadelphia, Pa.: Drexel University, 2007. http://hdl.handle.net/1860/2526.

2. Moser, Hannes. "Finding optimal solutions for covering and matching problems." Göttingen: Cuvillier, 2009. http://d-nb.info/999819399/04.

3. Kwanashie, Augustine. "Efficient algorithms for optimal matching problems under preferences." Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6706/.

Abstract:
In this thesis we consider efficient algorithms for matching problems involving preferences, i.e., problems where agents may be required to list the other agents they find acceptable in order of preference. In particular we mainly study the Stable Marriage problem (SM), the Hospitals / Residents problem (HR) and the Student / Project Allocation problem (SPA), and some of their variants. In some of these problems the aim is to find a stable matching, that is, one that admits no blocking pair. A blocking pair with respect to a matching is a pair of agents that prefer to be matched to each other over their assigned partners in the matching, if any. We present an Integer Programming (IP) model for the Hospitals / Residents problem with Ties (HRT) and use it to find a maximum cardinality stable matching. We also present results from an empirical evaluation of our model which show it to be scalable with respect to real-world HRT instance sizes. Motivated by the observation that not all blocking pairs that exist in theory will lead to a matching being undermined in practice, we investigate a relaxed stability criterion called social stability, where only pairs of agents with a social relationship have the ability to undermine a matching. This stability concept is studied in instances of the Stable Marriage problem with Incomplete lists (SMI) and in instances of HR. We show that, in the SMI and HR contexts, socially stable matchings can be of varying sizes and the problem of finding a maximum socially stable matching (MAX SMISS and MAX HRSS respectively) is NP-hard, though approximable within 3/2. Furthermore we give polynomial-time algorithms for three special cases of the problem arising from restrictions on the social network graph and the lengths of agents' preference lists. We also consider other optimality criteria with respect to social stability and establish inapproximability bounds for the problems of finding an egalitarian, minimum regret and sex-equal socially stable matching in the SM context. We extend our study of social stability by considering other variants and restrictions of MAX SMISS and MAX HRSS. We present NP-hardness results for MAX SMISS even under certain restrictions on the degree and structure of the social network graph, as well as in the presence of master lists. Other NP-hardness results presented relate to the problem of determining whether a given man-woman pair belongs to a socially stable matching and the problem of determining whether a given man (or woman) is part of at least one socially stable matching. We also consider the Stable Roommates problem with Incomplete lists under Social Stability (a non-bipartite generalisation of SMI under social stability). We observe that the problem of finding a maximum socially stable matching in this context is also NP-hard. We present efficient algorithms for three special cases of the problem arising from restrictions on the social network graph and the lengths of agents' preference lists: the cases where (i) there exists a constant number of acquainted pairs, (ii) there exists a constant number of unacquainted pairs, or (iii) each preference list is of length at most 2. We also present algorithmic results for finding matchings in the SPA context that are optimal with respect to profile, which is the vector whose ith component is the number of students assigned to their ith-choice project.
We present an efficient algorithm for finding a greedy maximum matching in the SPA context, that is, a maximum matching whose profile is lexicographically maximum. We then show how to adapt this algorithm to find a generous maximum matching, a matching whose reverse profile is lexicographically minimum. We demonstrate how this approach can allow additional constraints, such as lecturer lower quotas, to be handled flexibly. We also present results of empirical evaluations carried out on both real-world and randomly generated datasets. These results demonstrate the scalability of our algorithms as well as some interesting properties of these profile-based optimality criteria. Practical applications of SPA motivate the investigation of certain special cases of the problem. For instance, it is often desired that the workload on lecturers is evenly distributed (i.e. load balanced). We enforce this by either adding lower quota constraints on the lecturers (which leads to the potential for infeasible problem instances) or adding a load balancing optimisation criterion. We present efficient algorithms in both cases. Another consideration is the fact that certain projects may require a minimum number of students to become viable. This can be handled by enforcing lower quota constraints on the projects (which also leads to the possibility of infeasible problem instances). A technique for handling this infeasibility is the idea of closing projects that do not meet their lower quotas (i.e. leaving such projects completely unassigned). We show that the problem of finding a maximum matching subject to project lower quotas where projects can be closed is NP-hard even under severe restrictions on preference list lengths and project upper and lower quotas. To offset this hardness, we present polynomial-time heuristics that find large feasible matchings in practice. We also present IP models for the SPA variants discussed and show results obtained from an empirical evaluation carried out on both real and randomly generated datasets. These results show that our algorithms and heuristics are scalable and provide good matchings with respect to profile-based optimality.
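The blocking-pair concept at the centre of this abstract is easy to make concrete. The following minimal Python sketch (agent names and preference lists invented for illustration, not taken from the thesis) enumerates the blocking pairs of a matching in a Stable Marriage instance with incomplete lists:

```python
def prefers(pref_list, a, b):
    """True if `a` appears before `b` in pref_list (i.e., is preferred).
    An unmatched partner is represented by None and is worst."""
    if b is None:
        return a in pref_list
    return pref_list.index(a) < pref_list.index(b)

def blocking_pairs(men_prefs, women_prefs, matching):
    """Return all man-woman pairs that would undermine `matching`.
    men_prefs / women_prefs map each agent to a (possibly incomplete)
    preference list; matching maps each man to a woman or None."""
    wife = matching
    husband = {w: m for m, w in matching.items() if w is not None}
    pairs = []
    for m, prefs in men_prefs.items():
        for w in prefs:
            # (m, w) blocks if each prefers the other to their partner
            if m in women_prefs[w] \
               and prefers(prefs, w, wife.get(m)) \
               and prefers(women_prefs[w], m, husband.get(w)):
                pairs.append((m, w))
    return pairs

# Tiny example: m1 and w1 prefer each other but are matched elsewhere.
men = {"m1": ["w1", "w2"], "m2": ["w1"]}
women = {"w1": ["m1", "m2"], "w2": ["m1"]}
print(blocking_pairs(men, women, {"m1": "w2", "m2": "w1"}))  # [('m1', 'w1')]
```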
4. Singh, Rohit. "Automatically learning optimal formula simplifiers and database entity matching rules." Ph.D. thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113938.

Abstract:
Traditionally, machine learning (ML) is used to find a function from data that optimizes a numerical score. Synthesis, on the other hand, is traditionally used to find a function (or a program) that can be derived from a grammar and satisfies a logical specification. The boundary between ML and synthesis has been blurred by some recent work [56,90]. However, this interaction between ML and synthesis has not been fully explored. In this thesis, we focus on the problem of finding a function, given large amounts of data, such that the function satisfies a logical specification and also optimizes a numerical score over the input data. We present a framework to solve this problem in two impactful application domains: formula simplification in constraint solvers and database entity matching (EM). First, we present a system called Swapper, based on our framework, that can automatically generate code for efficient formula simplifiers specialized to a class of problems. Formula simplification is an important part of modern constraint solvers, and writing efficient simplifiers has largely been an arduous manual task. Evaluation of Swapper on multiple applications of the Sketch constraint solver showed 15-60% improvement over the existing hand-crafted simplifier in Sketch. Second, we present a system called EM-Synth, based on our framework, that generates EM rules that are as effective as, and more interpretable than, those produced by state-of-the-art techniques. Database entity matching is a critical part of data integration and cleaning, and it usually involves learning rules or classifiers from labeled examples. Evaluation of EM-Synth on multiple real-world datasets against other interpretable (shallow decision trees, SIFI [116]) and non-interpretable (SVM, deep decision trees) methods showed that EM-Synth generates more concise and interpretable rules without sacrificing too much accuracy.
5. Chen, Hanyi. "Probabilistic matching systems: stability, fluid and diffusion approximations and optimal control." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10570.

Abstract:
In this work we introduce a novel queueing model with two classes of users in which, instead of accessing a resource, users wait in the system to match with a candidate from the other class. The users are selective and the matchings occur probabilistically. This new model is useful for analysing the traffic in web portals that match people who provide a service with people who demand the same service, e.g. employment portals, matrimonial and dating sites, and rental portals. We first provide a Markov chain model for these systems and derive the probability distribution of the number of matches up to some finite time given the number of arrivals. We then prove that if no control mechanism is employed these systems are unstable for any set of parameters. We suggest four different classes of control policies to ensure stability and analyse performance measures under these policies. Contrary to the intuition that the rejection rate should decrease as the users become more likely to be matched, we show that for certain control policies the rejection rate is insensitive to the matching probability. Even more surprisingly, we show that for reasonable policies the rejection rate may be an increasing function of the matching probability. We also prove insensitivity results related to the average queue lengths and waiting times. Further, to gain more insight into the behaviour of probabilistic matching systems, we propose approximation methods based on fluid and diffusion limits using different scalings. We analyse the basic properties of these approximations and show that some performance measures are insensitive to the matching probability, agreeing with the results found by the exact analysis. Finally we study optimal control and revenue management for these systems with the objective of profit maximization. We formulate mathematical models for both unobservable and observable systems. For an unobservable system we suggest a deterministic optimal control, while for an observable system we develop an optimal myopic state-dependent pricing policy.
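As a rough illustration of the instability result stated in this abstract, here is a hedged Python sketch of an uncontrolled two-class matching system; the arrival mechanism and parameters are simplifying assumptions rather than the thesis's exact model:

```python
import random

def simulate(lam1=1.0, lam2=1.0, q=0.3, horizon=10_000.0, seed=0):
    """Crude event-by-event sketch of a two-class probabilistic matching
    system: users of each class arrive as Poisson streams, and an arrival
    tries each waiting candidate of the other class independently with
    probability q, leaving on the first success. No control is applied."""
    rng = random.Random(seed)
    t, queues, totals = 0.0, [0, 0], []
    while t < horizon:
        t += rng.expovariate(lam1 + lam2)              # next arrival epoch
        i = 0 if rng.random() < lam1 / (lam1 + lam2) else 1
        other = 1 - i
        if any(rng.random() < q for _ in range(queues[other])):
            queues[other] -= 1                          # matched and left
        else:
            queues[i] += 1                              # waits for a candidate
        totals.append(sum(queues))
    return totals

# Without control the total queue length grows without bound (roughly like
# the absolute value of a random walk), echoing the instability result.
print(simulate()[-3:])
```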
6. Kumar, Deepak. "Optimal finite alphabet sources over partial response channels." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/1044.

Abstract:
We present a serially concatenated coding scheme for partial response channels. The encoder consists of an outer irregular LDPC code and an inner matched spectrum trellis code. These codes are shown to offer considerable improvement over the i.i.d. capacity (> 1 dB) of the channel at low rates (approximately 0.1 bits per channel use). We also present a qualitative argument on the optimality of these codes at low rates, and formulate a performance index for such codes to predict their low-rate performance. The results have been verified via simulations for the (1-D)/sqrt(2) and the (1-D+0.8D^2)/sqrt(2.64) channels. The structure of the encoding/decoding scheme is considerably simpler than existing schemes for maximizing the information rate of encoders over partial response channels.
7. Burnham, Katherine Lee. "Information fusion for an unmanned underwater vehicle through probabilistic prediction and optimal matching." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127297.

Abstract:
This thesis presents a method for information fusion for an unmanned underwater vehicle (UUV). We consider a system that fuses contact reports from automated information system (AIS) data and active and passive sonar sensors. A linear assignment problem with learned assignment costs is solved to fuse sonar and AIS data. Since the sensors operate effectively at different depths, there is a time lag between AIS and sonar data collection. A recurrent neural network predicts a contact's future occupancy grid from a segment of its AIS track. Assignment costs are formed by comparing a sonar position with the predicted occupancy grids of relevant vessels. The assignment problem is solved to determine which sonar reports to match with existing AIS contacts.
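The assignment step described here can be illustrated with SciPy's linear_sum_assignment; the cost matrix below is an invented stand-in for the costs the thesis derives from the RNN's predicted occupancy grids:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical costs: cost[i, j] is the cost of matching sonar report i
# to AIS contact j (e.g., how poorly the sonar position agrees with the
# contact's predicted occupancy grid). Values here are made up.
cost = np.array([[0.2, 2.5, 3.1],
                 [2.8, 0.4, 2.9],
                 [3.0, 2.7, 0.1]])

rows, cols = linear_sum_assignment(cost)   # minimum-total-cost matching
for i, j in zip(rows, cols):
    print(f"sonar report {i} -> AIS contact {j} (cost {cost[i, j]:.1f})")
```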
8. Akers, Allen. "Determination of the Optimal Number of Strata for Bias Reduction in Propensity Score Matching." Thesis, University of North Texas, 2010. https://digital.library.unt.edu/ark:/67531/metadc28380/.

Abstract:
Previous research implementing stratification on the propensity score has generally relied on using five strata, based on prior theoretical groundwork and minimal empirical evidence as to the suitability of quintiles to adequately reduce bias in all cases and across all sample sizes. This study investigates bias reduction across varying numbers of strata and sample sizes via a large-scale simulation to determine the adequacy of quintiles for bias reduction under all conditions. Sample sizes ranged from 100 to 50,000 and strata from 3 to 20. Both the percentage of bias reduction and the standardized selection bias were examined. The results show that, while the particular covariates in the simulation met certain criteria with five strata, greater bias reduction could be achieved by increasing the number of strata, especially with larger sample sizes. Simulation code written in R is included.
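The study's simulations are in R; as an independent illustration only, the Python sketch below stratifies a simulated propensity score into quantile strata and shows residual bias shrinking as the number of strata grows (the data-generating process is an assumption of this sketch, not the study's):

```python
import numpy as np

def stratified_diff(ps, treated, y, n_strata=5):
    """Treated-vs-control difference in y after stratifying on the
    propensity score into n_strata quantile strata, weighting each
    stratum by its size. With no true effect, this is residual bias."""
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, ps, side="right") - 1,
                     0, n_strata - 1)
    diffs, sizes = [], []
    for s in range(n_strata):
        m = strata == s
        if treated[m].any() and (~treated[m]).any():
            diffs.append(y[m & treated].mean() - y[m & ~treated].mean())
            sizes.append(m.sum())
    return np.average(diffs, weights=sizes)

rng = np.random.default_rng(0)
x = rng.normal(size=20_000)              # a single confounder
ps = 1 / (1 + np.exp(-x))                # true propensity score
treated = rng.random(20_000) < ps
y = x + rng.normal(size=20_000)          # outcome with zero treatment effect
for k in (5, 10, 20):                    # more strata -> less residual bias
    print(k, round(float(stratified_diff(ps, treated, y, k)), 4))
```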
9. Salter, James Martin. "Uncertainty quantification for spatial field data using expensive computer models: refocussed Bayesian calibration with optimal projection." Thesis, University of Exeter, 2017. http://hdl.handle.net/10871/30114.

Abstract:
In this thesis, we present novel methodology for emulating and calibrating computer models with high-dimensional output. Computer models for complex physical systems, such as climate, are typically expensive and time-consuming to run. Due to this inability to run computer models efficiently, statistical models ('emulators') are used as fast approximations of the computer model, fitted based on a small number of runs of the expensive model, allowing more of the input parameter space to be explored. Common choices for emulators are regressions and Gaussian processes. The input parameters of the computer model that lead to output most consistent with the observations of the real-world system are generally unknown, hence computer models require careful tuning. Bayesian calibration and history matching are two methods that can be combined with emulators to search for the best input parameter setting of the computer model (calibration), or remove regions of parameter space unlikely to give output consistent with the observations, if the computer model were to be run at these settings (history matching). When calibrating computer models, it has been argued that fitting regression emulators is sufficient, due to the large, sparsely-sampled input space. We examine this for a range of examples with different features and input dimensions, and find that fitting a correlated residual term in the emulator is beneficial, in terms of more accurately removing regions of the input space, and identifying parameter settings that give output consistent with the observations. We demonstrate and advocate for multi-wave history matching followed by calibration for tuning. In order to emulate computer models with large spatial output, projection onto a low-dimensional basis is commonly used. The standard accepted method for selecting a basis is to use n runs of the computer model to compute principal components via the singular value decomposition (the SVD basis), with the coefficients given by this projection emulated. We show that when the n runs used to define the basis do not contain important patterns found in the real-world observations of the spatial field, linear combinations of the SVD basis vectors will not generally be able to represent these observations. Therefore, the results of a calibration exercise are meaningless, as we converge to incorrect parameter settings, likely assigning zero posterior probability to the correct region of input space. We show that the inadequacy of the SVD basis is very common and present in every climate model field we looked at. We develop a method for combining important patterns from the observations with signal from the model runs, developing a calibration-optimal rotation of the SVD basis that allows a search of the output space for fields consistent with the observations. We illustrate this method by performing two iterations of history matching on a climate model, CanAM4. We develop a method for beginning to assess model discrepancy for climate models, where modellers would first like to see whether the model can achieve certain accuracy, before allowing specific model structural errors to be accounted for. We show that calibrating using the basis coefficients often leads to poor results, with fields consistent with the observations ruled out in history matching. 
We develop a method for adjusting for basis projection when history matching, so that an efficient and more accurate implausibility bound can be derived that is consistent with history matching using the computationally prohibitive spatial field.
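The basis-projection issue described in this abstract can be sketched in a few lines of NumPy. The random 'ensemble' and 'observation' below are stand-ins, chosen only to show that a truncated SVD basis cannot represent a field that does not resemble the runs it was built from:

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_cells = 20, 500
ensemble = rng.normal(size=(n_runs, n_cells))   # stand-in model output fields

# SVD (principal component) basis of the centred ensemble.
mean = ensemble.mean(axis=0)
_, _, vt = np.linalg.svd(ensemble - mean, full_matrices=False)
basis = vt[:5]                                  # leading 5 basis vectors

def reconstruct(field):
    """Project a spatial field onto the truncated basis and map it back."""
    coeffs = basis @ (field - mean)
    return mean + coeffs @ basis

obs = rng.normal(size=n_cells)                  # 'observation' unlike any run
err = np.linalg.norm(obs - reconstruct(obs)) / np.linalg.norm(obs)
print(f"relative reconstruction error: {err:.2f}")   # near 1: basis misses it
```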
10. Araujo, Bruno César Pino Oliveira de. "Trajetórias ocupacionais de engenheiros jovens no Brasil" [Occupational trajectories of young engineers in Brazil]. Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3136/tde-23062016-153336/.

Abstract:
This PhD dissertation analyzes 9,041 occupational trajectories of young engineers as formal employees in Brazil in 2003-2012, using Optimal Matching Analysis (OMA). These trajectories were compared to those of a previous generation of young engineers, both in its base period (1995-2002) and in 2003-2012, to identify age and period effects. The main results are: (i) as expected, management occupational trajectories (in areas related to engineering or not) pay the highest wages in all periods; (ii) in the 2000s, the third most attractive trajectory was to remain as a typical engineer, a path pursued by nearly half of young engineers, whereas this attractiveness was not observed in the 1990s; (iii) entry wages of young engineers rose 24% in real terms between 1995 and 2003; (iv) there is little occupational mobility among the generation of 1995 engineers after 2003; (v) young engineers of 1995 who remained typical engineers during the 2000s earned only 14% more in 2012 than young engineers of 2003 (who had 8 years less experience); for comparison, in 2012 managers from the 1990s generation earned about 50% more than those from the 2000s; (vi) there are two defining moments in an occupational trajectory: the first occurs within 3 years of the first job, but promotions to management positions can take place between 8 and 10 years in. These results indicate that, while engineering professionals were revalued over the past decade, this revaluation did not attract previously trained engineers back to typical engineering careers. This, combined with the low demand for engineering courses during the 80s and 90s, supports the hypothesis of a generational gap among engineers, documented in previous articles.
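Optimal Matching Analysis rests on an edit distance between categorical sequences. Below is a minimal dynamic-programming sketch in Python, with invented yearly career states and arbitrary indel/substitution costs rather than the study's calibrated ones:

```python
def om_distance(seq_a, seq_b, indel=1.0, sub=2.0):
    """Optimal Matching distance between two career sequences:
    minimum total cost of insertions, deletions and substitutions
    turning seq_a into seq_b (standard dynamic programming)."""
    n, m = len(seq_a), len(seq_b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel
    for j in range(1, m + 1):
        d[0][j] = j * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if seq_a[i - 1] == seq_b[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + indel,      # deletion
                          d[i][j - 1] + indel,      # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[n][m]

# Hypothetical yearly career states: E = typical engineer, M = manager.
print(om_distance("EEEEMM", "EEMMMM"))  # 4.0: two substitutions at cost 2
```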
11. Scarponi, Matteo. "Analysis of existing tsunami scenario databases for optimal design and efficient real-time event matching." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13546/.

Abstract:
Pre-computed tsunami scenario databases constitute a traditional basis for the production of tsunami forecasts in real time, achieved through a combination of properly selected Green's-function-like objects. The considered case-study database contains water elevation fields and waveform signals produced by an arrangement of evenly-spaced elementary seismic sources, covering fault areas relevant in determining the Portuguese tsunami hazard. This work proposes a novel real-time processing scheme for tsunami forecast production, aiming at the accuracy of traditional methods but at a lower time cost. The study has been conducted on the Gorringe Bank fault (GBF), but has general validity. First, the GBF database is analysed in detail, seeking remarkable properties of the seismic sources in terms of frequency content, cross-correlation and relative differences of the fields and waveform signals. Then, a reference forecast for a seismic event placed on the GBF is given, using all the traditionally available subfaults. Furthermore, a novel processing algorithm is defined to produce approximate forecasts through a strategic exploitation of the information obtainable from each of the seismic sources, taken in smaller number. A further focus on sensitive locations is provided. Remarkable results are obtained in terms of physical properties of the seismic sources and time gained in forecast production. Seismic sources at depth produce long-wave-dominated signals, allowing for an optimisation of the database content in terms of the sources required to properly represent seismogenic areas at certain depths. In terms of time cost, an overall improvement is obtained in forecast production, since the proposed strategy gives highly accurate forecasts using half of the seismic sources used by traditional forecasting methods, which reduces the required accesses to the database.
12. Stra, Federico. "Classical and multi-marginal optimal transport, with applications." Doctoral thesis, Scuola Normale Superiore, 2018. http://hdl.handle.net/11384/85725.

13. Kazemi, Alireza. "Optimal parameter updating and appropriate 4D seismic normalization in seismic history matching of the Nelson field." Thesis, Heriot-Watt University, 2011. http://hdl.handle.net/10399/2474.

Abstract:
History matching of reservoirs is very important in the oil industry because the simulation model is an important tool that can help with management decisions and the planning of future production strategies. Nowadays, time-lapse (4D) seismic data is very useful for better capturing fluid displacement in the reservoir, especially between wells. It is now common to integrate 4D seismic with production data in order to constrain the simulation model to both types of data. This thesis builds on a technique for automatic production and seismic history matching of reservoirs. This technique integrates various tools such as streamline simulation, parameterization via pilot points and Kriging, geo-body updating, a petro-elastic model and the neighbourhood algorithm, all in an automatic framework. All studies in this thesis are applied to the Nelson field, but the approaches used here can be applied to any similar field. The aim of history matching was to identify shale volumes and their distribution by updating three reservoir properties: net:gross, horizontal permeability and vertical permeability. All history matching studies were performed over a six-year production period, with a baseline and one monitor seismic survey available; a forecast of the following three years was then made, with a second monitor for comparison. Various challenges are addressed in this thesis. We introduce a streamline-guided approach in order to efficiently select the regions in the reservoir that have a strong influence on the production activity of the wells and the 4D seismic signature. Updating was performed more effectively compared to an approach where parameters were changed everywhere in the vicinity of the wells. Then, three parameter updating schemes are introduced to effectively combine various reservoir parameters in order to capture the flow behaviour correctly. The observed 4D seismic data used in this study consisted of relative pseudo-impedance, with a different unit compared to the synthetic impedance data. This challenge was addressed by introducing normalization. 4D predictions at the vertical well locations and in full-field simulation cells were used in the normalization study, and we observed different levels of signal-to-noise ratio in the normalized observed 4D maps at the end of the study. We included the normalized 4D maps in history matching of the field and observed that normalization is very important. We also compared the seismic and production history matching studies with a case where seismic was not included in history matching (production-only history matching). The results show that if 4D data is normalized appropriately, the reduction of both seismic and production misfits is better than in the production-only history matching case.
14. Chowdhury, Israt Jahan. "Knowledge discovery from tree databases using balanced optimal search." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/92263/1/Israt%20Jahan_Chowdhury_Thesis.pdf.

Abstract:
This research is a step forward in discovering knowledge from databases of complex structure, such as trees or graphs. Several data mining algorithms are developed based on a novel representation called Balanced Optimal Search for extracting implicit, unknown and potentially useful information, such as patterns, similarities and various relationships, from tree data; these algorithms also prove advantageous in analysing big data. This thesis focuses on analysing unordered tree data, which is robust to data inconsistency, irregularity and swift information changes, and has hence become a popular and widely used data model in the era of big data.
15. Al Hajri, Abdullah Said (Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW). "Logistics technology transfer model." University of New South Wales, 2008. http://handle.unsw.edu.au/1959.4/41469.

Abstract:
A consecutive series of studies on the adoption of logistics technology since 1988 revealed that logistics organizations are not at the frontier when it comes to adopting new technology, and this delayed adoption creates an information gap. With the advent of supply chain management and the strategic position of logistics, the need for accurate and timely information available to logistics executives has become more important than ever before. Given the integrative nature of logistics technology, failure to implement the technology successfully could result in writing off major investments in developing and implementing the technology, or even in abandoning the strategic initiatives underpinned by these innovations. Consequently, the need to employ effective strategies and models to cope with these uncertainties is crucial. This thesis addresses the aspect of uncertainty in implementation success through process and factor research models. The process research approach focuses on the sequence of events in the technology transfer process that occurs over time; it tells the story that explains the degree of association between these sequences and implementation success. Through content analysis, this research gathers, extracts, and categorizes process data from actual stories of logistics technology adoption and implementation in organizations that are published in the literature. The extracted event sequences are then analyzed using optimal matching from natural science and grouped using cluster analysis. Four patterns were revealed that organizations follow to transfer logistics technology, namely: formal minimalist, mutual adaptation, development concerned, and organizational roles dispenser. Factors that contribute to successful implementation in each pattern were defined as the crucial and necessary events that characterized and differentiated each pattern from the others. The factor approach identifies the potential predictors of successful technology implementation and tests the empirical association between predictors and outcomes. This research develops a logistics technology success model. In developing the model, various streams of research were investigated, including logistics, information systems, and organizational psychology. The model is tested using a questionnaire survey study. The data were collected from Australian companies which have recently adopted and implemented logistics technology. The results of partial least squares structural equation modeling provide strong support for the model constructs and valuable insights for logistics/supply chain managers. The last study reports a convergent triangulation study using a multiple case study of three Australian companies which have implemented logistics technology. A within-case and a cross-case analysis of the three cases provide cross-validation for the results of the other two studies. The results provided high predictive validity for the two models. Furthermore, the case study approach was particularly beneficial in explaining and contextualizing the linkages of the factor-based model and in confirming the importance of the crucial events in the process-based model. The thesis concludes with a chapter on research and managerial implications, devoted to logistics/supply chain managers and researchers.
16. Kriwoluzky, Alexander. "Matching DSGE models to data with applications to fiscal and robust monetary policy." Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2009. http://dx.doi.org/10.18452/16052.

Abstract:
This thesis is concerned with three questions: first, how can the effects macroeconomic policy has on the economy be estimated in general? Second, what are the effects of a pre-announced increase in government expenditures? Third, how should monetary policy be conducted if the policymaker faces uncertainty about the economic environment? In the first chapter I suggest estimating the effects of an exogenous disturbance on the economy by considering the parameter distributions of a Vector Autoregression (VAR) model and a Dynamic Stochastic General Equilibrium (DSGE) model jointly. This makes it possible to resolve the major issues a researcher has to deal with when working with a VAR model and a DSGE model: the identification of the VAR model and the potential misspecification of the DSGE model. The second chapter applies the methodology presented in the preceding chapter to investigate the effects of a pre-announced change in government expenditure on private consumption and real wages. The shock is identified by exploiting its pre-announced nature, i.e. the different signs of the responses of endogenous variables during the announcement period and after the realization of the shock. Private consumption is found to respond negatively during the announcement period and positively after the realization. The reaction of real wages is positive on impact and positive for two quarters after the realization. In the last chapter, 'Optimal Policy Under Model Uncertainty: A Structural-Bayesian Estimation Approach', I investigate jointly with Christian Stoltenberg how policy should optimally be conducted when the policymaker is faced with uncertainty about the economic environment. The standard procedure is to specify a prior over the parameter space, ignoring the status of some sub-models. We propose a procedure that ensures that the specified set of sub-models is not discarded too easily. We find that optimal policy based on our procedure leads to welfare gains compared to the standard practice.
17. Berger, Martin [author], and Ulrich Brannolte [academic supervisor]. "Typologiebildung und Erklärung des Aktivitäten-(Verkehrs-)verhaltens – ein Multimethodenansatz unter Verwendung der Optimal Matching Technik" [Typology construction and explanation of activity (travel) behaviour: a multi-method approach using the optimal matching technique]. Weimar: Professur Verkehrsplanung und Verkehrstechnik, 2004. http://d-nb.info/1115727753/34.

18. Diop, Serigne Arona. "Comparing inverse probability of treatment weighting methods and optimal nonbipartite matching for estimating the causal effect of a multicategorical treatment." Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/34507.

Abstract:
Imbalances in covariates between treatment groups are common in observational studies and can bias treatment comparisons. This bias can be corrected using weighting or matching methods. These correction methods have rarely been compared in a setting with a multicategorical treatment (more than 2 categories). We conducted a simulation study to compare an optimal nonbipartite matching method, inverse probability of treatment weighting (IPTW), and a modified, matching-like weighting scheme (matching weights). These comparisons were carried out in a Monte Carlo simulation framework with a three-group exposure variable. A simulation study using real data (plasmode) was also conducted, in which the treatment variable had five categories. Of all the methods compared, matching weights appear to be the most robust according to the mean squared error criterion. The results also show that inverse probability of treatment weighting can sometimes be improved by truncation. Moreover, the performance of weighting depends on the degree of overlap between the treatment groups. The performance of optimal nonbipartite matching, for its part, depends heavily on the maximum distance allowed for forming a pair (the caliper). However, choosing the optimal caliper is not easy and remains an open question. Furthermore, the results obtained with the plasmode simulation were positive, in that a substantial reduction in bias was observed: all methods were able to significantly reduce confounding bias. Before using inverse probability of treatment weighting, it is recommended to check for violations of the positivity assumption and for the existence of overlap between the treatment groups.
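Both weighting schemes compared in the thesis are simple to compute once the generalized propensity scores are available. A hedged Python sketch with simulated three-group data follows; the matching-weight formula shown is one common definition, assumed here rather than quoted from the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 8, 3
ps = rng.dirichlet(np.ones(k), size=n)              # generalized propensity scores
t = np.array([rng.choice(k, p=row) for row in ps])  # treatment actually received

p_received = ps[np.arange(n), t]        # P(T = t_i | X_i)
iptw = 1.0 / p_received                 # inverse probability of treatment weights
iptw_trunc = np.minimum(iptw, np.quantile(iptw, 0.95))  # truncation tames extremes
mw = ps.min(axis=1) / p_received        # matching weights (assumed definition)

for row in zip(t, iptw.round(2), iptw_trunc.round(2), mw.round(2)):
    print(row)
```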
19. Feichtner, Thorsten [author], Bert Hecht [reviewer], and Tobias Brixner [reviewer]. "Optimal Design of Focusing Nanoantennas for Light: Novel Approaches: From Evolution to Mode-Matching." Würzburg: Universität Würzburg, 2016. http://d-nb.info/1147681929/34.

20. Chari, Kartik Seshadri. "Dynamic Modelling and Optimal Control of Autonomous Heavy-duty Vehicles." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291634.

Abstract:
Autonomous vehicles have gained much importance over the last decade owing to their promising capabilities, such as improvement in overall traffic flow, reduction in pollution and elimination of human errors. However, when it comes to long-distance transportation or working in complex isolated environments like mines, factors such as safety, fuel efficiency, transportation cost, robustness, and accuracy become critical. This thesis, developed at the Connected and Autonomous Systems department of Scania AB in association with KTH, focuses on addressing the issues related to fuel efficiency, robustness and accuracy of an autonomous heavy-duty truck used for mining applications. First, in order to improve the state prediction capabilities of the simulation model, a comparative analysis of two dynamic bicycle models was performed. The first model used the empirical PAC2002 Magic Formula (MF) tyre model to generate the tyre forces, and the second used a piecewise linear approximation of the former. In addition, to account for the nonlinearities and time delays in the lateral direction, the steering dynamic equations were empirically derived and cascaded with the vehicle model. The fidelity of these models was tested against real experimental logs, and the best vehicle model was selected by striking a balance between accuracy and computational efficiency. The dynamic bicycle model with piecewise linear approximation of tyre forces proved to tick all the boxes, providing accurate state predictions within the acceptable error range and handling lateral accelerations up to 4 m/s^2. This model also proved to be six times more computationally efficient than the industry-standard PAC2002 tyre model. Furthermore, in order to ensure smooth and accurate driving, several Model Predictive Control (MPC) formulations were tested on clothoid-based Single Lane Change (SLC), Double Lane Change (DLC) and Truncated Slalom trajectories with added disturbances in the initial position, heading and velocities. A linear time-varying spatial-error MPC is proposed, which provides a link between spatial-domain and time-domain analysis. The proposed controller struck a balance between fuel efficiency, achieved by minimising braking and acceleration sequences, and offset-free tracking, while ensuring that the truck reached its destination within the stipulated time irrespective of the added disturbances. Lastly, a comparative analysis between various prediction-simulation model pairs was made, and the best pair was selected in terms of its robustness to parameter changes, simplicity, computational efficiency and accuracy.
21. Abdulaziz, Noor Amal Saud. "Evaluation of Texas Home Instruction for Parents of Preschool Youngsters Program on Reading and Math Achievement for Grades K to 8." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1538753/.

Abstract:
This study evaluated the impact of socioeconomically disadvantaged children's participation in the Texas Home Instruction for Parents of Preschool Youngsters (TX HIPPY) Program on their school readiness and academic achievement. The study used a quasi-experimental design and applied full and optimal propensity score matching (PSM) to address the evaluation concern of the impact of the TX HIPPY program on HIPPY participants' academic achievement compared to non-HIPPY participants. This evaluation targeted former HIPPY participants and tracked them in the Dallas ISD database through grade levels K-8. Data were obtained by administering Istation's Indicators of Progress (ISIP) for kindergarten, TerraNova/SUPERA for grades K-2, and the State of Texas Assessments of Academic Readiness (STAAR) for math and reading for grades 3-8. HIPPY and non-HIPPY groups were matched using propensity score analysis procedures. The evaluation findings show that the TX HIPPY program positively influences kindergarten students to start school ready to learn. The findings for math and reading achievement suggest that HIPPY children scored at the same level as or higher than non-HIPPY children, indicating that the TX HIPPY program has achieved its goal of helping children maintain long-term academic success. However, the evaluation findings also indicated that the impact evaluation framework must be designed with attention to higher-level factors beyond academic achievement that influence children's academic success.
22. Santacruz Muñoz, José Luis. "Error-tolerant Graph Matching on Huge Graphs and Learning Strategies on the Edit Costs." Doctoral thesis, Universitat Rovira i Virgili, 2019. http://hdl.handle.net/10803/668356.

Abstract:
Graphs are abstract data structures used to model real problems with two basic entities: nodes and edges. Each node or vertex represents a relevant point of interest of a problem, and each edge represents the relationship between these points. Nodes and edges can carry attributes to increase the accuracy of the modeled problem; these attributes can range from feature vectors to description labels. Due to this versatility, many applications have been found in fields such as computer vision, biomedicine, network analysis, etc. Graph Edit Distance (GED) has become an important tool in structural pattern recognition, since it allows the dissimilarity of attributed graphs to be measured. The first part of this thesis presents a method to generate pairs of graphs together with upper- and lower-bound distances and a correspondence, at linear computational cost. Through this method, the behaviour of known, or new, sub-optimal error-tolerant graph matching algorithms can be tested against lower and upper bounds on the GED on large graphs, even though the true distance is not available. Next, the thesis focuses on how to measure the dissimilarity between two huge graphs (more than 10,000 nodes), using a new error-tolerant graph matching algorithm called the Belief Propagation algorithm, which has an O(d^3.5 n) computational cost. This thesis also presents a general framework to learn the edit costs involved in GED calculations automatically. We then instantiate this framework in two different models based on neural networks and probability density functions. An exhaustive practical validation on 14 public databases has been performed. This validation shows that accuracy is higher with the learned edit costs than with manually imposed costs or costs automatically learned by previous methods. Finally, we propose an application of the Belief Propagation algorithm to the simulation of muscle mechanics.
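For small graphs, the graph edit distance can be computed exactly with NetworkX, which makes the role of edit costs concrete. This toy sketch uses unit edit costs and is no substitute for the thesis's Belief Propagation algorithm, which targets graphs with more than 10,000 nodes:

```python
import networkx as nx

g1 = nx.Graph([("a", "b"), ("b", "c")])              # path on three nodes
g2 = nx.Graph([("a", "b"), ("b", "c"), ("c", "a")])  # triangle

# Exact GED with default unit costs (exponential time, toy sizes only).
# The unit costs stand in for the edit costs the thesis learns from data.
print(nx.graph_edit_distance(g1, g2))  # 1.0: one edge insertion
```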
23. Le Brigant, Alice. "Probability on the spaces of curves and the associated metric spaces via information geometry; radar applications." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0640/document.

Abstract:
We are concerned with the comparison of the shapes of open smooth curves that take their values in a Riemannian manifold M. To this end, we introduce a reparameterization-invariant Riemannian metric on the infinite-dimensional manifold of these curves, modeled by smooth immersions in M. We derive the geodesic equation and solve the boundary value problem using geodesic shooting. The quotient structure induced by the action of the reparameterization group on the space of curves is studied. Using a canonical decomposition of a path in a principal bundle, we propose an algorithm that computes the horizontal geodesic between two curves and yields an optimal matching. In a second step, restricting to base manifolds of constant sectional curvature, we introduce a discretization of the Riemannian structure on the space of smooth curves, which is itself a Riemannian metric on the finite-dimensional manifold M^(n+1) of "discrete curves" given by n + 1 points. We show the convergence of the discrete model to the continuous model and study the induced geometry. We show results of simulations in the sphere, the plane, and the hyperbolic half-plane. Finally, we give the framework necessary to apply shape analysis of manifold-valued curves to radar signal processing, where locally stationary radar signals are represented by curves in the Poincaré polydisk using information geometry.
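For orientation, a widely used metric in this family for curves in Euclidean space (the thesis works in the more general manifold-valued setting, so this is only a sketch of one standard construction) is the square-root-velocity framework: reparameterizations act on q by composition weighted with the square root of the derivative, and the shape distance is the infimum over that action.

    q(t) = \frac{\dot c(t)}{\sqrt{\|\dot c(t)\|}},
    \qquad
    d([c_1], [c_2]) = \inf_{\gamma \in \mathrm{Diff}^+([0,1])}
        \left\| q_1 - \sqrt{\dot\gamma}\,(q_2 \circ \gamma) \right\|_{L^2}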
APA, Harvard, Vancouver, ISO, and other styles
24

Nguyen, Van Thanh. "Problèmes de transport partiel optimal et d'appariement avec contrainte." Thesis, Limoges, 2017. http://www.theses.fr/2017LIMO0052.

Full text
Abstract:
The manuscript deals with the mathematical and numerical analysis of the optimal partial transport and optimal constrained matching problems. These two problems bring out new unknown quantities, called active submeasures. For the optimal partial transport with Finsler distance costs, we introduce equivalent formulations characterizing the active submeasures, the Kantorovich potential and the optimal flow. In particular, the PDE optimality condition allows us to show the uniqueness of the active submeasures. We then study in detail numerical approximations, for which the convergence of the discretization and numerical simulations are provided. For Lagrangian costs, we derive and rigorously justify characterizations of the solution as well as equivalent formulations. Numerical examples are also given. The rest of the thesis presents the study of optimal constrained matching with the Euclidean distance cost. This problem behaves differently from the partial transport problem. The uniqueness of the solution and equivalent formulations are studied under a geometric condition. The convergence of the discretization and numerical examples are also established. The main tools used in the thesis are combinations of PDE techniques, optimal transport theory and Fenchel-Rockafellar duality theory. For numerical computation, we make use of augmented Lagrangian methods.
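The thesis works at the PDE level, but the defining constraints of optimal partial transport are easy to state in the discrete case. A minimal sketch (illustrative names; scipy's linear-programming solver, not the augmented Lagrangian methods used in the thesis): transport exactly a mass m while using no more than the available source and target masses. The supports of the row and column marginals of the optimal plan are the discrete analogues of the active submeasures discussed above.

    import numpy as np
    from scipy.optimize import linprog

    def partial_ot(a, b, C, m):
        """Discrete optimal partial transport as a linear program.
        a (n,), b (k,): source/target masses; C (n, k): unit costs;
        m: total mass to transport (needs m <= min(a.sum(), b.sum()))."""
        n, k = C.shape
        A_ub, b_ub = [], []
        for i in range(n):                        # row sums <= a_i
            row = np.zeros(n * k); row[i * k:(i + 1) * k] = 1.0
            A_ub.append(row); b_ub.append(a[i])
        for j in range(k):                        # column sums <= b_j
            col = np.zeros(n * k); col[j::k] = 1.0
            A_ub.append(col); b_ub.append(b[j])
        res = linprog(C.ravel(),                  # minimise total transport cost
                      A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      A_eq=np.ones((1, n * k)), b_eq=[m],
                      bounds=(0, None), method="highs")
        return res.x.reshape(n, k), res.fun       # plan and optimal cost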
APA, Harvard, Vancouver, ISO, and other styles
25

Diallo, Ibrahima. "Some topics in mathematical finance: Asian basket option pricing, Optimal investment strategies." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210165.

Full text
Abstract:
This thesis presents the main results of my research in the field of computational finance and portfolios optimization. We focus on pricing Asian basket options and portfolio problems in the presence of inflation with stochastic interest rates.

In Chapter 2, we concentrate upon the derivation of bounds for European-style discrete arithmetic Asian basket options in a Black and Scholes framework. We start from methods used for basket options and Asian options. First, we use the general approach for deriving upper and lower bounds for stop-loss premia of sums of non-independent random variables, as in Kaas et al. [Upper and lower bounds for sums of random variables, Insurance Math. Econom. 27 (2000) 151–168] or Dhaene et al. [The concept of comonotonicity in actuarial science and finance: theory, Insurance Math. Econom. 31(1) (2002) 3–33]. We generalize the methods in Deelstra et al. [Pricing of arithmetic basket options by conditioning, Insurance Math. Econom. 34 (2004) 55–57] and Vanmaele et al. [Bounds for the price of discrete sampled arithmetic Asian options, J. Comput. Appl. Math. 185(1) (2006) 51–90]. Afterwards we show how to derive an analytical closed-form expression for a lower bound in the non-comonotonic case. Finally, we derive upper bounds for Asian basket options by applying techniques as in Thompson [Fast narrow bounds on the value of Asian options, Working Paper, University of Cambridge, 1999] and Lord [Partially exact and bounded approximations for arithmetic Asian options, J. Comput. Finance 10(2) (2006) 1–52]. Numerical results are included and, on the basis of our numerical tests, we explain which method we recommend depending on moneyness and time-to-maturity.

In Chapter 3, we propose some moment-matching pricing methods for European-style discrete arithmetic Asian basket options in a Black & Scholes framework. We generalize the approach of Curran M. (1994) [Valuing Asian and portfolio options by conditioning on the geometric mean price, Management Science, 40, 1705–1711] and of Deelstra G., Liinev J. and Vanmaele M. (2004) [Pricing of arithmetic basket options by conditioning, Insurance: Mathematics & Economics] in several ways. We create a framework that allows for a whole class of conditioning random variables which are normally distributed. We moment match not only with a lognormal random variable but also with a log-extended-skew-normal random variable. We also improve the bounds of Deelstra G., Diallo I. and Vanmaele M. (2008) [Bounds for Asian basket options, Journal of Computational and Applied Mathematics, 218, 215–228]. Numerical results are included and, on the basis of our numerical tests, we explain which method we recommend depending on moneyness and time-to-maturity.
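To make the moment-matching idea concrete, here is a minimal two-moment lognormal sketch for a call on an arithmetic average or basket in a Black & Scholes setting. It is a textbook-style approximation, not the conditioning or extended-skew-normal estimators developed in the chapter, and all names are illustrative.

    import numpy as np
    from scipy.stats import norm

    def lognormal_mm_call(w, fwd, logcov, K, r, T):
        """Price a call on A = sum_i w_i S_i(T) by matching the first two
        moments of A with a lognormal variable. fwd: forwards E[S_i(T)];
        logcov: covariance matrix of log S_i(T) (for an Asian option the
        entries are sigma^2 * min(t_i, t_j))."""
        w, F = np.asarray(w, float), np.asarray(fwd, float)
        M1 = w @ F
        # joint lognormality: E[S_i S_j] = F_i F_j exp(Cov(log S_i, log S_j))
        M2 = (np.outer(w * F, w * F) * np.exp(logcov)).sum()
        s = np.sqrt(np.log(M2 / M1**2))           # matched log-standard deviation
        d1 = (np.log(M1 / K) + 0.5 * s**2) / s
        return np.exp(-r * T) * (M1 * norm.cdf(d1) - K * norm.cdf(d1 - s))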

In Chapter 4, we use the stochastic dynamic programming approach in order to extend Brennan and Xia's unconstrained optimal portfolio strategies by investigating the case in which interest rates and inflation rates follow affine dynamics which combine the model of Cox et al. (1985) [A Theory of the Term Structure of Interest Rates, Econometrica, 53(2), 385–408] and the model of Vasicek (1977) [An equilibrium characterization of the term structure, Journal of Financial Economics, 5, 177–188]. We first derive the nominal price of a zero-coupon bond by using the evolution PDE, which can be solved by reducing the problem to the solution of three ordinary differential equations (ODEs). To solve the corresponding control problems we apply a verification theorem without the usual Lipschitz assumption given in Korn R. and Kraft H. (2001) [A stochastic control approach to portfolio problems with stochastic interest rates, SIAM Journal on Control and Optimization, 40(4), 1250–1269] or Kraft (2004) [Optimal Portfolio with Stochastic Interest Rates and Defaultable Assets, Springer, Berlin].


Doctorat en Sciences

APA, Harvard, Vancouver, ISO, and other styles
26

Barban, Nicola. "Essays on sequence analysis for life course trajectories." Doctoral thesis, Università degli studi di Padova, 2010. http://hdl.handle.net/11577/3421549.

Full text
Abstract:
The thesis is articulated in three chapters in which I explore methodological aspects of sequence analysis for life course studies and present some empirical analyses. In the first chapter, I study the reliability of two holistic methods used in life-course methodology. Using simulated data, I compare the goodness of classification of Latent Class Analysis and Sequence Analysis techniques. I first compare the consistency of the classification obtained via the two techniques using an actual dataset on the life course trajectories of young adults. Then, I adopt a simulation approach to measure the ability of these two methods to correctly classify groups of life course trajectories when specific forms of "random" variability are introduced within pre-specified classes in artificial datasets. In order to do so, I introduce simulation operators that have a life course and/or observational meaning. In the second chapter, I propose a method to study the heterogeneity in life course trajectories. Using a nonparametric approach, I evaluate the association between Optimal Matching distances and a set of categorical variables. Using data from the National Longitudinal Study of Adolescent Health (Add-Health), I study the heterogeneity of early family trajectories in young women. In particular, I investigate whether the OM distances can be partially explained by family characteristics and the geographical context experienced during adolescence. The statistical methodology is a generalization of the analysis of variance (ANOVA) to any metric measure. In the last chapter, I present an application of sequence analysis. Using family transitions from Wave I to Wave IV of Add-Health, I investigate the association between life trajectories and health outcomes at Wave IV. In particular, I am interested in exploring how differences in the timing, quantum and order of family formation transitions are connected to self-reported health, depression and risky behaviors in young women. Using lagged-value regression models, I take into account selection and the effect of confounding variables.
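Since Optimal Matching distances are central here, the following minimal sketch shows the standard dynamic program behind them: the OM distance between two state sequences is the cheapest way to turn one into the other using insertions, deletions (cost indel) and substitutions. The interface is illustrative; applied sequence analyses typically use dedicated packages such as TraMineR.

    def om_distance(s1, s2, sub_cost, indel=1.0):
        """Optimal Matching distance between two state sequences.
        sub_cost[a][b]: substitution cost between states a and b."""
        n, m = len(s1), len(s2)
        D = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            D[i][0] = i * indel                   # delete everything
        for j in range(1, m + 1):
            D[0][j] = j * indel                   # insert everything
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                sub = 0.0 if s1[i-1] == s2[j-1] else sub_cost[s1[i-1]][s2[j-1]]
                D[i][j] = min(D[i-1][j] + indel,      # deletion
                              D[i][j-1] + indel,      # insertion
                              D[i-1][j-1] + sub)      # substitution / match
        return D[n][m]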
APA, Harvard, Vancouver, ISO, and other styles
27

Toubi, Wafa. "Assurance chômage optimale et stabilité de l’emploi." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0309/document.

Full text
Abstract:
The thesis studies the relationships between the recommendations of the optimal Unemployment Insurance (UI) literature and post-unemployment job stability. We focus on one particular dimension of job quality, namely job stability, in a context in which very short-duration job contracts have increased sharply in France since the 2000s. Using job search and matching frameworks, we analyse how the features of the UI system affect job stability. The particularity of our approach is that we account for employees' behaviour, whereas the majority of the literature on optimal UI focuses only on the behaviour of jobless workers. We notably show that job seekers who leave unemployment quickly tend to find unstable jobs. Once employed, they have a greater probability of returning to unemployment because the job-retention efforts they exert are not sufficient.
APA, Harvard, Vancouver, ISO, and other styles
28

Barišin, Ivana. "Optical sub-pixel matching and active tectonics." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:17d42603-1946-49d1-8144-edc0ca0ae501.

Full text
Abstract:
In this thesis I use sub-pixel optical matching, Interferometric Synthetic Aperture Radar (InSAR), and Light Detection and Ranging (lidar) spatial geodetic observations to produce reliable 3D displacement fields caused by co-seismic events and reliable earthquake source models with slip distribution on fault planes. I produce horizontal displacement maps for the 2005 Dabbahu segment, Afar, using SPOT4 satellite images. By combining InSAR descending data and range offsets with optical sub-pixel matching, I produced a vertical displacement map of the event. I attempted to perform an inversion of the dataset obtained by sub-pixel matching, but I found that the datasets are not well suited to typical numerical inversion, and I fit the data with direct dislocation modelling instead. I identify biases and errors that arise from optical sub-pixel matching of satellite images using many horizontal datasets constructed from SPOT5 images of the El Mayor-Cucapah earthquake, and I develop algorithms for removing some of these biases from horizontal displacement maps. Using sub-pixel matching I assess the quality of several DEMs available to me for the study of the El Mayor-Cucapah earthquake. I developed a novel technique for producing vertical displacement maps caused by an earthquake by combining archived pre-event satellite images with post-event lidar. I use this technique to produce a vertical displacement map of the El Mayor-Cucapah earthquake. I produce a source model of the El Mayor-Cucapah earthquake by inverting InSAR datasets. After attempts at a joint inversion of InSAR and optical sub-pixel matching data, I developed code to use Bayesian inversion instead, because of its advantages for joint modelling of datasets. I successfully invert four InSAR datasets on seven fault planes using the Bayesian approach. I found that the results of the Bayesian inversion are very similar to the results of the optimization inversion.
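As background to the sub-pixel matching used throughout, here is a minimal phase-correlation sketch with parabolic peak refinement. It is illustrative only: production correlators for satellite imagery add windowing, masking and bias corrections, and the sign convention depends on which image is taken as the reference.

    import numpy as np

    def subpixel_shift(img1, img2):
        """Estimate the (row, col) shift between two same-sized patches by
        phase correlation, refined to sub-pixel accuracy with a parabola."""
        cross = np.fft.fft2(img1) * np.conj(np.fft.fft2(img2))
        cross /= np.abs(cross) + 1e-12            # normalised cross-power spectrum
        corr = np.real(np.fft.ifft2(cross))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        shift = []
        for axis, p in enumerate(peak):
            n = corr.shape[axis]
            lo, hi = list(peak), list(peak)
            lo[axis], hi[axis] = (p - 1) % n, (p + 1) % n
            cm, c0, cp = corr[tuple(lo)], corr[peak], corr[tuple(hi)]
            denom = 2 * c0 - cm - cp              # parabola through 3 samples
            frac = 0.5 * (cp - cm) / denom if denom != 0 else 0.0
            s = p + frac
            shift.append(s - n if s > n / 2 else s)   # wrap to signed shift
        return tuple(shift)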
APA, Harvard, Vancouver, ISO, and other styles
29

Kim, Taegeun. "Optical Three-Dimensional Image Matching Using Holographic Information." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/28362.

Full text
Abstract:
We present a three-dimensional (3-D) optical image matching technique and location extraction techniques for matched 3-D objects for optical pattern recognition. We first describe the 3-D matching technique based on two-pupil optical heterodyne scanning. A hologram of the 3-D reference object is first created and then represented as one pupil function, with the other pupil function being a delta function. The superposition of each beam modulated by the two pupils generates a scanning beam pattern. This beam pattern scans the 3-D target object to be recognized. The output of the scanning system gives the 2-D correlation of the hologram of the reference object with that of the target object. When the 3-D image of the target object matches that of the reference object, the output of the system generates a strong correlation peak. This theory of 3-D holographic matching is analyzed in terms of two-pupil optical scanning, and computer simulation and optical experiment results are presented to reinforce the developed theory. The second part of the research concerns the extraction of the location of a 3-D image-matched object. The proposed system basically performs a correlation of the hologram of a 3-D reference object and that of a 3-D target object, and hence 3-D matching is possible. However, the system does not directly give the depth location of matched 3-D target objects, because the correlation of holograms is a 2-D correlation and hence not 3-D shift invariant. We propose two methods to extract the location of matched 3-D objects directly from the correlation output of the system. One method uses an optical system that focuses the output correlation pattern along depth, recovering the 3-D location at the focused position. However, this technique has the drawback that only the location of 3-D targets farther away than the 3-D reference object can be extracted. Thus, in this research, we propose another method in which the location of a matched 3-D object can be extracted without the aforementioned drawback. This method applies the Wigner distribution to the power fringe-adjusted filtered correlation output to extract the 3-D location of a matched object. We analyze the proposed method and present computer simulation and optical experiment results.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
30

Charbonneau-Lefort, Mathieu. "Optical parametric amplifiers using chirped quasi-phase-matching gratings." May be available electronically, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Lakemond, Ruan. "Multiple camera management using wide baseline matching." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/37668/1/Ruan_Lakemond_Thesis.pdf.

Full text
Abstract:
Camera calibration information is required in order for multiple camera networks to deliver more than the sum of many single camera systems. Methods exist for manually calibrating cameras with high accuracy. Manually calibrating networks with many cameras is, however, time-consuming, expensive and impractical for networks that undergo frequent change. For this reason, automatic calibration techniques have been vigorously researched in recent years. Fully automatic calibration methods depend on the ability to automatically find point correspondences between overlapping views. In typical camera networks, cameras are placed far apart to maximise coverage; this is referred to as a wide baseline scenario. Finding sufficient correspondences for camera calibration in wide baseline scenarios presents a significant challenge. This thesis focuses on developing more effective and efficient techniques for finding correspondences in uncalibrated, wide baseline, multiple-camera scenarios. The project consists of two major areas of work. The first is the development of more effective and efficient view-covariant local feature extractors. The second involves finding methods to extract scene information using the information contained in a limited set of matched affine features. Several novel affine adaptation techniques for salient features have been developed. A method is presented for efficiently computing the discrete scale space primal sketch of local image features, and a scale selection method was implemented that makes use of the primal sketch. The primal-sketch-based scale selection method has several advantages over existing methods: it allows greater freedom in how the scale space is sampled, enables more accurate scale selection, is more effective at combining different functions for spatial position and scale selection, and leads to greater computational efficiency. Existing affine adaptation methods make use of the second moment matrix to estimate the local affine shape of local image features. In this thesis, it is shown that the Hessian matrix can be used in a similar way to estimate local feature shape. The Hessian matrix is effective for estimating the shape of blob-like structures, but is less effective for corner structures. It is simpler to compute than the second moment matrix, leading to a significant reduction in computational cost. A wide baseline dense correspondence extraction system, called WiDense, is presented in this thesis. It allows the extraction of large numbers of additional accurate correspondences, given only a few initial putative correspondences. It consists of the following algorithms: an affine region alignment algorithm that ensures accurate alignment between matched features; a method for extracting more matches in the vicinity of a matched pair of affine features, using the alignment information contained in the match; and an algorithm for extracting large numbers of highly accurate point correspondences from an aligned pair of feature regions. Experiments show that the correspondences generated by the WiDense system improve the success rate of computing the epipolar geometry of very widely separated views. This new method is successful in many cases where the features produced by the best wide baseline matching algorithms are insufficient for computing the scene geometry.
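A minimal sketch of the Hessian-based shape estimation idea described above (illustrative; the thesis' affine adaptation iterates such estimates): the eigenstructure of the local Hessian of a Gaussian-smoothed image describes an elliptical, blob-like region.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def hessian_shape(image, y, x, sigma=2.0):
        """Local shape estimate from the Hessian at (y, x): eigenvectors give
        the ellipse axes, eigenvalue magnitudes their relative sharpness."""
        Lyy = gaussian_filter(image, sigma, order=(2, 0))[y, x]
        Lxx = gaussian_filter(image, sigma, order=(0, 2))[y, x]
        Lxy = gaussian_filter(image, sigma, order=(1, 1))[y, x]
        H = np.array([[Lxx, Lxy],
                      [Lxy, Lyy]])
        evals, evecs = np.linalg.eigh(H)          # symmetric 2x2 eigendecomposition
        return evals, evecs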
APA, Harvard, Vancouver, ISO, and other styles
32

Feuillâtre, Hélène. "Détermination automatique de l'incidence optimale pour l'observation des lésions coronaires en imagerie rotationnelle R-X." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S039/document.

Full text
Abstract:
The thesis deals with the planning of minimally invasive surgery for coronary artery lesions. The physician performs a coronarography followed by a percutaneous transluminal angioplasty. X-ray rotational angiography makes it possible to visualize the artery lumen under different projection angles over several cardiac cycles. From these 2D projections, a 3D+T reconstruction of the coronary arteries can be obtained. Our goal is to determine automatically, from this 3D+T sequence, the optimal angiographic viewing angle of the stenotic segment. Several steps are proposed to compute the optimal angular position of the C-arm. Firstly, a mosaic-based tree matching algorithm for the 3D+T sequence is proposed to follow the stenotic lesion over the whole cardiac cycle. A pair-wise inexact tree matching is performed to build a tree union between successive trees. Next, these union trees are merged to obtain the mosaic tree, which represents the most complete tree of the sequence. To take into account the non-rigid movement of the coronary arteries during the cardiac cycle and their topology variations due to the 3D reconstruction or segmentation, similarity measures based on hierarchical and geometrical features are used, and artificial nodes are also inserted. With this global tree sequence matching, we secondly propose a new method to determine the optimal viewing angle of the stenotic lesion throughout the cardiac cycle. This 2D angiographic view, which is proposed for three types of region of interest (single segment, multiple segments or bifurcation), is computed from four criteria: the foreshortening, external and internal overlap, and bifurcation opening angle rates. The optimal view shows the segment in its most extended and unobstructed dimension. This 2D view can be optimal either for the deployment of the stent or for catheter guidance (from the root to the lesion). Our different algorithms are evaluated on a real sequence (a CT segmentation) and 41 simulated sequences.
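Of the four criteria, the foreshortening rate has the simplest geometry: it compares the projected length of the 3D segment with its true length for a given viewing direction. A minimal sketch follows (the gantry-angle parameterization is illustrative; real C-arm conventions differ between vendors). A value of 0 means the segment lies in the image plane and is seen at full length; the optimal view minimises this rate together with the overlap criteria.

    import numpy as np

    def foreshortening(p0, p1, primary_deg, secondary_deg):
        """Foreshortening rate of the segment p0->p1 for a viewing direction
        built from two gantry rotations (illustrative angle convention)."""
        a, b = np.radians(primary_deg), np.radians(secondary_deg)
        v = np.array([np.sin(a) * np.cos(b),      # unit viewing direction
                      -np.sin(b),
                      np.cos(a) * np.cos(b)])
        d = np.asarray(p1, float) - np.asarray(p0, float)
        proj = d - (d @ v) * v                    # component seen in the image plane
        return 1.0 - np.linalg.norm(proj) / np.linalg.norm(d)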
APA, Harvard, Vancouver, ISO, and other styles
33

Park, Gwangcheol. "Multiscale deformable template matching for image analysis." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/13737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Dornala, Maninder. "Visualization of Clustering Solutions for Large Multi-dimensional Sequential Datasets." Youngstown State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1525869411092807.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Tsa, Woo-Hu. "Mode-matching method in optical corrugated waveguides with large rectangular groove depth." Thesis, University of Glasgow, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Galindo, Patricio A. "Image matching for 3D reconstruction using complementary optical and geometric information." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0007/document.

Full text
Abstract:
Image matching is a central research topic in computer vision which has mainly been focused on optical aspects. The aim of the work presented herein is the direct use of geometry to complement optical information in 2D matching tasks. First, we focus on global methods based on the calculus of variations. In such methods, occlusions and sharp features raise difficult challenges: in these scenarios the result is largely determined by the contribution of the regularizer. Based on a geometric characterization of this behaviour, we formulate a variational matching method that steers grid lines away from problematic regions. While variational methods provide well-behaved results, local methods based on match propagation provide results that adapt closely to varying 3D structures, although choppy in nature. Therefore, we present a novel method to propagate matches using local information about surface regularity, correcting 3D positions along with the corresponding 2D matches.
APA, Harvard, Vancouver, ISO, and other styles
37

Pitcher, Courtney Richard. "Matching optical coherence tomography fingerprint scans using an iterative closest point pipeline." Master's thesis, Faculty of Science, 2021. http://hdl.handle.net/11427/33923.

Full text
Abstract:
Identifying people from their fingerprints is based on well-established technology. However, a number of challenges remain, notably overcoming the low feature density of the surface fingerprint and suboptimal feature matching. 2D contact-based fingerprint scanners offer low security performance, are easy to spoof, and are unhygienic. Optical Coherence Tomography (OCT) is an emerging technology that allows a 3D volumetric scan of the finger surface and its internal microstructures. The junction between the epidermis and dermis - the internal fingerprint - mirrors the external fingerprint. The external fingerprint is prone to degradation from wear, age, or disease. The internal fingerprint does not suffer these deficiencies, which makes it a viable candidate zone for feature extraction. We develop a biometrics pipeline that extracts and matches features from and around the internal fingerprint to address the deficiencies of contemporary 2D fingerprinting. Eleven different feature types are explored. For each type an extractor and an Iterative Closest Point (ICP) matcher is developed. ICP is modified to operate in a Cartesian-toroidal space. Each of these features is matched with ICP against another matcher, if one existed. The feature with the highest Area Under the Curve (AUC) on a Receiver Operating Characteristic curve, 0.910, is a composite of 3D minutiae and the mean local cloud, followed by our geometric properties feature, with an AUC of 0.896. By contrast, 2D minutiae extracted from the internal fingerprint achieved an AUC of 0.860. These results make our pipeline useful in both access control and identification applications. ICP offers a low False Positive Rate and can match ∼30 composite 3D minutiae a second on a single-threaded system, which is ideal for access control. Identification systems require a high True Positive and True Negative Rate; in addition, time is a less stringent requirement. New identification systems would benefit from the introduction of an OCT-based pipeline, as all the 3D features we tested provide more accurate matching than 2D minutiae. We also demonstrate that ICP is a viable alternative for matching traditional 2D features (minutiae). This method offers a significant improvement over the popular Bozorth3 matcher, with an AUC of 0.94 for ICP versus 0.86 for Bozorth3 when matching a highly distorted dataset generated with SFinGe. This compatibility means that ICP can easily replace other matchers in existing systems to increase security performance.
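For reference, a minimal rigid ICP sketch in plain Euclidean space (the pipeline above modifies ICP to a Cartesian-toroidal space; this version only shows the generic loop of nearest-neighbour matching followed by a closed-form Kabsch alignment):

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(src, dst, iters=50, tol=1e-6):
        """Align point set src to dst with rigid ICP; returns the moved points
        and the final mean nearest-neighbour distance."""
        src = np.asarray(src, float).copy()
        dst = np.asarray(dst, float)
        tree, prev = cKDTree(dst), np.inf
        for _ in range(iters):
            dists, idx = tree.query(src)           # nearest-neighbour matches
            err = dists.mean()
            if abs(prev - err) < tol:
                break
            prev, matched = err, dst[idx]
            mu_s, mu_d = src.mean(0), matched.mean(0)
            H = (src - mu_s).T @ (matched - mu_d)  # cross-covariance (Kabsch)
            U, _, Vt = np.linalg.svd(H)
            D = np.eye(H.shape[0])
            D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
            R = Vt.T @ D @ U.T
            src = src @ R.T + (mu_d - R @ mu_s)    # apply best rigid motion
        return src, prev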
APA, Harvard, Vancouver, ISO, and other styles
38

Rankine, Luke. "Newborn EEG seizure detection using adaptive time-frequency signal processing." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16200/1/Luke_Rankine_Thesis.pdf.

Full text
Abstract:
Dysfunction in the central nervous system of the neonate is often first identified through seizures. The difficulty in detecting clinical seizures, which involves the observation of physical manifestations characteristic of newborn seizures, has placed greater emphasis on the detection of newborn electroencephalographic (EEG) seizure. The high incidence of newborn seizure has resulted in considerable mortality and morbidity rates in the neonate. Accurate and rapid diagnosis of neonatal seizure is essential for proper treatment and therapy. This has impelled researchers to investigate possible methods for the automatic detection of newborn EEG seizure. This thesis is focused on the development of algorithms for the automatic detection of newborn EEG seizure using adaptive time-frequency signal processing. The assessment of newborn EEG seizure detection algorithms requires large datasets of non-seizure and seizure EEG which are not always readily available and often hard to acquire. This has led to the proposition of realistic models of newborn EEG which can be used to create large datasets for the evaluation and comparison of newborn EEG seizure detection algorithms. In this thesis, we develop two simulation methods which produce synthetic newborn EEG background and seizure. The simulation methods use nonlinear and time-frequency signal processing techniques to allow for the demonstrated nonlinear and nonstationary characteristics of the newborn EEG. Atomic decomposition techniques incorporating redundant time-frequency dictionaries are exciting new signal processing methods which deliver adaptive signal representations or approximations. In this thesis we have investigated two prominent atomic decomposition techniques, matching pursuit and basis pursuit, for their possible use in an automatic seizure detection algorithm. In our investigation, it was shown that matching pursuit generally provided the sparsest (i.e. most compact) approximation for various real and synthetic signals over a wide range of signal approximation levels. For this reason, we chose matching pursuit (MP) as our preferred atomic decomposition technique for this thesis. A new measure, referred to as structural complexity, which quantifies the level or degree of correlation between signal structures and the decomposition dictionary, was proposed. Using the change in structural complexity, a generic method of detecting changes in signal structure was proposed. This detection methodology was then applied to the newborn EEG for the detection of state transitions (i.e. non-seizure to seizure state) in the EEG signal. To optimize the seizure detection process, we developed a time-frequency dictionary that is coherent with the newborn EEG seizure state, based on time-frequency analysis of newborn EEG seizure. It was shown that using the new coherent time-frequency dictionary and the change in structural complexity, we can detect the transition from non-seizure to seizure states in synthetic and real newborn EEG. Repetitive spiking in the EEG is a classic feature of newborn EEG seizure. Therefore, the automatic detection of spikes can be fundamental to the detection of newborn EEG seizure. The capacity of two adaptive time-frequency signal processing techniques to detect spikes was investigated. It was shown that a relationship between the EEG epoch length and the number of repetitive spikes governs the ability of both matching pursuit and the adaptive spectrogram to detect repetitive spikes.
However, it was demonstrated that this law was less restrictive for the adaptive spectrogram, which was shown to outperform matching pursuit in detecting repetitive spikes. The method of adapting the window length associated with the adaptive spectrogram used in this thesis was the maximum correlation criterion. It was observed that for the time instants where signal spikes occurred, the optimal window lengths selected by the maximum correlation criterion were small. Therefore, spike detection directly from the adaptive window optimization method was demonstrated and also shown to outperform matching pursuit. An automatic newborn EEG seizure detection algorithm was proposed based on the detection of repetitive spikes using the adaptive window optimization method. The algorithm shows excellent performance with real EEG data. A comparison of the proposed algorithm with four well-documented newborn EEG seizure detection algorithms is provided. The results of the comparison show that the proposed algorithm has significantly better performance than the existing algorithms (our proposed algorithm achieved a good detection rate (GDR) of 94% and a false detection rate (FDR) of 2.3%, compared with the leading algorithm, which only produced a GDR of 62% and an FDR of 16%). In summary, the novel contribution of this thesis to the fields of time-frequency signal processing and biomedical engineering is the successful development and application of sophisticated algorithms based on adaptive time-frequency signal processing techniques to the solution of automatic newborn EEG seizure detection.
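A minimal sketch of the matching pursuit loop referred to throughout (a generic greedy version over a finite dictionary; the thesis uses time-frequency dictionaries, and names here are illustrative):

    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms):
        """Greedy MP: repeatedly pick the unit-norm dictionary column most
        correlated with the residual and remove its contribution."""
        residual = np.asarray(signal, float).copy()
        coeffs = np.zeros(dictionary.shape[1])
        for _ in range(n_atoms):
            inner = dictionary.T @ residual        # correlations with all atoms
            k = int(np.argmax(np.abs(inner)))
            coeffs[k] += inner[k]
            residual -= inner[k] * dictionary[:, k]
        return coeffs, residual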
APA, Harvard, Vancouver, ISO, and other styles
39

Rankine, Luke. "Newborn EEG seizure detection using adaptive time-frequency signal processing." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16200/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Ju, Hui. "A Novel Approach to Robust LiDAR/Optical Imagery Registration." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1370262649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Chosson, Elie. "Le Revenu de Solidarité Active (RSA) au prisme de ses catégories formelles : pour une évaluation critique du dispositif." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAE008/document.

Full text
Abstract:
The French « Revenu de Solidarité Active » (RSA) generated a great deal of interest in the scientific community, focused mainly on its impact on labor force participation and on the working poor. In this context, the thesis develops a critical assessment of the device: we show its inability to accommodate the contradictory position in which beneficiaries are placed. They are confronted with structurally difficult conditions for the valorization of labor power, while at the same time the RSA organizes, through various means, the centrality of employment. Firstly, the thesis discusses the categories of work constructed by the critical Marxism of Moishe Postone and by Hannah Arendt. Thanks to this theoretical approach, we understand that the RSA redefines beneficiaries' labour statuses around a staging of the need to return to work. Simultaneously, we show theoretically and empirically the contradictory position of work in contemporary capitalism: a source of social wealth, certainly, but also undermined by increasingly difficult conditions for the valorization of labor power. Secondly, we implement a follow-up of a cohort of beneficiary households in the department of Isère between 2010 and 2012. Descriptive statistical analysis and the modeling of mobilities and trajectories lead us to observe the extreme diversity of individual paths. In addition to the temporary uses of the RSA, which constitute the majority, we note that paths are erratic, and when they show stability it is often within the margins of the labor market. Finally, we show that the RSA fails to gather the great diversity of beneficiaries behind employment as a uniform standard.
APA, Harvard, Vancouver, ISO, and other styles
42

Daengngam, Chalongrat. "Second-Order Nonlinear Optical Responses in Tapered Optical Fibers with Self-Assembled Organic Multilayers." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/77068.

Full text
Abstract:
Owing to the centrosymmetric structure of silica, the critical optical material of an optical fiber cannot possess a second-order nonlinear optical susceptibility, Χ(²), which excludes silica fibers from many potential applications. Here, we theoretically and experimentally demonstrate a new technique to generate a large and thermodynamically stable second-order nonlinearity in tapered silica optical fibers without breaking the centrosymmetry of the silica glass. The nonlinearity is introduced by surface layers with high polar ordering, fabricated by a novel hybrid covalent/ionic self-assembly multilayer technique. Despite the overall rotational symmetry of the nonlinear fiber, we observe significant second-harmonic generation (SHG), with a ~400–500-fold enhancement of the SHG power compared to traditional tapers. Phase matching for an SHG process in second-order nonlinear tapered fibers is also realized by the compensation of waveguide modal dispersion with material chromatic dispersion, which occurs only for submicron tapers where the modal dispersion is large. In addition, quasi-phase-matching for a nonlinear taper can be accomplished by introducing a periodic pattern into the nonlinear film coating. We use UV laser ablation for the controlled removal of particular nonlinear film segments on a taper surface in order to produce a Χ(²) grating structure. A resulting SHG enhancement from quasi-phase-matching is observed over a broadband spectrum of the pump light, mainly due to the non-uniform shape of the taper waveguide. Laser ablation is a clean and fast technique able to produce well-defined patterns of polymer films on either flat or curved substrate geometries. With surface layers containing reactive functional groups, e.g. primary amines, we demonstrate that the resulting patterned film obtained from laser ablation can be used as a template for further self-assembly of nanoparticles with high selectivity. A pattern feature size down to ~2 μm or smaller can be fabricated using this approach. We also discuss preliminary results on a novel technique to further improve the spatial accuracy of selective self-assembly of nanoparticles at an unprecedented level. Different types of nanoparticles are joined in order to form well-defined, molecule-like superstructures with nanoscale accuracy and precision. The technique is based on selective surface functionalization of photosensitive molecules coated on metallic nanoparticles, utilizing enhanced two-photon photocleavage at the plasmonically active sites (hot spots) of the nanoparticles in resonance with an applied electromagnetic wave. As a result, the surface functional groups at the nanoparticle hot spots differ from those in other areas, allowing other kinds of nanoparticles to self-assemble at the hot spots with a high degree of selectivity.
Ph. D.
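The first-order quasi-phase-matching period implied by the Χ(²) grating approach above follows from cancelling the wavevector mismatch of SHG; a minimal sketch (the dispersion values must come from the actual taper, so the inputs are illustrative):

    def qpm_period_shg_um(lam_pump_um, n_fund, n_sh):
        """First-order QPM period for SHG. The grating vector 2*pi/Lambda must
        equal dk = k_2w - 2*k_w = (4*pi/lam)*(n_2w - n_w), so
        Lambda = lam / (2 * (n_2w - n_w))."""
        return lam_pump_um / (2.0 * (n_sh - n_fund))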
APA, Harvard, Vancouver, ISO, and other styles
43

Feaver, Ryan K. "Cascaded Orientation-Patterned Gallium Arsenide Optical Parametric Oscillator for Improved Longwave Infrared Conversion Efficiency." University of Dayton / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1493206535730182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Alibakhshikenari, M., B. S. Virdee, L. Azpilicueta, C. H. See, Raed A. Abd-Alhameed, A. A. Althuwayb, F. Falcone, I. Huyen, T. A. Denidni, and E. Limiti. "Optimum power transfer in RF front end systems using adaptive impedance matching technique." Nature Publishing Group, 2021. http://hdl.handle.net/10454/18508.

Full text
Abstract:
Matching the antenna's impedance to the RF front-end of a wireless communications system is challenging, as the impedance varies with the surrounding environment. Autonomously matching the antenna to the RF front-end is therefore essential to optimize power transfer and thereby maintain the antenna's radiation efficiency. This paper presents a theoretical technique for automatically tuning an LC impedance matching network that compensates for the antenna mismatch presented to the RF front-end. The proposed technique converges to a matching point without the need for complex mathematical modelling of a system comprising non-linear control elements. Digital circuitry is used to implement the required matching circuit. Reliable convergence is achieved within the tuning range of the LC network using control loops that can independently control the LC impedance. An algorithm based on the proposed technique was used to verify its effectiveness with various antenna loads. The mismatch error of the technique is less than 0.2%. The technique enables speedy convergence (< 5 µs) and is highly accurate for autonomous adaptive antenna matching networks.
This work is partially supported by RTI2018-095499-B-C31, funded by Ministerio de Ciencia, Innovación y Universidades, Gobierno de España (MCIU/AEI/FEDER, UE), by the innovation programme under grant agreement H2020-MSCA-ITN-2016 SECRET-722424, and by financial support from the UK Engineering and Physical Sciences Research Council (EPSRC) under grant EP/E022936/1.
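For context, the fixed-frequency design equations that an adaptive LC tuner effectively re-solves as the load changes are the classic L-network relations; a minimal sketch for purely resistive terminations (one of several possible topologies, names illustrative):

    import numpy as np

    def l_match(R_source, R_load, f_hz):
        """Lossless L-network matching the smaller of two resistances up to
        the larger one at frequency f (low-pass form: series L, shunt C)."""
        R_hi, R_lo = max(R_source, R_load), min(R_source, R_load)
        Q = np.sqrt(R_hi / R_lo - 1.0)            # loaded Q of the network
        X_series = Q * R_lo                       # series reactance on the R_lo side
        X_shunt = R_hi / Q                        # shunt reactance across R_hi
        w = 2.0 * np.pi * f_hz
        return {"L_series_H": X_series / w,
                "C_shunt_F": 1.0 / (w * X_shunt)}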
APA, Harvard, Vancouver, ISO, and other styles
45

Ragain, James Carlton. "Matching the optical properties of direct esthetic dental restorative materials to those of human enamel and dentin." The Ohio State University, 1998. http://catalog.hathitrust.org/api/volumes/oclc/48036279.html.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 1998.
Advisor: William M. Johnson, Oral Biology Program. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
46

Strömqvist, Gustav. "Nonlinear response in engineered optical materials." Doctoral thesis, KTH, Laserfysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-92221.

Full text
Abstract:
Material and structure engineering are increasingly employed in active optical media, in this context defined as media capable of providing laser and/or optical parametric gain. For laser materials, the main aim of the engineering is to tailor the absorption and emission cross sections in order to optimise laser performance. At the same time, the engineering also results in a collateral modification of the material's nonlinear response. In the first part of this work, the nonlinear index of refraction is characterised for two crystallographic forms of laser-ion-doped and undoped double-tungstate crystals. These laser crystals have broad gain bandwidths, in particular when doped with Yb3+. As shown in this work, the crystals also have large Kerr nonlinearities, with values that vary significantly for different chemical compositions of the crystals. The combination of a broad gain bandwidth and a high Kerr nonlinearity makes the laser-ion-doped double tungstates excellent candidates for the generation of ultrashort laser pulses by Kerr-lens mode-locking. The second part of the work relates to applications of engineered second-order nonlinear media, here in particular periodically poled KTiOPO4 crystals. Periodic structure engineering of second-order nonlinear crystals on a submicrometre scale opens up the realisation of novel nonlinear devices. By the use of quasi-phase-matching in these structures, it is possible to efficiently downconvert a pump wave into two counterpropagating parametric waves, which leads to a device called a mirrorless optical parametric oscillator. The nonlinear response in these engineered submicrometre structures is such that the parametric wave that propagates in the opposite direction to the pump automatically has a narrow bandwidth, whereas the parametric wave that propagates with the pump is essentially a frequency-shifted replica of the pump wave. The unusual spectral properties and the tunabilities of mirrorless optical parametric oscillators are investigated.
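The submicrometre periods mentioned above follow directly from momentum conservation for a mirrorless OPO, where the idler counterpropagates with the pump; a minimal sketch (refractive indices must come from the material's Sellmeier data, so the inputs are illustrative):

    def mopo_qpm_period_um(lam_p, lam_s, lam_i, n_p, n_s, n_i):
        """First-order QPM period for a mirrorless OPO: the grating vector must
        supply K = k_p - k_s + k_i (backward idler), so
        Lambda = 1 / (n_p/lam_p - n_s/lam_s + n_i/lam_i), wavelengths in um.
        Unlike the copropagating case, the k's add rather than nearly cancel,
        which is what forces the period below a micrometre."""
        return 1.0 / (n_p / lam_p - n_s / lam_s + n_i / lam_i)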
QC 20120330
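For context (standard quasi-phase-matching theory, not text from the thesis): in the mirrorless geometry described above, the pump and signal copropagate while the idler counterpropagates, so the grating vector of the periodic structure must supply nearly the sum of the pump and idler momenta,

\[
\frac{2\pi}{\Lambda} = k_p - k_s + k_i, \qquad k_j = \frac{2\pi n_j}{\lambda_j}, \qquad \omega_p = \omega_s + \omega_i ,
\]

in contrast to copropagating QPM, where the grating only bridges the small residual mismatch \(k_p - k_s - k_i\). This is why a submicrometre period \(\Lambda\) is required for the mirrorless optical parametric oscillator, while ordinary QPM devices use periods of tens of micrometres.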
APA, Harvard, Vancouver, ISO, and other styles
47

Campolmi, Alessia. "Essays on open economy, inflation and labour markets." Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7367.

Full text
Abstract:
In recent years a growing literature has developed DSGE open-economy models with market imperfections and nominal rigidities, the so-called "New Open Economy Macroeconomics". Within this class of models, the first chapter analyses whether the monetary authority should target Consumer Price Index (CPI) inflation or domestic inflation; it is shown that introducing monopolistic competition in the labour market together with nominal wage rigidities rationalises CPI inflation targeting. The second chapter introduces search and matching frictions in the labour market and relates differences in labour market structures across European countries to differences in inflation volatility across those same countries. The last chapter uses a two-country model with oil in the production function and price and wage rigidities to relate movements in wage and price inflation, real wages and GDP growth to oil price changes.
APA, Harvard, Vancouver, ISO, and other styles
48

Henriksson, Markus. "Nanosecond tandem optical parametric oscillators for mid-infrared generation." Licentiate thesis, KTH, Physics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4261.

Full text
Abstract:

This thesis discusses a new scheme for generating radiation in the mid-infrared spectral region, especially the 3.5-5 µm range. The scheme uses established Nd3+ lasers at 1.06 µm and down-conversion in nonlinear optical crystals. The down-conversion is performed by two optical parametric oscillators (OPOs) in series. The second OPO is a classical OPO using a zinc germanium phosphide (ZGP) crystal. ZGP is the best nonlinear material available for the 4-8 µm spectral range, but it absorbs below 2 µm. The new development presented in this thesis is the OPO used to convert the 1.06 µm laser radiation to a suitable OPO pump near 2 µm.

The OPO uses a type I quasi-phase-matched crystal, which gives access to high nonlinearities and avoids walk-off. The problem with type I OPOs close to degeneracy is the broad bandwidth of the generated radiation, which reduces the efficiency of a second OPO. This has been solved with a spectrally selective cavity using a volume Bragg grating as output coupler. Unlike other bandwidth-limiting schemes, this introduces no intracavity losses, and thus efficient OPO operation is achievable.

Narrow-linewidth (~0.5 nm) OPO operation has been achieved with periodically poled LiNbO3 (PPLN) and periodically poled KTiOPO4 (PPKTP) while locking the signal wavelength at 2008 nm and simultaneously generating an idler at 2264 nm. A high-average-power PPLN OPO with 36 % conversion efficiency and 47 % slope efficiency is reported. Operation very close to degeneracy at 2128 nm, with the narrowband signal and idler peaks separated by 0.6 nm, was demonstrated in a PPKTP OPO. Both the signal at 2008 nm and the combined signal and idler around 2128 nm from the PPKTP OPOs have been used to demonstrate efficient pumping of a ZGP OPO. The maximum demonstrated conversion efficiency from 1 µm to the mid-IR is 7 %, with a slope efficiency of 10 %. This is not quite as high as what has been presented by other authors, but the experiments reported here have not yet exploited the optimum efficiency of the new scheme; relatively simple improvements are expected to give a significant increase in conversion efficiency.
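As a quick consistency check (added here, not part of the thesis abstract), the quoted signal and idler wavelengths satisfy photon-energy conservation for the 1.06 µm pump:

\[
\frac{1}{\lambda_s} + \frac{1}{\lambda_i} = \frac{1}{\lambda_p}:
\qquad \frac{1}{2008\ \mathrm{nm}} + \frac{1}{2264\ \mathrm{nm}} \approx \frac{1}{1064\ \mathrm{nm}},
\]

and at exact degeneracy both waves sit at \(2\lambda_p \approx 2128\) nm, the wavelength quoted for the near-degenerate PPKTP OPO.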

APA, Harvard, Vancouver, ISO, and other styles
49

Myrén, Niklas. "Components based on optical fibers with internal electrodes." Licentiate thesis, KTH, Physics, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1685.

Full text
Abstract:

The topic of this thesis is the development of devices for telecom applications based on poled optical fibers. The focus is on two specific functions: wavelength conversion and optical switching.

Optical switching is demonstrated in a poled optical fiber at telecom wavelengths (~1.55 µm). The fiber has two holes running along the core in which electrodes are inserted. The fiber device is made electro-optically active with a poling process in which a strong electric field is recorded in the fiber at a temperature of 270 °C. The fiber is then put in one arm of a Mach-Zehnder interferometer, and by applying a voltage across the two electrodes in the fiber, the refractive index is modulated and the optical signal switched from one output port to the other. So far the lowest switching voltage achieved is ~1600 V, which is too high for a commercial device, but by optimizing the design of the fiber and the poling process a switching voltage as low as 50 V is aimed for.
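The switching mechanism in the paragraph above follows the standard Mach-Zehnder transfer function (textbook material, not taken from the thesis). If we identify the quoted ~1600 V switching voltage with the half-wave voltage \(V_\pi\),

\[
P_{\mathrm{out}} = P_{\mathrm{in}} \cos^2\!\left(\frac{\pi}{2}\,\frac{V}{V_\pi}\right),
\]

so lowering the target switching voltage to 50 V amounts to reducing \(V_\pi\) through the fiber design and poling optimization mentioned above.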

A method to deposit a thin silver electrode inside the holes of an optical fiber is also demonstrated, together with a new way of creating periodic electrodes by periodically ablating the silver film electrode inside the holes. The periodic electrodes can be used to create a quasi-phase-matched (QPM) nonlinearity in the fiber, which is useful for increasing the efficiency of a nonlinear process such as wavelength conversion. Poling of a fiber with silver electrodes showed a huge increase in the nonlinearity, which could be due to a resonant enhancement caused by silver nanoclusters.

Keywords: poling, twin-hole fiber, fiber electrodes, silver film electrodes, silver diffusion, quasi-phase matching, optical switching, frequency conversion, optical modulation

APA, Harvard, Vancouver, ISO, and other styles
50

Levenius, Martin. "Optical Parametric Devices in Periodically Poled LiTaO3." Doctoral thesis, KTH, Kvantelektronik och -optik, QEO, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134915.

Full text
Abstract:
Optical parametric frequency conversion based on quasi-phase matching (QPM) in nonlinear optical crystals is a powerful technique for generating coherent radiation in wavelength ranges spanning from the mid-infrared (mid-IR) to the blue, displaying low thermal load and high efficiency. This thesis shows how QPM in one-dimensional (1D) or two-dimensional (2D) lattices can be employed to engineer novel devices for parametric downconversion in the IR, affording freedom in designing both the spectral and angular properties of the parametric output. Experimental demonstrations of parametric devices are supported by theoretical modelling of the nonlinear conversion processes.

In particular, broadband parametric downconversion has been investigated in 1D QPM lattices, through degenerate downconversion close to the point of zero group-velocity dispersion. Ultra-broadband optical parametric generation (OPG) with 185 THz bandwidth (at 10 dB), spanning more than one octave from 1.1 to 3.7 µm, has been achieved in periodically poled 1 mol% MgO-doped near-stoichiometric LiTaO3 (MgSLT) with a 25 µm QPM period, pumped at 860 nm. Such broadband gain is of high interest for ultrashort optical pulse amplification, with applications in high-harmonic generation, ultrafast spectroscopy and laser ablation. Furthermore, the detrimental impact of parasitic upconversion, which creates dips in the OPG spectrum, has been investigated. By altering the pump pulse duration, energy can be backconverted to create peaks at the involved OPG wavelengths, offering a possible tool to enhance broadband parametric gain spectra.

The engineering of the angular properties of a parametric output benefits greatly from 2D QPM, which is investigated in this thesis through the specific example of hexagonally poled MgSLT. It is demonstrated how two OPG processes, supported by a single 2D QPM device, can exhibit angularly and spectrally degenerate signals (idlers). This degeneracy results in a coherent coupling between the two OPG processes and a spectrally degenerate twin-beam output in the mid-IR (near-IR). 2D QPM devices exhibiting such coherently coupled downconversion processes can find applications as compact sources of entangled photon pairs. The thesis further illustrates the design freedom of 2D QPM through the demonstration of a device supporting multiple parametric processes, generating multiple beams spanning from the mid-IR to the blue spectral regions.

QC 20131204
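A small numerical check (added for clarity, not from the thesis): with an 860 nm pump, degenerate downconversion places signal and idler at

\[
\lambda_s = \lambda_i = 2\lambda_p = 1720\ \mathrm{nm},
\]

which lies inside the reported 1.1-3.7 µm OPG span, as expected for operation centred on the degeneracy point near zero group-velocity dispersion.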

APA, Harvard, Vancouver, ISO, and other styles