Dissertations / Theses on the topic 'Object constancy'

To see the other types of publications on this topic, follow the link: Object constancy.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 35 dissertations / theses for your research on the topic 'Object constancy.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Atherton, Christine J. "The neurobiology of object constancy." Thesis, Bangor University, 2005. https://research.bangor.ac.uk/portal/en/theses/the-neurobiology-of-object-constancy(3f31a74c-3acb-42f2-8941-967e61ad8bac).html.

Full text
Abstract:
'Object constancy' is the name given to the brain's ability to overcome the myriad environmental obstacles to visual perception and produce a stable, consistent internal representation of object shape. Changes in object orientation represent one such confound. It can be inferred from the time taken to recognise misoriented objects that we encode specific object views based on our experience of those objects and their typical orientations ('viewpoint-dependent recognition'). Such studies also suggest that we may recognise certain objects in a manner that is not dependent on their orientation ('viewpoint-invariant recognition'). Further studies indicate that the time to resolve two angularly disparate shapes ('mental rotation') increases as a function of their angular disparity. It is hypothesised, based on these findings, that viewpoint-dependent recognition and mental rotation share a common mechanism for transforming the global stimulus percept into alignment, but that viewpoint-invariant recognition is achieved by some other, non-transformational means. This thesis presents studies that examine the cortical correlates of viewpoint-dependent and viewpoint-invariant object recognition using novel objects to eliminate the confounding effects of prior experience. It also presents a study that directly compares the cortical correlates of mental rotation, viewpoint-dependent and viewpoint-invariant recognition. Further comparison of these object constancy processes is then made using electrophysiological markers of visuospatial transformation. The findings of these studies indicate that viewpoint-dependent recognition and mental rotation recruit a bilateral parietal-premotor network for the manipulation of global stimulus percepts, hypothesised to be the same mechanism as that used for physical object manipulation and prehension.
Viewpoint-invariant recognition does not appear to recruit such a mechanism, and this process appears to be less expensive in terms of cognitive resources than transformational object constancy mechanisms. Thus, implementation of a viewpoint-invariant mechanism to recognise misoriented objects is preferable, but may not be possible where stimulus features are few or ambiguous. In recognising misoriented objects, viewpoint-dependent and viewpoint-invariant mechanisms initially proceed in parallel, but successful recognition of object invariant features may be sufficient to terminate the viewpoint-dependent mechanism.
APA, Harvard, Vancouver, ISO, and other styles
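The linear dependence of recognition time on angular disparity described in the abstract above is usually summarised by a straight-line fit of response time against rotation angle. A minimal illustrative sketch, using hypothetical data rather than values from the thesis:

```python
# Illustrative only: fit RT = intercept + slope * angle by ordinary
# least squares, the standard summary of mental-rotation data.

def fit_linear(angles, rts):
    """Least-squares fit of rts = a + b * angles; returns (a, b)."""
    n = len(angles)
    mean_x = sum(angles) / n
    mean_y = sum(rts) / n
    sxx = sum((x - mean_x) ** 2 for x in angles)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(angles, rts))
    b = sxy / sxx           # slope in ms per degree
    a = mean_y - b * mean_x  # intercept in ms
    return a, b

# Hypothetical response times (ms) at 0-180 degrees of misorientation,
# made perfectly linear for illustration.
angles = [0, 45, 90, 135, 180]
rts = [520, 610, 700, 790, 880]
a, b = fit_linear(angles, rts)
print(f"intercept = {a:.0f} ms, slope = {b:.0f} ms/deg")
```

A steeper slope indicates a greater per-degree cost, consistent with a transformational (rotation-like) mechanism; a flat slope would suggest viewpoint-invariant recognition.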
2

Craddock, Matthew Peter. "Comparing the attainment of object constancy in haptic and visual object recognition." Thesis, University of Liverpool, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.539615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Beigpour, Shida. "Illumination and Object Reflectance Modeling." Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/113551.

Full text
Abstract:
Surface reflectance modeling is an important key to scene understanding. An accurate reflectance model based on the laws of physics allows us to achieve realistic and physically plausible results. Using such a model, a more profound knowledge of the interaction of light with object surfaces can be established, which proves crucial to a variety of computer vision applications. Due to the high complexity of reflectance models, the vast majority of existing computer vision applications base their methods on simplifying assumptions, such as Lambertian reflectance or uniform illumination, to be able to solve their problems. However, in real-world scenes, objects tend to exhibit more complex reflections (diffuse and specular) and are furthermore affected by the characteristics and chromaticity of the illuminants. In this thesis, we incorporate a more realistic reflection model into computer vision applications. To address such a complex physical phenomenon, we extend state-of-the-art object reflectance models by introducing a Multi-Illuminant Dichromatic Reflection model (MIDR). Using MIDR we are able to model and decompose the reflectance of an object with complex specularities under multiple illuminants in the presence of shadows and inter-reflections. We show that this permits us to perform realistic re-coloring of objects lit by colored lights and multiple illuminants. Furthermore, we propose a "local" illuminant estimation method in order to model scenes with non-uniform illumination (e.g., an outdoor scene with a blue sky and a yellow sun, a scene with indoor lighting combined with outdoor lighting through a window, or any other case in which two or more lights with distinct colors illuminate different parts of the scene). The proposed method takes advantage of a probabilistic, graph-based model and solves the problem by re-defining estimation as an energy minimization problem. This method provides local illuminant estimates that improve greatly over state-of-the-art color constancy methods. Moreover, we captured our own multi-illuminant dataset, which consists of complex scenes and illumination conditions both outdoors and in the laboratory. We show the improvement achieved using our method over state-of-the-art methods for local illuminant estimation. We demonstrate that having a more realistic and accurate model of the scene illumination and object reflectance greatly improves the quality of many computer vision and computer graphics tasks. We show examples of improved automatic white balance, scene relighting, and object re-coloring. The proposed theory can also be employed to improve color naming, object detection, recognition, and segmentation, which are among the most popular computer vision trends.
APA, Harvard, Vancouver, ISO, and other styles
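For context on the model family named in the abstract above: MIDR extends the classic single-illuminant dichromatic reflection model, in which each pixel is a weighted sum of a body (diffuse) term and a surface (specular) term. A minimal sketch of that classic model with hypothetical colours (the thesis's multi-illuminant formulation is considerably more elaborate):

```python
# Dichromatic reflection model: I = m_b * c_b + m_s * c_s, where c_b is
# the object (body) colour, c_s the illuminant colour reflected
# specularly, and m_b, m_s geometric scale factors. Values are hypothetical.

def dichromatic_pixel(m_b, c_b, m_s, c_s):
    """Synthesize an RGB pixel from body and surface components."""
    return [m_b * b + m_s * s for b, s in zip(c_b, c_s)]

def decompose(pixel, c_b, c_s):
    """Recover (m_b, m_s) by least squares over the three channels,
    via the normal equations of the 3x2 system [c_b c_s] @ m = pixel."""
    bb = sum(b * b for b in c_b)
    ss = sum(s * s for s in c_s)
    bs = sum(b * s for b, s in zip(c_b, c_s))
    pb = sum(p * b for p, b in zip(pixel, c_b))
    ps = sum(p * s for p, s in zip(pixel, c_s))
    det = bb * ss - bs * bs
    return (pb * ss - ps * bs) / det, (ps * bb - pb * bs) / det

c_b = [0.8, 0.2, 0.1]   # reddish object colour (hypothetical)
c_s = [1.0, 1.0, 1.0]   # white illuminant (hypothetical)
pixel = dichromatic_pixel(0.7, c_b, 0.3, c_s)
m_b, m_s = decompose(pixel, c_b, c_s)
```

Decomposing a pixel this way is what makes re-coloring possible: replace c_b with a new object colour and re-synthesize while keeping the recovered m_b and m_s.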
4

De, Simone Luca. "Tell it to the hand: Attentional modulation in the identification of misoriented chiral objects." Doctoral thesis, SISSA, 2015. http://hdl.handle.net/20.500.11767/3919.

Full text
Abstract:
Research in the field of cognitive neuroscience and neuropsychology on spatial cognition and mental imagery has increased considerably over the last few decades. While at the beginning of the twentieth century studying imagery was considered an object of derision – "sheer bunk" (Watson, 1928) – at present, imagery researchers have successfully developed models and improved behavioral and neurophysiological measures (e.g., Kosslyn et al., 2006). Mental rotation constituted a major advance in terms of behavioral measures sensitive to imaginative operations executed on visual representations (e.g., Shepard & Cooper, 1982). The linear relationship between response times and the angular disparity of the images allowed a quantitative estimate of imagery processes. The experiments described in the present thesis were motivated by the intent to continue and extend the understanding of such fascinating mental phenomena. The present work took its initial steps from the adoption of a behavioral paradigm, the hand laterality judgment task, as a privileged tool for studying motor imagery in healthy individuals and brain-damaged patients. The similarity with mental rotation tasks and the implicit nature of the task made it the best candidate for testing hypotheses regarding the mental simulation of body movements. In this task, response times are linearly affected by the angular departures at which the hand pictures are shown, as in mental rotation, and their distributions are asymmetric between left and right hands. Drawing on these task features, a widely held view posits that laterality judgment of rotated hand pictures requires participants to imagine hand-arm movements, although they receive no instruction to do so (e.g., Parsons, 1987a; Parsons, 1994). In Chapter 1, I provided a review of the relevant literature on visual and motor imagery. Particular aspects of the mental rotation literature are also explored.
In Chapter 2, I examined the hand laterality task and the vast literature of studies that employed this task as a means to test motor imagery processes. An alternative view to the motor imagery account is also discussed (i.e., the disembodied account). In Chapter 3, I exploited the hand laterality task and a visual laterality task (Tomasino et al., 2010) to test motor and visual imagery abilities in a group of healthy aged individuals. In Chapter 4, I described an alternative view that has been proposed by others to explain the pattern of RTs in the hand laterality task: the multisensory integration account (Grafton & Viswanathan, 2014). In this view, hand laterality is recognized by pairing information between the seen hand's visual features and the observer's felt own hand. In Chapter 5, I tested and found evidence for a new interpretation of the particular configuration of response times in the hand laterality task. I demonstrated a spatial compatibility effect for rotated pictures of hands, given by the interaction between the direction of stimulus rotation (clockwise vs. counterclockwise) and the laterality of the motor response. These effects changed following temporal dynamics attributed to shifts of spatial attention. In the same chapter, I conducted other psychophysics experiments that confirmed the role of spatial attention and ruled out the view of multisensory integration as the key aspect in determining the asymmetries of the response time distributions. In Chapter 6, I conducted a study with patients suffering from unilateral neglect, in which they performed the hand laterality task and a visual laterality task. The findings indicated that patients failed to integrate visual information with spatially incompatible responses irrespective of the type of task, depending on egocentric stimulus-response spatial codes. A general discussion is presented in Chapter 7.
APA, Harvard, Vancouver, ISO, and other styles
5

Miles, Geoffrey. "Untir'd spirits and formal constancy : Shakespeare's Roman plays and formal constancy." Thesis, University of Oxford, 1987. http://ora.ox.ac.uk/objects/uuid:c5830cc5-e1a4-4efa-ae40-98dc4d7eb651.

Full text
Abstract:
Critics who have noted the importance of Stoic constancy in Shakespeare's Roman plays have failed to recognise the full complexity of the idea. It has two forms, both derived from the Stoic principle of homologia (consistency), and centred on the ideal of being always the same: Seneca's constantia sapientis, the rocklike or godlike virtue of the Stoic sage who is unmoved and unchanged by external circumstances; and Cicero's decorum (De officiis I), virtue as the consistent playing of an appropriate part. Seneca is more concerned with heroic self-sufficiency, Cicero with social virtue, but both forms of the ideal contain a tension between concern for inner truth and external appearances. In the late sixteenth century Stoic constancy becomes a subject of fierce debate as it is revived by the Neostoics, who stress the opposition of constancy and "opinion." Shakespeare's view of this debate may derive particularly from Montaigne, who moves from a Neostoic position to a sceptical critique of constancy as unattainable by inconstant man, and as less desirable than self-knowledge and flexibility. Reading North's Plutarch with these themes in mind, Shakespeare sees in the lives of Brutus, Antony, and Coriolanus an Aristotelian pattern of ideal, defective, and excessive constancy - a pattern which he modifies, in the light of his understanding of Seneca, Cicero, and Montaigne, in the three Roman plays. He explores the tension which exists between the Senecan and Ciceronian forms of constancy, and indeed within each of them: a tension between heroic Stoic virtue ("untir'd spirits") and public role-playing ("formal constancy"). Julius Caesar shows Roman constancy as essentially "formal," resting on pretence and self-deception; in Rome, ironically, constancy depends on "opinion." Coriolanus, by taking constancy to an extreme, demonstrates the self-destructive contradictions within it.
Antony and Cleopatra, by contrast, embrace a Montaigne-like ideal of "infinite variety" and inconsistent decorum; Antony fails, but Cleopatra achieves in death a paradoxical fusion of constancy and mutability.
APA, Harvard, Vancouver, ISO, and other styles
6

Ling, Yazhu. "The colour perception of natural objects : familiarity, constancy and memory." Thesis, University of Newcastle Upon Tyne, 2006. http://hdl.handle.net/10443/639.

Full text
Abstract:
Perceived object colour tends to stay constant under changes in illumination. This phenomenon is called colour constancy. Colour constancy is an essential component of colour perception and is typically studied in the laboratory via asymmetric colour matching experiments, in which the observer views two colours under two different illuminations side by side and makes matches between them. This situation is unlike colour constancy in the real world, which must typically involve a comparison between the colour one views and the colour one remembers - in other words, colour memory must be involved. Furthermore, most colour constancy studies use two-dimensional Mondrian images as experimental stimuli. These stimuli enable easy computer control of colour but exclude most natural perceptual cues, such as binocular disparity, 3D luminance shading, mutual reflection, surface texture, and glossy highlights, all of which may contribute to colour perception. My aim in this project is to study the colour perception of real objects in a more natural environment. To do so, I have developed an experimental setup which preserves the advantages conferred by easy computer-driven control of colour as well as the natural binocular and monocular cues to 3D shape. The setup also permits the use of real solid objects as stimuli, and the manipulation of their apparent surface colour as well as the background illumination. Thus, using this setup, I have been able to employ both 2D and 3D natural objects as stimuli and investigate aspects of colour perception related to colour constancy and colour memory as well as object familiarity. In developing and analysing these experiments, I have also introduced a new index of colour constancy which explicitly incorporates colour memory.
My experiments reveal the following main principles: 1) colour constancy relies on colour memory, and is as good as colour memory allows; 2) colour and shape perception interact in both object similarity and discrimination tasks, indicating that colour and shape cannot be studied completely independently of each other; 3) object familiarity affects colour perception, for both foreground and background objects; 4) object familiarity also affects colour perception at perceptual levels, as measured by the reaction times and the range of appropriate colours accepted for an object.
APA, Harvard, Vancouver, ISO, and other styles
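The abstract above mentions a new colour constancy index incorporating memory. The thesis's own definition is not given here, but such indices commonly take a "Brunswik ratio" form, where 1 means perfect constancy and 0 means none; a sketch with hypothetical chromaticity coordinates:

```python
# Generic Brunswik-ratio-style colour constancy index (illustrative;
# not the memory-based index defined in the thesis).

def constancy_index(match, ideal_match, no_constancy_match):
    """1 - (observer's error from the ideal match) / (error of a
    hypothetical zero-constancy observer). Euclidean distances."""
    d_err = sum((m - i) ** 2 for m, i in zip(match, ideal_match)) ** 0.5
    d_max = sum((n - i) ** 2
                for n, i in zip(no_constancy_match, ideal_match)) ** 0.5
    return 1.0 - d_err / d_max

# Hypothetical chromaticity coordinates under an illuminant change:
ideal = (0.30, 0.33)      # a perfectly constant match
zero = (0.38, 0.41)       # what a non-constant observer would pick
observer = (0.32, 0.35)   # the observer's actual match
ci = constancy_index(observer, ideal, zero)
```

A memory-based variant would replace the ideal match with the observer's remembered colour, so that constancy is scored relative to what colour memory allows, as the abstract's first conclusion suggests.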
7

Yu, Ying. "Visual Appearances of the Metric Shapes of Three-Dimensional Objects: Variation and Constancy." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1592254922173432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Verneque, Felipe de Almeida. "Críticas da pesquisa educacional brasileira: presença constante do empirismo." Universidade do Estado do Rio de Janeiro, 2013. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=7655.

Full text
Abstract:
Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (funding agency).
The central theme of this investigation, developed through bibliographic research, is the practice of educational research in Brazil. The main objective was to present analyses of the theoretical and methodological aspects of this practice. The research proceeded from a conceptual discussion of the theoretical and methodological foundations of scientific inquiry in education. The questions the dissertation raises about these theoretical and methodological assumptions indicate the presence of empiricism as an epistemological option in educational research practice. With special emphasis on critiques of the empirical perspective in efforts to understand educational reality, the dissertation seeks to identify theoretical and methodological weaknesses in educational research. Finally, in the face of these issues, another epistemological perspective for thinking about research practice in education was presented, drawing on the contributions of Miriam Limoeiro Cardoso (1976, 1978, 1990). In general terms, throughout the dissertation, questions and discussions were raised concerning the process of knowledge production through research in the area of education, with regard to its theoretical, methodological, and epistemological aspects.
APA, Harvard, Vancouver, ISO, and other styles
9

Nakamura-Mather, Mika. "Notions of Home: Constant, Fluid, and Mobile." Thesis, Griffith University, 2017. http://hdl.handle.net/10072/370354.

Full text
Abstract:
I have spent more than a quarter of a century living outside my homeland of Japan. In recent visits to Japan, I have noticed that my sense of belonging is growing stronger. This has caused me to question whether this is simply nostalgia or something deeper. I wonder whether my prolonged exposure to other cultures has enhanced my appreciation of my own, or whether I am losing my cultural identity and the idea of home is becoming more attractive because it feels familiar and safe. Through my studio work, I seek to juxtapose the present with the past, to examine the role that memory plays in our notions of home, and particularly to discover how my memories influence my emotional response to geographical and cultural dislocation. In this exegesis, I examine the nature of memory and the idea that home is not merely a place on a map. My research investigates whether a particular material associated with a specific place—in my case, wood—can be fundamental to developing a better understanding of who we are, where we come from, and why we call one place home over another.
Thesis (PhD Doctorate)
Doctor of Visual Arts (DVA)
Queensland College of Art
Arts, Education and Law
APA, Harvard, Vancouver, ISO, and other styles
10

Burt, George. "Towards a theory of volitional strategic change : the role of transitional objects in constancy and change." Thesis, University of Strathclyde, 2001. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=24283.

Full text
Abstract:
Scenario planning is a management approach for dealing with uncertainty in the business environment. The intention of the approach is to allow the management of organisations to better understand and manage their environment. There are many examples of scenario planning in the practitioner literature suggesting that the approach works in practice. There is, however, little empirical evidence to support or explore the validity of such claims. The origin of this thesis was an exploratory study to understand the impact of interventions using scenario planning in the context of small and medium-sized enterprises. In conducting the empirical research, the researcher can reflect on what has become a 'learning journey', which identifies the cognitive processes managers employ to manage change arising from such interventions. The research identifies managerial recipes and transitional objects that allow volitional strategic change to occur. That is, existing managerial understanding, based on past experience and success, acts as a bridge from the existing world to a new world, without which change cannot be rationalised and management would be incapacitated. I have called this the 'upframed recipe', expressing its elements of lasting validity, the transitional object.
APA, Harvard, Vancouver, ISO, and other styles
11

Vurro, Milena. "The role of chromatic texture and 3D shape in colour discrimination, memory colour, and colour constancy of natural objects." Thesis, University of Newcastle upon Tyne, 2011. http://hdl.handle.net/10443/1251.

Full text
Abstract:
The primary goal of this work was to investigate colour perception in a natural environment and to contribute to the understanding of how cues to familiar object identity influence colour appearance. A large number of studies on colour appearance employ 2D uniformly coloured patches, discarding perceptual cues, such as binocular disparity, 3D luminance shading, mutual reflection, and glossy highlights, that are an integral part of a natural scene. Moreover, natural objects possess specific cues that aid our recognition (shape, surface texture, or colour distribution). The aim of the first main experiment presented in this thesis was to understand the effect of shape on (1) memory colour under constant and varying illumination and on (2) colour constancy for uniformly coloured stimuli. The results demonstrated the existence of a range of memory colours associated with a familiar object, the size of which was strongly object-shape-dependent. For all objects, memory retrieval was significantly faster for object-diagnostic shapes than for generic shapes. Based on two successive controls, the author suggests that shape cues to object identity affect the range of memory colour in proportion to the original object's chromatic distribution. The second experiment examined the subject's accuracy and precision in adjusting a stimulus colour to its typical appearance. Independently of the illuminant, results showed that memory colour accuracy and precision were enhanced by the presence of chromatic textures, diagnostic shapes, or 3D configurations, with a strong interaction between the diagnosticity and the dimensionality of the shape. Hence, more cues to object identity and more natural stimuli help observers access their colour information from memory. A direct relationship was demonstrated between chromatic surface representation, the object's physical properties, and the identifiability and dimensionality of shape on memory colour accuracy, suggesting high-level mechanisms.
Chromatic textures facilitated colour constancy. The third and fourth experiments tested the subject's ability to discriminate between two chromatic stimuli in a simultaneous and a successive 2AFC task, respectively. Simultaneous discrimination threshold performance for polychromatic surfaces was due only to low-level stimulus mechanisms, whereas in successive discrimination, i.e. when memory is involved, high-level mechanisms were established. The effect of shape was strongly task-dependent and was modulated by the object's memory colour. These findings, together with the strong interaction between chromatic cues and shape cues to object identity, lead to the conclusion that high-level mechanisms linked to object recognition facilitated both tasks. Hence, the current thesis presents new findings on memory colour and colour constancy in a natural context and demonstrates the effect of high-level mechanisms in chromatic discrimination as a function of cues to object identity such as shape and texture. This work contributes to a deeper understanding of colour perception and object recognition in the natural world.
APA, Harvard, Vancouver, ISO, and other styles
12

Wallace, D. S. "Electron-lattice coupling in conjugated polymers." Thesis, University of Oxford, 1989. http://ora.ox.ac.uk/objects/uuid:49bef560-dfdc-43b2-b4e2-fdf6e4ee763d.

Full text
Abstract:
The results obtained by this new method are shown to be able to account for most of the shortcomings of the earlier methods, in particular their failure satisfactorily to explain the quenching of luminescence in cis-polyacetylene and their poor predictions of the relative strengths of the two photoinduced absorption peaks in polythiophene. The ability of trans-polyacetylene (t-PA) to support a novel type of dynamic defect known as a breather is also verified. A quantitative estimate is made of the mobility of the fundamental defect in t-PA, known as a soliton, and this is in good agreement with experiment.
APA, Harvard, Vancouver, ISO, and other styles
13

Hu, Mengchen. "Measurements of OH* and CH* in a constant volume combustion bomb." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:ab2fba4d-d9fb-417b-993b-1b756b9ad7d6.

Full text
Abstract:
Combustion monitoring in internal combustion engines or burners is a difficult task due to the harsh environment for any sensor; optical diagnostics are therefore very attractive for these types of application. Chemiluminescence measurement is one of the most common and most promising ways of implementing optical diagnostics in combustion monitoring applications because the measured signal, emitted naturally during combustion, has the potential to be an indirect measure of combustion-relevant parameters, such as the equivalence ratio and heat release rate. In hydrocarbon combustion, the most common chemiluminescence emitters are OH*, CH*, C2* and CO2*. This thesis focuses on the measurement of OH* and CH* chemiluminescence, whose sensitivities are affected by temperature, pressure, equivalence ratio and stretch rate. To measure OH* and CH* chemiluminescence, an existing constant volume combustion vessel has been refurbished, along with the sub-systems for fuel delivery, ignition, LabView control, data acquisition, and optical detection using a pair of photo-multiplier tubes (PMTs), interference filters and a series of apertures. Accurately modelling the optical setup is essential for the CH* and OH* chemiluminescence measurements in the combustion bomb. To achieve this goal, a narrow field-of-view system has been selected, as it enables the elimination of photons scattered from the internal surfaces. A calibration of the PMTs converts the measurements into absolute OH* and CH* chemiluminescence in watts. Measurements from a combustion bomb are versatile and accurate, since they determine the OH* and CH* chemiluminescence as a function of temperature and pressure from a single experiment. The calculation of the normalised OH* and CH* chemiluminescence (against mass burned rate) was based on a multi-zone combustion model and the measured pressure record from the vessel.
NIICS (Normalised Intensity Integrated Calculation System) has been created to fetch data from the multi-zone model, the optical model, and the experimental measurements, to match them up by interpolation, and to normalise the OH* and CH* chemiluminescence. NIICS also allows the user to select data uncorrupted by noise and heat transfer. The chosen data (in this case, the CH*/OH* chemiluminescence ratio) have been fitted using multivariate fitting and correlation analysis. This formulation can be used to indicate the local equivalence ratio of premixed methane / air and iso-octane / air flames over the local pressure range 0.5 – 20 bar, the unburned gas temperature range 450 – 600 K, and the equivalence ratio range 0.8 – 1.1. The chemical-kinetic mechanisms of the absolute OH* and CH* chemiluminescence have been investigated by studying the influence of the equivalence ratio, unburned gas temperature, and local pressure. It should be pointed out that two confounding observations occur, i.e. a discontinuity in the chemiluminescence along the isentropes, and chemiluminescence continuing after the end of combustion. This led to further spectroscopic analysis. The study concluded with spectroscopic measurements using an Ocean Optics spectrometer and a Princeton ICCD spectrometer. It was found that broadband CO2* is responsible for the two confounding observations. In addition, CH* chemiluminescence has been shown to be very faint in premixed laminar methane / air flames; hence the CH*/OH* formula in essence quantifies the CO2*/OH* ratio as a function of pressure, temperature, and equivalence ratio. The 'CH* chemiluminescence' can characterise the background CO2*, so as to provide a practical way to probe the feasibility of absolute OH* as an indicator of combustion-relevant parameters in the future.
APA, Harvard, Vancouver, ISO, and other styles
14

Hinton, Nathan Ian David. "Measuring laminar burning velocities using constant volume combustion vessel techniques." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:5b641b04-8040-4d49-a7e8-aae0b0ffc8b5.

Full text
Abstract:
The laminar burning velocity is an important fundamental property of a fuel-air mixture at given conditions of temperature and pressure. Knowledge of burning velocities is required as an input for combustion models, including engine simulations, and for the validation of chemical kinetic mechanisms. It is also important to understand the effect of stretch upon laminar flames, both to correct for stretch and determine true (unstretched) laminar burning velocities, and for modelling combustion where stretch rates are high, such as in turbulent combustion models. A constant volume combustion vessel has been used in this work to determine burning velocities using two methods: a) flame speed measurements during the constant pressure period, and b) analysis of the pressure rise data. Consistency between these two techniques has been demonstrated for the first time. Flame front imaging and linear extrapolation of flame speed have been used to determine unstretched flame speeds at constant pressure and burned gas Markstein lengths. Measurement of the pressure rise during constant volume combustion has been used along with a numerical multi-zone combustion model to determine burning velocities at elevated temperatures and pressures as the unburned gas ahead of the spherically expanding flame front is compressed isentropically. These burning velocity data are correlated using a 14-term correlation to account for the effects of equivalence ratio, temperature, pressure and fraction of diluents. This correlation has been modified from an existing 12-term correlation to represent more accurately the dependence of burning velocity upon temperature and pressure. A number of fuels have been tested in the combustion vessel. Biogas (mixtures of CH4 and CO2) has been tested for a range of equivalence ratios (0.7–1.4), with initial temperatures of 298, 380 and 450 K, initial pressures of 1, 2 and 4 bar and CO2 fractions of up to 40% by mole. 
Hydrous ethanol has been tested at the same conditions (apart from 298 K due to the need to vaporise the ethanol), and for fractions of water up to 40% by volume. Binary, ternary and quaternary blends of toluene, n-heptane, ethanol and iso-octane (THEO) have been tested for stoichiometric mixtures only, at 380 and 450 K, and 1, 2 and 4 bar, to represent surrogate gasoline blended with ethanol. For all fuels, correlation coefficients have been obtained to represent the burning velocities over wide ranging conditions. Common trends are seen, such as the reduction in burning velocity with pressure and increase with temperature. In the case of biogas, increasing CO2 results in a decrease in burning velocity, a shift in peak burning velocity towards stoichiometric, a decrease in burned gas Markstein length and a delayed onset of cellularity. For hydrous ethanol the reduction in burning velocity as H2O content is increased is more noticeably non-linear, and whilst the onset of cellularity is delayed, the effect on Markstein length is minor. Chemical kinetic simulations are performed to replicate the conditions for biogas mixtures using the GRI 3.0 mechanism and the FlameMaster package. For hydrous ethanol, simulations were performed by Carsten Olm at Eötvös Loránd University, using the OpenSMOKE 1D premixed flame solver. In both cases, good agreement with experimental results is seen. Tests have also been performed using a single cylinder optical engine to compare the results of the hydrous ethanol tests with early burn combustion, and a good comparison is seen. Results from tests on THEO fuels are compared with mixing rules developed in the literature to enable burning velocities of blends to be determined from knowledge of that of the pure components alone. 
A variety of rules are compared, and it is found that in most cases, the best approximation is found by using the rule in which the burning velocity of the blend is represented by weighting by the energy fraction of the individual components.
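The energy-fraction mixing rule singled out at the end of this abstract can be sketched as follows. The component properties are illustrative numbers, not data from the thesis:

```python
def blend_burning_velocity(components):
    """Energy-fraction-weighted mixing rule: u_blend = sum_i xE_i * u_i,
    where xE_i is component i's fraction of the blend's total heating value."""
    energy = [x * lhv for x, lhv, u in components]   # mole fraction * molar LHV
    total = sum(energy)
    return sum(e / total * u for e, (x, lhv, u) in zip(energy, components))

# Hypothetical binary blend: (mole fraction, molar heating value kJ/mol, u_l cm/s)
components = [(0.5, 5100.0, 39.0),   # illustrative aromatic-like component
              (0.5, 1277.0, 41.0)]   # illustrative ethanol-like component
u = blend_burning_velocity(components)
```

Because the weights are energy fractions rather than mole fractions, the higher-heating-value component dominates the blend's predicted burning velocity.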
APA, Harvard, Vancouver, ISO, and other styles
15

Higgins, Benjamin David Robert. "We have a constant will to publish : the publishers of Shakespeare's First Folio." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:ab876515-5984-46a5-8bf0-8346165fb583.

Full text
Abstract:
This thesis is a cultural history of the publishing businesses that financed Shakespeare's First Folio. The thesis argues that by 1623 each of the four businesses that formed the Folio syndicate had developed an influential reputation in the book trade, and that these reputations were crucial to the cultural positioning of the Folio on publication. Taking its lead from a dynamic new field of study that has been called 'cultural bibliography', the thesis investigates the histories and publishing strategies of the business owned by the stationers William and Isaac Jaggard, who are usually thought of as the leading members of the Folio project, as well as those owned by William Aspley, John Smethwick, and Edward Blount. Through detailed analysis of the publishing strategies of each stationer, the thesis puts forward new theories about how these men influenced the reception of the Folio by transferring onto it their brands, and the expectations of their readerships. The business of each Folio stationer was like a stage with an audience assembled around it, waiting for the next production to emerge. This thesis identifies the publishing activities that attracted the audiences of the Jaggards, Blount, Smethwick, and Aspley, and ultimately suggests the Folio was granted significant legitimacy through the collaboration of these men. After an introductory chapter that locates the thesis in its scholarly field, the first chapter tells the history of syndicated book publishing in England, and reviews what we know of the pre-production process of the First Folio, taking a particular interest in how the publishing syndicate formed. The following chapters then form a series of case studies of the four publishing businesses, reviewing the apprenticeships and careers of each stationer before suggesting how those careers created a context of meaning for the Folio. 
These case studies focus on the authoritative reference publishing of the Jaggards, the religious publishing of William Aspley, the geographical location of John Smethwick's publishing business beside the Inns of Court, and the cultural achievements of Edward Blount. In conclusion the thesis explores the idea that it was the unique partnership of these businesses that consecrated the Folio as an emblem of literary taste.
APA, Harvard, Vancouver, ISO, and other styles
16

Hill, Rory Anthony Daniel. "Local, loyal and constant? : on the dynamism of 'terroir' in sustainable agriculture." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:392a4f20-7660-4b54-b7b1-c333bb12c922.

Full text
Abstract:
'Terroir' is a concept that is used in France, and increasingly elsewhere, to evoke character and quality in food and drink in relation to the place it comes from. In this thesis, I investigate how terroir has attained its present-day economic value and cultural resonance; how it is subject to multiple forms of articulation across France; and how it is put to use as part of the philosophies and practices of environmentally sustainable modes of production. I use cultural and historical modes of enquiry and I draw upon interviews, participant observation, discourse and archive analysis carried out during fieldwork in three production chains in eastern France: wine production in Burgundy, walnut production in the Isère valley, and Reblochon cheese production in the Alps. In the course of this thesis, I elucidate the cultural significance and epistemology of the concept, and argue that propositions for terroir consist of both specific geographical extent and historical density of explanation; that the rhetorical assembly of stories about terroir permits claims for continuity in production and tradition; and that the adoption of organic and biodynamic methods of farming troubles inherited understandings of what terroir is, through the intervention of the lively propensities of biotic actors. This is a story about food, farming and culture in France that I tell to critically examine the local, loyal and constant predicates of terroir, and to make an original contribution to our understanding of the cultural and historical background to the French and European systems of geographical protection in food and drink.
APA, Harvard, Vancouver, ISO, and other styles
17

Hedrich, Monika. "Human colour perception : a psychophysical study of human colour perception for real and computer-simulated two-dimensional and three-dimensional objects." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Austin, Anthony P. "Some new results on, and applications of, interpolation in numerical computation." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:11c16937-4a59-494d-a06f-6d27b634e2f3.

Full text
Abstract:
This thesis discusses several topics related to interpolation and how it is used in numerical analysis. It begins with an overview of the aspects of interpolation theory that are relevant to the discussion at hand before presenting three new contributions to the field. The first new result is a detailed error analysis of the barycentric formula for trigonometric interpolation in equally-spaced points. We show that, unlike the barycentric formula for polynomial interpolation in Chebyshev points (and contrary to the main view in the literature), this formula is not always stable. We demonstrate how to correct this instability via a rewriting of the formula and establish the forward stability of the resulting algorithm. Second, we consider the problem of trigonometric interpolation in grids that are perturbations of equally-spaced grids in which each point is allowed to move by at most a fixed fraction of the grid spacing. We prove that the Lebesgue constant for these grids grows at a rate that is at most algebraic in the number of points, thus answering questions put forth by Trefethen and Weideman about the robustness of numerical methods based on trigonometric interpolation in points that are uniformly distributed but not equally-spaced. We use this bound to derive theorems about the convergence rate of trigonometric interpolation in these grids and also discuss the related question of quadrature. Specifically, we prove that if a function has V ≥ 1 derivatives, the Vth of which is Hölder continuous (with a Hölder exponent that depends on the size of the maximum allowable perturbation), then the interpolants converge uniformly to the function at an algebraic rate; larger values of V lead to more rapid convergence. A similar statement holds for the corresponding quadrature rule. 
We also consider what analogue, if any, there is for trigonometric interpolation of the famous 1/4 theorem of Kadec from sampling theory that restricts the size of the perturbations one can make to the integers and still be guaranteed to have a set of stable sampling for the Paley-Wiener space. We present numerical evidence suggesting that in the discrete case, the 1/4 threshold takes the form of a threshold for the boundedness of a "2-norm Lebesgue constant" and does not appear to have much significance in practice. We believe that these are the first results regarding this problem to appear in the literature. While we do not believe the results we establish are the best possible quantitatively, they do (rigorously) capture the main features of trigonometric interpolation in perturbations of equally-spaced grids. We make several conjectures as to what the optimal results may be, backed by extensive numerical results. Finally, we consider a new application of interpolation to numerical linear algebra. We show that recently developed methods for computing the eigenvalues of a matrix by discretizing contour integrals of its resolvent are equivalent to computing a rational interpolant to the resolvent and finding its poles. Using this observation as the foundation, we develop a method for computing the eigenvalues of real symmetric matrices that enjoys the same advantages as contour integral methods with respect to parallelism but employs only real arithmetic, thereby cutting the computational cost and storage requirements in half.
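As a minimal sketch of the object analysed above, the barycentric formula for trigonometric interpolation in an odd number of equally spaced points can be written down directly. This illustrates the formula itself (csc weights, as in Berrut/Henrici), not the thesis's stability analysis or corrected rewriting:

```python
import numpy as np

def trig_bary(xk, fk, x):
    """Barycentric trigonometric interpolation in an odd number of
    equally spaced points x_k = 2*pi*k/n on [0, 2*pi)."""
    n = len(xk)
    assert n % 2 == 1, "this variant assumes an odd number of points"
    s = np.sin((x - xk) / 2.0)
    if np.any(np.abs(s) < 1e-14):          # x coincides with a node
        return fk[np.argmin(np.abs(s))]
    w = (-1.0) ** np.arange(n) / s          # (-1)^k * csc((x - x_k)/2)
    return np.dot(w, fk) / np.sum(w)

n = 7
xk = 2 * np.pi * np.arange(n) / n
f = lambda x: np.cos(x) + np.sin(2 * x)    # trig polynomial of degree 2 <= (n-1)/2
val = trig_bary(xk, f(xk), 0.3)
```

Since the test function is a trigonometric polynomial of degree at most (n-1)/2, the interpolant reproduces it to rounding error at any evaluation point.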
APA, Harvard, Vancouver, ISO, and other styles
19

More, Joshua N. "Algorithms and computer code for ab initio path integral molecular dynamics simulations." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:b8ca7471-21e3-4240-95b1-8775e5d6c08f.

Full text
Abstract:
This thesis presents i-PI, a new path integral molecular dynamics code designed to capture nuclear quantum effects in ab initio electronic structure calculations of condensed phase systems. This software has an implementation of estimators used to calculate a wide range of static and dynamical properties and of state-of-the-art techniques used to increase the computational efficiency of path integral simulations. i-PI has been designed in a highly modular fashion, to ensure that it is as simple as possible to develop and implement new algorithms to keep up with the research frontier, and so that users can take maximum advantage of the numerous electronic structure programs which are freely available without needing to rewrite large amounts of code. Among the functionality of the i-PI code is a novel integrator for constant pressure dynamics, which is used to investigate the properties of liquid water at 750 K and 10 GPa, and efficient estimators for the calculation of single particle momentum distributions, which are used to study the properties of solid and liquid ammonia. These show respectively that i-PI can be used to make predictions about systems which are both difficult to study experimentally and highly non-classical in nature, and that it can illustrate the relative advantages and disadvantages of different theoretical methods and their ability to reproduce experimental data.
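A minimal sketch of one ingredient that path-integral MD codes of this kind evaluate: the harmonic spring energy of the ring polymer of beads. The function below is a generic one-dimensional illustration in reduced units, not i-PI's actual API:

```python
import numpy as np

def ring_polymer_spring_energy(q, m, beta, hbar=1.0):
    """Spring energy of a P-bead cyclic path (ring polymer):
    E = sum_k 0.5 * m * omega_P^2 * (q_{k+1} - q_k)^2, omega_P = P / (beta*hbar)."""
    P = len(q)
    omega_P = P / (beta * hbar)
    dq = np.roll(q, -1) - q                 # cyclic differences q_{k+1} - q_k
    return 0.5 * m * omega_P**2 * np.sum(dq**2)

# Collapsed polymer (classical limit): all beads coincide, so the spring term vanishes
e_collapsed = ring_polymer_spring_energy(np.zeros(8) + 1.3, m=1.0, beta=2.0)
e_two_bead = ring_polymer_spring_energy(np.array([0.0, 1.0]), m=1.0, beta=1.0)
```

The cyclic coupling is what makes nuclear quantum effects appear: stiffer springs (low temperature, light particles) let the polymer spread out and sample delocalised configurations.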
APA, Harvard, Vancouver, ISO, and other styles
20

Ceh, Ondřej. "Průmyslová hala s administrativním objektem." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2019. http://www.nusl.cz/ntk/nusl-391917.

Full text
Abstract:
This thesis contains the design of a steel structure for a two-bay industrial hall with an administrative facility in the city of Brno. Three variants of the industrial steel hall with a duopitch roof and overall dimensions of 60 x 74 m are presented, together with a detailed structural analysis of the first variant. The height of the hall is 10.5 m. The main structure of the roof is designed as frames of steel profiles with purlins of cold-rolled sections; the span of the frames is 30 m. The material of the main frames is steel S355 and that of the purlins S450. The hall is equipped with an overhead crane with a loading capacity of 5 tonnes. Bolted connections are designed for maximum stiffness and feasibility of the structure. The administrative facility is designed as an independent building with 3 floors; the floor plan is semicircular with a diameter of 48 m.
APA, Harvard, Vancouver, ISO, and other styles
21

Hållen, Nicklas. "Travelling objects : modernity and materiality in British Colonial travel literature about Africa." Doctoral thesis, Umeå universitet, Institutionen för språkstudier, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-46365.

Full text
Abstract:
This study examines the functions of objects in a selection of British colonial travel accounts about Africa. The works discussed were published between 1863 and 1908 and include travelogues by John Hanning Speke, Verney Lovett Cameron, Henry Morton Stanley, Mary Henrietta Kingsley, Ewart Scott Grogan, Mary Hall and Constance Larymore. The author argues that objects are deeply involved in the construction of pre-modern and modern spheres that the travelling subject moves between. The objects in the travel accounts are studied in relation to a contextual background of Victorian commodity and object culture, epitomised by the 1851 Great Exhibition and the birth of the modern anthropological museum. The four analysis chapters investigate the roles of objects in ethnographical and geographical writing, in ideological discussions about the transformative powers of colonial trade, and in narratives about the arrival of the book in the colonial periphery. As the analysis shows, however, objects tend not to behave as they are expected to do. Instead of marking temporal differences, descriptions of objects are typically unstable and riddled with contradictions and foreground the ambivalence that characterises colonial literature.
APA, Harvard, Vancouver, ISO, and other styles
22

Remmert, Sarah M. "Reduced dimensionality quantum dynamics of chemical reactions." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:7f96405f-105c-4ca3-9b8a-06f77d84606a.

Full text
Abstract:
In this thesis a reduced dimensionality quantum scattering model is applied to the study of polyatomic reactions of type X + CH4 <--> XH + CH3. Two dimensional quantum scattering of the symmetric hydrogen exchange reaction CH3+CH4 <--> CH4+CH3 is performed on an 18-parameter double-Morse analytical function derived from ab initio calculations at the CCSD(T)/cc-pVTZ//MP2/cc-pVTZ level of theory. Spectator mode motion is approximately treated via inclusion of curvilinear or rectilinear projected zero-point energies in the potential surface. The close-coupled equations are solved using R-matrix propagation. The state-to-state probabilities and integral and differential cross sections show the reaction to be primarily vibrationally adiabatic and backwards scattered. Quantum properties such as heavy-light-heavy oscillating reactivity and resonance features significantly influence the reaction dynamics. Deuterium substitution at the primary site is the dominant kinetic isotope effect. Thermal rate constants are in excellent agreement with experiment. The method is also applied to the study of electronically nonadiabatic transitions in the CH3 + HCl <--> CH4 + Cl(2PJ) reaction. Electrovibrational basis sets are used to construct the close-coupled equations, which are solved via R-matrix propagation using a system of three potential energy surfaces coupled by spin-orbit interaction. Ground and excited electronic surfaces are developed using a 29-parameter double-Morse function with ab initio data at the CCSD(T)/cc-pV(Q+d)Z-dk//MP2/cc-pV(T+d)Z-dk level of theory, and with basis set extrapolated data, both corrected via curvilinear projected spectator zero-point energies. Coupling surfaces are developed by fitting MCSCF/cc-pV(T+d)Z-dk ab initio spin-orbit constants to 8-parameter functions. 
Scattering calculations are performed for the ground adiabatic and coupled surface models, and reaction probabilities, thermal rate constants and integral and differential cross sections are presented. Thermal rate constants on the basis set extrapolated surface are in excellent agreement with experiment. Characterisation of electronically nonadiabatic nonreactive and reactive transitions indicate the close correlation between vibrational excitation and nonadiabatic transition. A model for comparing the nonadiabatic cross section branching ratio to experiment is discussed.
APA, Harvard, Vancouver, ISO, and other styles
23

Shaw, Robert Laurence John. "The Celestine monks of France, c. 1350-1450 : monastic reform in an age of Schism, councils and war." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:d1669ab4-1650-4396-b856-5e1fe53b5b7f.

Full text
Abstract:
This thesis focuses on the Celestine monks of France, a largely neglected and distinctive reformed Benedictine congregation, at their apex of growth (c.1350-1450). Based largely within the kingdom of France, but also including key houses in the contiguous territories of Lorraine and the Comtat, they expanded significantly in this period, from four monasteries to seventeen within a hundred years. They also gained independence from the mother congregation in Italy with the coming of the Great Western Schism (1376-1418). The study aims to view the French Celestines against the backdrop of a vibrant culture of 'reform' within both the monastic estate (the Observants) and the Church as a whole, as well as the political instability and war in France. It will reveal a congregation alive with the passions of their times and relevant within them. Following an introductory section, chapter 1 will discuss the previously unstudied Vita of the leading French Celestine Jean Bassand (d.1445) in depth and introduce the key themes of the subsequent chapters. Chapter 2 will examine their Constitutions, in the process providing perspective on their hyper-scrupulous understanding of sin and the relation of their statutes to the Christian idea of 'reform'. Chapter 3 will look at anecdotal evidence concerning the quality of their observance in practice, as well as the spiritual and moral writings of Pierre Pocquet (d.1408), another important Celestine leader. Chapter 4 will begin to establish how and why the order grew, examining records of benefaction (contemporary martyrologies and charters) and assessing the financial (and, in the end, moral) difficulties brought by war through the documents concerning the reductions of founded masses at the Paris and Sens houses. Chapter 5 will look at monumental and anecdotal/literary evidence, as well as the works of Jean Gerson, a friend of the order, to further define the cultural impact of the monks.
APA, Harvard, Vancouver, ISO, and other styles
24

Haen, Roel. "Breast cancer related lymphedema." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:c4a83ffc-790f-46ec-bde2-3ac639dd7c89.

Full text
Abstract:
Improvements in the treatment of breast cancer have resulted in better survival rates and less breast cancer related morbidity. Nevertheless, a significant group of patients still experience a diminished quality of life as a result of lymphedema. In the early, often reversible, stage of lymphedema patients can experience subjective changes in the affected area. However, with the traditionally available tools the lymphedema often remains clinically undetectable and patients are denied essential care that can prevent worsening. Furthermore, most lymphedema assessment tools fail to support a clear, unambiguous definition of lymphedema. This underlines the need for a sensitive objective measurement method that can assess lymphedema at a subclinical stage. In this study we demonstrated that measuring the tissue dielectric constant (TDC) using the MoistureMeter-D is an effective method to detect tissue water changes and could potentially provide a cost-effective, adequate tool to measure the early onset of breast cancer related lymphedema (BCRL). Secondarily, we established the correlation between the novel TDC method and the frequently used arm volume measurements and self-assessment questionnaires. A group of 20 female patients with clinical BCRL were included. TDC measurements in both arms and all quadrants of both breasts were recorded along with volumetric measurements of both arms. All patients were asked to complete a self-report questionnaire. The novel TDC method detected significantly higher tissue water levels in the affected arm and breast compared to the control side. The TDC ratio between the control and affected sides showed significant correlation with self-reported pain and discomfort in both arm and breast. In the arm, the TDC method also showed correlation with the volume measurement method. The TDC value of the arm was correlated with age, but not with BMI. 
This study demonstrates that measuring TDC using the MoistureMeter-D is an effective method for quantifying lymphedema in the arm and breast and is an important tool for detecting early tissue water content (TWC) changes.
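The inter-side TDC comparison used in this study reduces to a simple ratio. The readings and the 1.2 decision threshold below are illustrative assumptions, not values from the thesis:

```python
def tdc_ratio(affected, control):
    """Inter-side tissue dielectric constant ratio; values above 1 indicate
    elevated tissue water on the affected side relative to the control side."""
    return affected / control

# Illustrative forearm TDC readings (hypothetical, not study data)
ratio = tdc_ratio(affected=38.5, control=27.5)
flag = ratio >= 1.2   # commonly cited screening threshold (assumption)
```

Expressing the measurement as a within-patient ratio removes much of the inter-patient variability that absolute TDC values carry.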
APA, Harvard, Vancouver, ISO, and other styles
25

Heng, Jeremy. "On the use of transport and optimal control methods for Monte Carlo simulation." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:6cbc7690-ac54-4a6a-b235-57fa62e5b2fc.

Full text
Abstract:
This thesis explores ideas from transport theory and optimal control to develop novel Monte Carlo methods for efficient statistical computation. The first project considers the problem of constructing a transport map between two given probability measures. In the Bayesian formalism, this approach is natural when one introduces a curve of probability measures connecting the prior to the posterior by tempering the likelihood function. The main idea is to move samples from the prior using an ordinary differential equation (ODE), constructed by solving the Liouville partial differential equation (PDE) which governs the time evolution of measures along the curve. In this work, we first study the regularity conditions that solutions of the Liouville equation should satisfy to guarantee the validity of this construction. We place an emphasis on understanding these issues, as this explains the difficulties associated with solutions that have been previously reported. After ensuring that the flow transport problem is well-defined, we give a constructive solution. However, this result is only formal, as the representation is given in terms of intractable integrals. For computational tractability, we propose a novel approximation of the PDE which yields an ODE whose drift depends on the full conditional distributions of the intermediate distributions. Even when the ODE is time-discretized and the full conditional distributions are approximated numerically, the resulting distribution of mapped samples can be evaluated and used as a proposal within Markov chain Monte Carlo and sequential Monte Carlo (SMC) schemes. We then illustrate experimentally that the resulting algorithm can outperform state-of-the-art SMC methods at a fixed computational complexity. The second project aims to exploit ideas from optimal control to design more efficient SMC methods. 
The key idea is to control the proposal distribution induced by a time-discretized Langevin dynamics so as to minimize the Kullback-Leibler divergence of the extended target distribution from the proposal. The optimal value functions of the resulting optimal control problem can then be approximated using algorithms developed in the approximate dynamic programming (ADP) literature. We introduce a novel iterative scheme to perform ADP, provide a theoretical analysis of the proposed algorithm and demonstrate that the latter can provide significant gains over state-of-the-art methods at a fixed computational complexity.
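A minimal sketch of an SMC sampler with likelihood tempering, the kind of prior-to-posterior curve described above, on a conjugate Gaussian toy problem where the exact posterior is N(0.5, 0.5). All settings (schedule, move kernel, particle count) are hypothetical illustrations, unrelated to the thesis's experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
y = 1.0                                    # single observation
log_prior = lambda th: -0.5 * th**2        # N(0, 1), up to a constant
log_lik = lambda th: -0.5 * (y - th)**2    # N(y | th, 1), up to a constant

N = 20000
theta = rng.standard_normal(N)             # particles drawn from the prior
logw = np.zeros(N)
lambdas = np.linspace(0.0, 1.0, 11)        # tempering schedule lambda_0..lambda_T

for lam_prev, lam in zip(lambdas[:-1], lambdas[1:]):
    logw += (lam - lam_prev) * log_lik(theta)        # incremental importance weights
    w = np.exp(logw - logw.max()); w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                 # multinomial resampling
    theta = theta[idx]; logw = np.zeros(N)
    prop = theta + 0.5 * rng.standard_normal(N)      # random-walk MH move at temp lam
    lp_new = log_prior(prop) + lam * log_lik(prop)
    lp_old = log_prior(theta) + lam * log_lik(theta)
    accept = np.log(rng.uniform(size=N)) < lp_new - lp_old
    theta = np.where(accept, prop, theta)

post_mean = theta.mean()                   # exact posterior is N(0.5, 0.5)
```

The MH move after each resampling step restores particle diversity; without it the scheme degenerates to reweighting a fixed set of prior draws.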
APA, Harvard, Vancouver, ISO, and other styles
26

Hanning-Lee, Mark Adrian. "A study of atom and radical kinetics." Thesis, University of Oxford, 1990. http://ora.ox.ac.uk/objects/uuid:89cabe5d-7cc8-43b3-8c2a-686563ff1b3f.

Full text
Abstract:
This thesis describes the measurement of rate constants for gas phase reactions as a function of temperature (285 ≤ T/K ≤ 850) and pressure (48 ≤ P/Torr ≤ 700). One or both reactants was monitored directly in real time, using time-resolved resonance fluorescence (for atoms) and u.v. absorption (for radicals). Reactants were produced by exciplex laser flash photolysis. The technique was used to measure rate constants to high precision for the following reactions under the stated conditions: • H + O2 + He → HO2 + He and H + O2 → OH + O, for 800 ≤ T/K ≤ 850 and 100 ≤ P/Torr ≤ 259. A time-resolved study was performed at conditions close to criticality in the H2–O2 system. The competition between the two reactions affected the behaviour of the system after photolysis, and the rate constants were inferred from this behaviour. • H + C2H4 + He ⇌ C2H5 + He (T = 800 K, 97 ≤ P/Torr ≤ 600). The reactions were well into the fall-off region at all conditions studied. At 800 K, the system was studied under equilibrating conditions. The study provided values of the forward and reverse rate constants at high temperatures and enabled a test of a new theory of reversible unimolecular reactions. The controversial standard enthalpy of formation of ethyl, ΔH°f,298(C2H5), was determined to be 120.2 ± 0.8 kJ mol⁻¹. Master Equation calculations showed that reversible and irreversible treatments of an equilibrating system should yield the same value for both thermal rate constants. • H + C3H5 + He → C3H6 + He (T = 291 K, 98 ≤ P/Torr ≤ 600) and O + C3H5 → products (286 ≤ T/K ≤ 500, 48 ≤ P/Torr ≤ 348). Both reactions were pressure-independent, and the latter was also independent of temperature with a value of (2.0 ± 0.2) × 10⁻¹⁰ cm³ molecule⁻¹ s⁻¹. • H + C2H2 + He ⇌ C2H3 + He (298 ≤ T/K ≤ 845, 50 ≤ P/Torr ≤ 600). At 845 K, both reactions were in the fall-off region; rate constants were used to determine the standard enthalpy of formation of vinyl, ΔH°f,298(C2H3), as 293 ± 7 kJ mol⁻¹. 
The value of this quantity has until recently been very controversial. • H + CH4 ⇌ CH3 + H2. The standard enthalpy of formation of methyl, ΔH°f,298(CH3), was determined by re-analysing existing kinetic data at T = 825 K and 875 K. A value of 144.7 ± 1.1 kJ mol⁻¹ was determined. Preliminary models were examined to describe the loss of reactants from the observation region by diffusion and pump-out. Such models, including diffusion and drift, should prove useful in describing the loss of reactive species in many slow-flow systems, enabling more accurate rate constants to be determined.
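To illustrate how equilibrating conditions yield thermochemistry, the sketch below performs a van 't Hoff fit, recovering an enthalpy change from equilibrium constants K = k_forward / k_reverse at several temperatures. The numbers are synthetic and illustrative, not the thesis's data:

```python
import numpy as np

R = 8.314                                   # gas constant, J mol^-1 K^-1
dH_true, dS_true = -150e3, -100.0           # illustrative dH (J/mol), dS (J/(mol K))

T = np.linspace(700.0, 900.0, 9)            # temperatures, K
K = np.exp(-dH_true / (R * T) + dS_true / R)  # K = k_forward / k_reverse

# van 't Hoff: ln K = (-dH/R) * (1/T) + dS/R, i.e. linear in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
dH_fit = -slope * R
dS_fit = intercept * R
```

With measured forward and reverse rate constants in place of the synthetic K values, the same straight-line fit gives the reaction enthalpy, from which an enthalpy of formation follows once the other species' values are known.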
APA, Harvard, Vancouver, ISO, and other styles
27

Meng, Yao. "Hydrogen electrochemistry in room temperature ionic liquids." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:be24c6ea-c351-4855-ad9c-98e747ac87e4.

Full text
Abstract:
This thesis primarily focuses on the electrochemical properties of the H2/H+ redox couple at various metallic electrodes in room temperature ionic liquids. Initially, a comprehensive overview of room temperature ionic liquids (RTILs) compared with conventional organic solvents is presented, identifying their favourable properties and applications, followed by a second chapter describing the basic theory of electrochemistry. A third chapter presents the general experimental reagents, instruments and measurements used in this thesis. The results presented in this thesis are summarized in six further chapters, as follows. (1) Hydrogenolysis at hydrogen-loaded palladium electrodes prepared by electrolysis of H[NTf2] in the RTIL [C2mim][NTf2]. (2) Palladium nanoparticle-modified carbon nanotubes for electrochemical hydrogenolysis in RTILs. (3) Electrochemistry of hydrogen in the RTIL [C2mim][NTf2]: dissolved hydrogen lubricates diffusional transport. (4) The hydrogen evolution reaction in a room temperature ionic liquid: mechanism and electrocatalyst trends. (5) The formal potentials and electrode kinetics of the proton/hydrogen couple in various room temperature ionic liquids. (6) The electroreduction of benzoic acid: voltammetric observation of adsorbed hydrogen at a platinum microelectrode in room temperature ionic liquids. The first two studies show that electrochemically formed adsorbed H atoms at a metallic Pt or Pd surface can be used for clean, efficient, safe electrochemical hydrogenolysis of organic compounds in RTIL media. The next study shows the physicochemical changes in RTIL properties arising from dissolved hydrogen gas. The last three studies examined the electrochemical properties of the H2/H+ redox couple at various metallic electrodes over a range of RTILs versus a stable Ag/Ag+ reference couple, using H[NTf2] and benzoic acid as proton sources. 
The kinetic and thermodynamic mechanisms of some reactions or processes are the same in RTILs as in conventional organic or aqueous solvents, but other, remarkably different behaviours are also observed. Most importantly, significant contrasts are seen between platinum, gold and molybdenum electrodes in terms of the mechanism of proton reduction to form hydrogen.
APA, Harvard, Vancouver, ISO, and other styles
28

Babichev, Dmitry. "On efficient methods for high-dimensional statistical estimation." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE032.

Full text
Abstract:
In this thesis, we examine several aspects of parameter estimation for statistics and machine-learning techniques, as well as the optimization methods applicable to these problems. The goal of parameter estimation is to find the unknown hidden parameters that govern the data, for example parameters whose probability density is unknown. Constructing estimators through optimization problems is only one part of the problem; finding the optimal value of the parameter is often itself an optimization problem that must be solved using various techniques. These optimization problems are often convex for a wide class of problems, and we can exploit their structure to obtain fast convergence rates. The first main contribution of the thesis is to develop moment-matching techniques for multi-index non-linear regression problems. We consider the classical non-linear regression problem, which is infeasible in high dimensions because of the curse of dimensionality. We combine two existing techniques, ADE and SIR, to develop a hybrid method without some of the weaknesses of its parents. In the second main contribution, we use a particular type of averaging for stochastic gradient descent. We consider conditional exponential families (such as logistic regression), where the goal is to find the unknown value of the parameter. We propose averaging the moment parameters, which we call prediction functions. For finite-dimensional models, this type of averaging can lead to a negative error term, i.e., this approach provides us with an estimator better than any linear estimator can ever achieve.
The third main contribution of this thesis concerns Fenchel-Young losses. We consider multi-class linear classifiers with losses of a certain type, such that their dual conjugate has a direct product of simplices as its support. The corresponding convex-concave saddle-point formulation has a special form with a bilinear matrix term, and classical approaches suffer from time-consuming matrix multiplications. We show that, for multi-class SVM losses with efficient sampling techniques, our approach has sublinear iteration complexity, i.e., we need to pay only three times O(n+d+k): for the number of classes k, the number of features d and the number of samples n, whereas all existing techniques are more complex.
In this thesis we consider several aspects of parameter estimation for statistics and machine learning, and the optimization techniques applicable to these problems. The goal of parameter estimation is to find the unknown hidden parameters which govern the data, for example parameters of an unknown probability density. The construction of estimators through optimization problems is only one side of the coin; finding the optimal value of the parameter is often itself an optimization problem that needs to be solved, using various optimization techniques. Fortunately, these optimization problems are convex for a wide class of problems, and we can exploit their structure to get fast convergence rates. The first main contribution of the thesis is to develop moment-matching techniques for multi-index non-linear regression problems. We consider the classical non-linear regression problem, which is infeasible in high dimensions due to the curse of dimensionality. We combine two existing techniques, ADE and SIR, to develop a hybrid method without some of the weak sides of its parents. In the second main contribution we use a special type of averaging for stochastic gradient descent. We consider conditional exponential families (such as logistic regression), where the goal is to find the unknown value of the parameter. Classical approaches, such as SGD with a constant step size, are known to converge only to some neighborhood of the optimal value of the parameter, even with averaging. We propose averaging the moment parameters, which we call prediction functions. For finite-dimensional models this type of averaging can lead to a negative error term, i.e., this approach provides us with an estimator better than any linear estimator can ever achieve. The third main contribution of this thesis deals with Fenchel-Young losses. We consider multi-class linear classifiers with losses of a certain type, such that their dual conjugate has a direct product of simplices as its support.
We show that, for multi-class SVM losses with smart matrix-multiplication sampling techniques, our approach has sublinear iteration complexity, i.e., we need to pay only three times O(n+d+k): for the number of classes k, the number of features d and the number of samples n, whereas all existing techniques have higher complexity.
APA, Harvard, Vancouver, ISO, and other styles
29

Abdennur, Nezar A. "A Framework for Individual-based Simulation of Heterogeneous Cell Populations." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20478.

Full text
Abstract:
An object-oriented framework is presented for developing and simulating individual-based models of cell populations. The framework supplies classes to define objects called simulation channels that encapsulate the algorithms that make up a simulation model. These may govern state-updating events at the individual level, perform global state changes, or trigger cell division. Simulation engines control the scheduling and execution of collections of simulation channels, while a simulation manager coordinates the engines according to one of two scheduling protocols. When the ensemble of cells being simulated reaches a specified maximum size, a procedure is introduced whereby random cells are ejected from the simulation and replaced by newborn cells to keep the sample population size constant but representative in composition. The framework permits recording of population snapshot data and/or cell lineage histories. Use of the framework is demonstrated through validation benchmarks and two case studies based on experiments from the literature.
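The constant-size sampling procedure described above (ejecting random cells when the ensemble exceeds its maximum size, so the sample stays representative in composition) can be sketched as follows. The class and function names here are hypothetical illustrations, not identifiers from the framework itself:

```python
import random

class Cell:
    """Minimal individual with an arbitrary state dictionary (a hypothetical
    stand-in for the framework's cell objects)."""
    def __init__(self, state):
        self.state = state

def divide(parent):
    # Newborns inherit a copy of the parent state (an assumption for illustration).
    return Cell(dict(parent.state)), Cell(dict(parent.state))

def record_division(population, parent, max_size, rng=random):
    """On division, replace the parent with its two daughters; if the ensemble
    would exceed max_size, eject randomly chosen cells so the sample keeps a
    fixed size while remaining representative in composition."""
    population.remove(parent)
    population.extend(divide(parent))
    while len(population) > max_size:
        population.pop(rng.randrange(len(population)))
    return population

pop = [Cell({"age": 0}) for _ in range(100)]
record_division(pop, pop[0], max_size=100)
print(len(pop))  # constant sample size: 100
```

The key design point mirrored here is that a division event never grows the tracked sample beyond its cap; instead, a random resident is sacrificed, which keeps per-step cost bounded.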
APA, Harvard, Vancouver, ISO, and other styles
30

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly, since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. 
For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
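The core idea of validating a causal tracker against non-causal estimates can be illustrated with a minimal Kalman filter followed by a Rauch-Tung-Striebel smoothing pass on a toy 1D constant-velocity target. The models, noise levels and variable names below are illustrative assumptions, not the thesis's actual sensor setup or motion models:

```python
import numpy as np

# 1D constant-velocity target; state = [position, velocity].
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # dynamics over one time step
H = np.array([[1.0, 0.0]])               # position-only measurement
Q = 0.01 * np.eye(2)                     # process noise covariance (assumed)
R = np.array([[1.0]])                    # measurement noise covariance (assumed)

def kalman_filter(zs, x0, P0):
    """Causal (online) filter: each estimate uses only past and present data."""
    xs, Ps, xps, Pps = [], [], [], []
    x, P = x0, P0
    for z in zs:
        xp, Pp = F @ x, F @ P @ F.T + Q                      # predict
        K = Pp @ H.T @ np.linalg.inv(H @ Pp @ H.T + R)       # Kalman gain
        x = xp + K @ (z - H @ xp)                            # update
        P = (np.eye(2) - K @ H) @ Pp
        xs.append(x); Ps.append(P); xps.append(xp); Pps.append(Pp)
    return xs, Ps, xps, Pps

def rts_smoother(xs, Ps, xps, Pps):
    """Non-causal backward pass: each estimate also uses future measurements,
    which is what makes recorded sequences usable as reference tracks."""
    n = len(xs)
    xs_s, Ps_s = [None] * n, [None] * n
    xs_s[-1], Ps_s[-1] = xs[-1], Ps[-1]
    for k in range(n - 2, -1, -1):
        C = Ps[k] @ F.T @ np.linalg.inv(Pps[k + 1])
        xs_s[k] = xs[k] + C @ (xs_s[k + 1] - xps[k + 1])
        Ps_s[k] = Ps[k] + C @ (Ps_s[k + 1] - Pps[k + 1]) @ C.T
    return xs_s, Ps_s

rng = np.random.default_rng(0)
truth = np.array([[t, 1.0] for t in range(50)], dtype=float)
zs = [np.array([p + rng.normal(0, 1.0)]) for p, _ in truth]
xs, Ps, xps, Pps = kalman_filter(zs, np.zeros(2), 10.0 * np.eye(2))
xs_s, _ = rts_smoother(xs, Ps, xps, Pps)
filt_err = np.mean([(x[0] - t[0]) ** 2 for x, t in zip(xs, truth)])
smooth_err = np.mean([(x[0] - t[0]) ** 2 for x, t in zip(xs_s, truth)])
print(smooth_err < filt_err)  # smoothing typically reduces the estimation error
```

The smoother corrects the filter's initial transient (before the velocity has been learned) using later measurements, which reflects the abstract's observation that reversed sequences let estimates propagate from more certain states.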
APA, Harvard, Vancouver, ISO, and other styles
31

Miller, Erik G., Kinh Tieu, and Chris P. Stauffer. "Learning Object-Independent Modes of Variation with Feature Flow Fields." 2001. http://hdl.handle.net/1721.1/6659.

Full text
Abstract:
We present a unifying framework in which "object-independent" modes of variation are learned from continuous-time data such as video sequences. These modes of variation can be used as "generators" to produce a manifold of images of a new object from a single example of that object. We develop the framework in the context of a well-known example: analyzing the modes of spatial deformations of a scene under camera movement. Our method learns a close approximation to the standard affine deformations that are expected from the geometry of the situation, and does so in a completely unsupervised (i.e. ignorant of the geometry of the situation) fashion. We stress that it is learning a "parameterization", not just the parameter values, of the data. We then demonstrate how we have used the same framework to derive a novel data-driven model of joint color change in images due to common lighting variations. The model is superior to previous models of color change in describing non-linear color changes due to lighting.
APA, Harvard, Vancouver, ISO, and other styles
32

Hedrich, M., M. Bloj, and A. I. Ruppertsberg. "Color constancy improves for real 3D objects." 2009. http://hdl.handle.net/10454/6011.

Full text
Abstract:
In this study human color constancy was tested for two-dimensional (2D) and three-dimensional (3D) setups with real objects and lights. Four different illuminant changes, a natural selection task and a wide choice of target colors were used. We found that color constancy was better when the target color was learned as a 3D object in a cue-rich 3D scene than in a 2D setup. This improvement was independent of the target color and the illuminant change. We were not able to find any evidence that frequently experienced illuminant changes are better compensated for than unusual ones. Normalizing individual color constancy hit rates by the corresponding color memory hit rates yields a color constancy index, which is indicative of observers' true ability to compensate for illuminant changes.
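The normalization step described in the last sentence amounts to a simple ratio of hit rates. A minimal sketch, with a hypothetical function name and inputs (not from the paper):

```python
def color_constancy_index(constancy_hits, memory_hits, trials):
    """Normalise the colour constancy hit rate by the colour memory hit rate,
    as described in the abstract: an index of 1 means the observer compensates
    for the illuminant change as well as their memory for the target colour
    allows. Name and call signature are illustrative only."""
    constancy_rate = constancy_hits / trials
    memory_rate = memory_hits / trials
    return constancy_rate / memory_rate

print(color_constancy_index(constancy_hits=12, memory_hits=16, trials=20))  # ≈ 0.75
```

Dividing out the memory hit rate removes the ceiling imposed by imperfect colour memory, so the index reflects compensation for the illuminant change alone.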
APA, Harvard, Vancouver, ISO, and other styles
33

Hedrich, Monika, Marina Bloj, and Alexa I. Ruppertsberg. "Color constancy improves for real 3D objects." 2009. http://hdl.handle.net/10454/4722.

Full text
Abstract:
In this study human color constancy was tested for two-dimensional (2D) and three-dimensional (3D) setups with real objects and lights. Four different illuminant changes, a natural selection task and a wide choice of target colors were used. We found that color constancy was better when the target color was learned as a 3D object in a cue-rich 3D scene than in a 2D setup. This improvement was independent of the target color and the illuminant change. We were not able to find any evidence that frequently experienced illuminant changes are better compensated for than unusual ones. Normalizing individual color constancy hit rates by the corresponding color memory hit rates yields a color constancy index, which is indicative of observers' true ability to compensate for illuminant changes.
APA, Harvard, Vancouver, ISO, and other styles
34

Bloj, Marina, D. Brainard, L. Maloney, C. Ripamonti, K. Mitha, R. Hauck, and S. Greenwald. "Measurements of the effect of surface slant on perceived lightness." 2009. http://hdl.handle.net/10454/2723.

Full text
Abstract:
When a planar object is rotated with respect to a directional light source, the reflected luminance changes. If surface lightness is to be a reliable guide to surface identity, observers must compensate for such changes. To the extent they do, observers are said to be lightness constant. We report data from a lightness matching task that assesses lightness constancy with respect to changes in object slant. On each trial, observers viewed an achromatic standard object and indicated the best match from a palette of 36 grayscale samples. The standard object and the palette were visible simultaneously within an experimental chamber. The chamber illumination was provided from above by a theater stage lamp. The standard objects were uniformly-painted flat cards. Different groups of naïve observers made matches under two sets of instructions. In the Neutral Instructions, observers were asked to match the appearance of the standard and palette sample. In the Paint Instructions, observers were asked to choose the palette sample that was painted the same as the standard. Several broad conclusions may be drawn from the results. First, data for most observers were neither luminance matches nor lightness constant matches. Second, there were large and reliable individual differences. To characterize these, a constancy index was obtained for each observer by comparing how well the data were accounted for by both luminance matching and lightness constancy. The index could take on values between 0 (luminance matching) and 1 (lightness constancy). Individual observer indices ranged between 0.17 and 0.63 with mean 0.40 and median 0.40. An auxiliary slant-matching experiment rules out variation in perceived slant as the source of the individual variability. Third, the effect of instructions was small compared to the inter-observer variability. Implications of the data for models of lightness perception are discussed.
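One simple way to realise an index that interpolates between luminance matching (0) and lightness constancy (1) is to compare how well each model accounts for the observer's matches. The formulation below is an illustrative assumption, not the paper's exact analysis:

```python
import numpy as np

def constancy_index(matches, luminance_pred, constancy_pred):
    """Compare the mean squared deviation of an observer's matches from the
    predictions of pure luminance matching versus pure lightness constancy.
    Returns 0 for a perfect luminance matcher, 1 for a perfectly constant
    observer. This is a sketch, not the authors' model-fitting procedure."""
    e_lum = np.mean((matches - luminance_pred) ** 2)
    e_con = np.mean((matches - constancy_pred) ** 2)
    return e_lum / (e_lum + e_con)

# A purely luminance-matching observer gets an index of 0:
lum = np.array([0.2, 0.5, 0.9])   # hypothetical luminance-match predictions
con = np.array([0.4, 0.4, 0.4])   # hypothetical constancy predictions
print(constancy_index(lum, lum, con))  # 0.0
```

Intermediate values, such as the 0.17 to 0.63 range reported above, then indicate partial compensation for the change in slant.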
APA, Harvard, Vancouver, ISO, and other styles
35

Hesse, Constanze [Verfasser]. "The use of visual information when grasping objects / vorgelegt von Constanze Hesse." 2008. http://d-nb.info/989391019/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles