Dissertations / Theses on the topic 'Time-map'

To see the other types of publications on this topic, follow the link: Time-map.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Time-map.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Liu, Yang. "Temporal Saliency Map of Real-time Video Sequence." Thesis, Imperial College London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.498564.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ezequiel, Carlos Favis. "Real-Time Map Manipulation for Mobile Robot Navigation." Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4481.

Full text
Abstract:
Mobile robots are gaining increased autonomy due to advances in sensor and computing technology. In their current form, however, robots still lack algorithms for rapid perception of objects in a cluttered environment and can benefit from the assistance of a human operator. Further, fully autonomous systems will continue to be computationally expensive and costly for quite some time. Humans can visually assess objects and determine whether a certain path is traversable, but need not be involved in the low-level steering around detected obstacles that remote-controlled systems require. If only used for rapid perception tasks, the operator could potentially assist several mobile robots performing various tasks such as exploration, surveillance, industrial work, and search and rescue operations. There is a need to develop better human-robot interaction paradigms that allow the human operator to effectively control and manage one or more mobile robots. This paper proposes a method of enhancing user effectiveness in controlling multiple mobile robots through real-time map manipulation. An interface is created that allows a human operator to add virtual obstacles to the map representing areas the robot should avoid. A video camera connected to the robot lets the user view the robot's environment. The combination of real-time map editing and live video streaming enables the robot to take advantage of human vision, which is still more effective at general object identification than current computer vision technology. Experimental results show that the robot plans a faster path around an obstacle when the user marks the obstacle on the map than when it navigates on its own around an unmapped obstacle. Tests conducted on multiple users suggest that the accuracy of placing obstacles on the map decreases as the distance of the viewing apparatus from the obstacle increases. Despite this, the user can take advantage of landmarks found in the video and in the map to determine an obstacle's position on the map.
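The core idea (an operator paints virtual obstacles into the robot's map so that the planner routes around them) can be sketched with a toy occupancy grid and a breadth-first planner. This is not the thesis's actual software; the grid, start/goal cells and obstacle placement are invented for illustration:

```python
from collections import deque

def plan(grid, start, goal):
    """Breadth-first search on a 4-connected occupancy grid; 0 = free, 1 = blocked."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:
            path = []
            while cell is not None:           # walk predecessors back to the start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None

grid = [[0] * 5 for _ in range(5)]
before = plan(grid, (0, 0), (0, 4))
# Operator marks a virtual obstacle spotted in the video feed:
# block column 2 of rows 0-3, forcing a detour through row 4.
for r in range(4):
    grid[r][2] = 1
after = plan(grid, (0, 0), (0, 4))
```

Replanning after the map edit yields a longer but obstacle-free path, mirroring the experiment in which the marked obstacle lets the robot avoid it ahead of time.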
APA, Harvard, Vancouver, ISO, and other styles
3

Martinelli, Earl Nicholas. "A Dynamic Time Course of Cognitive Map Distortion." Thesis, Connect to title online (Scholars' Bank), 2008. http://hdl.handle.net/1794/7892.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Paci, Lucia <1985>. "Bayesian space-time data fusion for real-time forecasting and map uncertainty." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6182/2/Paci_Lucia_tesi.pdf.

Full text
Abstract:
Environmental computer models are deterministic models designed to predict environmental phenomena such as air pollution or meteorological events. Numerical model output is given in terms of averages over grid cells, usually at high spatial and temporal resolution. However, these outputs are often biased, with unknown calibration, and carry no information about the associated uncertainty. Conversely, data collected at monitoring stations are more accurate, since they essentially provide the true levels. Given the leading role played by numerical models, it is now important to compare model output with observations. Statistical methods developed to combine numerical model output and station data are usually referred to as data fusion. In this work, we first combine ozone monitoring data with ozone predictions from the Eta-CMAQ air quality model in order to forecast in real time the current 8-hour average ozone level, defined as the average of the previous four hours, the current hour, and predictions for the next three hours. We propose a Bayesian downscaler model based on first differences, with a flexible coefficient structure and an efficient computational strategy to fit model parameters. Model validation for the eastern United States shows considerable improvement of our fully inferential approach over the current real-time forecasting system. Furthermore, we consider the introduction of temperature data from a weather forecast model into the downscaler, showing improved real-time ozone predictions. Finally, we introduce a hierarchical model to obtain the spatially varying uncertainty associated with numerical model output. We show how we can learn about such uncertainty through suitable stochastic data fusion modeling using external validation data. We illustrate our Bayesian model by providing the uncertainty map associated with a temperature output over the northeastern United States.
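The 8-hour average defined above is plain arithmetic over past observations and forecast hours. A minimal sketch, with an invented function name and illustrative ozone values (ppb):

```python
def eight_hour_ozone(observed, forecast):
    """Current 8-hour average ozone: the previous four hours plus the current
    hour (observed, oldest first) averaged with the next three forecast hours."""
    assert len(observed) == 5 and len(forecast) == 3
    window = list(observed) + list(forecast)
    return sum(window) / 8.0

# Four past hours and the current hour, then three model-predicted hours.
level = eight_hour_ozone([52, 55, 58, 60, 62], [63, 61, 57])
```

The thesis's contribution is, of course, the Bayesian downscaler producing those forecast hours, not this bookkeeping step.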
APA, Harvard, Vancouver, ISO, and other styles
5

Paci, Lucia <1985>. "Bayesian space-time data fusion for real-time forecasting and map uncertainty." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6182/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Svenzén, Niklas. "Real Time Implementation of Map Aided Positioning Using a Bayesian Approach." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1493.

Full text
Abstract:

With the simple means of a digitized map and the wheel speed signals, it is possible to position a vehicle with an accuracy comparable to GPS. The positioning problem is a non-linear filtering problem, and a particle filter has been applied to solve it. Two new approaches studied are the Auxiliary Particle Filter (APF), which aims at lowering the variance of the error, and Rao-Blackwellization, which exploits the linearities in the model. The results show that these methods require problems of higher complexity to fully realize their advantages.

Another aspect of this thesis has been handling off-road driving scenarios using dead reckoning. An off-road detection mechanism has been developed, and the results show that off-road driving can be detected accurately. The algorithm has been successfully implemented on a hand-held computer by quantizing the particle filter while keeping good filter performance.
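A bare-bones illustration of map-aided particle filtering in the spirit described above (not the thesis's implementation: the straight road, noise levels and Gaussian map weighting are all invented for the example):

```python
import math
import random

random.seed(0)

# Toy map-aided filter: the road centreline is the x-axis (y = 0).
# Each particle is an (x, y) position hypothesis; wheel speed drives the
# prediction step, and the digitized map enters through the weight, which
# penalises particles that stray from the road.
N = 500
particles = [(0.0, random.gauss(0.0, 2.0)) for _ in range(N)]

def step(particles, speed, dt, road_sigma=0.5):
    # Predict: dead reckoning from wheel speed plus process noise.
    moved = [(x + speed * dt + random.gauss(0, 0.1), y + random.gauss(0, 0.1))
             for x, y in particles]
    # Weight: Gaussian likelihood of the distance to the road centreline.
    weights = [math.exp(-(y * y) / (2 * road_sigma ** 2)) for _, y in moved]
    # Resample proportionally to the map-matching weights.
    return random.choices(moved, weights=weights, k=len(moved))

for _ in range(20):
    particles = step(particles, speed=10.0, dt=0.1)

# Point estimate: the particle mean collapses onto the road.
x_est = sum(x for x, _ in particles) / N
y_est = sum(y for _, y in particles) / N
```

The APF and Rao-Blackwellization variants studied in the thesis refine exactly these predict/weight/resample steps.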

APA, Harvard, Vancouver, ISO, and other styles
7

Myers, Vanessa. "Evaluation of Real-Time Weather Map Discussions in the Middle School Classroom." Kent State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=kent1240258414.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Carson-Berndsen, Julie. "Time map phonology: finite state models and event logics in speech recognition." Dordrecht; Boston; London: Kluwer Academic Publishers, 1998. http://catalogue.bnf.fr/ark:/12148/cb37533760b.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Padmanabhan, Vijaybalaji. "Developing an operational procedure to produce digitized route maps using GPS vehicle location data." Thesis, Virginia Tech, 2000. http://hdl.handle.net/10919/32202.

Full text
Abstract:
Advancements in Global Positioning System (GPS) technology now make GPS data collection for transportation studies and other transportation applications a reality. A base map for such an application can be obtained by importing a road centerline map into GIS software such as AutoCAD Map, Arc/Info, or Mapix. However, road centerline maps are not available for all places, so it may be necessary to collect the data using GPS units. This thesis details the use of GPS technology to produce route maps that can be used to predict the arrival time of a bus. This application is particularly useful in rural areas, since bus headways in rural areas are generally larger than in urban areas. The information is normally communicated through various interfaces, such as the internet or cable TV, based on the GPS bus location data. The objective of this thesis is to develop an operational procedure to obtain a digitized route map of any desired interval or link length and to examine the accuracy of the digitized map. The operational procedure involved data collection, data processing, algorithm development, and coding to produce the digitized route maps. An algorithm was developed to produce the digitized route map from the base map of the route; it was coded in MATLAB and can be used to digitize the base map into any desired interval of distance. An accuracy comparison was made to determine the consistency between the digitized route map and the base map.
Master of Science
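The digitizing step, resampling a base-map polyline at a fixed interval of distance, can be sketched as follows. The thesis coded this in MATLAB; this is an illustrative Python equivalent with invented coordinates:

```python
import math

def resample_route(points, interval):
    """Resample a route polyline at a fixed along-track spacing
    (coordinates and interval in the same planar units)."""
    # Cumulative distance along the base-map polyline.
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    out, target, seg = [points[0]], interval, 0
    while target <= cum[-1]:
        while cum[seg + 1] < target:          # advance to the segment holding target
            seg += 1
        t = (target - cum[seg]) / (cum[seg + 1] - cum[seg])
        (x0, y0), (x1, y1) = points[seg], points[seg + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))  # linear interpolation
        target += interval
    return out

# An L-shaped 150 m route digitized every 25 m.
route = resample_route([(0, 0), (100, 0), (100, 50)], interval=25)
```

Varying `interval` gives the "any desired interval of distance" behaviour the abstract describes; accuracy can then be checked by comparing the resampled points back against the base map.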
APA, Harvard, Vancouver, ISO, and other styles
10

Park, Chanoh. "Multimodal dense map-centric SLAM." Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/209881/1/Chanoh_Park_Thesis.pdf.

Full text
Abstract:
This thesis focuses on the problem of LiDAR-based mapping, where conventional methods have difficulties with long-term operation or sensor integration. A new mapping system framework is proposed to overcome the shortcomings of conventional methods, and its advantages are demonstrated on multiple map datasets collected from various environments. The outcome of the research will be useful for applications where long-term mapping is required, such as security robots, autonomous cars, and service robots.
APA, Harvard, Vancouver, ISO, and other styles
11

Roxanas, Dimitrios. "Long-time dynamics for the energy-critical harmonic map heat flow and nonlinear heat equation." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/61612.

Full text
Abstract:
The main focus of this thesis is on critical parabolic problems, in particular the harmonic map heat flow from the plane to S2, and nonlinear focusing heat equations with an algebraic nonlinearity. The focus of this work has been on long-time dynamics, stability and singularity formation, and the investigation of the role of special, soliton-like solutions in the asymptotic behaviour of solutions. Harmonic map heat flow: we consider m-corotational solutions to the harmonic map heat flow from R2 to S2. We first work in a class of maps with trivial topology and energy of the initial data below twice the energy of the stationary harmonic map solutions. We give a new proof of global existence and decay. The proof is based on the "concentration-compactness plus rigidity" approach of Kenig and Merle and relies on the dissipation of the energy and a profile decomposition. We also treat m-corotational maps (m greater than 3) with non-trivial topology and energy of the initial data less than three times the energy of the stationary harmonic map solutions. Through a new stability argument we rule out finite-time blow-up and show that the global solution asymptotically converges to a harmonic map. Nonlinear heat equation: we also study solutions of the focusing energy-critical nonlinear heat equation. We show that solutions emanating from initial data with energy and kinetic energy below those of the stationary solutions are global and decay to zero. To prove that global solutions dissipate to zero we rely on a refined small-data theory, L2-dissipation and an approximation argument. We then follow the "concentration-compactness plus rigidity" roadmap of Kenig and Merle (in particular the approach taken by Kenig and Koch for Navier-Stokes) to exclude finite-time blow-up.
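For reference, the flow studied here is the L2 gradient flow of the Dirichlet energy; for maps u from R2 to the sphere S2 it is standardly written as:

```latex
\[
  \partial_t u \;=\; \Delta u + |\nabla u|^2\, u,
  \qquad u(\cdot,t)\colon \mathbb{R}^2 \to \mathbb{S}^2 \subset \mathbb{R}^3,
\]
\[
  E(u) \;=\; \frac{1}{2}\int_{\mathbb{R}^2} |\nabla u|^2 \, dx ,
\]
```

with the stationary harmonic maps (critical points of E) playing the role of the energy thresholds mentioned in the abstract.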
Faculty of Science, Department of Mathematics (Graduate)
APA, Harvard, Vancouver, ISO, and other styles
12

Sanaullah, Irum. "Real-time estimation of travel time using low frequency GPS data from moving sensors." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/11938.

Full text
Abstract:
Travel time is one of the most important inputs in many Intelligent Transport Systems (ITS). As a result, this information needs to be accurate and dynamic in both spatial and temporal dimensions. For the estimation of travel time, data from fixed sensors such as Inductive Loop Detectors (ILD) and cameras have been widely used since the 1960s. However, data from fixed sensors may not be sufficiently reliable for estimating travel time, due to a combination of limited coverage and low-quality data resulting from the high cost of implementing and operating these systems. Such issues are particularly critical in less developed countries, where traffic levels and associated problems are increasing even more rapidly than in Europe and North America, and where there are no pre-existing traffic monitoring systems in place. As a consequence, recent developments have focused on utilising moving sensors (i.e. probe vehicles and/or people equipped with GPS, for instance navigation and route guidance devices, mobile phones and smartphones) to provide accurate speed, positioning and timing data for travel time estimation. However, GPS data also have errors, especially positioning fixes in urban areas. Therefore, map-matching techniques are generally applied to match raw positioning data onto the correct road segments so as to reliably estimate link travel time. This is challenging because most current map-matching methods are suitable for high-frequency GPS positioning data (e.g. data at 1-second intervals) and may not be appropriate for low-frequency data (e.g. data at 30- or 60-second intervals). Yet many moving sensors only retain low-frequency data so as to reduce the cost of data storage and transmission. The accuracy of travel time estimation using data from moving sensors also depends on a range of other factors, for instance vehicle fleet sample size (i.e. the proportion of vehicles equipped with GPS), coverage of links (i.e. the proportion of links on which GPS-equipped vehicles travel), GPS data sampling frequency (e.g. 3, 6, 30, 60 seconds) and time window length (e.g. 5, 10 and 15 minutes). Existing methods of estimating travel time from GPS data are not capable of simultaneously taking into account the uncertainties associated with GPS and spatial road network data, low sampling frequency, low-density vehicle coverage on some roads of the network, time window length, and vehicle fleet sample size. Accordingly, this research develops and applies a methodology that uses GPS data to reliably estimate travel time in real time while considering vehicle fleet sample size, data sampling frequency and time window length in the estimation process. Specifically, the purpose of this thesis was first to determine the accurate location of a vehicle travelling on a road link by applying a map-matching algorithm at a range of sampling frequencies, to reduce the potential errors associated with GPS and digital road maps, for example where vehicles are sometimes assigned to the wrong road links. Secondly, four different methods have been developed to estimate link travel time based on map-matched GPS positions and speed data from low-frequency datasets in three time window lengths (i.e. 5, 10 and 15 minutes). These are based on vehicle speeds, speed limits, link distances and average speeds, initially only within the given link but subsequently in the adjacent links too. More specifically, the final method draws on weighted link travel times associated with the given and adjacent links, in both spatial and temporal dimensions, to estimate the link travel time for the given link. GPS data from Interstate I-880 (California, USA) for a total of 73 vehicles over 6 hours were obtained from UC Berkeley's Mobile Century project. The original GPS dataset, broadcast at a 3-second sampling frequency, was extracted at lower sampling frequencies such as 6, 30, 60 and 120 seconds so as to evaluate the performance of each travel time estimation method at low sampling frequencies. The results were then validated against reference travel time data collected from 4,126 vehicles by high-resolution video cameras, and they indicate that factors such as vehicle sample size, data sampling frequency, vehicle coverage on the links and time window length all influence the accuracy of link travel time estimation.
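The within-link estimation idea can be illustrated with a toy function; the thesis's four methods are richer (e.g. weighting travel times from adjacent links in space and time), and all numbers here are invented:

```python
def link_travel_time(link_length_m, speeds_mps, speed_limit_mps):
    """Estimate one link's travel time (seconds) from map-matched probe speeds
    observed in a time window; fall back to the speed limit when no
    GPS-equipped vehicle covered the link in that window."""
    if speeds_mps:
        mean_speed = sum(speeds_mps) / len(speeds_mps)
    else:
        mean_speed = speed_limit_mps
    return link_length_m / mean_speed

# 500 m link, three probes map-matched to it in a 5-minute window
# at 8, 10 and 12 m/s.
t_observed = link_travel_time(500, [8, 10, 12], speed_limit_mps=25)
# Same link with no probe coverage: only the speed limit is available.
t_fallback = link_travel_time(500, [], speed_limit_mps=25)
```

The sensitivity studies in the thesis amount to varying how many probes feed `speeds_mps` (fleet size, sampling frequency) and how wide the time window is.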
APA, Harvard, Vancouver, ISO, and other styles
13

Staedter, David. "Femtosecond time-resolved spectroscopy in polyatomic systems investigated by velocity-map imaging and high-order harmonic generation." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2116/.

Full text
Abstract:
Revealing the underlying ultrafast dynamics in molecular reaction spectroscopy demands state-of-the-art imaging techniques to follow a molecular process step by step. Femtosecond time-resolved velocity-map imaging is used to study the photodissociation dynamics of chlorine azide (ClN3), extracting dissociation times, the time-resolved energy balance of the reaction, and momentum conservation for the co-fragments chlorine and N3. Two excitation energy regions are distinguished, around 4.67 eV and 6.12 eV, leading to the formation of a linear N3 fragment and a cyclic N3 fragment, respectively. This work is the first femtosecond spectroscopy study revealing the formation of cyclic N3. The electronic relaxation of tetrathiafulvalene (TTF, C6H4S4) is studied, while scanning the electronic excitation around 4 eV, by time-resolved mass and photoelectron spectroscopy. As little is known about the ionic continuum of TTF, the imaging photoelectron-photoion coincidence (iPEPICO) technique is used at a synchrotron, detecting threshold photoelectrons in coincidence with ionized fragments, in order to disentangle the complex ionic dissociation and extract the dissociative ionization thresholds. The second part of the thesis is based on the generation and application of XUV light pulses by high-order harmonic generation with an intense femtosecond laser pulse at 800 nm or 400 nm in a molecular target. Two types of phase-sensitive attosecond spectroscopy experiments were conducted to study the vibrational dynamics of SF6: one using strong-field transient grating spectroscopy, in which high-order harmonic generation takes place in a grating of excitation created by two Raman pump pulses, and a second using high-order harmonic interferometry, in which the XUV emission from two intense probe pulses interferes in the far field. The temporal dependencies in phase and amplitude reveal the vibrational dynamics in SF6 and demonstrate that high-order harmonic generation is sensitive to internal excitations. Finally, the use of high-order harmonics as an XUV photon source for the velocity-map imaging spectrometer is investigated. Using time-resolved photoelectron imaging, the relaxation dynamics of Rydberg states initiated at 15.5 eV in argon and 9.3 eV in acetylene are revealed.
APA, Harvard, Vancouver, ISO, and other styles
14

Ouředníková, Lucie. "TIME MANAGEMENT - nástroj nejen pro prokrastinující studenty." Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-359472.

Full text
Abstract:
This master's thesis introduces the term time management and describes the steps for successfully implementing it in everyday life. The aim of the thesis is to describe what students of the University of Economics know about this term and to find out whether studying at this university, and in particular passing the course Management of Personal Development, can expand their knowledge of it. Mind mapping is used to address this aim: the mind maps illustrate students' ideas about the term. The analysis of the mind maps shows that students of this university already know time management very well, and that passing Management of Personal Development can greatly expand this knowledge.
APA, Harvard, Vancouver, ISO, and other styles
15

Balazadegan, Sarvrood Yashar, and Md Nurul Amin. "Server Based Real Time GPS-IMU Integration Aided by Fuzzy Logic Based Map Matching Algorithm for Car Navigation." Thesis, KTH, Geoinformatik och Geodesi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-41263.

Full text
Abstract:
The stand-alone Global Positioning System (GPS) and integrated GPS and dead-reckoning systems (such as an inertial navigation system, or an odometer and magnetometer) have been widely used for vehicle navigation. An essential process in such applications is to map-match the position obtained from GPS (and/or other sensors) onto a digital road network map. GPS positioning is relatively accurate in open-sky conditions, but not in dense urban canyons, where GPS is affected by signal blockage and multipath. High-sensitivity GPS (HS GPS) receivers can increase availability, but are affected by multipath and cross-correlation due to weak signal tracking. An inertial navigation system can be used to bridge GPS gaps; however, position and velocity results in such conditions are typically biased. Fuzzy-logic-based map matching is therefore widely used, because it can take noisy, imprecise input and yield crisp (i.e. numerically accurate) output. Fuzzy logic can be applied effectively to map-match the output of a high-sensitivity GPS receiver, or of integrated GPS and INS, in urban canyons because of its inherent tolerance to imprecise inputs. In this thesis, stand-alone GPS positioning and integrated GPS and Inertial Measurement Unit (IMU) positioning, aided by fuzzy-logic-based map matching, are performed for Stockholm urban and suburban areas, and a comparison is carried out between map matching for stand-alone GPS and for integrated GPS and IMU. The stand-alone GPS-aided map-matching algorithm identifies 96.4% of correct links for the rural area, 92.6% for the urban area (car test) and 93.4% for the bus test in the urban area. The integrated GPS and IMU-aided map-matching algorithm identifies 97.3% of correct links for the rural area, 94.4% for the urban area (car test) and 94.4% for the bus test in the urban area. Integrated GPS and IMU produces a better vehicle azimuth than stand-alone GPS, especially at low speed. Furthermore, the integrated GPS and IMU map-matching algorithm has five additional fuzzy rules based on the gyro rate, and therefore shows better map-matching results. GPS blackouts happen rarely in Stockholm, because the city has few tall buildings; therefore, integrated GPS and IMU aided by map matching shows only a small improvement over stand-alone GPS aided by map matching.
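A minimal flavour of fuzzy-logic map matching: triangular memberships over perpendicular distance and heading difference, blended into a score per candidate link. The thresholds, weights and link names below are invented, not those of the thesis's rule base:

```python
def tri(x, peak, width):
    """Triangular fuzzy membership: 1 at x = peak, falling to 0 at peak +/- width."""
    return max(0.0, 1.0 - abs(x - peak) / width)

def match_score(dist_m, heading_diff_deg):
    """Blend 'close to the link' and 'heading agrees with the link' memberships."""
    near = tri(dist_m, 0.0, 30.0)               # full membership at 0 m, none beyond 30 m
    aligned = tri(heading_diff_deg, 0.0, 90.0)  # none beyond 90 degrees difference
    return 0.5 * near + 0.5 * aligned

# Two candidate links for one GPS fix: nearby and well aligned vs.
# nearby but nearly perpendicular to the direction of travel.
candidates = {
    "main_street": match_score(dist_m=5.0, heading_diff_deg=10.0),
    "side_street": match_score(dist_m=8.0, heading_diff_deg=80.0),
}
best = max(candidates, key=candidates.get)
```

The tolerance to imprecise inputs mentioned in the abstract comes from exactly this graded scoring: a noisy fix still yields a crisp best-link decision. The gyro-rate rules of the GPS/IMU variant would simply add further membership terms to the blend.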
APA, Harvard, Vancouver, ISO, and other styles
16

Paltian, Luciano Pinheiro. "Cartografias do tempo : uma proposta de "mapa do tempo interativo" para o ensino de história." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/172918.

Full text
Abstract:
The timeline appears as a visual metaphor only in the eighteenth century, closely linked to the modern notion of progress, and gains ground in industrial societies until it becomes a kind of portrait of how we conceive of time. Today, timelines appear in newspapers, magazines, exhibitions, websites, social networks and, of course, in the teaching of History. However, the form of time as a line carries the notion of a continuous, inescapable, teleological evolution, the exact opposite of a historical, social time that is multiple and constructed by human societies. Recent historiographical works show a much broader set of possibilities in the so-called cartographies of time, which, drawing on the resources of visual language, result in learning objects that are extremely valuable for the teaching of History. Known as history charts, chronographics, or simply maps of time, they share the characteristic of offering alternatives that lend themselves well to teaching certain basic notions of contemporary historical knowledge, beyond mere chronology. This work presents a study of these materials and, building on these formats and on the forms of measured time, proposes an interactive map of time as a way to achieve significant results in the teaching of History to high-school classes.
APA, Harvard, Vancouver, ISO, and other styles
17

Prelipcean, Adrian Corneliu. "Implementation and evaluation of Space Time Alarm Clock." Thesis, KTH, Geodesi och geoinformatik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157533.

Full text
Abstract:
Many modern mobile communication devices are equipped with a GPS receiver and a navigation tool. These devices are useful when a user seeks to reach a specified destination as soon as possible, but may not be so when he/she only needs to arrive at the destination in time and wants to focus on some activities on the way. To deal with this latter situation, a method and device called "Space Time Alarm Clock" is presented for helping the user reach the destination by a specified deadline and informing the user about the consequences of his/her decisions. It does so by continuously and efficiently computing how much more time the user may stay at his/her current location without failing to reach the destination by the deadline. Furthermore, it determines the possible movement choices that a user can make with regard to an underlying road network and computes the shortest travel time associated with each choice. The advantage of this approach is that it works completely in the background, so that the user's en-route activities are never interfered with. The "Space Time Alarm Clock" was implemented and tested for Stockholm.
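The core computation, how long the user may stay put and still make the deadline, can be sketched with Dijkstra on a toy road graph. Place names and travel times are invented; the actual system recomputes this continuously against a real network:

```python
import heapq

def shortest_time(graph, src, dst):
    """Dijkstra over a dict-of-dicts road graph with travel times (s) as edge weights."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for nxt, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

road = {
    "cafe":   {"square": 120, "bridge": 300},
    "square": {"station": 240},
    "bridge": {"station": 90},
}

def slack_seconds(deadline_s, now_s, position, destination):
    """Remaining 'stay time': time budget minus the shortest travel time out."""
    return (deadline_s - now_s) - shortest_time(road, position, destination)

# 600 s until the deadline, fastest route to the station takes 360 s.
remaining = slack_seconds(deadline_s=3600, now_s=3000, position="cafe", destination="station")
```

When `remaining` drops to zero the alarm fires; evaluating `shortest_time` from each neighbouring node likewise scores the "movement choices" the abstract mentions.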
APA, Harvard, Vancouver, ISO, and other styles
18

Chehtane, Mounir. "Real Time Reverse Transcription-Polymerase Chain Reaction for Direct Detection of Viable Mycobacterium avium Subspecies Paratuberculosis in Crohn's Disease Patients and Association of MAP Infection with Downregulation in Interferon-Gamma Receptor (INFG." Master's thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4281.

Full text
Abstract:
Association of Mycobacterium avium subspecies paratuberculosis (MAP) with Crohn's disease (CD) and not with ulcerative colitis (UC), two forms of inflammatory bowel disease (IBD), has been vigorously debated in recent years. This theory has been strengthened by the recent culture of MAP from breast milk, intestinal tissue and blood from patients with active Crohn's disease. Culture of MAP from clinical samples has remained challenging due to the fastidious nature of MAP, including its lack of a cell wall in infected patients. The advent of real time PCR has proven significant in infectious disease diagnostics. In this study, a real time reverse transcriptase PCR (RT-PCR) assay based on targeting mRNA of the IS900 gene unique to MAP was developed. All variables involved in RNA isolation, cDNA synthesis and real time PCR amplification were optimized. Oligonucleotide primers were designed to amplify a 165 bp fragment specific to MAP, and the assay demonstrated a sensitivity of 4 genomes per sample. In the hope that this real time RT-PCR may aid in the detection of viable MAP cells in Crohn's disease patients, a total of 45 clinical samples were analyzed. A portion of each sample was also subjected to 12 weeks of culture followed by standard nested PCR analysis. The samples consisted of 17 cultures (originating from 13 CD, 1 UC and 3 NIBD subjects), 24 buffy coat blood samples (originating from 7 CD, 2 UC, 11 NIBD and 4 healthy subjects) and 4 intestinal biopsies from 2 CD patients. Real time RT-PCR detected viable MAP in 11/17 (65%) of suspected cultures compared to 12/17 (70%) by nested PCR, including 77% and 84% from CD samples by the two methods, respectively. Real time RT-PCR detected MAP RNA directly from 3/7 (42%) CD, 2/2 (100%) UC and 0/4 healthy controls, similar to results following long term culture incubation and nested PCR analysis. Interestingly, real time RT-PCR detected viable MAP in 2/11 (13%) of NIBD patients compared to 4/11 (26%) by culture and nested PCR.
For tissue samples, real time RT-PCR detected viable MAP in one CD patient, while the culture outcome remains pending. This study clearly indicates that a 12-hr real time RT-PCR assay provided data similar to those from 12 weeks of culture and nested PCR analysis. In our laboratory, we previously demonstrated a possible downregulation of the interferon-gamma receptor gene (IFNGR1) in patients with active Crohn's disease using microarray chip analysis. In this study, measurement of RNA by real time qRT-PCR indicated a possible downregulation in 5/6 CD patients compared to 0/12 controls. These preliminary data suggest that downregulation of the IFNGR1 gene, together with the detection of viable MAP in CD patients, provides the strongest evidence yet for a link between MAP and CD etiology.
M.S.
Department of Molecular Biology and Microbiology
Burnett College of Biomedical Sciences
Molecular Biology and Microbiology
APA, Harvard, Vancouver, ISO, and other styles
19

Sixsmith, Jaimie. "Development of real time PCR to map poly(ADP-ribose) polymerase family gene expression in human brain tissue and cultured cells." Thesis, University of Newcastle Upon Tyne, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.413948.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Pentikäinen, Filip, and Albin Sahlbom. "Combining Influence Maps and Potential Fields for AI Pathfinding." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18228.

Full text
Abstract:
This thesis explores the combination of influence maps and potential fields in two novel pathfinding algorithms, IM+PF and IM/PF, that allows AI agents to intelligently navigate an environment. The novel algorithms are compared to two established pathfinding algorithms, A* and A*+PF, in the real-time strategy (RTS) game StarCraft 2. The main focus of the thesis is to evaluate the pathfinding capabilities and real-time performance of the novel algorithms in comparison to the established pathfinding algorithms. Based on the results of the evaluation, general use cases of the novel algorithms are presented, as well as an assessment if the novel algorithms can be used in modern games. The novel algorithms’ pathfinding capabilities, as well as performance scalability, are compared to established pathfinding algorithms to evaluate the viability of the novel solutions. Several experiments are created, using StarCraft 2’s base game as a benchmarking tool, where various aspects of the algorithms are tested. The creation of influence maps and potential fields in real-time are highly parallelizable, and are therefore done in a GPGPU solution, to accurately assess all algorithms’ real-time performance in a game environment. The experiments yield mixed results, showing better pathfinding and scalability performance by the novel algorithms in certain situations. Since the algorithms utilizing potential fields enable agents to inherently avoid and engage units in the environment, they have an advantage in experiments where such qualities are assessed. Similarly, influence maps enable agents to traverse the map more efficiently than simple A*, giving agents inherent advantages. In certain use cases, where multiple agents require pathfinding to the same destination, creating a single influence map is more beneficial than generating separate A* paths for each agent. 
The main benefits of generating the influence map, compared to A*-based solutions, are the lower total compute time, more precise pathfinding and the possibility of pre-calculating the map.
(Swedish abstract.) This report explores the combination of influence maps and potential fields in two new pathfinding algorithms, IM+PF and IM/PF, which enable intelligent navigation by AI agents. The new algorithms are compared with two existing pathfinding algorithms, A* and A*+PF, in the real-time strategy game StarCraft 2. The focus of the report is to evaluate the new algorithms' pathfinding ability and real-time performance relative to the two existing algorithms, in six different experiments. Based on the results of the experiments, general areas of use for the algorithms are presented, together with an assessment of whether the algorithms can be used in modern games. The four pathfinding algorithms are implemented to compare pathfinding ability and real-time performance, in order to draw conclusions about the viability of the new algorithms. Using StarCraft 2 as a benchmarking tool, six experiments are created in which different aspects of the algorithms are tested. The generation of influence maps and potential fields in real time is work that can be parallelised, so a GPGPU solution is implemented to obtain a meaningful representation of the algorithms' real-time performance in a game environment. The experiments show that the new algorithms perform better in both pathfinding ability and scalability under certain conditions. The algorithms that use potential fields have a great advantage over plain A*, since agents can naturally avoid or confront units in the environment, which benefits them in experiments where such abilities are evaluated. Influence maps likewise offer their own advantages over A*, since agents that exploit influence maps can traverse the world more efficiently. When several AI agents are to traverse a world to the same goal, it can be advantageous to create an influence map rather than generate individual A* paths for each agent.
The main advantages of the influence-map-based algorithms are that they require lower total computation time and give more exact pathfinding, as well as the possibility of pre-computing the influence map texture.
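The influence-map side of these algorithms can be illustrated compactly: flood-fill step distances outward from the goal once, then let any number of agents greedily descend the map. A simplified CPU sketch (the thesis work runs on the GPU inside StarCraft 2; the grid layout and names here are illustrative):

```python
from collections import deque

def influence_map(grid, goal):
    """Breadth-first flood fill from the goal: each free cell gets its
    step distance to the goal. grid: 0 = free, 1 = blocked."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

def next_step(dist, pos):
    """Greedy descent on the influence map: move to the neighbouring
    cell with the smallest remaining distance to the goal."""
    r, c = pos
    best = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(dist) and 0 <= nc < len(dist[0]) and dist[nr][nc] is not None:
            if dist[best[0]][best[1]] is None or dist[nr][nc] < dist[best[0]][best[1]]:
                best = (nr, nc)
    return best
```

Because the map is computed once per goal, any number of agents heading to the same destination can share it, which is the multi-agent advantage the abstract highlights over per-agent A* paths.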
APA, Harvard, Vancouver, ISO, and other styles
21

Schwarz, Sebastian. "Depth Map Upscaling for Three-Dimensional Television : The Edge-Weighted Optimization Concept." Licentiate thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17048.

Full text
Abstract:
With the recent comeback of three-dimensional (3D) movies to the cinemas, there have been increasing efforts to spread the commercial success of 3D to new markets. The possibility of a 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Scene depth information plays a crucial role in all parts of the distribution chain, from content capture via transmission to the actual 3D display. This depth information is transmitted in the form of depth maps accompanied by corresponding video frames, e.g. for Depth Image Based Rendering (DIBR) view synthesis. Nonetheless, scenarios exist in which the spatial resolutions of the depth maps and video frames do not match, e.g. sensor-driven depth capture or asymmetric 3D video coding. This resolution discrepancy is a problem, since DIBR requires accordance between the video frame and the depth map. A considerable amount of research has been conducted into ways of matching low-resolution depth maps to high-resolution video frames. Many proposed solutions utilize corresponding texture information in the upscaling process; however, they mostly fail to review this information for validity. In the drive for better 3DTV quality, this thesis presents the Edge-Weighted Optimization Concept (EWOC), a novel texture-guided depth upscaling application that addresses this lack of information validation. EWOC uses edge information from video frames as guidance in the depth upscaling process and, additionally, confirms this information against the original low-resolution depth. Over the course of four publications, EWOC is applied to 3D content creation and distribution, and various guidance sources, such as different color spaces or texture pre-processing, are investigated.
An alternative depth compression scheme based on depth map upscaling is proposed, and extensions for increased visual quality and computational performance are presented. EWOC was evaluated against competing approaches, with the main focus consistently on the visual quality of rendered 3D views. The results show an increase in both objective and subjective visual quality compared to state-of-the-art depth map upscaling methods. This quality gain motivates the choice of EWOC in applications affected by low-resolution depth. In the end, EWOC can improve 3D content generation and distribution, enhancing the 3D experience and helping to boost the commercial success of 3DTV.
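EWOC itself is formulated as an edge-weighted optimization; as a rough illustration of the underlying idea — letting high-resolution texture steer where low-resolution depth samples contribute, so depth edges align with texture edges — here is a much simpler joint-bilateral-style upscaler (all parameter names and values are illustrative, and this is not the published algorithm):

```python
import math

def upscale_depth(depth_lr, texture_hr, scale, sigma=20.0):
    """Texture-guided depth upscaling sketch (a simplification, not the
    published EWOC optimization): each high-res pixel averages nearby
    low-res depth samples, weighted by the similarity of the guiding
    texture, so depth discontinuities snap to texture edges."""
    h_hr, w_hr = len(texture_hr), len(texture_hr[0])
    h_lr, w_lr = len(depth_lr), len(depth_lr[0])
    depth_hr = [[0.0] * w_hr for _ in range(h_hr)]
    for y in range(h_hr):
        for x in range(w_hr):
            acc, wsum = 0.0, 0.0
            for dy in (0, 1):
                for dx in (0, 1):
                    ly = min(y // scale + dy, h_lr - 1)
                    lx = min(x // scale + dx, w_lr - 1)
                    # Texture similarity between the target pixel and the
                    # texture under the low-res sample's position.
                    t = texture_hr[min(ly * scale, h_hr - 1)][min(lx * scale, w_hr - 1)]
                    diff = texture_hr[y][x] - t
                    w = math.exp(-diff * diff / (2.0 * sigma * sigma))
                    acc += w * depth_lr[ly][lx]
                    wsum += w
            depth_hr[y][x] = acc / wsum
    return depth_hr
```

With a sharp texture edge as guidance, upscaled depth values on each side of the edge follow the low-resolution depth sample whose texture matches, instead of blurring across the discontinuity.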
APA, Harvard, Vancouver, ISO, and other styles
22

Unsal, Ahmet Dundar. "Estimation Of Time-dependent Link Costs Using Gps Track Data." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/3/12608010/index.pdf.

Full text
Abstract:
Intelligent Transport Systems (ITS) are becoming a part of our daily lives in various forms of application. Their success depends highly on the accuracy of the digital data they use. In networks whose characteristics change over time, time-based network analysis algorithms provide more accurate results. However, these analyses require time-based travel speed data. Conventionally, traffic data are obtained from loop detectors. These detectors usually exist on main arteries, freeways and highways; they rarely exist on back roads, secondary roads and streets, owing to their deployment costs. Today, telematics systems allow fleet operators to track their fleets remotely from a central system. These systems provide time-stamped data about the behavior of vehicles. A tracking system can therefore be used as an alternative to detector-based systems for estimating travel speeds on networks. This study aims to provide methods for estimating network characteristics using data collected directly from fleets of vehicles equipped with global positioning system (GPS) receivers. GIS technology is used to process the collected GPS data spatially and match them to digital road maps. After matching, the time-dependent characteristics of the roads on which the tracked vehicles traveled are estimated, providing the data needed for a time-dependent network analysis. The methods proposed in this study are tested on the traffic network of the Middle East Technical University campus. The results show that the proposed methods are capable of measuring time-dependent link travel times on the network; peak hours across the network are clearly detected.
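A tracking-based estimation pipeline like the one described reduces, at its core, to aggregating matched observations per link and time bin. A minimal sketch (the tuple layout, function names and units are assumptions for illustration, not the study's actual design):

```python
from collections import defaultdict

def link_speeds_by_hour(observations):
    """Aggregate map-matched GPS observations into a mean speed per
    (link, hour-of-day) pair. observations: (link_id, hour, speed_kmh)."""
    sums = defaultdict(lambda: [0.0, 0])
    for link_id, hour, speed in observations:
        entry = sums[(link_id, hour)]
        entry[0] += speed
        entry[1] += 1
    return {key: total / count for key, (total, count) in sums.items()}

def link_travel_time(length_km, mean_speed_kmh):
    """Time-dependent link cost (seconds) from the aggregated speed."""
    return 3600.0 * length_km / mean_speed_kmh
```

Feeding these per-hour link costs into a routing algorithm is what turns a static network analysis into a time-dependent one, and a peak hour shows up directly as a bin with a low mean speed.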
APA, Harvard, Vancouver, ISO, and other styles
23

Dally, Nadine [Verfasser]. "Map based cloning and functional analysis of two bolting time loci BTC1 and BvBBX19 in sugar beet (Beta vulgaris L.) / Nadine Dally." Kiel : Universitätsbibliothek Kiel, 2014. http://d-nb.info/1053653506/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Hadachi, Amnir. "Travel Time Estimation Using Sparsely Sampled Probe GPS Data in Urban Road Networks Context." Phd thesis, INSA de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00800203.

Full text
Abstract:
This dissertation is concerned with the problem of estimating per-link travel times in an urban context using sparsely sampled GPS data; working with such sparse data is one of the main challenges of the thesis. As part of this research work, I developed a digital map with a new geographic information system (GIS), dealt with the map-matching problem, for which an enhancement technique is proposed, and addressed the shortest path problem. The thesis research was conducted within the PUMAS project, which was an advantage for our research with regard to collecting data from the real-world field and running our tests. The PUMAS project (Plate-forme Urbaine de Mobilité Avancée et Soutenable / Urban Platform for Sustainable and Advanced Mobility) is a pre-industrial project whose objective is to inform about the traffic situation and to develop and implement a platform for sustainable mobility in order to evaluate it in the region, specifically Rouen, France. The result is a framework that gives any traffic controller, manager or estimation researcher access to vast stores of data about traffic estimation, forecasting and status.
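The map-matching step mentioned above rests on a geometric primitive: snapping a GPS fix to the nearest point on a candidate road segment. A minimal sketch of that primitive (plain planar coordinates; the names are illustrative, and this is not the enhanced technique the thesis proposes):

```python
def match_to_segment(point, segments):
    """Geometric map-matching sketch: snap a GPS fix to the nearest
    point on any candidate road segment (planar coordinates assumed).
    segments: (segment_id, (ax, ay), (bx, by)) tuples."""
    px, py = point
    best = None
    for seg_id, (ax, ay), (bx, by) in segments:
        dx, dy = bx - ax, by - ay
        length2 = dx * dx + dy * dy
        # Parameter of the perpendicular projection, clamped to [0, 1]
        # so the snapped point stays on the segment.
        t = 0.0 if length2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length2))
        qx, qy = ax + t * dx, ay + t * dy
        d2 = (px - qx) ** 2 + (py - qy) ** 2
        if best is None or d2 < best[0]:
            best = (d2, seg_id, (qx, qy))
    return best[1], best[2]
```

Real matchers additionally use heading, speed and the sequence of previous fixes to disambiguate parallel roads; the projection above is only the distance component they all share.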
APA, Harvard, Vancouver, ISO, and other styles
25

Svanström, Fredrik. "Properties of a generalized Arnold’s discrete cat map." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-35209.

Full text
Abstract:
After reviewing some properties of the two dimensional hyperbolic toral automorphism called Arnold's discrete cat map, including its generalizations with matrices having positive unit determinant, this thesis contains a definition of a novel cat map where the elements of the matrix are found in the sequence of Pell numbers. This mapping is therefore denoted as Pell's cat map. The main result of this thesis is a theorem determining the upper bound for the minimal period of Pell's cat map. From numerical results four conjectures regarding properties of Pell's cat map are also stated. A brief exposition of some applications of Arnold's discrete cat map is found in the last part of the thesis.
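For small lattice sizes, the minimal period discussed in the thesis can be computed by brute force: iterate the map's matrix modulo n until it returns to the identity. The sketch below defaults to Arnold's matrix; the exact Pell-number matrix is defined in the thesis, so here the matrix is simply a parameter:

```python
def cat_map_period(n, matrix=((2, 1), (1, 1))):
    """Smallest k such that matrix**k is the identity mod n, i.e. the
    minimal period of the discrete cat map on an n-by-n lattice.
    Default matrix is Arnold's cat map; other unimodular matrices,
    e.g. Pell-style variants, can be passed in."""
    (a, b), (c, d) = matrix
    identity = (1 % n, 0, 0, 1 % n)
    current = (a % n, b % n, c % n, d % n)
    k = 1
    while current != identity:
        e, f, g, h = current
        current = ((e * a + f * c) % n, (e * b + f * d) % n,
                   (g * a + h * c) % n, (g * b + h * d) % n)
        k += 1
    return k
```

For example, `cat_map_period(5)` yields 10 for Arnold's map on a 5×5 lattice; an illustrative unit-determinant matrix with Pell-number entries, such as `((1, 2), (2, 5))`, can be passed as the second argument (whether this is the thesis's Pell matrix is not stated here).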
APA, Harvard, Vancouver, ISO, and other styles
26

Fookes, William. "Optimisation of a pillow production line applying Lean principles." Thesis, KTH, Industriell produktion, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102777.

Full text
Abstract:
Manufacturing companies throughout the world are interested in reducing the time between a customer placing an order and receiving the payment for that order. This premise is a central characteristic of the Lean philosophy, and is one of the reasons to apply it. Today, manufacturers around the world are embracing Lean techniques in order to reduce waste and increase productivity, and also to increase inventory turns, which translates into improved cash flow for the company. Nowadays, amid financial turmoil, every company is looking to reduce inventories, to work with Just in Time supply chains, and to develop production systems that reduce scrap and produce only what is needed, saving space and freeing up time to work on new designs and stay at the edge of innovation, in order to gain market share and keep improving. This master thesis is focused on implementing Lean principles in a pillow production line. To achieve this, a series of techniques for assessing the facility were implemented, which made it possible to understand how the facility was working, where the bottleneck was, and how it functions as a system, avoiding a focus on a single point and instead viewing it as a whole, in which each part contributes in a specific and unique way but all of them are necessary. Applying Lean principles is a daunting task that takes a long time, a never-ending process of trial and error; because of this, the goal of this study is to lay the foundations for a Lean transformation. After analysing the facility, a schedule for the implementation is developed and proposed to the company. The study reveals that it is possible to reduce the lead time of the facility by 60% and to avoid the backorder situation present in the company, also improving the service level.
APA, Harvard, Vancouver, ISO, and other styles
27

Westman, Freddie. "Modeller för restidsuppskattning baserat på Floating Car Data." Thesis, Linköping University, Department of Science and Technology, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1510.

Full text
Abstract:

(Swedish abstract.) In the metropolitan regions, the traffic situation grows more strained with every passing year. Migration into the cities continues at an unchanged pace, and more people must share the same space. The situation on the roads is becoming untenable, and these problems must be solved soon if development in the regions is not to stagnate. The possibilities for further road construction are limited, however, and other solutions must be considered. The field of intelligent transport systems (ITS) offers many new applications in which new technology is used to find solutions to today's traffic problems. One element of this is collecting and distributing information about travel times on the roads, in an attempt to spread traffic more evenly across the entire road network. There are various methods for collecting this type of information, but this report focuses on describing systems based on Floating Car Data (FCD).

The work described in the report has mainly analysed four different travel-time estimation models and compared them with one another. The models base their calculations on observations from unidentified vehicles, i.e. the observations carry no identity stamp that can be linked to a specific vehicle. Two of the models treat each link as a whole and perform the calculations on that basis, while the other two divide each link into smaller segments, which allows for greater accuracy. The models were initially tested on simulated data based on traffic measurements in the Gothenburg area. All calculations were restricted to link level rather than entire road networks, because it was initially too complicated to create the map-matching method that would be required to perform calculations on several links simultaneously.

After the tests on simulated data, the models were also tried on a real data set taken from the Probe project in the Stockholm area. The results of the tests show that the travel-time estimates do not differ appreciably between the models. The stretch of road chosen for analysis in the simulated cases was not affected by any major disturbances or flow variations, with the result that all the models generated equivalent travel times. Even in the case of the real data set, which contained larger flow variations over time, the estimates of the different models could not be distinguished appreciably.

The conclusion is that traffic in general does not exhibit such strong changes in flow over time that particularly advanced models are required to calculate travel times at link level, at least not if incidents are disregarded. The calculated travel times, and the information they provide, should primarily be used for direct traffic control in order to achieve the desired result. People rely more on their own experience in familiar areas, so information of this type is better suited as an aid for the individual when travelling in unfamiliar traffic.

APA, Harvard, Vancouver, ISO, and other styles
28

Schwarz, Sebastian. "Gaining Depth : Time-of-Flight Sensor Fusion for Three-Dimensional Video Content Creation." Doctoral thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21938.

Full text
Abstract:
The successful revival of three-dimensional (3D) cinema has generated a great deal of interest in 3D video. However, contemporary eyewear-assisted displaying technologies are not well suited for the less restricted scenarios outside movie theaters. The next generation of 3D displays, autostereoscopic multiview displays, overcome the restrictions of traditional stereoscopic 3D and can provide an important boost for 3D television (3DTV). Then again, such displays require scene depth information in order to reduce the amount of necessary input data. Acquiring this information is quite complex and challenging, thus restricting content creators and limiting the amount of available 3D video content. Nonetheless, without broad and innovative 3D television programs, even next-generation 3DTV will lack customer appeal. Therefore simplified 3D video content generation is essential for the medium's success. This dissertation surveys the advantages and limitations of contemporary 3D video acquisition. Based on these findings, a combination of dedicated depth sensors, so-called Time-of-Flight (ToF) cameras, and video cameras, is investigated with the aim of simplifying 3D video content generation. The concept of Time-of-Flight sensor fusion is analyzed in order to identify suitable courses of action for high quality 3D video acquisition. In order to overcome the main drawback of current Time-of-Flight technology, namely the high sensor noise and low spatial resolution, a weighted optimization approach for Time-of-Flight super-resolution is proposed. This approach incorporates video texture, measurement noise and temporal information for high quality 3D video acquisition from a single video plus Time-of-Flight camera combination. Objective evaluations show benefits with respect to state-of-the-art depth upsampling solutions. Subjective visual quality assessment confirms the objective results, with a significant increase in viewer preference by a factor of four. 
Furthermore, the presented super-resolution approach can be applied to other applications, such as depth video compression, providing bit rate savings of approximately 10 percent compared to competing depth upsampling solutions. The work presented in this dissertation has been published in two scientific journals and five peer-reviewed conference proceedings.  In conclusion, Time-of-Flight sensor fusion can help to simplify 3D video content generation, consequently supporting a larger variety of available content. Thus, this dissertation provides important inputs towards broad and innovative 3D video content, hopefully contributing to the future success of next-generation 3DTV.
APA, Harvard, Vancouver, ISO, and other styles
29

Javed, Nasir. "Development of Genetic Linkage Maps and Identification of Quantitative Trait Loci Influencing Seed Oil Content, Fatty Acid Profile and Flowering Time in Brassica napus L." Hereditary Genetics, 2012. http://hdl.handle.net/1993/30633.

Full text
Abstract:
Identification of allelic variation through quantitative trait loci (QTL) mapping offers possibilities for the improvement of quantitatively inherited traits. This requires a genetic map along with the phenotypic characterization of a mapping population. A doubled haploid (DH) Polo X Topas population consisting of 194 lines and a recombinant inbred line population of 92 lines was developed. Individual genetic maps derived from each population were integrated into a consensus map. The DH-based genetic map was used for QTL mapping. The DH-based map was comprised of 620 loci that were assembled into 19 linkage groups that were anchored to the B. napus chromosomes. The DH-based map covered 2244.1 cM genomic distance with an average marker interval of 3.7 cM. The DH population was phenotyped in four environments with each line replicated twice in a randomized complete block design. Days to flowering was recorded and oil content and fatty acid composition were determined using Near Infrared spectroscopy (NIR) and Gas Chromatography, respectively. Fourteen QTL were identified for oil content, 33 QTL for palmitic acid content, 18 QTL for stearic acid content, 21 QTL for oleic acid content, 20 QTL for linoleic acid content, 23 QTL for linolenic acid content, 16 QTL for arachidic acid content and 14 QTL for flowering time. Oil content QTL were identified on five linkage groups, A3, A10, C1, C5, and C6. An oil content QTL, qOIL-A10c appeared in all four environments, whereas qOIL-A10a appeared in only one environment but explained 26.99% variation. The oil content in the population ranged from 35% to 55.5% with the parents having values of 42% to 46%. Two genomic regions on C3, with map positions at 147.83 cM and 154.55 cM harbored QTL (rQTL) for all the fatty acids studied. The additive effects of the rQTL reveal a correlation pattern which is supported by the phenotypic correlation observed between the fatty acids. 
This suggests that the rQTL play a role in fatty acid composition and possibly determine total seed oil content. The rQTL and the flanking markers of the identified QTL offer utility in the further development of B. napus.
October 2015
APA, Harvard, Vancouver, ISO, and other styles
30

Lee, Jason W. L. "Novel developments in time-of-flight particle imaging." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:195be057-7ce0-4a15-b639-b08892fde312.

Full text
Abstract:
In the field of physical chemistry, the relatively recently developed technique of velocity-map imaging has allowed chemical dynamics to be explored in greater depth than could previously be achieved using other methods. Capturing the scattering image associated with the products resulting from fragmentation of a molecule allows the dissociative pathways and energy landscape to be investigated. In the study of particle physics, the neutron has become an irreplaceable spectroscopic tool due to the unique nature of its interaction with certain materials. Neutron spectroscopy is a non-destructive imaging technique that allows a number of properties to be discerned, including chemical identification, strain tensor measurements and the imaging of structures beneath the sample surface using radiography and tomography. In both of these areas, as well as a multitude of other disciplines, a flight tube is used to separate particles, distinguishing them by mass in the former case and by energy in the latter. Such experiments can be vastly enhanced by the ability to record both the position and the arrival time of the particle of interest. This thesis describes several new developments in instrumentation for experiments involving time-of-flight particle imaging. The first is the construction of a new velocity-map imaging instrument that utilises electron ionisation to perform both steps of molecular fragmentation and ionisation. Data from CO2 are presented as an example of the instrument's capability, and a preliminary analysis of the images is performed. The second project is the design of a time-resolved and position-resolved detector developed for ion imaging experiments. The hardware, software and firmware are described and presented alongside data from a variety of experiments showcasing the breadth of investigations that are possible using the sensor.
Finally, the modifications made to the detector to allow time-resolved neutron imaging are detailed, with an in-depth description of the various proof-of-concept experiments carried out as part of the development process.
APA, Harvard, Vancouver, ISO, and other styles
31

Uhl, Philip J. "A Spatio-Temporal Data Model for Zoning." BYU ScholarsArchive, 2002. https://scholarsarchive.byu.edu/etd/1.

Full text
Abstract:
Planning departments are besieged with temporal/historical information. While for many institutions historical information can be relegated to archives, planning departments have a constant need to access and query their historical information, particularly their historical spatial information such as zoning. This can be a cumbersome process fraught with inaccuracies due to the changing organizational methods and the extended historical legacies of most municipalities. Geographic Information Systems can be a tool to provide a solution to the difficulties in querying spatio-temporal planning data. Using a data model designed specifically to facilitate the querying of historical zoning information, queries can be performed to answer basic zoning questions such as "what is the zoning history for a specific parcel of land?" This work outlines this zoning data model, its implementation, and its testing using queries basic to the needs of planning departments.
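The parcel-history query the abstract mentions is, in essence, a valid-time filter over zoning records. A minimal sketch of the idea (the record layout and field names are assumptions for illustration, not the thesis's actual data model):

```python
from datetime import date

def zoning_history(records, parcel_id):
    """Return the zoning history of one parcel, ordered by valid-from
    date. records: (parcel_id, zone_code, valid_from, valid_to) tuples,
    where valid_to is None for the currently effective designation."""
    history = [r for r in records if r[0] == parcel_id]
    return sorted(history, key=lambda r: r[2])

def zoning_at(records, parcel_id, when):
    """Which zone applied to the parcel on a given date?"""
    for _, zone, start, end in zoning_history(records, parcel_id):
        if start <= when and (end is None or when < end):
            return zone
    return None
```

Storing an explicit valid-from/valid-to interval per designation, rather than overwriting the current zone, is what makes "what was the zoning here in 1998?" a simple query instead of an archive search.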
APA, Harvard, Vancouver, ISO, and other styles
32

Ramos, Elisabete Manuela de Sousa. "Sistemas Multiplex para a deteção e caracterização molecular de infeções." Master's thesis, [s.n.], 2012. http://hdl.handle.net/10284/3575.

Full text
Abstract:
Dissertation presented to Universidade Fernando Pessoa as part of the requirements for the degree of Master in Pharmaceutical Sciences.
(Portuguese abstract.) There is currently enormous interest in studying microorganisms, particularly those pathogenic to humans, since their presence in the body constitutes a potential risk to human health. For this reason, the last 10-20 years have seen a significant evolution in the techniques available for the laboratory identification of the agents responsible for infectious diseases. The new methods, especially those based on nucleic acid technology, offer enormous advantages in terms of sensitivity, specificity and speed in obtaining results, and have therefore been used increasingly in the detection and identification of microorganisms of diagnostic, prognostic and treatment-guidance interest. Several molecular technologies are used in the laboratory to detect pathogens. The most widely used depend on PCR platforms, and more recently on real-time PCR, providing sensitivity, specificity and speed unprecedented in the history of clinical microbiology and virology. However, the high specificity of these techniques can also be a disadvantage in clinical situations whose aetiologies may be associated with a broad spectrum of microorganisms. For this reason, molecular strategies have been developed to detect multiple agents simultaneously: the so-called Multiplex Systems. The first of these systems to be developed was Multiplex PCR, which makes it possible to detect and amplify more than one organism simultaneously, thereby identifying the aetiological agent.
More recently, other multiplex methods have been developed with the same goal: microchip-based techniques (hybridisation or amplification arrays, which allow the detection and identification of up to thousands of sequences and consequently hundreds to thousands of pathogens simultaneously) as well as systems based on X-Map technology (a fusion of flow cytometry and solution hybridisation with fluorescent particles). Likewise, systems based on the amplification and sequencing of the 16S rRNA gene allow the detection and identification of practically any pathogenic bacterium, without requiring any prior suspicion. Finally, the advent of second- and third-generation sequencing techniques has enabled the development of a new scientific field, Metagenomics, which seeks to characterise simultaneously and completely the microbiological composition of complex ecosystems, whether human (skin, urine, faeces, mouth, etc.) or environmental (air, wastewater, soil, etc.).
APA, Harvard, Vancouver, ISO, and other styles
33

Hallqvist, Kristoffer. "Dynamic label placement for moving objects." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-201632.

Full text
Abstract:
In command and control systems, for example air traffic control, operators must view many moving objects simultaneously. Graphical labels that identify objects move along with them, and for readability it is important that such labels do not overlap or hop around erratically as objects come close to each other. Instead, the labels should smoothly revolve around their objects. The goal of this thesis is to explore label placement strategies for moving objects that avoid overlap and hopping effects. In this thesis, we consider a simplified problem, in which time is coarsely discretized and each label is of a fixed size and can only be displayed in a limited number of distinct positions relative to its corresponding object. An optimal and a reactive heuristic algorithm are developed and applied to a number of test cases, which are then analysed for different statistical measures. In a scene with 25 objects traveling across a common area, the reactive algorithm is on average able to keep approximately half of the labels visible the whole time, whereas the optimal algorithm could only be applied to test cases with at most four objects. A prediction mechanism is implemented that on average decreases the number of times labels alternate between being hidden and visible. Future work could investigate how users perceive the usability of a system implementing the reactive algorithm.
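The reactive strategy described above can be illustrated with a small, hypothetical sketch (the slot offsets, label size and greedy ordering below are illustration choices, not the thesis's algorithm): each frame, a label keeps its current slot if it is still free, otherwise it tries the remaining slots, and is hidden only as a last resort.

```python
def rects_overlap(a, b):
    """Axis-aligned rectangles as (x, y, w, h); True if their interiors intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

# Four discrete label slots relative to an object (offsets of the rectangle corner).
SLOTS = [(8, 8), (-48, 8), (8, -20), (-48, -20)]
LABEL_W, LABEL_H = 40, 12

def place_labels(objects, current):
    """objects: list of (x, y); current: dict object index -> slot index (or None).
    Returns a new assignment, preferring each label's previous slot."""
    placed = []          # label rectangles already committed this frame
    assignment = {}
    for i, (x, y) in enumerate(objects):
        prev = current.get(i)
        order = ([prev] if prev is not None else []) + \
                [s for s in range(len(SLOTS)) if s != prev]
        chosen = None
        for s in order:
            dx, dy = SLOTS[s]
            rect = (x + dx, y + dy, LABEL_W, LABEL_H)
            if all(not rects_overlap(rect, r) for r in placed):
                placed.append(rect)
                chosen = s
                break
        assignment[i] = chosen   # None means the label is hidden this frame
    return assignment
```

Preferring the previous slot is what suppresses hopping: a label only moves when its current slot is actually invaded by a neighbour.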
APA, Harvard, Vancouver, ISO, and other styles
34

Labrande, Hugo. "Explicit computation of the Abel-Jacobi map and its inverse." Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0142/document.

Full text
Abstract:
The Abel-Jacobi map links the short Weierstrass form of a complex elliptic curve to the complex torus associated to it. One can compute it with a number of operations which is quasi-linear in the target precision, i.e. in time O(M(P) log P). Its inverse is given by Weierstrass's p-function, which can be written as a function of theta, an important function in number theory. The natural algorithm for evaluating theta requires O(M(P) sqrt(P)) operations, but some values (the theta-constants) can be computed in O(M(P) log P) operations by exploiting the links with the arithmetic-geometric mean (AGM). In this manuscript, we generalize this algorithm in order to compute theta in O(M(P) log P). We give a function F which has similar properties to the AGM. As with the algorithm for theta-constants, we can then use Newton's method to compute the value of theta. We implemented this algorithm, which is faster than the naive method for precisions larger than 300,000 decimal digits. We then study the generalization of this algorithm in higher genus, and in particular how to generalize the function F. In genus 2, we managed to prove that the same method leads to an O(M(P) log P) algorithm for theta; the same complexity also applies to the Abel-Jacobi map. This algorithm outperforms the naive method from lower precisions than in genus 1, around 3,000 decimal digits. We also outline how one could reach the same complexity in any genus. Finally, we study a new algorithm which computes an isogeny of elliptic curves with given kernel. This algorithm uses the Abel-Jacobi map because it is easy to evaluate the isogeny on the complex torus; it may be generalizable to higher genera.
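The complexity claims in the abstract rest on the quadratic convergence of the arithmetic-geometric mean: each iteration roughly doubles the number of correct digits, so P digits cost only O(log P) iterations on P-digit numbers. A minimal floating-point sketch of the bare iteration (the actual algorithms run at arbitrary precision):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean: replace (a, b) by their arithmetic and
    geometric means until the two values merge. Convergence is quadratic."""
    while abs(a - b) > tol * max(abs(a), abs(b)):
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    return (a + b) / 2.0
```

For instance, agm(1.0, 2.0) converges to about 1.4567910310469069 in a handful of iterations; at higher precision the iteration count grows only logarithmically, which is where the O(M(P) log P) bound comes from.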
APA, Harvard, Vancouver, ISO, and other styles
35

Wandeto, John Mwangi. "Self-organizing map quantization error approach for detecting temporal variations in image sets." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAD025/document.

Full text
Abstract:
A new approach for image processing, dubbed SOM-QE, that exploits the quantization error (QE) from self-organizing maps (SOM) is proposed in this thesis. SOMs produce low-dimensional discrete representations of high-dimensional input data. QE is determined from the results of the unsupervised learning process of the SOM and the input data. The SOM-QE of a time series of images can be used as an indicator of changes in the time series. To set up a SOM, the map size, the neighbourhood distance, the learning rate and the number of iterations in the learning process are determined. The combination of these parameters that gives the lowest value of QE is taken to be the optimal parameter set and is used to transform the dataset. This is how QE has conventionally been used. The novelty of the SOM-QE technique is fourfold: first, in the usage. SOM-QE employs a single SOM to determine QE for different images - typically, in a time-series dataset - unlike the traditional usage where different SOMs are applied on one dataset. Secondly, the SOM-QE value is introduced as a measure of uniformity within the image. Thirdly, the SOM-QE value becomes a special, unique label for the image within the dataset and, fourthly, this label is used to track changes that occur in subsequent images of the same scene. Thus, SOM-QE provides a measure of variations within the image at an instance in time, and when compared with the values from subsequent images of the same scene, it reveals a transient visualization of changes in the scene of study. In this research, the approach was applied to artificial, medical and geographic imagery to demonstrate its performance. Changes that occur in geographic scenes of interest, such as new buildings being put up in a city or lesions receding in medical images, are of interest to scientists and engineers. The SOM-QE technique provides a new way for automatic detection of growth in urban spaces or the progression of diseases, giving timely information for appropriate planning or treatment. 
In this work, it is demonstrated that SOM-QE can capture very small changes in images. Results also confirm it to be fast and less computationally expensive in discriminating between changed and unchanged contents in large image datasets. Pearson's correlation confirmed that there were statistically significant correlations between SOM-QE values and the actual ground-truth data. On evaluation, the technique performed better than other existing approaches. This work is important as it introduces a new way of looking at fast, automatic change detection even when dealing with small local changes within images. It also introduces a new method of determining QE, and the data it generates can be used to predict changes in a time-series dataset.
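As a concrete, heavily simplified illustration of the SOM-QE idea (not the thesis's implementation: the map size, learning schedule and synthetic grey-level data below are arbitrary choices for the example), the sketch trains one small one-dimensional SOM on a reference image and then scores later images of the same scene by their quantization error; a rise in the score flags a change:

```python
import numpy as np

def train_som(data, n_units=8, iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Online training of a tiny 1-D SOM on scalar samples (e.g. grey levels)."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(data.min(), data.max(), n_units)      # codebook vectors
    idx = np.arange(n_units)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.abs(w - x))                    # best-matching unit
        lr = lr0 * (1 - t / iters)                        # decaying learning rate
        sigma = max(sigma0 * (1 - t / iters), 0.5)        # shrinking neighbourhood
        h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
        w += lr * h * (x - w)
    return w

def quantization_error(data, w):
    """Mean distance from each sample to its nearest codebook vector."""
    return float(np.mean(np.min(np.abs(data[:, None] - w[None, :]), axis=1)))

# A reference 'image' (flattened grey levels around three tones) and a later
# image of the same scene in which a bright new object has appeared.
rng = np.random.default_rng(1)
reference = rng.choice([0.1, 0.5, 0.9], size=500) + rng.normal(0, 0.01, 500)
changed = reference.copy()
changed[:100] += 1.0

som = train_som(reference)
qe_ref = quantization_error(reference, som)
qe_new = quantization_error(changed, som)   # rises: codebook lacks the new tone
```

The single trained codebook acts as a fixed reference, so the QE score is directly comparable across all images of the series, which is the point of the SOM-QE usage described above.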
APA, Harvard, Vancouver, ISO, and other styles
36

Respati, Sara Wibawaning. "Network-scale arterial traffic state prediction: Fusing multisensor traffic data." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/202990/1/Sara%20Wibawaning_Respati_Thesis.pdf.

Full text
Abstract:
Road traffic congestion is an increasing societal problem, and road agencies and users seek accurate and reliable travel speed information. This thesis developed a network-scale traffic state prediction method based on Convolutional Neural Networks (CNN). The method can predict speeds over the network accurately by preserving road connectivity and incorporating historical datasets. For extensive networks, the thesis also developed a clustering method to reduce the complexity of the prediction. By accurately predicting the traffic state over a network, traffic operators can manage the network more effectively and travellers can make informed decisions on their journeys.
APA, Harvard, Vancouver, ISO, and other styles
37

Rahmani, Mahmood. "Urban Travel Time Estimation from Sparse GPS Data : An Efficient and Scalable Approach." Doctoral thesis, KTH, Transportplanering, ekonomi och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-167798.

Full text
Abstract:
The use of GPS probes in traffic management is growing rapidly as the required data collection infrastructure is increasingly in place, with a significant number of mobile sensors moving around covering expansive areas of the road network. Many travelers carry with them at least one device with a built-in GPS receiver. Furthermore, vehicles are becoming more and more location aware. Vehicles in commercial fleets are now routinely equipped with GPS. Travel time is important information for various actors of a transport system, from city planning, to day-to-day traffic management, to individual travelers. They all make decisions based on, among other factors, average travel time or the variability of travel time. AVI (Automatic Vehicle Identification) systems have commonly been used for collecting point-to-point travel time data. Floating car data (FCD), timestamped locations of moving vehicles, have shown potential for travel time estimation. Some advantages of FCD compared to stationary AVI systems are that they have no single point of failure and better network coverage. Furthermore, the availability of opportunistic sensors, such as GPS, makes the data collection infrastructure relatively convenient to deploy. Currently, systems that collect FCD are designed to transmit data in a limited form and relatively infrequently due to the cost of data transmission. Thus, reported locations are far apart in time and space, for example with 2-minute gaps. For sparse FCD to be useful for transport applications, the corresponding probes must be matched to the underlying digital road network. Matching such data to the network is challenging. This thesis makes the following contributions: (i) a map-matching and path inference algorithm, (ii) a method for route travel time estimation, (iii) a fixed-point approach for joint path inference and travel time estimation, and (iv) a method for fusion of FCD with data from automatic number plate recognition. 
In all methods, scalability and overall computational efficiency are considered among the design requirements. Throughout the thesis, the methods are used to process FCD from 1500 taxis in Stockholm City. Prior to this work, the data had been ignored because of its low frequency and minimal information. The proposed methods proved that the data can be processed and transformed into useful traffic information. Finally, the thesis implements the main components of an experimental ITS laboratory, called iMobility Lab. It is designed to explore GPS and other emerging data sources for traffic monitoring and control. Processes are developed to be computationally efficient and scalable, and to support real-time applications with large data sets through a proposed distributed implementation.
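The core of a map-matching and path-inference step for sparse probes can be illustrated with a tiny Viterbi-style sketch: each GPS fix gets an emission score from its distance to candidate road segments, consecutive fixes get a transition score penalising implausible jumps, and the best-scoring chain of segments is kept. Everything below (the toy four-segment network, midpoint distances, the sigma and beta weights) is a hypothetical illustration, not the thesis's algorithm:

```python
import math

# Toy road network: each segment is reduced to the (x, y) of its midpoint.
# A real matcher projects fixes onto segment geometry and uses shortest-path
# distances for transitions; midpoints keep the sketch short.
SEGMENTS = {"A": (0.0, 0.0), "B": (0.0, 1.0), "C": (1.0, 0.0), "D": (1.0, 1.0)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def match_track(fixes, sigma=0.5, beta=1.0):
    """Viterbi-style path inference: emission = Gaussian log-score in the
    fix-to-segment distance, transition = penalty on implausible jumps."""
    scores = {s: -dist(fixes[0], m) ** 2 / (2 * sigma ** 2)
              for s, m in SEGMENTS.items()}
    back = [dict()]
    for fix in fixes[1:]:
        new_scores, bp = {}, {}
        for s, m in SEGMENTS.items():
            emit = -dist(fix, m) ** 2 / (2 * sigma ** 2)
            prev, best = max(((p, scores[p] - beta * dist(SEGMENTS[p], m))
                              for p in SEGMENTS), key=lambda t: t[1])
            new_scores[s], bp[s] = best + emit, prev
        scores, back = new_scores, back + [bp]
    last = max(scores, key=scores.get)          # backtrack the best chain
    path = [last]
    for bp in reversed(back[1:]):
        path.append(bp[path[-1]])
    return list(reversed(path))
```

For example, a track with fixes near segments A and D is matched to the chain ["A", "D"]; scoring whole chains rather than snapping each fix independently is what makes the approach robust to the large gaps in sparse FCD.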


APA, Harvard, Vancouver, ISO, and other styles
38

Sævar, Guðbjörnssonn Alexander, and Yassin Haider Mohammed. "Flow Optimisation for Improved Performance of a Multivariant Manufacturing and Assembly Line." Thesis, KTH, Industriell produktion, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254443.

Full text
Abstract:
Stoneridge, Inc. is an independent designer and manufacturer of highly engineered electrical and electronic components, modules and systems principally for the automotive, commercial vehicle, motorcycle, agricultural and off-highway vehicle markets. A subsidiary of Stoneridge, Inc. is the company Stoneridge Electronics. They specialise in instrument clusters and tachographs, which are manufactured in high quantity in their production plant in Örebro, Sweden. This master's thesis focuses on the production line of an instrument cluster called Angela. In close collaboration with Stoneridge Electronics, the goal was to find ways to improve the output of the Angela line by at least 10% compared with the three best months, in terms of output, of the year before. The Angela line was analysed thoroughly and from different perspectives using lean tools such as value stream mapping, spaghetti diagrams and continuous improvement. Finally, the simulation software ExtendSim was used to simulate and analyse different suggestions. The results show that various steps can be taken to improve the efficiency and output of the manufacturing line by as much as 16.3%. Because other production lines in the plant are similar to the one the project was carried out on, the results could be applicable to the other lines as well.
APA, Harvard, Vancouver, ISO, and other styles
39

PALOZZI, ROBERTO. "The Ontogeny of foraging in Weddell seal pups and dietary behaviour in lactating females." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2010. http://hdl.handle.net/2108/1360.

Full text
Abstract:
Mammal females that fast throughout lactation are called "capital breeders" while, on the contrary, lactating females that continue to forage from parturition to the pups' weaning are referred to as "income breeders". The Weddell seal (Leptonychotes weddellii), the southernmost Antarctic pinniped, was long considered an extreme capital breeder, but the most recent studies of its diving behaviour and feeding habits raise many doubts about the validity of this strict categorization. It was also unclear whether Weddell seal pups begin to forage during lactation and what their adaptive strategy is for completing the transition from maternal milk to independent foraging. Although Weddell seal diving behaviour has been studied for a long time in Antarctica, above all in McMurdo Sound, one aspect has received less attention than others: the development of diving skills in pups during lactation (from birth until about 6 weeks of life), together with the associated behaviour of lactating females and the ontogeny of their diet. This work tried to shed light on this phase, crucial for newborn survival, and on maternal strategies during lactation, and it offers a view of the mothers' diving behaviour complementary to that of their own pups. Biochemical data from lactating females, their dependent pups, milk and prey items obtained by carbon and nitrogen stable isotope analysis were compared with data from Time Depth Recorders deployed on 16 mothers and 8 pups between the second and third day after parturition; moreover, diving data were analyzed both with a traditional approach, in which the dive profile shapes are fixed in advance by the software in number and pattern (MT-Dive, Jensen Software Systems), and with an unsupervised artificial neural network, the Self-Organizing Map (SOM). 
Results showed that the associated diving behaviour during lactation reflects high intra-specific variability (time in water, number of dives, maximum depth, duration, profile shapes); but while the traditional approach seems to suggest clear foraging activity in both mothers and pups (U-shaped dives, traditionally linked with foraging activity, were predominant throughout lactation and early weaning), the SOMs produced the opposite result. Larger SOMs, which can be regarded as non-linear ordinations, showed a much more gradual transition from V-shape to U-shape, while smaller SOMs, which act as non-hierarchical classifiers, found practically no U-shaped dive cluster. Analysis of the time spent in water indicated a very close association between mothers and their own pups, but also a wide range of underwater space use by the mothers when they leave the pups alone; the results showed trophic resource exploitation strategies varying appreciably from one individual to another, suggesting that the maternal strategy of lactating females is not unique and ranges from capital to income breeding. Stable isotope analysis suggested that pups do not forage independently in a relevant and detectable manner during lactation, but this methodology was not able to indicate the exact moment at which the transition to independent foraging occurs, nor to clearly detect the passage from one nutritional status to another in the mothers. Data at a higher resolution could come from the isotopic analysis of the plasma aqueous fraction, which this study explored for the first time.
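The "traditional" fixed-shape classification mentioned above can be illustrated with a minimal heuristic: call a dive U-shaped when a large fraction of its duration is spent near the maximum depth (a flat bottom phase), and V-shaped otherwise. The 0.8 depth fraction and 0.5 time fraction below are arbitrary illustration values, not parameters from the thesis or from MT-Dive:

```python
def classify_dive(depths, depth_frac=0.8, time_frac=0.5):
    """depths: depth samples at equal time intervals (positive = deeper).
    Returns 'U' for flat-bottomed profiles, 'V' for spike profiles."""
    max_depth = max(depths)
    if max_depth == 0:
        return "surface"
    bottom = sum(1 for d in depths if d >= depth_frac * max_depth)
    return "U" if bottom / len(depths) >= time_frac else "V"

# A flat-bottomed (foraging-like) profile and a spike (travel-like) profile:
u_dive = [0, 20, 38, 40, 40, 39, 40, 40, 38, 20, 0]
v_dive = [0, 10, 20, 30, 40, 30, 20, 10, 0]
```

Hard thresholds like these are exactly what the SOM analysis in the thesis avoids: the SOM orders profiles on a continuum instead of forcing each dive into a pre-fixed shape class.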
APA, Harvard, Vancouver, ISO, and other styles
40

Hajný, Petr. "Studie optimalizace operativního řízení výroby." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2009. http://www.nusl.cz/ntk/nusl-222080.

Full text
Abstract:
This master's thesis addresses the implementation of lean production methods in the steps assembly department at the company IFE CR, a. s. The first part of the thesis describes the material and information flows, and the most serious problems are identified there. On the basis of the analysis, an optimized future state is designed and possible solutions for achieving this state are recommended. The implementation of a new steps assembly line is worked out in detail in the last part.
APA, Harvard, Vancouver, ISO, and other styles
41

Avoni, Riccardo. "Analisi dei lead time di produzione e ottimizzazione del ciclo produttivo. Il Caso Ponzi srl." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
This work aims to define a methodology capable of reducing a company's production lead times and of quantifying the resulting benefits. The proposed methodology is divided into three phases: analysis of the current situation, identification of critical issues, and a proposed solution. This approach was applied to a real case, Ponzi s.r.l., a nationally leading company in the automatic door and window frame sector. Once the main cause of lost time along the company's production process had been identified, a project was developed to eliminate the problem at its source. To determine the positive impact on the company, the benefits of the project were quantified in terms of reduced average production lead time, increased productivity and economic advantage.
APA, Harvard, Vancouver, ISO, and other styles
42

Jansson, Mattias, and Jimmy Johansson. "Interactive Visualization of Statistical Data using Multidimensional Scaling Techniques." Thesis, Linköping University, Department of Science and Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1716.

Full text
Abstract:

This study has been carried out in cooperation with Unilever and partly within the EC-funded project Smartdoc IST-2000-28137.

In areas of statistics and image processing, both the amount of data and the dimensions are increasing rapidly and an interactive visualization tool that lets the user perform real-time analysis can save valuable time. Real-time cropping and drill-down considerably facilitate the analysis process and yield more accurate decisions.

In the Smartdoc project, there has been a request for a component used for smart filtering in multidimensional data sets. As the Smartdoc project aims to develop smart, interactive components to be used on low-end systems, the implemented self-organizing map algorithm proposes which dimensions to visualize.

Together with Dr. Robert Treloar at Unilever, the SOM Visualizer - an application for interactive visualization and analysis of multidimensional data - has been developed. The analytical part of the application is based on Kohonen’s self-organizing map algorithm. In cooperation with the Smartdoc project, a component has been developed that is used for smart filtering in multidimensional data sets. Microsoft Visual Basic and components from the graphics library AVS OpenViz are used as development tools.
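The thesis builds its analysis on Kohonen’s self-organizing map. As a rough illustration only (the SOM Visualizer itself was built in Microsoft Visual Basic with AVS OpenViz; the parameters below are illustrative assumptions), a minimal SOM training loop might look like:

```python
import numpy as np

def train_som(data, grid_w=4, grid_h=4, epochs=100, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal Kohonen self-organizing map on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    weights = rng.random((grid_w * grid_h, d))
    # Grid coordinates of each node, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], dtype=float)
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)              # linearly decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5  # shrinking neighbourhood radius
        for x in data[rng.permutation(n)]:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            dist2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-dist2 / (2 * sigma ** 2))                 # Gaussian neighbourhood
            weights += lr * h[:, None] * (x - weights)
    return weights

# Map a sample onto the trained grid via its best-matching unit.
data = np.random.default_rng(1).random((50, 3))
w = train_som(data)
bmu = int(np.argmin(np.linalg.norm(w - data[0], axis=1)))
```

After training, each input dimension's spread across the grid can be inspected to decide which dimensions are worth visualizing, which is the filtering idea described above.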

APA, Harvard, Vancouver, ISO, and other styles
43

Rahmani, Mahmood. "Path Inference of Sparse GPS Probes for Urban Networks : Methods and Applications." Licentiate thesis, KTH, Trafik och logistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-104524.

Full text
Abstract:
The application of GPS probes in traffic management is growing rapidly as the required data collection infrastructure is increasingly in place in urban areas, with a significant number of mobile sensors moving around and covering expansive areas of the road network. Most travelers carry at least one device with a built-in GPS receiver, and vehicles are becoming more and more location aware. Currently, systems that collect floating car data are designed to transmit the data in a limited form and relatively infrequently due to the cost of data transmission. As a result, the reported locations of vehicles are far apart in time and space. In order to extract traffic information from the data, it first needs to be matched to the underlying digital road network. Matching such sparse data to the network, especially in dense urban areas, is challenging. This thesis introduces a map-matching and path inference algorithm for sparse GPS probes in urban networks. The method was applied in a case study in Stockholm and showed robustness and high accuracy compared to a number of other methods in the literature. It was used to process floating car data from 1500 taxis in Stockholm City. The taxi data had previously been ignored because of its low frequency and minimal information. The proposed method showed that the data can be processed and transformed into information suitable for traffic studies. The thesis also implemented the main components of an experimental ITS laboratory, called iMobility Lab, designed to explore GPS and other emerging traffic and traffic-related data for traffic monitoring and control.
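The first step of any map-matching pipeline is snapping a raw GPS fix to a candidate road segment. The sketch below shows only that geometric core, not the thesis's actual path inference algorithm (which additionally infers the route between sparse fixes); the flat 2D coordinates and the `segments` list are illustrative assumptions:

```python
import math

def project_to_segment(p, a, b):
    """Project point p onto segment a-b; return (projected_point, distance)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    # Clamp the projection parameter t to [0, 1] so we stay on the segment.
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    qx, qy = ax + t * dx, ay + t * dy
    return (qx, qy), math.hypot(px - qx, py - qy)

def match_point(p, segments):
    """Snap a GPS point to the nearest road segment (geometric map-matching)."""
    return min((project_to_segment(p, a, b) + (i,) for i, (a, b) in enumerate(segments)),
               key=lambda r: r[1])

# A fix at (3, 1) snaps onto the first of two parallel road segments.
snapped, dist, seg_idx = match_point((3, 1), [((0, 0), (10, 0)), ((0, 5), (10, 5))])
```

For sparse probes, a production algorithm would score whole candidate paths between consecutive fixes (e.g. with an HMM over segment transitions) rather than snapping each point independently.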

QC 20121107

APA, Harvard, Vancouver, ISO, and other styles
44

Cebecauer, Matej. "Short-Term Traffic Prediction in Large-Scale Urban Networks." Licentiate thesis, KTH, Transportplanering, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-250650.

Full text
Abstract:
City-wide travel time prediction in real-time is an important enabler for efficient use of the road network. It can be used in traveler information to enable more efficient routing of individual vehicles, as well as in decision support for traffic management applications such as directed information campaigns or incident management. 3D speed maps have been shown to be a promising methodology for revealing day-to-day regularities of city-level travel times and possibly also for short-term prediction. In this work, we aim to further evaluate and benchmark the use of 3D speed maps for short-term travel time prediction and, to enable scenario-based evaluation of traffic management actions, we also evaluate the framework for traffic flow prediction. The 3D speed map methodology is adapted to short-term prediction and benchmarked against the historical mean as well as against Probabilistic Principal Component Analysis (PPCA). The benchmarking and analysis are made using one year of travel time and traffic flow data for the city of Stockholm, Sweden. The case study shows very promising results for the 3D speed map methodology in short-term prediction of both travel times and traffic flows. The modified version of the 3D speed map prediction outperforms both the historical mean prediction and the PPCA method. Further work includes an extended evaluation of the method under different conditions in terms of underlying sensor infrastructure, preprocessing and spatio-temporal aggregation, as well as benchmarking against other prediction methods.

QC 20190531

APA, Harvard, Vancouver, ISO, and other styles
45

Guo, Jun. "Further development of shaders for realistic materials and global illumination effects." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-76769.

Full text
Abstract:
Shader programming is important for real-time rendering of realistic materials and global illumination, especially in today's 3D industrial fields: more and more customers of Visual Components Oy, a Finnish 3D software company, are no longer content with a correct simulation result alone, but also expect realistic real-time rendering. This thesis project provides in-depth research on real-world material classification, property definition and global illumination techniques in industrial fields. Shader programs for the different materials and global illumination techniques are created according to the classification and definitions in this work. Moreover, the external rendering tool Redway3D is evaluated as a reference and considered a possible solution for future development work.
APA, Harvard, Vancouver, ISO, and other styles
46

Erlandsson, Oskar. "Look2Hook - A Comparative Study of Eye-tracker and Mouse Based Object Selection in a Complex Environment." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-303013.

Full text
Abstract:
In this thesis the Tobii eye-tracker 4L was used to investigate how well eye-tracking solutions such as confirmation-click and dwell-time algorithms compare to the standard mouse input device when performing selection tasks in a map environment. To distinguish the different levels of complexity one could face, two user cases are proposed: scenario one includes non-clustered objects; scenario two includes clustered, occluded objects. A user study with nine participants was conducted to compare execution times and find out how error-prone the different methods were. Each test participant performed eight different tests, three in the non-clustered scenario and five in the clustered scenario. In two of the tests in the clustered scenario, participants were aided by a zoom algorithm. The methods were evaluated by calculating the average execution times and errors along with the corresponding standard deviations. To capture the user experience, a subjective cognitive load score was calculated with the help of a questionnaire. The eye-tracker methods were found to be competitive with mouse interaction in the simpler non-clustered case. However, in a more complex scenario such as the clustered case, mouse interaction had the lowest average completion time and cognitive load score. A different type of selection behaviour was discovered among the test participants in the clustered scenario due to the difference in precision between the eye-tracker and mouse interaction. Finally, interesting areas to consider in future work are presented and discussed.
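The dwell-time technique compared in this study triggers a selection when gaze lingers near one spot long enough. As a minimal sketch (not the thesis's actual implementation; the threshold and radius values are illustrative assumptions), the core loop might look like:

```python
def dwell_select(gaze_samples, dwell_threshold_s=0.8, radius_px=40):
    """Return the index of the first sample completing a dwell: gaze stays
    within radius_px of the dwell anchor for dwell_threshold_s seconds.
    gaze_samples: list of (t_seconds, x, y) tuples in time order."""
    anchor = None
    for i, (t, x, y) in enumerate(gaze_samples):
        if anchor is None:
            anchor = (t, x, y)
            continue
        t0, ax, ay = anchor
        if (x - ax) ** 2 + (y - ay) ** 2 > radius_px ** 2:
            anchor = (t, x, y)           # gaze moved away: restart the dwell
        elif t - t0 >= dwell_threshold_s:
            return i                     # dwell completed -> selection fires
    return None

steady = dwell_select([(0.0, 100, 100), (0.3, 105, 102), (0.9, 103, 99)])
moving = dwell_select([(0.0, 0, 0), (0.5, 200, 200), (1.0, 400, 400)])
```

The tension the study observed follows directly from the two parameters: a larger radius or shorter threshold speeds up selection but raises the error rate on clustered, occluded targets.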
APA, Harvard, Vancouver, ISO, and other styles
47

Vlček, Adam. "Real-time vizualizace povětrnostních vlivů v terénu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236789.

Full text
Abstract:
Thanks to increasing computational power, the complexity and dynamism of virtual reality are continuously improving. This work examines the influence of weather on a landscape and the means to simulate and dynamically visualize it in real time on current personal computer hardware. The main goal is to find fast, good-looking approximations rather than a complex, physically correct simulation. The work covers using the modern programmable GPU not only for visualization but also as a powerful simulation instrument. The main topic is water movement in the terrain and its effects, such as erosion, snow melting and the impact of moisture on vegetation. This requires dynamic terrain texturing and algorithms supporting fast updates of geometry and normals.
APA, Harvard, Vancouver, ISO, and other styles
48

Roux-Kéréver, Catherine. "L'être-carnet : du dessin à la page, approche comparative." Thesis, Bordeaux 3, 2018. http://www.theses.fr/2018BOR30061.

Full text
Abstract:
My thesis subject in visual arts is based on a personal practice of the notebook that I have been carrying on since the end of my studies at the university of Rennes 2. These sketchbooks trace travel, career and life experiences by mixing and intermingling different languages: writing, graphics, collage and photography. I wish to give this production a reflexive and speculative dimension in order to better grasp its plastic and iconic modalities and to understand what is at stake in it. To achieve this, I will study the works in the light of the social sciences (sociology, anthropology, geography, etc.) and the sciences of art (art history, aesthetics, poietics). In the same way I will confront them with a corpus of historical, contemporary and current reference works in order to reveal their affinities and specificities. What does the notebook engage? The perception of the present moment, the will to understand, the links with the artistic and cultural heritage, the relationship to the contemporary world down to its most tenuous elements? Other, more complex things, such as self-construction or the development of a graphic or plastic identity? The attempts to answer these questions will make it possible to consider opening this notebook practice to the world of school. As a regional school inspector for visual arts, and a former lower and upper secondary school teacher of that discipline, I wonder about the teaching methods for building a pupil's artistic and cultural itinerary through the notebook, or the workbook turned notebook. In the hands of a young audience, could the notebook be a means of appropriating the artistic universe in a constructive, playful and imaginative way through the prism of school? Beyond the personal artistic and theoretical investment, this thesis could have a pragmatic scope and find experimental applications in schools in the broad sense, that is, from school to university.
APA, Harvard, Vancouver, ISO, and other styles
49

Boldrini, Simone. "Riprogettazione del flusso di produzione mediante l'applicazione di tecniche della Lean Production: il caso Orlandi Radiatori S.p.A." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
This thesis focuses on my contribution, in the role of facilitator, to the changes and improvements made within the redesign of the production flow of three products of Orlandi Radiatori S.p.A., whose core business is the manufacture of heat exchangers based on "Plate and Bar" technology, particularly suitable for installation on engines for the agricultural, earth-moving and generator-set sectors and for special applications. The objective of the project is to make appropriate changes to the production process, improving its KPIs, through a careful assessment of the starting situation and a study of the changes that can be implemented according to an appropriate scale of priorities. The tool used to assess the starting scenario was Value Stream Mapping, in order to generate the Current State Map; with this tool it was possible to carry out a careful analysis of the production process, highlighting the main critical issues and making it possible to start reasoning about the possible improvements that led to the drafting of the Future State Map, a state achievable through the application of several Lean Production tools, mainly 5S and Visual Management, the introduction of a Kanban system, the Milk-Run and Total Productive Maintenance, in order to create a pull-based management of production, i.e. one driven by the customer. The results obtained were a sharp reduction of WIP throughout the department, a significant increase in efficiency in a critical phase of the production process, a reduction of the Lead Time, i.e. the time a piece takes to cross the production phases, and the structuring of a solid and stable information system.
APA, Harvard, Vancouver, ISO, and other styles
50

Jansson, Ove. "Using social network analysis as a tool to create and compare mental models." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119369.

Full text
Abstract:
The field of social network analysis has expanded from social science into the fields of human factors and ergonomics. There is a theory suggesting that one can use social network methods to create an information network which describes the network from an information-sharing perspective, and there are also theories describing how social network analysis can be used to study cognitive maps (mental models). This thesis touches on both of these subjects in an attempt to investigate how social network analysis can be used together with real-time information as a data source to investigate the cognitive maps of individuals, and to compare these maps with an organisation's expected structure based on protocols. The study conducted showed that it was indeed possible to turn the social network analysis method into an information-based network which explains the origin of a mental model and to study information behaviour in a network, but there are still variables which need to be studied further (e.g. failed information sharing and temporal aspects of information sharing).
APA, Harvard, Vancouver, ISO, and other styles
